
Create Database Manually

1] c:\>oradim -new -sid <sid name> -srvc <service_name>


2] Create subdirectories under the Oracle admin directory
d:\oracle9i\admin\sonu\archive and \udump,\bdump,\cdump,\pfile,\create
3] Create the initSID.ora file in the pfile directory:
db_name=sonu
instance_name=sonu
control_files=(d:\oracle\oradata\sonu\control01.ctl,d:\oracle\oradata\sonu\control02.ctl)
4] Create the database.
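Step 4 is the CREATE DATABASE statement itself, run after STARTUP NOMOUNT with the pfile above. A minimal sketch (file names and sizes are illustrative, not from the original notes):
create database sonu
logfile group 1 ('d:\oracle\oradata\sonu\redo01.log') size 10m,
        group 2 ('d:\oracle\oradata\sonu\redo02.log') size 10m
datafile 'd:\oracle\oradata\sonu\system01.dbf' size 200m autoextend on;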
Multiplexing Control File
Using SPFILE
1] Alter system set control_files='D:\oracle9i\oradata\sonu\control01.ctl',
'D:\oracle9i\oradata\sonu\control03.ctl' scope=spfile;
2] Shutdown Normal;
3] Copy the file using os command
4] startup
Using PFILE
1] Shutdown Normal
2] copy the file using os command
3] add new file with location and name in initsid.ora file.
4] startup
Online Redo Logfile
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE ADD LOGFILE GROUP 3
('D:\ORACLE9I\ORADATA\log3a.rdo',
'D:\ORACLE9I\ORADATA\log3b.rdo') SIZE 1M;
ALTER DATABASE CLEAR LOGFILE 'D:\ORACLE9I\ORADATA\log3c.rdo';
Views: V$LOG, V$LOGFILE
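A quick sketch of inspecting redo log groups and members through those views:
select group#, sequence#, members, status from v$log;
select group#, member from v$logfile;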
Rename Online Redo Log Member.
1] First, shut down the instance.
Shutdown Immediate
2] Copy the old redo log file to the new location or name using an OS command.
3] Start the database in MOUNT mode.
Connect sys as sysdba
Startup Mount
4] alter database rename file 'D:\ORACLE9I\ORADATA\SONU\OLD_NAME.LOG' to
'D:\ORACLE9I\ORADATA\SONU\NEW_NAME.LOG';
5] alter database open;
Resizing the Online Redo Log Group
1] First, add a new log group to the database with the size you want.
alter database add logfile group 3
('D:\ORACLE9I\ORADATA\SONU\REDOa3.log') size 1m;
2] Drop the old group (if it is the CURRENT group, switch the logfile first)
ALTER SYSTEM SWITCH LOGFILE;
alter database drop logfile group 3;
Dropping Log Group
1]ALTER DATABASE DROP LOGFILE GROUP 3;
*
ERROR at line 1:
ORA-01623: log 3 is current log for thread 1 - cannot drop
ORA-00312: online log 3 thread 1: 'D:\ORACLE9I\ORADATA\SONU\REDO03.LOG'
2]ALTER SYSTEM SWITCH LOGFILE;
3]ALTER DATABASE DROP LOGFILE GROUP 3;
Adding Redo Log Member in Group
alter database add logfile member 'D:\ORACLE9I\ORADATA\SONU\REDO01a.LOG' to group 1;
alter database add logfile member 'D:\ORACLE9I\ORADATA\SONU\REDO02a.LOG' to group 2;
Dropping Log Member
1]alter database drop logfile member 'D:\ORACLE9I\ORADATA\SONU\REDO01a.LOG';
ERROR at line 1:
ORA-01609: log 2 is the current log for thread 1 - cannot drop members
ORA-00312: online log 2 thread 1: 'D:\ORACLE9I\ORADATA\SONU\REDO02.LOG'
ORA-00312: online log 2 thread 1: 'D:\ORACLE9I\ORADATA\SONU\REDO02A.LOG'
You cannot drop a member of a group whose status is CURRENT. If you still want to drop it, switch the logfile first:
2]ALTER SYSTEM SWITCH LOGFILE;
3]alter database drop logfile member 'D:\ORACLE9I\ORADATA\SONU\REDO01a.LOG';
Tablespaces
Locally Managed:
1] Free space is managed within the tablespace (via bitmaps).
2] No redo is generated when space is allocated/deallocated.
3] No coalescing required.
Dictionary Managed:
1] Free space is managed within the data dictionary.
2] Redo is generated when space is allocated/deallocated.
3] Coalescing is required if PCTINCREASE is non-zero.
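To check how an existing tablespace is managed, a sketch using the DBA_TABLESPACES view:
select tablespace_name, extent_management, allocation_type, segment_space_management
from dba_tablespaces;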
Creating Tablespace
1] create tablespace test datafile
'D:\ORACLE9I\ORADATA\SONU\TEST.DBF' SIZE 200M
EXTENT MANAGEMENT LOCAL
UNIFORM SIZE 128K;
2] create tablespace test1 datafile
'D:\ORACLE9I\ORADATA\SONU\TEST1.DBF' SIZE 500M
AUTOEXTEND ON NEXT 5M;
3] create tablespace test2 datafile
'D:\ORACLE9I\ORADATA\SONU\TEST2.DBF' SIZE 200M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
4] create tablespace test3 datafile
'D:\ORACLE9I\ORADATA\SONU\TEST3.DBF' SIZE 200M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE (initial 1M next 1M);
5] create tablespace test4 datafile
'D:\ORACLE9I\ORADATA\SONU\TEST4.DBF' SIZE 200M
EXTENT MANAGEMENT LOCAL
AUTOALLOCATE;
Offline Tablespace
alter tablespace test offline;
Altering Tablespace
1] Adding Datafile in Tablespace
alter tablespace test1 add datafile
'D:\ORACLE9I\ORADATA\SONU\TEST_A.DBF' SIZE 200M;
2] Renaming Datafile
Follow these steps when the file is a non-SYSTEM datafile:
1] Bring the tablespace offline.
alter tablespace test offline normal;
2] Use the operating system command to move or copy the file
3] Execute the ALTER TABLESPACE RENAME DATAFILE command
alter tablespace test rename file
'D:\oracle9i\oradata\sonu\test.dbf' to 'd:\oracle9i\oradata\sonu\test100.dbf';
4] Bring the tablespace online.
alter tablespace test online;

3] Resize Datafile
alter database datafile 'd:\ora9i\oradata\test1\test.dbf' resize 50m;
Check Tablespace Size
select tablespace_name, sum(bytes)/1024/1024 MB from dba_data_files group by
tablespace_name;
Creating and Managing Indexes
B-tree:
Suitable for high-cardinality columns.
Inefficient for queries using the OR operator.
Used for OLTP.
Bitmap:
Suitable for low-cardinality columns.
Efficient for queries using the OR operator.
Used for data warehousing.

Creating Normal B-tree Index


Create Index hr.emp_last_name_idx on hr.emp(last_name) pctfree 30
Storage(initial 200k next 200k pctincrease 0 maxextents 50) Tablespace indx;
Creating Bit Map Index
Create bitmap index ord_idx on orders(order_id) pctfree 30
Storage(initial 200k next 200k pctincrease 0 maxextents 50) Tablespace indx;
Changing Storage Parameter for Index
Alter Index ord_idx Storage(next 400k maxextents 100);
Allocating and Deallocating Index Space
Alter index ord_idx allocate extent (size 200k datafile 'd:\disk6\index01.dbf');
Alter index ord_idx deallocate unused;
Rebuilding Index
1] Move an Index to different Tablespace.
2] Improve the space Utilization by removing deleted entries.
3] Change a reverse key index to a normal B-tree index and vice versa.
Alter Index ord_idx rebuild tablespace index01;
Alter table ……Move Tablespace;
Online Rebuild of Indexes
1] Rebuilding indexes can be done with minimal table locking.
Alter Index ord_idx rebuild Online;
Coalescing Index
Alter index ord_idx Coalesce;
Checking Index Validity/Analyze Index
Analyze Index ord_idx Validate Structure;
--Populate the view Index_stats with the index information.
Dropping Index
Drop Index ord_idx;
Identify Unused Index
1]To start Monitoring index usage
Alter index ord_idx Monitoring Usage;
2]To Stop Monitoring index usage
Alter index ord_idx Nomonitoring Usage;
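The result of the monitoring shows up in the V$OBJECT_USAGE view listed below; a sketch of the check:
select index_name, monitoring, used from v$object_usage;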
Object Information View
DBA_INDEXES, DBA_IND_COLUMNS, DBA_IND_EXPRESSIONS, V$OBJECT_USAGE,
INDEX_STATS
Tablespace Backup
1] select file_name,tablespace_name from dba_data_files;
2] alter tablespace test begin backup;
3] go to command prompt
4] If multiple databases exist, then SET ORACLE_SID=SONU
5] OCOPY d:\oracle9i\oradata\sonu\test.dbf d:\bkp
6] alter tablespace test end backup;
7] alter system archive log current;

RMAN Configuration
1] Set Database in Archivelog Mode.
A] connect sys as sysdba
B] startup mount
C] alter database archivelog
D] alter database open
2] set parameter in init.ora
a] log_archive_start=true
b] log_archive_dest=d:\ora9i\admin\test1\archive
c] log_archive_format='%t%s.dbf'
3] create user rman identified by rman default tablespace tools temporary tablespace temp;
4] grant recovery_catalog_owner,connect,resource to rman;
5] c:\>rman catalog rman/rman
6] rman>create catalog;
7] rman>exit;
8] c:\>rman target system/manager@test catalog rman/rman
9] rman>register database;
10] rman>backup database;
11] Shut down the database.
12] Delete a datafile and start up the database.
13] Startup reports a datafile error.
14] Open a command prompt.
15] c:\>rman target system/manager
16] restore database;
17] recover database;
Using RMAN Incomplete Recovery
First, back up the whole database using RMAN.
1] Mount the database
2] Allocate multiple channel for parallelization
3] Restore all datafiles
4] Recover the database using UNTIL SCN, UNTIL TIME, UNTIL SEQUENCE
5] Open the database using RESETLOGS option
6] Perform Whole Database Backup.
First, set these variables (in the registry under ORACLE_HOME on Windows):
NLS_LANG=American
NLS_DATE_FORMAT='YYYY-MM-DD:HH24:MI:SS'
Example:-
1] Shutdown Immediate
2] Startup Mount
3] Open New Command Window
4] c:\>rman target /
5] rman>run{ allocate channel c1 type disk;
allocate channel c2 type disk;
set until time ='2005-09-14:14:21:00';
restore database;
recover database;
alter database open resetlogs;}
Using RMAN Complete Recovery
1] connect to rman
c:\>rman target /
2] Backup the database
rman>backup database;
3] After the backup completes, a datafile is accidentally deleted or moved to a different location.
4] Start database in mount mode
Startup Mount;
5] Open another window
c:\>rman target /
6] restore database;
7] recover database;
8] alter database open;
9] select * from v$datafile;
Datafile Loss / User-Managed Recovery
1] Shut down the database
shutdown immediate;
2] Move the datafile from its directory to a different location (simulating loss).
3] Start up the database [it will report an error].
4] Take the datafile that is not present offline.
alter database datafile 'D:\ORA9I\ORADATA\TEST1\TEST.DBF' offline;
5] Open the database
alter database open;
6] Make tablespace offline
alter tablespace test offline immediate;
7] recover automatic tablespace test;
[if no changes were made to the datafile, the following errors are returned]
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required
8] Online the tablespace
alter tablespace test online;
9] Online the datafile.
alter database datafile 'D:\ORA9I\ORADATA\TEST1\TEST.DBF' online;

Backup all Archive Log Files


1] Your Database must be started
2] RMAN> run{ allocate channel d1 type disk;
backup format 'd:\bkp\log_t%t_s%s_p%p'
(archivelog all);
release channel d1;}

SQL*Loader
1] connect jagat/jagat
2] create table dept (deptno number(2), dname char(14) , loc char(13) ) ;
3] Create dept.ctl file with contents
load data
infile *
into table dept
fields terminated by ',' optionally enclosed by '"'
(deptno,dname,loc)
begindata
12,research,"Saratoga"
10,"accounting",cleveland
11,finance,"boston"
21,"sales",phila
22,"sales",Rochester
42,"int'l","san fran"
4] c:\>sqlldr control=dept.ctl log=dept.log userid=jagat/jagat

Default Sizing
log_buffer=512k
Shared_pool_size=44m
Sga_max_size=112m
Sort_area_size=512k
db_cache_size=32m
db_block_size=4k
large_pool_size=1m
java_pool_size=32m
STATSPACK
1] SQL>create tablespace statpk datafile 'd:\oracle9i\oradata\sonu\statpk.dbf' size 100m;
# Applicable only if this utility was installed before (drops the old installation)
SQL>@d:\oracle9i\rdbms\admin\spdrop.sql;
2] When you run this script, specify the tablespace name for Statspack (i.e. statpk) and the temporary
tablespace.
SQL>@d:\oracle9i\rdbms\admin\spcreate.sql;
3] Take two snapshots (run your workload in between):
SQL>execute statspack.snap;
SQL>execute statspack.snap;
4] When you run the report script, enter the start and end snap IDs (e.g. 1 and 2) and a report name; by default
the report is stored on the C: drive.
5] SQL>@d:\oracle9i\rdbms\admin\spreport.sql
TKPROF
Procedure to enable SQL trace for users on your database:
1. Get the SID and SERIAL# for the process you want to trace.
SQL> select sid, serial# from sys.v_$session where...
SID SERIAL#
---------- ----------
8 13607
2. Enable tracing for your selected process:
SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, true);
3. Ask the user to run just what is necessary to demonstrate the problem.
4. Disable tracing for your selected process:
SQL> execute dbms_system.set_sql_trace_in_session (8,13607, false);
SQL> ALTER SYSTEM SET TIMED_STATISTICS = FALSE;
5. Look for trace file in USER_DUMP_DEST
$ cd /app/oracle/admin/oradba/udump
6. Run TKPROF to analyze trace output
$ tkprof d:\ora9i\admin\test1\udump\ora_9294.trc x.txt EXPLAIN=system/manager SYS=NO
7. View or print the output file x.txt

UTLBSTAT and UTLESTAT


1] Run the Script
SQL>@d:\ora9i\rdbms\admin\utlbstat.sql;
2] After 4 to 5 hours, run the second script
SQL>@d:\ora9i\rdbms\admin\utlestat.sql;
3] The second script creates report.txt on the C: drive.

EXPLAIN PLAN
1] Connect to local user
2] sql>set time on
3] execute any sql statement
4] see the time required for the query output
5] sql>@d:\ora9i\rdbms\admin\utlxplan.sql;
6] select * from plan_table; (the table is empty)
7] explain plan for SQL STATEMENT
8] sql>@d:\ora9i\rdbms\admin\utlxplp.sql;
9] If the output of this script shows TABLE ACCESS FULL, then
create an index on the column in the WHERE clause and re-execute the SQL statement:
Create index no on y(no) tablespace test_ind;
ANALYZE SCHEMA and TABLES
1] Check the tablespace names from DBA_TABLESPACES:
sql>select tablespace_name from dba_tablespaces;
2] To check which tablespace a table belongs to:
sql>select tablespace_name,table_name from dba_tables where owner='SCOTT';
3] Move tables
sql>Alter table finance.dt_transaction move tablespace finance
storage(initial 10m next 5m pctincrease 0 maxextents unlimited);
4] execute dbms_utility.analyze_schema('FINANCE','compute');
REF CURSOR
Create or replace procedure show_ref_cur (p_table in varchar2) as -- procedure name is a placeholder; the original omitted it
type t_deptemp is ref cursor;
v_cur t_deptemp;
begin
if upper(p_table) = 'EMP' then
open v_cur for select * from emp;
elsif upper(p_table) = 'DEPT' then
open v_cur for select * from dept;
else
dbms_output.put_line('Cursor unable to Open');
end if;
end;
Check Database Size
1] select sum(BYTES)/1024/1024/1024 from dba_data_files;
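DBA_DATA_FILES counts permanent datafiles only; assuming you also want temp files and redo logs included, a sketch:
select sum(bytes)/1024/1024/1024 GB from dba_temp_files;
select sum(bytes)/1024/1024/1024 GB from v$log;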

PL/SQL TABLE

DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(10, 20, 30, 40);
BEGIN
depts.DELETE(3); -- delete third element
FORALL i IN depts.FIRST..depts.LAST
DELETE FROM emp WHERE deptno = depts(i); -- raises an error because element 3 no longer exists
END;
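A sketch of a workaround for the sparse collection: loop over only the elements that exist using FIRST/NEXT (FORALL ... INDICES OF arrived later, in Oracle 10g):
DECLARE
TYPE NumList IS TABLE OF NUMBER;
depts NumList := NumList(10, 20, 30, 40);
i BINARY_INTEGER;
BEGIN
depts.DELETE(3); -- delete third element
i := depts.FIRST;
WHILE i IS NOT NULL LOOP
DELETE FROM emp WHERE deptno = depts(i);
i := depts.NEXT(i); -- NEXT skips the deleted index
END LOOP;
END;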

Using RMAN to Clone a Database


connect target sys/secure@origdb
connect catalog rman/rman@catdb
connect auxiliary /
run {
set newname for datafile 1 to '/ORADATA/u01/system01.dbf';
set newname for datafile 2 to '/ORADATA/u02/undotbs01.dbf';
set newname for datafile 3 to '/ORADATA/u03/users01.dbf';
set newname for datafile 4 to '/ORADATA/u03/indx01.dbf';
set newname for datafile 5 to '/ORADATA/u02/example01.dbf';
allocate auxiliary channel dupdb1 type disk;
set until sequence 2 thread 1;
duplicate target database to dupdb
logfile
GROUP 1 ('/ORADATA/u02/redo01.log') SIZE 200k REUSE,
GROUP 2 ('/ORADATA/u03/redo02.log') SIZE 200k REUSE;
}

Manually Start Archive Log


alter system archive log start;
Rollback Segment
Creating Rollback Segment
Create rollback segment rbs01 tablespace rbs
storage (initial 100k next 100k minextents 20 maxextents 100 optimal 2000k);
1] A rollback segment is specified PUBLIC or PRIVATE at creation time; this cannot be changed later.
2] For a rollback segment, MINEXTENTS must be at least two.
3] PCTINCREASE cannot be specified (or, if specified, must be set to 0).
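After creation, a private rollback segment must also be brought online (or listed in the ROLLBACK_SEGMENTS parameter in init.ora) before it can be used; for example:
Alter rollback segment rbs01 online;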
Oracle Security
1] Fine Grained Auditing
-- Add policy on table with auditing condition...
execute dbms_fga.add_policy('HR', 'EMP', 'policy1', 'deptno > 10');
-- Must ANALYZE, this feature works with CBO (Cost Based Optimizer)
analyze table EMP compute statistics;

select * from EMP where deptno = 11; -- Will trigger auditing


select * from EMP where deptno = 9; -- No auditing

-- Now we can see the statements that triggered the auditing condition...
select sqltext from sys.fga_log$;
delete from sys.fga_log$;
2] How does one switch to another user in Oracle
1] Sql>select password from dba_users where username='SCOTT';
PASSWORD
----------------
F894844C34402B67
2] SQL>alter user scott identified by lion;
User altered
3] Connect scott/lion
4] Do whatever you like
5] connect system/manager
6] alter user scott identified by values 'F894844C34402B67';
7] connect scott/tiger

3] How does one add users to a password file


One can select from the SYS.V_$PWFILE_USERS view to see which users are listed in the
password file. New users can be added to the password file by granting them SYSDBA or
SYSOPER privileges, or by using the orapwd utility.

Select * from V_$PWFILE_USERS;


GRANT SYSDBA TO scott;
Select * from V_$PWFILE_USERS;

4] How does one create a password file


The Oracle Password File ($ORACLE_HOME/dbs/orapw or orapwSID) stores passwords for
users with administrative privileges. One needs to create a password file before remote
administrators (like OEM) will be allowed to connect.
Follow this procedure to create a new password file:
Log in as the Oracle software owner

Run command: orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=mypasswd


Shutdown the database (SQLPLUS> SHUTDOWN IMMEDIATE)
Edit the INIT.ORA file and ensure REMOTE_LOGIN_PASSWORDFILE=exclusive is set.
Startup the database (SQLPLUS> STARTUP)

5] How does one manage Oracle database users


Oracle user accounts can be locked, unlocked, forced to choose new passwords, etc. For example,
all accounts except SYS and SYSTEM will be locked after creating an Oracle9iDB database
using the DB Configuration Assistant (dbca). DBAs must unlock these accounts to make them
available to users.
Look at these examples:
ALTER USER scott ACCOUNT LOCK; -- lock a user account
ALTER USER scott ACCOUNT UNLOCK; -- unlock a locked user's account
ALTER USER scott PASSWORD EXPIRE; -- Force user to choose a new password
6] How does one change an Oracle user's password
Issue the following SQL command:
ALTER USER <username> IDENTIFIED BY <new_password>;
From Oracle8 you can just type "password" from SQL*Plus, or if you need to change another
user's password, type "password user_name". Look at this example:
SQL> password
Changing password for SCOTT
Old password:
New password:
Retype new password:
Database Administration
1] How does one rename a database
Follow these steps to rename a database:
1. Start by making a full database backup of your database (so you can restore if this procedure does not
work).
2. Execute this command from sqlplus while connected to 'SYS AS SYSDBA':
ALTER DATABASE BACKUP CONTROLFILE TO TRACE RESETLOGS;
3. Locate the latest dump file in your USER_DUMP_DEST directory (show parameter
USER_DUMP_DEST) - rename it to something like dbrename.sql.
4. Edit dbrename.sql, remove all headers and comments, and change the database's name. Also change
"CREATE CONTROLFILE REUSE ..." to "CREATE CONTROLFILE SET ...".
5. Shutdown the database (use SHUTDOWN NORMAL or IMMEDIATE, don't ABORT!) and run
dbrename.sql.
6. Rename the database's global name:
ALTER DATABASE RENAME GLOBAL_NAME TO new_db_name;
2] How do I find used/free space in a TEMPORARY tablespace
Unlike normal tablespaces, true temporary tablespace information is not listed in DBA_FREE_SPACE.
Instead use the V$TEMP_SPACE_HEADER view:
SELECT tablespace_name, SUM(bytes_used), SUM(bytes_free)
FROM V$temp_space_header
GROUP BY tablespace_name;
3] Where can one find the high water mark for a table

select substr(file_name,26,20) filen, hwm, blocks total_blocks, blocks-hwm+1 shrinkage_possible
from dba_data_files a,
(select file_id, max(block_id+blocks) hwm from dba_extents group by file_id) b
where a.file_id = b.file_id;

There is no single system table which contains the high water mark (HWM) for a table. A table's HWM
can be calculated using the results from the following SQL statements:
SELECT BLOCKS FROM DBA_SEGMENTS WHERE OWNER=UPPER(owner) AND
SEGMENT_NAME = UPPER(table);

ANALYZE TABLE owner.table ESTIMATE STATISTICS;

SELECT EMPTY_BLOCKS FROM DBA_TABLES WHERE OWNER=UPPER(owner) AND
TABLE_NAME = UPPER(table);
Thus, the table's HWM = (query result 1) - (query result 2) - 1.
NOTE: You can also use the DBMS_SPACE package and calculate the HWM = TOTAL_BLOCKS -
UNUSED_BLOCKS - 1.
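For example, with hypothetical query results BLOCKS = 100 and EMPTY_BLOCKS = 19, the HWM = 100 - 19 - 1 = 80 blocks.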
4] Can one rename a database user (schema)
No, this is listed as Enhancement Request 158508. Workaround:
Do a user-level export of user A
create new user B
import system/manager fromuser=A touser=B
drop user A
5] Can one rename a tablespace
No, this is listed as Enhancement Request 148742. Workaround:
Export all of the objects from the tablespace
Drop the tablespace including contents
Recreate the tablespace
Import the objects

Measure the Buffer Cache Hit Ratio


Increase DB_BLOCK_BUFFERS (DB_CACHE_SIZE in Oracle9i) if the cache hit ratio is < 90%:
select 1-(phy.value / (cur.value + con.value)) "Cache Hit Ratio",
round((1-(phy.value / (cur.value + con.value)))*100,2) "% Ratio"
from v$sysstat cur, v$sysstat con, v$sysstat phy
where cur.name = 'db block gets' and con.name = 'consistent gets' and phy.name = 'physical reads';

Reports free memory available in the SGA


select name,sgasize/1024/1024 "Allocated (M)",
bytes/1024 "Free (K)", round(bytes/sgasize*100, 2) "% Free"
from (select sum(bytes) sgasize from sys.v_$sgastat) s, sys.v_$sgastat f
where f.name = 'free memory';

To back up offline tablespaces


1. Before beginning a backup of a tablespace, identify the tablespace's datafiles by querying the
DBA_DATA_FILES view. For example, assume that you want to back up the users tablespace. Enter the
following in SQL*Plus:
SELECT TABLESPACE_NAME, FILE_NAME
FROM SYS.DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'users';

TABLESPACE_NAME FILE_NAME
------------------------------- -------------------
users /oracle/dbs/users.f

In this example, /oracle/dbs/users.f is a fully specified filename corresponding to
the datafile in the users tablespace.
2. Take the tablespace offline using normal priority if possible. Normal priority is recommended because it
guarantees that you can subsequently bring the tablespace online without the requirement for tablespace
recovery. For example, the following statement takes a tablespace named users offline normally:
SQL> ALTER TABLESPACE users OFFLINE NORMAL;
After you take a tablespace offline with normal priority, all datafiles of the
tablespace are closed.
3. Back up the offline datafiles. For example, a UNIX user might enter the following to back up the datafile
users.f:
% cp /disk1/oracle/dbs/users.f /disk2/backup/users.backup
4. Bring the tablespace online. For example, the following statement brings tablespace users back online:
ALTER TABLESPACE users ONLINE;
After you bring a tablespace online, it is open and available for use.
5. Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived.
For example, enter:
ALTER SYSTEM ARCHIVE LOG CURRENT;
Making User-Managed Backups of Online Tablespaces and Datafiles
You can back up all or only specific datafiles of an online tablespace while the database is open. The procedure
differs depending on whether the online tablespace is read/write or read-only.
This section contains these topics:
• Making User-Managed Backups of Online Read/Write Tablespaces
• Making Multiple User-Managed Backups of Online Read/Write Tablespaces
• Ending a Backup After an Instance Failure or SHUTDOWN ABORT
• Making User-Managed Backups of Read-Only Tablespaces
• Making User-Managed Backups of Undo Tablespaces

Making User-Managed Backups of Online Read/Write Tablespaces


You must put a read/write tablespace in backup mode to make user-managed datafile backups when the
tablespace is online and the database is open. The ALTER TABLESPACE BEGIN BACKUP statement places a
tablespace in backup mode.
Oracle stops recording checkpoints to the datafiles in the tablespace when a tablespace is in backup mode.
Because a block can be partially updated at the very moment that the operating system backup utility is copying
it, Oracle copies whole changed data blocks into the redo stream while in backup mode. After you take the
tablespace out of backup mode with the ALTER TABLESPACE ... END BACKUP or ALTER DATABASE END BACKUP
statement, Oracle advances the datafile header to the current database checkpoint.
When you restore a datafile backed up in this way, the datafile header has a record of the most recent datafile
checkpoint that occurred before the online tablespace backup, not any that occurred during it. As a result, Oracle
asks for the appropriate set of redo log files to apply should recovery be needed. The redo logs contain all
changes required to recover the datafiles and make them consistent.
To back up online read/write tablespaces in an open database:
1. Before beginning a backup of a tablespace, identify all of the datafiles in the tablespace with the
DBA_DATA_FILES data dictionary view. For example, assume that you want to back up the users
tablespace. Enter the following:
SELECT TABLESPACE_NAME, FILE_NAME
FROM SYS.DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'users';

TABLESPACE_NAME FILE_NAME
------------------------------- --------------------
USERS /oracle/dbs/tbs_21.f
USERS /oracle/dbs/tbs_22.f

In this example, /oracle/dbs/tbs_21.f and /oracle/dbs/tbs_22.f are fully specified
filenames corresponding to the datafiles of the users tablespace.
2. Mark the beginning of the online tablespace backup. For example, the following statement marks the start
of an online backup for the tablespace users:
SQL> ALTER TABLESPACE users BEGIN BACKUP;
3. Back up the online datafiles of the online tablespace with operating system commands. For example,
UNIX users might enter:
% cp /oracle/dbs/tbs_21.f /oracle/backup/tbs_21.backup
% cp /oracle/dbs/tbs_22.f /oracle/backup/tbs_22.backup
4. After backing up the datafiles of the online tablespace, indicate the end of the online backup by using the
SQL statement ALTER TABLESPACE with the END BACKUP option. For example, the following statement
ends the online backup of the tablespace users:
SQL> ALTER TABLESPACE users END BACKUP;
5. Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived.
For example, enter:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Making Multiple User-Managed Backups of Online Read/Write Tablespaces
When backing up several online tablespaces, you can back them up either serially or in parallel. Use either of the
following procedures depending on your needs.

Backing Up Online Tablespaces in Parallel

You can simultaneously put all tablespaces requiring backups in backup mode. Note that online redo logs can
grow large if multiple users are updating these tablespaces because the redo must contain a copy of each changed
data block.
To back up online tablespaces in parallel:
1. Prepare all online tablespaces for backup by issuing all necessary ALTER TABLESPACE statements at once.
For example, put tablespaces ts1, ts2, and ts3 in backup mode as follows:
SQL> ALTER TABLESPACE ts1 BEGIN BACKUP;
SQL> ALTER TABLESPACE ts2 BEGIN BACKUP;
SQL> ALTER TABLESPACE ts3 BEGIN BACKUP;
2. Back up all files of the online tablespaces. For example, a UNIX user might back up datafiles with the
tbs_ prefix as follows:
% cp /oracle/dbs/tbs_* /oracle/backup
3. Take the tablespaces out of backup mode as in the following example:
SQL> ALTER TABLESPACE ts1 END BACKUP;
SQL> ALTER TABLESPACE ts2 END BACKUP;
SQL> ALTER TABLESPACE ts3 END BACKUP;
4. Archive the unarchived redo logs so that the redo required to recover the tablespace backups is archived.
For example, enter:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Backing Up Online Tablespaces Serially

You can place all tablespaces requiring online backups in backup mode one at a time. Oracle Corporation
recommends the serial backup option because it minimizes the time between ALTER TABLESPACE ... BEGIN/END
BACKUP statements. During online backups, more redo information is generated for the tablespace because whole
data blocks are copied into the redo log.
To back up online tablespaces serially:
1. Prepare a tablespace for online backup. For example, to put tablespace tbs_1 in backup mode enter the
following:
SQL> ALTER TABLESPACE tbs_1 BEGIN BACKUP;
2. Back up the datafiles in the tablespace. For example, enter:
% cp /oracle/dbs/tbs_1.f /oracle/backup/tbs_1.bak
3. Take the tablespace out of backup mode. For example, enter:
SQL> ALTER TABLESPACE tbs_1 END BACKUP;
4. Repeat this procedure for each remaining tablespace until you have backed up all the desired tablespaces.
5. Archive the unarchived redo logs so that the redo required to recover the tablespace backups is archived.
For example, enter:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Ending a Backup After an Instance Failure or SHUTDOWN ABORT
This section contains these topics:
• About Instance Failures When Tablespaces are in Backup Mode
• Ending Backup Mode with the ALTER DATABASE END BACKUP Statement
• Ending Backup Mode with the RECOVER Command

About Instance Failures When Tablespaces are in Backup Mode

The following situations can cause a tablespace backup to fail and be incomplete:
• The backup completed, but you did not indicate the end of the online tablespace backup operation with
the ALTER TABLESPACE ... END BACKUP statement.
• An instance failure or SHUTDOWN ABORT interrupted the backup before you could complete it.

Whenever crash recovery is required (not instance recovery, because in this case the datafiles are open already),
if a datafile is in backup mode when an attempt is made to open it, then the system assumes that the file is a
restored backup. Oracle will not open the database until either a recovery command is issued, or the datafile is
taken out of backup mode.
For example, Oracle may display a message such as the following when you run the STARTUP statement:
ORA-01113: file 12 needs media recovery
ORA-01110: data file 12: '/oracle/dbs/tbs_41.f'

If Oracle indicates that the datafiles for multiple tablespaces require media recovery because you forgot to end
the online backups for these tablespaces, then so long as the database is mounted, running the ALTER DATABASE
END BACKUP statement takes all the datafiles out of backup mode simultaneously.
In high availability situations, and in situations when no DBA is monitoring the database (for example, in the
early morning hours), the requirement for user intervention is intolerable. Hence, you can write a crash recovery
script that does the following:
1. Mounts the database
2. Runs the ALTER DATABASE END BACKUP statement

3. Runs ALTER DATABASE OPEN, allowing the system to come up automatically

An automated crash recovery script containing ALTER DATABASE END BACKUP is especially useful in the following
situations:
• All nodes in an Oracle Real Application Clusters configuration fail.
• One node fails in a cold failover cluster (that is, a cluster that is not an Oracle Real Application Cluster
in which the secondary node must mount and recover the database when the first node fails).

Alternatively, you can take the following manual measures after the system fails with tablespaces in backup
mode:
• Recover the database and avoid issuing END BACKUP statements altogether.
• Mount the database, then run ALTER TABLESPACE ... END BACKUP for each tablespace still in backup
mode.

Ending Backup Mode with the ALTER DATABASE END BACKUP Statement

You can run the ALTER DATABASE END BACKUP statement when you have multiple tablespaces still in backup
mode. The primary purpose of this command is to allow a crash recovery script to restart a failed system without
DBA intervention. You can also perform the following procedure manually.
To take tablespaces out of backup mode simultaneously:
1. Mount but do not open the database. For example, enter:
SQL> STARTUP MOUNT
2. If performing this procedure manually (that is, not as part of a crash recovery script), query the V$BACKUP
view to list the datafiles of the tablespaces that were being backed up before the database was restarted:
SQL> SELECT * FROM V$BACKUP WHERE STATUS = 'ACTIVE';
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
12 ACTIVE 20863 25-NOV-00
13 ACTIVE 20863 25-NOV-00
20 ACTIVE 20863 25-NOV-00
3 rows selected.
3. Issue the ALTER DATABASE END BACKUP statement to take all datafiles currently in backup mode out of
backup mode. For example, enter:
SQL> ALTER DATABASE END BACKUP;
You can use this statement only when the database is mounted but not open. If the
database is open, use ALTER TABLESPACE ... END BACKUP or ALTER DATABASE DATAFILE ...
END BACKUP for each affected tablespace or datafile.

Ending Backup Mode with the RECOVER Command

The ALTER DATABASE END BACKUP statement is not the only way to respond to a failed online backup: you can
also run the RECOVER command. This method is useful when you are not sure whether someone has restored a
backup, because if someone has indeed restored a backup, then the RECOVER command brings the backup up to
date. Only run the ALTER DATABASE END BACKUP or ALTER TABLESPACE ... END BACKUP statement if you are sure
that the files are current.
To take tablespaces out of backup mode with the RECOVER command:
1. Mount the database. For example, enter:
SQL> STARTUP MOUNT
2. Recover the database as normal. For example, enter:
SQL> RECOVER DATABASE
3. Use the V$BACKUP view to confirm that there are no active datafiles:
SQL> SELECT * FROM V$BACKUP WHERE STATUS = 'ACTIVE';
FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
0 rows selected.
Making User-Managed Backups of Read-Only Tablespaces
When backing up an online read-only tablespace, you can simply back up the online datafiles. You do not have
to place the tablespace in backup mode because the system does not permit changes to the datafiles.
If the set of read-only tablespaces is self-contained, then in addition to backing up the tablespaces with operating
system commands, you can also export the tablespace metadata by using the transportable tablespace
functionality. In the event of a media error or a user error (such as accidentally dropping a table in the read-only
tablespace), you can transport the tablespace back into the database.
To back up online read-only tablespaces in an open database:
1. Query the DBA_TABLESPACES view to determine which tablespaces are read-only. For example, run this
query:
SELECT TABLESPACE_NAME, STATUS
FROM DBA_TABLESPACES
WHERE STATUS = 'READ ONLY';
2. Before beginning a backup of a read-only tablespace, identify all of the tablespace's datafiles by querying
the DBA_DATA_FILES data dictionary view. For example, assume that you want to back up the history
tablespace. Enter the following:
SELECT TABLESPACE_NAME, FILE_NAME
FROM SYS.DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'HISTORY';

TABLESPACE_NAME FILE_NAME
------------------------------- --------------------
HISTORY /oracle/dbs/tbs_hist1.f
HISTORY /oracle/dbs/tbs_hist2.f

In this example, /oracle/dbs/tbs_hist1.f and /oracle/dbs/tbs_hist2.f are fully
specified filenames corresponding to the datafiles of the history tablespace.
3. Back up the online datafiles of the read-only tablespace with operating system commands. You do not
have to take the tablespace offline or put the tablespace in backup mode because users are automatically
prevented from making changes to the read-only tablespace. For example, UNIX users can enter:
% cp /oracle/dbs/tbs_hist*.f /backup
4. Optionally, export the metadata in the read-only tablespace. By using the transportable tablespace feature,
you can quickly restore the datafiles and import the metadata in case of media failure or user error. For
example, export the metadata for tablespace history as follows:
% exp TRANSPORT_TABLESPACE=y TABLESPACES=(history) FILE=/oracle/backup/tbs_hist.dmp
Making User-Managed Backups of Undo Tablespaces
In releases prior to Oracle9i, undo space management was based on rollback segments. This method is called
manual undo management mode. In Oracle9i, you have the option of placing the database in automatic undo
management mode. With this design, you allocate undo space in a single undo tablespace instead of distributing
space into a set of statically allocated rollback segments.
The procedures for backing up undo tablespaces are exactly the same as for backing up any other read/write
tablespace. Because the automatic undo tablespace is so important for recovery and for read consistency, you
should back it up frequently as you would for tablespaces containing rollback segments when running in manual
undo management mode.
If the datafiles in the undo tablespace were lost while the database was open, and you did not have a backup, you
could receive error messages when querying objects containing uncommitted changes. Also, if an instance
failure occurred, you would not be able to roll back uncommitted transactions to their original values.
Making User-Managed Backups in SUSPEND Mode
This section contains the following topics:
• About the Suspend/Resume Feature
• Making Backups in a Suspended Database

About the Suspend/Resume Feature


Some third-party tools allow you to mirror a set of disks or logical devices, that is, maintain an exact duplicate of
the primary data in another location, and then split the mirror. Splitting the mirror involves separating the
copies so that you can use them independently.
With the SUSPEND/RESUME functionality, you can suspend I/O to the database, then split the mirror and make a
backup of the split mirror. By using this feature, which complements the backup mode functionality, you can
suspend database I/Os so that no new I/O can be performed. You can then access the suspended database to
make backups without I/O interference.
You do not need to use SUSPEND/RESUME to make split mirror backups in most cases, although it is necessary if
your system requires the database cache to be free of dirty buffers before a volume can be split.
The ALTER SYSTEM SUSPEND statement suspends the database by halting I/Os to datafile headers, datafiles, and
control files. When the database is suspended, all pre-existing I/O operations can complete; however, any new
database I/O access attempts are queued.
The ALTER SYSTEM SUSPEND and ALTER SYSTEM RESUME statements operate on the database and not just the
instance. If the ALTER SYSTEM SUSPEND statement is entered on one system in an Oracle Real Application
Clusters configuration, then the internal locking mechanisms propagate the halt request across instances, thereby
suspending I/O operations for all active instances in a given cluster.
Making Backups in a Suspended Database
After a successful database suspension, you can back up the database to disk or break the mirrors. Because
suspending a database does not guarantee immediate termination of I/O, Oracle recommends that you precede
the ALTER SYSTEM SUSPEND statement with a BEGIN BACKUP statement so that the tablespaces are placed in
backup mode.
You must use conventional user-managed backup methods to back up split mirrors. RMAN cannot make
database backups or copies because these operations require reading the datafile headers. After the database
backup is finished or the mirrors are re-silvered, then you can resume normal database operations using the
ALTER SYSTEM RESUME statement.
Backing up a suspended database without splitting mirrors can cause an extended database outage because the
database is inaccessible during this time. If backups are taken by splitting mirrors, however, then the outage is
nominal. The outage time depends on the size of cache to flush, the number of datafiles, and the time required to
break the mirror.
Note the following restrictions for the SUSPEND/RESUME feature:
• In an Oracle Real Application Clusters configuration, you should not start a new instance while the
original nodes are suspended.
• No checkpoint is initiated by the ALTER SYSTEM SUSPEND or ALTER SYSTEM RESUME statements.
• You cannot issue SHUTDOWN with IMMEDIATE or NORMAL options while the database is suspended.
• Issuing SHUTDOWN ABORT on a database that was already suspended reactivates the database. This
operation prevents media recovery or crash recovery from hanging.

To make a split mirror backup in SUSPEND mode:


1] Place the database tablespaces in backup mode. For example, to place tablespace users in backup mode enter:
ALTER TABLESPACE users BEGIN BACKUP;
2] If your mirror system has problems with splitting a mirror while disk writes are occurring, then suspend
the database. For example, issue the following:
ALTER SYSTEM SUSPEND;
3] Check to make sure that the database is suspended by querying V$INSTANCE. For example:
SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
-----------------
SUSPENDED
4] Split the mirrors at the operating system or hardware level. End the database suspension. For example,
issue the following statement:
ALTER SYSTEM RESUME;
5] Check to make sure that the database is active by querying V$INSTANCE. For example, enter:
SELECT DATABASE_STATUS FROM V$INSTANCE;
DATABASE_STATUS
-----------------
ACTIVE
6] Take the specified tablespaces out of backup mode. For example, enter the following to take tablespace
users out of backup mode:
ALTER TABLESPACE users END BACKUP;
7] Copy the control file and archive the online redo logs as usual for a backup.

Temporary Tablespace
To see the datafile belongs to temporary tablespace.
select * from DBA_TEMP_FILES;

CREATE TEMPORARY TABLESPACE lmtemp TEMPFILE '/u02/oracle/data/lmtemp01.dbf'


SIZE 20M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

The following statement adds a tempfile to a temporary tablespace:


ALTER TABLESPACE lmtemp
ADD TEMPFILE '/u02/oracle/data/lmtemp02.dbf' SIZE 2M REUSE;

The following statement resizes a temporary file:


ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 4M;
alter database tempfile 'D:\ORA9I\ORADATA\TEST1\TEMP01.DBF' resize 100m;

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' OFFLINE;


ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' ONLINE;

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;


The following statement creates a temporary dictionary-managed tablespace:
CREATE TABLESPACE sort
DATAFILE '/u02/oracle/data/sort01.dbf' SIZE 50M
DEFAULT STORAGE (
INITIAL 2M
NEXT 2M
MINEXTENTS 1
PCTINCREASE 0)
EXTENT MANAGEMENT DICTIONARY
TEMPORARY;

ANALYZE
select * from v$librarycache;
select sum(pins) "exe",sum(reloads) "cache Miss",sum(reloads)/sum(pins) from v$librarycache;
select count(*) from jagat.y;
select namespace,pins,reloads,invalidations from v$librarycache;
analyze table jagat.y compute statistics;
select count(*) from jagat.y;
select namespace,pins,reloads,invalidations from v$librarycache;

Recovery Steps
1] Restored all datafiles.
2] Redo Applied
3] Database contains committed and uncommitted transactions.
4] Undo applied
5] Recover Database.
User Managed Recovery in NoArchiveLog Mode
1] Shutdown Abort
2] Copy all Datafiles from backup to Oradata Directory
3] Connect as sysdba
4] Startup
Recovery in Noarchivelog mode without redo log file Backup.
1] Shutdown the Instance.
Shutdown Immediate;
2] Restore the datafiles and control file from the most recent whole database backup.
Copy all datafiles and the control file.
3] Perform Cancel Based Recovery
Recover Database Until Cancel
4] Open Database with Resetlog Mode.
Alter Database Open Resetlogs;
Recovery in Archive Log Mode :-Advantage and Disadvantages
Advantage
1] Need to restore only lost or damaged file only.
2] No committed data is lost.
3] Recovery can be performed while the database is open (except for SYSTEM tablespace files and datafiles
that contain online rollback segments).
Disadvantage
1] You must have all the archived redo log files from the time of your last backup. If you are missing one,
you cannot perform complete recovery, because archive files are applied in sequence.
Close Database Recovery
1] Shutdown Abort;
2] Copy All Datafiles.
3] Startup Mount
4] Recover Database; or Recover Datafile ‘<PATH>’;
5] Alter Database Open;
Open Database Recovery
1] If datafile 2 is online, take it offline.
2] Restore datafile 2 from Backup.
3] Recover Datafile <path of datafile>; or
Recover Tablespace <tablespace name the datafile belongs to>;
4] Alter database datafile <path> online; or
Alter tablespace <tablespace name the datafile belongs to> online;
Recovery Without Backup Datafile
1] Mount the Database
Startup Mount;
2] Offline the Tablespace
Alter tablespace table_data offline immediate;
3] Select * from V$recover_file;
4] Alter Database create Datafile '/disk2/data/df04.dbf' as '/disk4/data/df04.dbf';
5] Select * from V$recover_file;
6] Recover Tablespace Table_data; or
Alter Database recover Tablespace Table_data
7] Alter Tablespace table_data online;
Using RMAN Recover Database in NoArchive Log Mode
1] rman target /
2] startup mount
3] Restore Database;
4] Recover Database;
5] Alter Database Open Resetlogs;
Using RMAN to Restore Datafiles to New Location
1] Rman Target /
2] Startup Mount;
3] Run{ set newname for datafile 1 to '/<newdir>/system.dbf';
Restore Database;
Switch Datafile all;
Recover Database;
Alter Database Open;}
Using Rman Recover Tablespace
1] Run{ sql "alter tablespace users offline immediate";
Restore Tablespace Users;
Recover Tablespace Users;
sql "alter tablespace users online"; }
Using Rman Relocate Tablespace
1] Run{ sql "alter tablespace users offline immediate";
Set Newname for datafile '/oradata/u03/users01.dbf' to '/oradata/u04/users01.dbf';
Restore (tablespace users);
Switch datafile 3;
Recover Tablespace Users;
sql "alter tablespace users online";}
Script Example
1] Create script Level0Backup {
backup
incremental level 0
format '/u01/db1/backup/%d_%s_%p'
fileperset 5
(database include current controlfile);
sql 'alter database archive log current';}
2] To Execute This Script
RMAN>run {execute script Level0Backup;}
EXPORT
1] Direct=Y
You can extract data much faster. The Export utility reads directly from the data layer instead of going through
the SQL command-processing layer.
Example:- exp userid=system/manager file=exp_dir.dmp full=y direct=y
Restriction
A] Client-side and server-side character sets must be the same.
B] You cannot use the Direct Path option to export rows containing LOB, BFILE, REF
or object types.

CONSISTENT=Y (default: N)
Specifies whether or not Export uses the SET TRANSACTION READ ONLY statement to ensure that the data
seen by Export is consistent to a single point in time and does not change during the execution of the exp
command. You should specify CONSISTENT=y when you anticipate that other applications will be updating the
target data after an export has started.
If you use CONSISTENT=n, each table is usually exported in a single transaction. However, if a table contains
nested tables, the outer table and each inner table are exported as separate transactions. If a table is partitioned,
each partition is exported as a separate transaction.
Therefore, if nested tables and partitioned tables are being updated by other applications, the data that is exported
could be inconsistent. To minimize this possibility, export those tables at a time when updates are not being
done.
For example, suppose that partitioned table TAB (partitions P1 and P2) is updated by user2 while the export is
running. If the export uses CONSISTENT=y, none of the updates by user2 are written to the export file.
If the export uses CONSISTENT=n, the updates to TAB:P1 are not written to the export file. However, the
updates to TAB:P2 are written to the export file, because the update transaction is committed before the export of
TAB:P2 begins. As a result, the user2 transaction is only partially recorded in the export file, making it
inconsistent.
If you use CONSISTENT=y and the volume of updates is large, the rollback segment usage will be large. In
addition, the export of each table will be slower because the rollback segment must be scanned for uncommitted
transactions.
Keep in mind the following points about using CONSISTENT=y:
CONSISTENT=y is unsupported for exports that are performed when you are connected as user SYS or you are
using AS SYSDBA, or both.
Export of certain metadata may require the use of the SYS schema within recursive SQL. In such situations, the
use of CONSISTENT=y will be ignored. Oracle Corporation recommends that you avoid making metadata
changes during an export process in which CONSISTENT=y is selected.
To minimize the time and space required for such exports, you should export tables that need to remain
consistent separately from those that do not.
For example, export the emp and dept tables together in a consistent export, and then export the remainder of the
database in a second pass.
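A sketch of the first, consistent pass (user, file name and tables are illustrative):
exp system/manager tables=(scott.emp,scott.dept) consistent=y file=empdept.dmp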
A "snapshot too old" error occurs when rollback space is used up, and space taken up by committed transactions
is reused for new transactions. Reusing space in the rollback segment allows database integrity to be preserved
with minimum space requirements, but it imposes a limit on the amount of time that a read-consistent image can
be preserved.
If a committed transaction has been overwritten and the information is needed for a read-consistent view of the
database, a "snapshot too old" error results.
To avoid this error, you should minimize the time taken by a read-consistent export. (Do this by restricting the
number of objects exported and, if possible, by reducing the database transaction rate.) Also, make the rollback
segment as large as possible.

Row Migration
When an updated row no longer fits in its current block, Oracle will try to shift the entire row to another block
with enough free space. However, it will not remove all the relevant entries for that row from the old block. It
will store the new block's row ID in the old block.

Now, if I want to view that record, Oracle will internally first check the old block and then from there it will get
the new row ID and read the row data from the new block. Because of the extra I/O this requires,
performance degrades.

Now the first question that you might ask is what is the use of maintaining the old row ID if the entire row data
has been migrated from the old block to the new one? This is because of Oracle's internal mechanism -- for the
entire lifespan of a row data, its row ID will never change. That's why Oracle has to maintain two row IDs -- one
is because of Oracle's internal mechanism and one is for the current location of the data.

Row Chaining
What we have discussed to this point is the case where we have data in the block and new insertion is not
possible into that block, which leads Oracle to go ahead and use a new block.

So what happens when a row is so large that it cannot fit into one free block? In this case, Oracle will span the
data into a number of blocks so that it can hold all of the data. The existence of such data results in "Row
Chaining".

Row Chaining is the storage of data in a chain of blocks. This primarily occurs with LOB, CLOB, BLOB, or large
VARCHAR2 data types.

How to Avoid/Eliminate RM/RC


To avoid row migration, we can use a higher PCTFREE value, since migration is typically caused by update
operations. However, there is a tradeoff as the space allocated to PCTFREE is not used for normal insert
operations and can end up wasted.

A temporary solution (since it will only take care of the existing migrated rows and not the future ones) is to
delete the migrated rows from the table and perform the insert again. To do this, follow these steps (sketched below):
Analyze the table to get the row ID
Copy those rows to a temporary table
Delete the rows from the original table
Insert the rows from step 2 back to the original table
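A minimal sketch of those four steps, assuming the standard CHAINED_ROWS table created by utlchain.sql and a hypothetical table BIG_TAB:
-- list migrated/chained rows (CHAINED_ROWS comes from @?/rdbms/admin/utlchain.sql)
analyze table big_tab list chained rows into chained_rows;
-- copy the affected rows aside (big_tab_tmp is a hypothetical helper table)
create table big_tab_tmp as
select * from big_tab
where rowid in (select head_rowid from chained_rows where table_name = 'BIG_TAB');
-- delete them from the original table and re-insert them
delete from big_tab
where rowid in (select head_rowid from chained_rows where table_name = 'BIG_TAB');
insert into big_tab select * from big_tab_tmp;
drop table big_tab_tmp;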
Avoiding row chaining is very difficult since it is generally caused by insert operations using large data types (e.g.
LOBs). A good precaution is to either use a larger block size (which can't be changed without creating a new
database) or use a larger extent size.
To summarize what we have discussed:
Row migration (RM) is typically caused by UPDATE operations.
Row chaining (RC) is typically caused by INSERT operations.
SQL statements which are creating/querying these RM/RC data will degrade the performance due to more I/O
work.
To diagnose this, use the ANALYZE command, query V$SYSSTAT view, or generate a report.txt file
To remove RM, use a higher PCTFREE value.

Shared Pool
Library Cache
If the shared pool is too small, SQL statements are continually reloaded into the library cache, which affects
performance. Using the V$LIBRARYCACHE view you can check RELOADS; if it is non-zero, increase the size of
the shared pool.
Consists of the most recently used SQL/PLSQL statements:
I] Shared SQL
II] Shared PLSQL
Data Dictionary Cache
If the data dictionary cache is too small, then the database has to query the data dictionary tables
repeatedly for information it needs. These queries are called recursive calls and they affect
performance. It contains information about database files, indexes, tables, columns, users, and privileges.
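A common sketch for checking the dictionary cache as a whole:
select sum(gets - getmisses) / sum(gets) "dictionary cache hit ratio" from v$rowcache;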
-Consists of independent caches:
DB_CACHE_SIZE
DB_KEEP_CACHE_SIZE
DB_RECYCLE_CACHE_SIZE
--DB_CACHE_ADVICE can be set to gather statistics for different cache sizes. This
parameter has three values (ON, OFF, READY):
ON:- Advisory is turned on; both CPU and memory overhead is incurred.
OFF:- Advisory is turned off and the memory for the advisory is not allocated.
READY:- Advisory is turned off but memory for the advisory remains allocated.
--View: V$DB_CACHE_ADVICE
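A sketch of enabling the advisory and reading that view:
alter system set db_cache_advice = on;
select size_for_estimate, estd_physical_reads
from v$db_cache_advice
where name = 'DEFAULT' and advice_status = 'ON';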
Database Buffer Cache
The database buffer cache stores copies of data blocks that have been retrieved from datafiles. It is managed
through an LRU algorithm.
When a query is processed, the Oracle server process looks in the database buffer cache for any blocks it needs. If
a block is not found in the database buffer cache, the server process reads the block from the datafile and places a
copy in the database buffer cache. Because subsequent requests for the same block may find the block in memory,
those requests may not require a physical read.
Database Buffer Cache can be dynamically resized using
ALTER SYSTEM SET DB_CACHE_SIZE=96m;
Redo Log Buffer
Primary purpose is recovery.
Changes recorded within are called redo entries.
Size is defined by LOG_BUFFER.
Large Pool
The large pool is an optional area of memory in the SGA, configured only in a shared server environment.
When users connect through the shared server, Oracle needs to allocate additional space in the shared pool for
storing information about the connections between the user processes, dispatchers and servers. The large pool
relieves this burden on the shared pool.
A] It is configured for backup and restore operations, the UGA, and I/O.
B] The large pool does not use an LRU list.
C] Sized by LARGE_POOL_SIZE.
ALTER SYSTEM SET LARGE_POOL_SIZE=64M;
Java Pool
Required if installing and using Java.
Sized by JAVA_POOL_SIZE.
PGA
Memory reserved for each user process that connects to an Oracle database.

Background Processes


Mandatory: DBWR, LGWR, SMON, PMON, CKPT, RECO
Optional: ARCn, QMNn, LCKn, Snnn, LMON, Dnnn
1]DBWR : Writes when:
Checkpoint occurs, no free buffers in the database buffer cache, timeout occurs, RAC ping request,
tablespace offline, tablespace read only, tablespace begin backup, table drop or truncate,
dirty buffers reach threshold.
2]LGWR : Writes when:
Commit occurs, redo log buffer one-third full, 1 MB of redo, every three seconds, before DBWn writes.
3]SMON : Responsibilities:
a] Instance recovery:
--Rolls forward changes
--Opens the database for user access
--Rolls back uncommitted changes
b] Coalesces free space in datafiles (wakes every three seconds).
c] Deallocates temporary segments.
4]PMON : Cleans up after failed processes by:
a] Rolling back the transaction.
b] Releasing locks.
c] Restarting dead dispatchers.
5]CKPT : Responsible for:
a] Signaling DBWn at checkpoints.
b] Updating datafile and control file headers with checkpoint information.
6]ARCn :
a] Optional background process.
b] Automatically archives redo log files when ARCHIVELOG mode is set.
c] Preserves the record of all changes made to the database.
LOGICAL STRUCTURE
--An Oracle database is a group of tablespaces.
--A tablespace may consist of one or more segments.
--A segment is made up of extents.
--An extent is made up of logical blocks.
--A block is the smallest unit for read and write operations.
PHYSICAL STRUCTURE
1]Data Files.
2]Control File.
3]Redo Log File
4]Parameter File
--There are two types of parameters:
-Explicit: having an entry in the file.
-Implicit: no entry within the file; Oracle assumes default values.
--Pfile is the static Parameter File.
--Spfile is the Dynamic Parameter File.
--Parameter File Contents
-List of instance parameters.
-The name of the database the instance is associated with.
-Allocation of memory structures of the SGA.
-Online redo log file information.
-The name and location of control files.
-Information about undo segments.
--Pfile [initSID.ora]
-The pfile is a text file that can be modified by an operating system editor.
-Modifications to the file are made manually.
-Changes to the file take effect on the next startup.
--Default location is $ORACLE_HOME/dbs
--Pfile Example
db_name=db01
Instance_name=db01
control_files=(D:\ORADATA\DB01\CONTROL01DB01.CTL,
D:\ORADATA\DB01\CONTROL02DB01.CTL)
db_block_size=4096
db_block_buffers=500
shared_pool_size=31457280
db_files=1024
max_dump_file_size=1024
background_dump_dest=d:\oracle9i\admin\db01\bdump
user_dump_dest=d:\oracle9i\admin\db01\udump
core_dump_dest=d:\oracle9i\admin\db01\cdump
undo_management= auto
undo_tablespace=undotbs
-Rules for specifying parameters:
--All parameters are optional.
--Parameters can be specified in any order.
--Enclose parameters in double quotation marks to include character literals.
--Additional files can be included with the keyword IFILE.
-SPFILE
--Binary file with the ability to make changes persistent across Shutdown &
Startup.
--Maintained by the Oracle server.
--Records parameter value changes made with ALTER SYSTEM.
--Can specify whether the change being made is temporary or persistent:
--ALTER SYSTEM SET UNDO_TABLESPACE='UNDO2';
--ALTER SYSTEM SET parameter=value [SCOPE=MEMORY|SPFILE|BOTH]
--Creating Spfile
---CREATE SPFILE FROM PFILE;
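For example (the parameter values here are illustrative):
ALTER SYSTEM SET UNDO_RETENTION=900 SCOPE=MEMORY;      -- memory only, lost at restart
ALTER SYSTEM SET SHARED_POOL_SIZE=64M SCOPE=SPFILE;    -- takes effect at next startup
ALTER SYSTEM SET UNDO_TABLESPACE='UNDOTBS' SCOPE=BOTH; -- immediate and persistent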
-ORACLE MANAGED FILES
--OMF are established by setting two parameters.
---DB_CREATE_FILE_DEST
Set to give the default location for datafiles.
---DB_CREATE_ONLINE_LOG_DEST_n
Set to give the default location for online redo log files and
control files, up to a maximum of 5 locations.
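A minimal sketch of enabling OMF (the directories are assumed; Oracle then names and creates the files itself):
ALTER SYSTEM SET DB_CREATE_FILE_DEST='D:\oracle9i\oradata\sonu';
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1='D:\oracle9i\oradata\sonu';
CREATE TABLESPACE omf_test;  -- datafile name and location generated automatically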
-Starting Up Database NOMOUNT
--Startup :-NOMOUNT(Instance Startup),MOUNT,STARTUP
--Shutdown :-MOUNT,NOMOUNT,SHUTDOWN
--Usually you would start an instance without mounting a database
only during database creation or re-creation of control files.
--Starting the instance includes the following tasks.
---Reading spfileSID.ora; if not found,
---spfile.ora; if not found,
---initSID.ora.
--Allocating the SGA.
--Starting the background processes.
--Opening alertSID.log and trace files.
-Starting Up Database MOUNT
--Startup :-NOMOUNT(Instance Startup),MOUNT(Control File Opened),STARTUP
--Shutdown :-MOUNT,NOMOUNT,SHUTDOWN
-- Mounting the Database
---Renaming Datafiles.
---Enabling and Disabling redo log Archive Options
---Performing Full Database Recovery.
--ALTER DATABASE DB01 MOUNT;
--ALTER DATABASE DB01 OPEN READ ONLY;
---Execute queries.
---Execute disk sorts using locally managed temporary tablespaces.
---Take datafiles offline and online, but not tablespaces.
---Perform recovery of offline datafiles and tablespaces.
--ALTER SYSTEM KILL SESSION 'sid,serial#';
---Rolls back the current user transaction.
---Releases all currently held table or row locks.
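For example (the sid and serial# values are placeholders; look them up first):
SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';
ALTER SYSTEM KILL SESSION '12,345';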
5]Password File.
6]Archive Log File.

AlertSID.log File
--Its default location is specified by BACKGROUND_DUMP_DEST.
--Keeps a record of the following:
---When the database was started or shut down.
---The list of non-default initialization parameters.
---The startup of background processes.
---The thread being used by the instance.
---The log sequence number LGWR is writing to.
---Information regarding log switches.
---Creation of tablespaces and undo segments.
---ALTER statements that have been issued.
---Information regarding ORA-600 errors.
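The location can be confirmed with a quick query:
SELECT value FROM v$parameter WHERE name = 'background_dump_dest';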
Enabling and disabling Tracing
Session Level:
ALTER SESSION SET SQL_TRACE=TRUE;
OR
dbms_system.SET_SQL_TRACE_IN_SESSION
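For example, tracing another session by its sid and serial# (the numbers are placeholders):
EXEC dbms_system.set_sql_trace_in_session(12, 345, TRUE);
EXEC dbms_system.set_sql_trace_in_session(12, 345, FALSE);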
Instance Level
By setting the init.ora file SQL_TRACE=TRUE

SGA is dynamic and sized using SGA_MAX_SIZE


Granule size depends on the estimated size of the SGA:
if the SGA size is less than 128MB the granule size is 4MB,
otherwise 16MB.
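The granule size in use can be checked from the dynamic SGA view (9i and above):
SELECT component, granule_size FROM v$sga_dynamic_components;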

Optimal Flexible Architecture(OFA)


Oracle's recommended standard database architecture layout.
OFA involves three major rules.
1]Establish a directory structure where any database file can be stored on any disk resource.
2]Separate objects with different behavior into different tablespaces.
3]Maximize database reliability and performance by separating database components across different disk
resources.
Using Password file Authentication
--Create the password file using the password file utility.
--$orapwd file=$ORACLE_HOME/dbs/orapwu15 password=admin entries=5
--Set REMOTE_LOGIN_PASSWORDFILE to EXCLUSIVE in the init.ora file.
--Add users to the password file and assign the appropriate privilege to each user.
GRANT SYSDBA TO HR;
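Password file users and their privileges can then be verified with:
SELECT * FROM v$pwfile_users;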

Control File
The control file is a binary file that defines the current state of the physical database.
Loss of the control file requires recovery.
It is read at mount stage.
It is linked to a single database.
It should be multiplexed.
It maintains the integrity of the database.
Sized initially by CREATE DATABASE
(Maxlogfiles, Maxlogmembers, Maxloghistory, Maxdatafiles, Maxinstances)
Control File Contains
Database name and identifier.
Timestamp of database creation.
Tablespace names.
Names and locations of datafiles and redo log files.
Current redo log file sequence number.
Checkpoint information.
Begin and end of undo segments.
Redo log archive information.
Backup information.
View V$controlfile, V$controlfile_Record_Section, V$parameter.
Redo Log File
Redo log files are organized into groups.
An Oracle database requires at least two groups.
Each redo log file within a group is called a member.
Work
Redo logs work in a cyclic fashion.
When the current redo log file is full, LGWR moves to the next log group.
-This is called a log switch.
-A checkpoint operation also occurs.
-Information is written to the control file.
Checkpoint
At every log switch.
When an instance is shut down Normal, Transactional or Immediate.
When forced by the initialization parameter FAST_START_MTTR_TARGET.
When manually requested by the DBA [ALTER SYSTEM CHECKPOINT].
When ALTER TABLESPACE {BEGIN BACKUP|OFFLINE NORMAL|READ ONLY} is issued.
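A log switch (and hence a checkpoint) can be forced and observed as follows:
ALTER SYSTEM SWITCH LOGFILE;
SELECT group#, sequence#, status FROM v$log;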
What is the difference between DBFile Sequential and Scattered Reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to
complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and thousandths of a second for
Oracle 9i and above. Most people confuse these events with each other as they think of how data is read from
disk. Instead they should think of how data is read into the SGA buffer cache.
db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can
be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for
rebuilding the control file and reading datafile headers (P2=1). In general, this event is indicative of disk
contention on index reads.
db file scattered read:
Similar to db file sequential reads, except that the session is reading multiple data blocks and scatters them into
different discontinuous buffers in the SGA. This statistic is NORMALLY indicating disk contention on full table
scans. Rarely, data from full table scans could be fitted into a contiguous buffer area; these waits would then
show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:
prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';

How Export One user and Drop the Existing and Creating New One.
Take the export of Schema1
Using:

Exp system/manager@<DataSource> file=c:\Schema1.dmp owner=Schema1


Once the export is complete without errors
Drop the Schema1 User with cascade option
Using:

Drop user Schema1 cascade;


Create the new user
and Import Schema1 to the new user
Using:
Imp system/manager@<DataSource> file=c:\Schema1.dmp
fromuser=<old_username> touser=<new_username>

alter table employee modify ( year default to_char(sysdate,'YYYY'), country default 'Pakistan' ) ;

Create Database Manually


1] c:\>oradim –new –sid punch
c:\>net start OracleServicepunch
c:\>net stop OracleServicepunch
2] Create sub directory in oracle directory
d:\oracle9i\admin\punch \archive
\udump
\bdump
\cdump
\pfile
\control
\data
3] create initsid.ora file in pfile directory
db_name=punch
instance_name=punch
control_files=(D:\ora9i\admin\punch\control\con01.ctl,D:\ora9i\admin\punch\control\con02.ctl)
db_block_size=4096
db_block_buffers=500
shared_pool_size=31457280
db_files=200
compatible = 9.0.1.1.1
background_dump_dest=d:\ora9i\admin\punch\bdump
core_dump_dest=d:\ora9i\admin\punch\cdump
user_dump_dest=d:\ora9i\admin\punch\udump
undo_management=auto
undo_tablespace=undotbs
4] c:\>net start OracleServicepunch
5] c:\>set oracle_sid=punch
6] c:\>sqlplus "/as sysdba"
7] SQL> startup nomount pfile=d:\ora9i\admin\punch\pfile\initpunch.ora
8] Create Database Statement
create database punch
maxdatafiles 100
maxlogfiles 5
maxlogmembers 5
maxloghistory 1
maxinstances 1
logfile
group 1('d:\ora9i\admin\punch\data\redo1_01.rdo') size 1m,
group 2('d:\ora9i\admin\punch\data\redo2_01.rdo') size 1m
force logging
datafile 'd:\ora9i\admin\punch\data\system01.dbf' size 50m
autoextend on
undo tablespace UNDOTBS
datafile 'd:\ora9i\admin\punch\data\undo.dbf' size 30m
default temporary tablespace TEMP
tempfile 'd:\ora9i\admin\punch\data\temp.dbf' size 20m
extent management local uniform size 128k;
9] Run the Scripts
SQL>@d:\ora9i\rdbms\admin\catalog.sql
SQL>@d:\ora9i\rdbms\admin\catproc.sql
(sql.bsq is executed automatically by the CREATE DATABASE statement and does not need to be run manually.)
Difference Between Standby Database And Replication
Standby: A standby database (or databases) is used for database security; it can act as a
backup server, whereby changes to the production database are propagated to the standby
database through Data Guard (and you can switch to the standby database when the
production database is not available).
Replication: A replication database (or databases) is used for load shedding; when you have
too many users on a database you can replicate that database to cut down the performance
degradation.

What is the difference between Logical and Physical Backups?


Logical Backups: In this type of backup the Oracle Export utility stores data in a binary file at the OS level.
Physical Backups: In this type of backup the datafiles are physically copied from one location to another (disk, tape, etc.)
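For example, a user-managed hot backup of a single tablespace might look like this (a minimal sketch; the paths are assumed, and the database must be in ARCHIVELOG mode):
ALTER TABLESPACE users BEGIN BACKUP;
host copy D:\oracle9i\oradata\sonu\USERS01.DBF E:\backup\USERS01.DBF
ALTER TABLESPACE users END BACKUP;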
Cloning Database
This procedure can be used to quickly migrate a system from one UNIX server to another. It clones the Oracle
database, and this cloning procedure is often the fastest way to copy an Oracle database.
STEP 1: On the old system, go into SQL*Plus, sign on as SYSDBA and issue: “alter database backup controlfile
to trace”. This will put the create database syntax in the trace file directory. The trace keyword tells oracle to
generate a script containing a create controlfile command and store it in the trace directory identified in the
user_dump_dest parameter of the init.ora file. It will look something like this:
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/u03/oradata/oldlsq/log1a.dbf','/u03/oradata/olslsq/log1b.dbf') SIZE 30M,
GROUP 2 ('/u04/oradata/oldlsq/log2a.dbf','/u04/oradata/oldlsq/log2b.dbf') SIZE 30M
DATAFILE
'/u01/oradata/oldlsq/system01.dbf','/u01/oradata/oldlsq/mydatabase.dbf';
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
STEP 2: Shutdown the old database
STEP 3: Copy all data files into the new directories on the new server. You may change the file names if you
want, but you must edit the controlfile to reflect the new data files names on the new server.
rcp /u01/oradata/oldlsq/* newhost:/u01/oradata/newlsq
rcp /u03/oradata/oldlsq/* newhost:/u03/oradata/newlsq
rcp /u04/oradata/oldlsq/* newhost:/u04/oradata/newlsq
STEP 4: Copy and Edit the Control file – Using the output syntax from STEP 1, modify the controlfile creation
script by changing the following:
Old: CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS
New: CREATE CONTROLFILE SET DATABASE "NEWLSQ" NORESETLOGS
STEP 5: Remove the “recover database” and “alter database open” syntax
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
STEP 6: Rename any data file names that have changed, then save the script as db_create_controlfile.sql.
Old: DATAFILE '/u01/oradata/oldlsq/system01.dbf','/u01/oradata/oldlsq/mydatabase.dbf'
New: DATAFILE '/u01/oradata/newlsq/system01.dbf','/u01/oradata/newlsq/mydatabase.dbf'
STEP 7: Create the bdump, udump and cdump directories
cd $DBA/admin
mkdir newlsq
cd newlsq
mkdir bdump
mkdir udump
mkdir cdump
mkdir pfile
STEP 8: Copy-over the old init.ora file
rcp $DBA/admin/olslsq/pfile/*.ora newhost:/u01/oracle/admin/newlsq/pfile
STEP 9: Start the new database
startup nomount;
@db_create_controlfile.sql
STEP 10: Place the new database in archivelog mode
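A minimal sketch of this step (standard commands, run on the new database):
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;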

How Can Contention for Rollback Segment Be Reduced?


Answer: Contention for rollback segments is an issue for Oracle DBAs using manual (rollback segment) undo
management. The following techniques can be useful to reduce contention for rollback segments (a quick
diagnostic query follows the list):
Increase the number of rollback segments.
Set the storage parameter NEXT to be the same as INITIAL for rollback segments.
Set the storage parameter MIN_EXTENTS for rollback segments to be at least 20.
Set the storage parameter OPTIMAL for rollback segments to be equal to INITIAL x MIN_EXTENTS.
Ensure plenty of free space in rollback tablespace.
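A quick diagnostic (waits on undo header blocks suggest adding rollback segments):
SELECT class, count FROM v$waitstat WHERE class LIKE '%undo%';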
Why is RBO being removed?
Oracle9i Release 2 will be the last version that officially supports RBO. Oracle recommends that all partners and
customers certify their applications with CBO before this version is no longer supported. Though RBO will still be
present in Oracle Database 10g, it will no longer be supported.
As per a published Oracle note, the existence of RBO prevents Oracle from making key enhancements to its
query-processing engine. Its removal will permit Oracle to improve performance and reliability of the query-
processing components of the database engine.
Presently, Oracle support for RBO is limited to bug fixes only and no new functionality will be added to RBO.
Why move to CBO?
Key benefits that come to mind:
1. Oracle stopped developing for RBO environment a long time back.
2. RBO will subsequently be removed from the Oracle database.
3. RBO has a limited number of access methods compared to CBO.
4. All the new features require CBO; CBO can recognize these features and evaluate their cost.
Most of these features will be of importance for any setup; e.g. index-organized tables, bitmap indexes, function-
based indexes, reverse-key indexes, partitioning, hash joins, materialized views, parallel query, star joins, etc.
5. Metalink Support.
Once RBO is no longer supported, Oracle support will not be available.
6. CBO has matured.
In earlier Oracle releases, RBO could outperform CBO in some situations. Moreover, CBO would not always
behave as expected and often chose bad execution plans. CBO has been improved across releases, and today it is a much
better alternative considering the benefits and advances towards new features.
7. Distributed and remote queries are more reliable.
In RBO, it was difficult to fine tune queries that used database links and had more than one table from the local
and remote database. CBO outperforms RBO in this regard. In CBO, the local optimizer is aware of the statistics
present in the remote table and is able to make better decisions on execution plans. RBO may not consider
indexes on remote databases, but CBO has access to statistics and information regarding indexes on a remote
database and can decide on an execution plan.
How Check Health of Oracle Database
1]Hardware Raid{CPU,Memory,Disk},Software Raid,DSN,HTTP
2]Review OFA[Environment Variables{ORACLE_HOME,DBA},File Naming]
3]Review Oracle parameters[init.ora{parameters},Optimizer Mode, sqlnet.ora, listener.ora]
4]Review the object parameters[Pctfree, Pctused, Freelists]
5]Review the tablespace setup[LMT, ASSM (9i and beyond)]
6]Review the scheduled jobs[Statspack.snap{hourly},Backup,File Cleanup{remove old trace
files,dumps,archive log files}]

This query lists all the dates of a given month (here October 2005) in ascending order.


SELECT TO_DATE(200510,'yyyymm')+ROWNUM-1 FROM all_objects
WHERE ROWNUM <= (SELECT TO_CHAR(LAST_DAY(TO_DATE(200510,'yyyymm')),'dd')
FROM dual )
Taking hot & cold backups,
Tablespace backup
Export and Import
Using RMAN: incomplete recovery and complete recovery; user-managed recovery
Multiplexing the control file
Adding, resizing and renaming log files
Creating tablespaces; adding, resizing and renaming datafiles
Creating, rebuilding and analyzing indexes
Database in archive log and no archive log mode
Configuring RMAN
Creating a new database manually
Migrating Oracle 8i data into Oracle 9i
Using SQL*Loader to load data into Oracle tables from text files
Fine-tuning SQL statements using Explain Plan, UTLBSTAT, UTLESTAT, TKPROF
Creating Statspack reports
Analyzing schemas and tables
Making changes in another schema without knowing the password
I have answered some of your questions. All other experts,
please check whether these answers are correct and sufficiently complete.
Please add your explanations if needed.

1)what is the shared memory segment?


Ans) The SGA is a shared memory segment.
Shared memory segment: a segment of memory that is shared
between processes (SMON, PMON, etc.). Shared memory is an
efficient means of passing data between programs. One program
creates a memory portion which other processes (if permitted) can
access.

SHMMAX parameter: suppose you have 4GB of physical RAM, your current SGA
size is 2.5GB, and you have set SHMMAX to 3GB. This means that your SGA can
grow up to the 3GB limit.

8) Semaphore Management: A semaphore is a term used for a signal flag


used by the Navy to communicate between ships. In some
dialects of UNIX, semaphores are used by Oracle to serialize internal
Oracle processes and guarantee that one thing happens before another
thing. Oracle uses semaphores in HP/UX and Solaris to synchronize
shadow processes and background processes.

However, AIX UNIX does not use semaphores, and a post/wait driver is
used instead to serialize tasks.

The number of semaphores for an Oracle database is normally equal to


the value of the processes initialization parameter.
For example, a database where processes=200 would need to have 200
UNIX semaphores allocated for the Oracle database.
When allocating semaphores in UNIX, it is critical that your UNIX
kernel parameter semmns be set to at least double the high-water mark
of processes for every database instance on your server. If you fail
to allocate enough semaphores by setting semmns too low, your Oracle
database will fail at startup time with the message:

ORA-7279: spcre: semget error, unable to get first semaphore set

42) Explain an ORA-01555 - You get this error when you get a snapshot
too old error within rollback. It can usually be solved by
increasing the undo retention or increasing the size of the rollback segments.
You should also look at the logic involved in the
application getting the error message.

21) what is UNDO retention.

Automatic undo management allows the DBA to specify how long undo
information should be retained after commit, preventing "snapshot too
old" errors on long-running queries.

This is done by setting the UNDO_RETENTION parameter. The default is
900 seconds (15 minutes), and you can set this parameter to guarantee
that Oracle keeps undo logs for extended periods of time.
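For example (the retention value here is illustrative only):
ALTER SYSTEM SET UNDO_RETENTION=10800;  -- keep undo for roughly 3 hours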

25) What is SGA


The SGA is a chunk of memory that is allocated by an Oracle Instance
(during the nomount stage) and is shared among Oracle processes,
hence the name. It contains all sorts of information about the
instance and the database that is needed to operate.
Components of the SGA
The SGA consists of the following four parts:
Fixed Portion
Variable Portion
Shared pool
java pool

27)Instance is started means what?


Instance = memory structures + background processes.
So when the memory structures are allocated and the background processes are
started, we say the instance is started.

29)What is the Hot Backup?


Taking a backup while the database is open and running (online).

1)what is the shared memory segment?


We have to configure the SHMMAX parameter in the Unix kernel to fit the SGA in RAM, because the SGA is
also a shared memory segment.

2)How can you started your standby server?


startup nomount
alter database mount standby database;
alter database recover managed standby database disconnect from session;

3) Why? Because after the nomount stage of a standby server we have to mount it as a standby database.
4) Installing Oracle on the Solaris platform, which file do you have to modify first?
/etc/system, for configuration of the kernel

5) If your database is hung and you can't connect through sqlplus,
how can you shut down the database?
ipcs -s|grep $ORACLE_SID gives us the semaphore set IDs for the instance,
and we
can remove them with the ipcrm command.

6)What is the $0?


the shell script's name

7)what is the $1?


the first argument after the shell script's name

8)What is the semaphore management?


That is Unix's mechanism for internal latch contention and resource waits

9) When do you use resetlogs and noresetlogs?


When we perform complete recovery we can open the database with noresetlogs, and when we perform
incomplete recovery we have to open the database with resetlogs.

10) When you perform recovery of only one datafile, do you use resetlogs or noresetlogs?
That is also part of complete recovery, so we use noresetlogs.

11) But if you have issued resetlogs, what do you do after it?
Once resetlogs is issued, we have to take a full cold backup.

12) If our buffer cache hit ratio is 90% but the system is slow, how can you judge that the buffer
cache is too small?
From v$waitstat, by looking at 'data block' contention.
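A minimal sketch of that check:
SELECT class, count, time FROM v$waitstat WHERE class = 'data block';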

13) What is the 10046 event?


That is a best practice for a DBA to understand the wait events created by a
query.
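For example, to switch the trace on and off in the current session (well-known event syntax; level 12 also captures bind values and waits):
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
ALTER SESSION SET EVENTS '10046 trace name context off';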

14) What is crontab?


In Unix, it is the utility for scheduling processes.

15) Without using crontab, which command can you use for this?


The at command.
16) How can you check which Oracle instances are running on your server?
ps -ef|grep ora|grep -v grep

17) Which file do you have to modify if you want Oracle instances started automatically?
/etc/oratab

18) What is pctfree?


19) What is pctused?

20) What is optimal?

21) What is undo_retention?

22) How can you speed up an import that has large indexes?
Set SORT_AREA_SIZE larger.

23) What is a parallel server?


A database shared by two or more instances is called a parallel server.

24) What is parallel query?


The server process spawns more than one slave process, and all the slaves fetch
data from disk.

25)What is the SGA?

26) What are your present activities as a DBA?

27) Instance is started means what?

28) How have you configured your backup strategy?

29) What is a hot backup?

30) How do you perform tuning?

31) What is OFA?

32) Suppose one of your end users comes to you and says his report is hung; what should you do?

33) If your database is in noarchivelog mode, you took a backup yesterday, and your database size is 40GB; the
very next day when you start the database it reports that one of the redo log files is corrupted. How can you
recover it?

34) What is a buffer busy wait?

35) When the data block buffer has too much contention, what do you do?

36) What is a locally managed tablespace?

37) Why does freelist contention occur?

38) What are row chaining and row migration? How can you detect and tune them?

39) If your database size is more than 100GB and it runs 24*7*365, how do you plan your backups?

40) What is rollback segment contention?

41) Why do we use undo_retention?

42) If you get an ORA-1555 snapshot too old error, what do you do?
43) If you give a proper hint in a SQL query but the trace shows the hint is not used, why?

44) Which parameter determines the Oracle block size?

45) What is pctthreshold?

46) If you want to check disk I/O, how do you obtain it?

47) What do you do to avoid sorting?

kind regards.



Standby databases make data available for emergencies and more.

Today's 24/7 e-world demands that business data be highly available. Oracle Data Guard is a built-in component of Oracle9i Database
that protects your database from unforeseen disasters, data corruption, and user errors. It also reduces planned downtime and allows
you to provide additional data access for reporting.

Known in earlier database releases as the standby option, the feature now called Oracle Data Guard has long been a dependable and
cost-effective method for achieving high availability and disaster-recovery protection.

This article looks at Oracle Data Guard features and processes both old and new:

• The physical-standby database is a longstanding but invaluable feature that is straightforward to set up and maintain.
• The logical-standby database is new in Oracle9i Database Release 2, and this type of standby provides additional availability
beyond that of the physical-standby database.
• Archive-gap management adds automation to keep primary and standby databases in sync.

Physical-Standby Database
The tried-and-true workhorse of Oracle Data Guard is the physical-standby database, an identical copy, down to the block level, of your
primary database. Usually, the physical-standby database resides on a different server than your primary database. In Oracle9i
Database, Data Guard is typically implemented in managed-recovery mode. This means Oracle processes automatically handle copying
and application of archive redo to the standby database.
Setting up a managed-recovery physical-standby database is fairly straightforward. When getting started with the Oracle Data Guard
setup, it's easiest to implement a physical-standby database if you:

• Set up Oracle Data Guard on two servers: the primary and the standby.
• Ensure that the mount points and directories are named the same for each of the servers.
• Make the database name the same for the primary and standby.

This way, all you have to do is take a backup of your primary and lay it onto your standby server without having to modify parameters in
your SPFILE (or init.ora file) that handles filename conversions.
Here are the steps for setting up an Oracle9i Database physical-standby database in managed-recovery mode. In this example, the
primary host name is primary_host, the standby host name is standby_host, and both the primary database and the standby database
are named BRDSTN. The mount points and directory structures are identical between the primary and standby servers.
1. Ensure that your primary database is in archive-log mode. The easiest way to check this is to log on to SQL*Plus as the SYS user
and issue the following command:
SQL> archive log list
The database-log mode must be "Archive Mode." If your primary database isn't in archive-log mode, enable this feature as follows:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;

2. Create a backup of your primary database. Back up only datafiles, not online redo log files or control files. You can view the files
you need to back up by querying V$DATAFILE:
SQL> select name from v$datafile;
You can use a hot, cold, or Recovery Manager (RMAN) backup to create the backup. I usually use a cold backup because there are
fewer steps involved.
3. Copy the backup datafiles to the standby server. Use your favorite network copy utility (FTP, SCP, etc.) to copy the backup files
generated from Step 2 to the standby server. If you use a hot (online) backup, you also need to transfer any archived redo logs
generated during the backup to the standby environment.
4. Create a standby control file. Create this via the ALTER DATABASE CREATE STANDBY CONTROLFILE command on the primary database.
This creates a special control file that will be used by the standby database.

SQL> alter database create standby controlfile as '/ora01/oradata/BRDSTN/stbycf.ctl';


5. Copy the standby control file to the standby environment. In this example, I FTP the control file from the primary location of
/ora01/oradata/BRDSTN/stbycf.ctl to the standby-environment location of /ora01/oradata/BRDSTN/stbycf.ctl.
On the primary server, initiate an FTP session, and log in to the standby server. After a successful login, FTP the control file as follows:
ftp> binary
ftp> cd /ora01/oradata/BRDSTN
ftp> send stbycf.ctl
6. Configure the primary init.ora file. The primary database must be in archive-log mode; it also needs to know where to write the
archived redo log files on both the primary and the standby servers. You'll need to set the following parameters in your primary
initialization file (init.ora or SPFILE):
log_archive_format = 'arch_%T_%S.arc'
log_archive_start = true

# location of archive redo on primary
log_archive_dest_1='LOCATION=/ora01/oradata/BRDSTN/'

# location of archive redo on the standby
log_archive_dest_2='SERVICE=standby1 optional'
log_archive_dest_state_2=ENABLE
This tells the Oracle archiver process that it needs to write to two destinations. The first one, log_archive_dest_1, is a local file
system and is mandatory. The second location is identified by an Oracle Net service name that points to the standby server and is
optional (see Step 8 for more details). The second location is optional, because Oracle9i Database doesn't have to successfully write to
this secondary location for the primary database to keep functioning. This secondary location can be set to "mandatory" if your business
needs require that you cannot lose any archived redo logs.
Note: log_archive_dest and log_archive_duplex_dest have been deprecated, and you should now use log_archive_dest_n
to specify archived redo log locations.
7. Copy the primary init.ora file to the standby server and make modifications for standby database. I recommend keeping the
parameters the same as configured on the primary database, with the following modifications to the secondary server init.ora file:
# ensure that your standby database
# is pointing at the standby control
# file you created
# and copied to the standby server
control_files = '/ora01/oradata/BRDSTN/stbycf.ctl'

# location where archive redo logs are
# being written in standby environment
standby_archive_dest=/ora01/oradata/BRDSTN

# Enable archive gap management
fal_client=standby1
fal_server=primary1
fal_client and fal_server are new parameters that enable archive-gap management. In this example, standby1 is the Oracle Net
name of the standby database and primary1 is the Oracle Net name of the primary database. The fetch archive log (FAL) background
processes reference these parameters to determine the location of the physical-standby and primary databases.
8. Configure Oracle Net. In managed-recovery mode, the primary and standby databases need to be able to communicate with each
other via Oracle Net. There needs to be a listener on both the primary and standby servers, and Oracle Net must be able to connect
from one database to the other.
Primary listener.ora file. The primary database needs a listener listening for incoming requests. The following text describes the
listener for the primary listener.ora file:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
)
(ADDRESS_LIST = (ADDRESS = (PROTOCOL = IPC)(KEY =
EXTPROC))
)))

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /ora01/app/oracle/product/9.2.0)
(PROGRAM = extproc)
)
(SID_DESC =
(SID_NAME = BRDSTN)
(ORACLE_HOME = /ora01/app/oracle/product/9.2.0))))
Primary tnsnames.ora file. I recommend placing both the primary and standby service locations in the tnsnames.ora file on both the
primary and standby servers. This makes troubleshooting easier and also makes failover and switchover operations smoother. In this
example, primary1 points to the primary database on a host named primary_host, and standby1 points to the standby database on
a server named standby_host.
The following text describes the connection for the primary tnsnames.ora file:
primary1 =
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (PORT=1521) (HOST=primary_host))
(CONNECT_DATA=(SERVICE_NAME=BRDSTN)))

standby1 =
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (PORT=1521) (HOST=standby_host))
(CONNECT_DATA=(SERVICE_NAME=BRDSTN)))
Standby listener.ora file. The standby database needs to be able to service incoming Oracle Net communication from the primary
database. The following text describes the listener for the standby server listener.ora file:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)))

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /ora01/app/oracle/product/9.2.0)
(PROGRAM = extproc)
)
(SID_DESC =
(SID_NAME = BRDSTN)
(ORACLE_HOME = /ora01/app/oracle/product/9.2.0))))
Standby tnsnames.ora file. The following is the same connection information entered into the primary server tnsnames.ora file:
primary1 =
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (PORT=1521) (HOST=primary_host))
(CONNECT_DATA=(SERVICE_NAME=BRDSTN)))

standby1 =
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (PORT=1521) (HOST=standby_host))
(CONNECT_DATA=(SERVICE_NAME=BRDSTN)))

When you operate Oracle Data Guard in managed-recovery mode, it is critical to have Oracle Net connectivity between the primary and
standby databases.
9. Start up and mount the standby database. The standby database needs to be started and mounted in standby mode. On the
standby database, execute the following:
SQL> startup nomount;
SQL> alter database mount standby database;
10. Enable managed-recovery mode. To enable automatic copying and application of archived redo logs, place the standby database
in managed-recovery mode as follows:
SQL> recover managed standby database disconnect;

At this point, you should be able to generate archive redo information on your primary database and see it applied to the standby. As you
deploy your Oracle Data Guard environment and perform maintenance operations, pertinent messages are often written to the primary
and standby alert.log files. I've found that concepts jell more quickly and issues are resolved more rapidly when I simultaneously monitor
the contents of both the standby and primary alert.log files. In a UNIX environment, the tail command is particularly helpful in this regard:
$ tail -f alert_BRDSTN.log
Most of the problems you'll encounter with your Oracle Data Guard physical-standby database setup will be because either Oracle Net
was not configured correctly or you fat-fingered an initialization parameter. The alert logs are usually pretty good at pointing out the
source of the problem.
Data Guard SQL Apply
Prior to Oracle9i Database Release 2, you could implement only a physical-standby database. One aspect of a physical-standby
database is that it can be placed in either recovery mode or read-only mode, but not both at the same time.
With the new Data Guard SQL Apply feature of Oracle9i Database Release 2, you can now create a logical-standby database that is
continuously available for querying while simultaneously applying transactions from the primary. This is ideal if your business requires a
near-real-time copy of your production database that is also accessible 24/7 for reporting.
A logical-standby database differs architecturally from a physical-standby database, which is physically identical to the primary database
block-for-block, because the Oracle media-recovery mechanism is used on the redo information received from the primary database to
apply those redo changes to a physical data-block address.
Unlike a physical-standby database, a logical-standby database transforms the redo data received from the primary into SQL statements
(using LogMiner technology) and then applies the SQL statements. It is logically identical to the primary database but not physically
identical at the block level.
Just like a physical-standby database, a logical-standby database can assume the role of the primary database in the event of a
disaster. You can configure your Data Guard environment to use a combination of both logical-standby and physical-standby databases.
If you use your logical-standby database for reporting, you can further tune it by adding indexes and materialized views independent of
the primary.
There are many steps involved with setting up a logical-standby database. The best sources for detailed instructions are MetaLink note
186150.1 and the Oracle9i Database Release 2 Oracle Data Guard Concepts and Administration manual.
Archive-Gap Management
In a managed-recovery configuration, if the primary log-transport mechanism isn't able to ship the redo data from the primary database
to the physical-standby database, you have a gap between what archive logs have been generated by the primary database and what
have been applied to the physical-standby database. This situation can arise if there is a network problem or if the standby database is
unavailable for any reason.
[Next Steps sidebar: READ the Oracle Data Guard documentation - /docs/products/oracle9i/deploy/availability/pdf/DG92_TWP.pdf; see also metalink.oracle.com]
Detecting and resolving archive gaps is crucial to maintaining high availability. An
untimely gap-resolution mechanism compromises the availability of data in the standby
environment. For example, if you have an undetected gap and disaster strikes, the
result is a loss of data after you fail over to your standby database.
In Oracle8i, you had the responsibility of manually monitoring gaps. After you detected
a gap, you then had to manually perform the following steps on your standby database:
• Determine which archived redo logs are part of the gap
• Manually copy those archived redo gap logs to standby
• Take the standby database out of managed-recovery mode
• Issue the RECOVER STANDBY DATABASE command
• Alter the standby database back into managed-recovery mode

With Oracle9i Database, Data Guard has two mechanisms for automatically resolving gaps. The first mechanism for gap resolution uses
a periodic background communication between the primary database and all of its standby databases. Data Guard compares log files
generated on the primary database with log files received on the standby databases to compute the gaps. Once a gap is detected, the
missing log files are shipped to the appropriate standby databases. This automatic method does not need any parameter configuration.
The second mechanism for automatic gap resolution uses two new Oracle9i Database initialization parameters on your standby
database:
SQL> alter system set FAL_CLIENT = standby1;
SQL> alter system set FAL_SERVER = primary1;
Once these parameters are set on the standby database, the managed-recovery process in the physical-standby database will
automatically check and resolve gaps at the time redo is applied.

Convert tablespace

Updates at the bottom


Most people will by now have seen the benefits of using locally managed tablespaces as a simple
cure for fragmentation and for relieving stress on the data dictionary extent tables UET$ and FET$.
However, many people are yet to take advantage of this new feature because, up until now, the only
way to convert to locally managed tablespaces was to unload and reload your tablespace data.

Not any more...

8.1.6 now gives two built-in procedures to allow you to switch back and forth between dictionary and
locally managed tablespaces without unloading/reloading the data. It was 8.1.5 that introduced the
dbms_space_admin package, which is used 'behind the scenes' to probe the tablespace bitmaps in
locally managed tablespaces, mainly for use in the everyday views DBA_EXTENTS and
DBA_SEGMENTS. In 8.1.6, two new procedures have been added to the package:

• DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL
• DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_FROM_LOCAL

A few things are important to note:

• It is not sufficient to have compatible set to 8.1; it must be 8.1.6

• When converting from dictionary management to local management, a "full conversion" does
not actually take place; an appropriate bitmap is simply built to map the existing extents, along
with some rounding of the space used. Thus you end up with an interesting hybrid: if you
query DBA_TABLESPACES, you will see that the management policy is LOCAL but the allocation
policy is USER (not uniform or automatic). In short, you will have definitely solved one problem,
that of stress on UET$ and FET$, but you are not guaranteed to resolve the fragmentation
issue. Even so, in a tightly managed production environment, steps to avoid fragmentation
should already be in place.

• When converting from dictionary management to local management, there must be some free
space in the tablespace to hold the bitmap that will be created (at least 64k), otherwise migration
will fail.

Finally, the documentation also provides a "tempter": local management of the SYSTEM
tablespace will also be supported in the future.

Update

When converting a dictionary-managed tablespace to a locally managed one, to produce a
bitmap by which all extents can be mapped, clearly each bit in the bitmap must represent a size that will
divide evenly into:

• All of the existing extents


• All of the extents that have been used in the past that are now free

and Oracle also ensures that if you have set the MINIMUM EXTENT clause, then this is respected as
well. The documentation says that during conversion it will try to find the largest suitable extent size,
that is, the greatest common divisor of all of the used/free extents.

However, it would appear that one very critical factor is not documented. If a MINIMUM EXTENT
clause has NOT been specified, then rather than this clause being ignored, Oracle assumes it to be
the smallest possible size of an extent (whether it exists or not). In an 8k block size database, this is
40k (Oracle rounds extents, bar the initial one, up to 5 blocks), so no matter what distribution of extents
you have, the bitmap will always be chosen as 1 bit per 40k, unless you specify the MINIMUM
EXTENT clause.

Example:

SQL> create tablespace blah
2 datafile 'G:\ORA9I\ORADATA\DB9\BLAH.DBF' size 10m reuse
3 extent management dictionary;

Tablespace created.

SQL> drop table t1;


drop table t1
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> drop table t2;


drop table t2
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> drop table t3;


drop table t3
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> col bytes format 999,999,999


SQL> select * from dba_free_space where tablespace_name = 'BLAH';

TABLESPACE_NAME FILE_ID BLOCK_ID BYTES BLOCKS RELATIVE_FNO


---------------- ---------- ---------- ------------ ---------- ------------
BLAH 8 2 10,477,568 1279 8

SQL> create table t1 ( x number ) storage ( initial 400k) tablespace blah;

Table created.

SQL> create table t2 ( x number ) storage ( initial 800k) tablespace blah;

Table created.

SQL> create table t3 ( x number ) storage ( initial 1200k) tablespace blah;

Table created.

SQL> select * from dba_free_space where tablespace_name = 'BLAH';


TABLESPACE_NAME FILE_ID BLOCK_ID BYTES BLOCKS RELATIVE_FNO
---------------- ---------- ---------- ------------ ---------- ------------
BLAH 8 302 8,019,968 979 8

SQL> select bytes from dba_extents where tablespace_name = 'BLAH';

BYTES
------------
409,600
819,200
1,228,800

--
-- So at this stage, one would imagine that the unit size for a conversion
-- to a locally managed tablespace is 50 ( 50 * 8192 = 400k ). But if we try...
--

SQL> exec dbms_space_admin.TABLESPACE_MIGRATE_TO_LOCAL('BLAH',50);


BEGIN dbms_space_admin.TABLESPACE_MIGRATE_TO_LOCAL('BLAH',50); END;

*
ERROR at line 1:
ORA-03241: Invalid unit size
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 0
ORA-06512: at line 1

--
-- But if we set the minimum extent...
--

SQL> alter tablespace blah minimum extent 400k;

Tablespace altered.

SQL> exec dbms_space_admin.TABLESPACE_MIGRATE_TO_LOCAL('BLAH',50);

PL/SQL procedure successfully completed.

SQL>

To configure Shared Server, which parameters do you specify in the server's init.ora?


SHARED_SERVERS
MAX_SHARED_SERVERS
DISPATCHERS
MAX_DISPATCHERS
CIRCUITS
(SERVER=SHARED is the client-side setting, placed in tnsnames.ora rather than init.ora.)
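A minimal init.ora sketch (the values shown are illustrative assumptions):
dispatchers="(PROTOCOL=TCP)(DISPATCHERS=3)"
shared_servers=5
max_shared_servers=20
circuits=100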
Oracle 10g New Features PPT
Automatic Shared Memory Management (ASMM)
10g method for automating SGA management.

alter system set sga_target=<size>;


sga_target -- This parameter is new in Oracle Database 10g and reflects the total size of memory an SGA can
consume.
Shared pool
Buffer cache
Java Pool
Large Pool

Requires an SPFILE and SGA_TARGET > 0. Cannot exceed SGA_MAX_SIZE.

Does not apply to the following parameters.

Log Buffer
Other Buffer Caches (KEEP/RECYCLE, other block sizes)
Streams Pool (new in Oracle Database 10g)
Fixed SGA and other internal allocations

Can be adjusted via EM or command line.

A new background process named Memory Manager (MMAN) manages the automatic shared memory.
Data Pump
High performance import and export
• 60% faster than 9i export (single thread)
• 15x-45x faster than 9i import (single thread)

The reason it is so much faster is that conventional Import uses only conventional-mode inserts, whereas Data
Pump Import uses the direct path method of loading. As with Export, the job can be dynamically parallelized for
even more improvement, and a separate dump file is created for each degree of parallelism.

Pre configuration steps


===================
Create directory pump_dir as '/……..';
Grant read on directory pump_dir to scott;
Export Command
===============
oracle@testserver-test1>expdp hh/xxxxx dumpfile=pump_dir:test_tbs_full_db.dmp
logfile=pump_dir:test_tbs_full_db.log

Import Command
===============
oracle@testserver-test2>impdp hh/xxxxx dumpfile=pump_dir:test_tbs_full_db.dmp schemas=devuser
remap_tablespace=system:users
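Parallelism can be added as well; for example (hypothetical file names; the %U substitution variable gives each worker its own dump file):
oracle@testserver-test1>expdp hh/xxxxx dumpfile=pump_dir:full_%U.dmp parallel=4 logfile=pump_dir:full.log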

Flashback Database
10g method for point-in-time recovery
1. Shutdown the database
2. Startup the database in mount state
3. SQL> flashback database to timestamp to_timestamp('2004-12-16 16:10:00', 'YYYY-MM-DD HH24:MI:SS');
4. Open the database: alter database open resetlogs;
New strategy for point-in-time recovery
Flashback Log captures old versions of changed blocks.
• Think of it as a continuous backup
• Replay log to restore DB to time
• Restores just changed blocks
It's fast - it recovers in minutes, not hours. Moreover, this feature removes the need for database incomplete
recoveries that require physical movement/restores of datafiles.
It’s easy - single command restore
• SQL> Flashback Database to scn 1329643
Rename Tablespace
10g method for renaming tablespaces
SQL> alter tablespace users rename to users3;
Oracle allows the renaming of tablespaces in 10g. A simple alter tablespace command is all you need.

SQL> alter tablespace users rename to users3;


Tablespace altered.

SQL> alter tablespace users3 rename to users;


Tablespace altered.
• The rename tablespace feature has lessened the workload for TTS operations. There's no need to delete
tablespaces on the target prior to the impdp of metadata.
• Doesn’t Support System or Sysaux tablespaces
• Supports Default, Temporary, and Undo Tablespaces (dynamically changes the spfile).
Dictionary View Improvements
New columns in v$session allow you to easily identify sessions that are blocking and waiting for other sessions.
V$session also contains information from v$session_wait view. No need to join the two views.
--Display blocked session and their blocking session details.
SELECT sid, serial#, blocking_session_status, blocking_session FROM v$session WHERE blocking_session
IS NOT NULL; or
SELECT blocking_session_status, blocking_session FROM v$session WHERE sid = 444; /* Blocked Session */
Flush Buffer Cache
8i/9i method for flushing the buffer cache
Prior to 10g, this wasn’t possible without shutting down and restarting the database or using the following
undocumented commands:
• SQL> alter session set events = 'immediate trace name flush_cache';

• alter tablespace offline/online to flush the buffer cache of blocks relating to that tablespace (as
per Tom Kyte's article).

Side-Note - You were able to flush the shared pool


SQL> ALTER SYSTEM FLUSH SHARED_POOL;

10g method for flushing the buffer cache


10g has provided the ability to flush the buffer cache. This isn't suggested for a production environment, but
might be useful for QA/testing. The bigger the cache, the larger the LRU and dirty lists become, which results in
longer search times. However, if the buffer cache is undersized, then running the following command can
improve performance and take the burden off DBWR, in addition to decreasing free buffer waits.
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;

Flashback Drop
10g method for undoing a dropped table
SQL> flashback table emp to before drop;

• Recycle Bin (sounds familiar)……….a logical representation of the dropped object. The
dropped/renamed table is still occupying space in its original tablespace. You can still query it
after it's dropped.
• You can empty out the recycle bin by purging the objects.

select object_name, original_name, type from recyclebin; or show recyclebin;


Purge table mm$$55777$table$1;
• You can permanently drop a table without writing to recycle bin.
Drop table emp purge;
Has a few quirks
• Doesn't restore foreign key constraints or materialized views.
• Restores indexes, constraints, and triggers with cryptic system-generated names like mm$$55777$table$1
(requires a rename of triggers and constraints).

Flashback Table
9i method for restoring data
INSERT INTO emp (SELECT * FROM emp AS OF TIMESTAMP TO_TIMESTAMP('16-Sep-04
1:00:00','DD-MON-YY HH24:MI:SS') MINUS SELECT * FROM emp );
10g method for restoring data
Flashback table emp to TIMESTAMP
TO_TIMESTAMP('16-Sep-04 1:00:00','DD-MON-YY HH24:MI:SS');
Make sure you enable row movement prior to the restore.
SQL> alter table emp enable row movement;
Flashback Transaction Query
8i/9i method for generating sql undo statements
Log Miner (Good luck parsing through those logs).
10g method for generating sql undo statements
SELECT undo_sql FROM flashback_transaction_query WHERE table_owner='SCOTT'
AND table_name='EMP' AND start_scn between 21553 and 44933;
(You can also use timestamp)
Flashback Transaction Query provides the ability to generate the SQL statements for undoing DML .
