Create Database Manually
1] c:\>oradim -new -sid <sid name> -srvc <service_name>
2] Create sub-directories under the oracle admin directory:
   d:\oracle9i\admin\sonu\archive, \udump, \bdump, \cdump, \pfile, \create
3] Create the initsid.ora file in the pfile directory:
   db_name=sonu
   instance_name=sonu
   control_files=(d:\oracle\oradata\sonu\control01.ctl,d:\oracle\oradata\sonu\control02.ctl)
4] Issue CREATE DATABASE.

Multiplexing the Control File

Using SPFILE
1] Alter system set control_files='D:\oracle9i\oradata\sonu\control01.ctl',
   'D:\oracle9i\oradata\sonu\control03.ctl' scope=spfile;
2] Shutdown normal;
3] Copy the file using an OS command.
4] Startup

Using PFILE
1] Shutdown normal
2] Copy the file using an OS command.
3] Add the new file name and location in the initsid.ora file.
4] Startup

Online Redo Logfiles
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE ADD LOGFILE GROUP 3
  ('D:\ORACLE9I\ORADATA\log3a.rdo', 'D:\ORACLE9I\ORADATA\log3b.rdo') SIZE 1M;
ALTER DATABASE CLEAR LOGFILE 'D:\ORACLE9I\ORADATA\log3c.rdo';
Views: V$LOG, V$LOGFILE

Renaming an Online Redo Log Member
1] First shut down the instance:
   Shutdown immediate
2] Copy the old redo log file to the new location or new name.
3] Start the database in MOUNT mode:
   Connect sys as sysdba
   Startup mount
4] alter database rename file 'D:\ORACLE9I\ORADATA\SONU\OLD_NAME.LOG'
   to 'D:\ORACLE9I\ORADATA\SONU\NEW_NAME.LOG';
5] alter database open;

Resizing an Online Redo Log Group
1] Add a new log group with the size you want:
   alter database add logfile group 3 ('D:\ORACLE9I\ORADATA\SONU\REDOa3.log') size 1m;
2] Drop the old group (switch first if it is the current group):
   ALTER SYSTEM SWITCH LOGFILE;
   alter database drop logfile group 3;

Dropping a Log Group
1] ALTER DATABASE DROP LOGFILE GROUP 3;
   ERROR at line 1:
   ORA-01623: log 3 is current log for thread 1 - cannot drop
   ORA-00312: online log 3 thread 1: 'D:\ORACLE9I\ORADATA\SONU\REDO03.LOG'
2] ALTER SYSTEM SWITCH LOGFILE;

3] ALTER DATABASE DROP LOGFILE GROUP 3;

Adding Redo Log Members to a Group
alter database add logfile member 'D:\ORACLE9I\ORADATA\SONU\REDO01a.LOG' to group 1;
alter database add logfile member 'D:\ORACLE9I\ORADATA\SONU\REDO02a.LOG' to group 2;

Dropping a Log Member
1] alter database drop logfile member 'D:\ORACLE9I\ORADATA\SONU\REDO02a.LOG';
   ERROR at line 1:
   ORA-01609: log 2 is the current log for thread 1 - cannot drop members
   ORA-00312: online log 2 thread 1: 'D:\ORACLE9I\ORADATA\SONU\REDO02.LOG'
   ORA-00312: online log 2 thread 1: 'D:\ORACLE9I\ORADATA\SONU\REDO02A.LOG'
   You cannot drop a member of a group whose status is CURRENT. If you still want to drop it:
2] ALTER SYSTEM SWITCH LOGFILE;
3] alter database drop logfile member 'D:\ORACLE9I\ORADATA\SONU\REDO02a.LOG';

Tablespaces

Locally Managed                                  Dictionary Managed
1] Free space managed within the tablespace      Free space managed in the data dictionary
2] No redo generated when space is               Redo is generated
   allocated/deallocated
3] No coalescing required                        Coalescing required if pctincrease is non-zero

Creating Tablespaces
1] create tablespace test datafile 'D:\ORACLE9I\ORADATA\SONU\TEST.DBF' SIZE 200M
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
2] create tablespace test1 datafile 'D:\ORACLE9I\ORADATA\SONU\TEST1.DBF' SIZE 500M
   AUTOEXTEND ON NEXT 5M;
3] create tablespace test2 datafile 'D:\ORACLE9I\ORADATA\SONU\TEST2.DBF' SIZE 200M
   EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
4] create tablespace test3 datafile 'D:\ORACLE9I\ORADATA\SONU\TEST3.DBF' SIZE 200M
   EXTENT MANAGEMENT DICTIONARY DEFAULT STORAGE (initial 1M next 1M);
5] create tablespace test4 datafile 'D:\ORACLE9I\ORADATA\SONU\TEST4.DBF' SIZE 200M
   EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

Taking a Tablespace Offline
alter tablespace test offline;

Altering Tablespaces
1] Adding a datafile to a tablespace:
   alter tablespace test1 add datafile 'D:\ORACLE9I\ORADATA\SONU\TEST_A.DBF' SIZE 200M;
2] Renaming a datafile. You must follow these steps when the file is a non-SYSTEM file:
   1] Bring the tablespace offline:
      alter tablespace test offline normal;
   2] Use an operating system command to move or copy the file.
   3] Execute the ALTER TABLESPACE RENAME DATAFILE command:
      alter tablespace test rename file 'D:\oracle9i\oradata\sonu\test.dbf'
      to 'd:\oracle9i\oradata\sonu\test100.dbf';

   4] Bring the tablespace online:
      alter tablespace test online;
3] Resizing a datafile:
   alter database datafile 'd:\ora9i\oradata\test1\test.dbf' resize 50m;

Checking Tablespace Sizes
select tablespace_name, sum(bytes)/1024/1024 MB
from dba_data_files
group by tablespace_name;

Creating and Managing Indexes

B-tree                                           Bitmap
Suitable for high-cardinality columns            Suitable for low-cardinality columns
Inefficient for queries using the OR operator    Efficient for queries using the OR operator
Use for OLTP                                     Use for data warehousing
Creating Normal B-tree Index

Create index hr.emp_last_name_idx on hr.emp(last_name)
  pctfree 30
  storage (initial 200k next 200k pctincrease 0 maxextents 50)
  tablespace indx;
Creating a Bitmap Index

Create bitmap index ord_idx on orders(order_id)
  pctfree 30
  storage (initial 200k next 200k pctincrease 0 maxextents 50)
  tablespace indx;

Changing Storage Parameters for an Index
Alter index ord_idx storage (next 400k maxextents 100);

Allocating and Deallocating Index Space
Alter index ord_idx allocate extent (size 200k datafile 'd:\disk6\index01.dbf');
Alter index ord_idx deallocate unused;

Rebuilding an Index
Reasons to rebuild:
1] Move the index to a different tablespace.
2] Improve space utilization by removing deleted entries.
3] Change a reverse-key index to a normal B-tree index, or vice versa.
Alter index ord_idx rebuild tablespace index01;
Alter table ... move tablespace ...;

Online Rebuild of Indexes
Rebuilding indexes can be done with minimal table locking:
Alter index ord_idx rebuild online;

Coalescing an Index
Alter index ord_idx coalesce;

Checking Index Validity / Analyzing an Index
Analyze index ord_idx validate structure;
-- Populates the INDEX_STATS view with the index information.

Dropping an Index
Drop index ord_idx;

Identifying Unused Indexes
1] To start monitoring index usage:
   Alter index ord_idx monitoring usage;
2] To stop monitoring index usage:
   Alter index ord_idx nomonitoring usage;

Object information views: DBA_INDEXES, DBA_IND_COLUMNS, DBA_IND_EXPRESSIONS,
V$OBJECT_USAGE, INDEX_STATS

Tablespace Backup
1] select file_name, tablespace_name from dba_data_files;
2] alter tablespace test begin backup;
3] Go to the command prompt.
4] If there are multiple databases, set the SID first:
   SET ORACLE_SID=SONU

5] OCOPY d:\oracle9i\oradata\sonu\test.dbf d:\bkp
6] alter tablespace test end backup;
7] alter system archive log current;
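While a tablespace is in backup mode (between steps 2 and 6 above), its datafiles report an ACTIVE status in V$BACKUP. A quick check, joining to V$DATAFILE for the file names:

```sql
-- Datafiles currently in backup mode show STATUS = 'ACTIVE' in V$BACKUP.
SELECT d.name, b.status
FROM   v$backup b, v$datafile d
WHERE  b.file# = d.file#
AND    b.status = 'ACTIVE';
```

Run this before leaving the session to make sure no tablespace was accidentally left in backup mode.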

RMAN Configuration
1] Set the database in archivelog mode:
   A] connect sys as sysdba
   B] startup mount
   C] alter database archivelog;
   D] alter database open;
2] Set parameters in init.ora:
   a] log_archive_start=true
   b] log_archive_dest=d:\ora9i\admin\test1\archive
   c] log_archive_format='%t%s.dbf'
3] create user rman identified by rman default tablespace tools temporary tablespace temp;
4] grant recovery_catalog_owner, connect, resource to rman;
5] c:\>rman catalog rman/rman
6] rman>create catalog;
7] rman>exit;
8] c:\>rman target system/manager@test catalog rman/rman
9] rman>register database;
10] rman>backup database;
To test recovery:
11] Shut down the database.
12] Delete a datafile and start the database.
13] Startup reports a datafile error.
14] Open a command prompt.
15] c:\>rman target system/manager
16] restore database;
17] recover database;

Using RMAN: Incomplete Recovery
First back up the whole database using RMAN, then:
1] Mount the database.
2] Allocate multiple channels for parallelization.
3] Restore all datafiles.
4] Recover the database using UNTIL SCN, UNTIL TIME, or UNTIL SEQUENCE.
5] Open the database with the RESETLOGS option.
6] Perform a whole database backup.
First set these variables in the registry under ORACLE_HOME:
NLS_LANG=American
NLS_DATE_FORMAT='YYYY-MM-DD:HH24:MI:SS'
Example:
1] Shutdown immediate
2] Startup mount
3] Open a new command window.
4] c:\>rman target /
5] rman>run {
     allocate channel c1 type disk;
     allocate channel c2 type disk;
     set until time = '2005-09-14 14:21:00';
     restore database;
     recover database;
     alter database open resetlogs;
   }

Using RMAN: Complete Recovery
1] Connect to rman:

   c:\>rman target /
2] Back up the database:
   rman>backup database;
3] After completing the backup, a datafile is accidentally deleted (or copied to a different location).
4] Start the database in mount mode:
   Startup mount
5] Open another window:
   c:\>rman target /
6] restore database;
7] recover database;
8] alter database open;
9] select * from v$datafile;

Backing Up All Archive Log Files
1] The database must be started.
2] RMAN> run {
     allocate channel d1 type disk;
     backup format 'd:\bkp\log_t%t_s%s_p%p' (archivelog all);
     release channel d1;
   }

User-Managed Recovery (a datafile is lost)
1] Shut down the database: shutdown immediate
2] Cut a particular datafile from its directory and paste it to a different location.
3] Start the database (it will give an error).
4] Take the missing datafile offline:
   alter database datafile 'D:\ORA9I\ORADATA\TEST1\TEST.DBF' offline;
5] Open the database: alter database open;
6] Take the tablespace offline: alter tablespace test offline immediate;
7] recover automatic tablespace test;
   [If no changes happened to the datafile, an error is given:]
   ORA-00283: recovery session canceled due to errors
   ORA-00264: no recovery required
8] Bring the tablespace online: alter tablespace test online;
9] Bring the datafile online:
   alter database datafile 'D:\ORA9I\ORADATA\TEST1\TEST.DBF' online;

SQL*Loader
1] connect jagat/jagat
2] create table dept (
     deptno number(2),
     dname  char(14),
     loc    char(13)
   );
3] Create a dept.ctl file with these contents:
   load data
   infile *
   into table dept
   fields terminated by ',' optionally enclosed by '"'
   (deptno, dname, loc)
   begindata
   12,research,"Saratoga"
   10,"accounting",cleveland
   11,"sales",phila
   21,finance,"boston"
   42,"int'l","san fran"
   22,"sales",Rochester

4] c:\>sqlldr control=dept.ctl log=dept.log userid=jagat/jagat

Default Sizing
log_buffer       = 512k
shared_pool_size = 44m
sga_max_size     = 112m
sort_area_size   = 512k
db_cache_size    = 32m
db_block_size    = 4k
large_pool_size  = 1m
java_pool_size   = 32m

STATSPACK
1] SQL>create tablespace statpk datafile 'd:\oracle9i\oradata\sonu\statpk.dbf' size 100m;
2] SQL>@d:\oracle9i\rdbms\admin\spcreate.sql
   When you run this script, specify the tablespace for Statspack (i.e. statpk) and the
   temporary tablespace.
3] SQL>execute statspack.snap;
4] SQL>execute statspack.snap;
5] SQL>@d:\oracle9i\rdbms\admin\spreport.sql
   When you run this script, enter the start and end snap ids (e.g. 1 to 2) and a report
   name; by default the report is stored on the C: drive.
To drop Statspack (applicable if you ran this utility before):
SQL>@d:\oracle9i\rdbms\admin\spdrop.sql

TKPROF
Procedure to enable SQL trace for users on your database:
1. Get the SID and SERIAL# for the process you want to trace:
   SQL> select sid, serial# from sys.v_$session where ...;
   SID        SERIAL#
   ---------- ----------
   8          13607
2. Enable tracing for your selected process:
   SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
   SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, true);
3. Ask the user to run just enough work to demonstrate his problem.
4. Disable tracing for your selected process:
   SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, false);
   SQL> ALTER SYSTEM SET TIMED_STATISTICS = FALSE;
5. Look for the trace file in USER_DUMP_DEST:
   $ cd /app/oracle/admin/oradba/udump
6. Run TKPROF to analyze the trace output:
   $ tkprof d:\ora9i\admin\test1\udump\ora_9294.trc x.txt EXPLAIN=system/manager SYS=NO
7. View or print the output file x.txt.

UTLBSTAT and UTLESTAT
1] Run the first script:
   SQL>@d:\ora9i\rdbms\admin\utlbstat.sql
2] After 4 to 5 hours, run the second script:
   SQL>@d:\ora9i\rdbms\admin\utlestat.sql
3] The second script creates a report on the C: drive named report.txt.

EXPLAIN PLAN
1] Connect to the local user.

2] sql>set time on
3] Execute any SQL statement.
4] See the time required for the query output.
5] sql>@d:\ora9i\rdbms\admin\utlxplan.sql
   (creates PLAN_TABLE; the table is empty)
6] explain plan for SQL STATEMENT;
7] sql>@d:\ora9i\rdbms\admin\utlxplp.sql
   (or: select * from plan_table;)
8] If the output of this script shows TABLE ACCESS FULL, create an index on the column
   used in the WHERE clause and execute the SQL statement again:
   Create index no on y(no) tablespace test_ind;

PL/SQL TABLE
DECLARE
  TYPE NumList IS TABLE OF NUMBER;
  depts NumList := NumList(10, 20, 30, 40);
BEGIN
  depts.DELETE(3);  -- delete third element
  FORALL i IN depts.FIRST..depts.LAST
    DELETE FROM emp WHERE deptno = depts(i);
  -- causes an error: FORALL cannot span the gap left by DELETE(3)
END;

REF CURSOR
Create or replace procedure get_rows(p_table in varchar2) as
  type t_deptemp is ref cursor;
  v_cur t_deptemp;
begin
  if upper(p_table) = 'EMP' then
    open v_cur for select * from emp;
  elsif upper(p_table) = 'DEPT' then
    open v_cur for select * from dept;
  else
    message('Cursor unable to Open');
  end if;
end;

Cloning a Database Using RMAN
connect target sys/secure@origdb
connect catalog rman/rman@catdb
connect auxiliary /
run {
  set newname for datafile 1 to '/ORADATA/u01/system01.dbf';
  set newname for datafile 2 to '/ORADATA/u02/undotbs01.dbf';
  set newname for datafile 3 to '/ORADATA/u03/users01.dbf';
  set newname for datafile 4 to '/ORADATA/u03/indx01.dbf';
  set newname for datafile 5 to '/ORADATA/u02/example01.dbf';
  allocate auxiliary channel dupdb1 type disk;
  set until sequence 2 thread 1;
  duplicate target database to dupdb
  logfile
    GROUP 1 ('/ORADATA/u02/redo01.log') SIZE 200k REUSE,
    GROUP 2 ('/ORADATA/u03/redo02.log') SIZE 200k REUSE;
}

ANALYZE SCHEMA and TABLES
1] Check the tablespace names from DBA_TABLESPACES:
   sql>select tablespace_name from dba_tablespaces;
2] Check which tablespace a table belongs to:
   sql>select tablespace_name, table_name from dba_tables where owner='SCOTT';
3] Move tables:
   sql>Alter table finance.dt_transaction move tablespace finance
       storage (initial 10m next 5m pctincrease 0 maxextents unlimited);
4] execute dbms_utility.analyze_schema('FINANCE', 'COMPUTE');

Checking Database Size
1] select sum(BYTES)/1024/1024/1024 from dba_data_files;
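The REF CURSOR procedure above opens a cursor but never hands it back to the caller. A common variant (a sketch, not from the original notes) returns the cursor through an OUT parameter of the built-in SYS_REFCURSOR type so the client can fetch from it:

```sql
-- Sketch: return the ref cursor to the caller via an OUT parameter.
-- The procedure name and error text are illustrative.
Create or replace procedure get_rows(p_table in varchar2,
                                     p_cur   out sys_refcursor) as
begin
  if upper(p_table) = 'EMP' then
    open p_cur for select * from emp;
  elsif upper(p_table) = 'DEPT' then
    open p_cur for select * from dept;
  else
    raise_application_error(-20001, 'Cursor unable to open');
  end if;
end;
/
```

A SQL*Plus caller can then bind a refcursor variable (`variable c refcursor`), execute the procedure, and `print c` to see the rows.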

Oracle Security
1] Fine-Grained Auditing
   -- This feature works with the CBO (Cost Based Optimizer); you must ANALYZE first:
   analyze table EMP compute statistics;
   -- Add a policy on the table with an auditing condition:
   execute dbms_fga.add_policy('HR', 'EMP', 'policy1', 'deptno > 10');
   select * from EMP where c1 = 11;  -- will trigger auditing
   select * from EMP where c1 = 09;  -- no auditing
   -- Now we can see the statements that triggered the auditing condition:
   select sqltext from sys.fga_log$;
   delete from sys.fga_log$;

Rollback Segments
Creating a rollback segment:
Create rollback segment rbs01 tablespace rbs
  storage (initial 100k next 100k minextents 20 maxextents 100 optimal 2000k);
1] A rollback segment is specified Public or Private at creation time and this cannot be changed.
2] For a rollback segment, MINEXTENTS must be at least two.
3] PCTINCREASE cannot be specified, or if specified must be set to 0.

Manually Starting the Archiver
alter system archive log start;

How does one switch to another user in Oracle
1] Sql>select password from dba_users where username='SCOTT';
   PASSWORD
   ----------------
   F894844C34402B67
2] SQL>alter user scott identified by lion;
   User altered
3] Connect scott/lion
4] Do whatever you like
5] connect system/manager
6] alter user scott identified by values 'F894844C34402B67';
7] connect scott/tiger

How does one add users to a password file
One can select from the SYS.V_$PWFILE_USERS view to see which users are listed in the password file:
Select * from V_$PWFILE_USERS;
New users can be added to the password file by granting them SYSDBA or SYSOPER privileges, or by using the orapwd utility:
GRANT SYSDBA TO scott;
Select * from V_$PWFILE_USERS;

Database Administration

How does one rename a database
Follow these steps to rename a database:
1. Start by making a full database backup of your database (in case you need to restore if this procedure does not work).
2. Execute this command from SQL*Plus while connected as 'SYS AS SYSDBA':
   ALTER DATABASE BACKUP CONTROLFILE TO TRACE RESETLOGS;
3. Locate the latest dump file in your USER_DUMP_DEST directory (show parameter USER_DUMP_DEST) and rename it to something like dbrename.sql.
4. Edit dbrename.sql: remove all headers and comments, and change the database's name. Also change "CREATE CONTROLFILE REUSE ..." to "CREATE CONTROLFILE SET ...".
5. Shutdown the database (use SHUTDOWN NORMAL or IMMEDIATE, don't ABORT!) and run dbrename.sql.
6. Rename the database's global name:
   ALTER DATABASE RENAME GLOBAL_NAME TO new_db_name;

How do I find used/free space in a TEMPORARY tablespace
Unlike normal tablespaces, true temporary tablespace information is not listed in DBA_FREE_SPACE. Instead use the V$TEMP_SPACE_HEADER view:
SELECT tablespace_name, SUM(bytes_used), SUM(bytes_free)
FROM V$temp_space_header
GROUP BY tablespace_name;

How does one create a password file
The Oracle Password File ($ORACLE_HOME/dbs/orapw or orapwSID) stores passwords for users with administrative privileges. One needs to create a password file before remote administrators (like OEM) will be allowed to connect.
Follow this procedure to create a new password file:
1. Log in as the Oracle software owner.
2. Run command: orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=mypasswd
3. Shutdown the database (SQLPLUS> SHUTDOWN IMMEDIATE)
4. Edit the INIT.ORA file and ensure REMOTE_LOGIN_PASSWORDFILE=exclusive is set.
5. Startup the database (SQLPLUS> STARTUP)

How does one manage Oracle database users
Oracle user accounts can be locked, unlocked, and forced to choose new passwords. For example, all accounts except SYS and SYSTEM will be locked after creating an Oracle9i database using the DB Configuration Assistant (dbca). DBAs must unlock these accounts to make them available to users. Look at these examples:
ALTER USER scott ACCOUNT LOCK;    -- lock a user account
ALTER USER scott ACCOUNT UNLOCK;  -- unlock a locked user's account
ALTER USER scott PASSWORD EXPIRE; -- force user to choose a new password

How does one change an Oracle user's password
Issue the following SQL command:
ALTER USER <username> IDENTIFIED BY <new_password>;
From Oracle8 you can just type "password" from SQL*Plus, or if you need to change another user's password, type "password user_name". Look at this example:
SQL> password
Changing password for SCOTT
Old password:
New password:
Retype new password:
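The edited dbrename.sql from the rename-database procedure reduces to something like the following (a sketch: the new name NEWDB, file paths, and limits are illustrative, not from the original notes):

```sql
-- Sketch of an edited controlfile trace script for renaming a database.
-- All names and paths below are assumptions for illustration.
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "NEWDB" RESETLOGS NOARCHIVELOG
  LOGFILE GROUP 1 ('d:\oracle\oradata\sonu\redo01.log') SIZE 10M,
          GROUP 2 ('d:\oracle\oradata\sonu\redo02.log') SIZE 10M
  DATAFILE 'd:\oracle\oradata\sonu\system01.dbf',
           'd:\oracle\oradata\sonu\users01.dbf'
  MAXLOGFILES 5
  MAXDATAFILES 100;
-- Because SET changed the database name, open with RESETLOGS:
ALTER DATABASE OPEN RESETLOGS;
```

The key difference from the generated trace is the SET keyword (new name) in place of REUSE (same name), which is why RESETLOGS is mandatory.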

Can one rename a database user (schema)
No, this is listed as Enhancement Request 158508. Workaround:
- Do a user-level export of user A
- Create new user B
- import system/manager fromuser=A touser=B
- Drop user A

Can one rename a tablespace
No, this is listed as Enhancement Request 148742. Workaround:
- Export all of the objects from the tablespace
- Drop the tablespace including contents
- Recreate the tablespace
- Import the objects

Measure the Buffer Cache Hit Ratio
Increase DB_BLOCK_BUFFERS if the cache hit ratio is below 90%:
select 1 - (phy.value / (cur.value + con.value)) "Cache Hit Ratio",
       round((1 - (phy.value / (cur.value + con.value))) * 100, 2) "% Ratio"
from   v$sysstat cur, v$sysstat con, v$sysstat phy
where = 'db block gets'
and = 'consistent gets'
and = 'physical reads';

Report free memory available in the SGA
select,
       s.sgasize/1024/1024 "Allocated (M)",
       f.bytes/1024 "Free (K)",
       round(f.bytes/s.sgasize*100, 2) "% Free"
from   (select sum(bytes) sgasize from sys.v_$sgastat) s,
       sys.v_$sgastat f
where = 'free memory';
Where can one find the high water mark for a table
There is no single system table which contains the high water mark (HWM) for a table. A table's HWM can be calculated using the results from the following SQL statements:
SELECT BLOCKS FROM DBA_SEGMENTS
WHERE OWNER = UPPER(owner) AND SEGMENT_NAME = UPPER(table);

ANALYZE TABLE owner.table ESTIMATE STATISTICS;

SELECT EMPTY_BLOCKS FROM DBA_TABLES
WHERE OWNER = UPPER(owner) AND TABLE_NAME = UPPER(table);

Thus, the table's HWM = (query result 1) - (query result 2) - 1.
NOTE: You can also use the DBMS_SPACE package and calculate the HWM = TOTAL_BLOCKS - UNUSED_BLOCKS - 1.

To see how far each datafile could be shrunk back toward its HWM:
select substr(file_name, 1, 20) filen,
       blocks total_blocks,
       hwm,
       blocks - hwm + 1 shrinkage_possible
from   dba_data_files a,
       (select file_id, max(block_id + blocks) hwm
        from dba_extents
        group by file_id) b
where  a.file_id = b.file_id;

Making User-Managed Backups of Offline Tablespaces
Before beginning a backup of a tablespace, identify the tablespace's datafiles by querying the DBA_DATA_FILES view. For example, assume that you want to back up the users tablespace. Enter the following in SQL*Plus:

SELECT TABLESPACE_NAME, FILE_NAME
FROM SYS.DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'users';

TABLESPACE_NAME   FILE_NAME
----------------- --------------------
USERS             /oracle/dbs/users.f

In this example, /oracle/dbs/users.f is a fully specified filename corresponding to the datafile in the users tablespace.
To back up offline tablespaces:
1. Take the tablespace offline using normal priority if possible. Normal priority is recommended because it guarantees that you can subsequently bring the tablespace online without the requirement for tablespace recovery. For example, the following statement takes a tablespace named users offline normally:
   SQL> ALTER TABLESPACE users OFFLINE NORMAL;
   After you take a tablespace offline with normal priority, all datafiles of the tablespace are closed.
2. Back up the offline datafiles. For example, a UNIX user might enter the following to back up the datafile users.f:
   % cp /disk1/oracle/dbs/users.f /disk2/backup/users.f
3. Bring the tablespace online. For example, the following statement brings tablespace users back online:
   SQL> ALTER TABLESPACE users ONLINE;
   After you bring a tablespace online, it is open and available for use.
4. Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived. For example, enter:
   SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

How backup mode works: Oracle stops recording checkpoints to the datafiles in the tablespace when a tablespace is in backup mode. When you restore a datafile backed up in this way, the datafile header has a record of the most recent datafile checkpoint that occurred before the online tablespace backup, so Oracle asks for the appropriate set of redo log files to apply should recovery be needed: the redo required is the redo generated before the backup, not any that occurred during it. After you take the tablespace out of backup mode with the ALTER TABLESPACE ... END BACKUP or ALTER DATABASE END BACKUP statement, Oracle advances the datafile header to the current database checkpoint. Because a block can be partially updated at the very moment that the operating system backup utility is copying it, Oracle copies whole changed data blocks into the redo stream while in backup mode. The redo logs contain all changes required to recover the datafiles and make them consistent.
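The DBMS_SPACE approach mentioned in the HWM note can be sketched as follows (SCOTT/EMP are placeholder names; the block just prints TOTAL_BLOCKS - UNUSED_BLOCKS - 1):

```sql
-- Sketch: compute a table's HWM with DBMS_SPACE.UNUSED_SPACE
-- (owner and table names are placeholders).
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks  NUMBER;
  l_total_bytes   NUMBER;
  l_unused_blocks NUMBER;
  l_unused_bytes  NUMBER;
  l_file_id       NUMBER;
  l_block_id      NUMBER;
  l_last_block    NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => 'SCOTT',
    segment_name              => 'EMP',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.PUT_LINE('HWM = ' || (l_total_blocks - l_unused_blocks - 1));
END;
/
```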
Making User-Managed Backups of Online Tablespaces and Datafiles
You can back up all or only specific datafiles of an online tablespace while the database is open. The procedure differs depending on whether the online tablespace is read/write or read-only.
This section contains these topics:
• Making User-Managed Backups of Online Read/Write Tablespaces
• Making Multiple User-Managed Backups of Online Read/Write Tablespaces
• Ending a Backup After an Instance Failure or SHUTDOWN ABORT
• Making User-Managed Backups of Read-Only Tablespaces
• Making User-Managed Backups of Undo Tablespaces

Making User-Managed Backups of Online Read/Write Tablespaces
You must put a read/write tablespace in backup mode to make user-managed datafile backups when the tablespace is online and the database is open. The ALTER TABLESPACE BEGIN BACKUP statement places a tablespace in backup mode.
To back up online read/write tablespaces in an open database:
1. Before beginning a backup of a tablespace, identify all of the datafiles in the tablespace with the DBA_DATA_FILES data dictionary view. For example, assume that you want to back up the users tablespace. Enter the following:

   SELECT TABLESPACE_NAME, FILE_NAME

   FROM SYS.DBA_DATA_FILES
   WHERE TABLESPACE_NAME = 'users';

   TABLESPACE_NAME   FILE_NAME
   ----------------- --------------------------
   USERS             /oracle/dbs/tbs_21.f
   USERS             /oracle/dbs/tbs_22.f

   In this example, /oracle/dbs/tbs_21.f and /oracle/dbs/tbs_22.f are fully specified filenames corresponding to the datafiles of the users tablespace.
2. Mark the beginning of the online tablespace backup. For example, the following statement marks the start of an online backup for the tablespace users:
   SQL> ALTER TABLESPACE users BEGIN BACKUP;
3. Back up the online datafiles of the online tablespace with operating system commands. For example, UNIX users might enter:
   % cp /oracle/dbs/tbs_21.f /oracle/backup/tbs_21.backup
   % cp /oracle/dbs/tbs_22.f /oracle/backup/tbs_22.backup
4. After backing up the datafiles of the online tablespace, indicate the end of the online backup by using the SQL statement ALTER TABLESPACE with the END BACKUP option. For example, the following statement ends the online backup of the tablespace users:
   SQL> ALTER TABLESPACE users END BACKUP;
5. Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived. For example, enter:
   SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

Making Multiple User-Managed Backups of Online Read/Write Tablespaces
When backing up several online tablespaces, you can back them up either serially or in parallel. Use either of the following procedures depending on your needs.

Backing Up Online Tablespaces in Parallel
You can simultaneously put all tablespaces requiring backups in backup mode. Note that online redo logs can grow large if multiple users are updating these tablespaces because the redo must contain a copy of each changed data block.
To back up online tablespaces in parallel:
1. Prepare all online tablespaces for backup by issuing all necessary ALTER TABLESPACE statements at once. For example, put tablespaces ts1, ts2, and ts3 in backup mode as follows:
   SQL> ALTER TABLESPACE ts1 BEGIN BACKUP;
   SQL> ALTER TABLESPACE ts2 BEGIN BACKUP;
   SQL> ALTER TABLESPACE ts3 BEGIN BACKUP;
2. Back up all files of the online tablespaces. For example,
   a UNIX user might back up datafiles with the tbs_ prefix as follows:
   % cp /oracle/dbs/tbs_* /oracle/backup
3. Take the tablespaces out of backup mode as in the following example:
   SQL> ALTER TABLESPACE ts1 END BACKUP;
   SQL> ALTER TABLESPACE ts2 END BACKUP;
   SQL> ALTER TABLESPACE ts3 END BACKUP;
4. Archive the unarchived redo logs so that the redo required to recover the tablespace backups is archived. For example, enter:
   SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

Backing Up Online Tablespaces Serially
You can place all tablespaces requiring online backups in backup mode one at a time. Oracle Corporation recommends the serial backup option because it minimizes the time between ALTER TABLESPACE ... BEGIN/END BACKUP statements. During online backups, more redo information is generated for the tablespace because whole data blocks are copied into the redo log.
To back up online tablespaces serially:
1. Prepare a tablespace for online backup. For example, to put tablespace tbs_1 in backup mode enter the following:
   SQL> ALTER TABLESPACE tbs_1 BEGIN BACKUP;
2. Back up the datafiles in the tablespace. For example, enter:
   % cp /oracle/dbs/tbs_1.f /oracle/backup/tbs_1.bak
3. Take the tablespace out of backup mode. For example, enter:
   SQL> ALTER TABLESPACE tbs_1 END BACKUP;
4. Repeat this procedure for each remaining tablespace until you have backed up all the desired tablespaces.
5. Archive the unarchived redo logs so that the redo required to recover the tablespace backups is archived. For example, enter:
   SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

Ending a Backup After an Instance Failure or SHUTDOWN ABORT
This section contains these topics:
• About Instance Failures When Tablespaces are in Backup Mode
• Ending Backup Mode with the ALTER DATABASE END BACKUP Statement
• Ending Backup Mode with the RECOVER Command

About Instance Failures When Tablespaces are in Backup Mode
The following situations can cause a tablespace backup to fail and be incomplete:
• The backup completed, but you did not indicate the end of the online tablespace backup operation with the ALTER TABLESPACE ... END BACKUP statement.
• An instance failure or SHUTDOWN ABORT interrupted the backup before you could complete it.
Whenever crash recovery is required (not instance recovery, because in this case the datafiles are open already), if a datafile is in backup mode when an attempt is made to open it, then the system assumes that the file is a restored backup. Oracle will not open the database until either a recovery command is issued, or the datafile is taken out of backup mode. For example, Oracle may display a message such as the following when you run the STARTUP statement:
ORA-01113: file 12 needs media recovery
ORA-01110: data file 12: '/oracle/dbs/tbs_41.f'
If Oracle indicates that the datafiles for multiple tablespaces require media recovery because you forgot to end the online backups for these tablespaces, then so long as the database is mounted, running the ALTER DATABASE END BACKUP statement takes all the datafiles out of backup mode simultaneously.
In high availability situations, and in situations when no DBA is monitoring the database (for example, in the early morning hours), the requirement for user intervention is intolerable. Hence, you can write a crash recovery script that does the following:
1. Mounts the database
2. Runs the ALTER DATABASE END BACKUP statement
3. Runs ALTER DATABASE OPEN, allowing the system to come up automatically
An automated crash recovery script containing ALTER DATABASE END BACKUP is especially useful in the following situations:
• All nodes in an Oracle Real Application Clusters configuration fail.
• One node fails in a cold failover cluster (that is, a cluster that is not an Oracle Real Application Cluster, in which the secondary node must mount and recover the database when the first node fails).
Alternatively, you can take the following manual measures after the system fails with tablespaces in backup mode:
• Recover the database and avoid issuing END BACKUP statements altogether.
• Mount the database, then run ALTER TABLESPACE ... END BACKUP for each tablespace still in backup mode. If the database is open, use ALTER TABLESPACE ... END BACKUP or ALTER DATABASE DATAFILE ... END BACKUP for each affected tablespace or datafile.

Ending Backup Mode with the ALTER DATABASE END BACKUP Statement
You can run the ALTER DATABASE END BACKUP statement when you have multiple tablespaces still in backup mode. You can use this statement only when the database is mounted but not open. The primary purpose of this command is to allow a crash recovery script to restart a failed system without DBA intervention. You can also perform the following procedure manually.
To take tablespaces out of backup mode simultaneously:
1. Mount the database. For example, enter:
   SQL> STARTUP MOUNT
2. If performing this procedure manually (that is, not as part of a crash recovery script), query the V$BACKUP view to list the datafiles of the tablespaces that were being backed up before the database was restarted:
   SQL> SELECT * FROM V$BACKUP WHERE STATUS = 'ACTIVE';

   FILE#      STATUS     CHANGE#    TIME
   ---------- ---------- ---------- ---------
   12         ACTIVE     20863      25-NOV-00
   13         ACTIVE     20863      25-NOV-00
   20         ACTIVE     20863      25-NOV-00
   3 rows selected.
3. Issue the ALTER DATABASE END BACKUP statement to take all datafiles currently in backup mode out of backup mode. For example, enter:
   SQL> ALTER DATABASE END BACKUP;
   Only run the ALTER DATABASE END BACKUP or ALTER TABLESPACE ... END BACKUP statement if you are sure that the files are current.

Ending Backup Mode with the RECOVER Command
The ALTER DATABASE END BACKUP statement is not the only way to respond to a failed online backup: you can also run the RECOVER command. This method is useful when you are not sure whether someone has restored a backup, because if someone has indeed restored a backup, then the RECOVER command brings the backup up to date.
To take tablespaces out of backup mode with the RECOVER command:
1. Mount but do not open the database. For example, enter:
   SQL> STARTUP MOUNT
2. Recover the database as normal. For example, enter:
   SQL> RECOVER DATABASE
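The three-step crash recovery script described above can be sketched as a small SQL*Plus script (the file name and invocation are assumptions for illustration):

```sql
-- crash_recover.sql: sketch of an automated crash recovery script
-- run at system startup, e.g.: sqlplus /nolog @crash_recover.sql
CONNECT / AS SYSDBA
-- 1. Mount the database
STARTUP MOUNT
-- 2. Take any datafiles left in backup mode out of backup mode
ALTER DATABASE END BACKUP;
-- 3. Open the database so the system comes up without DBA intervention
ALTER DATABASE OPEN;
EXIT
```

Only use such a script when you are sure the datafiles are current; if a backup may have been restored, use RECOVER DATABASE instead, as described below.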

1. Mount but do not open the database:
   SQL> STARTUP MOUNT
2. Recover the database as normal:
   SQL> RECOVER DATABASE
3. Use the V$BACKUP view to confirm that there are no active datafiles:
   SQL> SELECT * FROM V$BACKUP WHERE STATUS = 'ACTIVE';

   FILE#  STATUS  CHANGE#  TIME
   -----  ------  -------  ---------
   0 rows selected.

Making User-Managed Backups of Read-Only Tablespaces

When backing up an online read-only tablespace, you can simply back up the online datafiles. You do not have to take the tablespace offline or place it in backup mode, because users are automatically prevented from making changes to a read-only tablespace.

To back up online read-only tablespaces in an open database:
1. Query the DBA_TABLESPACES view to determine which tablespaces are read-only. For example, run this query:
   SELECT TABLESPACE_NAME, STATUS
   FROM DBA_TABLESPACES
   WHERE STATUS = 'READ ONLY';
2. Before beginning the backup, identify all of the tablespace's datafiles by querying the DBA_DATA_FILES data dictionary view. For example, assume that you want to back up the history tablespace:
   SELECT TABLESPACE_NAME, FILE_NAME
   FROM SYS.DBA_DATA_FILES
   WHERE TABLESPACE_NAME = 'HISTORY';

   TABLESPACE_NAME   FILE_NAME
   ----------------  ------------------------
   HISTORY           /oracle/dbs/tbs_hist1.f
   HISTORY           /oracle/dbs/tbs_hist2.f

   In this example, /oracle/dbs/tbs_hist1.f and /oracle/dbs/tbs_hist2.f are fully specified filenames corresponding to the datafiles of the history tablespace.
3. Back up the online datafiles of the read-only tablespace with operating system commands. For example, UNIX users can enter:
   % cp /oracle/dbs/tbs_hist*.f /backup
4. Optionally, if the set of read-only tablespaces is self-contained, you can also export the tablespace metadata by using the transportable tablespace functionality. For example, export the metadata for tablespace history as follows:
   % exp TRANSPORT_TABLESPACE=y TABLESPACES=(history) FILE=/oracle/backup/tbs_hist.dmp

In the event of a media error or a user error (such as accidentally dropping a table in the read-only tablespace), you can quickly restore the datafiles and transport the tablespace back into the database by importing the metadata.

Making User-Managed Backups of Undo Tablespaces

In releases prior to Oracle9i, undo space management was based on rollback segments. This method is called manual undo management mode. In Oracle9i, you have the option of placing the database in automatic undo management mode. With this design, you allocate undo space in a single undo tablespace instead of distributing space into a set of statically allocated rollback segments.

The procedures for backing up undo tablespaces are exactly the same as for backing up any other read/write tablespace. Because the automatic undo tablespace is so important for recovery and for read consistency, you should back it up frequently, just as you would back up the tablespaces containing rollback segments when running in manual undo management mode.
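Automatic undo management, described above, is enabled through the parameter file. A minimal sketch follows; the tablespace name, file path, and size are illustrative assumptions, not values from the source:

```sql
-- init.ora / spfile settings (Oracle9i+); names are illustrative:
--   undo_management = AUTO
--   undo_tablespace = UNDOTBS1

-- Creating the undo tablespace itself:
CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/u02/oracle/data/undotbs01.dbf' SIZE 200M;

-- Switching the active undo tablespace without a restart:
ALTER SYSTEM SET UNDO_TABLESPACE = 'UNDOTBS1';
```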

Making User-Managed Backups in SUSPEND Mode

This section contains the following topics:
• About the Suspend/Resume Feature
• Making Backups in a Suspended Database

About the Suspend/Resume Feature

Some third-party tools allow you to mirror a set of disks or logical devices, that is, to maintain an exact duplicate of the primary data in another location. Splitting the mirror involves separating the copies so that you can use them independently. With the SUSPEND/RESUME functionality, you can suspend I/O to the database, then split the mirror and make a backup of the split mirror.

The ALTER SYSTEM SUSPEND statement, which complements the backup mode functionality, suspends the database by halting I/Os to datafile headers, datafiles, and control files. When the database is suspended, all pre-existing I/O operations can complete, and any new database I/O access attempts are queued. You can then access the suspended database to make backups without I/O interference. After the database backup is finished or the mirrors are re-silvered, you can resume normal database operations using the ALTER SYSTEM RESUME statement.

The ALTER SYSTEM SUSPEND and ALTER SYSTEM RESUME statements operate on the database and not just the instance. If the ALTER SYSTEM SUSPEND statement is entered on one system in an Oracle Real Application Clusters configuration, then the internal locking mechanisms propagate the halt request across instances, thereby suspending I/O operations for all active instances in a given cluster.

You do not need to use SUSPEND/RESUME to make split mirror backups in most cases, although it is necessary if your system requires the database cache to be free of dirty buffers before a volume can be split. Backing up a suspended database without splitting mirrors can cause an extended database outage, because the database is inaccessible during this time. If backups are taken by splitting mirrors, however, then the outage is nominal. The outage time depends on the size of the cache to flush, the number of datafiles, and the time required to break the mirror.

Note the following restrictions for the SUSPEND/RESUME feature:
• In an Oracle Real Application Clusters configuration, you should not start a new instance while the original nodes are suspended.
• No checkpoint is initiated by the ALTER SYSTEM SUSPEND or ALTER SYSTEM RESUME statements.
• You cannot issue SHUTDOWN with the IMMEDIATE or NORMAL options while the database is suspended. Issuing SHUTDOWN ABORT on a database that was already suspended reactivates the database. This operation prevents media recovery or crash recovery from hanging.
• While the database is suspended, RMAN cannot make database backups or copies, because these operations require reading the datafile headers.

Making Backups in a Suspended Database

After a successful database suspension, you can back up the database to disk or break the mirrors. Because suspending a database does not guarantee immediate termination of I/O, Oracle recommends that you precede the ALTER SYSTEM SUSPEND statement with a BEGIN BACKUP statement so that the tablespaces are placed in backup mode. You must use conventional user-managed backup methods to back up split mirrors.

To make a split mirror backup in SUSPEND mode:
1] Place the database tablespaces in backup mode. For example, to place tablespace users in backup mode, enter:
   ALTER TABLESPACE users BEGIN BACKUP;
2] If your mirror system has problems with splitting a mirror while disk writes are occurring, then suspend the database. For example, issue the following:

3]    ALTER SYSTEM SUSPEND;
4] Check that the database is suspended by querying V$INSTANCE. For example:
   SELECT DATABASE_STATUS FROM V$INSTANCE;

   DATABASE_STATUS
   -----------------
   SUSPENDED
5] Split the mirrors at the operating system or hardware level.
6] End the database suspension by issuing the following statement:
   ALTER SYSTEM RESUME;
   Check that the database is active by querying V$INSTANCE. For example:
   SELECT DATABASE_STATUS FROM V$INSTANCE;

   DATABASE_STATUS
   -----------------
   ACTIVE
7] Take the specified tablespaces out of backup mode. For example, enter the following to take tablespace users out of backup mode:
   ALTER TABLESPACE users END BACKUP;
   Copy the control file and archive the online redo logs as usual for a backup.

Temporary Tablespace

To see which datafiles belong to temporary tablespaces:
   select * from DBA_TEMP_FILES;
Creating a locally managed temporary tablespace:
   CREATE TEMPORARY TABLESPACE lmtemp TEMPFILE '/u02/oracle/data/lmtemp01.dbf'
   SIZE 20M REUSE EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
The following statement adds a tempfile to a temporary tablespace:
   ALTER TABLESPACE lmtemp ADD TEMPFILE '/u02/oracle/data/lmtemp02.dbf' SIZE 2M REUSE;
The following statements resize a tempfile:
   ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 4M;
   alter database tempfile 'D:\ORA9I\ORADATA\TEST1\TEMP01.DBF' resize 100m;
Taking a tempfile offline:
   ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' OFFLINE;

Bringing a tempfile back online, and dropping one:
   ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' ONLINE;
   ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;
The following statement creates a temporary dictionary-managed tablespace:
   CREATE TABLESPACE sort DATAFILE '/u02/oracle/data/sort01.dbf' SIZE 50M
   DEFAULT STORAGE (INITIAL 2M NEXT 2M MINEXTENTS 1 PCTINCREASE 0)
   EXTENT MANAGEMENT DICTIONARY TEMPORARY;

How do I find used/free space in a TEMPORARY tablespace?
Unlike normal tablespaces, true temporary tablespace information is not listed in DBA_FREE_SPACE. Instead, use the V$TEMP_SPACE_HEADER view:
   SELECT tablespace_name, SUM(bytes_used), SUM(bytes_free)
   FROM V$temp_space_header
   GROUP BY tablespace_name;

ANALYZE
   analyze table jagat.y compute statistics;
   select count(*) from jagat.y;
   select * from v$librarycache;
   select namespace, pins, reloads, invalidations from v$librarycache;
   select sum(pins) "exe", sum(reloads) "cache Miss", sum(reloads)/sum(pins) from v$librarycache;

Recovery Steps (how recovery works)
1] All datafiles are restored.
2] Redo is applied (roll forward).
3] The database now contains both committed and uncommitted transactions.
4] Undo is applied (uncommitted changes are rolled back).
5] The database is recovered.

User-Managed Recovery in NOARCHIVELOG Mode
1] Shutdown Abort
2] Copy all datafiles from the backup to the Oradata directory
3] Connect as sysdba
4] Startup

Recovery in NOARCHIVELOG Mode Without a Redo Log File Backup
1] Shutdown the instance: Shutdown Immediate
2] Restore the datafiles and control file from the most recent whole database backup (copy all datafiles and the control file)
3] Perform cancel-based recovery: Recover Database Until Cancel
4] Open the database in RESETLOGS mode: Alter Database Open Resetlogs;

Recovery in Archive Log Mode — Advantages and Disadvantages
Advantages
1] You need to restore only the lost or damaged files.
2] No committed data is lost.
3] Recovery can be performed while the database is open (except for SYSTEM tablespace files and datafiles that contain online rollback segments).
Disadvantage
1] You must have all the archived redo log files from the time of your last backup; if you are missing one, you cannot perform complete recovery, because all archive files are applied in sequence.

Closed Database Recovery
1] Shutdown Abort
2] Copy all datafiles
3] Startup Mount
4] Recover Database; or Recover Datafile '<PATH>'; or Recover Tablespace <name>
5] Alter Database Open;

Open Database Recovery
1] If datafile 2 is online, take datafile 2 offline.
2] Restore datafile 2 from backup.
3] Recover Datafile <path of datafile>; or Recover Tablespace <tablespace the datafile belongs to>
4] Alter database datafile <path> online;

Recovery Without a Backup Datafile
1] Mount the database: Startup Mount
2] Offline the tablespace: Alter tablespace table_data offline immediate;
3] Select * from V$recover_file;
4] Alter Database create Datafile '/disk2/data/df04.dbf' as '/disk4/data/df04.dbf';
5] Select * from V$recover_file;
6] Recover Tablespace table_data; or Alter Database recover Tablespace table_data;
7] Alter Tablespace table_data online;

Using RMAN to Recover a Database in NOARCHIVELOG Mode
1] rman target /
2] startup mount
3] Restore Database;
4] Recover Database;
5] Alter Database Open;

Using RMAN to Restore Datafiles to a New Location
1] Rman Target /
2] Startup Mount;
3] Run {
     set newname for datafile 1 to '/<newdir>/system.dbf';
     Restore Database;
     Switch Datafile all;
     Recover Database;
     Alter Database Open;
   }

Using RMAN to Recover a Tablespace
1] Run {
     Sql "Alter Tablespace users offline immediate";
     Restore Tablespace Users;
     Recover Tablespace Users;
     Sql "Alter tablespace Users Online";
   }

Using RMAN to Relocate a Tablespace

1] Run {
     Sql "Alter Tablespace Users Offline Immediate";
     Set Newname for datafile '/oradata/u03/users01.dbf' to '/oradata/u04/users01.dbf';
     Restore (tablespace Users);
     Switch Datafile 3;
     sql 'alter database archive log current';
     Recover Tablespace Users;
     Sql "Alter Tablespace Users online";
   }

Script Example
1] Create script Level0Backup {
     backup incremental level 0
     format '/u01/db1/backup/%d_%s_%p'
     fileperset 5
     (database include current controlfile);
   }
2] To execute this script:
   RMAN> run {execute script Level0Backup;}

EXPORT
1] Direct=Y
You can extract data much faster: the Export utility reads directly from the data layer instead of going through the SQL command-processing layer.
Restrictions:
A] The client-side and server-side character sets must be the same.
B] You cannot use the direct path option to export rows containing LOB, BFILE, REF, or object types.
Example:
   Exp userid=system/manager file=exp_dir.dmp full=y direct=y

2] CONSISTENT=y (default: n)
Specifies whether Export uses the SET TRANSACTION READ ONLY statement to ensure that the data seen by Export is consistent to a single point in time and does not change during the execution of the exp command. You should specify CONSISTENT=y when you anticipate that other applications will be updating the target data after an export has started.

If you use CONSISTENT=n, each table is usually exported in a single transaction. However, if a table contains nested tables, the outer table and each inner table are exported as separate transactions. If a table is partitioned, each partition is exported as a separate transaction. Therefore, if nested tables or partitioned tables are being updated by other applications, the exported data could be inconsistent. To minimize this possibility, export those tables at a time when updates are not being done.

For example, suppose user2 updates partitions TAB:P1 and TAB:P2 in one transaction while the export runs. If the export uses CONSISTENT=y, none of the updates by user2 are written to the export file. If the export uses CONSISTENT=n, the updates to TAB:P1 are not written to the export file, but the updates to TAB:P2 are, because the update transaction is committed before the export of TAB:P2 begins. As a result, the user2 transaction is only partially recorded in the export file, making it inconsistent.

If you use CONSISTENT=y and the volume of updates is large, the rollback segment usage will be large. In addition, the export of each table will be slower, because the rollback segment must be scanned for uncommitted transactions. Keep in mind the following points about using CONSISTENT=y:
• CONSISTENT=y is unsupported for exports that are performed when you are connected as user SYS or you are using AS SYSDBA, or both.
• Export of certain metadata may require the use of the SYS schema within recursive SQL. In such situations, the use of CONSISTENT=y will be ignored. Oracle Corporation recommends that you avoid making metadata changes during an export process in which CONSISTENT=y is selected.
• To minimize the time and space required for such exports, you should export tables that need to remain consistent separately from those that do not.
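The two-pass approach above can be sketched as follows. The file names, connect string, and table owner are illustrative assumptions, not values given in the source:

```
REM Hypothetical example -- file names and owner are illustrative.
REM Pass 1: tables that must stay mutually consistent, exported together:
exp system/manager tables=(scott.emp,scott.dept) consistent=y file=emp_dept.dmp

REM Pass 2: the remainder of the database, without the rollback overhead:
exp system/manager full=y consistent=n file=rest.dmp
```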

A "snapshot too old" error occurs when rollback space is used up and space taken up by committed transactions is reused for new transactions. If a committed transaction has been overwritten and the information is needed for a read-consistent view of the database, a "snapshot too old" error results. Reusing space in the rollback segment allows database integrity to be preserved with minimum space, but it imposes a limit on the amount of time that a read-consistent image can be preserved. To avoid this error, you should minimize the time taken by a read-consistent export. (Do this by restricting the number of objects exported and, if possible, by reducing the database transaction rate.) Also, make the rollback segment as large as possible. For example, export the emp and dept tables together in a consistent export, and then export the remainder of the database in a second pass.

Row Migration
When an updated row no longer fits in its block, Oracle will try to shift the entire row from the current block to another block having enough free space. When it does so, it will not remove all the relevant entries for that row from the old block: it will store the new block's row ID in the old block. Now, if I want to view that record, Oracle will internally first check the old block, get the new row ID from there, and display the row data from the new block.

The first question that you might ask is: what is the use of maintaining the old row ID if the entire row data has been migrated from the old block to the new one? This is because of Oracle's internal mechanism — for the entire lifespan of a row, its row ID will never change. That's why Oracle has to maintain two row IDs: one because of Oracle's internal mechanism (the permanent row ID) and one for the current location of the data. With this extra amount of I/O operation required, you likely have guessed correctly that it degrades performance.

Row Chaining
What we have discussed to this point is the case where we have data in a block and a new insertion is not possible into that block, which leads Oracle to go ahead and use a new block. So what happens when a row is so large that it cannot fit into one free block? In this case, Oracle will span the data into a number of blocks so that they can hold all of the data. The existence of such data results in "row chaining": the storage of one row's data in a chain of blocks. This primarily occurs with large data types (i.e. LOB, CLOB, BLOB, or big VARCHAR2 columns).

How to Avoid/Eliminate RM/RC
To avoid row migration, we can use a higher PCTFREE value, since migration is typically caused by update operations. However, there is a tradeoff, as the space allocated to PCTFREE is not used for normal insert operations and can end up wasted. A temporary solution (since it will only take care of the existing migrated rows and not the future ones) is to delete the migrated rows from the table and perform the insert again. To do this, follow these steps:
1] Analyze the table to get the row IDs of the migrated rows
2] Copy those rows to a temporary table
3] Delete the rows from the original table
4] Insert the rows from step 2 back into the original table
Avoiding row chaining is very difficult, since it is generally caused by insert operations using large data types (i.e. LOB, CLOB, BLOB, or big VARCHAR2 columns). A good precaution is to either use a large block size (which can't be changed without creating a new database) or use a large extent size.
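The four-step fix above can be sketched in SQL. This is a hedged outline: it assumes the CHAINED_ROWS table has already been created from Oracle's utlchain.sql script, and my_table is a hypothetical table suffering from row migration:

```sql
-- Assumes CHAINED_ROWS exists (created by $ORACLE_HOME/rdbms/admin/utlchain.sql)
-- and that my_table is the affected table (name is illustrative).
ANALYZE TABLE my_table LIST CHAINED ROWS INTO chained_rows;       -- step 1

CREATE TABLE my_table_tmp AS                                       -- step 2
  SELECT * FROM my_table
  WHERE rowid IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'MY_TABLE');

DELETE FROM my_table                                               -- step 3
  WHERE rowid IN (SELECT head_rowid FROM chained_rows
                  WHERE table_name = 'MY_TABLE');

INSERT INTO my_table SELECT * FROM my_table_tmp;                   -- step 4
DROP TABLE my_table_tmp;
```

Truly chained rows (too large for any block) will reappear in CHAINED_ROWS after the re-insert; only migrated rows are cured by this procedure.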

To summarize what we have discussed:
• Row migration (RM) is typically caused by UPDATE operations.
• Row chaining (RC) is typically caused by INSERT operations.
• SQL statements that create or query RM/RC data degrade performance because of the extra I/O work.
• To diagnose the problem, use the ANALYZE command, query the V$SYSSTAT view, or generate a report.txt file.
• To remove RM, use a higher PCTFREE value.

Shared Pool
The shared pool consists of two independent caches:
I] Library Cache — shared SQL and shared PL/SQL: holds the most recently used SQL/PLSQL statements.
II] Data Dictionary Cache — includes information about database files, tables, indexes, columns, users, and privileges.

Library Cache
If the size of the shared pool is too small, SQL statements are continually reloaded into the library cache, which affects performance. Using the V$LIBRARYCACHE view you can check RELOADS; if it is non-zero, increase the size of the shared pool.

Data Dictionary Cache
If the size of the data dictionary cache is too small, the database has to query the data dictionary tables repeatedly for information it needs. These queries are called recursive calls, and they affect performance.
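A diagnostic sketch for the two caches above. The threshold values in the comments are common rules of thumb, not official Oracle limits:

```sql
-- Library cache: reloads should be near zero relative to pins.
SELECT SUM(pins)                  "Executions",
       SUM(reloads)               "Cache Misses",
       SUM(reloads) / SUM(pins)   "Reload Ratio"  -- rule of thumb: < 0.01
FROM   v$librarycache;

-- Data dictionary cache: misses (GETMISSES) versus total gets.
SELECT SUM(getmisses) / SUM(gets) "DC Miss Ratio" -- rule of thumb: < 0.15
FROM   v$rowcache;
```

If either ratio stays high after the instance has warmed up, increasing SHARED_POOL_SIZE is the usual remedy; both caches live in the shared pool and are not sized individually.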

Database Buffer Cache
The database buffer cache stores copies of data blocks that have been retrieved from the datafiles. When a query is processed, the Oracle server process looks in the database buffer cache for any blocks it needs. If a block is not found in the database buffer cache, the server process reads the block from the datafile and places a copy in the database buffer cache. Because subsequent requests for the same block may find the block in memory, those requests may not require a physical read. The cache is managed through an LRU algorithm. The database buffer cache can be dynamically resized using:
   ALTER SYSTEM SET DB_CACHE_SIZE=96M;
Related parameters: DB_CACHE_SIZE, DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE.
DB_CACHE_ADVICE can be set to gather statistics for predicting behavior at different cache sizes. This parameter has three values (ON, OFF, READY):
ON: the advisory is turned on, and both CPU and memory overhead is incurred.
OFF: the advisory is off, and the memory for the advisory is not allocated.
READY: the advisory is turned off, but the memory for the advisory remains allocated.
View: V$DB_CACHE_ADVICE.

Redo Log Buffer
Its primary purpose is recovery. Changes recorded within it are called redo entries. Its size is defined by LOG_BUFFER.

Large Pool
The large pool is an optional area of memory in the SGA, configured only in a shared server environment. It relieves the burden placed on the shared pool: when users connect through the shared server, Oracle needs to allocate additional space for storing information about the connections between the user processes, dispatchers, and servers.
A] It is also configured for backup and restore operations.
B] The large pool does not use an LRU list.
C] It is sized by LARGE_POOL_SIZE:
   ALTER SYSTEM SET LARGE_POOL_SIZE=64M;

Java Pool
Required if you are installing and using Java. Sized by JAVA_POOL_SIZE.

UGA / PGA
The PGA is memory reserved for each user process that connects to an Oracle database.

Background Processes
Mandatory          Optional
1] DBWn            1] ARCn
2] LGWR            2] QMNn
3] SMON            3] LCKn
4] PMON            4] Snnn
5] CKPT            5] LMON
6] RECO            6] Dnnn

1] DBWn writes when:
   - a checkpoint occurs
   - dirty buffers reach a threshold
   - there are no free buffers in the database buffer cache
   - a timeout occurs
   - a RAC ping request is made
   - a tablespace is taken offline, made read-only, or placed in BEGIN BACKUP mode
   - a table is dropped or truncated
2] LGWR writes when:
   - a commit occurs
   - the redo log buffer is one-third full
   - 1 MB of redo accumulates
   - every three seconds
   - before DBWn writes
3] SMON responsibilities:
a] Instance recovery:

   --Rolls forward changes.
   --Opens the database for user access.
   --Rolls back uncommitted changes.
b] Coalesces free space in datafiles every three seconds.
c] Deallocates temporary segments.
4] PMON cleans up after failed processes by:
a] Rolling back the transaction.
b] Releasing locks.
c] Restarting dead dispatchers.
5] CKPT is responsible for:
a] Signaling DBWn at checkpoints.
b] Updating datafile and control file headers with checkpoint information.
6] ARCn:
a] Optional background process.
b] Automatically archives redo log files when ARCHIVELOG mode is set.
c] Preserves the record of all changes made to the database.

LOGICAL STRUCTURE
--An Oracle database is a group of tablespaces.
--A tablespace may consist of one or more segments.
--A segment is made up of extents.
--An extent is made up of logical blocks.
--A block is the smallest unit for read and write operations.

PHYSICAL STRUCTURE
1] Data files
2] Control files
3] Redo log files
4] Parameter file
5] Password file
6] Archived log files

Parameter File
--There are two types of parameters:
-Explicit: having an entry in the file.
-Implicit: no entry within the file; Oracle assumes default values.
--Parameter file contents:
-List of instance parameters.
-The name of the database the instance is associated with.
-Allocation of the memory structures of the SGA.
-The name and location of the control files.
-Information about undo segments.
-Online redo log file information.
--Pfile is the static parameter file.
-The pfile [initSID.ora] is a text file that can be modified with an operating system editor.
-Modifications to the file are made manually.
-Changes to the file take effect on the next startup.
--Default location is $ORACLE_HOME/dbs.
--Spfile is the dynamic parameter file.
--Pfile example:
db_name=db01
instance_name=db01
control_files=(D:\ORADATA\DB01\CONTROL01DB01.CTL, D:\ORADATA\DB01\CONTROL02DB01.CTL)
db_block_size=4096
db_block_buffers=500
shared_pool_size=31457280
db_files=1024

--Pfile example (continued):
max_dump_file_size=1024
background_dump_dest=d:\oracle9i\admin\db01\bdump
user_dump_dest=d:\oracle9i\admin\db01\udump
core_dump_dest=d:\oracle9i\admin\db01\cdump
undo_management=auto
undo_tablespace=undotbs

-Rules for specifying parameters:
--All parameters are optional.
--Parameters can be specified in any order.
--Enclose a parameter value in double quotation marks to include character literals.
--Additional files can be included with the keyword IFILE.

-SPFILE
--A binary file, with the ability to make changes persistent across shutdown and startup.
--Maintained by the Oracle server.
--Records parameter value changes made with ALTER SYSTEM.
--You can specify whether the change being made is temporary or persistent:
   ALTER SYSTEM SET parameter=value [SCOPE=MEMORY|SPFILE|BOTH]
   e.g. ALTER SYSTEM SET UNDO_TABLESPACE='UNDO2';
--Creating an spfile:
   CREATE SPFILE FROM PFILE;

-ORACLE MANAGED FILES (OMF)
--OMF is established by setting two parameters:
---DB_CREATE_FILE_DEST: sets the default location for datafiles.
---DB_CREATE_ONLINE_LOG_DEST_n: sets the default location for online redo log files and control files, up to a maximum of 5 locations.

--Starting an instance includes the following tasks:
---Reading spfileSID.ora, or spfile.ora (if the first is not found), or initSID.ora.
---Allocating the SGA.
---Starting the background processes.
---Opening the alertSID.log file and trace files.

-Starting Up Database NOMOUNT
--Startup: NOMOUNT (instance startup only).
--Usually you would start an instance without mounting a database only during database creation or re-creation of the control files.

-Starting Up Database MOUNT
--Startup: NOMOUNT, MOUNT (control files open).
--ALTER DATABASE DB01 MOUNT;
--Mounting the database allows specific maintenance operations:
---Renaming datafiles.
---Enabling and disabling redo log archive options.
---Performing full database recovery.
---Performing recovery of offline datafiles and tablespaces.

-Opening the Database READ ONLY
--ALTER DATABASE DB01 OPEN READ ONLY;
--In read-only mode you can:
---Execute queries.
---Execute disk sorts using locally managed tablespaces.
---Take datafiles (not tablespaces) offline and online.

--Shutdown: MOUNT, NOMOUNT, SHUTDOWN (the startup stages in reverse).
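A sketch of OMF in action. The paths are illustrative assumptions, and the target directories must already exist; with OMF set, Oracle names, sizes, and (on drop) deletes the files itself:

```sql
-- Illustrative paths; the directories must exist beforehand.
ALTER SYSTEM SET DB_CREATE_FILE_DEST = 'd:\oracle9i\oradata\db01';
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1 = 'd:\oracle9i\oradata\db01\log1';

-- No DATAFILE clause needed: Oracle creates and names the datafile
-- under DB_CREATE_FILE_DEST automatically.
CREATE TABLESPACE omf_demo;

-- Dropping the tablespace also removes the managed datafile from disk.
DROP TABLESPACE omf_demo INCLUDING CONTENTS AND DATAFILES;
```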

--ALTER SYSTEM KILL SESSION 'SID,SERIAL#';
---Rolls back the current user transaction.
---Releases all currently held table and row locks.

AlertSID.log File
--Its default location is specified by BACKGROUND_DUMP_DEST.
--It keeps the following records:
---When the database was started or shut down.
---A list of non-default initialization parameters.
---The startup of the background processes.
---The thread being used by the instance.
---The log sequence number LGWR is writing to.
---Information regarding log switches.
---Creation of tablespaces and undo segments.
---ALTER statements that have been issued.
---Information regarding errors such as ORA-600.

Enabling and Disabling Tracing
Session level:
   ALTER SESSION SET SQL_TRACE=TRUE;
   or: dbms_system.SET_SQL_TRACE_IN_SESSION
Instance level, by setting in the init.ora file:
   SQL_TRACE=TRUE

SGA
The SGA is dynamic and is sized using SGA_MAX_SIZE. Granule size depends on the estimated size of the SGA: if the SGA is less than 128 MB, the granule size is 4 MB; otherwise it is 16 MB.

Optimal Flexible Architecture (OFA)
OFA is the Oracle-recommended standard database architecture layout. OFA involves three major rules:
1] Establish a directory structure where any database file can be stored on any disk resource.
2] Separate objects with different behavior into different tablespaces.
3] Maximize database reliability and performance by separating database components across different disk resources.

Using Password File Authentication
--Create a password file using the password file utility:
   $ orapwd file=$ORACLE_HOME/dbs/orapwu15 password=admin entries=5
--Set REMOTE_LOGIN_PASSWORDFILE to EXCLUSIVE in the init.ora file.
--Add users to the password file and assign the appropriate privilege to each user:
   GRANT SYSDBA TO HR;

Control File
The control file is a binary file that defines the current state of the physical database.
--It maintains the integrity of the database.
--It is read at the MOUNT stage.
--It is linked to a single database.
--It should be multiplexed.
Loss of the control files requires recovery.

Redo Log Files
Redo log files work in a cyclic fashion. Redo log files are organized into groups; each redo log file within a group is called a member. An Oracle database requires at least two groups. When the current redo log file is full, LGWR moves to the next log group:
-This is called a log switch.
-A checkpoint operation also occurs.
-Information is written to the control file.
The limits are set initially by CREATE DATABASE (MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, MAXINSTANCES).

Checkpoints occur:
-At every log switch.
-When an instance is shut down with NORMAL, TRANSACTIONAL, or IMMEDIATE.
-When forced by setting the initialization parameter FAST_START_MTTR_TARGET.
-When manually requested by the DBA: ALTER SYSTEM CHECKPOINT;
-When ALTER TABLESPACE {BEGIN BACKUP | OFFLINE NORMAL | READ ONLY} is issued.

Control File Contents
-Database name and identifier.
-Timestamp of database creation.
-Tablespace names.
-Names and locations of datafiles and redo log files.
-Current redo log file sequence number.
-Checkpoint information.
-Begin and end of undo segments.
-Redo log archive information.
-Backup information.
Views: V$CONTROLFILE, V$PARAMETER, V$CONTROLFILE_RECORD_SECTION.

What is the difference between db file sequential and scattered reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Most people confuse these events with each other because they think of how data is read from disk; instead, think of how data is read into the SGA buffer cache.

db file sequential read: A sequential read operation reads data into contiguous memory (usually a single-block read with P3=1, but it can be multiple blocks). Single-block I/Os are usually the result of using indexes. In general, this event is indicative of disk contention on index reads. This event is also used for rebuilding the control file and reading datafile headers (P2=1).

db file scattered read: Similar to a db file sequential read, except that the session is reading multiple data blocks and scattering them into different, discontiguous buffers in the SGA. This statistic normally indicates disk contention on full table scans. Rarely, data from a full table scan can be fitted into a contiguous buffer area; those waits would then show up as sequential reads instead of scattered reads.

The following query shows the average wait time for sequential versus scattered reads. Time is reported in 100ths of a second for Oracle 8i releases and below, and 1000ths of a second for Oracle 9i and above:

prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';

How to Export One User, Drop the Existing User, and Create a New One
Take the export of Schema1:

Using:
   Exp system/manager@<DataSource> file=c:\Schema1.dmp owner=Schema1
Once the export completes without errors, drop the Schema1 user with the cascade option. Using:
   Drop user Schema1 cascade;
Create the new user, and import Schema1 into the new user. Using:
   Imp system/manager@<DataSource> file=c:\Schema1.dmp fromuser=<old_username> touser=<new_username>

Setting Column Defaults
alter table employee modify (
  year default to_char(sysdate,'YYYY'),
  country default 'Pakistan'
);

Create Database Manually
1] c:\>oradim –new –sid punch –srvc punch
   c:\>net start OracleServicepunch
   c:\>net stop OracleServicepunch
2] Create subdirectories in the oracle directory:
   d:\ora9i\admin\punch\archive, \udump, \bdump, \cdump, \pfile, \control, \data
3] Create the initsid.ora file in the pfile directory:
   db_name=punch
   instance_name=punch
   control_files=(D:\ora9i\admin\punch\control\con01.ctl,D:\ora9i\admin\punch\control\con02.ctl)
   db_block_size=4096
   db_block_buffers=500
   shared_pool_size=31457280
   db_files=200
   compatible=9.0.1.1.1
   background_dump_dest=d:\ora9i\admin\punch\bdump
   core_dump_dest=d:\ora9i\admin\punch\cdump
   user_dump_dest=d:\ora9i\admin\punch\udump
   undo_management=auto
   undo_tablespace=undotbs
4] c:\>net start OracleServicepunch
5] c:\>set oracle_sid=punch
6] c:\>sqlplus "/as sysdba"
7] SQL> startup nomount pfile=d:\ora9i\admin\punch\pfile\initpunch.ora
8] Create Database Statement:
   create database punch
   maxdatafiles 100
   maxlogfiles 5
   maxlogmembers 5
   maxloghistory 1

It will look something like this: STARTUP NOMOUNT CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS NOARCHIVELOG MAXLOGFILES 16 MAXLOGMEMBERS 2 MAXDATAFILES 240 MAXINSTANCES 1 MAXLOGHISTORY 113 LOGFILE GROUP 1 ('/u03/oradata/oldlsq/log1a.'/u04/oradata/oldlsq/log2b.dbf'. sign on as SYSDBA and issue: “alter database backup controlfile to trace”.ora file. or if the last shutdown was not normal or immediate. This will put the create database syntax in the trace file directory.dbf' size 20m extent management local uniform size 128k. That is they can be used as Load sheading.that is when you have too many backup Server.rdo') size 1m force logging datafile 'd:\ora9i\admin\punch\data\system01.dbf' size 50m autoextend on undo tablespace UNDOTBS datafile 'd:\ora9i\admin\punch\data\undo.bsq Diffrance Between Standby Database And Replication Standby Replication A Standby database (or Databases) are used for A Replication Database (or Databases) are used for Database security. What is the difference between Logical and Physical Backups? Logical Backups: In this backup Oracle Export utility stores data in Binary file at OS level.'/u01/oradata/oldlsq/mydatabase.dbf') SIZE 30M. Tape etc.dbf' size 30m default temporary tablespace TEMP tempfile 'd:\ora9i\admin\punch\data\temp. go into SQL*Plus.dbf'. GROUP 2 ('/u04/oradata/oldlsq/log2a. group 2('d:\ora9i\admin\punch\data\redo2_01. 9] Run the Scripts SQL>@d:\ora9i\rdbms\admin\catalog.sql SQL>@d:\ora9i\rdbms\admin\sql. . RECOVER DATABASE # Database can now be opened normally. The trace keyword tells oracle to generate a script containing a create controlfile command and store it in the trace directory identified in the user_dump_dest parameter of the init.) Cloning Database This procedure can be use to quickly migrate a system from one UNIX server to another.dbf'.maxinstances 1 logfile group 1('d:\ora9i\admin\punch\data\redo1_01. 
# Recovery is required if any of the datafiles are restored # backups.dbf') SIZE 30M DATAFILE '/u01/oradata/oldlsq/system01. It clones the Oracle database and this Oracle cloning procedures is often the fastest way to copy a Oracle database. Physical Backups: In this backup physically Datafiles are copied from one location to another( Disk.dbf'. STEP 1: On the old system.'/u03/oradata/olslsq/log1b.sql SQL>@d:\ora9i\rdbms\admin\catproc.rdo') size 1m. where by the changes to the users on a respective database u can replicate production databases are propagated to the this database so that you can cutdown the performance Standby database through the Data Guard degradition (and you can switch to the Standby Database when the Production database is not available).
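The two read events above can be watched live from V$SESSION_WAIT; a minimal sketch (the view and its columns are standard, the event filter is the interesting part):

```sql
-- Sessions currently waiting on single-block or multiblock reads.
-- For these events: p1 = file#, p2 = block#, p3 = number of blocks.
SELECT sid, event, p1 AS file#, p2 AS block#, p3 AS blocks
FROM   v$session_wait
WHERE  event IN ('db file sequential read', 'db file scattered read');
```

A single-block read shows p3 = 1; a multiblock full-scan read shows p3 > 1.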

STEP 2: Shutdown the old database.

STEP 3: Copy all data files into the new directories on the new server. You may change the file names if you want, but you must edit the controlfile to reflect the new data file names on the new server.

rcp /u01/oradata/oldlsq/* newhost:/u01/oradata/newlsq
rcp /u03/oradata/oldlsq/* newhost:/u03/oradata/newlsq
rcp /u04/oradata/oldlsq/* newhost:/u04/oradata/newlsq

STEP 4: Copy and edit the control file. Using the output syntax from STEP 1, modify the controlfile creation script by changing the following:

Old: CREATE CONTROLFILE REUSE DATABASE "OLDLSQ" NORESETLOGS
New: CREATE CONTROLFILE SET DATABASE "NEWLSQ" NORESETLOGS

STEP 5: Remove the "recover database" and "alter database open" syntax:

# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;

STEP 6: Rename the data file names that have changed:

Old: DATAFILE '/u01/oradata/oldlsq/system01.dbf', '/u01/oradata/oldlsq/mydatabase.dbf'
New: DATAFILE '/u01/oradata/newlsq/system01.dbf', '/u01/oradata/newlsq/mydatabase.dbf'

Save the edited script as db_create_controlfile.sql.

STEP 7: Create the bdump, udump and cdump directories:

cd $DBA/admin
mkdir newlsq
cd newlsq
mkdir bdump
mkdir udump
mkdir cdump
mkdir pfile

STEP 8: Copy over the old init.ora file:

rcp $DBA/admin/olslsq/pfile/*.ora newhost:/u01/oracle/admin/newlsq/pfile

STEP 9: Start the new database:

startup nomount;
@db_create_controlfile.sql
alter database open;

STEP 10: Place the new database in archivelog mode.

Why is RBO being removed?
Oracle 9i Release 2 will be the last version that officially supports RBO. Oracle support for RBO is limited to bug fixes only, and no new functionality will be added to RBO. Though RBO will still be present in Oracle 10g, it will no longer be supported; as per a published Oracle note, RBO will subsequently be removed from the Oracle database. Oracle recommends that all partners and customers certify their applications with CBO before RBO support ends, because once RBO is no longer supported, Oracle support will not be available for it. Presently, the existence of RBO prevents Oracle from making key enhancements to its query-processing engine; its removal will permit Oracle to improve the performance and reliability of the query-processing components of the database engine.

How Can Contention for Rollback Segments Be Reduced?
Contention for rollback segments is an issue for Oracle DBAs when using manual (rollback segment) undo management. The following techniques can be useful to reduce contention for rollback segments:
- Increase the number of rollback segments.
- Set the storage parameter NEXT to be the same as INITIAL for rollback segments.
- Set the storage parameter MINEXTENTS for rollback segments to be at least 20.
- Set the storage parameter OPTIMAL for rollback segments to be equal to INITIAL x MINEXTENTS.
- Ensure plenty of free space in the rollback tablespace.
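The storage guidelines above can be combined into a single statement; this is a sketch only, where the segment and tablespace names (rbs01, rbs) and the sizes are made-up examples:

```sql
-- NEXT = INITIAL, MINEXTENTS >= 20, OPTIMAL = INITIAL x MINEXTENTS
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 20M);
ALTER ROLLBACK SEGMENT rbs01 ONLINE;
```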

Why move to CBO? Key benefits that come to mind:
1. Oracle stopped developing for the RBO environment a long time back. CBO has been improved across releases and today it is a much better alternative, considering the benefits of and advances toward new features.
2. RBO has a limited number of access methods compared to CBO. All the new features (bitmap indexes, function-based indexes, reverse-key indexes, index-organized tables, partitioning, materialized views, parallel query, hash joins, star joins, etc.) require CBO; CBO is built to recognize these features and to evaluate their cost. Most of these features are of importance for any setup.
3. Prior to Oracle 7, CBO would not behave as expected and often chose bad execution plans, so RBO could outperform CBO in some situations. Since then, CBO has matured.
4. Distributed and remote queries are more reliable with CBO. In RBO, it was difficult to fine-tune queries that used database links and had more than one table from the local and remote database. In CBO, the local optimizer is aware of the statistics present in the remote table and is able to make better decisions on execution plans.
5. RBO may not consider indexes on remote databases, but CBO has access to statistics and information regarding indexes on a remote database and can decide on an execution plan accordingly. CBO outperforms RBO in this regard.

How to Check the Health of an Oracle Database
1] Hardware [RAID {CPU, Memory, Disk}, software RAID]
2] Review OFA [environment variables {ORACLE_HOME, DSN, DBA}, file naming]
3] Review the Oracle parameters [init.ora {parameters}, sqlnet.ora, listener.ora, optimizer mode, HTTP]
4] Review the object parameters [pctfree, pctused, freelists]
5] Review the tablespace setup [LMT, ASSM (9i and beyond)]
6] Review the scheduled jobs [Statspack snap {hourly}, backup, dumps, file cleanup {remove old trace files, archive log files}]

This query returns the dates of a month (here 200510) in ascending order:

SELECT TO_DATE(200510,'yyyymm') + ROWNUM - 1
  FROM all_objects
 WHERE ROWNUM <= (SELECT TO_CHAR(LAST_DAY(TO_DATE(200510,'yyyymm')),'dd') FROM dual);
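CBO can only choose good plans when statistics exist, so certifying an application with CBO starts with gathering them; a minimal sketch (the schema name SCOTT is an assumption):

```sql
-- Gather statistics for a whole schema, including its indexes.
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', cascade => TRUE);
```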

I have answered some of your questions; please add your explanations if needed. All other experts, please check whether these answers are correct and sufficiently complete.

1) What is a shared memory segment?
The SGA is a shared memory segment. Shared memory is an efficient means of passing data between programs: one program creates a memory portion which other processes (if permitted) can access. A shared memory segment is a segment of memory that is shared between processes (SMON, PMON, etc.).
The SHMMAX kernel parameter limits the size of a single shared memory segment. Suppose you have 4GB of physical RAM, your current SGA size is 2.5GB, and you have set SHMMAX to 3GB; this means your SGA can grow up to the 3GB limit.

8) Semaphore management:
A semaphore is a term used for a signal flag used by the Navy to communicate between ships. In some dialects of UNIX, semaphores are used by Oracle to serialize internal Oracle processes and guarantee that one thing happens before another. Oracle uses semaphores in HP/UX and Solaris to synchronize shadow processes and background processes. (AIX does not use semaphores; a post/wait driver is used instead to serialize tasks.) The number of semaphores for an Oracle database is normally equal to the value of the processes initialization parameter. For example, a database where processes=200 would need 200 UNIX semaphores allocated for the Oracle database.

Typical DBA tasks:
- Configure RMAN
- Create a new database manually
- Take hot and cold backups
- Tablespace backup, export and import using RMAN
- Incomplete and complete recovery; user-managed recovery
- Run the database in archivelog and noarchivelog mode
- Multiplex control files; add, resize and rename datafiles
- Create, resize and rename log files
- Create tablespaces
- Rebuild and analyze indexes; analyze schemas and tables
- Use SQL*Loader to load data into Oracle tables from text files
- Fine-tune SQL statements using EXPLAIN PLAN (plus UTLBSTAT, UTLESTAT, TKPROF)
- Create Statspack reports
- Migrate Oracle 8i data into Oracle 9i
- Make changes in another schema without knowing the password
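The SQL*Loader task in the list above takes a control file plus a text file; a hypothetical sketch, where the file names, the table EMP, and its columns are all assumptions:

```
-- emp.ctl (SQL*Loader control file)
LOAD DATA
INFILE 'emp.txt'
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)
```

It would then be run from the OS prompt with something like: sqlldr userid=scott/tiger control=emp.ctl log=emp.log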

Components of the SGA
The SGA consists of the following four parts: fixed portion, variable portion, shared pool, and java pool.

25) What is the SGA?
The SGA is a chunk of memory that is allocated by an Oracle instance (during the nomount stage) and is shared among Oracle processes, hence the name. It contains all sorts of information about the instance and the database that is needed to operate.

27) "Instance is started" means what?
Instance = memory structures + background processes. So when the memory structures are allocated and the background processes are started, we say the instance is started.

1) What is a shared memory segment?
We have to configure the SHMMAX parameter in the UNIX kernel so the SGA can be populated in RAM, because the SGA is itself a shared memory segment.

2) How can you start your standby server?
startup nomount;
alter database mount standby database;
alter database recover managed standby database disconnect from session;

21) What is undo retention?
Automatic undo management allows the DBA to specify how long undo information should be retained after commit, preventing "snapshot too old" errors on long-running queries. This is done by setting the UNDO_RETENTION parameter, and you can set this parameter to guarantee that Oracle keeps undo for extended periods of time. The default is 900 seconds (15 minutes).

42) Explain an ORA-01555.
You get this error when a query hits "snapshot too old" within rollback. It can usually be solved by increasing the undo retention or increasing the size of the rollback segments. You should also look at the logic of the application that is getting the error.

29) What is a hot backup?
Taking a backup while the database is up and working online.

A note on semaphore allocation: when allocating semaphores in UNIX, it is critical that your UNIX kernel parameter semmns be set to at least double the high-water mark of processes for every database instance on your server. If you fail to allocate enough semaphores by setting semmns too low, your Oracle database will fail at startup time with the message:
ORA-7279: spcre: semget error, unable to get first semaphore set
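Setting and checking UNDO_RETENTION is a one-liner; the 10800-second (3-hour) value here is only an example:

```sql
-- Keep committed undo for 3 hours to protect long-running queries.
ALTER SYSTEM SET undo_retention = 10800;
SHOW PARAMETER undo_retention
```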

2) How can you start your standby server?
startup nomount; alter database mount standby database; alter database recover managed standby database disconnect from session;
3) Why nomount first? Because after the nomount stage of a standby server we have to mount it as a standby database.
4) When installing Oracle on the Solaris platform, which file do you have to modify first?
/etc/system, for configuration of the kernel.
5) If your database is hung and you can't connect through SQL*Plus, how can you shut it down?
ipcs -s | grep $ORACLE_SID gives us the semaphore set IDs for the instance, and we can remove them with the ipcrm command.
6) What is $0? The shell script's name.
7) What is $1? The first argument after the shell script's name.
8) What is semaphore management? Internal latch contention and resource-wait handling at the UNIX level.
9) When do you use RESETLOGS and NORESETLOGS?
When we perform complete recovery we can open the database with NORESETLOGS; when we perform incomplete recovery we have to open the database with RESETLOGS.
10) When you recover only one datafile, do you use RESETLOGS or NORESETLOGS?
That is still complete recovery, so we use NORESETLOGS.
11) But if you have issued RESETLOGS, what do you do after it?
Once RESETLOGS is issued, we have to take a full cold backup.
12) If our buffer cache hit ratio is 90% but the system is slow, how can you judge that the buffer cache is too small?
From V$WAITSTAT, in the data block contention statistics.
13) What is the 10046 event?
It is the best way for a DBA to understand the wait events created by a query (extended SQL trace).
14) What is crontab? The UNIX utility for scheduling processes.
15) Without using crontab, which command can you use for this? The at command.
16) How can you check which Oracle instances are running on your server?
ps -ef | grep ora | grep -v grep
17) Which file do you have to modify if you want an Oracle instance to start automatically? /etc/oratab
18) What is pctfree?

19) What is pctused?
20) What is OPTIMAL?
21) What is UNDO_RETENTION?
22) How can you speed up an import that has large indexes? Set SORT_AREA_SIZE higher.
23) What is parallel server? A shared database with two or more instances is called a parallel server.
24) What is parallel query? The server process spawns more than one slave process, and all slaves fetch data from disk in parallel.
25) What is the SGA?
26) What are your present activities as a DBA?
27) What does "instance is started" mean?
28) How have you configured your backup strategy?
29) What is a hot backup?
30) How do you approach tuning?
31) What is OFA?
32) Suppose one of your end users tells you that his report is hung; what should you do?
33) Your database is in noarchivelog mode, you took a backup yesterday, the database is 40GB, and the next day at startup you get an error that one of the redo log files is corrupted; how can you recover?
34) What is buffer busy wait?
35) When the data block buffer has too much contention, what do you do?
36) What is a locally managed tablespace?
37) Why does freelist contention occur?
38) What are row chaining and row migration? How can you detect and tune them?
39) If your database is more than 100GB and runs 24x7x365, how do you plan your backups?
40) What is rollback segment contention?
41) Why do we use UNDO_RETENTION?
42) If you get an ORA-1555 "snapshot too old" error, what do you do?
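For question 38, chained and migrated rows can be counted after analyzing the table; a sketch using a hypothetical table EMP:

```sql
-- CHAIN_CNT is populated by ANALYZE (not by DBMS_STATS).
ANALYZE TABLE emp COMPUTE STATISTICS;
SELECT table_name, chain_cnt FROM user_tables WHERE table_name = 'EMP';
```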

.Becos SGA is also shared memory segment. these are some interview quetions i found on a website. Kindly help in adding answers . 2)How can you started your standby server? 3)Why Becos after nomount stage of stanby server we have to mount as standby server.43)If you give proper hint in sql query but when you check from the trace that hint is not working why? 44)Give the parameter which is affected on Oracle block size. 4)Installation of oracle on solaris platform witch file you have to modified first? /etc/system for configuration of kernel 5)If your database is hanged and you can't connect thro' sqlplus how can you shutdown database? ipcs -s|grep $ORACLE_SID gives us semaphore segment set process id and we can kill them with the ipcrm commands. 6)What is the $0? shescript'name 7)what is the $1? argument after shellscript'name 8)What is the semaphore management? That is the internal latch contention and resource wait set of Unix 9)when you used resetlogs and noresetlogs? When we performed complite recovery we can opened our database with noresetlogs and when we performed incomplete recovery that time we have to opened our database with resetlogs. 45)What is the pctthreshold? 46)If you want to check disk io how to obtained it? 47)To avoid sorting what to do? kind regards. 1)what is the shared memory segment? we have to configure this parameter SHMMAX in kernel of unix to populate the SGA in the RAM. 10)When you performed only one datafile recovery that time what u used resetlogs or norestlogs? That is also part of the complite recovery we have to used the norestlogs 11)But if you are issued the resetlogs then what you after it? Once resetlogs issued after that we have to take full cold backup.



Standby databases make data available for emergencies and more.

Today's 24/7 e-world demands that business data be highly available. Oracle Data Guard is a built-in component of Oracle9i Database that protects your database from unforeseen disasters, data corruption, and user errors. Known in earlier database releases as the standby option, the feature now called Oracle Data Guard has long been a dependable and cost-effective method for achieving high availability and disaster-recovery protection. It also reduces planned downtime and allows you to provide additional data access for reporting.

This article looks at Oracle Data Guard features and processes, both old and new:
• The physical-standby database is a longstanding but invaluable feature that is straightforward to set up and maintain.
• The logical-standby database is new in Oracle9i Database Release 2, and this type of standby provides additional availability beyond that of the physical-standby database.
• Archive-gap management adds automation to keep primary and standby databases in sync.

Physical-Standby Database
The tried-and-true workhorse of Oracle Data Guard is the physical-standby database: an identical copy, down to the block level, of your primary database. Usually, the physical-standby database resides on a different server than your primary database. Data Guard is typically implemented in managed-recovery mode. This means Oracle processes automatically handle copying and application of archive redo to the standby database.

When getting started with the Oracle Data Guard setup, it's easiest to implement a physical-standby database if you:
• Set up Oracle Data Guard on two servers: the primary and the standby.
• Make the database name the same for the primary and standby.
• Ensure that the mount points and directories are named the same for each of the servers.

This way, all you have to do is take a backup of your primary and lay it onto your standby server, without having to modify parameters in your SPFILE (or init.ora file) that handle filename conversions.

Setting up a managed-recovery physical-standby database is fairly straightforward. In this example, the primary host name is primary_host, the standby host name is standby_host, and both the primary database and the standby database are named BRDSTN. The mount points and directory structures are identical between the primary and standby servers. Here are the steps for setting up an Oracle9i Database physical-standby database in managed-recovery mode.

1. Ensure that your primary database is in archive-log mode. The easiest way to check this is to log on to SQL*Plus as the SYS user and issue the following command:

SQL> archive log list

The database-log mode must be "Archive Mode." If your primary database isn't in archive-log mode, enable this feature as follows:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;

2. Create a backup of your primary database. You can use a hot, cold, or Recovery Manager (RMAN) backup to create the backup. I usually use a cold backup because there are fewer steps involved. Back up only datafiles, not online redo log files or control files. You can view the files you need to back up by querying V$DATAFILE:

SQL> select name from v$datafile;

3. Copy the backup datafiles to the standby server. Use your favorite network copy utility (FTP, SCP, etc.) to copy the backup files generated from Step 2 to the standby server. If you use a hot (online) backup, you also need to transfer any archived redo logs generated during the backup to the standby environment.

4. Create a standby control file. This creates a special control file that will be used by the standby database. Create it via the ALTER DATABASE CREATE STANDBY CONTROLFILE command on the primary database:

SQL> alter database create standby controlfile as '/ora01/oradata/BRDSTN/stbycf.ctl';

5. Copy the standby control file to the standby environment. On the primary server, initiate an FTP session and log in to the standby server. In this example, I FTP the control file from the primary location of /ora01/oradata/BRDSTN/stbycf.ctl to the standby-environment location of /ora01/oradata/BRDSTN/stbycf.ctl. After a successful login, FTP the control file as follows:

ftp> binary
ftp> cd /ora01/oradata/BRDSTN
ftp> send stbycf.ctl

6. Copy the primary init.ora file to the standby server and make modifications for the standby database. I recommend keeping the parameters the same as configured on the primary database, with the following modifications to the secondary server init.ora file:

# ensure that your standby database
# is pointing at the standby control
# file you created
# and copied to the standby server
control_files = '/ora01/oradata/BRDSTN/stbycf.ctl'
# location where archive redo logs are
# being written in standby environment
standby_archive_dest=/ora01/oradata/BRDSTN
# Enable archive gap management
fal_client=standby1
fal_server=primary1

fal_client and fal_server are new parameters that enable archive-gap management. The fetch archive log (FAL) background processes reference these parameters to determine the location of the physical-standby and primary databases. In this example, standby1 is the Oracle Net name of the standby database and primary1 is the Oracle Net name of the primary database.

7. Configure the primary init.ora file. In managed-recovery mode, the Oracle archiver process needs to know where to write the archived redo log files on both the primary and the standby servers. You'll need to set the following parameters in your primary initialization file (init.ora or SPFILE):

log_archive_format = 'arch_%T_%S.arc'
log_archive_start = true
# location of archive redo on primary
log_archive_dest_1='LOCATION=/ora01/oradata/BRDSTN/'
# location of archive redo on the
# standby
log_archive_dest_2='SERVICE=standby1 optional'
log_archive_dest_state_2=ENABLE

This tells the Oracle archiver process that it needs to write to two destinations. The first one, log_archive_dest_1, is a local file system and is mandatory. The second location is identified by an Oracle Net service name that points to the standby server and is optional, because Oracle9i Database doesn't have to successfully write to this secondary location for the primary database to keep functioning. This secondary location can be set to "mandatory" if your business needs require that you cannot lose any archived redo logs (see Step 8 for more details). Note: log_archive_dest and log_archive_duplex_dest have been deprecated, and you should now use log_archive_dest_n to specify archived redo log locations.

8. Configure Oracle Net. In managed-recovery mode, the primary and standby databases need to be able to communicate with each other via Oracle Net. There needs to be a listener on both the primary and standby servers, and Oracle Net must be able to connect from one database to the other. The primary database needs a listener listening for incoming requests. The following text describes the primary listener.ora file:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = primary_host)(PORT = 1521))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /ora01/app/oracle/product/9.2.0)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = BRDSTN)
      (ORACLE_HOME = /ora01/app/oracle/product/9.2.0)
    )
  )

Primary tnsnames.ora file. When you operate Oracle Data Guard in managed-recovery mode, it is critical to have Oracle Net connectivity between the primary and standby databases. I recommend placing both the primary and standby service locations in the tnsnames.ora file on both the primary and standby servers; this makes troubleshooting easier and also makes failover and switchover operations smoother. In this example, primary1 points to the primary database on a host named primary_host, and standby1 points to the standby database on a server named standby_host:

primary1 =
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=primary_host))
   (CONNECT_DATA=(SERVICE_NAME=BRDSTN)))
standby1 =
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=standby_host))
   (CONNECT_DATA=(SERVICE_NAME=BRDSTN)))

Standby listener.ora file. The standby database needs to be able to service incoming Oracle Net communication from the primary database. The following text describes the standby server listener.ora file:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /ora01/app/oracle/product/9.2.0)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = BRDSTN)
      (ORACLE_HOME = /ora01/app/oracle/product/9.2.0)
    )
  )

Standby tnsnames.ora file. The following is the same connection information entered into the standby server tnsnames.ora file:

primary1 =
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=primary_host))
   (CONNECT_DATA=(SERVICE_NAME=BRDSTN)))
standby1 =
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=standby_host))
   (CONNECT_DATA=(SERVICE_NAME=BRDSTN)))

9. Start up and mount the standby database. The standby database needs to be started and mounted in standby mode. On the standby database, execute the following:

SQL> startup nomount;

Enable managed-recovery mode

SQL> alter database mount standby database;

At this point, place the standby database in managed-recovery mode as follows:

SQL> recover managed standby database disconnect;

You should now be able to generate archive redo information on your primary database and see it applied to the standby.

Most of the problems you'll encounter with your Oracle Data Guard physical-standby database setup will be because either Oracle Net was not configured correctly or you fat-fingered an initialization parameter. As you deploy your Oracle Data Guard environment and perform maintenance operations, pertinent messages are often written to the primary and standby alert.log files. The alert logs are usually pretty good at pointing out the source of the problem. I've found that concepts jell more quickly and issues are resolved more rapidly when I simultaneously monitor the contents of both the standby and primary alert.log files. In a UNIX environment, the tail command is particularly helpful in this regard:

$ tail -f alert_BRDSTN.log

Archive-Gap Management

In a managed-recovery configuration, if the primary log-transport mechanism isn't able to ship the redo data from the primary database to the physical-standby database, you have a gap between what archive logs have been generated by the primary database and what have been applied to the physical standby. This situation can arise if there is a network problem or if the standby database is unavailable for any reason. Detecting and resolving archive gaps is crucial to maintaining high availability: an untimely gap-resolution mechanism compromises the availability of data in the standby environment. For example, if you have an undetected gap and disaster strikes, the result is a loss of data after you fail over to your standby database.

In Oracle8i, you had the responsibility of manually monitoring gaps. After you detected a gap, you then had to manually perform the following steps on your standby database:

• Determine which archived redo logs are part of the gap
• Manually copy those archived redo gap logs to the standby
• Take the standby database out of managed-recovery mode
• Issue the RECOVER STANDBY DATABASE command
• Alter the standby database back into managed-recovery mode

With Oracle9i Database, Data Guard has two mechanisms for automatically resolving gaps. The first mechanism for gap resolution uses a periodic background communication between the primary database and all of its standby databases. Data Guard compares log files generated on the primary database with log files received on the standby databases to compute the gaps. Once a gap is detected, the missing log files are shipped to the appropriate standby databases. This automatic method does not need any parameter configuration.

The second mechanism for automatic gap resolution uses two new Oracle9i Database initialization parameters on your standby database:

SQL> alter system set FAL_CLIENT = standby1;
SQL> alter system set FAL_SERVER = primary1;

Once these parameters are set on the standby database, the managed-recovery process in the physical-standby database will automatically check and resolve gaps at the time redo is applied.

Data Guard SQL Apply

Prior to Oracle9i Database Release 2, you could implement only a physical-standby database, which is physically identical to the primary database block-for-block, because the Oracle media-recovery mechanism is used on the redo information received from the primary database to apply those redo changes to a physical data-block address. One aspect of a physical-standby database is that it can be placed in either recovery mode or read-only mode, but not both at the same time.

With the new Data Guard SQL Apply feature of Oracle9i Database Release 2, you can now create a logical-standby database that is continuously available for querying while simultaneously applying transactions from the primary. A logical-standby database differs architecturally from a physical-standby database: it is logically identical to the primary database but not physically identical at the block level. Unlike a physical-standby database, a logical-standby database transforms the redo data received from the primary into SQL statements (using LogMiner technology) and then applies the SQL statements. If you use your logical-standby database for reporting, you can further tune it by adding indexes and materialized views independent of the primary. Just like a physical-standby database, a logical-standby database can assume the role of the primary database in the event of a disaster.

There are many steps involved with setting up a logical-standby database. The best sources for detailed instructions are MetaLink note 186150.1 and the Oracle9i Database Release 2 Oracle Data Guard Concepts and Administration manual. You can also configure your Data Guard environment to use a combination of both logical-standby and physical-standby databases. This is ideal if your business requires a near-real-time copy of your production database that is also accessible 24/7 for reporting.

Next Steps
READ the Oracle Data Guard documentation: /docs/products/oracle9i
Data Guard white paper: /deploy/availability/pdf/DG92_TWP.pdf
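As a sketch of how the second (FAL) gap-resolution mechanism above can be checked, the following could be run on the standby database. The service names standby1 and primary1 are the Oracle Net aliases from the text; V$ARCHIVE_GAP and V$ARCHIVED_LOG are standard Oracle9i dynamic performance views.

```sql
-- On the standby: point fetch archive log (FAL) at the Oracle Net aliases.
-- FAL_CLIENT names this standby; FAL_SERVER names the database to fetch from.
ALTER SYSTEM SET FAL_CLIENT = standby1;
ALTER SYSTEM SET FAL_SERVER = primary1;

-- Report any unresolved gap (returns one row per thread while a gap exists).
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;

-- Cross-check progress: the highest log sequence applied on the standby.
SELECT MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES';
```

If the gap query keeps returning rows, the alert.log on both sides usually names the missing sequence numbers and the Oracle Net errors behind them.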

Convert Tablespace (updates at the bottom)

Most people will now have already seen the benefits of using locally managed tablespaces as a simple cure for fragmentation and for relieving stress on the data dictionary extent tables UET$ and FET$. However, many people are yet to take advantage of this new feature because, up until now, the only way to convert to locally managed tablespaces was to unload and reload your tablespace data. Not any more.

It was 8.1.5 that introduced the dbms_space_admin package, which is used 'behind the scenes' to probe the tablespace bitmaps in locally managed tablespaces, mainly for use in the everyday views DBA_EXTENTS and DBA_SEGMENTS. In 8.1.6, two new procedures have been added to the package:

• DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL
• DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_FROM_LOCAL

8.1.6 thus gives two built-in procedures to allow you to switch back and forth between dictionary and locally managed tablespaces without unloading/reloading the data. A few things are important to note:

• It is not sufficient to have compatible set to 8.1; it must be 8.1.6.
• There must be some free space in the tablespace to hold the bitmap that will be created (at least 64k), otherwise migration will fail.
• A "full conversion" does not actually take place; simply an appropriate bitmap is built to map the existing extents, along with some rounding of the space used. Thus you end up with an interesting hybrid, where if you query DBA_TABLESPACES, you will see that the management policy is LOCAL but the allocation policy is USER (not uniform or automatic). So by converting, you will have definitely solved one problem, that of stress on UET$ and FET$, but you are not guaranteed to resolve the fragmentation issue. Even so, in a tightly managed production environment, steps to avoid fragmentation should already be in place.
• Finally, the documentation also provides a "tempter": that local management of the SYSTEM tablespace will also be supported in future.

Update

When converting from a dictionary-managed tablespace to a locally managed one, it would appear that one very critical factor is not documented. The documentation says that during conversion, Oracle will try to find the largest suitable extent size, that is, the greatest common denominator of all of the used/free extents. Clearly each bit in the bitmap must represent a size that will divide evenly into:

• All of the existing extents
• All of the extents that have been used in the past that are now free

Oracle also ensures that if you have set the MINIMUM EXTENT clause, then rather than this clause being ignored, it is respected as well, and a bitmap is produced that all extents can be mapped by. However, if a MINIMUM EXTENT clause has NOT been specified, Oracle assumes it to be the smallest possible size of an extent (whether such an extent exists or not). In an 8k block size database, this is 40k (Oracle rounds all extents bar the initial one up to 5 blocks), so no matter what distribution of extents the tablespace has, the bitmap will always be chosen as 1 bit per 40k, unless you specify the MINIMUM EXTENT clause.
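A quick way to see the "hybrid" state described above is to compare DBA_TABLESPACES before and after migration. This is a sketch; TEST_DMT is a hypothetical dictionary-managed tablespace name:

```sql
-- Before migration: EXTENT_MANAGEMENT = DICTIONARY.
-- After TABLESPACE_MIGRATE_TO_LOCAL: EXTENT_MANAGEMENT = LOCAL, but
-- ALLOCATION_TYPE = USER (not UNIFORM or SYSTEM), because the existing
-- extent layout is only mapped by a bitmap, never reorganized.
SELECT tablespace_name, extent_management, allocation_type
FROM   dba_tablespaces
WHERE  tablespace_name = 'TEST_DMT';
```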

Example:

SQL> create tablespace blah
  2  datafile 'G:\ORA9I\ORADATA\DB9\BLAH.DBF' size 10m reuse
  3  extent management dictionary;

SQL> col bytes format 999,999,999
SQL> select * from dba_free_space where tablespace_name = 'BLAH';

TABLESPACE_NAME    FILE_ID   BLOCK_ID        BYTES     BLOCKS RELATIVE_FNO
--------------- ---------- ---------- ------------ ---------- ------------
BLAH                     8          2   10,477,568       1279            8

SQL> drop table t1;
drop table t1
           *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> drop table t2;
drop table t2
           *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> drop table t3;
drop table t3
           *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> create table t1 ( x number ) storage ( initial 400k) tablespace blah;

Table created.

SQL> create table t2 ( x number ) storage ( initial 800k) tablespace blah;

Table created.

SQL> create table t3 ( x number ) storage ( initial 1200k) tablespace blah;

Table created.

SQL> select * from dba_free_space where tablespace_name = 'BLAH';

TABLESPACE_NAME    FILE_ID   BLOCK_ID        BYTES     BLOCKS RELATIVE_FNO
--------------- ---------- ---------- ------------ ---------- ------------
BLAH                     8        302    8,019,968        979            8

SQL> select bytes from dba_extents where tablespace_name = 'BLAH';

       BYTES
------------
     409,600
     819,200
   1,228,800

So at this stage, one would imagine that the unit size for a conversion to a locally managed tablespace is 50 ( 50 * 8192 = 400k ). But if we try:

SQL> exec dbms_space_admin.TABLESPACE_MIGRATE_TO_LOCAL('BLAH',50);
BEGIN dbms_space_admin.TABLESPACE_MIGRATE_TO_LOCAL('BLAH',50); END;

*
ERROR at line 1:
ORA-03241: Invalid unit size
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 0
ORA-06512: at line 1

But if we set the minimum extent:

SQL> alter tablespace blah minimum extent 400k;

Tablespace altered.

SQL> exec dbms_space_admin.TABLESPACE_MIGRATE_TO_LOCAL('BLAH',50);

PL/SQL procedure successfully completed.

To configure Shared Server, which parameters do you specify in the server init.ora?
SHARED_SERVERS
MAX_SHARED_SERVERS
CIRCUITS
MAX_DISPATCHERS
(SERVER=SHARED is requested on the client side, in the CONNECT_DATA of the tnsnames.ora entry, not in init.ora.)

Oracle 10g New Features (PPT)

Automatic Shared Memory Management (ASMM)
10g method for automating SGA management. A new background process named Memory Manager (MMAN) manages the automatic shared memory.

sga_target - This parameter is new in Oracle Database 10g and reflects the total size of memory an SGA can consume. It cannot exceed sga_max_size and can be adjusted via EM or the command line:

alter system set sga_target='x';

Requires an SPFILE and SGA_TARGET > 0. Automatically manages:
Shared pool
Buffer cache
Java Pool
Large Pool

Does not apply to the following:
Log Buffer
Other Buffer Caches (KEEP/RECYCLE, other block sizes)
Streams Pool (new in Oracle Database 10g)
Fixed SGA and other internal allocations
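A minimal sketch of turning ASMM on, assuming the instance already runs from an SPFILE and sga_max_size is at least the target value used here (600M is an arbitrary illustrative figure):

```sql
-- Give MMAN a total budget to distribute among the auto-tuned pools.
ALTER SYSTEM SET sga_target = 600M SCOPE = BOTH;

-- Setting the individual pools to 0 hands full control to ASMM; a
-- non-zero value would instead act as a lower bound for that pool.
ALTER SYSTEM SET shared_pool_size = 0;
ALTER SYSTEM SET db_cache_size = 0;

-- Watch what MMAN has currently allocated to each component.
SELECT component, current_size/1024/1024 AS size_mb
FROM   v$sga_dynamic_components;
```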

Data Pump
High performance import and export
• 60% faster than 9i export (single thread)
• 15x-45x faster than 9i import (single thread)
The reason it is so much faster is that Conventional Import uses only conventional mode inserts, whereas Data Pump Import uses the Direct Path method of loading. Moreover, as with Export, the job can be parallelized dynamically for even more improvement. Data Pump creates a separate dump file for each degree of parallelism.

Pre configuration steps
===================
Create directory pump_dir as '/…….';
Grant read on directory pump_dir to scott;

Export Command
===============
oracle@testserver-test1>expdp hh/xxxxx dumpfile=pump_dir:test_tbs_full_db.dmp logfile=pump_dir:test_tbs_full_db.log

Import Command
===============
oracle@testserver-test2>impdp hh/xxxxx dumpfile=pump_dir:test_tbs_full_db.dmp schemas=devuser remap_tablespace=system:users

There's no need to delete tablespaces on the target prior to the impdp metadata import.

Flashback Database
10g method for point-in-time recovery
1. Shutdown the database
2. Startup the database in mount state
3. SQL> flashback database to timestamp to_timestamp('2004-12-16 16:10:00', 'YYYY-MM-DD HH24:MI:SS');
4. Open the database - open resetlogs
You can also flash back to an SCN:
SQL> Flashback Database to scn 1329643;

New strategy for point-in-time recovery. Flashback Log captures old versions of changed blocks.
• Think of it as a continuous backup
• Replay the log to restore the DB to a time
• Restores just changed blocks
It's fast: recovers in minutes, not hours, with a single-command restore. It's easy. Moreover, this feature removes the need for database incomplete recoveries that require physical movement/restores of datafiles.

Rename Tablespace
10g method for renaming tablespaces
Oracle allows the renaming of tablespaces in 10g. A simple alter tablespace command is all you need:

SQL> alter tablespace users rename to users3;

Tablespace altered.

SQL> alter tablespace users3 rename to users;

Tablespace altered.

• Doesn't support System or Sysaux tablespaces
• The rename tablespace feature has lessened the workload for TTS operations
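Since Data Pump writes a separate dump file per degree of parallelism, a parallel job needs the %U substitution variable so the worker files get distinct names. A sketch using the pump_dir directory object from above; the hh credentials and file names are placeholders:

```shell
# PARALLEL=4 starts four worker processes; %U expands to a two-digit
# file number (01, 02, ...), giving each worker its own dump file.
expdp hh/xxxxx directory=pump_dir dumpfile=full_db_%U.dmp \
      logfile=full_db.log full=y parallel=4

# The matching import also runs in parallel and picks the files up via %U.
impdp hh/xxxxx directory=pump_dir dumpfile=full_db_%U.dmp parallel=4
```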

• Supports Default, Temporary, and Undo tablespaces (dynamically changes the spfile)

Flashback Drop
10g method for undoing a dropped table
SQL> flashback table emp to before drop;

Recycle Bin (Sounds familiar)……….
• A logical representation of the dropped object. The dropped/renamed table is still occupying space in its original tablespace, and you can still query it after it's dropped:
select object_name, original_name, type from user_recycle_bin;
or: show recyclebin
• You can empty out the recycle bin by purging the objects:
Purge table mm$$55777$table$1;
• You can permanently drop a table without writing to the recycle bin:
Drop table emp purge;

Has a few quirks:
• Doesn't restore foreign key constraints or materialized views
• Restores indexes, constraints, and triggers with their klingon-language recycle-bin names, e.g. mm$$55777$table$1 (requires a rename of triggers and constraints)

Flashback Table
9i method for restoring data:
INSERT INTO emp
(SELECT * FROM emp AS OF TIMESTAMP TO_TIMESTAMP('16-Sep-04 1:00:00','DD-MON-YY HH24:MI:SS')
 MINUS SELECT * FROM emp);
10g method for restoring data:
SQL> Flashback table emp to TIMESTAMP TO_TIMESTAMP('16-Sep-04 1:00:00','DD-MON-YY HH24:MI:SS');

Flush Buffer Cache
8i/9i method for flushing the buffer cache
Prior to 10g, this wasn't possible without shutting down and restarting the database or using the following undocumented commands:
• SQL> alter session set events = 'immediate trace name flush_cache';
• alter tablespace offline/online, to flush the buffer cache of blocks relating to that tablespace (as per Tom Kyte's article)
Side-Note - You were able to flush the shared pool:
SQL> ALTER SYSTEM FLUSH SHARED_POOL;
10g method for flushing the buffer cache
10g has provided the ability to flush the buffer cache:
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
This isn't suggested for a production environment, but might be useful for QA/Testing. However, in addition to decreasing free buffer waits, if the buffer cache is undersized, running this command can improve performance and take the burden off the DBWR. The bigger the cache, the larger the LRU and dirty lists become, and that results in longer search times.

Dictionary View Improvements
New columns in v$session allow you to easily identify sessions that are blocking and waiting for other sessions. V$session also contains information from the v$session_wait view, so there is no need to join the two views.

/* Blocked Session */
SELECT sid, serial#, blocking_session_status, blocking_session
FROM v$session WHERE blocking_session IS NOT NULL;

--Display a blocked session and its blocking session details.
SELECT blocking_session_status, blocking_session
FROM v$session WHERE sid = 444;

Flashback Transaction Query
Provides the ability to generate the SQL statements for undoing DML.
8i/9i method for generating sql undo statements:
Log Miner (Good luck parsing through those logs).
10g method for generating sql undo statements:
SELECT undo_sql FROM flashback_transaction_query
WHERE table_owner='SCOTT' AND table_name='EMP'
AND start_scn between 21553 and 44933;
(You can also use timestamp.)
Side-Note - Make sure you enable row movement prior to a flashback table restore:
SQL> alter table emp enable row movement;
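Flashback Transaction Query is usually paired with Flashback Version Query to find which transaction to undo. A sketch against the same SCOTT.EMP example; the empno and XID values are hypothetical (the XID would come from the first query):

```sql
-- Step 1: list row versions and the transaction ids (XIDs) that made them.
SELECT versions_xid, versions_startscn, versions_operation, sal
FROM   scott.emp
       VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
WHERE  empno = 7369;

-- Step 2: pull the compensating SQL for one of those transactions.
SELECT undo_sql
FROM   flashback_transaction_query
WHERE  xid = HEXTORAW('0002000A00000D5B');  -- hypothetical XID from step 1
```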
