
Oracle 11g DBA Interview Questions

Sr No. Topics

1 Oracle 11g Architecture

2 Controlfile and Redolog File Management

3 Database Creation and Tablespace Management

4 User Management and Undo Management

5 Networking

6 Backup/Recovery

7 Log Miner, SQL Loader, Auditing

8 RMAN - Recovery Manager

9 Patching and Upgrades

10 Data Pump Utilities

11 Flashback Technology

12 Data Guard

13 ASM - Automatic Storage Management

14 Performance Tuning

15 Oracle GoldenGate

16 Linux Interview Questions for DBAs

17 Data Dictionary Views in Oracle 11g

18 Experienced DBA Interview Questions

19 Oracle 11g and 12c New Features

Oracle 11g Architecture Questions

1) What is a database?

• A database offers a single mechanism for storing and retrieving information with the help of
tables.
• A table is made up of columns and rows, where each column stores a specific attribute and each row
holds a value for the corresponding attribute.
• It is a structure that stores information about the attributes of the entities and the relationships among
them.
• It also stores data types for attributes and indexes.
• Well-known DBMSs include Oracle, IBM DB2, Microsoft SQL Server, Microsoft Access, MySQL and
SQLite.

2) What are the different types of storage systems available and which one is used by Oracle?

Two types of storage systems are available:


• Relational Database Management System (RDBMS) and Hierarchical Storage Management System
(HSM).

• Most databases use the RDBMS model; Oracle also uses the RDBMS model.

• Examples of the hierarchical model (HSM) are:

• Information Management System (IMS) from IBM.
• Integrated Database Management System (IDMS) from CA.

3) Explain some examples of join methods

• Join methods are mainly of 3 types:


• Sort-merge join – both tables are sorted on the join key and the sorted rows are then merged.
• Nested loop join – a result set is built by applying the filter conditions to the outer table;
the inner table is then joined to that result set, one outer row at a time.
• Hash join – a hash table is built on the smaller table using the join key, the larger table is then
probed against it, and the matching rows are returned.

4) What are the components of logical data model and list some differences between logical and
physical data model?

Components of logical data model are 


• Entity – Entity refers to an object that we use to store information. It has its own table.
• Attribute – It represents the information of the entity that we are interested in. It is stored as a
column of the table and has specific datatype associated with it.
• Record – It refers to a collection of all the properties associated with an entity for one specific
instance, represented as a row in a table.
• Domain – It is the set of all the possible values for a particular attribute.
• Relation – Represents a relationship between two entities.
Difference between Logical and Physical data model

• Logical data model represents database in terms of logical objects, such as entities and
relationships.

• Physical data model represents database in terms of physical objects, such as tables and
constraints.

5) What is normalization? What are the different forms of normalization?

• Normalization is a process of organizing the fields and tables of a relational database to minimize
redundancy and dependency.
• It saves storage space and ensures consistency of our data.

There are six different normal forms

• First Normal Form – if all underlying domains contain atomic values only.
• Second Normal Form – if it is in first normal form and every non-key attribute is fully functionally
dependent on the primary key.
• Third Normal Form – if it is in second normal form and every non-key attribute is non-transitively
dependent on the primary key.
• Boyce-Codd Normal Form – a relation R is in BCNF if and only if every determinant is a candidate
key.

• Fourth Normal Form – if it is in BCNF and has no multi-valued dependencies.

• Fifth Normal Form – if it is in fourth normal form and has no join dependencies.

6) Differentiate between a database and Instance and explain relation between them?

• A database is a collection of three important kinds of files – data files, control files and redo log
files – which physically exist on disk.
• An instance is a combination of the Oracle background processes (SMON, PMON, DBWR, LGWR) and
memory structures (SGA, PGA).
• The Oracle background processes running on a computer share the same memory area.
• An instance can mount and open only a single database, ever.
• A database may be mounted and opened by one or more instances (using RAC).

7) What are the components of SGA?

• SGA is used to store shared information across all database users.


• It mainly includes the Shared Pool (which contains the Library cache and the Data Dictionary cache),
the Database Buffer Cache and the Redo Log Buffer.
• Library cache – stores recently parsed SQL and PL/SQL statements.
• Data Dictionary cache – contains the definitions of database objects and the privileges granted to
users.
• Database buffer cache – holds copies of data blocks that are frequently accessed, so that they
can be retrieved faster for future requests.
• Redo log buffer – records all changes made to the database before they are written to the redo log files.

8) Difference between SGA and PGA.

• SGA (System Global Area) is a memory area allocated during instance startup and shared by all
server and background processes.
• A common rule of thumb is to size the SGA at roughly 40% of RAM, although there is no fixed default.
• The overall SGA size is controlled by the SGA_TARGET / SGA_MAX_SIZE (or MEMORY_TARGET in 11g)
parameters defined in the initialization parameter file (init.ora file or SPFILE); DB_CACHE_SIZE sizes
only the buffer cache component.

• PGA (Program or Process Global Area) is a memory area that stores session-specific
information for a single server process; it is not shared.

• A common rule of thumb is to allocate roughly 10% of RAM to the PGA (PGA_AGGREGATE_TARGET).

9) What are the disk components in Oracle?

These are the physical components which get stored on disk.
• Data files
• Redo Log files
• Control files 
• Password files
• Parameter files

10) What is System Change Number (SCN)?

• SCN is a unique number that Oracle generates for every committed transaction.


• It is recorded in every redo entry for a change.
• An SCN is also generated whenever a checkpoint (CKPT) occurs.
• It is an ever-increasing number and is advanced at least every 3 seconds.
• You can get the current SCN by querying: SQL> select current_scn from v$database;

11) What is Database Writer (DBWR) and when does DBWR write to the data file?

• DBWR is a background process that writes modified (dirty) data blocks from the database buffer cache to the
data files.
There are 4 important situations when DBWR writes to data file
• Every 3 seconds
• Whenever checkpoint occurs
• When server process needs free space in database buffer cache to read new blocks.
• Whenever number of changed blocks reaches a maximum value.

12) What is Log Writer and when does LGWR writes to log file?

• LGWR writes redo (change) information from the redo log buffer to the online redo log files of the database.
• It moves the redo buffer contents to the online redo log files when a commit occurs and when a
log switch occurs.
• LGWR also writes when the redo log buffer is one-third full.
• It also writes every 3 seconds.

• Before DBWR writes modified blocks to the datafiles, LGWR writes the corresponding redo to the
log file (write-ahead logging).

13) Which tablespaces are created automatically when you create a database?

• SYSTEM tablespace is created automatically during database creation.


• It will be always online when the database is open.
Other Tablespaces include
• SYSAUX tablespace
• UNDO tablespace
• TEMP tablespace
• UNDO & TEMP tablespace are optional when you create a database.

14) Which file is accessed first when Oracle database is started and What is the difference between
SPFILE and PFILE?

• The init<SID>.ora parameter file or SPFILE is accessed first (SID is the instance name).


• Settings required for starting a database are stored as parameters in this file.

• An SPFILE is created by default during database creation, whereas a PFILE has to be created from the
SPFILE.
• A PFILE is a client-side text file whereas an SPFILE is a server-side binary file.
• The SPFILE is a binary file (it must not be edited manually) whereas the PFILE is a text file that we can edit to set
parameter values.
• Changes made in the SPFILE can take effect dynamically in a running database, whereas PFILE changes
take effect only after bouncing the database.
• We can back up the SPFILE using RMAN.

15) What are advantages of using SPFILE over PFILE?

• The SPFILE is available from Oracle 9i onwards.


• Parameters in the SPFILE can be changed dynamically.
• You can't make changes to a PFILE take effect while the database is up.
• RMAN can't back up a PFILE, but it can back up the SPFILE.
• SPFILE parameter changes are validated by the Oracle server before they are accepted, thereby
reducing human typing errors.

16) How can you find out if the database is using PFILE or SPFILE?

• You can query the dynamic performance view V$PARAMETER to find out whether the database is using a PFILE or an
SPFILE.

• SQL> select value from v$parameter where name = 'spfile';


• A non-null value indicates the database is using an SPFILE.
• A null value indicates the database is using a PFILE.
• You can force the database to use a PFILE by issuing the startup command as
• SQL> startup PFILE = 'full path of PFILE location';
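As an illustration (the path shown below is hypothetical; the actual SPFILE name depends on your ORACLE_HOME and SID), the query returns something like:

SQL> select value from v$parameter where name = 'spfile';

VALUE
-----------------------------------------------------------
/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileORCL.ora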

17) Where are parameter files stored and how can you start a database using a specific parameter
file?

• On UNIX they are stored in $ORACLE_HOME/dbs; on Windows they are stored in the ORACLE_HOME\database
directory.

• Oracle by default starts with the SPFILE located in $ORACLE_HOME/dbs.


• If you want to start the database with a specific file, you can specify it in the startup command as
SQL> startup PFILE = 'full path of parameter file';

• You can create a PFILE from the SPFILE with: create pfile from spfile;
• The PFILE created this way contains the current parameter values from the SPFILE.
• Similarly, create spfile from pfile; creates an SPFILE from a PFILE.
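A minimal sketch of these commands with explicit paths (the file name /tmp/initORCL.ora is illustrative, not from the original text):

SQL> create pfile='/tmp/initORCL.ora' from spfile;
SQL> create spfile from pfile='/tmp/initORCL.ora';
SQL> startup pfile='/tmp/initORCL.ora';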

18) What is PGA_AGGREGATE_TARGET parameter?

• The PGA_AGGREGATE_TARGET parameter specifies the target aggregate PGA memory available to all
server processes attached to an instance.
• By default Oracle sets its value to 20% of the SGA size.
• It is used to set the overall size of the work areas required by various components (sorts, hash joins, etc.).
• Its value can be seen by querying the V$PGASTAT dynamic performance view.
• From SQL*Plus it can be seen using: SQL> show parameter pga.
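For example, a quick check could look like this (the statistic names are standard V$PGASTAT entries; values returned will depend on your instance):

SQL> show parameter pga_aggregate_target
SQL> select name, round(value/1024/1024) mb
     from v$pgastat
     where name in ('aggregate PGA target parameter', 'total PGA allocated');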

19) What is the purpose of configuring more than one Database Writer Processes? How many should
be used? (On UNIX)

• DBWn process writes modified buffers in Database Buffer Cache to data files, so that user process
can always find free buffers.

• To efficiently free the buffer cache to make it available to user processes, you can use multiple
DBWn processes.

• We can configure additional processes (DBW1 through DBW9 and DBWa through DBWj) to improve
write performance if our system modifies data heavily.

• The initialization parameter DB_WRITER_PROCESSES specifies the number of DBWn processes


upto a maximum number of 20.

• If the UNIX system being used is capable of asynchronous input/output processing, then a single
DBWn process is enough; if not, the guideline is to configure roughly twice as many DBWn processes as the
number of disks used by Oracle, and this can be set with the DB_WRITER_PROCESSES initialization parameter.
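A minimal sketch of changing this parameter (DB_WRITER_PROCESSES is static, so the change goes to the SPFILE and takes effect after a restart; the value 4 is only an example):

SQL> alter system set db_writer_processes = 4 scope=spfile;
SQL> shutdown immediate;
SQL> startup;
SQL> show parameter db_writer_processes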

20) List out the major installation steps of oracle software on UNIX in brief?

• Set up the disks and make sure the installation files (runInstaller) are staged in your dump/staging area.
• Check the swap and TEMP space.
• Export the following environment variables
1) ORACLE_BASE
2) ORACLE_HOME
3) PATH
4) LD_LIBRARY_PATH
5) TNS_ADMIN

• Set up the kernel parameters and file maximum descriptors.


• Source the Environment file to the respective bash profile and now run Oracle Universal Installer.

21) Can we check number of instances running on Oracle server and how to set kernel parameters in
Linux?

• Viewing the /etc/oratab file on a server lists the Oracle instances configured on that server.

• Opening the /etc/sysctl.conf file with the vi editor shows a text file listing the kernel-level parameters.

• We can change kernel parameters as required for our environment only as the root user.
• To make the changes take effect in the running kernel, run the command /sbin/sysctl -p.
• We must also set the file descriptor limits during Oracle installation, which can be done by
editing /etc/security/limits.conf as the root user.

22) What is System Activity Reporter (SAR) and SHMMAX?

• SAR is a utility to display resource usage on the UNIX system.


• sar -u shows CPU activity.
• sar -w shows swapping activity.
• sar -b shows buffer activity.
• SHMMAX is the maximum size of a shared memory segment on a Linux system.

23) List out some major environment variable used in installation?

• ORACLE_BASE=/u01/app/<installation-directory>
• ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1 (for 11g)
• ORACLE_SID=<instance-name>
• PATH=$ORACLE_HOME/bin:$PATH
• LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
• TNS_ADMIN=$ORACLE_HOME/network/admin
• These environment variables are critical when running the OUI.

24) What is a control file?

• The control file is a binary file which records the physical structure of a database.
• It includes the number of redo log files and their locations, the database name, the timestamp of
database creation, and checkpoint information.
• The CONTROL_FILES parameter in the initialization parameter file stores the control file locations.
• We can multiplex control files, storing copies in different locations, so that a control file remains
available even if one copy is corrupted.
• This also avoids the risk of a single point of failure.
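A minimal sketch of multiplexing (paths are illustrative; the copy must be made at the OS level while the database is down):

SQL> alter system set control_files =
       '/u01/oradata/orcl/control01.ctl',
       '/u02/oradata/orcl/control02.ctl' scope=spfile;
SQL> shutdown immediate;
$ cp /u01/oradata/orcl/control01.ctl /u02/oradata/orcl/control02.ctl
SQL> startup;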

25) At what stage of instance, control file information is read and can we recover control file and how
to know information in a control file?

• Control file information is read during the mount stage of the instance.


• If a control file is lost, we can still start the database using one of the multiplexed copies kept in
different locations; otherwise the control file has to be recreated.
• We can generate a script describing the control file with the command
SQL> alter database backup controlfile to trace;
• A trace file (.trc) is written to the udump location; we can open it to see the complete database structure.
• A binary backup of the control file can also be taken with
SQL> alter database backup controlfile to '<different location/path>';

26) How can you obtain Information about control file?

• Control file information is shown in the initialization parameter file.


• We can query V$CONTROLFILE to display the names of the control files.
• From SQL*Plus we can execute
SQL> show parameter control_files;
• The above command gives the names and locations of the control files on disk.
• We can also open the PFILE in a vi editor; the CONTROL_FILES parameter shows the number
and location of the control files.

27) How do you resize a data file and tablespace?

• Prior to Oracle 7.2 you could not resize a datafile.


• The workaround was to drop the tablespace and recreate it with differently sized datafiles.
• From 7.2 onwards you can resize a datafile using ALTER DATABASE DATAFILE '<file_name>' RESIZE <size>;
• Resizing a tablespace means either adding a new datafile or resizing an existing datafile.
• ALTER TABLESPACE <tablespace_name> ADD DATAFILE '<datafile_name>' SIZE <n>M; adds a new
datafile.
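A minimal sketch (file names and sizes are illustrative):

SQL> alter database datafile '/u01/oradata/orcl/users01.dbf' resize 500M;
SQL> alter tablespace users add datafile '/u01/oradata/orcl/users02.dbf' size 200M;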

28) Name the views used to look at the size of a datafile, controlfiles, block size, determine free
space in a tablespace ?

• The DBA_DATA_FILES or V$DATAFILE view can be used to look at the size of a datafile.


• DBA_FREE_SPACE is used to determine the free space in a tablespace.
• V$CONTROLFILE is used to look at the control files; the control file itself records settings such as
MAXLOGFILES, MAXLOGMEMBERS and MAXINSTANCES.
• select * from v$controlfile; lists the control files and their sizes.
• From SQL*Plus, run show parameter db_block_size to get the database block size.

29) What is archive log file?

• In ARCHIVELOG mode, the database makes copies of the online redo log files as they fill; these copies are
called archived redo logs or archive log files.

• By default the database runs in NOARCHIVELOG mode, so online backups (hot backups)
cannot be performed.
• You must shut down the database to perform a clean backup (cold backup), and recovery is possible only
to the point of the previous backup.
• Archive log files are stored by default in the FRA (Flash Recovery Area).
• We can also define our own archive destination by setting the LOG_ARCHIVE_DEST_n parameter.
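For example (the directory is illustrative), a local archive destination can be set with:

SQL> alter system set log_archive_dest_1 = 'LOCATION=/u02/archlogs' scope=both;
SQL> archive log list;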

30) Assume you work in an xyz company as senior DBA and on your absence your back up DBA has
corrupted all the control files while working with the ALTER DATABASE BACKUP CONTROLFILE
command. What do you do?

• As long as all data files are safe and on a successful completion of BACKUP control file command by
your Back up DBA you are in safe zone.
• We can restore the control file by performing following commands
1) CONNECT INTERNAL STARTUP MOUNT
2) TAKE ANY OFFLINE TABLESPACE (Read-only)
3) ALTER DATABASE DATAFILE (OFFLINE)
4) RECOVER DATABASE USING BACKUP CONTROL FILE 
5) ALTER DATABASE OPEN RESETLOGS
6) BRING READ ONLY TABLE SPACE BACK ONLINE

• Shut down and back up the system, then restart.

• Then issue the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

• This output can be used for control file recovery as well.

If control file backup is not available, then the following will be required

1) CONNECT INTERNAL STARTUP NOMOUNT


2) CREATE CONTROL FILE .....;
• But we need to know all of the datafiles, logfiles, and settings of MAXLOGFILES, MAXLOGMEMBERS,
MAXLOGHISTORY, MAXDATAFILES for the database to use the command.

31) Can we reduce the space of TEMP datafile? How?

• Yes, we can reduce the space of the TEMP datafile.

• Prior to Oracle 11g, you had to drop and recreate the TEMP datafile (or tablespace).

• In Oracle 11g you can reduce the space of a TEMP datafile by shrinking the TEMP tablespace; this is a new
feature in 11g.

• Views such as V$TEMP_SPACE_HEADER (or DBA_TEMP_FREE_SPACE) are useful in determining how much temp space can be reclaimed.
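A minimal sketch of the 11g shrink commands (the size kept and the tempfile path are illustrative):

SQL> alter tablespace temp shrink space keep 256M;
SQL> alter tablespace temp shrink tempfile '/u01/oradata/orcl/temp01.dbf';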

32) What do you mean by database backup and which files must be backed up?

• The database stores the most crucial data of the business, so it is important to keep that data safe, and this
is achieved by backups.
• The following files must be backed up:

• Database files (the datafile headers are frozen during a hot backup)


• Control files
• Archived log files
• Parameter files (SPFILE and PFILE)
• Password file

33) What is a full backup and name some tools you use for full backup?

• A full backup is a backup of all the control files, data files, and the parameter file (SPFILE or PFILE).
• You should also back up your ORACLE_HOME binaries, which are used for cloning.
• A full backup can be performed even when the database runs in NOARCHIVELOG mode.
• As a rule of thumb, you must shut down the database before you perform a full (cold) backup in NOARCHIVELOG mode.
• A full or cold backup can be performed using the cp command on UNIX.

34) What are the different types of backup’s available and also explain the difference between them?

• There are 2 types of backups:

1) Cold backup (user-managed & RMAN)


2) Hot backup (user-managed & RMAN)

• A hot backup is taken while the database is still online; the database must be in ARCHIVELOG
mode.
• A cold backup is taken while the database is offline (shut down).
• A hot backup is an inconsistent backup whereas a cold backup is a consistent backup.
• You can begin backup by using the following command

SQL> alter database begin backup;
• End backup by
SQL> alter database end backup;

35) How to recover database if we lost the control file and we do not have a backup and what is
RMAN?

• We can recover the database at any point in time as long as we keep copies of the control file on
different mount points (multiplexed control files).
• Also check whether a control file creation script is available in a trace file in USER_DUMP_DEST or in the
alert log, and use it to recover the database.
RMAN

• RMAN (Recovery Manager) is a tool supplied by Oracle that is used to manage backup and
recovery activities.
• You can perform both offline (cold) and online (hot) backups using RMAN.
• Configuring a Flash Recovery Area for the database is recommended when using RMAN.
• RMAN maintains its repository of backup information in the control file (or optionally in a recovery catalog).

36) Name the architectural components of RMAN

• RMAN executable
• Server process
• Channels
• Target database
• Recovery catalog database
• Media management layer
• Backup sets and backup pieces

37) What is a recovery catalog?

• Recovery catalog is an inventory of backup taken by RMAN for the database.


• The size of the recovery catalog schema depends upon the number of databases monitored by the
catalog.
• It is used to restore a physical backup, reconstruct it, and make it available to the oracle server.
• RMAN can be used without recovery catalog.
• Recovery catalog also holds RMAN stored scripts.

38) List some advantages of using RMAN.

• Tablespaces are not put into backup mode, so no extra redo is generated during online
backups.
• Incremental backups copy only the data blocks that have changed since the last backup.
• Detection of corrupt blocks.
• Built in reporting and listing commands.
• Parallelization of I/O operations.

39) How do you bring a database from NOARCHIVELOG mode to ARCHIVELOG mode?

• You should change your init<SID>.ora file with the following information
• log_archive_dest=’/u01/oradata/archlog’ (for example)
• log_archive_format=’%t_%s.dbf’
• log_archive_start=true (prior to 10g)
• sql>shutdown;
• sql> startup mount;
• sql> alter database archivelog;
• sql>alter database open;
• Make sure you backup your database before switching to ARCHIVELOG mode.

40) What are the different stages of database startup?

• The database goes through several stages before becoming available to end users.
• The following stages are involved in database startup:
• Nomount
• Mount
• Open

• Nomount – the Oracle instance is started based on the parameters defined in the SPFILE.


• Mount – using the control file locations given by the parameter file, the control files are opened and read,
making the database ready for the next stage.
• Open – the datafiles and redo log files are opened and the database becomes available to end users.

41) Name some of the important dynamic performance views used in Oracle.

• V$PARAMETER
• V$DATABASE
• V$INSTANCE
• V$DATAFILE
• V$CONTROLFILE
• V$LOGFILE

42) What are the different methods we can shutdown our database?

• SHUTDOWN (or) SHUTDOWN NORMAL


No new connections are accepted; Oracle waits for all users to close their sessions.
• SHUTDOWN TRANSACTIONAL
No new connections are accepted; Oracle waits for existing transactions to commit and then disconnects the
sessions.
• SHUTDOWN IMMEDIATE
No new connections are accepted; uncommitted transactions are rolled back, committed changes are preserved,
and the sessions are disconnected.
• SHUTDOWN ABORT

It is like an immediate power-off for the database: it does not wait for running transactions, it simply stops
all activity and makes the database unavailable. Committed changes that had not yet been written to the
datafiles are reapplied by SMON during instance recovery at the next startup.

• SHUTDOWN NORMAL, TRANSACTIONAL and IMMEDIATE are clean shutdown methods, as the database


maintains its consistency.
• SHUTDOWN ABORT leaves the database in an inconsistent state that requires instance recovery at the next startup.

43) What are the different types of indexes available in Oracle?

Oracle provides several indexing schemes:


• B-tree index – used to retrieve a small amount of information from a large table.
• Global and local indexes – relate to partitioned tables and indexes.
• Reverse key index – most useful for Oracle Real Application Clusters (RAC) applications.
• Domain index – an application-specific index type (used, for example, by Oracle Text).
• Hash cluster index – an index that is defined specifically for a hash cluster.
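A minimal sketch of creating two of these index types (the table and column names are illustrative):

SQL> create index emp_name_idx on emp (ename);            -- B-tree index
SQL> create index emp_id_rev_idx on emp (empno) reverse;  -- reverse key index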

44) What is the use of ALERT log file? Where can you find the ALERT log file?

• The alert log is a log file that records database-wide events and is used for troubleshooting.
• Its location is given by the BACKGROUND_DUMP_DEST parameter (under DIAGNOSTIC_DEST in 11g).
• Following events are recorded in ALERT log file:
• Database shutdown and startup information.
• All non-default parameters.
• Oracle internal (ORA-600) errors.
• Information about a modified control file.
• Log switch change.

45) What is a user process trace file?

• It is an optional file which is produced by user session.


• It is generated only if the SQL_TRACE parameter is set to TRUE for a session.
• The SQL_TRACE parameter can be set at the database, instance, or session level.
• If it is set at instance level, trace files are created for all connected sessions.
• If it is set at session level, a trace file is generated only for the specified session.
• The location of user process trace file is specified in the USER_DUMP_DEST parameter.
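A minimal sketch of enabling and disabling tracing for the current session (10046-style tracing via DBMS_SESSION or DBMS_MONITOR is an alternative, not shown here):

SQL> alter session set sql_trace = true;
-- run the statements to be traced
SQL> alter session set sql_trace = false;
SQL> show parameter user_dump_dest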

46) What are different types of locks?

There are different types of locks, which are given as follows:


• System locks – controlled by oracle and held for a very brief period of time.
• User locks – Created and managed using dbms_lock package.
• Different types of user locks are given as follows

• UL Lock – Defined with dbms_lock package.


• TX lock – acquired once for every transaction. It is a row-level transaction lock.
• TM lock – acquired once for each object being changed. It is a DML lock; the ID1 column
identifies the object being modified.

47) What do db_file_sequential_read and db_file_scattered_read events define?

• The db_file_sequential_read event generally indicates index usage.


• It shows access by ROWID.
• The db_file_scattered_read event indicates a full table scan.
• db_file_sequential_read reads a single block at a time.
• db_file_scattered_read reads multiple blocks at a time.

48) What is a latch and explain its significance?

• A latch is an on/off switch in Oracle that a process must acquire in order to perform certain types of
activities.
• Latches enforce serial access to resources and limit the amount of time for which a single process
can use a resource.
• A latch is held for a very short amount of time, just long enough to ensure that the resource is allocated.
• Latch contention causing performance issues is usually due to one of the following two reasons:
• Lack of availability of the resource.
• Poor application programming resulting in a high number of requests for the resource.
• Latch information is available in the V$LATCH and V$LATCHHOLDER dynamic performance views.
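For example, the busiest latches can be inspected with a query like the following (GETS, MISSES and SLEEPS are standard V$LATCH columns):

SQL> select name, gets, misses, sleeps
     from v$latch
     order by misses desc;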

49) Explain the architecture of data guard?

Data guard architecture includes the following components

• Primary database – refers to the production database.


• Standby database – refers to a copy of the primary (production) database. A primary database may have more than
one standby database.
• Log transport services – manage the transfer of archived redo logs from the primary to the standby database.
• Network configuration – refers to the network connection between the primary and standby databases.
• Log apply services – apply the archived logs to the standby database.
• Role management services – manage role changes between the primary and standby databases.
• Data Guard broker – manages the Data Guard creation process and monitors the Data Guard configuration.

50) What is role transition and when does it happen?

• Database operates in one of the following mutually exclusive roles


Primary
Standby
• Role transition is the change of role between the primary and standby databases.
• Data Guard enables you to change these roles dynamically by issuing SQL statements.
• A transition between the primary and standby databases happens in one of the following ways:
• Switchover, where the primary database is switched to the standby role and the standby becomes the
primary database.
• Failover, where a standby database is activated as a disaster recovery solution after a failure
of the primary database.
• The Database Resource Manager (DRM) allows you to create resource plans, which specify resource allocation to various consumer
groups.
• DRM offers an easy-to-use and flexible system by defining distinct independent components.
• Enables you to limit the length of time a user session can stay idle and automatically terminates
long-running SQL statements and user sessions.
• Sets the initial login priorities for various consumer groups.
• DRM will automatically queue all the subsequent requests until the currently running sessions
complete.

Controlfile and Redolog File Management
1. What is Archived Redo Log?
 The archived redo log consists of redo log files that have been archived before being reused.

2. What is Mirrored on-line Redo Log?


 A mirrored on-line redo log consists of copies of on-line redo log files physically located on separate
disks; changes made to one member of the group are made to all members.
3. What is the use of Control File?
 When an instance of an ORACLE database is started, its control file is used to identify the database
and redo log files that must be opened for database operation to proceed. It is also used in database
recovery.
4. What does a Control file Contain?
A Control file records the physical structure of the database. It contains the following information.
Database Name and locations of a database's files and redolog files. Time stamp of database
creation.

5. What happens when the archive log destination becomes 100% full when the database is running in
ARCHIVELOG mode? How do you recover? The database hangs because the archiver cannot write new archive
logs. We should move old archives to a different location (or free up space), after which archiving and the
database resume.

6. How do you know whether archive log mode is enabled or not?


 Issue command 'archive log list' at sys prompt

7. What is a stale redo log file?
A redo log member whose contents are incomplete (or that has not yet been written to) is shown with a STALE status in V$LOGFILE.

8. What is the maximum number of control files you can create in an Oracle database?


   What is the error number you get if you try to create more than 8?
8; ORA-00208

9. If controlfile crashed, no backup. how to recover?


 By recreating controlfile.

10. How do you increase the maximum number of datafiles?


 Generate the control file creation script from the existing control file and recreate the control file after
changing the parameter MAXDATAFILES to your desired value.
Procedure:
1. With the database open, generate the control file creation script (backup controlfile to trace)
2. Edit the script and change MAXDATAFILES
3. Shut down and start the database in nomount
4. Execute the script with NORESETLOGS
5. Alter database open

11. If you want to maintain one more archive destination, which parameter do you have to set?
It is a dynamic parameter; set an additional LOG_ARCHIVE_DEST_n parameter (for example LOG_ARCHIVE_DEST_2='LOCATION=<path>').

12. How do you take backup of a controlfile?


 ALTER DATABASE BACKUP CONTROLFILE TO '<destination>'; the backup file will be saved at the destination you specify.

13. How do you make your redolog group inactive?


 By performing manual log switches: ALTER SYSTEM SWITCH LOGFILE;

14. Which parameter allows you to create the maximum number of redo log groups?
 MAXLOGFILES, in the control file recreation script.

15. How to create a trace file? “Alter database backup controlfile to trace;”

16. How do you rename a database?     - Alter system switch logfile.


- Alter database backup controlfile to trace.
- Shutdown.
- Edit the generated control file script:
   change CREATE CONTROLFILE REUSE DATABASE 'oldname' RESETLOGS to CREATE CONTROLFILE SET DATABASE 'newname'
RESETLOGS,
   and remove the line stating recover database using backup controlfile.
- Change the init.ora file (DB_NAME).
- Change tnsnames.ora.

17. Can we rename the redo logs? If yes, in which stage - with the DB up or down?


Yes.
If the group is INACTIVE - it can be renamed in the open stage.
If the group is CURRENT - it cannot be renamed in the open state.
18. What is a trace file and how is it created?
Each server and background process can write an associated trace file. When an internal error is
detected by a process or user process, it dumps information about the error to its trace. This can be
used for tuning the database.
19. How to implement multiple control files for an existing database?
 - Shut down the database
 - Copy one of the existing control files to the new location
 - Edit the init.ora file, adding the new control file name to CONTROL_FILES
 - Restart the database

20. How do you know how many archive logs have been generated?


Using the view V$LOG_HISTORY.
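For example, a per-day count can be obtained with a query like this (a sketch; FIRST_TIME is a standard V$LOG_HISTORY column):

SQL> select trunc(first_time) day, count(*) archives_generated
     from v$log_history
     group by trunc(first_time)
     order by 1;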

Database Creation and Tablespace Management

1. What are the steps involved in Database Startup?


  Start an instance, Mount the Database and Open the Database.

2. What are the steps involved in Database Shutdown? Close the Database; Dismount the Database
and Shutdown the Instance.

3. What is Restricted Mode of Instance Startup?


 An instance can be started in restricted mode so that when the database is open connections are
limited only to those whose user accounts have been granted the RESTRICTED SESSION system
privilege.

4. What mode the instance should be to create the database?


No mount

5. While creating database can you specify the size of control files?
      No

6. Which parameter determines the size of SHARED POOL?


SHARED_POOL_SIZE.

7. After mounting a database using the command STARTUP MOUNT, can you open your database in
RESTRICTED MODE using the command "alter database open restrict;"?
 No.

8. While creating database can you specify more than one datafile for SYSTEM Tablespace?
 Yes

9. What is a trace file and how is it created?


 Each server and background process can write an associated trace file. When an internal error is
detected by a process or user process, it dumps information about the error to its trace. This can be
used for tuning the database.

10. What are the minimum parameters that should exist in the parameter file (init.ora)?
DB_NAME - must be set to a text string of no more than 8 characters; it is stored inside the datafiles, redo log
files and control files at database creation.
DB_DOMAIN - a string that specifies the network domain where the database is created. The global database
name is identified by setting these two parameters (DB_NAME & DB_DOMAIN).
CONTROL_FILES - list of control file names of the database. If no names are mentioned, default names are used.
DB_BLOCK_BUFFERS - determines the number of buffers in the buffer cache in the SGA.
PROCESSES - determines the number of operating system processes that can be connected to Oracle
concurrently. The value should allow for the 5 background processes plus an additional 1 for each user.
ROLLBACK_SEGMENTS - list of rollback segments an Oracle instance acquires at database startup (optional).
11. How do you get the create syntax of a table or index or function or procedure?           Select
dbms_metadata.get_ddl('TABLE','EMP','SCOTT') from dual;

12. Explain the SYSTEM tablespace and the SYSAUX tablespace. The SYSTEM tablespace holds the data dictionary
views relating to Oracle. The SYSAUX tablespace is used for the storage of non-SYSTEM related tables and indexes
that traditionally were placed in the SYSTEM tablespace, such as the RMAN recovery catalog, the Automatic
Workload Repository and Ultra Search.

13. What are the different modes of mounting a database with the Parallel Server?
Exclusive Mode - if the first instance that mounts a database does so in exclusive mode, only that instance can
mount the database.
Parallel Mode - if the first instance that mounts a database is started in parallel mode, other instances that are
started in parallel mode can also mount the database.
14. Where is the alert log stored? What is the parameter?
 In the trace directory (for example /disk2/oradata/prod/diag/rdbms/prod/prod/trace); the parameter is
DIAGNOSTIC_DEST (Oracle 11g).

15. What is a Data Dictionary? The Oracle data dictionary is one of the most important components of


the Oracle DBMS. It contains all information about the structures and objects of the database such as
tables, columns, users, data files etc. The data stored in the data dictionary is also often called
metadata.

16. How many redo logs should you have and how should they be configured for maximum
recoverability? You should have at least three groups of two redo logs with the two logs each on a
separate disk spindle (mirrored by Oracle). 

17. What is the Max Size of the SID?


 15char

1. What is a Tablespace?
 A database is divided into logical storage units called tablespaces. A tablespace is used to group
related logical structures together.

2. What are the Characteristics of Data Files?


A data file can be associated with only one tablespace. One or more data files together form the storage of
a tablespace.

3. How do you drop a tablespace, if it has database objects?  Drop tablespace tablespacename


including contents

4. Can we create a tablespace with multiple datafiles in a single stroke?


Yes

5. Can a datafile be associated with two different tablespaces?


No.

6. Can we rename a datafile when the corresponding tablespace is in      read-only mode?


 No

7. For transportable tablespace what should be the tablespace status?


Read-only

8. How to rename a datafile?


     Tablespace datafile rename: 
    -Take tablespace offline;
              Sys>>  Alter tablespace <TBSNAME> offline;
  -Change the name at OS level

               Linux> cp oldname newname
-Change the datafile name:
               Sys>> Alter tablespace <TBSNAME> rename datafile 'oldname' to 'newname';
-Bring the tablespace back online: Alter tablespace <TBSNAME> online;

9. Explain the relationship among database, tablespace and data file. Each database is logically divided into
one or more tablespaces, and one or more data files are explicitly created for each tablespace.
10. How do you drop a tablespace?  Drop tablespace ts1 including contents and datafiles;

11. What is the procedure for Transportable tablespace migration?


 Transportable Tablespaces (TTS) allow you to copy a set of datafiles for a tablespace on one
database and plug that tablespace into a different database. Note that you cannot transport a
single partition of a table without transporting the entire table; therefore you need to exchange
the partition with a stand-alone table temporarily, so that the partition becomes its own table.

12. How to rename the tablespace?


 Sys>alter tablespace <tablesapce name> rename to <tablespace name>

13. How do you find the default tablespaces?


  Sys> select * from database_properties where property_name like '%DEFAULT%';

14. How many datafiles can you add to a tablespace?


 A smallfile tablespace can have up to 1,022 datafiles; the database as a whole is limited to 65,533 datafiles.

User Management and Undo Management

1. Can objects of the same Schema reside in different tablespaces?Yes.

2. Can a Tablespace hold objects from different Schemas?Yes.

3. Is CONNECT a system privilege or a role? If the answer is 'a role', what system privileges are
assigned to this role by default?
It is a role; by default it carries the CREATE SESSION privilege.

4. Where do you get the information of quotas?


The DBA_TS_QUOTAS and USER_TS_QUOTAS views.

5. What is the init parameter to make use of profile?


Resource_limit=true
     
6. While creating user can you assign any role?
 Yes

7. Can a segment (table) be present in more than one tablespace?  No; it is possible only when the table is
created using the partitioning feature, with partitions in different tablespaces.

8. Can a segment (table) be present in more than one datafile?


Yes, provided the datafiles belong to the same tablespace.

9. Can we create the permanent objects in temporary tablespace?


No

10. Can you drop an object if tablespace is Offline?


 Yes

11. What privileges do you normally grant when you create users?


 CREATE SESSION

12. What is a View?


 A view is a virtual table. Every view has a query attached to it. (The query is a SELECT statement
that identifies the columns and rows of the table(s) the view uses.)
13. Can a view be based on another view? Yes.
14. Does a view contain data?
 Views do not contain or store data.
15. When does a transaction end?  When it is committed or rolled back.

16. Define Transaction? A Transaction is a logical unit of work that comprises one or more SQL
statements executed by a single user.
17. What is default tablespace? The Tablespace to contain schema objects created without specifying
a tablespace name.
18. What is Tablespace Quota? The collective amount of disk space available to the objects in a
schema on a particular tablespace.
19. What is the use of Roles? REDUCED GRANTING OF PRIVILEGES - Rather than explicitly granting
the same set of privileges to many users a database administrator can grant the privileges for a
group of related users granted to a role and then grant only the role to each member of the group.

DYNAMIC PRIVILEGE MANAGEMENT - When the privileges of a group must change, only the
privileges of the role need to be modified. The security domains of all users granted the group's role
automatically reflect the changes made to the role.
SELECTIVE AVAILABILITY OF PRIVILEGES - The roles granted to a user can be selectively enabled
(available for use) or disabled (not available for use). This allows specific control of a user's
privileges in any given situation.
APPLICATION AWARENESS - A database application can be designed to automatically enable and
disable selective roles when a user attempts to use the application.
20. What is a profile? Each database user is assigned a profile that specifies limitations on various
system resources available to the user.
21. What are the roles and user accounts created automatically with the database? 20 roles and
7 user accounts are created when a database is created manually.

22. What is a user account in an Oracle database? A user account is not a physical structure in the database,
but it has an important relationship to the objects in the database and holds certain
privileges.

23. How do you create a table in another tablespace name?


 Create table xyz (a number) tablespace system

24. What are roles? How can we implement roles? Roles are the easiest way to grant and manage
common privileges needed by different groups of database users. We create roles, assign
privileges to the roles and then assign each role to a group of users. This simplifies the job of assigning
privileges to individual users.

25. What does ROLLBACK do? ROLLBACK retracts any of the changes resulting from the SQL
statements in the transaction.
26. What is a deadlock? Explain. A deadlock arises when two processes are each waiting to update
rows of a table which are locked by the other process. In a database environment this often happens
because proper row lock commands are not issued. Poor design of the front-end application may cause
this situation, and the performance of the server will reduce drastically. These locks are released
automatically when a commit/rollback operation is performed or when one of the processes is
killed externally.

27. How do you get you demo files get created in your user?
 $ORACLE_HOME/rdbms/admin/utlsampl.sql, edit and execute this script

28. Can we drop default profile?


 No

29. Can we edit default profile?


 Yes

1. Tell me about ORA-01555 and how you address it if you get this error.
It usually occurs after queries or batch processes have been running for a long time, which means
you can lose many hours of processing when the error crops up. There are three situations that can
cause the ORA-01555 error:

A. An active database with an insufficient number of small-sized rollback segments
B. A rollback segment corruption that prevents a consistent read requested by the query
C. A fetch across commits while your cursor is open

2. Flashback Query is possible with UNDO as well as ROLLBACK?


FALSE (only possible with Undo)

3. Which view shows undo usage - how much undo data has been flushed out, how much is currently held,
and how many active and expired blocks there are?
 V$UNDOSTAT

4. Two users fire the same statement at the same time - does performance degrade or stay good? What happens if
it is a select statement, and what if it is an insert statement?
 Performance comes down, and there is a chance of getting an ORA-01555 error.

5. What does ROLLBACK do? ROLLBACK retracts any of the changes resulting from the SQL
statements in the transaction.

6. What are the init parameters you have to set to make use of undo management?
undo_tablespace = undotbs1
undo_management = auto
undo_retention = time in seconds
(and comment out the old ROLLBACK_SEGMENTS parameter)

7. What is Rollback Segment?A Database contains one or more Rollback Segments to temporarily
store "undo" information.

8. Can flashback work on database with out undo and with rollback   segments?
No.

Networking
1. What background process refreshes materialized views? CJQ0 (the job queue coordinator).

2. What is the default location for the tnsnames.ora and sqlnet.ora files, and if you don't find them there,
where do you look? $ORACLE_HOME/network/admin; if they are not there, check the directory pointed to by the
TNS_ADMIN environment variable.

3. Can we start multiple database services with in one listener service? Yes    

4. How do you set a password for the listener? By using the lsnrctl utility (CHANGE_PASSWORD and SET PASSWORD commands).

5. Can we have two listeners for one database? Yes

6. Can we have the same listener name for two databases? No

7. What is the parameter you have to set in the init.ora to enable remote
login?                       remote_login_passwordfile=exclusive

8. How do you connect to the db and start up and shut down the db without having the dba group? Using
remote_login_passwordfile

9. What is the environment variable to set the location of the listener.ora? TNS_ADMIN

10. How do you know whether the listener is running or not?


  $ lsnrctl status
  $ ps -ef | grep tns

11. What is the need for a password file? The password file provides remote DB authentication.

13. What is a runaway session? You killed a session in the database but it still remains at the OS level
(or vice versa); this is called a runaway session. A runaway session causes higher CPU usage.

14. What is netstat and what is its usage? It is a utility to check port number availability. Usage:
netstat -na | grep <port number>

Backup/Recovery

1. Which types of backups you can take in Oracle?


Backup Types:
1. Physical backup (Physical files backup-Datafile, Archived Redo log file, Control file, parameter file
and password file- User Managed Backup, RMAN backup)
2. Logical backup (tables, schema, tablespace and full database backups - taking object backups and
transferring them to the same or another database)
Note: In Oracle, when we talk about backup/recovery, it is mostly dedicated to physical backup,
not logical backup.
So, here we have types of backup:
1. Online/Hot/Inconsistent backup
2. Offline/Cold/Consistent Backup
3. Whole database backup
4. Incremental backup
5. Differential backup
6. Partial backup

2. A database is running in NOARCHIVELOG mode then which type of backups you can take?
Offline/Cold/Consistent Backup

3. Can you take partial backups if the Database is running in NOARCHIVELOG mode?
No, a partial backup cannot be taken while the database is not in ARCHIVELOG mode.
A partial backup will not be synchronized with the rest of the database; it is a copy of just part of the
database at a particular moment in time. If it is ever necessary to restore a file from a partial backup,
it will have to be resynchronized with the rest of the database with the help of the archive logs, i.e. by
applying the changes from the archived and online redo log files to bring it up to date.
So there is no concept of a partial backup if the database is not running in ARCHIVELOG mode.

4. Can you take Online Backups if the database is running in NOARCHIVELOG mode?
No, we can't take an online backup while the database is running in NOARCHIVELOG mode. A datafile that is
backed up online will not be synchronized with any particular SCN, nor will it be synchronized with the
other data files or the control files. Applying archive logs is mandatory to resynchronize the backed-up
datafile with the SCN and the other datafiles or control file.

5. How do you bring the database in ARCHIVELOG mode from NOARCHIVELOG mode?
Note: To switch the database from NOARCHIVELOG to ARCHIVELOG mode, the database must be in mount mode (nomount
or open mode will not allow you to put the database in ARCHIVELOG mode). Similarly, to switch the database from
ARCHIVELOG back to NOARCHIVELOG mode, the same rule applies.
Archive to No Archive log mode
Sql> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
Sql> shut down immediate;
Sql> startup mount;
sql> alter database noarchivelog;
sql> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

No archive log mode to archive log mode:
Sql> select log_mode from v$database;
LOG_MODE
------------
NOARCHIVELOG
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 14
Current log sequence 17
SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 369098752 bytes


Fixed Size 1249056 bytes
Variable Size 201326816 bytes
Database Buffers 163577856 bytes
Redo Buffers 2945024 bytes
Database mounted.
SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG
SQL> alter database archivelog;
Database altered.
SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG
SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

SQL> archive log list;


Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 14
Next log sequence to archive 17
Current log sequence 17
SQL>

6. You cannot shutdown the database for even some minutes, then in which mode you should run
the database?
Database must be running in Archive log mode.
Differences concerning backups (NOARCHIVELOG vs. ARCHIVELOG):
- Must back up the entire database vs. can back up parts of the database (datafiles, tablespaces).
- The DB must be shut down vs. hot backups are possible.
- Only the entire DB can be restored vs. individual tablespaces can be restored.
- In case of a failure, all changes since the last backup are lost vs. all committed transactions are recoverable.

7. Where should you place archive log files - on the same disk where the DB is, or on another disk? For database
performance reasons and to protect against disk failure, archive log files should be stored on a different disk.

8. Can you take online backup of a Control file if yes, how?


There are two ways in which we can take an online backup of the control file:
1. User-managed backup technique:
Sql> alter database backup controlfile to trace;
A SQL script that recreates the control file is written to the user_dump_dest (udump) directory;
the trace file contains the SQL script needed to create the control file.
Sql> alter database backup controlfile to '/oracle/app/ctrl.bak' reuse;
This takes the control file backup in binary format.
2. RMAN is another technique by which we can take an online backup of the control file; the database
must be at least mounted, otherwise RMAN cannot perform the required
operation.
Rman> backup current controlfile;
Rman> backup current controlfile format '/backup/ctrlfile.bak';
Rman> configure controlfile autobackup on;

9. What is a Logical Backup?


A logical backup takes a backup of the data using SQL commands into a binary format file so that it can be
imported into the same or another database.
It is called a logical backup because we back up objects (data) from one database and
restore them to another database.
Traditional export/import and Data Pump are the tools used to perform logical backups. Examples of
logical backups are:
table backup, tablespace backup, schema backup and full database backup.

10. Should you take the backup of Logfiles if the database is running in ARCHIVELOG mode?
No, there is no need to take a backup of the online redo log files while the database is running in
ARCHIVELOG mode, because whatever information the redo logs contain has already been
copied to archived redo log files before the switch to the next redo log. So there is no benefit in backing
up the online redo log files.

11. Why do you take tablespaces in Backup mode?
The goal of ALTER TABLESPACE BEGIN BACKUP and END BACKUP is to set special markers in the
current database files so that their copies are usable, without affecting current operations.
Nothing needs to be changed in the current datafile, but, as the copying is done by an external
tool (an operating system utility), the only way to have something recorded in the copy is to set it in the
current datafiles before the copy, and revert it at the end.
Sql> alter tablespace <tablespace_name> begin backup;
While the tablespace is in backup mode:
- the hot backup flag in the datafile header is set, so that the copy is identified as a hot backup
copy. This is used to manage backup consistency when the copy is used for recovery.
- A checkpoint is done for the tablespace, so that in case of recovery, no redo generated before that
point is applied. The BEGIN BACKUP command completes only when the checkpoint is done.

12. Can Full Backup be performed when the database is open?


 No.

13. What are the advantages of operating a database in ARCHIVELOG mode over operating it in
NO ARCHIVELOG mode?
Complete database recovery from disk failure is possible only in ARCHIVELOG mode. Online database
backup is possible only in ARCHIVELOG mode.

14. How do you restore and recover a datafile while the database is up and running?
 Make the datafile offline, restore and recover datafile, make the datafile online;

15. Can you take COLD backup while database is running?


 No

16. HOT backup is consistent backup or inconsistent backup?


 Inconsistent

17. Can you take logical backups in mount stage?


 No

18. How do you know whether a specific tablespace is in begin backup mode?
 Select file#, status from v$backup; if the status is ACTIVE, the corresponding datafile's tablespace is in begin backup mode.

19. When will you especially take a cold backup?


During upgrades and migrations.

20. What are the modes/options in incomplete recovery?


Cancel based, change based, time based

21. How do you apply archive logs to the previous day's cold backup?
 The steps involved in the recovery are:
- restore the cold backup
- startup mount
- recover database using backup controlfile until cancel
- alter database open resetlogs
- shutdown
- startup
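A minimal sketch of those steps as commands (the backup location and datafile path are illustrative):

$ cp /backup/coldbackup/*.dbf /u01/oradata/orcl/
SQL> startup mount;
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;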

22. What is hot backup and how it can be taken?


Databases that run 24/7 are never shut down, so their backups must be taken while the database is
running. This type of physical backup is called a hot backup.
Steps to take a hot backup:
1. Begin backup
2. cp *.dbf <backup location>
3. End backup
4. Log switch
5. Take a control file backup with a SQL statement.
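As a sketch, the steps above map to commands like these (paths are illustrative):

SQL> alter database begin backup;
$ cp /u01/oradata/orcl/*.dbf /backup/hot/
SQL> alter database end backup;
SQL> alter system switch logfile;
SQL> alter database backup controlfile to '/backup/hot/control.bak';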

23. What is cold backup?


Cold backup is taking backup of all physical files after normal shutdown of database.  We need to
take.
1. All Data files.
2. All Control files.
3. All on-line redo log files.
4. The init.ora file (Optional)

24. What is a partial backup? A partial backup is any operating system backup short of a full backup,
taken while the database is open or shut down.

25. Explain the difference between a hot backup and a cold backup and the benefits associated with
each.
A hot backup is basically taking a backup of the database while it is still up and running and it must
be in archive log mode. A cold backup is taking a backup of the database while it is shut down and
does not require being in archive log mode. The benefit of taking a hot backup is that the database is
still available for use while the backup is occurring and you can recover the database to any point in
time. The benefit of taking a cold backup is that it is typically easier to administer the backup and
recovery process. In addition, since you are taking cold backups the database does not require being
in archive log mode and thus there will be a slight performance gain as the database is not cutting
archive logs to disk.

26. You have just had to restore from backup and do not have any control files. How would you go
about bringing up this database? I would create a text-based backup control file script, stipulating where on
disk all the data files were, and then issue the recover command with the USING BACKUP CONTROLFILE
clause.

Log Miner SQL Loader Auditing

1. How do you find out when a record was last updated?


This can be done using the LogMiner utility.

2. What is log miner?


Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files
through a SQL interface. Redo log files contain information about the history of activity on
a database. Before LogMiner, if there were a small number of transactions that required rollback,
you had to restore the table to an earlier state and apply archived log files to bring the
table forward to just before the corruption; when restoring the table and applying the archived log
files, you risked losing later transactions that you would like to retain. With LogMiner you can
roll back only those transactions without losing any other transactions.

3. From which version logminer has started?


Oracle 8i

4. Benefits of using LogMiner? 1. Determining what actions you would have to take to perform fine-grained


recovery at the transaction level. If you fully understand and take into account existing
dependencies, it may be possible to perform a table-specific undo operation to return the table to its
original state.
2. Performance tuning and capacity planning through trend analysis. You can determine which tables
get the most updates and inserts.
3. Performing post-auditing. LogMiner can be used to track any data manipulation language (DML)
and data definition language (DDL) statements executed on the database, the order in which they
were executed, and who executed them.
5. From where can we get the archivelog contents that LogMiner has mined?
Query the V$LOGMNR_CONTENTS view.

6. What is the usage of MINE_VALUE? The following usage rules apply to


the MINE_VALUE and COLUMN_PRESENT functions:
1. They can only be used within a LogMiner session.
2. They must be invoked in the context of a SELECT operation from the V$LOGMNR_CONTENTS view.
3. They do not support LONG, LONG RAW, CLOB, BLOB, NCLOB, ADT, or COLLECTION datatypes.
7. What is the parameter which is used for logminer?
 UTL_FILE_DIR = /oracle/database
8. Logminer configuration?
There are three basic objects in a Log Miner configuration that you should be familiar with: the
source database,  the Log Miner dictionary, and the redo log files containing the data of interest:
The source database is the database that produces all the redo log files that you want Log Miner to
analyze.
The Log Miner dictionary allows Log Miner to provide table and column names, instead of internal
object IDs, when it presents the redo log data that you request.
Log Miner uses the dictionary to translate internal object identifiers and datatypes to object names
and external data formats. Without a dictionary, Log Miner returns internal object IDs and presents
data as binary data.

9. What are Log Miner dictionary options?

Using the Online Catalog: Oracle recommends that you use this option when you will have access to
the source database from which the redo log files were created and when no changes to the column
definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.
Extracting a Log Miner Dictionary to the Redo Log Files: Oracle recommends that you use this option
when you do not expect to have access to the source database from which the redo log files were
created, or if you anticipate that changes will be made to the column definitions in the tables of
interest.
Extracting the LogMiner Dictionary to a Flat File:This option is maintained for backward compatibility
with previous releases. This option does not guarantee transactional
consistency. Oracle recommends that you use either the online catalog or extract the dictionary from
redo log files instead.

10. Views related to logminer?


 Once Log Miner is started, the contents of the logfiles can be queried using the following views:
1. V$LOGMNR_DICTIONARY - The dictionary file in use.
2. V$LOGMNR_PARAMETERS - Current parameter settings for Log Miner.
3. V$LOGMNR_LOGS - Which redo log files are being analyzed.
4. V$LOGMNR_CONTENTS - The contents of the redo log files being analyzed.

1. What is Auditing?
 Monitoring of user access to aid in the investigation of database use.
2. What is the default destination where Oracle creates audit trail files automatically when you login
as a sysdba?
$ORACLE_HOME/rdbms/audit

3. A table was deleted by a user; how do you come to know which user deleted it? If he also deleted
the alert log file, how would you come to know?
 By using AUDITING.

4. What are the different levels of Auditing?
Statement Auditing, Privilege Auditing and Object Auditing.
5. What is Statement Auditing?
Statement auditing is the selective auditing of related groups of SQL statements regarding a type of
database structure or schema object, but not a specifically named object (for example, AUDIT TABLE
audits all CREATE, DROP and TRUNCATE TABLE statements).
6. What are the database administrator's utilities available?
SQL*DBA - This allows the DBA to monitor and control an ORACLE database.
SQL*Loader - It loads data from standard operating system files (flat files) into ORACLE database tables.
Export (exp) and Import (imp) - These utilities allow you to move existing data in ORACLE
format to and from ORACLE databases.
7. What is Privilege Auditing?
Privilege auditing is the auditing of the use of powerful system privileges without regard to
specifically named objects.
8. What is Object Auditing?
Object auditing is the auditing of accesses to specific schema objects without regard to user.

9. What is the difference between SQL loader and exp/imp?


SQL*Loader is mostly used for ETL purposes and for loading data from non-Oracle sources (flat files) into an Oracle database.
Exp/Imp is an Oracle tool used for moving data from one Oracle database to another Oracle database.

10. Oracle 11g new feature in auditing level?Oracle Audit Vault is a new feature that will provide a
solution to help customers address the most difficult security problems remaining today, protecting
against insider threat and meeting regulatory compliance requirements.
 

11. Is it mandatory to mention a log file in SQL*Loader?
 No. If you do not specify one, a log file is created automatically when you load the data.

12. How can you skip some records in Sql Loader?


 By Using Skip Parameter

13. A file consists of 10 records. How can you load the 5th and 6th records?
 sqlldr control=<controlfilename> skip=4 load=2

14. What a bad file contains?


 Oracle rejected records

15. Can you insert some records into a non-empty table?
 Not with the default INSERT load method, which requires the table to be empty; use the APPEND method to load into a non-empty table.

16. What is INFILE *?


 * indicates that the data is present within the control file itself.

RECOVERY MANAGER

1)What is RMAN and How to configure it?

RMAN is an Oracle Database client.


It performs backup and recovery tasks on your databases and automates administration of your
backup strategies.
It greatly simplifies the DBA's job by managing the backing up, restoring, and recovering of production
database files.
  This tool integrates with sessions running on an Oracle database to perform a range
of backup and recovery activities, including maintaining an RMAN repository of historical data about
backups.

There is no additional installation required for this tool.
  It is installed by default with the Oracle database installation.
  The RMAN environment consists of the utilities and databases that play a role in backing up
your data.
  We can access RMAN through the command line or through Oracle Enterprise Manager.

2) Why to use RMAN?


   
RMAN gives you access to several backup and recovery techniques and features not available with
user-managed backup and recovery. The most noteworthy are the following:

Automatic specification of files to include in a backup


Establishes the name and locations of all files to be backed up

Maintain backup repository


   Backups are recorded in the control file, which is the main repository of RMAN
metadata 
  Additionally, you can store this metadata in a recovery catalog

Incremental backups 
Incremental backup stores only blocks changed since a previous backup
Thus, they provide more compact backups and faster recovery, thereby reducing the need to apply
redo during datafile media recovery
Unused block compression: 
  In unused block compression, RMAN can skip data blocks that have never been used

Block media recovery


  We can repair a datafile with only a small number of corrupt data blocks without
taking it offline or restoring it from backup

Binary compression
A binary compression mechanism integrated into Oracle Database reduces the size of
backups

Encrypted backups
  RMAN uses backup encryption capabilities integrated into Oracle Database to store
backup sets in an encrypted format

Corrupt block detection


RMAN checks for the block corruption before taking its backup

3) How RMAN works?
 
RMAN backup and recovery operation for a target database are managed by RMAN client 
 RMAN uses the target database control file to gather metadata about the target database and to
store information about its own operations 
  The RMAN client itself does not perform backup, restore, or recovery operations 
  When you connect the RMAN client to a target database, RMAN allocates server sessions on the
target instance and directs them to perform the operations 
  The work of backup and recovery is performed by server sessions running on the  target database 
  A channel establishes a connection from the RMAN client to a target or auxiliary database instance
by starting a server session on the instance 
  The channel reads data into memory, processes it, and writes it to the output device 
  When you take a database backup using RMAN, you need to connect to the target database using
RMAN Client 
  The RMAN client can use Oracle Net to connect to a target database, so it can be located on any
host that is connected to the target host through Oracle Net 
  For backup you need to allocate explicit or implicit channel to the target database 
An RMAN channel represents one stream of data to a device, and corresponds to one database
server session. 
 These sessions dynamically collect information about the files from the target database control file before
taking the backup or while restoring 
  For example if you give ' Backup database ' from RMAN, it will first get all the datafiles information
from the controlfile 
  Then it will divide all the datafiles among the allocated channels. (Roughly equal size of work as per
the datafile size) 
  Then it takes the backup in 2 steps

Step1:
The channel will read all the Blocks of the entire datafile to find out all the formatted blocks to
backup

Note:
  RMAN do not take backup of the unformatted blocks

Step2:
  In the second step it takes back up of the formatted blocks

Example:
This is the key advantage of using RMAN, as it backs up only the required blocks.
  Let's say that in a datafile of 100 MB, only 10 MB holds useful data and the remaining 90 MB is
free; RMAN will then back up only those 10 MB

4) What O/S and oracle user privilege required using RMAN? 


 
RMAN always connects to the target or auxiliary database using the SYSDBA privilege 
  Its connections to a database are specified and authenticated in the same way as SQL*Plus
connections to a database 
  The O/S user should be part of the DBA group 
  For remote connection it needs the password file Authentication 
  Target database should have the initialization parameter REMOTE_LOGIN_PASSWORDFILE set to
EXCLUSIVE or SHARED

5) RMAN terminology:

A target database:
  An Oracle database to which RMAN is connected with the TARGET keyword 
  A target database is a database on which RMAN is performing backup and recovery operations 
  RMAN always maintains metadata about its operations on a database in the control file of the
database

A recovery Catalog:
  A separate database schema used to record RMAN activity against one or more target databases 
  A recovery catalog preserves RMAN repository metadata if the control file is lost, making it much
easier to restore and recover following the loss of the control file 
  The database may overwrite older records in the control file, but RMAN maintains records forever in
the catalog unless deleted by the user

Backup sets:
RMAN can store backup data in a logical structure called a backup set, which is the smallest unit of
an RMAN backup 
  One backup set contains one or more datafiles, a section of a datafile, or archivelogs

Backup Piece:
 A backup set contains one or more binary files in an RMAN-specific format 
  This file is known as a backup piece 
  Each backup piece is a single output file 
  The size of a backup piece can be restricted; if the size is not restricted, the backup set will
comprise one backup piece 
  Backup piece size should be restricted to no larger than the maximum file size that your filesystem
will support

Image copies: 
An image copy is a copy of a single file (datafile, archivelog, or controlfile) 
  It is very similar to an O/S copy of the file 
  It is not a backupset or a backup piece 
  No compression is performed

Snapshot Controlfile:
When RMAN needs to resynchronize from a read-consistent version of the control file,
it creates a temporary snapshot control file 
  The default name for the snapshot control file is port-specific

Database Incarnation:
Whenever you perform incomplete recovery or perform recovery using a backup
control file, you must reset the online redo logs when you open the database 
  The new version of the reset database is called a new incarnation 
  The reset database command directs RMAN to create a new database incarnation record in the
recovery catalog 
  This new incarnation record indicates the current incarnation
  

6) What is RMAN Configuration and how to configure it?

The RMAN backup and recovery environment is preconfigured for each target database 

  The configuration is persistent and applies to all subsequent operations on this target database,
even if you exit and restart RMAN 
  RMAN configured settings can specify backup devices, configure a connection to a backup device,
policies affecting backup strategy, the encryption algorithm, the snapshot controlfile location, and others 
  By default, a few configuration settings are already in place when you log in to RMAN 
  You can customize them as per your requirement 
  At any time you can check the current settings by using the "SHOW ALL" command 
  The CONFIGURE command is used to create persistent settings in the RMAN environment, which apply
to all subsequent operations, even if you exit and restart RMAN

7) How to check RMAN configuration?


RMAN>Show all;

8) How to reset to default configuration?


To reset to the default configuration settings, connect to the target database from
SQL*Plus and run (the username is a placeholder):
SQL> connect <user>@target_database as sysdba
SQL> execute dbms_backup_restore.resetConfig;

RMAN Catalog Database

9) What is Catalog database and How to configure it?


  This is a separate database which contains catalog schema
  You can use the same target database as the catalog database but it’s not at all recommended

10) How Many catalog database I can have?

You can have multiple catalog databases for the same target database 
  But at a time you can connect to only one catalog database via RMAN. It is not recommended to have
multiple catalog databases

11) Is this mandatory to use catalog database?


       No! It’s an optional one

12) What is the advantage of catalog database?


 Catalog database is a secondary storage of backup metadata 
  It is very useful in case you lose the current controlfile, as all the backup information is there in the
catalog schema 
  Secondly, in the controlfile the older backup information is aged out depending upon
CONTROL_FILE_RECORD_KEEP_TIME 
  The RMAN catalog database maintains the history of data

13) What is the difference between catalog database & catalog schema?
The catalog database is like any other database; it is the database that contains the RMAN catalog user's
schema. The catalog schema is the set of tables and views owned by that user, in which RMAN stores its repository metadata.

14)  What happen if catalog database lost?


Since catalog database is an optional there is no direct effect of loss of catalog
database 
  Create a new catalog database and register the target database with the newly created catalog one

All the backup information from the target database current controlfile will be updated to the catalog
schema 
  If any backup information has aged out of the target database controlfile, then you need to manually
catalog those backup pieces

RMAN backup:

15)  What are the database file's that RMAN can backup?
  RMAN can backup the controlfile, datafiles, archive logs, standby database controlfile, and
spfile

16) What are the database file's that RMAN cannot backup?
RMAN can not take backup of the pfile, Redo logs, network configuration files,
password files, external tables and the contents of the Oracle home files 

17) Can I have archivelogs and datafile backup in a single backupset?

    No.  We can not put datafiles and archive logs in the same backupset

18)  Can I have datafiles and contolfile backup in a single backup set?
 Yes 
  If the controlfile autobackup is not ON then RMAN takes backup of controlfile along
with the datafile 1, whenever you take backup of the database or System tablespace

19) Can I regulate the size of backup piece and backup set?
  Yes! 
  You can set max size of the backupset as well as the backup piece 
  By default one RMAN channel creates a single backupset with one backup piece in it 
  You can use the MAXPIECESIZE channel parameter to set limits on the size of backup pieces 
  You can also use the MAXSETSIZE parameter on the BACKUP and CONFIGURE commands to set a
limit for the size of backup sets

20) What is the difference between backup set backup and Image copy backup?
A backup set is an RMAN-specific proprietary format, whereas an image copy is a bit-
for-bit copy of a file 
  By default, RMAN creates backup sets

21) What is RMAN consistent backup and inconsistent backup?


A consistent backup occurs when the database is in a consistent state 
  That means backup of the database taken after a shutdown immediate, shutdown normal or
shutdown transactional 
  If the database is shutdown with abort option then its not a consistent backup 
A backup when the database is up and running is called an inconsistent backup 
  When a database is restored from an inconsistent backup, Oracle must perform media recovery
before the database can be opened, applying any pending changes from the redo logs 
  You can not take inconsistent backup when the database is in Noarchivelog mode

22)  Can I take RMAN backup when the database is down?


No!
   You can take RMAN backup only when the target database is Open or in Mount stage
   It’s because RMAN keep the backup metadata in controfile
   Only in open or mount mode controlfile is accessible

23)  Do I need to place the database in begin backup mode while taking RMAN inconsistent backup?
RMAN does not require extra logging or backup mode because it knows the format of
data blocks
       RMAN is guaranteed not to back up fractured blocks
       No extra redo is generated during RMAN backup

24) Can I compress RMAN backups?

 RMAN supports binary compression of backup sets


 The supported algorithms are BZIP2 (default) and ZLIB 
 It’s not recommended to compress the RMAN backup using any other OS or third party utility

Note:
 RMAN compressed backup with BZIP2 provides great compression but is CPU intensive
 Using ZLIB compression requires the Oracle Database 11g Advanced Compression Option and is
only supported with an 11g database
 The feature is not backward compatible with 10g databases

25) Can I encrypt RMAN backup?

 RMAN supports backup encryption for backup sets


 You can use wallet-based transparent encryption, password-based encryption, or both
 You can use the CONFIGURE ENCRYPTION command to configure persistent transparent encryption
 Use the SET ENCRYPTION command at the RMAN session level to specify password-based
encryption

26)  Can RMAN take backup to Tape?

 Yes!
 We can use RMAN for tape backup
 But RMAN cannot write directly to tape
 You need to have third-party Media Management Software installed
 Oracle has published an API specification to which Media Management Vendors who are members of
Oracle's Backup Solutions Partner program have access
 Media Management Vendors (MMVs) then write an interface library which the Oracle server uses to
write to and read from tape
Starting from Oracle 10g R2, Oracle has its own media management software for database backup
to tape, called Oracle Secure Backup (OSB)

27) How RMAN Interact with Media manager?

 Before performing backup or restore to a media manager, you must allocate one or more channels
or configure default channels for use with the media manager to handle the communication with the
media manager
 RMAN does not issue specific commands to load, label, or unload tapes
 When backing up, RMAN gives the media manager a stream of bytes and associates a unique name
with this stream
 When RMAN needs to restore the backup, it asks the media manager to retrieve the byte stream
 All details of how and where that stream is stored are handled entirely by the media manager

28) What is Proxy copy backup to tape?

 Proxy copy is functionality, supported by a few media managers, in which they handle the entire data
movement between datafiles and the backup devices
 Such products may use technologies such as high-speed connections between storage and media
subsystems to reduce load on the primary database server
 RMAN provides a list of files requiring backup or restore to the media manager, which in turn makes
all decisions regarding how and when to move the data

29) What is Oracle Secure backup?

 Oracle Secure Backup is a media manager provided by oracle that provides reliable and secure data
protection through file system backup to tape
 All major tape drives and tape libraries in SAN, Gigabit Ethernet, and SCSI environments are
supported

30) Can I restore or duplicate my previous version database using a later version of Oracle?
For example, is it possible to restore a 9i backup while using the 10g executables?

It is possible to use the 10.2 RMAN executable to restore a 9.2 database (same for 11.2 to 11.1 or
11.1 to 10.2, etc) even if the restored datafiles will be stored in ASM
RMAN is configured so that a higher release is able to restore a lower release, but it is strongly
suggested you use only the same version

31) Can I restore or duplicate between two different patchset levels?

 As you can restore between different Oracle versions, you can also do so between two different
patchset levels
Alter database open resetlogs upgrade; 
OR 
alter database open resetlogs downgrade; 

32) Can I restore or duplicate between two different versions of the same operating system?
For example, can I restore my 9.2.0.1.0 RMAN backup taken against a host running Solaris 9 to a
different machine where 9.2.0.1.0 is installed but where that host is running Solaris 10? 

 If the same Oracle Server installation CDs (media pack) can be used to install 9.2.0.1.0 on Solaris 9
and Solaris 10, this type of restore is supportable

33) Is it possible to restore or duplicate when the bit level (32 bit or 64 bit) of Oracle does not
match? 
For example, is it possible to restore or duplicate my 9.2. 64-bit database to a 9.2.32-bit installation?
 It is preferable to keep the same bit version when performing a restore/recovery 
  However, excluding the use of duplicate command, the use of the same operating system platform
should allow for a restore/recovery between bit levels (32 bit or 64 bit) of Oracle 
  Note, this may be specific to the particular operating system and any problems with this should be
reported to Oracle Support 
  If you will be running the 64-bit database against the 32-bit binary files or vice versa, after the
recovery has ended the database bit version must be converted using utlirp.sql
If you do not run utlirp.sql you will see errors including but not limited to:
ORA-06553: PLS-801: INTERNAL ERROR [56319]

34) Can I restore or duplicate my RMAN backup between two different platforms such as Solaris to Linux?

In general, you cannot restore or duplicate between two different platforms

35) What are the corruption types?

Datafile Block Corruption - Physical/Logical


Table/Index Inconsistency
Extents Inconsistencies
Data Dictionary Inconsistencies

Scenarios:
Goal: How to identify all the corrupted segments in the database reported by RMAN?

Solution:

Step 1: Identify the corrupt blocks (Datafile Block Corruption - Intra block corruption)
RMAN> backup validate check logical database;

To make it faster, it can be configured to use PARALLELISM with multiple channels:

RMAN> run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup validate check logical database;
}

Step2:  Using the view v$database_block_corruption:


SQL> select * from v$database_block_corruption;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
         6         10          1      8183236781662 LOGICAL
         6         42          1                  0 FRACTURED
         6         34          2                  0 CHECKSUM
         6         50          1      8183236781952 LOGICAL
         6         26          4                  0 FRACTURED

5 rows selected.

Datafile Block Corruption - Intra block corruption


It refers to intra-block corruptions that may cause different errors like ORA-1578, ORA-8103,
ORA-1410, ORA-600, etc. 
  Oracle classifies the corruptions as Physical and Logical
  To identify both Physical and Logical Block Corruptions use the "CHECK LOGICAL" option
  It checks the complete database for both corruptions without actually doing a backup

Solution1:

$ rman target /

RMAN> backup check logical validate database;

$ rman target /
RMAN> backup check logical database;

Solution2:
  Check the view V$DATABASE_BLOCK_CORRUPTION to identify the block corruptions detected by
RMAN

Solution3: DBVerify - Identify Datafile Block Corruptions


  DBVERIFY identifies Physical and Logical intra-block corruptions by default
  DBVerify cannot be run for the whole database in a single command
  It does not need a database connection either

dbv file=<datafile> blocksize=<block size>

RMAN Vs DBVerify - Datafile Intra Block Corruption

When the logical option is used by RMAN, it does exactly the same checks as DBV does for intra
block corruption.
RMAN can be run with PARALLELISM using multiple channels making it faster than DBV which can
not be run in parallel in a single command
DBV checks for empty blocks. In 10g RMAN may not check blocks in free extents when Locally
Managed Tablespaces are used. In 11g RMAN checks for both free and used extents.
Both DBV and RMAN (11g) can check for a range of blocks. RMAN: VALIDATE DATAFILE 1 BLOCK 10
to 100;.  DBV: start=10 end=100
RMAN keeps corruption information in the control file (v$database_block_corruption,
v$backup_corruption). DBV does not. 
RMAN may not report the corruption details like what is exactly corrupted in a block reported as a
LOGICAL corrupted block. DBV reports the corruption details in the screen or in a log file.
DBV can scan blocks with a higher SCN than a given SCN.
DBV does not need a connection to the database.

Identify TABLE / INDEX Inconsistency


Table / Index inconsistencies is when an entry in the Table does not exist in the Index or vice versa.
The common errors are ORA-8102, ORA-600 [kdsgrp1], ORA-1499 by "analyze validate structure
cascade".
The tool to identify TABLE / INDEX inconsistencies is the ANALYZE command:
analyze table <table_name> validate structure cascade;

When an inconsistency is identified, the above analyze command will produce error ORA-1499 and a
trace file.

36) What Happens When A Tablespace/Database Is Kept In Begin Backup Mode?

One danger in making online backups is the possibility of inconsistent data within a block. For
example, assume that you are backing up block 100 in datafile users.dbf. Also, assume that the
copy utility reads the entire block while DBWR is in the middle of updating the block.
 In this case, the copy utility may read the old data in the top half of the block and the new data in
the bottom half of the block. The result is called a fractured block, meaning that the data
contained in this block is not consistent at a given SCN

Therefore oracle internally manages the consistency as below : 


The first time a block is changed in a datafile that is in hot backup mode, the entire block is written
to the redo log files, not just the changed bytes
Normally only the changed bytes (a redo vector) is written
In hot backup mode, the entire block is logged the first time
This is because you can get into a situation where the process copying the datafile and DBWR are
working on the same block simultaneously
Let's say they are, and the OS blocking read factor is 512 bytes (the OS reads 512 bytes from disk at a
time). The backup program goes to read an 8k Oracle block. The OS gives it 4k. Meanwhile, DBWR
has asked to rewrite this block. The OS schedules the DBWR write to occur right now. The entire 8k
block is rewritten. The backup program starts running again (multi-tasking OS here) and reads the
last 4k of the block. The backup program has now gotten a fractured block -- the head and tail are
from two points in time. 
We cannot deal with that during recovery. Hence, we log the entire block image so that during
recovery, this block is totally rewritten from redo and is consistent with itself at least. We can recover
it from there. 

2.  The datafile headers which contain the SCN of the last completed checkpoint are not updated
while a file is in hot backup mode. This lets the recovery process understand what archive redo log
files might be needed to fully recover this file. 

Oracle RMAN Interview Questions/FAQs:

1) Difference between catalog and nocatalog?

ANS: CATALOG is used when you use a repository database as catalog.


NOCATALOG is used when you use the controlfile to register your backup information.
The default is NOCATALOG.

2) Difference between using recovery catalog and control file?

ANS:When new incarnation happens, the old backup information in control file will be lost. 
It will be preserved in recovery catalog.
In recovery catalog, we can store scripts.
Recovery catalog is central and can have information of many databases.

3) Can we use same target database as catalog?

ANS:No. 

The recovery catalog should not reside in the target database (the database to be backed up),
because if the target database is lost, the catalog stored inside it would be lost along with it.

4) How do u know how much RMAN task has been completed?

ANS:By querying v$rman_status or v$session_longops

5) From where list & report commands will get input

LIST:The primary purpose of the LIST command is to list backup and copies. For example, you can
list:

Backups and proxy copies of a database, tablespace, datafile, archived redo log, or control file
Backups that have expired
Backups restricted by time, path name, device type, tag, or recoverability
Archived redo log files and disk copies

REPORT:
You can use the REPORT command to answer important questions, such as:
Which files need a backup?
Which files have had unrecoverable operations performed on them?
Which backups are obsolete and can be deleted?
What was the physical schema of the target database or a database in the Data Guard environment
at some previous time?
Which files have not been backed up recently?

6) Command to delete archive logs older than 7days?

ANS:
RMAN> delete archivelog all completed before sysdate-7;

7) What is the use of crosscheck command in RMAN?

ANS:
Crosscheck will be useful to check whether the catalog information is intact with OS level
information.

8) What are the differences between crosscheck and validate commands


ANS:
Use the CROSSCHECK command to synchronize the physical reality of backups and copies with their
logical records in the RMAN repository.
Use the VALIDATE command to check for corrupt blocks and missing files, or to determine whether a
backup set can be restored.

9) Which one is better, differential (incremental) backup or cumulative (incremental) backup?


ANS:
A differential backup backs up all blocks changed after the most recent incremental backup at
level 1 or 0, so each backup is smaller and faster.

A cumulative backup backs up all blocks changed after the most recent incremental backup at
level 0, so each backup is larger but recovery is faster, since fewer incrementals need to be applied.
Which one is better depends on whether backup time or recovery time matters more in your environment.

10) What is Level 0, Level 1 backup?


ANS:

A level 0 incremental backup, which is the base for subsequent incremental backups, copies all
blocks containing data, 
backing the datafile up into a backup set just as a full backup would. 
A level 1 incremental backup can be either of the following types:
A differential backup, which backs up all blocks changed after the most recent incremental backup at
level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at
level 0

11)  Can we perform level 1 backup without level 0 backup?

ANS:
If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. 
If compatibility < 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the
backup. 
If compatibility is >= 10.0.0, RMAN copies all blocks changed since the file was created, and stores
the results as a level 1 backup. 
In other words, the SCN at the time the incremental backup is taken is the file creation SCN.

12) Will RMAN put the database/tablespace/datafile in backup mode ?

RMAN does not require you to put the database in backup mode.

13) What is snapshot control file?


ANS:
The snapshot CONTROLFILE is a copy of the CONTROLFILE that RMAN utilizes during long running
operation (such as backup). 
RMAN needs a read consistent view of the CONTROLFILE for the backup operation, but by its nature
the control file is extremely volatile.  
Instead of putting lock on the control file and causing all kinds of db enqueue problems, RMAN
makes a copy of controlfile called snapshot controlfile.  
The snapshot is refreshed at the beginning of every backup.

14) what is controlfile auto backup ?


ANS:
If CONFIGURE CONTROLFILE AUTOBACKUP is set to ON, then RMAN automatically backs up the control
file and server parameter file after every backup and after database structural changes. 
The control file autobackup contains metadata about the previous backup, which is crucial for
disaster recovery.

15) What is the difference between backup set and backup piece?
ANS:

Backup set is logical and backup piece is physical.

16) What is obsolete backup & expired backup?

A status of "expired" means that the backup piece or backup set is not found in the backup
destination.
A status of "obsolete" means the backup piece is still available, but it is no longer needed. 
The backup piece is no longer needed since RMAN has been configured to no longer need this piece
after so many days have elapsed, 
or so many backups have been performed.

17)  What is the difference between hot backup & RMAN backup?

For hot backup, we have to put database in begin backup mode, then take backup.
RMAN won’t put database in backup mode.

18)  How to put manual/user-managed backup in RMAN (recovery catalog)?


By using catalog command.
RMAN> CATALOG START WITH '/tmp/backup.ctl';

19)  What is the difference between auxiliary channel and maintenance channel ?

AUXILIARY:
Specifies a connection between RMAN and an auxiliary database instance.
An auxiliary instance is used when executing the DUPLICATE or TRANSPORT TABLESPACE command, 
and when performing TSPITR with RECOVER TABLESPACE . When specifying this option, the auxiliary
instance must be started but not mounted.
See Also: DUPLICATE to learn how to duplicate a database, and CONNECT to learn how to connect to
a duplicate database instance

CHANNEL:

Specifies a connection between RMAN and the target database instance. 


The channel_id is the case-sensitive name of the channel. 
The database uses the channel_id to report I/O errors.
Each connection initiates a database server session on the target or auxiliary instance: this session
performs the work of backing up, restoring, or recovering RMAN backups. 
You cannot make a connection to a shared server session.
Whether ALLOCATE CHANNEL allocates operating system resources immediately depends on the
operating system. 
On some platforms, operating system resources are allocated at the time the command is issued. 
On other platforms, operating system resources are not allocated until you open a file for reading or
writing.
Each channel operates on one backup set or image copy at a time. 
RMAN automatically releases the channel at the end of the job.

Patching and Upgradation

When you have moved Oracle binary files from one ORACLE_HOME/server to another server, which Oracle
utility is used to make the new ORACLE_HOME usable?
relink all

In which months oracle release CPU patches?


JAN, APR, JUL, OCT

When applying a single patch, can you use the OPatch utility?


Yes, you can use OPatch in the case of a single patch. The only type of patch that cannot be applied with
OPatch is a patchset.

Is it possible to apply a patch with OPatch without downtime?


As you know, to apply a patch your database and listener must be down, because OPatch updates your
current ORACLE_HOME. So, to answer the question: it is not possible with zero downtime in the case of a
single instance, but in RAC you can apply a patch without downtime, as there are separate
ORACLE_HOMEs and separate instances (one instance running on each ORACLE_HOME) and the patch
can be applied in a rolling fashion.

You have a collection of patches (nearly 100 patches) or a patchset. How can you apply only one patch
from it?
With napply itself (by providing the patch location and a specific patch id) you can apply only one patch
from a collection of extracted patches. For more information check "opatch util NApply -help"; it will
give you a clear picture.
For Example:
opatch util napply <patch_location> -id 9 -skip_subset -skip_duplicate
This will apply only the patch id 9 from the patch location and will skip duplicate and subset of patch
installed in your ORACLE_HOME. 

If both CPU and PSU are available for given version which one, you will prefer to apply?
From the above discussion it is clear once you apply the PSU then the recommended way is to apply
the next PSU only. In fact, no need to apply CPU on the top of PSU as PSU contain CPU (If you apply
CPU over PSU will considered you are trying to rollback the PSU and will require more effort in fact).
So if you have not yet decided on or applied any of the patches, then I suggest you use PSU
patches. For more details refer: Oracle Products [ID 1430923.1], ID 1446582.1

PSU is a superset of CPU, then why would someone choose to apply a CPU rather than a PSU?
CPUs are smaller and more focused than PSUs and mostly deal with security issues. This is theoretically a
more conservative approach and can cause less trouble than a PSU, as it changes less code. Thus anyone
who is concerned only with security fixes and not functionality fixes may find a CPU the better approach.

How to Download Patches, Patchset or Opatch from metalink?

If you are using latest support.oracle.com then after login to metalink Dashboard
- Click on "Patches & Updates" tab
- On the left sidebar click on "Latest Patchsets" under "Oracle Server/Tools".
- A new window will appear.
- Just mouseover on your product in the "Latest Oracle Server/Tools Patchsets" page.
- Corresponding oracle platform version will appear. Then simply choose the patchset version and
click on that.

- You will go to the download page. From the download page you can also change your platform and
patchset version.

REFERENCES:
http://docs.oracle.com/cd/E11857_01/em.111/e12255/e_oui_appendix.htm
Oracle® Universal Installer and OPatch User's Guide
11g Release 2 (11.2) for Windows and UNIX
Part Number E12255-11

What is the recent Patch applied?

What is OPatch?

How to Apply Opatch in Oracle?

1. You MUST read the Readme.txt file included in the patch; look for any prerequisite steps, post-
installation steps and any DB-related changes. Also, make sure that you have the correct OPatch
version required by this patch.
2.Make sure you have a good backup of database.
3. Make a note of all Invalid objects in the database prior to the patch.
4. Shutdown All the Oracle Processes running from that Oracle Home , including the Listener and
Database instance, Management agent etc.
5. You MUST Backup your oracle Home and Inventory
tar -cvf $ORACLE_HOME $ORACLE_HOME/oraInventory | gzip > Backup_Software_Version.tar.gz
6. Unzip the patch in $ORACLE_HOME/patches
7. cd to the patch directory and run "opatch apply" to apply the patch.
8. Read the output/log file to make sure there were no errors.

Patching Oracle Software with OPatch ?

opatch napply <patch_location> -skip_subset -skip_duplicate


OPatch skips duplicate patches and subset patches (patches under <patch_location> that are
subsets of patches installed in the Oracle home).

What is OPatch in Oracle?

OPATCH Utility (Oracle RDBMS Patching)

1. Download the required Patch from Metalink based on OS Bit Version and DB Version.
2. The database needs to be down before applying the patch.
3. Unzip and apply the patch using the "opatch apply" command. On successful application of the patch
you will see the message "OPatch succeeded."; crosscheck that the patch is applied by using the
"opatch lsinventory" command.
4. Each patch has a unique ID; the command to roll back a patch is "opatch rollback -id <patch
no.>". On successful rollback you will again see the message "OPatch succeeded."; crosscheck with the
"opatch lsinventory" command.
5. Patch file format will be like, “p<patch no.>_<db version>_<os>.zip”
6. We can check the opatch version using “opatch -version” command.
7. Generally, takes 2 minutes to apply a patch.
8. To get latest Opatch version download “patch 6880880 - latest opatch tool”, it contains OPatch
directory.
9. Contents of downloaded patches will be like “etc,files directories and a README file”

10. Log file for Opatch utility can be found at $ORACLE_HOME/cfgtoollogs/opatch
11. OPatch also maintains an index of the commands executed with OPatch and the log files
associated with it in the history.txt file located in the <ORACLE_HOME>/cfgtoollogs/opatch directory.
12. Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the
Oracle Database software. This means that you do not need to install Oracle Database 11g Release 2
(11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2).
13. Direct upgrade to Oracle 10g is only supported if your database is running one of the following
releases: 8.0.6, 8.1.7, 9.0.1, or 9.2.0. If not, you will have to upgrade the database to one of these
releases or use a different upgrade option (like export/ import).
14.Direct upgrades to 11g are possible from existing databases with versions 9.2.0.4+, 10.1.0.2+ or
10.2.0.1+. Upgrades from other versions are supported only via intermediate upgrades to a
supported upgrade version.

http://avdeo.com/2008/08/19/opatch-utility-oracle-rdbms-patching/

Oracle version 10.2.0.4.0 - what does each number refer to?


Oracle version number refers:
10 – Major database release number
 2 – Database Maintenance release number
 0 – Application server release number
 4 – Component Specific release number
 0 – Platform specific release number

Types of Patches?

How to rollback a patch?

What is PSU?

What is Rolling Patch?

How to check installed Patches?

How much time will it take for Patching?

Common issues faced in Patching?

REFERENCES:
OPATCH Utility (Oracle RDBMS Patching)
http://avdeo.com/2008/08/19/opatch-utility-oracle-rdbms-patching/

How to apply Database Patches


http://rafioracledba.blogspot.in/search/label/Database%20Patches

Critical Patch Updates, Security Alerts and Third Party Bulletin


http://www.oracle.com/technetwork/topics/security/alerts-086861.html

Oracle: Quick Guide to Opatch - (Oracle Database Patching utility)


http://www.dbalifeline.com/content/oracle-quick-guide-opatch-oracle-database-patching-utility

How to Design an Effective Patch Management Process


http://www.computing.net/howtos/show/how-to-design-an-effective-patch-management-process/
744.html

Oracle Database 11.2.0.2 Patch Set (English)
http://www.dbacomp.com.br/blog/?p=69

Apply Oracle CPUApr2010 – 9352191 for Oracle10.2.0.4 in Aix5L


http://hendrydasan.com/2010/05/21/apply-oracle-cpuapr2010-9352191-for-oracle10-2-0-4-in-aix5l/ 

Upgrade
=======

What is rolling upgrade?
It is a new ASM feature from Database 11g. ASM instances in Oracle Database 11g (from release 11.1)
can be upgraded or patched using the rolling upgrade feature. This enables us to patch or upgrade ASM
nodes in a clustered environment without affecting database availability. During a rolling upgrade we can
maintain a functional cluster while one or more of the nodes in the cluster are running different software
versions. Rolling upgrade can be used only for Oracle Database 11g releases (from 11.1).

Steps to Upgrade in Oracle ?

Manual upgrade which involves the following steps:


1.Backup the database.
2.In UNIX/Linux environments, set the $ORACLE_HOME and $PATH variables to point to the new 11g
Oracle home.
3.Analyze the existing instance using the "$ORACLE_HOME/rdbms/admin/utlu111i.sql" script.
4.Start the original database using the STARTUP UPGRADE command and proceed with the upgrade
by running the "$ORACLE_HOME/rdbms/admin/catupgrd.sql" script.
5.Recompile invalid objects.
6.Restart the database.
7.Run the "$ORACLE_HOME/rdbms/admin/utlu111s.sql" script and check the result of the upgrade.
8.Troubleshoot any issues or abort the upgrade.

What happens when you give "STARTUP UPGRADE"?

$sqlplus "/as sysdba"


SQL> STARTUP UPGRADE

Note:
----
The UPGRADE keyword enables you to open a database based on an earlier Oracle Database release.
It also restricts logons to AS SYSDBA sessions, disables system triggers, and performs additional
operations that prepare the environment for the upgrade.

You might be required to use the PFILE option to specify the location of your initialization parameter
file.
Once the database is started in upgrade mode, only queries on fixed views execute without errors
until after the catupgrd.sql script is run. Before running catupgrd.sql, queries on any other view or
the use of PL/SQL returns an error.

What is the difference between startup Upgrade and Migrate ?

startup migrate:

---------------
Used to upgrade a database till 9i.

Startup Upgrade
---------------
From 10G  we are using startup upgrade to upgrade database.

What happens internally when you use startup upgrade/migrate?

It will automatically adjust a few database (init) parameters (irrespective of what you have defined) to
certain values in order to run the upgrade scripts smoothly.
In other words, it will issue a few ALTER statements to set certain parameters which are required to
complete the upgrade scripts without any issues.

REFERENCE:
---------
Oracle® Database Upgrade Guide 11g Release 2 (11.2)
http://docs.oracle.com/cd/E11882_01/server.112/e23633/upgrade.htm

Common issues faced in Upgrade? 

Error is related to timezone file


Started database in upgrade mode and fired catupgrd.sql :

SQL> startup upgrade


ORACLE instance started.
Total System Global Area 6413680640 bytes
Fixed Size                  2160112 bytes
Variable Size            1946159632 bytes
Database Buffers         4429185024 bytes
Redo Buffers               36175872 bytes
Database mounted.
Database opened.
SQL> @catupgrd.sql
DOC>##########################################################
#############
DOC>##########################################################
#############
DOC>
DOC>   The first time this script is run, there should be no error messages
DOC>   generated; all normal upgrade error messages are suppressed.
DOC>
DOC>   If this script is being re-run after correcting some problem, then
DOC>   expect the following error which is not automatically suppressed:
DOC>
DOC>   ORA-00001: unique constraint () violated
DOC>#
   FROM registry$database
        *
ERROR at line 2:
ORA-00942: table or view does not exist
This error is related to the timezone file, which must be version 4 for Oracle 11g. If the timezone file is
not version 4, then a patch needs to be applied.

Query to check the timezone file is:
SQL> select * from v$timezone_file;

FILENAME       VERSION
------------ ----------
timezlrg.dat         4

So I had the correct version. I remember applying the patch before the upgrade. I got lucky because a
patch existed for version 10.2.0.3.
If there is no patch for your Oracle version, then the patch can be downloaded for a similar version and
applied manually.
Instructions are below:
1. Download the identified patch.
2. Unzip the patch, and locate the 2 files timezone.dat and timezlrg.dat in the
“files/oracore/zoneinfo” directory of the uncompressed patch (or from the relevant .jar file of a  
patchset). If there is also a readme.txt in this location then make a note of this as well.
3. Backup your existing files in $ORACLE_HOME/oracore/zoneinfo – THIS CAN BE VITAL, DO NOT
SKIP.
Note:
Before going on with step 4, make sure the current files are not in use.
On Windows the files will simply refuse to be removed when they are in use.
On Unix replacing the files whilst they are in use can cause the files to become corrupt. Use the fuser
command before replacing the files to make sure they are not in use.
4. Copy the 2 .dat files and possibly the readme.txt file that were found in step 2 into the
$ORACLE_HOME/oracore/zoneinfo directory.
5. Restart the database (in case of installation on a database), or restart the client applications (in
case of client install). Note that the database did not need to be down before the time zone files were
applied, but it does need to be restarted afterwards.

DATA PUMP UTILITIES

1. What is use of CONSISTENT option in exp?
Cross-table consistency. Implements SET TRANSACTION READ ONLY. Default value N.

2. What is use of DIRECT=Y option in exp?


Setting direct=yes, to extract data by reading the data directly, bypasses the SGA, bypassing the
SQL command-processing layer (evaluating buffer), so it should be faster. Default value N.

3. What is use of COMPRESS option in exp?


Imports into one extent. Specifies how export will manage the initial extent for the table data. This
parameter is helpful during database re-organization. Export the objects (especially tables and
indexes) with COMPRESS=Y. If a table was spanning 20 extents of 1M each (which is not desirable
for performance), and you export the table with COMPRESS=Y, the generated DDL will
have an initial extent of 20M. Later on, when importing, the extents will be coalesced. Sometimes it is
found desirable to export with COMPRESS=N, in situations where you do not have contiguous space
on disk (tablespace) and do not want imports to fail.

4. How to improve exp performance?


1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free the resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to NFS (Network File Share). Exporting to disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=yes (direct mode export).

5. How to improve imp performance?
1. Place the file to be imported in separate disk from datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It's advisable to drop indexes before importing to speed up the import process or set INDEXES=N
and building indexes later on after the import. Indexes can easily be recreated after the data was
successfully imported.
8. Use STATISTICS=NONE
9. Disable the INSERT triggers, as they fire during import.
10. Set Parameter COMMIT_WRITE=NOWAIT(in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle
11g) during import.

6. What is use of INDEXFILE option in imp?


Will write DDLs of the objects in the dumpfile into the specified file.

7. What is use of IGNORE option in imp?


Will ignore the errors during import and will continue the import.

8. What are the differences between expdp and exp (Data Pump or normal exp/imp)?
Data Pump is server centric (files will be at server).
Data Pump has APIs, from procedures we can run Data Pump jobs.
In Data Pump, we can stop and restart the jobs.
Data Pump will do parallel execution.
Tapes & pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user, if user doesn’t exist.

9. Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?
Data Pump is block mode, exp is byte mode. 
Data Pump will do parallel execution.
Data Pump uses direct path API.

10. How to improve expdp performance?


Using parallel option which increases worker threads. This should be set based on the number of
cpus.

11. How to improve impdp performance?


Using parallel option which increases worker threads. This should be set based on the number of
cpus.

12. In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how it
will know from where to resume?
Whenever Data Pump export or import is running, Oracle will create a table with the JOB_NAME and
will be deleted once the job is done. From this table, Oracle will find out how much job has
completed and from where to continue etc.
Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or
TABLE.
Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or
TABLE.

13. What is the order of importing objects in impdp?


 Tablespaces
 Users
 Roles
 Database links
 Sequences
 Directories
 Synonyms
 Types
 Tables/Partitions
 Views
 Comments
 Packages/Procedures/Functions
 Materialized views

14. How to import only metadata?


CONTENT= METADATA_ONLY

15. How to import into different user/tablespace/datafile/table?


REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA

16. How to export/import without using external directory?

17. Using Data Pump, how to export in higher version (11g) and import into lower version (10g), can
we import to 9i?

18. Using normal exp/imp, how to export in higher version (11g) and import into lower version
(10g/9i)?

19. How to do transport tablespaces (and across platforms) using exp/imp or expdp/impdp?

Flashback Technology
1. Will a normal user be able to use flashback transaction query?

No

2. What is flashback query and flash back recovery?


Flashback query is a feature introduced in Oracle 9i. Flashback query enables us to query our data as it
existed in a previous state; in other words, we can query our data from a point in time before we or
any other users made permanent changes to it. Flashback recovery can bring the complete database
back to a previous state based on an SCN number, a timestamp, or a restore point.

3. How to flush recycle bin?


We use “Purge” command to Flush Recycle bin. It will automatically remove old data from recycle bin
if tablespace needs some more space. If you want to purge just one single table then you type
"Purge table <tableName>"

4. What is flashback database?


Oracle 10g's brilliant alternative to database point-in-time recovery is the Flashback Database
feature. With this feature in place you can do almost everything that you can with point-in-time
recovery, without actually having to go through all the disruption and hassle that a PITR necessarily
entails. Unlike other flashback features, which depend on undo data for reconstructing your lost data,
Flashback Database uses flashback logs to access past versions of changed blocks; allied with
some more information mined from the archive logs, you can easily revert your database to a point
in time in the past. Whilst the end product is very much like a point-in-time recovery, Flashback
Database is much faster and less disruptive, because you do not restore from backups and flashback
logs are maintained on the disk itself. Setting it up at the basic level is pretty simple. It all starts
with being in ARCHIVELOG mode.

5. Can we go for flashback drop table in 9i?


No

6. Can any user present in dictionary managed tablespace use recycle bin?
No

7. What is flashback data archive?

A Flashback Data Archive provides the ability to track and store all transactional changes to a table
over its lifetime. It is no longer necessary to build this intelligence into your application. A Flashback
Data Archive is useful for compliance with record stage policies and audit reports. A Flashback Data
Archive is configured with a retention time; data archived in the Flashback Data Archive is retained
for that retention time.
By default, flashback archiving is off for any table. You can enable flashback archiving for a table if
you have the FLASHBACK ARCHIVE object privilege on the Flashback Data Archive that you want to
use for that table. After flashback archiving is enabled for a table, you can disable it only if you either
have the FLASHBACK ARCHIVE ADMINISTER system privilege or you are logged on as SYSDBA.
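A minimal sketch of creating and using a Flashback Data Archive (the archive name, tablespace, quota, retention and table are assumptions):

SQL> CREATE FLASHBACK ARCHIVE fda1 TABLESPACE fda_ts QUOTA 10G RETENTION 1 YEAR;
SQL> ALTER TABLE scott.emp FLASHBACK ARCHIVE fda1;     -- enable archiving for a table
SQL> ALTER TABLE scott.emp NO FLASHBACK ARCHIVE;       -- disable it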
8. Limitations of data archive?

There are a number of restrictions for flashback archives:
 The tablespaces used for a flashback archive must use local extent management and automatic segment space management.
 The database must use automatic undo management.

9. Database views useful to view information about flashback data archive?

DBA_FLASHBACK_ARCHIVE - displays information about flashback data archives
DBA_FLASHBACK_ARCHIVE_TS - displays the tablespaces of flashback data archives
DBA_FLASHBACK_ARCHIVE_TABLES - displays information about tables that are enabled for flashback archiving

10. Advantages of data archive?
 The primary advantages of using Flashback Data Archive for historical data tracking include:
1.Application transparency
2. Seamless access
3. Security
4. Minimal performance overhead
5. Storage optimization
6. Centralized management

11. What is the use of the DBMS_FLASHBACK package?

The DBMS_FLASHBACK package provides the same functionality as Oracle Flashback Query, but Oracle
Flashback Query is sometimes more convenient. The DBMS_FLASHBACK package acts as a time machine:
you can turn back the clock, carry out normal queries as if you were at that time in the past, and then
return to the present. Because you can use the DBMS_FLASHBACK package to perform queries on past
data without special clauses such as AS OF or VERSIONS BETWEEN, you can reuse existing PL/SQL code
to query the database at times in the past.
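A minimal sketch of using DBMS_FLASHBACK as a time machine (the table name and interval are assumptions):

SQL> EXEC DBMS_FLASHBACK.ENABLE_AT_TIME(SYSTIMESTAMP - INTERVAL '15' MINUTE);
SQL> SELECT * FROM scott.emp;          -- sees data as of 15 minutes ago
SQL> EXEC DBMS_FLASHBACK.DISABLE;      -- return to the present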

DATA GUARD

What are the types of Oracle Data Guard?

Oracle Data Guard standby databases are classified into two types based on the way they are created
and the method used to apply redo. They are as follows.

1. Physical standby (Redo Apply technology)


2. Logical standby (SQL Apply Technology)
What are the advantages in using Oracle Data Guard?

Following are the different benefits in using Oracle Data Guard feature in your environment.

1. High Availability.
2. Data Protection.
3. Offloading backup operations to the standby database.
4. Automatic Gap detection and Resolution in standby database.
5. Automatic Role Transition using Data Guard Broker.
What are the different services available in Oracle Data Guard?

Following are the different Services available in Oracle Data Guard of Oracle database.

1. Redo Transport Services.


2. Log Apply Services.
3. Role Transitions.
What are the different Protection modes available in Oracle Data Guard?
Following are the different protection modes available in Data Guard of Oracle database you can use
any one based on your application requirement.

1. Maximum Protection
2. Maximum Availability
3. Maximum Performance
How to check what protection mode of primary database in your Oracle Data Guard?

By using following query you can check protection mode of primary database in your Oracle Data
Guard setup.

SELECT PROTECTION_MODE FROM V$DATABASE;

For Example:

SQL> select protection_mode from v$database;

PROTECTION_MODE
——————————–

MAXIMUM PERFORMANCE

How to change protection mode in Oracle Data Guard setup?

By using the following statement you can change the protection mode of your primary database, after
setting the required attributes in the corresponding LOG_ARCHIVE_DEST_n parameter of the primary
database for the corresponding standby database.

ALTER DATABASE SET STANDBY DATABASE TO MAXIMUM [PROTECTION|PERFORMANCE|


AVAILABILITY];

Example:

alter database set standby database to MAXIMUM PROTECTION;

What are the advantages of using Physical standby database in Oracle Data Guard?

Advantages of using Physical standby database in Oracle Data Guard are as follows.

 High Availability.
 Load balancing (Backup and Reporting).
 Data Protection.
 Disaster Recovery.
What is physical standby database in Oracle Data Guard?

Oracle standby databases are divided into physical standby databases and logical standby databases
based on how the standby database is created and the redo apply method. A physical standby database
is created as an exact copy, i.e. a block-by-block copy, of the primary database. In a physical standby
database, transactions that happen on the primary database are synchronized to the standby database
using the Redo Apply method, which continuously applies on the standby database the redo data received
from the primary database. A physical standby database can offload backup activity and reporting activity
from the primary database. A physical standby database can be opened for read-only transactions, but
redo apply won't happen during that time. However, from 11g onwards, using the Active Data Guard
option (an extra-cost option) you can simultaneously open the physical standby database for read-only
access and apply the redo received from the primary database.

What is Logical standby database in Oracle Data Guard?

Oracle standby databases are divided into physical standby databases and logical standby databases
based on how the standby database is created and the redo apply method. A logical standby database is
created in a way similar to a physical standby database, and later you can alter the structure of the
logical standby database. A logical standby database uses the SQL Apply method to stay synchronized
with the primary database. SQL Apply converts the received redo into SQL statements and continuously
applies those SQL statements on the logical standby database to keep it consistent with the primary
database. The main advantage of a logical standby database compared to a physical standby database is
that you can use the logical standby database for reporting purposes during SQL apply, i.e. the logical
standby database must be open during SQL apply. Even though a logical standby database is opened in
read/write mode, the tables that are synchronized with the primary database are available only for
read-only operations such as reporting and select queries, though you can add indexes and create
materialized views on those tables. Although a logical standby database has this advantage over a
physical standby database, it has some restrictions on data types, types of DDL, types of DML and
types of tables.

What are the advantages of Logical standby database in Oracle Data Guard?

 Better usage of resource


 Data Protection
 High Availability
 Disaster Recovery
What is the usage of DB_FILE_NAME_CONVERT parameter in Oracle Data Guard setup?

The DB_FILE_NAME_CONVERT parameter is used in an Oracle Data Guard setup, specifically on standby
databases. DB_FILE_NAME_CONVERT converts the location of the primary database data files to the
corresponding location on the standby database. This parameter is needed when the standby database
uses a different directory structure than the primary database for its data files.
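For example, a sketch with assumed directory paths (the primary path first, then the standby path it maps to):

DB_FILE_NAME_CONVERT='/u01/oradata/prod/','/u01/oradata/stdby/'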

What is the usage of LOG_FILE_NAME_CONVERT parameter in Oracle Data Guard setup?

The LOG_FILE_NAME_CONVERT parameter is used in an Oracle Data Guard setup, specifically on standby
databases. LOG_FILE_NAME_CONVERT converts the location of the primary database redo log files to the
corresponding location on the standby database. This parameter is needed when the standby database
uses a different directory structure than the primary database for its redo log files.
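For example, a sketch with assumed directory paths (the primary path first, then the standby path it maps to):

LOG_FILE_NAME_CONVERT='/u01/oradata/prod/','/u01/oradata/stdby/'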

 Step for Physical  Standby

These are the steps to follow:

1. Enable forced logging


2. Create a password file
3. Configure a standby redo log
4. Enable archiving
5. Set up the primary database initialization parameters
6. Configure the listener and tnsnames to support the database on both nodes
col name format a20
col thread# format 999
col sequence# format 999
col first_change# format 999999
col next_change# format 999999

SELECT thread#, sequence# AS "SEQ#", name, first_change# AS "FIRSTSCN",
       next_change# AS "NEXTSCN", archived, deleted, completion_time AS "TIME"
FROM   v$archived_log;

The v$log_history view can also be queried for similar information.

Tell me about parameter which is used for standby database?

Log_Archive_Dest_n

Log_Archive_Dest_State_n

Log_Archive_Config

Log_File_Name_Convert

Standby_File_Management

DB_File_Name_Convert

DB_Unique_Name

Control_Files

FAL_Client

FAL_Server

The LOG_ARCHIVE_CONFIG parameter enables or disables the sending of redo streams to the
standby sites. The DB_UNIQUE_NAME of the primary database is dg1 and the DB_UNIQUE_NAME of
the standby database is dg2. The primary database is configured to ship redo log stream to the
standby database. In this example, the standby database service is dg2.

Next, STANDBY_FILE_MANAGEMENT is set to AUTO so that when Oracle files are added or dropped
from the primary database, these changes are made to the standby databases automatically. The
STANDBY_FILE_MANAGEMENT is only applicable to the physical standby databases.

Setting the STANDBY_FILE_MANAGEMENT parameter to AUTO is recommended when using Oracle


Managed Files (OMF) on the primary database. Next, the primary database must be running in
ARCHIVELOG mode.

Oracle Data Guard Interview Questions and Answers

What is Dataguard?

Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor
one or more standby databases to enable production Oracle databases to survive disasters and data
corruptions. Data Guard maintains these standby databases as copies of the production database.
Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high
level of data protection and data availability.

What is DG Broker?
DG Broker is the management and monitoring tool for Data Guard.
The Oracle Data Guard broker is a distributed management framework that automates and centralizes the
creation, maintenance and monitoring of a Data Guard configuration.
All management operations can be performed either through OEM, which uses the broker, or through the
broker-specific command-line interface DGMGRL.
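A minimal sketch of checking a broker configuration from DGMGRL (the connection details and database name are assumptions):

DGMGRL> CONNECT sys/password@primary
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE 'standby_db';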

What is the difference between Dataguard and Standby?

Dataguard :
Dataguard is mechanism/tool to maintain standby database.
The dataguard is set up between primary and standby instance .
Data Guard is only available on Enterprise Edition.

Standby Database :
Physical standby database provides a physically identical copy of the primary database, with on disk
database structures that are identical to the primary database on a block-for-block basis.
Standby capability is available on Standard Edition.

What are the differences between Physical/Logical standby databases? How would you decide which
one is best suited for your environment?
Physical standby DB:

As the name suggests, it is a physically identical copy (datafiles, schema and other physical structures)
of the primary database.
It is synchronized with the primary database by applying redo to the standby DB (Redo Apply).
Logical Standby DB:
As the name suggests, the logical information is the same as the production database, but the physical
structure can be different.
It is synchronized with the primary database through SQL Apply: redo received from the primary database
is transformed into SQL statements, which are then executed on the standby DB.
We can open a physical standby DB in read-only mode and make it available to application users
(only SELECT is allowed during this period). We cannot apply redo logs received from the primary
database at this time.
We do not see such issues with a logical standby database. We can open the database in normal mode
and make it available to the users, and at the same time apply archived logs received from the
primary database.

For a large OLTP transaction database it is better to choose a logical standby database.

Explain Active Dataguard?

11g Active Data Guard


Oracle Active Data Guard enables read-only access to a physical standby database for queries,
sorting, reporting, web-based access, etc., while continuously applying changes received from the
production database.
Oracle Active Data Guard also enables the use of fast incremental backups when offloading backups
to a standby database, and can provide additional benefits of high availability and disaster protection
against planned or unplanned outages at the production site.

What is a Snapshot Standby Database?


11g Snapshot Standby Database
Oracle 11g introduces the Snapshot Standby database, which essentially is an updatable standby
database created from a physical standby database.
We can convert a physical standby database to a snapshot standby database, do some kind of
testing on a database that is a read-write copy of the current primary or production database, and
then finally revert it to its earlier state as a physical standby database.
While the snapshot standby database is open in read-write mode, redo is still received from the
primary database, but it is not applied.
After converting it back to a physical standby database, it is resynchronized with the primary by
applying the accumulated redo data which was earlier shipped from the primary database but not
applied.
Using a snapshot standby, we are able to do real-time application testing using near real-time
production data. Very often we are required to create production clones for the purpose of testing, but
using snapshot standby databases we can meet the same requirement while sparing the
effort, time, resources and disk space.

Snapshot Standby Database (UPDATEABLE SNAPSHOT FOR TESTING)


A snapshot standby database is a fully updatable standby database that is created by converting a
physical standby database into a snapshot standby database.

Like a physical or logical standby database, a snapshot standby database receives and archives redo
data from a primary database. Unlike a physical or logical standby database, a snapshot standby
database does not apply the redo data that it receives. The redo data received by a snapshot
standby database is not applied until the snapshot standby is converted back into a physical standby
database, after first discarding any local updates made to the snapshot standby database.

REFERENCE:
http://docs.oracle.com/cd/B28359_01/server.111/b28294/title.htm
What is the Default mode will the Standby will be, either SYNC or ASYNC?
ASYNC

Dataguard Architecture?
Data Guard Configurations:
A Data Guard configuration consists of one production database and one or more standby databases.
The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed
geographically. There are no restrictions on where the databases are located, provided they can
communicate with each other.

Dataguard Architecture
The Oracle 9i Data Guard architecture incorporates the following items:

• Primary Database – A production database that is used to create standby databases. The archive
logs from the primary database are transferred and applied to standby databases. Each standby can
only be associated with a single primary database, but a single primary database can be associated
with multiple standby databases.
• Standby Database – A replica of the primary database.
• Log Transport Services – Control the automatic transfer of archive redo log files from the primary
database to one or more standby destinations.
• Network Configuration – The primary database is connected to one or more standby databases
using Oracle Net.
• Log Apply Services – Apply the archived redo logs to the standby database. The Managed Recovery
Process (MRP) actually does the work of maintaining and applying the archived redo logs.
• Role Management Services – Control the changing of database roles from primary to standby. The
services include switchover, switchback and failover.
• Data Guard Broker – Controls the creation and monitoring of Data Guard. It comes with a GUI and
command line interface.

Primary Database:
A Data Guard configuration contains one production database, also referred to as the primary
database, that functions in the primary role. This is the database that is accessed by most of your
applications.

Standby Database:
A standby database is a transactionally consistent copy of the primary database. Using a backup
copy of the primary database, you can create up to nine standby databases and incorporate them in
a Data Guard configuration. Once created, Data Guard automatically maintains each standby
database by transmitting redo data from the primary database and then applying the redo to the

standby database.
The types of standby databases are as follows:

Physical standby database:


Provides a physically identical copy of the primary database, with on disk database structures that
are identical to the primary database on a block-for-block basis. The database schema, including
indexes, are the same. A physical standby database is kept synchronized with the primary database,
through Redo Apply, which recovers the redo data received from the primary database and applies
the redo to the physical standby database.

Logical standby database:


Contains the same logical information as the production database, although the physical organization
and structure of the data can be different. The logical standby database is kept synchronized with the
primary database through SQL Apply, which transforms the data in the redo received from the
primary database into SQL statements and then executes the SQL statements on the standby
database.

What are the services required on the primary and standby database ?
The services required on the primary database are:
• Log Writer Process (LGWR) – Collects redo information and updates the online redo logs. It can
also create local archived redo logs and transmit online redo to standby databases.
• Archiver Process (ARCn) – One or more archiver processes make copies of online redo logs either
locally or remotely for standby databases.
• Fetch Archive Log (FAL) Server – Services requests for archive redo logs from FAL clients running
on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each
FAL request.
The services required on the standby database are:
• Fetch Archive Log (FAL) Client – Pulls archived redo log files from the primary site. Initiates
transfer of archived redo logs when it detects a gap sequence.
• Remote File Server (RFS) – Receives archived and/or standby redo logs from the primary database.
• Archiver (ARCn) Processes – Archives the standby redo logs applied by the managed recovery
process (MRP).
• Managed Recovery Process (MRP) – Applies archive redo log information to the standby database.

What is RTS (Redo Transport Services) in Dataguard?


It controls the automated transfer of redo data from the production database to one or more archival
destinations. The redo transport services perform the following tasks:
a) Transmit redo data from the primary system to the standby systems in the configuration.
b) Manage the process of resolving any gaps in the archived redo log files due to a network failure.
c) Automatically detect missing or corrupted archived redo log files on a standby system and
automatically retrieve replacement archived redo log files from the
primary database or another standby database.

What are the Protection Modes in Dataguard?

Data Guard Protection Modes


This section describes the Data Guard protection modes.
In these descriptions, a synchronized standby database is meant to be one that meets the minimum

requirements of the configured data protection mode and that does not have a redo gap. Redo gaps
are discussed in Section 6.3.3.

Maximum Availability
This protection mode provides the highest level of data protection that is possible without
compromising the availability of a primary database. Transactions do not commit until all redo data
needed to recover those transactions has been written to the online redo log and to at least one
synchronized standby database. If the primary database cannot write its redo stream to at least one
synchronized standby database, it operates as if it were in maximum performance mode to preserve
primary database availability until it is again able to write its redo stream to a synchronized standby
database.
This mode ensures that no data loss will occur if the primary database fails, but only if a second fault
does not prevent a complete set of redo data from being sent from the primary database to at least
one standby database.

Maximum Performance
This protection mode provides the highest level of data protection that is possible without affecting
the performance of a primary database. This is accomplished by allowing transactions to commit as
soon as all redo data generated by those transactions has been written to the online log. Redo data
is also written to one or more standby databases, but this is done asynchronously with respect to
transaction commitment, so primary database performance is unaffected by delays in writing redo
data to the standby database(s).
This protection mode offers slightly less data protection than maximum availability mode and has
minimal impact on primary database performance.
This is the default protection mode.

Maximum Protection
This protection mode ensures that zero data loss occurs if a primary database fails. To provide this
level of protection, the redo data needed to recover a transaction must be written to both the online
redo log and to at least one synchronized standby database before the transaction commits. To
ensure that data loss cannot occur, the primary database will shut down, rather than continue
processing transactions, if it cannot write its redo stream to at least one synchronized standby
database.
Because this data protection mode prioritizes data protection over primary database availability,
Oracle recommends that a minimum of two standby databases be used to protect a primary
database that runs in maximum protection mode to prevent a single standby database failure from
causing the primary database to shut down.

How to delay the application of logs to a physical standby?

A standby database automatically applies redo logs when they arrive from the primary database. But
in some cases, we want to create a time lag between the archiving of a redo log at the primary site,
and the application of the log at the standby site.

Modify the LOG_ARCHIVE_DEST_n initialization parameter on the primary database to set a delay for
the standby database.

Example: For 60min Delay:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc DELAY=60';
The DELAY attribute is expressed in minutes.
The archived redo logs are still automatically copied from the primary site to the standby site, but
the logs are not immediately applied to the standby database. The logs are applied when the
specified time interval expires.

Steps to create Physical Standby database?

1.Take a full hot backup of Primary database


2.Create standby control file
3.Transfer full backup, init.ora, standby control file to standby node.
4.Modify init.ora file on standby node.
5.Restore database
6.Recover Standby database
(Alternatively, RMAN DUPLICATE DATABASE FOR STANDBY DO RECOVERY can be also used)
7.Setup FAL_CLIENT and FAL_SERVER parameters on both sides
8.Put Standby database in Managed Recover mode
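A minimal sketch of the RMAN DUPLICATE alternative mentioned in step 6, run from the standby host (the connect strings are assumptions):

RMAN> CONNECT TARGET sys/password@primary
RMAN> CONNECT AUXILIARY sys/password@standby
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK;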
What are the DATAGUARD PARAMETERS in Oracle?
Set Primary Database Initialization Parameters
———————————————-
On the primary database, you define initialization parameters that control redo transport services
while the database is in the primary role. There are additional parameters you need to add that
control the receipt of the redo data and log apply services when the primary database is transitioned
to the standby role.

DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/chicago/control1.ctl', '/arch2/chicago/control2.ctl'
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/chicago/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
 'SERVICE=boston LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30

Primary Database: Standby Role Initialization Parameters

FAL_SERVER=boston
FAL_CLIENT=chicago
DB_FILE_NAME_CONVERT='boston','chicago'
LOG_FILE_NAME_CONVERT='/arch1/boston/','/arch1/chicago/','/arch2/boston/','/arch2/chicago/'
STANDBY_FILE_MANAGEMENT=AUTO

Prepare an Initialization Parameter File for the Standby Database
—————————————————————–
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE) used by the
primary database; a text initialization parameter file can be copied to the standby location and
modified. For example:
CREATE PFILE='/tmp/initboston.ora' FROM SPFILE;
Modifying Initialization Parameters for a Physical Standby Database.

DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT='/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
FAL_CLIENT=boston


Automatic Storage Management

1. What is ASM?
ASM (Automatic Storage Management) is a file system and volume manager built by Oracle on raw disks
for storing Oracle database files, including datafiles, redo logs, backups, control files and spfiles.
ASM allows administrators to add and remove disks while the database is online and available to users,
and the DBA can manage database storage with redundancy. Data is automatically striped across all
disks in a diskgroup and is optionally mirrored.
2. What are the disadvantages of having raw devices?
The storage space of a single raw device is completely dedicated to only one datafile, one redo log
or one control file. The tar command cannot be used for physical file backups; instead we should use the dd command.
3. What is the advantage of having disk shadowing/mirroring? A shadow set of disks acts like a backup
in case of disk failure. In most volume managers, if a disk failure occurs the system automatically
switches over to a working disk. Performance is also improved, because most operating systems that
support volume shadowing can direct file I/O requests to the shadow set of files instead of the main set of files.
This reduces the I/O load on the main set of disks.
 4. Is it possible to use raw devices as data files, and what are the advantages over file system
files? Yes. The advantage over file system files is that I/O is improved and database performance
will increase.

5. What are ASM related init.ora parameters?


ASM_DISKGROUPS
ASM_DISKSTRING
ASM_POWER_LIMIT
ASM_PREFERRED_READ_FAILURE_GROUPS
DB_CACHE_SIZE
DIAGNOSTIC_DEST
INSTANCE_TYPE
LARGE_POOL_SIZE
PROCESSES
REMOTE_LOGIN_PASSWORDFILE
SHARED_POOL_SIZE

6. How many database instances can be handled by one ASM instance?


Several databases can share a single ASM instance. So, although one can create multiple ASM
instances on a single system, normal configurations should have one and only one ASM instance
per system.
For clustered systems, create one ASM instance per node (called +ASM1, +ASM2, etc).
7. How many disk groups should one have?

One should have only one disk group for all database files - and, optionally, a second for recovery
files. Data with different storage characteristics should be stored in different disk groups. Each disk
group can have different redundancy (mirroring) settings (high, normal and external), different
fail-groups, etc. However, it is generally not necessary to create many disk groups with the same
storage characteristics (i.e. +DATA1, +DATA2, etc. all on the same type of disks).

ASM Interview questions:

1) What are the background processes in ASM

Ans:

RBAL - Rebalance master: It opens all the device files as part of disk discovery and coordinates the ARBx
processes for rebalance activity.

ARBx - Actual Rebalancer: They perform the actual rebalancing activities. 


The number of ARBx processes depends on the ASM_POWER_LIMIT init parameter.

ASMB - ASM Bridge: This process is used to provide information to and from the Cluster
Synchronization Service (CSS) used by ASM to manage the disk resources. 
It is also used to update statistics and provide a heartbeat mechanism.

2) What is the use of ASM (or) Why ASM preferred over filesystem?

ANS: ASM provides striping and mirroring.

3) What are the init parameters related to ASM?

ANS:
INSTANCE_TYPE = ASM
ASM_POWER_LIMIT = 11
ASM_DISKSTRING = '/dev/rdsk/*s2', '/dev/rdsk/c1*'
ASM_DISKGROUPS = DG_DATA, DG_FRA

4) What is rebalancing (or) what is the use of ASM_POWER_LIMIT?

ANS:

ASM_POWER_LIMIT is dynamic parameter, which will be useful for rebalancing the data across disks.
Value can be 1(lowest) to 11 (highest).

5) What are different types of redundancies in ASM & explain?

ANS:

External redundancy,
Normal redundancy,
High redundancy.
6) How to copy file to/from ASM from/to filesystem?

ANS:

By using ASMCMD cp command
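For example (the ASM file name and local path are assumptions):

ASMCMD> cp +DG1_DCM_DATA/orcl/datafile/users.259.934567890 /tmp/users01.dbf
ASMCMD> cp /tmp/users01.dbf +DG1_DCM_DATA/orcl/datafile/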

7) How to find out the databases, which are using the ASM instance?

ANS:

ASMCMD> lsct
DB_Name   Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
amxdcmp1  CONNECTED        11.2.0.2.0          11.2.0.2.0  amxdcmp1       DG1_DCM_DATA
amxddip1  CONNECTED        11.2.0.2.0          11.2.0.2.0  amxddip1       DG1_DDI_DATA
ASMCMD>

(or)

SQL> select DB_NAME from V$ASM_CLIENT;

8) What are different types of stripings in ASM & their differences?

ANS:

Fine-grained striping
Coarse-grained striping

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB
Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576   6835200  1311391                0         1311391    
0             N  DG1_DCM_DATA/
MOUNTED  EXTERN  N         512   4096  1048576    486400   154487                0          154487        
0             N  DG1_DDI_DATA/
ASMCMD>

SQL> select NAME,ALLOCATION_UNIT_SIZE/1024/1024 "MB" from v$asm_diskgroup;

NAME                                   MB
------------------------------ ----------
DG1_DCM_DATA                            1
DG1_DDI_DATA                            1

9) What is allocation unit and what is default value of au_size and how to change?

ANS:

Every ASM disk is divided into allocation units (AU). 


An AU is the fundamental unit of allocation within a disk group. 
A file extent consists of one or more AU. An ASM file consists of one or more file extents.
CREATE DISKGROUP disk_group_2 EXTERNAL REDUNDANCY DISK '/dev/sde1' ATTRIBUTE 'au_size' =
'32M';

10) What process does the rebalancing?

ANS:

RBAL, ARBn

11) How to add/remove disk to/from diskgroup?

ANS:

add disk:

ALTER DISKGROUP DG1_ZABBIX_DATA ADD DISK 


'/zabbix_u03/oradata/zbxprd1/ZBX_DATA_DISK009' name ZBX_DATA_DISK009,
'/zabbix_u04/oradata/zbxprd1/ZBX_DATA_DISK010' name ZBX_DATA_DISK010,
'/zabbix_u05/oradata/zbxprd1/ZBX_DATA_DISK011' name ZBX_DATA_DISK011;

remove disk:

alter diskgroup DG1_CIE_DATA drop disk


DG_CIE_DATA_DISK001, 
DG_CIE_DATA_DISK002, 
DG_CIE_DATA_DISK003, 
DG_CIE_DATA_DISK004; 
Performance Tuning
1. What is performance Tuning?
Performance tuning is the process of improving system and database response time and throughput
while making optimal use of the available resources (CPU, memory and I/O).
2. Why and when should one tune?

One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly.
The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.
One should do performance tuning for the following reasons:
1. The speed of computing might be wasting valuable human time (users waiting for response);
2. Enable your system to keep up with the speed at which business is conducted; and
3. Optimize hardware usage to save money (companies are spending millions on hardware).

3. What database aspects should be monitored?

One should implement a monitoring system to constantly monitor the following aspects of a database.
Writing custom scripts, implementing Oracle's Enterprise Manager, or buying a third-party monitoring
product can achieve this. If an alarm is triggered, the system should automatically notify the DBA
(e-mail, page, etc.) to take appropriate action.

Infrastructure availability:
• Is the database up and responding to requests
• Are the listeners up and responding to requests
• Are the Oracle Names and LDAP Servers up and responding to requests
• Are the Web Listeners up and responding to requests

Things that can cause service outages:
• Is the archive log destination filling up?
• Objects getting close to their max extents
• Tablespaces running low on free space / objects that would not be able to extend
• User and process limits reached
4. What tuning indicators can a DBA use?

The following high-level tuning indicators can be used to establish if a database is performing
optimally or not (see the query sketch after this list):
1. Buffer Cache Hit Ratio
   Formula: Hit Ratio = (Logical Reads - Physical Reads) / Logical Reads
   Action: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) to increase the hit ratio
2. Library Cache Hit Ratio
   Action: Increase the SHARED_POOL_SIZE to increase the hit ratio
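A minimal sketch of the buffer cache hit ratio calculation from V$SYSSTAT (the statistic names are the standard ones; the rounding is just for readability):

SELECT ROUND((1 - (phy.value / (db.value + con.value))) * 100, 2) AS "Buffer Cache Hit %"
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';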
5. What are the values that can be specified for OPTIMIZER MODE Parameter?
All_rows,rule,first_rows_1000,first_rows_100,first_rows_10,first_rows_1,choose,first_rows.

6. When you enable tracing for a SQL statement, where do you look for the trace files?
In 11g, in the ADR trace directory, e.g. /disk1/oradata/prod/diag/rdbms/db_name/instance_name/trace.
In 10g, in the user dump destination (udump), e.g. /disk1/oradata/prod/udump.

7. What is the function of Optimizer?


 The goal of the optimizer is to choose the most efficient way to execute a SQL statement.

8. What are the different approaches used by Optimizer in choosing an execution plan?
 Rule-based and Cost-based.
9. What are the values that can be specified for OPTIMIZER MODE Parameter?
All_rows and rule.
10. What is COST-based approach to optimization?
 Considering available access paths and determining the most efficient execution plan based on
statistics in the data dictionary for the tables accessed by the statement and their associated
clusters and indexes.
11. What is RULE-based approach to optimization?
 Choosing an executing plan based on the access paths available and the ranks of these access
paths.
12. Diff between Production, Development & QA database?
 PRODUCTION database is currently being used by end users.
 Development database is used by developers.
 QA database is used by the testing team to validate changes before they reach production.

13. Diff b/w patching and upgrading?

 Patching is for fixing bugs in the database.
 Upgrading is for changing versions and release numbers.

14. How do you check the locks in the database and determine if there is any deadlock issue?
Transaction deadlocks occur when two or more transactions are attempting to access an object
with incompatible lock modes. The following script can be used to identify deadlocks in the
database. The query depends upon objects that are created by the
script ORACLE_HOME/rdbms/admin/dbmslock.sql. Log on as SYS or with SYSDBA authority and
run this script in all databases
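As a simpler starting point, blocked sessions (and who is blocking them) can also be seen directly from V$SESSION; a minimal sketch:

SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;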

15. Which view is used to see dead locks?


 v$lock
 v$session
 v$parameter

16. How do you know when the system was last booted?

 3 ways:
1. uptime command
2. top
3. who -b

20. How do you know the load on the system?

1. w
2. top

21. How do you know when the process was started?

Using ps -ef | grep <process_name>

22. What type of default optimizer does 10g use?


Oracle 10g uses Cost Based Optimizer.

Application Tuning
1. What is Parallel Server?

Oracle Parallel Server is a robust computing environment that harnesses the processing power of
multiple, interconnected computers. Oracle Parallel Server software and a collection of hardware known
as a "cluster" unite the processing power of each component to become a single, robust computing
environment. A cluster generally comprises two or more computers, or "nodes" (i.e. multiple instances
accessing the same database, only in multi-CPU environments).

2. What is mean by Program Global Area (PGA) ?


The PGA (Program or Process Global Area) is a memory area (RAM) that stores data and control
information for a single process. it typically contains a sort area, hash area, session cursor cache,
etc.
3. How would you generate an EXPLAIN PLAN?
It is a pre-execution plan. If you do an EXPLAIN PLAN, Oracle will analyze the statement and fill a
special table with the execution plan for that statement. If you omit the INTO TABLE_NAME
clause, Oracle fills a table named PLAN_TABLE by default.
Usage: explain plan into table_name for <your SQL statement>;
The plan table is the table that Oracle fills when you have it explain an execution plan for a SQL
statement. You must make sure such a plan table exists. Oracle ships with the script
UTLXPLAN.SQL which creates this table, named PLAN_TABLE (which is the default name used by
EXPLAIN PLAN). If you like, however, you can choose any other name for the plan table, as long
as you have been granted insert on it and it has all the required fields.
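A minimal sketch using the default PLAN_TABLE and DBMS_XPLAN (the statement being explained is an assumption):

SQL> EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);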

4. Can you enable trace for a session?


Yes. We can enable SQL trace for a session using “ALTER SESSION SET sql_trace=TRUE”. But it is
not advisable as it is a performance issue. It has to be used only when you want to trace a session
to monitor performance related issues and then stop it.

5. What is the parameter to set the user trace enabling?


Sql_trace = true

6. How to create a trace file?


Set sql_trace=true

7. When 100 users are connected to the database, how do you see which statement is taking a long time
and which statement is doing physical reads?
By using EXPLAIN PLAN or TKPROF.

8. What is Explain Plan? When do we take it?


 Explain plan tells us how the query is being executed, whether it is using index or not if so what
kind of index, how many loops are being used, what is the cost of each line in the SQL query, total
cost involved, estimated rows returned, estimated KB returned, types of joins used and stuff like
that.

9. What is the cache hit ratio, and what impact does it have on the performance of an Oracle database?
The buffer cache hit ratio calculates how often a requested block has been found in the buffer
cache without requiring disk access. This ratio is computed using data selected from the dynamic
performance view V$SYSSTAT. The buffer cache hit ratio can be used to verify the physical I/O as
predicted by V$DB_CACHE_ADVICE.

10. How do you improve the performance of a report program? There are several ways:
1) You can use a sort after declaring buffering in the read statement.
2) Instead of using inner joins you can use "for all entries".
3) Maintain the work area.
4) Maintain the variables.

11. What is the use of tkprof and how do you generate it?

Tkprof is one of the most useful utilities available to DBAs for diagnosing performance issues. It
essentially formats a trace file into a more readable format for performance analysis. The DBA can
then identify and resolve performance issues such as poor SQL, indexing, and execution plans.

12. What would you do to increase the buffer cache hit ratio?
 If the hit ratio is below 90%, and the dictionary cache has been tuned, increase the init.ora
parameter DB_CACHE_SIZE to enlarge the buffer cache.
13. What is hit ratio?
 It is a measure of how well a data cache buffer is handling requests for data: the percentage of
requested data blocks found in a memory component versus those that had to be read from disk. A higher
hit ratio generally means better performance.

 14. What are hints in Oracle? Hints are comments embedded in a SQL statement to pass instructions
to the Oracle optimizer. The optimizer uses these hints to choose an execution plan for
the statement (see the example below).
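For example, a hint forcing a full table scan (the table and alias are assumptions):

SELECT /*+ FULL(e) */ *
FROM   emp e
WHERE  empno = 7839;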

Database Tuning

1. How do you disable monitoring of a table?


 Alter table <tablename> nomonitoring

2. How do you enable monitoring of a table?


 Alter table tablename monitoring

3. How do you find the files that are larger than 500k?
 find . -name "*" -size +500k

4. What is a Parallel Server option in ORACLE?


Oracle Parallel Server is a robust computing environment that harnesses the processing power of
multiple, interconnected computers. Oracle Parallel Server software and a collection of hardware
known as a "cluster", unites the processing power of each component to become a single, robust
computing environment. A cluster generally comprises two or more computers, or "nodes".

5. What does ADDM do?
Oracle 10g offers more automatic mechanisms for rudimentary SQL tuning. The AWR tables allow
Oracle 10g to collect and maintain detailed SQL execution statistics, and this stored data is then
used by the Automatic Database Diagnostic Monitor (ADDM, pronounced 'adam'). ADDM attempts
to supply a root cause analysis along with recommendations on what to do to fix the problem. An
ADDM output might contain information that there is read/write contention, a free list problem, or
the need to use locally managed tablespaces.
ADDM can identify high load SQL statements, which can, in turn, be fed into the SQL Tuning
Advisor. ADDM automatically detects common performance problems, including:
1. Excessive I/O
2. CPU bottlenecks
3. Contention issues
4. High parsing
5. Lock contention
6. Buffer sizing issues
7. RAC tuning issues
Creating a new snapshot with information populated in dba_hist_snapshot:
exec dbms_workload_repository.create_snapshot();
The addmrpt.sql script can be used to view the output of the snapshot.

6. What is PGA?
 A PGA is a memory region that contains data and control information for a server process. It is
nonshared memory created by Oracle Database when a server process is started. Access to
the PGA is exclusive to the server process. There is one PGA for each server process. Background
processes also allocate their own PGAs. The total memory used by all individual PGAs is known as
the total instance PGA memory, and the collection of individual PGAs is referred to as the total
instance PGA, or just instance PGA.

7. Daily routine of a DBA

(1) Check the day-to-day running of the Oracle database: log files, backup status, database space usage
and system resource usage; identify and solve any problems.
(2) Monitor the growth of database objects and space usage, perform health checks on the database, and
check the state of database objects.
(3) Monthly, analyze tables and indexes, check for space fragmentation, look for opportunities for
performance tuning, tune database performance, and propose the next space management plan. Conduct a
comprehensive inspection of the Oracle database state.
Daily work
(1) Make sure all instances are in a normal state by logging in to all databases and checking the Oracle
background processes:
$ ps -ef | grep ora
(2) Check the file system usage (free space). If the file system free space is less than 20%,
unused files need to be deleted to free space. $ df -k
(3) Check the alert log and trace files for errors.
(4) Check the validity of the database backups using the RMAN utility.
(5) Check that no data file is in the "OFFLINE" state, and recover any that are:
Select file_name from dba_data_files where status = 'OFFLINE'
(6) Monitor database performance by generating bstat/estat reports or by using statspack to collect
statistical data.
(7) Inspect database performance: record CPU usage, I/O, buffer hit ratio, etc. using commands such as
vmstat, iostat, glance and top.

Memory Tuning
1. Who is using which UNDO segment?
 Execute the following query to determine who is using a particular UNDO or Rollback Segment:
SQL> SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
           NVL(s.username, 'None') orauser,s.program,r.name undoseg,t.used_ublk *
TO_NUMBER(x.value)/1024||'K' "Undo" FROM
sys.v_$rollname    r,sys.v_$session     s,sys.v_$transaction t, sys.v_$parameter   x
    WHERE s.taddr = t.addr
      AND r.usn   = t.xidusn(+)
      AND x.name  = 'db_block_size'

SID_SERIAL ORAUSER    PROGRAM                        UNDOSEG         Undo
---------- ---------- ------------------------------ --------------- -------
260,7      SCOTT      sqlplus@localhost.localdomain  _SYSSMU4$       8K
                      (TNS V1-V3)

2. Where can one find the high water mark for a table?
There is no single system table which contains the high water mark (HWM) for a table. A table's
HWM can be calculated using the results from the following SQL statements:
SELECT BLOCKS FROM   DBA_SEGMENTS
WHERE  OWNER=UPPER(owner) AND SEGMENT_NAME = UPPER(table);
ANALYZE TABLE owner.table ESTIMATE STATISTICS;
SELECT EMPTY_BLOCKS
FROM   DBA_TABLES
WHERE  OWNER=UPPER(owner) AND TABLE_NAME = UPPER(table);
Thus, the tables' HWM = (query result 1) - (query result 2) - 1
3. Define the SGA?
 System Global Area.It consists of Shared pool, Large pool, Java pool, Buffer cache, Log buffer,
Nonstandard block size buffer caches, Keep and recycle buffer caches, and Streams pool.

4. You have 4 instances running on the same UNIX box. How can you determine which shared
memory and semaphores are associated with which instance?
Use the ipcs utility, or:
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name

5. When looking at v$sysstat you see that sorts (disk) is high. Is this bad or good? If bad, how do
you correct it? If you get excessive disk sorts this is bad. This indicates you need to tune the sort
area parameters in the initialization files. The major sort area parameter is the SORT_AREA_SIZE
parameter.
6. What are SGA_TARGET and SGA_MAX_SIZE? SGA_TARGET is the amount of SGA used by an
instance. If this parameter is set in the initialization parameter file then ASMM (Automatic Shared
Memory Management) is enabled, and the buffer cache, streams pool, Java pool, shared pool
and large pool are managed by Oracle.
SGA_MAX_SIZE is the maximum possible size of the SGA when you enable ASMM. SGA_MAX_SIZE cannot
be changed dynamically. If you try to raise SGA_TARGET to more than SGA_MAX_SIZE you will get an error.
7. What are the different initialization parameters related to tuning?
Some of the parameters that affect performance are DB_CACHE_SIZE,
SGA_MAX_SIZE, PGA_AGGREGATE_TARGET, SHARED_POOL_SIZE, and SGA_TARGET when you use
ASMM.

8. Name the parts of the database buffer cache.
The database buffer cache consists of the keep buffer cache, the recycle buffer cache, and the
default buffer cache. The keep buffer cache retains data blocks in memory. The recycle buffer cache
removes buffers from memory when they are no longer needed. The default buffer cache contains the
blocks that are not assigned to the other pools.

9. Which memory structures are shared? Name two.
The library cache contains the shared SQL areas, private SQL areas, PL/SQL procedures and packages,
and control structures. The large pool is an optional area in the SGA.

10. What is the maximum number of database writer processes allowed in an Oracle instance?
The maximum is 20. Every Oracle instance begins with only one database writer process,
DBW0. Additional writer processes may be started by setting the initialization parameter
DB_WRITER_PROCESSES.

Network Tuning

1. What is Parallel Server?


Multiple instances accessing the same database (Only In Multi-CPU environments)
2. Describe a parallel server configuration.
In a parallel server configuration multiple instances known as nodes can mount one database. In
other words, the parallel server option lets you mount the same database for multiple
instances.  In a multithreaded configuration, one shared server process takes requests from
multiple user processes.

3. If you want to configure shared servers, which three parameters do you need to specify in the
init.ora file (see the sketch below)?
LOCAL_LISTENER, SHARED_SERVERS, DISPATCHERS.
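A minimal init.ora sketch (the listener alias and counts are assumptions):

LOCAL_LISTENER=listener_orcl
SHARED_SERVERS=5
DISPATCHERS="(PROTOCOL=TCP)(DISPATCHERS=3)"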
4. What is the function of Dispatcher (Dnnn) ?
Dispatcher (Dnnn) process is responsible for routing requests from connected user processes to
available shared server processes and returning the responses back to the appropriate user
processes

5. How many dispatcher processes are created?
At least one dispatcher process is created for every communication protocol in use.

6. Which view shows how many dispatchers have been created in the database?         V$DISPATCHER


7. What are the Disadvantages of dedicated servers?
While using a dedicated server is ideal in many situations, there are also some disadvantages for
those who choose this option.One of the biggest disadvantages to using a dedicated server is the
cost. While most hosting packages that make use of shared servers are relatively inexpensive,
purchasing a hosting package that is not shared is very costly.For users that are not the most
technically savvy webmasters, sometimes a dedicated server can be more than they can handle.
Another disadvantage to dedicated hosting is the lack of free scripts and other additional features
that those on a shared server have access to. Most web hosts offer these preinstalled on their
shared hosting packages, but leave them off the dedicated servers.
8. Disadvantages of shared servers?
1. Security issues:
2. Limited Resources:
3. Dynamic IP:
4. Not good For Large Data Base E-Commerce Sites:

9. Advantages of shared servers?


1. Cheap Cost or Affordable Price:
2. No Maintenance Cost:
3. Fast Setup:
4. Good For Small Sites:

10. What is a dispatcher?


 The DISPATCHERS parameter configures dispatcher processes in the shared server architecture. The
parsing software supports a name-value syntax to enable the specification of attributes in a
position-independent, case-insensitive manner.

Oracle DBA - Performance Tuning Interview Questions

1. What is Performance Tuning?

Ans: Making optimal use of the system with the existing resources is called performance tuning.

2. Types of Tunings?

Ans: 1. CPU Tuning 2. Memory Tuning 3. IO Tuning 4. Application Tuning 5. Database Tuning

3. What does Database Tuning mainly contain?

Ans: 1. Hit Ratios 2. Wait Events

3. What is an optimizer?

Ans: The optimizer is a mechanism that determines the execution plan of a SQL statement
4. Types of Optimizers?

Ans: 1. RBO (Rule Based Optimizer) 2. CBO (Cost Based Optimizer)

5. Which init parameter is used to make use of Optimizer?

Ans: optimizer_mode = rule (RBO), cost (CBO), choose (CBO if statistics exist, otherwise RBO)

6. Which optimizer is the best one?

Ans: CBO

7. What are the prerequisites to make use of the optimizer?

Ans: 1. Set the optimizer mode 2. Collect the statistics of an object

8. How do you collect statistics of a table?

Ans: analyze table emp compute statistics or analyze table emp estimate statistics

9. What is the diff between compute and estimate?

Ans: If you use COMPUTE, a full table scan (FTS) happens; if you use ESTIMATE, just 10% of the table
will be read

10. What will happen if you set optimizer_mode=choose? Ans: If statistics are
available for the object then the CBO is used; if not, the RBO will be used

11. The data dictionary follows which optimizer mode?

Ans: RBO

12. How do you delete statistics of an object?

Ans: analyze table emp delete statistics

13. How do you collect statistics of a user/schema?

Ans: exec dbms_stats.gather_schema_stats('SCOTT')

14. How do you see the statistics of a table?

Ans: select num_rows, blocks, empty_blocks from dba_tables where table_name='EMP'

15. What are chained rows?

Ans: These are rows that span multiple blocks

16. How do you collect statistics of a user in Oracle Apps?

Ans: fnd_stats package

17. How do you create an execution plan and how do you see it? Ans:
1. @?/rdbms/admin/utlxplan.sql -- creates the plan_table
2. explain plan set statement_id='1' for select * from emp;
3. @?/rdbms/admin/utlxpls.sql -- displays the plan

18. How do you know what sql is currently being used by the session?

Ans: by querying v$sql and v$sqlarea

19. What is an execution plan?

Ans: It is a road map of how the SQL is executed by the Oracle database

20. How do you get the index of a table and on which column the index is?

Ans: dba_indexes and dba_ind_columns

21. Which init parameter do you have to set to bypass parsing?

Ans: cursor_sharing=force

22. How do you know which session is running long jobs?

Ans: by querying v$session_longops

23. How do you flush the shared pool?

Ans: alter system flush shared_pool

24. How do you get the info about FTS?

Ans: using v$sysstat

25. How do you increase the db cache?

Ans: increase DB_CACHE_SIZE (alter system set db_cache_size=...); to cache a specific table in the buffer cache, use alter table emp cache

26. Where do you get the info of library cache?

Ans: v$librarycache

27. How do you get the information of a specific session?

Ans: v$mystat (for your own session) or v$sesstat/v$session (for any session)

28. How do you see the trace files?

Ans: using tkprof --- usage: tkprof <trace_file>.trc <output_file>.txt

29. Types of hits?

Ans: Buffer hit and library hit

30. Types of wait events?

Ans: cpu time and direct path read


1. A tablespace has a table with 30 extents in it. Is this bad? Why or why not?
Level: Intermediate
Expected answer: Multiple extents in and of themselves aren't bad. However if you also have
chained rows this can hurt performance.

2. How do you set up tablespaces during an Oracle installation?


Level: Low
Expected answer: You should always attempt to use the Oracle Flexible Architecture standard or
another partitioning scheme to ensure proper separation of SYSTEM, ROLLBACK, REDO LOG,
DATA, TEMPORARY and INDEX segments.

3. You see multiple fragments in the SYSTEM tablespace, what should you check first?
Level: Low
Expected answer: Ensure that users don't have the SYSTEM tablespace as their TEMPORARY or
DEFAULT tablespace assignment by checking the DBA_USERS view.

4. What are some indications that you need to increase the SHARED_POOL_SIZE parameter?
Level: Intermediate
Expected answer: Poor data dictionary or library cache hit ratios, getting error ORA-04031.
Another indication is steadily decreasing performance with all other tuning parameters the same.

5. What is the general guideline for sizing db_block_size and db_multi_block_read for an
application that does many full table scans?
Level: High
Expected answer: Oracle almost always reads in 64k chunks. The two should have a product equal
to 64 or a multiple of 64.

6. What is the fastest query method for a table?


Level: Intermediate
Expected answer: Fetch by rowid

7. Explain the use of TKPROF? What initialization parameter should be turned on to get full
TKPROF output?
Level: High
Expected answer: The tkprof tool is a tuning tool used to determine cpu and execution times for
SQL statements. You use it by first setting timed_statistics to true in the initialization file and then
turning on tracing for either the entire database via the sql_trace parameter or for the session
using the ALTER SESSION command. Once the trace file is generated you run the tkprof tool
against the trace file and then look at the output from the tkprof tool. This can also be used to
generate explain plan output.

8. When looking at v$sysstat you see that sorts (disk) is high. Is this bad or good? If bad, how do
you correct it?
Level: Intermediate
Expected answer: If you get excessive disk sorts this is bad. This indicates you need to tune the
sort area parameters in the initialization files. The major sort area parameter is the
SORT_AREA_SIZE parameter.

9. When should you increase copy latches? What parameters control copy latches?
Level: high
Expected answer: When you get excessive contention for the copy latches as shown by the "redo
copy" latch hit ratio. You can increase copy latches via the initialization parameter
LOG_SIMULTANEOUS_COPIES to twice the number of CPUs on your system.

10. Where can you get a list of all initialization parameters for your instance? How about an
indication if they are default settings or have been changed?
Level: Low
Expected answer: You can look in the init.ora file for an indication of manually set parameters. For
all parameters, their value and whether or not the current value is the default value, look in the
v$parameter view.

11. Describe hit ratio as it pertains to the database buffers. What is the difference between
instantaneous and cumulative hit ratio; which should be used for tuning?
Level: Intermediate
Expected answer: Hit ratio is a measure of how many times the database was able to read a value
from the buffers verses how many times it had to re-read a data value from the disks. A value
greater than 80-90% is good, less could indicate problems. If you take the ratio of existing
parameters this will be a cumulative value since the database started. If you do a comparison
between pairs of readings based on some arbitrary time span, this is the instantaneous ratio for
that time span. Generally speaking an instantaneous reading gives more valuable data since it will
tell you what your instance is doing for the time it was generated over.
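One common way to compute the cumulative ratio from V$SYSSTAT (an instantaneous ratio is the same calculation taken over two snapshots some time apart):
SQL> SELECT 1 - (phy.value / (db.value + con.value)) AS buffer_cache_hit_ratio
       FROM v$sysstat phy, v$sysstat db, v$sysstat con
      WHERE phy.name = 'physical reads'
        AND db.name  = 'db block gets'
        AND con.name = 'consistent gets';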

12. Discuss row chaining, how does it happen? How can you reduce it? How do you correct it?
Level: high
Expected answer: Row chaining occurs when a VARCHAR2 value is updated and the length of the
new value is longer than the old value and won't fit in the remaining block space. This results in
the row chaining to another block. It can be reduced by setting the storage parameters on the
table to appropriate values. It can be corrected by export and import of the affected table.

13. When looking at the estat events report you see that you are getting busy buffer waits. Is this
bad? How can you find what is causing it?
Level: high
Expected answer: Buffer busy waits may indicate contention in redo, rollback or data blocks. You
need to check the v$waitstat view to see what areas are causing the problem. The "class" column
tells you which type of block is involved (for example, undo header, undo block or data block), and
the "count" column tells you how many waits occurred for that class. The undo classes relate to
rollback segments; the data block class relates to the database buffers.
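For example:
SQL> SELECT class, count, time
       FROM v$waitstat
      WHERE count > 0
      ORDER BY count DESC;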

14. If you see contention for library caches how can you fix it?
Level: Intermediate
Expected answer: Increase the size of the shared pool.

15. If you see statistics that deal with "undo" what are they really talking about?
Level: Intermediate
Expected answer: Rollback segments and associated structures.

16. If a tablespace has a default pctincrease of zero what will this cause (in relationship to the
smon process)?
Level: High
Expected answer: The SMON process won't automatically coalesce its free space fragments.
17. If a tablespace shows excessive fragmentation what are some methods to defragment the
tablespace? (7.1,7.2 and 7.3 only)
Level: High
Expected answer: In Oracle 7.0 to 7.2, the use of the 'alter session set events 'immediate trace
name coalesce level ts#';' command is the easiest way to defragment contiguous free space
fragmentation. The ts# parameter corresponds to the ts# value found in the ts$ SYS table. In
version 7.3 the 'alter tablespace coalesce;' is best. If free space isn't contiguous then export, drop
and import of the tablespace contents may be the only way to reclaim non-contiguous free space.

18. How can you tell if a tablespace has excessive fragmentation?


Level: Intermediate
If a select against the DBA_FREE_SPACE view shows that the count of a tablespace's free extents is
greater than the count of its data files, then it is fragmented.
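A simple way to get a per-tablespace picture of free-space fragmentation (a rough indicator only):
SQL> SELECT tablespace_name,
            COUNT(*)   AS free_extents,
            MAX(bytes) AS largest_free_chunk,
            SUM(bytes) AS total_free
       FROM dba_free_space
      GROUP BY tablespace_name
      ORDER BY free_extents DESC;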

ORACLE GOLDEN GATE

What type of Topology does Goldengate support?

GoldenGate supports the following topologies.

 Unidirectional

 Bidirectional

 Peer-to-peer

 Broadcast

 Consolidation

 Cascading

What are the main components of the Goldengate replication?

The replication configuration consists of the following processes.

 Manager

 Extract

 Pump

 Replicat

What transaction types does Goldengate support for Replication?

Goldengate supports both DML and DDL Replication from the source to target.
What are the supplemental logging pre-requisites?

The following supplemental logging is required.

 Database supplemental logging

 Object level logging

Why is Supplemental logging required for Replication?

Supplemental logging ensures that additional column values (such as the primary or unique key
columns) are written to the redo log whenever a row changes, so that the Extract process can
uniquely identify and reconstruct each changed row on the target database.

Integrated Capture (IC):

 In the Integrated Capture mode, GoldenGate works directly with the database log mining
server to receive the data changes in the form of logical change records (LCRs).

 IC mode does not require any special setup for the databases using ASM, transparent data
encryption, or Oracle RAC.

 This feature is only available for oracle databases in Version 11.2.0.3 or higher.

 It also supports various object types which were previously not supported by Classic
Capture.

 This Capture mode supports extracting data from source databases using compression.

 Integrated Capture can be configured in an online or downstream mode.

List the minimum parameters that can be used to create the extract process?

The following are the minimum required parameters which must be defined in the extract
parameter file (a sample parameter file is shown after the list).

 EXTRACT NAME

 USERID

 EXTTRAIL

 TABLE
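
A minimal sketch of such a parameter file (the process, user, trail and table names are placeholders):

EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin
EXTTRAIL ./dirdat/lt
TABLE scott.emp;

The extract group and its local trail are then registered from GGSCI, for example:

GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1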

I want to configure multiple extracts to write to the same exttrail file? Is this possible?

Only one Extract process can write to one exttrail at a time. So you can’t configure multiple
extracts to write to the same exttrail.

What type of Encryption is supported in Goldengate?

Oracle Goldengate provides 3 types of Encryption.

 Data Encryption using Blowfish.


 Password Encryption.

 Network Encryption.

What are the different password encryption options available with OGG?

You can encrypt a password in OGG using

 Blowfish algorithm and

 Advanced Encryption Standard (AES) algorithm

What are the different encryption levels in AES?

You can encrypt the password/data using the AES in three different keys

a) 128 bit
b) 192 bit and
c) 256 bit

Oracle GoldenGate Interview Questions/FAQs :

1) What are processes/components in GoldenGate?

Ans:

Manager, Extract, Replicat, Data Pump

2) What is Data Pump process in GoldenGate ?

The Data Pump (not to be confused with the Oracle Export Import Data Pump) is an optional
secondary Extract group that is created on the source system. When Data Pump is not used, the
Extract process writes to a remote trail that is located on the target system using TCP/IP. When
Data Pump is configured, the Extract process writes to a local trail and from here Data Pump will
read the trail and write the data over the network to the remote trail located on the target
system.

The main advantage is protection against network failure. Without a Data Pump, the Extract process
sends captured data straight over the network to the remote trail, so a network outage can cause
the Extract process to abort (abend). With a Data Pump, the captured data is first written to a local
trail on disk, so capture can continue even while the network or target is unavailable. The Data
Pump can also perform complex data transformation or filtering. It is also useful when consolidating
data from several sources into one central target, where the Data Pump on each individual source
system can write to one common trail file on the target.

3) What is the command line utility in GoldenGate (or) what is ggsci?

ANS: Golden Gate Command Line Interface essential commands – GGSCI


GGSCI   -- (Oracle) GoldenGate Software Command Interpreter

4) What is the default port for GoldenGate Manager process?

ANS:

7809

5) What are important files GoldenGate?

GLOBALS, ggserr.log, dirprm, etc ...

6) What is checkpoint table?

ANS:

Create the GoldenGate Checkpoint table

GoldenGate maintains its own Checkpoints which is a known position in the trail file from where
the Replicat process will start processing after any kind of error or shutdown. 
This ensures data integrity and a record of these checkpoints is either maintained in files stored
on disk or table in the database which is the preferred option.
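A typical way to create the checkpoint table from GGSCI (the schema and table names are placeholders):

GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> ADD CHECKPOINTTABLE ggadmin.ggs_checkpoint

The table can also be declared as the default in the GLOBALS file:
CHECKPOINTTABLE ggadmin.ggs_checkpoint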

7) How can you see GoldenGate errors?

ANS:

ggsci> VIEW GGSEVT


ggserr.log file

Linux interview questions for DBA

1. How do you see how many instances are running?


In unix, you can use the command line
% ps -ef|grep ora_

2. How do you automate starting and shutting down of databases in Unix?


Automatic startup can be configured through the /etc/oratab entry of the Linux machine (used by
the dbstart script). Automatic shutdown cannot be configured through the oratab entry alone; it
requires a shutdown script (such as dbshut) called from an init script.

3. You have written a script to take backups. How do you make it run automatically every week?
crontab

4. What is OERR utility?


The oerr utility (Oracle Error) is provided only with Oracle databases on  UNIX  platforms.  oerr is
not an executable, but instead, a shell script that retrieves messages from installed message files.
Oerr is an Oracle utility that extracts error messages with suggested actions from the standard
Oracle message files. This utility is very useful as it can extract OS-specific errors that are not in
the generic Error Messages and Codes Manual.
5. How do you see Virtual Memory Statistics in Linux?
vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.

6. How do you see how much hard disk space is free in Linux?
df -lh

7. What is SAR?
The sar command writes to standard output the contents of selected cumulative activity counters
in the operating system

8. What is SHMMAX?
SHMMAX is the maximum size of a shared memory segment on a Linux system

9. Swap partition must be how much the size of RAM?


Your swap partition should generally be at least as big as your RAM; older guidelines recommended
twice the size of RAM. Follow the vendor's recommendation for your RAM size (see the detailed
swap sizing guidance later in this section).

10. What is DISM in Solaris?


DISM (Dynamic Intimate Shared Memory) is used to support Oracle in the Solaris environment.
DISM is only supported from Solaris 9 and above. On Solaris 9 systems,
dynamic/pageable ISM (DISM) is available. This enables Oracle Database to share virtual memory
resources between processes sharing the segment, and at the same time, enables memory
paging. The operating system does not have to lock down physical memory for the entire shared
memory segment.

11. How do you see how many memory segments are acquired by Oracle Instances?
Use the ipcs command to list the shared memory segments owned by the oracle user; the SGA of
each instance is allocated as one or more shared memory segments (a detailed example is given
later in this section).
12. How do you see which segment belongs to which database instances?
Use ipcs to list the segments, then run 'oradebug ipc' from SQL*Plus in each instance and match the
shmid values recorded in the trace file (a detailed example is given later in this section).

13. What is VMSTAT?
vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.

14. How do you set Kernel Parameters in Red Hat Linux, AIX and Solaris?
There are two methods to configure the Kernel parameters in RHEL
1. By using the command "sysctl -w <parameter_name>=<value>"
The above command will change the kernel parameters on the fly but the changes are not
persistent with system reboots. That is why people always choose the second method to make
changes to kernel parameters
2. By editing the file "/etc/sysctl.conf" file
 A. Edit the file "/etc/sysctl.conf" by adding the parameters along with values
B. execute "/sbin/sysctl -p" to make sure that the changes are made using the values inside the
above mentioned file.
The advantage with the second method is that the changes are persistent with system reboots.

15. How do you remove Memory segments?


To remove the shared memory segment, you could copy/paste shmid and execute:
$ ipcrm shm 32768
Another approach to remove shared memory is to use Oracle's sysresv utility.
What is the difference between Soft Link and Hard Link?
Soft link:- This is a symbolic link between files. The actual file or directory can reside on any
partition of the hard disk; the soft link is just a shortcut (in Windows terms), a new file name
created in the working directory or on the current partition. You can confidently delete a soft link
when you no longer need it, because it does not remove the actual file or directory: the target's
inode is different from the soft link's inode in any Unix system.
Hard link:- This is another name for the actual file and must reside on the same partition. All hard
links to a file share the same inode, so the file's data is only removed when the last remaining link
is deleted. Hard links cannot be created for directories.

16. What is stored in oratab file?


This file is used by ORACLE utilities.  It is created by root.sh and updated by the Database
Configuration Assistant when creating a database.
A colon, ':', is used as the field terminator.  A new line terminates the entry.  Lines beginning with
a pound sign, '#', are comments.
Entries are of the form:
                $ORACLE_SID:$ORACLE_HOME:<N|Y>:
The first and second fields are the system identifier and home directory of the database
respectively.  The third field indicates to the dbstart utility whether the database should ("Y") or
should not ("N") be brought up at system boot time.
Multiple entries with the same $ORACLE_SID are not allowed.

17. How do you see how many processes are running in Unix?


ps -e | wc -l

18. How do you kill a process in Unix?


Linux and all other UNIX-like OSes come with the kill command. The command kill sends the specified
signal (such as kill process) to the specified process or process group. If no signal is specified, the
TERM signal is sent.  Kill process using kill command under Linux/UNIX. kill command works
under both Linux and UNIX/BSD like operating systems.

19. Can you change priority of a Process in Unix?


As system administrator you can use the renice command to change the priority of a process, all
processes of a user, or all processes belonging to a group of users. The renice command has the form
/etc/renice priority [ [ -p ] pid ... ] [ [ -g ] pgrp ... ] [ [ -u ] user

1. Q: How do you automate starting and shutting down of databases in Unix/Linux?


A: One of the approaches is to use dbstart and dbshut scripts by init.d
Another way is to create your own script. To do that, create your own script “dbora” in /etc/init.d/
directory

# touch /etc/init.d/dbora

#!/bin/sh
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
# Applies to Oracle 10/11g
#
# Set ORA_HOME
# Set ORA_OWNER

ORA_HOME=/u01/app/oracle/product/10.2.0/db_1
ORA_OWNER=oracle

if [ ! -f $ORA_HOME/bin/dbstart ]
then
echo "Oracle startup: Error $ORA_HOME/bin/dbstart doesn't exist, cannot start "
exit
fi

case "$1" in
'start')
# Start the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME"
touch /var/lock/subsys/dbora
;;
'stop')
# Stop the Oracle databases:
su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut $ORA_HOME"
rm -f /var/lock/subsys/dbora
;;
esac

Edit the “/etc/oratab” file and set the start flag of desired instance to ‘Y’

MYDB1:/u01/app/oracle/product/10.2.0:Y

#Add dbora to init.d


[root@host ~]# chkconfig --add dbora

#Set the right permissions


chmod 750 /etc/init.d/dbora


2. Q: How do you see how many oracle database instances are running? ::
A: Issue the following command “ps -ef |grep pmon”
[oracle@host ~]$ ps -ef |grep pmon |grep -v grep
oracle    7200     1  0 21:16 ?        00:00:00 ora_pmon_my_db_SID
oracle    9297  9181  0 21:42 pts/0    00:00:00 grep pmon

This will show within the paths returned the names of all instances (if you are OFA compliant -
Optimal Flexible Architecture).

#Count them all:


[oracle@host ~]$ ps -ef |grep pmon |grep -v grep |wc -l
1

3. Q: You have written a script my_backup.sh to take backups. How do you make it run
automatically every week? ::
The Crontab will do this work.
Crontab commands:
crontab -e      (edit user’s crontab)
crontab -l      (list user’s crontab)
crontab -r      (delete user’s crontab)
crontab -i      (prompt before deleting user’s crontab)
Crontab syntax :
crontab entry consists of five fields: day date and time followed by the user (optional) and
command to be executed at the desired time
*    *    *    *    *     user  command to be executed
_    _    _    _    _
|    |    |    |    |
|    |    |    |    +----- day of week (0-6)       the day of the week (Sunday=0)
|    |    |    +------- month (1-12)       the month of the year
|    |    +--------- day of month (1-31)   the day of the month
|    +----------- hour (0-23)              the hour of the day
+------------- min (0-59)                  the exact minute 
#Run automatically every week, for example at 02:00 every Saturday (day of week 6)
0 2 * * 6 root /home/root/scripts/my_backup.sh
#TIP: Crontab script generator:
http://generateit.net/cron-job/

4. Q: What is OERR utility? ::

Oerr is an Oracle utility that extracts error messages with suggested actions from the standard
Oracle message files.

Oerr is installed with the Oracle Database software and is located in the ORACLE_HOME/bin
directory.

Usage: oerr facility error

Facility is identified by the prefix string in the error message.


For example, if you get ORA-7300, “ora” is the facility and “7300”
is the error.  So you should type “oerr ora 7300”.
If you get LCD-111, type “oerr lcd 111”, and so on. These include ORA, PLS, EXP, etc.
The error is the actual error number returned by Oracle.

Example:

$ oerr ora 600


ora-00600: internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]
*Cause:  This is the generic internal error number for Oracle program
exceptions.  This indicates that a process has encountered an
exceptional condition.
*Action: Report as a bug - the first argument is the internal error number

5. Q: How do you see Virtual Memory Statistics in Linux?::


A: There are several ways to check memory stats: cat /proc/meminfo, top, free, vmstat…
“cat /proc/meminfo”
[user@host ~]$ cat /proc/meminfo
MemTotal:      5974140 kB
MemFree:       1281132 kB
Buffers:        250364 kB
Cached:         754636 kB
SwapCached:      68540 kB
Active:        3854048 kB
Inactive:       599072 kB
HighTotal:     5111744 kB
HighFree:      1018240 kB
LowTotal:       862396 kB
LowFree:        262892 kB
SwapTotal:     2096472 kB
SwapFree:      1880912 kB
Dirty:             364 kB
Writeback:           0 kB
Mapped:        3494544 kB
Slab:           203372 kB
CommitLimit:   5083540 kB
Committed_AS: 16863596 kB
PageTables:      19548 kB
VmallocTotal:   106488 kB
VmallocUsed:      3536 kB
VmallocChunk:   102608 kB
HugePages_Total:     0
HugePages_Free:      0
Hugepagesize:     2048 kB
“top”
[user@host ~]$ top (sort by memory consumption by pressing SHIFT+O and then press “n”)
“free”
[user@host ~]$ free
total       used       free     shared    buffers     cached
Mem:       5974140    4693336    1280804          0     250364     754636
-/+ buffers/cache:    3688336    2285804
Swap:      2096472     215560    1880912

TIP: free -m shows results in MB

"vmstat"
[user@host ~]$ vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
0  0 215560 1281652 250364 754636    0    0    18    15    1     2  3  2 95  0

6. Q How do you see how much hard disk space is free in Linux?::
A: “df” – reports filesystem disk space usage

TIP: “df -h” shows results in human readable format M/G/T

[user@host ~]$ df -h
Filesystem             Size   Used  Avail Use% Mounted on
/dev/mapper/lvroot     4.3G   3.3G   723M   2% /
/dev/sda1              104M    14M    85M  14% /boot
none                   3.1G      0   3.1G   0% /dev/shm
/dev/mapper/lvtmp      2.2G   209M   1.8G  11% /tmp
/dev/mapper/lvvar      2.2G   267M   1.8G  14% /var
/dev/mapper/lvoracle    38G    11G    26G  31% /u01
TIP: df -h /home (shows disk space usage only for /home)

7. Q: What is SAR?::
It could be several things:
SAR stands for Specific Absorption Rate, which is the unit of measurement for the amount of RF
energy absorbed by the body when using a mobile phone.
SAR is also an active remote sensing system: a SAR antenna on a satellite orbiting the Earth, and
so on…
The question should rather be: what does the sar command do in UNIX/Linux-like systems?

A: sar – Collect, report, or save system activity information.


[user@host ~]$ sar
Linux 3.6.10-26.0.0.0.2.ELsmp (host.domain.local)        02/23/2021
01:30:01 PM       CPU     %user     %nice   %system   %iowait     %idle
...
04:10:01 PM       all      2.00      0.00      2.05      0.08     95.86
04:20:02 PM       all      2.70      0.00      2.30      0.07     94.93
Average:          all      2.73      0.00      2.28      0.16     94.83
TIP: ls -la /var/log/sa/sar* ; man sar
More info: http://computerhope.com/unix/usar.htm

8. Q: What is SHMMAX?::
A: shmmax — maximum size (in bytes) for a UNIX/Linux shared memory segment
DESCRIPTION (docs.hp.com)
Shared memory is an efficient InterProcess Communications (IPC) mechanism.
One process creates a shared memory segment and attaches it to its address space.
Any processes looking to communicate with this process through the shared memory segment,
then attach the shared memory segment to their corresponding address spaces as well.
Once attached, a process can read from or write to the segment depending on the permissions
specified while attaching it.

How to display info about shmmax:

[user@host ~]$ cat /etc/sysctl.conf |grep shmmax


kernel.shmmax=3058759680

9. Q: Swap partition must be how much the size of RAM?::


A: A tricky question, because opinions about this are always a good topic for discussion.
In the past, once upon a time, when systems used to have 32/64 or at most 128 MB of RAM,
it was recommended to allocate twice the size of your RAM for the swap partition.
Nowadays we do have a bit more memory for our systems.
A piece of advice: always take the software vendor's recommended settings into account and then decide.
For example, Oracle recommends always minimum 2GB for swap and more for their products –
depends of the product and systems size.
Example of ORACLE Database 11g recommendations:
Amount of RAM             Swap Space
Between 1 GB and 2 GB     1.5 times the size of RAM
Between 2 GB and 16 GB     Equal to the size of RAM
More than 16 GB         16 GB
Another common example:
Equal the size of RAM, if amount of RAM is less than 1G
Half the size of RAM for RAM sizes from 2G to 4G.
More than 4G of RAM, you need 2G of Swap.

TIP: To determine the size of the configured swap space in Linux, enter the following command:

[user@host]~ grep SwapTotal /proc/meminfo


SwapTotal:     2096472 kB

To determine the available RAM and swap space use “top” or “free”.

10. Q: What is DISM in Solaris?


A: DISM = Dynamic Intimate Shared memory, which is used to support Oracle in Solaris
Environment.
DISM is only supported from Solaris 9 and above version (not recommended to use in older
version).
Until Solaris 8, only ISM (Intimate Shared Memory) which is of 8kb page size, from Solaris 9 Sun
has introduced a new added feature which is DISM, which supports up to 4mb of page size.

Intimate Shared Memory

On Solaris systems, Oracle Database uses Intimate Shared Memory (ISM) for shared memory
segments because it shares virtual memory resources between Oracle processes. ISM causes the
physical memory for the entire shared memory segment to be locked automatically.
On Solaris 8 and Solaris 9 systems, dynamic/pageable ISM (DISM) is available. This enables
Oracle Database to share virtual memory resources between processes sharing the segment, and
at the same time, enables memory paging. The operating system does not have to lock down
physical memory for the entire shared memory segment.
Oracle Database automatically selects ISM or DISM based on the following criteria:
– Oracle Database uses DISM if it is available on the system, and if the value of the
SGA_MAX_SIZE initialization parameter is larger than the size required for all SGA components
combined. This enables Oracle Database to lock only the amount of physical memory that is used.
– Oracle Database uses ISM if the entire shared memory segment is in use at start-up or if the
value of the SGA_MAX_SIZE parameter is equal to or smaller than the size required for all SGA
components combined.
Regardless of whether Oracle Database uses ISM or DISM, it can always exchange the memory
between dynamically sizable components such as the buffer cache, the shared pool, and the large
pool after it starts an instance. Oracle Database can relinquish memory from one dynamic SGA
component and allocate it to another component.
Because shared memory segments are not implicitly locked in memory, when using DISM, Oracle
Database explicitly locks shared memory that is currently in use at start-up. When a dynamic SGA
operation uses more shared memory, Oracle Database explicitly performs a lock operation on the
memory that is put to use. When a dynamic SGA operation releases shared memory, Oracle
Database explicitly performs an unlock operation on the memory that is freed, so that it becomes
available to other applications.
Oracle Database uses the oradism utility to lock and unlock shared memory. The oradism utility is
automatically set up during installation. It is not required to perform any configuration tasks to
use dynamic SGA.

11. Q: How do you see how many memory segments are acquired by Oracle Instances?::
A: ipcs - provides information on the IPC facilities for which the calling process has read access
#UNIX: SEGSZ
root> ipcs -pmb
IPC status from <running system> as of Mon Sep 10 13:56:17 EDT 2001
T         ID      KEY        MODE       OWNER    GROUP  SEGSZ  CPID
Shared Memory:
m       2400   0xeb595560 --rw-r-----   oracle   dba  281051136 15130
m        601   0x65421b9c --rw-r-----   oracle   dba  142311424 15161
#Linux: bytes
[user@host ~]$ ipcs
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 32769      oracle    644        122880     2          dest

12. Q: How do you see which segment belongs to which database instances?::


A: This can be achieved with help of ipcs tool and sqlplus; oradebug ipc
#Linux
[user@host ~]$ ipcs
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status
0x00000000 32769      oracle    644        122880     2          dest
#UNIX:
root> ipcs -pmb
IPC status from <running system> as of Mon Sep 10 13:56:17 EDT 2001
T         ID      KEY        MODE       OWNER    GROUP  SEGSZ  CPID
Shared Memory:
m       32769  0xeb595560 --rw-r-----   oracle   dba  281051136 15130
m        601   0x65421b9c --rw-r-----   oracle   dba  142311424 15161
m        702   0xe2fb1874 --rw-r-----   oracle   dba  460357632 15185
m        703   0x77601328 --rw-r-----   oracle   dba  255885312 15231
#record value of shmid "32769" (ID in UNIX)
[user@host ~]$ sqlplus /nolog
SQL> connect system/manager as sysdba;
SQL> oradebug ipc

#Information have been written to the trace file. Review it.

In case of having multiple instances, grep all trace files for shmid 32769 to identify the database
instance corresponding to the memory segment.

#scrap of trace file MY_SID_ora_17727.trc:


Area  Subarea    Shmid      Stable Addr      Actual Addr
1        1      32769 000000038001a000 000000038001a000

13. Q: What is VMSTAT?::
A:    vmstat – Reports virtual memory statistics in Linux environments.
It reports  information about processes, memory, paging, block IO, traps, and cpu activity.
[user@host ~]$ vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
3  0 170224 121156 247288 1238460    0    0    18    16    1     0  3  2 95  0

14. Q: How do you set Kernel Parameters in Red Hat Linux, AIX and Solaris?::
A: sysctl – configure kernel parameters at runtime
EXAMPLES
/sbin/sysctl -a (Display all values currently available)
/sbin/sysctl -w kernel.shmmax = 3058759680 ( -w this option changes a sysctl setting)

To modify settings permanently, edit /etc/sysctl.conf (the kernel sysctl configuration file) and then
issue the following command:

/sbin/sysctl -p /etc/sysctl.conf ( Load in sysctl settings from the file specified or /etc/sysctl.conf if
none given)

15. Q: How do you remove Memory segments?::


A: We can use ipcs and ipcrm command
ipcs – provides information on ipc facilities
ipcrm – removes a message queue, semaphore set or shared memory id

First kill all Oracle database processes; A shared memory object is only removed after all currently
attached processes have detached.
#UNIX
root>  ps -ef | grep $ORACLE_SID | grep -v grep | awk '{print $2}' | xargs -i kill -9 {}
root> ipcs -pmb #displays held memory
IPC status from /dev/kmem as of Tue Sep 30 11:11:11 2011
T      ID     KEY        MODE     OWNER  GROUP  SEGSZ  CPID  LPID
Shared Memory:
m   25069 0x4e00e002 --rw-r----- oracle    dba 35562418  2869 23869
m       1 0x4bc0eb18 --rw-rw-rw-   root   root    31008   669   669

RAM memory segment owned by Oracle is ID=25069.

root> ipcrm -m 25069 #this command will release that memory segment

16. Q: What is the difference between Soft Link and Hard Link?::


“Symbolic links” (symlinks/soft link) are a special file type in which the link file actually refers to a
different file or directory by name.
When most operations (opening, reading, writing, and so on) are passed the symbolic link file, the
kernel automatically “dereferences” the link and operates on the target
of the link.  But remove operation works on the link file itself, rather than on its target!
A “hard link” is another name for an existing file; the link and the original are indistinguishable.
They share the same inode, and the inode contains all the information about a file.
It will be correct to say that the inode is the file. A hard link is not allowed for a directory (a similar
effect can be achieved by using mount with the --bind option)!

ln – makes links between files.  By default, it makes hard links; with the “-s” option, it makes
symbolic (soft) links.

Synopses:
ln [OPTION]… TARGET [LINKNAME]
ln [OPTION]… TARGET… DIRECTORY
EXAMPLE:
#hard links
[user@host ~]$ touch file1
[user@host ~]$ ln file1 file2
[user@host ~]$ ls -li
total 0
459322 -rw-r--r--  2 userxxx users 0 May  6 16:19 file1
459322 -rw-r--r--  2 userxxx users 0 May  6 16:19 file2 (the same inode, rights, size, time and so
on!)
[user@host ~]$ mkdir dir1
[user@host ~]$ ln dir1 dir2
ln: `dir1′: hard link not allowed for directory
#symbolic links
[user@host ~]$ rm file2 #hard link removed
[user@host ~]$ ln -s file1 file2 #symlink to file
[user@host ~]$ ln -s dir1 dir2     #symlink to directory
[user@host ~]$ ls -li
total 12
459326 drwxr-xr-x  2 userxxx users 4096 May  6 16:38 dir1
459327 lrwxrwxrwx  1 userxxx users    4 May  6 16:39 dir2 -> dir1 (dir2 refers to dir1)
459322 -rw-r--r--  1 userxxx users       0 May  6 16:19 file1
459325 lrwxrwxrwx  1 userxxx users    5 May  6 16:20 file2 -> file1 (different inode, rights, size
and so on!)
[user@host ~]$ rm file2 #will remove the symlink, NOT the target file; file1
[user@host ~]$ rm dir2
[user@host ~]$ ls -li
total 4
459326 drwxr-xr-x  2 userxxx users 4096 May  6 16:38 dir1
459322 -rw-r--r--  1 userxxx users    0 May  6 16:19 file1
[user@host ~]$ info coreutils ln #(should give you access to the complete manual)

17. Q: What is stored in oratab file?::


A: This file is being read by ORACLE software, created by root.sh script which is being executed
manually during the software installation and updated by the Database Configuration Assistant
(dbca) during the database creation.
File location: /etc/oratab
ENTRY SYNTAX:
$ORACLE_SID:$ORACLE_HOME:<N|Y>:
$ORACLE_SID – Oracle System Identifier (SID environment variable)
$ORACLE_HOME – Database home directory
<N|Y> Start or not resources at system boot time by the start/stop scripts if configured.

Multiple entries with the same $ORACLE_SID are not allowed.

EXAMPLES:
[user@host ~]$ cat /etc/oratab
MYDB1:/u01/app/oracle/product/10.2.0/db:Y
emagent:/u01/app/oracle/product/oem/agent10g:N
client:/u01/app/oracle/product/10.2.0/client_1:N
emcli:/u01/app/oracle/product/oem/emcli:N

18. Q: How do you see how many processes are running in Unix/Linux?::


A: "ps" with "wc" or "top" does the job.
ps – report a snapshot of the current processes
wc – print the number of newlines, words, and bytes in files
top – display Linux tasks (better solution)
In other words ps will display all running tasks and wc will count them displaying results:
[user@host ~]$ ps -ef | wc -l
149
[user@host ~]$ ps -ef |grep -v "ps -ef" | wc -l #this will not count the ps process executed by you
148
#using top
[user@host ~]$ top -n 1 | grep Tasks
Tasks: 148 total,   1 running, 147 sleeping,   0 stopped,   0 zombie

19. Q: How do you kill a process in Unix?::


A: kill – terminate a process
killall – kill processes by name
kill -9 <PID> #kill process with <PID> by sending SIGKILL 9 Term Kill signal
killall <process name>
EXAMPLES:
[user@host ~]$ ps -ef | grep mc
user 31660 31246  0 10:08 pts/2    00:00:00 /usr/bin/mc -P /tmp/mc-user/mc.pwd.31246
[user@host ~]$ kill -9 31660
Killed
#killall
[user@host ~]$ killall mc
Terminated
[user@host ~]$ killall -9 mc
Killed

20. Q: Can you change priority of a Process in Unix? ::


A:    YES. nice & renice does the job.
nice - Runs COMMAND with an adjusted scheduling priority. When no COMMAND is specified, it
prints the current scheduling priority.
ADJUST is 10 by default. Range goes from -20 (highest priority) to  19 (lowest).

renice – alters priority of running processes

EXAMPLES: (NI in top indicates nice prio)


[user@host ~]$ mc
[user@host ~]$ top
PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
1896 userxxx  17   0  7568 1724 1372 S    0  0.0   0:00.03 mc
[user@host ~]$ nice -n 12 mc #runs the mc command with prio of 12.
[user@host ~]$ top
PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
1763 userxxx  26  12  6832 1724 1372 S    0  0.0   0:00.03 mc
[root@host ~]# renice +16 1763 #must be a root user
1763: old priority 12, new priority 16
List of Some Important Data Dictionary Views in Oracle 11g

DB Creation
V$SGA
V$INSTANCE
V$DATABASE
V$PROCESS
V$SYSAUX_OCCUPANTS

Users & Resources


DBA_USERS
ALL_USERS
USER_USERS
DBA_TS_QUOTAS
USER_TS_QUOTAS
USER_PASSWORD_LIMITS
USER_RESOURCE_LIMITS
DBA_PROFILES
RESOURCE_COST
V$SESSION
V$SESSTAT
V$STATNAME

RMAN Recovery Catalog


RC_ARCHIVED_LOG
V$ARCHIVED_LOG
RC_BACKUP_CONTROLFILE
V$BACKUP_DATAFILE
RC_BACKUP_DATAFILE
V$BACKUP_DATAFILE
RC_BACKUP_PIECE
V$BACKUP_PIECE
RC_BACKUP_REDOLOG
V$BACKUP_REDOLOG
RC_BACKUP_SET
V$BACKUP_SET
RC_DATABASE
V$DATABASE
RC_DATAFILE
V$DATAFILE
RC_RMAN_CONFIGURATION
V$RMAN_CONFIGURATION
RC_LOG_HISTORY
V$LOG_HISTORY

TBS Management
DBA_TABLESPACES
DBA_TABLESPACE_GROUPS
DBA_DATA_FILES
DBA_FREE_SPACE
V$TABLESPACE
V$DATAFILE
DATABASE_PROPERTIES
Roles & Privileges
ALL_COL_PRIVS
USER_COL_PRIVS
ALL_TAB_PRIVS
USER_TAB_PRIVS
ALL_TAB_PRIVS_MADE
USER_TAB_PRIVS_MADE
ALL_TAB_PRIVS_RECD
USER_TAB_PRIVS_RECD
DBA_ROLES
DBA_COL_PRIVS
USER_ROLE_PRIVS
DBA_ROLE_PRIVS
USER_SYS_PRIVS
DBA_SYS_PRIVS
COLUMN_PRIVILEGES
DBA_TAB_PRIVS
ROLE_ROLE_PRIVS
ROLE_SYS_PRIVS
SESSION_PRIVS
SESSION_ROLES

Control Files
V$CONTROLFILE
V$CONTROLFILE_RECORD_SECTION

Storage Parameters
DBA_SEGMENTS
DBA_EXTENTS
DBA_TABLES
DBA_INDEXES
DBA_TABLESPACES
DBA_DATA_FILES
DBA_FREE_SPACE

Auditing
STMT_AUDIT_OPTION_MAP
AUDIT_ACTIONS
ALL_DEF_AUDIT_OPTS
DBA_STMT_AUDIT_OPTS
USER_OBJ_AUDIT_OPTS
DBA_OBJ_AUDIT_OPTS
USER_AUDIT_TRAIL
DBA_AUDIT_TRAIL
USER_AUDIT_SESSION
DBA_AUDIT_STATEMENT
USER_AUDIT_OBJECT
DBA_AUDIT_OBJECT
DBA_AUDIT_EXISTS
USER_AUDIT_SESSIONS
DBA_AUDIT_SESSION
USER_TAB_AUDIT_OPTS

Rollback Segments
DBA_SEGMENTS
USER_SEGMENTS
DBA_ROLLBACK_SEGS
V$ROLLSTAT
V$ROLLNAME

Data PUMP
DBA_DATAPUMP_JOBS
USER_DATAPUMP_JOBS
DBA_DIRECTORIES

Dispatchers
V$DISPATCHER_CONFIG
V$MTS
V$DISPATCHER

Redo Log Files


V$LOG
V$LOGFILE
V$LOG_HISTORY
V$LOGHIST
V$RECOVERY_LOG
V$ARCHIVED_LOG

Archived Redo Log Files


V$ARCHIVED_LOG
V$ARCHIVE_DEST
V$ARCHIVE_PROCESSES

Security
DBA_USERS
DBA_USERS_WITH_DEFPWD

Undo Management
DBA_UNDO_EXTENTS
DBA_SEGMENTS
USER_SEGMENTS
V$UNDOSTAT
V$TRANSACTION

Tuning
V$PX_PROCESS
V$PX_SESSION
V$PX_PROCESS_SYSSTAT

EXPERIENCED DBA IQ's

1. Basic (Every DBA should answer correctly ALL these questions. This knowledge is just basic for
a 3+ year experienced DBA)
1.1 Q- Which are the default passwords of SYSTEM/SYS?
A-  MANAGER / CHANGE_ON_INSTALL
1.2 Q- How can you execute a script file in SQLPLUS?
A- To execute a script file in SQLPlus, type @ and then the file name.
1.3 Q- Where can you find official Oracle documentation?
A- tahiti.oracle.com
1.4 Q- What is the address of the Official Oracle Support?
A- metalink.oracle.com or support.oracle.com
1.5 Q- What file will you use to establish Oracle connections from a remote client?
A- tnsnames.ora
1.6 Q- How can you check if the database is accepting connections?
A- lsnrctl status or lsnrctl services
1.7 Q- Which log would you check if a database has a problem?
A- Alert log
1.8 Q- Name three clients to connect with Oracle, for example, SQL Developer:
A- SQL Developer, SQL-Plus, TOAD, dbvisualizer, PL/SQL Developer… There are several, but an
experienced dba should know at least three clients.
1.9 Q- How can you check the structure of a table from sqlplus?
A- DESCRIBE or DESC
1.10 Q- What command will you start to run the installation of Oracle software on Linux?
A- runInstaller
2. Moderate (Standard knowledge for the daily work of every DBA. He could fail one or two
questions, but not more)
2.1 Q- What should you do if you encounter an ORA-600?
A- Contact Oracle Support
2.2 Q- Explain the differences between PFILE and SPFILE
A- A PFILE is a static text file that initializes the database parameters at the moment the instance is
started. If you want to modify parameters in a PFILE, you have to restart the database.
An SPFILE is a dynamic, binary file that allows you to change parameters while the database is
already started (with some exceptions).
2.3 Q- In which Oracle version was Data Pump introduced?
A- Oracle 10g
2.4 Q- Say two examples of DML, two of DCL and two of DDL
A- DML: SELECT, INSERT, UPDATE, DELETE, MERGE, CALL, EXPLAIN PLAN, LOCK TABLE
DDL: CREATE, ALTER, DROP, TRUNCATE, COMMENT, RENAME
DCL: GRANT, REVOKE
2.5 Q- You want to save the output of an Oracle script from sqlplus. How would you do it?
A- spool script_name.txt
select * from your_oracle_operations;
spool off;
2.6 Q- What is the most important requirement in order to use RMAN to make consistent hot
backups?
A- Your database has to be in ARCHIVELOG mode.
2.7 Q- Can you connect to a local database without a listener?
A- Yes, you can.
2.8 Q- In which view can you find information about every view and table of oracle dictionary?
A- DICT or DICTIONARY
2.9 Q- How can you view all the users account in the database?
A- SELECT USERNAME FROM DBA_USERS;
2.10 Q- In linux, how can we change which databases are started during a reboot?
A- Edit /etc/oratab
3. Advanced (A 3+ year experienced DBA should have enough knowledge to answer these
questions. However, depending on the work he has done, he could still fail up to 4 questions)
3.1 Q- When a user process fails, what Oracle background process will clean after it?
A- PMON
3.2 Q- How can you reduce the space of TEMP datafile?
A- Prior to Oracle 11g, you had to recreate the datafile. In Oracle 11g a new feature was
introduced, and you can shrink the TEMP tablespace.
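For example (the tablespace and tempfile names are placeholders):
SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 200M;
SQL> ALTER TABLESPACE temp SHRINK TEMPFILE '/u01/oradata/MYDB1/temp01.dbf' KEEP 100M;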
3.3 Q- How can you view all the current users connected in your database in this moment?
A- SELECT COUNT(*),USERNAME FROM V$SESSION GROUP BY USERNAME;
3.4 Q- Explain the differences between SHUTDOWN, SHUTDOWN NORMAL, SHUTDOWN
IMMEDIATE AND SHUTDOWN ABORT
A- SHUTOWN NORMAL = SHUTDOWN : It waits for all sessions to end, without allowing new
connections.
SHUTDOWN IMMEDIATE : Rollback current transactions and terminates every session.
SHUTDOWN ABORT : Aborts all the sessions immediately, leaving the database in an inconsistent
state. It is the fastest method; the database performs instance recovery at the next startup.
3.5 Q- Is it possible to backup your database without the use of an RMAN database to store the
catalog?
A- Yes, but the catalog would be stored in the controlfile.
3.6 Q- Which are the main components of Oracle Grid Control?
A- OMR (Oracle Management Repository), OMS (Oracle Management Server) and OMA (Oracle
Management Agent).
3.7 Q- What command will you use to navigate through ASM files?
A- asmcmd
3.8 Q- What is the difference between a view and a materialized view?
A- A view is a stored query that is executed each time a user accesses it. A materialized view
physically stores the result of the query (like a table), refreshed on demand or on a schedule, for faster access.
3.9 Q- Which one is faster: DELETE or TRUNCATE?
A- TRUNCATE
3.10 Q- Are passwords in oracle case sensitive?
A- Only since Oracle 11g.
4. RAC (Only intended for RAC-specific DBAs, with varied difficultied questions)
4.1 Q- What is the recommended method to make backups of a RAC environment?
A- RMAN to make backups of the database, dd to backup your voting disk and hard copies of the
OCR file.
4.2 Q- What command would you use to check the availability of the RAC system?
A- crs_stat -t -v (-t -v are optional)
4.3 Q- What is the minimum number of instances you need to have in order to create a RAC?
A- 1. You can create a RAC with just one server.
4.4 Q- Name two specific RAC background processes
A- RAC processes are: LMON, LMDx, LMSn, LCKx and DIAG.
4.5 Q- Can you have many database versions in the same RAC?
A- Yes, but the Clusterware version must be greater than or equal to the highest database version.
4.6 Q- What was RAC previous name before it was called RAC?
A- OPS: Oracle Parallel Server
4.7 Q- What RAC component is used for communication between instances?
A- Private Interconnect.
4.8 Q- What is the difference between normal views and RAC views?
A- RAC views has the prefix ‘G’. For example, GV$SESSION instead of V$SESSION
4.9 Q- Which command will we use to manage (stop, start…) RAC services in command-line
mode?
A- srvctl
4.10 Q- How many alert logs exist in a RAC environment?
A- One for each instance.
5. Master (A 3+ year experienced DBA would probably fail these questions; they are very specific
and especially difficult. Be glad if he's able to answer some of them)
5.1 Q- How can you difference a usual parameter and an undocumented parameter?
A- Undocumented parameters have the prefix ‘_’. For example, _allow_resetlogs_corruption
5.2 Q- What is BBED?
A- An undocumented Oracle tool used for forensic purposes. It stands for Block Browser and EDitor.
5.3 Q- The result of the logical comparison (NULL = NULL) will be… And in the case of (NULL !=
NULL)
A- False in both cases.
5.4 Q- Explain Oracle memory structure
The Oracle RDBMS creates and uses storage on the computer hard disk and in random access
memory (RAM). The portion in the computer's RAM is called the memory structure. Oracle has two
memory structures in the computer's RAM. The two structures are the Program Global Area (PGA)
and the System Global Area (SGA).
The PGA contains data and control information for a single user process. The SGA is the memory
segment that stores data that the user has retrieved from the database or data that the user
wants to place into the database.
5.5 Q- Will RMAN take backups of read-only tablespaces?
A- No
5.6 Q- Will a user be able to modify a table with SELECT only privilege?
A- He won’t be able to UPDATE/INSERT into that table, but for some reason, he will still be able to
lock a certain table.
5.7 Q- What Oracle tool will you use to transform datafiles into text files?
A- Trick question: you can’t do that, at least with any Oracle tool. A very experienced DBA should
perfectly know this.
5.8 Q- SQL> SELECT * FROM MY_SCHEMA.MY_TABLE;
SP2-0678: Column or attribute type can not be displayed by SQL*Plus
Why am I getting this error?
A- The table has a BLOB column.
5.9 Q- What parameter will you use to force the starting of your database with a corrupted
resetlog?
A- _ALLOW_RESETLOGS_CORRUPTION
5.10 Q- Name the seven types of Oracle tables
A- Heap Organized Tables, Index Organized Tables, Index Clustered Tables, Hash Clustered
Tables, Nested Tables, Global Temporary Tables, Object Tables.
Oracle 11g and 12c New features
Oracle 11g Database New Features Interview Questions and Answers

  1. Database Replay
  2. The SQL Performance Analyzer
  3. Online Patching in Oracle Database Control
  4. Automatic Diagnostic Repository (ADR) 
  5. Data Recovery Advisor
  6. Automatic Memory Management
  7. Invisible Indexes
  8. Read-Only Tables 
  9. Shrinking Temporary Tablespaces and Tempfiles
10. Server Result Cache
11. SQL Tuning Automation 
12. SQL Plan Management 
13. Database ADDM 
14. New SYSASM Privilege for ASM Administration 
15. Enhanced Block Media Recovery
16. VALIDATE Command
17. Configuring an Archived Redo Log Deletion Policy
18. Active Database Duplication
19. Virtual Private Catalogs
20. ASM Restricted Mode
21. Checking Diskgroup
22. The FORCE option with Drop Diskgroup Command
23. Active Data Guard is a new option for Oracle Database 11g Enterprise Edition

ASM Rolling Upgrades

Database Replay
Database Replay (sometimes named Workload Replay) is a feature in Oracle 11g that allows you to
reproduce the production database conditions in a testing environment. In other words, with this
feature you can capture the
actual workload on a production system and replay it in a test system. This way, you can analyze
the condition of the production database without working on the actual production database.

This feature enables you to test the impact of applying changes on a production database. These
changes could be database upgrades, switching to RAC, application upgrades, operating system
upgrades or storage system changes. 

The SQL Performance Analyzer


The SQL Performance Analyzer (SPA) aims at measuring the impact of applying any change on the
database on the performance of the SQL statements execution. If it finds out performance
degradation in one or more SQL statements, it provides you recommendations on how to improve
their performance.

This is very useful for a DBA to analyze how a change on the database (including database
upgrade) may affect the execution efficiency of SQL statements. Using this tool is explained here
because you may consider using it to study the effect of upgrading an Oracle database 10g
release 2 to 11g. 

Note:
If you plan to use SPA on a test database, it is highly recommended to make the test database
resemble the production database as closely as possible. You can use the RMAN duplicate
command for this purpose. 

Online Patching in Oracle Database Control


Patching through Database Control is enhanced in Oracle 11g. With Oracle 11g online patching (or
called hot patching), you can apply or roll back a database patch while the instance is running.
Also it can detect conflicts between two online patches. On the other hand, online patching
consumes more memory than the conventional method.

In UNIX systems, you use the script $ORACLE_HOME/OPatch/opatch to invoke the online
patching. 

Automatic Diagnostic Repository (ADR)


The Automatic Diagnostic Repository (ADR) is a file system repository to store diagnostic data
sources such as the alert log, trace files, user and background dump files, and also new types of
troubleshooting files such as Health Monitor reports, Incident packages, SQL test cases and Data
repair records.

In Oracle 11g, there is a new framework (named as fault diagnosability infrastructure) consisting
of many tools for diagnosing and repairing the errors in the database. All those tools refer to the
ADR in their operation.

ADR is developed to provide the following advantages:


1. Diagnosis data, because it is stored in the file system, is available even when the database is down.
2. It is easier to provide Oracle support with diagnosis data when a problem occurs in the database.
3. ADR has diagnosis data not only for the database instance; it has troubleshooting data for other
Oracle components such as ASM and CRS.

Note:
For each database instance two alert log files are generated: one as text file and one with xml
format. Contents of the xml-formatted file can be examined using adrci tool.

Also the xml-formatted alert log is saved in the ADR and specifically in the directory
$ORACLE_BASE/diag/rdbms/$INSTANCE_NAME/$ORACLE_SID/alert

Data Recovery Advisor


Data Recovery Advisor is an Oracle Database 11g tool that automatically diagnoses data failures,
determines and presents appropriate repair options, and executes repairs at the user's request.
Data Recovery Advisor can diagnose failures such as the following:

1. Inaccessible components like datafiles and control files.


2. Physical corruptions such as block checksum failures and invalid block header field values
3. Inconsistent datafiles (online and offline)
4. I/O failures

The advisor, however, does not recover from failures on standby databases or in RAC environments.
This advisor can be used through RMAN or the Enterprise Manager. 
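A typical RMAN session with the advisor looks roughly like this:

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;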

Automatic Memory Management


In Oracle 11g, a new parameter named as MEMORY_TARGET is added to automate memory
allocation for both the SGA and PGA. When this parameter is set, the SGA and the PGA memory
sizes are automatically determined by the instance based on the database workload.

This parameter is dynamic and can be altered using the ALTER SYSTEM command as shown below:
ALTER SYSTEM SET MEMORY_TARGET = 1024M ;

However, if the database is not configured to use this parameter and you want to use it, you must
restart the
database after setting the parameter. 

Invisible Indexes
An invisible index is an index that is not considered by the optimizer when creating execution
plans. It can be used to test the effect of adding an index to a table on a specific query (using an
index hint) without the index being used by the other queries.

When using invisible indexes, consider the following:


- If you rebuild an invisible index, the resulting operation will make the index visible.
- If you want the optimizer to consider the invisible indexes in its operation, you can set the new
initialization
parameter OPTIMIZER_USE_INVISIBLE_INDEXES to TRUE (the default is FALSE). You can set the
parameter in the system and session levels.
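A small example (the table, column and index names are placeholders):

SQL> CREATE INDEX emp_name_idx ON emp (ename) INVISIBLE;
SQL> ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;  -- let this session test the index
SQL> ALTER INDEX emp_name_idx VISIBLE;                          -- make it usable by all sessions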

Read-Only Tables
In Oracle 11g, you can set a table to be read only, i.e. users can only query the table but no DML
statement is allowed on it.
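For example (the table name is a placeholder):
SQL> ALTER TABLE emp READ ONLY;
SQL> ALTER TABLE emp READ WRITE;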
Shrinking Temporary Tablespaces and Tempfiles 
In Oracle 11g, you can shrink temporary tablespaces and tempfiles. 

Server Result Cache


In Oracle 11g, there is a new SGA component called the result cache, which is used to cache SQL
query and PL/SQL function results. The database serves the results for the executed SQL queries
and PL/SQL functions from the cache instead of re-executing the actual query. Of course, the target
is to obtain a better response time. The cached results become invalid when data in the dependent
database objects is modified.

As is clear from its concept, the result cache is mostly useful for frequently executed queries with
rare changes on the retrieved data.
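A minimal illustration (the table and column names are placeholders):

SQL> SHOW PARAMETER result_cache_mode
SQL> SELECT /*+ RESULT_CACHE */ deptno, COUNT(*) FROM emp GROUP BY deptno;
SQL> EXEC DBMS_RESULT_CACHE.MEMORY_REPORT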

SQL Tuning Automation


The SQL Tuning Advisor is run by default every night during the automated maintenance window.
Basically, the advisor catches the SQL statements from AWR that are candidates for tuning (they
are called buckets) during four different time periods. It then automatically creates a SQL profile for
any poorly performing SQL statement, if that helps. Tuned plans are automatically added to the SQL plan
baselines by the automatic SQL tuning task. 

The advisor also may recommend actions like creating  new indexes, refreshing statistics or re-
writing the statement. These actions, however, are not automatically implemented by the
advisor. 

On Oracle 11g Release 2 (11.2.0.2), a new package named as DBMS_AUTO_SQLTUNE should be


used instead of the DBMS_SQLTUNE package. The new package provides more restrictive access
to the Automatic SQL Tuning feature.

To use the DBMS_AUTO_SQLTUNE package, you must have the DBA role, or have EXECUTE
privileges granted by an administrator. The only exception is the EXECUTE_AUTO_TUNING_TASK
procedure, which can only be run by SYS. 

SQL Plan Management


SQL plan management (SPM), is a new feature in Oracle 11g that prevents performance
regressions resulting from sudden changes to the execution plan of a SQL statement by providing
components for capturing, selecting, and evolving SQL plan information. Changes to the execution
plan may result from database upgrades, system and data changes, application upgrades or
bug fixes.

When SPM is enabled, the system maintains a plan history that contains all plans generated by
the optimizer and stores them in a component called the plan baseline. Among the plan history in the
plan baseline, plans that are verified not to cause performance regression are marked as
acceptable. The plan baseline is used by the optimizer to decide on the best plan to use when
compiling a SQL statement.

The repository, stored in the data dictionary, of plan baselines and the statement log maintained by
the optimizer is called the SQL management base (SMB).

SQL Plan management is implemented by undertaking the following phases (a short example of capturing baselines is shown after the list):


1. Capturing SQL Plan Baselines: this can be done automatically or manually.
2. Selecting SQL Plan Baselines by the optimizer
3. Evolving SQL Plan Baselines
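
A short example of capturing and reviewing baselines (the SQL_ID value is a placeholder):

SQL> ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
SQL> SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines;

-- Plans can also be loaded manually from the cursor cache:
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abc123xyz456');
END;
/
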
Database ADDM
Oracle Database 11g has added a new layer of analysis to ADDM called Database ADDM. The
mode in which ADDM worked in Oracle 10g is now called instance ADDM. The main target of database
ADDM is to analyze and report on RAC environments. To enable Database ADDM, you set the
INSTANCES parameter through DBMS_ADVISOR.

New SYSASM Privilege for ASM Administration


SYSASM is a new privilege introduced in Oracle 11g. Users who are granted this privilege can
perform ASM administration tasks. The idea behind this privilege is to separate database
management and the storage management responsibilities. 

Backup and Recovery New Features:-

Enhanced Block Media Recovery


In Oracle Database 11g, there is a new command to perform block media recovery, named the
RECOVER ... BLOCK command, replacing the old BLOCKRECOVER command. The new command is
more efficient because it searches the flashback logs for older uncorrupted versions of the corrupt
blocks. This requires the database to run in archivelog mode and to have Database Flashback enabled.

While the block media recovery is going on, any attempt by users to access data in the corrupt
blocks will result in an error message, telling the user that the data block is corrupt. 
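For example, in RMAN (the datafile and block numbers are placeholders):

RMAN> RECOVER DATAFILE 8 BLOCK 13;
RMAN> RECOVER CORRUPTION LIST;   -- repairs all blocks listed in V$DATABASE_BLOCK_CORRUPTION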

VALIDATE Command 
You can use the new command VALIDATE to manually check for physical and logical corruptions in
datafiles, backup sets, and even individual data blocks. The command by default checks for
physical corruption. You can optionally specify CHECK LOGICAL. Corrupted blocks are reported in
V$DATABASE_BLOCK_CORRUPTION.
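For example (the datafile and block numbers are placeholders):

RMAN> VALIDATE CHECK LOGICAL DATABASE;
RMAN> VALIDATE DATAFILE 4 BLOCK 10 TO 13;
SQL>  SELECT * FROM v$database_block_corruption;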

Configuring an Archived Redo Log Deletion Policy


You can use RMAN to create a persistent configuration that controls when archived redo logs are
eligible for
deletion from disk or tape. This deletion policy applies to all archiving destinations, including the
flash recovery area. When the policy is configured, it applies on the automatic deletion of the logs
in the flash recovery area and the manual deletion by the BACKUP ... DELETE and DELETE ...
ARCHIVELOG commands.

To enable an archived redo log deletion policy, run the CONFIGURE ARCHIVELOG DELETION
POLICY command with the desired options, for example BACKED UP n TIMES.
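Two typical variants (the device type and backup count shown are only illustrative):

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;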

Active Database Duplication


In Oracle Database 11g, you can directly duplicate a database over the network without having to
back up and provide the source database files. This direct database duplication is called active
database duplication. It can be done either with Database Control or through RMAN. The instance
that runs the duplicated database is called the auxiliary instance.
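A rough sketch of the RMAN commands (the connection strings and database names are placeholders):

RMAN> CONNECT TARGET sys@prod
RMAN> CONNECT AUXILIARY sys@dupdb
RMAN> DUPLICATE TARGET DATABASE TO dupdb
        FROM ACTIVE DATABASE
        SPFILE
        NOFILENAMECHECK;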

Virtual Private Catalogs


In Oracle Database 11g, you can restrict access to the recovery catalog by granting access to only
a subset of the metadata in the recovery catalog. The subset that a user has read/write access to
is termed as virtual private catalog, or just virtual catalog. The central or source recovery catalog
is now called the base recovery catalog. 

ASM Restricted Mode

In Oracle 11g, you can start the ASM instance in restricted mode. When in restricted mode,
databases will not be permitted to access the ASM instance. Also, individual diskgroup can be set
in restricted mode. 

Checking Diskgroup

Starting from Oracle Database 11g, you can validate the internal consistency of ASM diskgroup
metadata using the ALTER DISKGROUP ... CHECK command. Summary of errors is logged in the
ASM alert log file. 
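For example (the diskgroup name is a placeholder):

SQL> ALTER DISKGROUP data CHECK;            -- validates metadata; summary of errors goes to the ASM alert log
SQL> ALTER DISKGROUP data CHECK NOREPAIR;   -- report errors only, without attempting repairs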

The FORCE option with Drop Diskgroup Command

If a disk is destroyed beyond repair, you want to drop its diskgroup. But because the disk is
damaged, the diskgroup cannot be mounted and thus you cannot issue the normal DROP DISKGROUP
command against it. In such a condition, Oracle 11g provides the FORCE INCLUDING CONTENTS
option to drop the diskgroup even if it is not mounted.

SQL>DROP DISKGROUP <DISKGROUP_NAME> FORCE INCLUDING CONTENTS; 

ASM Rolling Upgrades


ASM rolling upgrades enable you to independently upgrade or patch clustered ASM nodes without
affecting
database availability, thus providing greater uptime. Rolling upgrade means that all of the
features of a clustered ASM environment function when one or more of the nodes in the cluster
use different software versions. 

Active Data Guard is a new option for Oracle Database 11g Enterprise Edition
An Active Data Guard standby database is an exact copy of the primary that is open read-only
while it continuously applies changes transmitted by the primary database. An active standby can
offload ad-hoc queries, reporting, and fast incremental backups from the primary database,
improving performance and scalability while preventing data loss or downtime due to data
corruptions, database and site failures, human error, or natural disaster.  Oracle Active Data
Guard enables read-only access to a physical standby database.

With Oracle Active Data Guard, a physical standby database can be used for real-time reporting,
with minimal latency between reporting and production data. Compared with traditional replication
methods, Active Data Guard is very simple to use, transparently supports all datatypes, and offers
very high performance. Oracle Active Data Guard also allows backup operations to be off-loaded
to the standby database, and be done very fast using intelligent incremental backups.

Active Dataguard Features:


1.  Physical Standby with Real-time Query
2.  Fast Incremental Backup on Physical Standby.
3.  Automatic Block Repair.
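
To enable the real-time query feature listed above on an existing physical standby, the usual sequence is along these lines (a sketch):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
     USING CURRENT LOGFILE DISCONNECT FROM SESSION;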

1. What is Assm?
Automatic segment space management (ASSM) is a simpler and more efficient way of managing
space within a segment. It completely eliminates any need to specify and tune the pctused,
freelists, and freelist groups’ storage parameters for schema objects created in the tablespace. If
any of these attributes are specified, they are ignored.
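
ASSM is chosen per tablespace at creation time; for example (the datafile path and sizes are assumptions):

SQL> CREATE TABLESPACE app_data
     DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 100M
     EXTENT MANAGEMENT LOCAL
     SEGMENT SPACE MANAGEMENT AUTO;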

2. What is Asmm?
ASMM (Automatic Shared Memory Management), introduced in Oracle 10g through the SGA_TARGET
parameter, automates the sizing of the individual SGA components. It builds on the dynamic memory
allocation infrastructure added in Oracle 9i and improved in each subsequent release. This reduces
the amount of manual configuration required and allows the database to adapt to workload changes.
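
ASMM is enabled by setting SGA_TARGET to a non-zero value while STATISTICS_LEVEL is TYPICAL or ALL; the target size below is an assumption:

SQL> ALTER SYSTEM SET SGA_TARGET = 2G;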

3. What is the use in memory_target parameter?


MEMORY_TARGET provides the following:
1. A single parameter for total SGA and PGA sizes
2. Automatically sizes SGA components and PGA
3. Memory is transferred to where most needed
4. Uses workload information
5. Uses internal advisory predictions
6. Can be enabled by DBCA at the time of database creation.
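
For example (the sizes are assumptions; MEMORY_MAX_TARGET is static and requires a restart):

SQL> ALTER SYSTEM SET MEMORY_MAX_TARGET = 4G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET MEMORY_TARGET = 3G SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP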

4. What is the use of Mman?


MMAN stands for Memory Manager; it is a background process that manages the dynamic resizing
of SGA memory areas as the workload increases or decreases. This process was introduced
in Oracle 10g.
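
You can confirm that the process is running with a query such as:

SQL> SELECT pname, spid FROM v$process WHERE pname = 'MMAN';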

5. Oracle 11g new features?


1. Improved data compression ratios (up to 20x).
2. Ability to upgrade database applications while users remain online.
3. New ease-of-use features that make Grid computing more accessible.
4. Automation of key systems management activities.
6. What is AWR and ADDM?
AWR (Automatic Workload Repository) is a built-in repository (in the SYSAUX tablespace) that
exists in every Oracle Database. At regular intervals, the Oracle Database takes a snapshot of all
of its vital statistics and workload information and stores them in the AWR.
ADDM (Automatic Database Diagnostic Monitor) can be described as the database's doctor. It
allows an Oracle database to diagnose itself and determine how potential problems could be
resolved. ADDM runs automatically after each AWR statistics capture, making the performance
diagnostic data readily available.
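
A few common operations (the interval and retention values below are assumptions):

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30, retention => 43200);
SQL> @?/rdbms/admin/awrrpt.sql
SQL> @?/rdbms/admin/addmrpt.sql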

7. What is proactive tablespace management system?


The Proactive Tablespace Management (PTM) capability in the Oracle Database 10g brings
efficient and powerful space monitoring, notification and space trending to the Oracle Database.
Prior to Oracle Database 10g, the tools available for monitoring and setting up notifications
regularly polled the database to monitor its space usage. Querying space usage information
requires collecting data about the state of the database — state that is constantly changing in a
production system. Because such queries are inherently expensive, the space monitoring tools
typically run them infrequently, once a day or once every couple of hours. When they are run, the
queries steal CPU, IO and memory (especially the buffer cache) resources away from critical
business activity in the production system. It’s a health check that is either late or hurts the
health of the system or worse, both!
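
Tablespace space thresholds are managed through the server-generated alerts framework; a sketch, assuming a tablespace named USERS with warning/critical thresholds of 85% and 97%:

SQL> BEGIN
       DBMS_SERVER_ALERT.SET_THRESHOLD(
         metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
         warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
         warning_value           => '85',
         critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
         critical_value          => '97',
         observation_period      => 1,
         consecutive_occurrences => 1,
         instance_name           => NULL,
         object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
         object_name             => 'USERS');
     END;
     /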

8. What are the advantages of AWR?


Advantages of the new workload repository include:
1. AWR is a historical record of the database's in-memory statistics. In the past, historical
data could be obtained manually using the statspack utility; AWR automatically collects more
precise and granular information than the older methods.
2. With a larger data sample, more informed decisions can be made. The self-tuning mechanism
uses this information for trend analysis.
3. Another benefit is that AWR statistics are accessible to external users, who can build their own
performance monitoring tools, routines, and scripts.
4. AWR collects database performance metrics from several layers, for example:
CPU resource utilization
Memory utilization
Timing statistics
Frequently executed queries, latch statistics, etc.

12c New Features:

01. Pluggable Databases Through Database Consolidation:


Oracle is doing everything to jump onto the cloud bandwagon. With 12c, Oracle is trying to
address the problem of multitenancy through this feature. There is a radical change in the core
database architecture through the introduction of Container Databases (CDB) and Pluggable
Databases (PDB). The memory and background processes are owned by the container database; the
container holds the metadata, while the PDBs hold the user data. You can create up to 253 PDBs,
including the seed PDB.

In a large setup, it is common to see 20 or 30 different instances running in a production
environment. With that many instances, maintenance becomes a nightmare, because each instance
has to be separately:

•Upgraded
•Patched
•Monitored
•Tuned
•RAC Enabled
•Adjusted
•Backed up and 
•Data Guarded.

With the Pluggable Databases feature, you only have to do all of this once, for one instance. Without
this feature, prior to 12c, you would have to consolidate by creating separate schemas, and there is
always a security threat no matter how much isolation you build in. There are also namespace
conflicts: only one public synonym with a given name can exist in the database. With PDBs you can
have a separate HR or SCOTT schema in each PDB, separate EMP and DEPT tables, and separate public
synonyms. Additionally, two PDBs can talk to each other through the regular database link feature.
There is no longer a high startup cost for creating a database; instead of one instance per
database, the shift is to one instance serving many databases. Developers can remain oblivious to
all of this and continue to use a PDB as if it were a traditional database, but for DBAs the world
looks quite different.
Another cool feature is that you can allocate a CPU percentage to each PDB.
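
A sketch of creating and opening a PDB from the seed (the PDB name, admin user, and file paths are assumptions):

SQL> CREATE PLUGGABLE DATABASE pdb1
     ADMIN USER pdb1admin IDENTIFIED BY password
     FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb1/');
SQL> ALTER PLUGGABLE DATABASE pdb1 OPEN;
SQL> ALTER SESSION SET CONTAINER = pdb1;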

As another initiative, Oracle announced a strategic tie-up with salesforce.com during the first
week of July 2013.

02. Redaction Policy:


This is one of the top features in Oracle 12c. Data redaction, in simple terms, means masking of
data. You can set up a data redaction policy; for example, the SSN column in an EMPLOYEE table can
be masked. This is called redaction. From SQL Developer you can do this by going to the table:
Employee -> right-click on Security Policy -> click on New -> click on Redaction Policy -> enter SSN.
When you do a SELECT * FROM employee, it will show the SSN as masked.
The new data masking uses a package called DBMS_REDACT. It is an extension of the FGAC
and VPD features present in earlier versions.
With a policy in place, the users who need to view the data will be able to see it, whereas other
users will not.
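
The same policy can also be created directly with the package; a sketch, assuming an HR.EMPLOYEE table with an SSN column:

SQL> BEGIN
       DBMS_REDACT.ADD_POLICY(
         object_schema => 'HR',
         object_name   => 'EMPLOYEE',
         column_name   => 'SSN',
         policy_name   => 'redact_ssn',
         function_type => DBMS_REDACT.FULL,
         expression    => '1=1');
     END;
     /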

03. Top N Query and Fetch and offset Replacement to Rownum:


With the release of Oracle Database 12c, Oracle has introduced new SQL syntax to simplify
fetching the first few rows: FETCH FIRST n ROWS ONLY, optionally combined with OFFSET for pagination.
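
For example, assuming an EMPLOYEES table:

SQL> SELECT employee_id, salary
     FROM   employees
     ORDER  BY salary DESC
     FETCH  FIRST 10 ROWS ONLY;

SQL> SELECT employee_id, salary
     FROM   employees
     ORDER  BY salary DESC
     OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY;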

04. Adaptive Query Optimization and Online Stats Gathering:


This feature helps the optimizer make runtime adjustments to the execution plan, which leads to
better plans. For statements like CTAS (Create Table As Select) and IAS (Insert As Select), the
statistics are gathered online so that they are available immediately.
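
For example, after a CTAS the statistics are already in place, which you can verify (the table names are assumptions):

SQL> CREATE TABLE sales_copy AS SELECT * FROM sales;
SQL> SELECT num_rows, last_analyzed
     FROM   user_tables
     WHERE  table_name = 'SALES_COPY';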

05. Restore a Table easily through RMAN:


Earlier, if you had to restore a particular table, you had to go through a lot of work, such as
restoring a tablespace to an auxiliary instance or doing Export and Import. The new RECOVER TABLE
command in RMAN simplifies this task.
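
A sketch of the new syntax, assuming the table HR.EMPLOYEES, an SCN of 1853267, and a scratch area under /u01/aux:

RMAN> RECOVER TABLE hr.employees
      UNTIL SCN 1853267
      AUXILIARY DESTINATION '/u01/aux'
      REMAP TABLE hr.employees:employees_recovered;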

06. Size Limit on Varchar2, NVarchar2, Raw Data Types increased:


The previous limit on these data types was 4K. In 12c, it has been increased to 32,767 bytes.
Up to 4K the data is stored inline; longer values are stored out of line. I am sure everyone will
be happy with this small and cute enhancement.
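
Enabling the extended size requires setting MAX_STRING_SIZE to EXTENDED while the database is in UPGRADE mode and running utl32k.sql; a rough outline (the table name is an assumption):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> ALTER SYSTEM SET MAX_STRING_SIZE = EXTENDED;
SQL> @?/rdbms/admin/utl32k.sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> CREATE TABLE t_notes (note VARCHAR2(32767));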

07. Inline PL/SQL Functions and Procedures:


The inline feature is extended in Oracle 12c. In addition to views, we can now have PL/SQL
procedures and functions as inline constructs declared in the WITH clause of a query. The query can
be written as if it were calling a real stored function, yet the functions do not actually exist in
the database; you will not be able to find them in ALL_OBJECTS. This will be a very good feature
for developers to explore, as there is no stored code that needs to be compiled.
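
For example:

SQL> WITH
       FUNCTION double_it(n NUMBER) RETURN NUMBER IS
       BEGIN
         RETURN n * 2;
       END;
     SELECT double_it(21) AS result FROM dual;
     /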

08. Generated as Identity/Sequence Replacement:


You can now create a column with the GENERATED AS IDENTITY clause. That's it. Doing this is
equivalent to creating a separate sequence and calling sequence.NEXTVAL for each row. This is
another handy and neat feature that will help the developer community. It is also called the
no-sequence auto-increment primary key.
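
For example (the table and column names are assumptions):

SQL> CREATE TABLE customers (
       id   NUMBER GENERATED ALWAYS AS IDENTITY,
       name VARCHAR2(100));
SQL> INSERT INTO customers (name) VALUES ('Scott');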

09. Multiple Indexes on a Single Column:


Prior to 12c, you could not create more than one index on the same column or column list. In 12c,
you can have, for example, both a B-tree index and a bitmap index on the same column. Note,
however, that only one of the indexes can be visible, and only one is usable at a given time.
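
A sketch, assuming an EMP table with a DEPTNO column; the second index is created INVISIBLE because only one of them can be visible at a time:

SQL> CREATE INDEX emp_deptno_btree ON emp (deptno);
SQL> CREATE BITMAP INDEX emp_deptno_bmap ON emp (deptno) INVISIBLE;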

10. Online Migration of Table Partition or Sub Partition:


You can very easily migrate a partition or sub-partition from one tablespace to another. Similar to
how online migration was achieved for a non-partitioned table in prior releases, a table partition
or sub-partition can be moved to another tablespace online or offline. When the ONLINE clause is
specified, all DML operations can be performed without interruption on the partition or
sub-partition involved in the operation. In contrast, no DML operations are allowed while the
partition or sub-partition is moved offline.
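
For example (the table, partition, and tablespace names are assumptions):

SQL> ALTER TABLE sales
     MOVE PARTITION sales_2013
     TABLESPACE new_ts
     ONLINE UPDATE INDEXES;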

11. Temporary UNDO:


Prior to 12c, undo records generated by changes to temporary tables were stored in the undo
tablespace. With the temporary undo feature in 12c, that undo can be stored in the temporary
tablespace instead of the undo tablespace. The benefits are a smaller undo tablespace and less
redo generated.
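
Temporary undo is switched on with the TEMP_UNDO_ENABLED parameter, at either session or system level:

SQL> ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
SQL> ALTER SYSTEM SET TEMP_UNDO_ENABLED = TRUE;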

A related 12c addition is the PGA_AGGREGATE_LIMIT parameter, which places a hard limit on the
total PGA memory used by the instance:

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; --disables the hard limit

12. In Database Archiving:


This feature enables archiving rows within a table by marking them as inactive. These inactive
rows are in the database and can be optimized using compression but are not visible to the
application. These records are skipped during FTS (Full Table Scan).
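
A sketch, assuming an ORDERS table:

SQL> ALTER TABLE orders ROW ARCHIVAL;
SQL> UPDATE orders SET ora_archive_state = '1' WHERE order_date < DATE '2010-01-01';
SQL> ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;

Setting ROW ARCHIVAL VISIBILITY to ALL makes the archived rows visible again in that session.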
