
Oracle Architecture Primary Architecture Components

The figure shown above details the Oracle architecture.

Oracle server: An Oracle server includes an Oracle Instance and an Oracle database.

Oracle instance:
An Oracle Instance consists of two different sets of components:

1. Memory structures
2. Background processes

The first set comprises the memory structures of the Oracle instance. When an instance starts up, a memory structure called the System Global Area (SGA) is allocated; at this point the background processes also start. The second set is the background processes themselves (PMON, SMON, RECO, DBW0, LGWR, CKPT, D000, and others). Each background process is a computer program. These processes perform input/output and monitor other Oracle processes to provide good performance and database reliability. An Oracle Instance provides access to one and only one Oracle database.

Oracle database:
An Oracle database consists of files. Sometimes these are referred to as operating system files, but they are actually database files that store the information a firm or organization needs in order to operate. The redo log files are used to recover the database in the event of application program failures, instance failures, and other minor failures. The archived redo log files are used to recover the database if a disk fails. Other files not shown in the figure include:

The required parameter file, which specifies parameters for configuring an Oracle instance when it starts up.
The optional password file, which authenticates special users of the database; these are termed privileged users and include database administrators.
Alert and trace log files, which store information about errors and actions taken that affect the configuration of the database.

Oracle instance
Memory Structures

The memory structures include two areas of memory:

System Global Area (SGA): allocated when an Oracle Instance starts up.
Program Global Area (PGA): allocated when a Server Process starts up.

System Global Area

The SGA is an area in memory that stores information shared by all database processes and by all users of the database (sometimes it is called the Shared Global Area). This information includes both organizational data and control information used by the Oracle Server. The SGA is allocated in memory and virtual memory. The size of the SGA can be established by a DBA by assigning a value to the optional parameter SGA_MAX_SIZE in the parameter file. The SGA is allocated when an Oracle instance (database) is started up, based on values specified in the initialization parameter file (either PFILE or SPFILE).

The SGA has the following mandatory memory structures:

Shared Pool, which includes two components:
  - Library Cache
  - Data Dictionary Cache
Database Buffer Cache
Redo Log Buffer
Other structures (for example, lock and latch management, statistical data)

Additional optional memory structures in the SGA include:

Large Pool
Java Pool
Streams Pool

The SHOW SGA SQL*Plus command will show you the SGA memory allocations. In order to execute SHOW SGA you must be connected with the special privilege SYSDBA (which is only available to user accounts that are members of the dba operating system group on Linux).

SQL> connect / as sysdba
Connected.
SQL> show sga

Total System Global Area 1610612736 bytes
Fixed Size                  2084296 bytes
Variable Size             385876536 bytes
Database Buffers         1207959552 bytes
Redo Buffers               14692352 bytes
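An alternative to SHOW SGA is to query the V$SGA dynamic performance view directly; a minimal sketch follows (the byte values returned will of course differ on your system):

```sql
-- V$SGA returns the same summary rows that SHOW SGA displays
SELECT name, value
FROM   v$sga;
```

The rows returned correspond to Fixed Size, Variable Size, Database Buffers, and Redo Buffers.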

Oracle 8i and earlier versions of the Oracle Server used a static SGA. This meant that if modifications to memory management were required, the database had to be shut down, modifications were made to the init.ora parameter file, and then the database had to be restarted. Oracle 9i and 10g use a dynamic SGA: memory configurations for the System Global Area can be made without shutting down the database instance. The advantage is obvious, as this allows the DBA to resize the Database Buffer Cache and Shared Pool dynamically.

Several initialization parameters affect the amount of random access memory dedicated to the SGA of an Oracle Instance. These are:

SGA_MAX_SIZE: This optional parameter sets a limit on the amount of virtual memory allocated to the SGA; a typical setting might be 1 GB. However, if the value for SGA_MAX_SIZE in the initialization parameter file or server parameter file is less than the sum of the memory allocated for all components, either explicitly in the parameter file or by default, at the time the instance is initialized, then the database ignores the setting for SGA_MAX_SIZE.

DB_CACHE_SIZE: This optional parameter tunes the amount of memory allocated to the Database Buffer Cache in standard database blocks. Block sizes vary among operating systems. The DBORCL database uses 8 KB blocks. The total size of the cache defaults to 48 MB on Linux/UNIX and 52 MB on Windows operating systems.

LOG_BUFFER: This optional parameter specifies the number of bytes allocated for the Redo Log Buffer.

SHARED_POOL_SIZE: This optional parameter specifies the number of bytes of memory allocated to shared SQL and PL/SQL. The default is 16 MB. If the operating system is based on a 64-bit configuration, then the default size is 64 MB.

LARGE_POOL_SIZE: This optional parameter sizes the Large Pool; the default is zero. If the init.ora parameter PARALLEL_AUTOMATIC_TUNING is set to TRUE, then the default size is calculated automatically.

JAVA_POOL_SIZE: This is another optional parameter. The default is 24 MB of memory.

The size of the SGA cannot exceed the parameter SGA_MAX_SIZE minus the combined sizes set by the other parameters (DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE, LARGE_POOL_SIZE, and JAVA_POOL_SIZE).

Memory is allocated to the SGA as contiguous virtual memory in units termed granules. Granule size depends on the estimated total size of the SGA, which, as noted above, depends on the SGA_MAX_SIZE parameter. Granules are sized as follows:

If the SGA is less than 128 MB in total, each granule is 4 MB.
If the SGA is greater than 128 MB in total, each granule is 16 MB.

Granules are assigned to the Database Buffer Cache and Shared Pool, and these two memory components can dynamically grow and shrink. Using contiguous memory improves system performance. The actual number of granules assigned to one of these memory components can be determined by querying the database view named V$BUFFER_POOL.

Granules are allocated when the Oracle server starts a database instance in order to provide memory addressing space to meet the SGA_MAX_SIZE parameter. The minimum is 3 granules: one each for the fixed SGA, Database Buffer Cache, and Shared Pool. In practice, you'll find the SGA is allocated much more memory than this. The SELECT statement shown below shows a current_size of 1,152 granules.

SQL> SELECT name, block_size, current_size, prev_size, prev_buffers
     FROM v$buffer_pool;

NAME                 BLOCK_SIZE CURRENT_SIZE  PREV_SIZE PREV_BUFFERS
-------------------- ---------- ------------ ---------- ------------
DEFAULT                    8192         1152          0            0
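In Oracle 10g, the granule size the instance actually chose can also be checked directly through the V$SGAINFO view; a sketch (the value shown will depend on your SGA size):

```sql
-- Shows the granule size currently in use by this instance, in bytes
SELECT name, bytes
FROM   v$sgainfo
WHERE  name = 'Granule Size';
```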

For additional information on dynamic SGA sizing, enroll in Oracle's Oracle 10g Database Performance Tuning course.

Shared Pool

The Shared Pool is a memory structure that is shared by all system users. It consists of both fixed and variable structures. The variable component grows and shrinks depending on the demands placed on memory size by system users and application programs. Memory can be allocated to the Shared Pool by the parameter SHARED_POOL_SIZE in the parameter file, and you can alter the size of the Shared Pool dynamically with the ALTER SYSTEM SET command. An example command is shown in the figure below. Keep in mind that the total memory allocated to the SGA is set by the SGA_TARGET parameter (and may also be limited by SGA_MAX_SIZE if it is set); since the Shared Pool is part of the SGA, you cannot exceed the maximum size of the SGA. The Shared Pool stores the most recently executed SQL statements and recently used data definitions. This is because some system users and application programs tend to execute the same SQL statements often, and saving this information in memory can improve system performance.
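A minimal sketch of such a resize command follows (the 96M value is purely illustrative; any value must still fit within the SGA limits discussed above):

```sql
-- Dynamically resize the Shared Pool; the value is illustrative
ALTER SYSTEM SET SHARED_POOL_SIZE = 96M;
```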

The Shared Pool includes the Library Cache and Data Dictionary Cache.
Library Cache

Memory is allocated to the Library Cache whenever a SQL statement is parsed or a program unit is called. This enables storage of the most recently used SQL and PL/SQL statements. If the Library Cache is too small, it must purge statement definitions in order to have space to load new SQL and PL/SQL statements. Actual management of this memory structure is through a Least Recently Used (LRU) algorithm: the SQL and PL/SQL statements that are oldest and least recently used are purged when more storage space is needed. The Library Cache is composed of two memory subcomponents:

Shared SQL: This stores/shares the execution plan and parse tree for SQL statements. If a system user executes an identical statement, then the statement does not have to be parsed again in order to execute it.

Shared PL/SQL Procedures and Packages: This stores/shares the most recently used PL/SQL statements such as functions, packages, and triggers.
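Library Cache effectiveness can be checked from the V$LIBRARYCACHE view; a high reload count relative to pins suggests the cache is too small. A sketch:

```sql
-- Pin hit ratio per namespace; RELOADS count statements aged out and re-parsed
SELECT namespace, gets, pins, reloads,
       ROUND(pinhitratio * 100, 2) AS pin_hit_pct
FROM   v$librarycache;
```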

Data Dictionary Cache

The Data Dictionary Cache is a memory structure that caches recently used data dictionary information. This includes user account information, datafile names, table descriptions, user privileges, and other information. The database server manages the size of the Data Dictionary Cache internally; the size depends on the size of the Shared Pool in which the Data Dictionary Cache resides. If the size is too small, then the data dictionary tables that reside on disk must be queried often for information, and this will slow down performance.
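Data Dictionary Cache efficiency can be observed in the V$ROWCACHE view; frequent GETMISSES relative to GETS imply that dictionary data is being re-read from disk. A sketch:

```sql
-- Overall dictionary cache miss ratio (lower is better)
SELECT ROUND(SUM(getmisses) / SUM(gets) * 100, 2) AS dict_miss_pct
FROM   v$rowcache
WHERE  gets > 0;
```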

Database Buffer Cache

The Database Buffer Cache is a fairly large memory object that stores the actual data blocks retrieved from datafiles by system queries and other data manipulation language commands. A query causes a Server Process to first look in the Database Buffer Cache to determine whether the requested information is already located in memory; if so, the information does not need to be retrieved from disk, which speeds up performance. If the information is not in the Database Buffer Cache, the Server Process retrieves the information from disk and stores it in the cache. Keep in mind that information read from disk is read a block at a time, not a row at a time, because a database block is the smallest addressable storage space on disk.

Database blocks are kept in the Database Buffer Cache according to a Least Recently Used (LRU) algorithm and are aged out of memory if a buffer cache block is not used, in order to provide space for the insertion of newly needed database blocks. The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list. The write list holds dirty buffers: buffers holding data that has been modified but whose blocks have not been written back to disk. The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being accessed.

When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list; this causes dirty buffers to age toward the LRU end of the LRU list. When an Oracle user process needs a data row, it searches for the data in the Database Buffer Cache because memory can be searched more quickly than hard disk can be accessed.

If the data row is already in the cache (a cache hit), the process reads the data from memory; otherwise a cache miss occurs and the data must be read from hard disk into the Database Buffer Cache.

Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU list, starting at the LRU end of the list. The search continues until a free buffer is found or until the search reaches the threshold limit of buffers. Each time the user process finds a dirty buffer as it searches the LRU list, that buffer is moved to the write list and the search for a free buffer continues. When the process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the MRU end of the LRU list. If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process stops searching the LRU list and signals the DBW0 background process to write some of the dirty buffers to disk; this frees up some buffers.

The block size for a database is set when the database is created and is determined by the init.ora parameter DB_BLOCK_SIZE. Typical block sizes are 2 KB, 4 KB, 8 KB, 16 KB, and 32 KB. The size of blocks in the Database Buffer Cache matches the block size for the database; the DBORCL database uses an 8 KB block size. Because tablespaces that store Oracle tables can use different (non-standard) block sizes, there can be more than one Database Buffer Cache allocated, so that block sizes in the cache match the block sizes in the non-standard tablespaces. The sizes of the Database Buffer Caches can be controlled by the parameters DB_CACHE_SIZE and DB_nK_CACHE_SIZE to dynamically change the memory allocated to the caches without restarting the Oracle instance.
You can dynamically change the size of the Database Buffer Cache with an ALTER SYSTEM command like the one shown here:

ALTER SYSTEM SET DB_CACHE_SIZE = 96M;

You can have the Oracle Server gather statistics about the Database Buffer Cache to help you size it to achieve an optimal workload for the memory allocation. This information is displayed in the V$DB_CACHE_ADVICE view. In order for statistics to be gathered, you can dynamically alter the system with the ALTER SYSTEM SET DB_CACHE_ADVICE command (valid values are OFF, ON, and READY). However, gathering statistics on system performance always incurs some overhead that will slow down system performance.

SQL> ALTER SYSTEM SET db_cache_advice = ON;
System altered.

SQL> DESC V$DB_cache_advice;
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 ID                                                 NUMBER
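Once db_cache_advice is ON, a query such as the following sketch estimates how physical reads would change at different candidate cache sizes (column names as documented for V$DB_CACHE_ADVICE):

```sql
-- Predicted physical reads for a range of candidate cache sizes
SELECT size_for_estimate AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    advice_status = 'ON'
ORDER  BY size_for_estimate;
```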


KEEP Buffer Pool: This pool retains blocks in memory (data from tables) that are likely to be reused throughout daily processing. An example might be a table containing user names and passwords or a validation table of some type. The DB_KEEP_CACHE_SIZE parameter sizes the KEEP Buffer Pool.

RECYCLE Buffer Pool: This pool is used to store table data that is unlikely to be reused throughout daily processing; thus the data is quickly recycled. The DB_RECYCLE_CACHE_SIZE parameter sizes the RECYCLE Buffer Pool.
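A table is directed into the KEEP or RECYCLE pool through its storage clause. The following sketch assumes a hypothetical lookup table named app_codes and an illustrative pool size:

```sql
-- Size the KEEP pool (illustrative value), then direct a lookup table into it
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 16M;
ALTER TABLE app_codes STORAGE (BUFFER_POOL KEEP);
```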

Redo Log Buffer

The Redo Log Buffer memory object stores images of all changes made to database blocks. As you know, database blocks typically store several table rows of organizational data. This means that if a single column value from one row in a block is changed, the image is stored. Changes include INSERT, UPDATE, DELETE, CREATE, ALTER, and DROP. Think of the Redo Log Buffer as a circular buffer that is reused over and over. As the buffer fills up, copies of the images are written to the Redo Log Files, which are covered in more detail in a later module.

Large Pool

The Large Pool is an optional memory structure that primarily relieves the memory burden placed on the Shared Pool. The Large Pool is used for the following tasks if it is allocated:

Allocating space for session memory requirements from the User Global Area (part of the Server Process) where a Shared Server is in use.
Transactions that interact with more than one database, e.g., a distributed database scenario.
Backup and restore operations by the Recovery Manager (RMAN) process. RMAN uses the Large Pool only if the BACKUP_DISK_IO = n and BACKUP_TAPE_IO_SLAVES = TRUE parameters are set. If the Large Pool is too small, memory allocation for backup will fail and memory will be allocated from the Shared Pool.
Parallel execution message buffers for parallel server operations. The PARALLEL_AUTOMATIC_TUNING = TRUE parameter must be set.
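Whether the Redo Log Buffer is large enough can be gauged from V$SYSSTAT: the 'redo log space requests' statistic counts the times a process had to wait for buffer space. A sketch:

```sql
-- A steadily growing 'redo log space requests' value suggests a small log buffer
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo entries', 'redo log space requests');
```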

The Large Pool size is set with the LARGE_POOL_SIZE parameter; this is not a dynamic parameter. The Large Pool does not use an LRU list to manage memory.

Java Pool

The Java Pool is an optional memory object, but it is required if the database has Oracle Java installed and in use for the Oracle JVM (Java Virtual Machine). The size is set with the JAVA_POOL_SIZE parameter, which defaults to 24 MB. The Java Pool is used for memory allocation to parse Java commands. Storing Java code and data in the Java Pool is analogous to caching SQL and PL/SQL code in the Shared Pool.

Streams Pool

This cache is new to Oracle 10g. It is sized with the parameter STREAMS_POOL_SIZE. This pool stores data and control structures to support the Oracle Streams feature of Oracle Enterprise Edition. Oracle Streams manages the sharing of data and events in a distributed environment. If STREAMS_POOL_SIZE is not set or is zero, memory for Oracle Streams operations is allocated from up to 10% of the Shared Pool memory.

Automatic Shared Memory Management (ASMM)

Prior to Oracle 10g, a DBA had to manually specify SGA component sizes through initialization parameters such as SHARED_POOL_SIZE, DB_CACHE_SIZE, JAVA_POOL_SIZE, and LARGE_POOL_SIZE. Automatic Shared Memory Management enables a DBA to specify the total SGA memory available through the SGA_TARGET initialization parameter. The Oracle Database automatically distributes this memory among the various subcomponents to ensure the most effective memory utilization. The DBORCL database SGA_TARGET is set in the initDBORCL.ora file:

sga_target=1610612736
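The current setting can be displayed, and resized online, from SQL*Plus; the 1G value below is illustrative:

```sql
-- Display the current automatic SGA target, then resize it (illustrative value)
SHOW PARAMETER sga_target
ALTER SYSTEM SET SGA_TARGET = 1G;
```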

With automatic SGA memory management, the different SGA components are flexibly sized to adapt to the SGA available. Setting a single parameter simplifies the administration task: the DBA specifies only the amount of SGA memory available to an instance and can forget about the sizes of the individual components. No out-of-memory errors are generated unless the system has actually run out of memory, and no manual tuning effort is needed. The SGA_TARGET initialization parameter reflects the total size of the SGA and includes memory for the following components:

Fixed SGA and other internal allocations needed by the Oracle Database instance
The log buffer
The shared pool
The Java pool
The buffer cache

If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the SGA_MAX_SIZE value is bumped up to accommodate SGA_TARGET. After startup, SGA_TARGET can be decreased or increased dynamically; however, it cannot exceed the value of SGA_MAX_SIZE that was computed at startup. When you set a value for SGA_TARGET, Oracle Database 10g automatically sizes the most commonly configured components, including:

The shared pool (for SQL and PL/SQL execution)
The Java pool (for Java execution state)
The large pool (for large allocations such as RMAN backup buffers)
The buffer cache

There are a few SGA components whose sizes are not automatically adjusted. The DBA must specify the sizes of these components explicitly if they are needed by an application. Such components are:

Keep/Recycle buffer caches (controlled by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)
Additional buffer caches for non-standard block sizes (controlled by DB_nK_CACHE_SIZE, n = {2, 4, 8, 16, 32})
Streams Pool (controlled by the new parameter STREAMS_POOL_SIZE)
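The sizes that automatic tuning has currently assigned to each component can be inspected through the V$SGA_DYNAMIC_COMPONENTS view. A sketch:

```sql
-- Current, minimum, and maximum sizes of each dynamically sized SGA component
SELECT component, current_size, min_size, max_size
FROM   v$sga_dynamic_components;
```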

Fixed SGA

The fixed SGA is a component of the SGA that varies in size from platform to platform and from release to release. It is compiled into the Oracle binary itself at installation time (hence the name "fixed"). The fixed SGA contains a set of variables that point to the other components of the SGA, and variables that contain the values of various parameters. The size of the fixed SGA is something over which we have no control, and it is generally very small. Think of this area as a bootstrap section of the SGA: something Oracle uses internally to find the other bits and pieces of the SGA.

Program Global Area

The PGA is memory specific to an operating process. The server process allocates the memory structures that it requires in the PGA. The Program Global Area is also termed the Process Global Area (PGA) and is a part of memory allocated outside of the Oracle Instance. The PGA stores data and control information for a single Server Process or a single Background Process. It is allocated when a process is created, and the memory is scavenged by the operating system when the process terminates. This is NOT a shared part of memory: there is one PGA per process only.

There are two ways to manage memory of the PGA:

Manual PGA memory management, where you tell Oracle how much memory it is allowed to use to sort and hash any time it needs to sort or hash in a specific process.
Automatic PGA memory management, where you tell Oracle how much memory it should attempt to use systemwide.

PGA memory management is controlled by the database initialization parameter WORKAREA_SIZE_POLICY. This initialization parameter defaults to AUTO, for automatic PGA memory management.

The content of the PGA

The PGA is subdivided into different areas, each with a different purpose. Figure 14-4 shows the possible contents of the PGA for a dedicated server session. Not all of the PGA areas will exist in every case.

Private SQL Area

A private SQL area holds information about a parsed SQL statement and other session-specific information for processing. When a server process executes SQL or PL/SQL code, the process uses the private SQL area to store bind variable values, query execution state information, and query execution work areas. A user session issuing SQL statements has a Private SQL Area that may be associated with a Shared SQL Area if the same SQL statement is being executed by more than one system user. In a Shared Server configuration, multiple private SQL areas in the same or different sessions can point to a single execution plan in the SGA. For example, 20 executions of SELECT * FROM employees in one session and 10 executions of the same query in a different session can share the same plan.

This often happens in OLTP environments where many users are executing the same application program. In a Dedicated Server environment, the Private SQL Area is located in the Program Global Area. In a Shared Server environment, the Private SQL Area is located in the System Global Area.

A cursor is a name or handle to a specific private SQL area. As shown in Figure 14-5, you can think of a cursor as a pointer on the client side and as a state on the server side. Because cursors are closely associated with private SQL areas, the terms are sometimes used interchangeably. A private SQL area is divided into the following areas:

The run-time area: This area contains query execution state information. For example, the run-time area tracks the number of rows retrieved so far in a full table scan. Oracle Database creates the run-time area as the first step of an execute request. For DML statements, the run-time area is freed when the SQL statement is closed.

The persistent area: This area contains bind variable values. A bind variable value is supplied to a SQL statement at run time when the statement is executed. The persistent area is freed only when the cursor is closed.

SQL Work Area

A SQL work area is a private allocation of PGA memory used for memory-intensive operations. For example, a sort operator uses the sort area to sort a set of rows. Similarly, a hash join operator uses a hash area to build a hash table from its left input, whereas a bitmap merge uses the bitmap merge area to merge data retrieved from scans of multiple bitmap indexes. Oracle requires room to perform sort operations when restricted joins, index builds, and reports are executed. Preferably, this space should be memory space. If the size of a sort is larger than the memory space available, then the sort is done on disk in the user's temporary tablespace.

SORT_AREA_SIZE: The total amount of RAM that will be used to sort information before swapping out to disk.

SORT_AREA_RETAINED_SIZE: The amount of memory that will be used to hold sorted data after the sort is complete. That is, if SORT_AREA_SIZE was 512 KB and SORT_AREA_RETAINED_SIZE was 256 KB, then your server process would use up to 512 KB of memory to sort data during the initial processing of the query. When the sort was complete, the sorting area would be shrunk down to 256 KB, and any sorted data that did not fit in that 256 KB would be written out to the temporary tablespace.

HASH_AREA_SIZE: The amount of memory your server process would use to store hash tables in memory. These structures are used during a hash join, typically when joining a large set with another set. The smaller of the two sets would be hashed into memory, and anything that did not fit in the hash area region of memory would be stored in the temporary tablespace by the join key.
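How often sorts spill to disk can be checked in V$SYSSTAT; ideally 'sorts (disk)' stays near zero. A sketch:

```sql
-- Compare in-memory sorts with sorts that spilled to the temporary tablespace
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');
```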

Automatic PGA Memory Management

The database automatically tunes work area sizes when automatic PGA memory management is enabled. Oracle 9i and later versions enable automatic sizing of the SQL Work Areas by setting WORKAREA_SIZE_POLICY = AUTO (this is the default!) and PGA_AGGREGATE_TARGET = n (where n is some amount of memory established by the DBA). Alternatively, the DBA can let Oracle 10g determine the appropriate amount of memory. Oracle 8i and earlier required the DBA to manually set the following parameters to control SQL Work Area memory allocations:

SORT_AREA_SIZE
HASH_AREA_SIZE
BITMAP_MERGE_AREA_SIZE
CREATE_BITMAP_AREA_SIZE
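Enabling automatic PGA memory management thus comes down to two parameters; the 256M target below is illustrative:

```sql
-- Switch on automatic PGA management with an illustrative aggregate target
ALTER SYSTEM SET WORKAREA_SIZE_POLICY = AUTO;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 256M;
```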

Session Memory: Memory that holds session variables and other session information.

Software Code Area

Software code areas store Oracle executable files running as part of the Oracle instance. These code areas are static in nature and are located in privileged memory that is separate from other user programs. The code can be installed as sharable when multiple Oracle instances execute on the same server with the same software release level.

User Global Area (UGA)

The UGA is session memory: memory allocated for session variables, such as logon information, and other information required by a database session. Essentially, the UGA stores the session state. Figure 14-2 depicts the UGA. If a session loads a PL/SQL package into memory, then the UGA contains the package state, which is the set of values stored in all the package variables at a specific time (see "PL/SQL Packages" on page 8-6). The package state changes when a package subprogram changes the variables. By default, the package variables are unique to, and persist for the life of, the session.

The OLAP page pool is also stored in the UGA. This pool manages OLAP data pages, which are equivalent to data blocks. The page pool is allocated at the start of an OLAP session and released at the end of the session. An OLAP session opens automatically whenever a user queries a dimensional object such as a cube.

The UGA must be available to a database session for the life of the session. For this reason, the UGA cannot be stored in the PGA when using a shared server connection, because the PGA is specific to a single process. Therefore, the UGA is stored in the SGA when using shared server connections, enabling any shared server process access to it. When using a dedicated server connection, the UGA is stored in the PGA.
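The UGA and PGA consumption of the current session can be read from V$SESSTAT joined to V$STATNAME (a sketch; SYS_CONTEXT is used here to pick up the current session's SID):

```sql
-- UGA and PGA memory currently allocated to this session, in bytes
SELECT sn.name, ss.value
FROM   v$sesstat ss
JOIN   v$statname sn ON sn.statistic# = ss.statistic#
WHERE  ss.sid = SYS_CONTEXT('USERENV', 'SID')
AND    sn.name IN ('session uga memory', 'session pga memory');
```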

Background Processes

Each background process is a program meant for a specific purpose, and its role is well defined. Not all background processes are mandatory for an instance: some are mandatory and some are optional. The mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and will be invoked if the corresponding feature is activated.

Background processes are visible as separate operating system processes in Unix/Linux. In Windows, they run as separate threads within the same service. Any issues related to background processes should be monitored and analyzed from the trace files generated and the alert log. Background processes are started automatically when the instance is started.

To find the background processes from the database:

SQL> select SID, PROGRAM from v$session where TYPE='BACKGROUND';

To find the background processes from the OS:

$ ps -ef | grep ora_ | grep SID

Mandatory Background Processes

If any one of these 6 mandatory background processes is killed or is not running, the instance will be aborted.

Database Writer (maximum 20) DBW0-DBW9, DBWa-DBWj

The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk. Although one database writer process (DBW0) is adequate for most systems, you can configure additional processes (DBW1 through DBW9 and DBWa through DBWj) to improve write performance if your system modifies data heavily. These additional DBWn processes are not useful on uniprocessor systems. Multiple database writers can be configured with the initialization parameter DB_WRITER_PROCESSES, depending on the number of CPUs allocated to the instance.

When a buffer in the database buffer cache is modified, it is marked dirty. A cold buffer is a buffer that has not been recently used according to the least recently used (LRU) algorithm. The DBWn process writes cold, dirty buffers to disk so that user processes are able to find cold, clean buffers that can be used to read new blocks into the cache. As buffers are dirtied by user processes, the number of free buffers diminishes. If the number of free buffers drops too low, user processes that must read blocks from disk into the cache are not able to find free buffers. DBWn manages the buffer cache so that user processes can always find free buffers. Whenever a log switch occurs and a redolog file moves from CURRENT to ACTIVE status, Oracle calls DBWn and synchronizes all the dirty blocks in the database buffer cache to the respective datafiles, scattered or randomly.

DBWn will be invoked in the following scenarios:

1. When the dirty blocks in the SGA reach a threshold value.
2. When the database is shutting down with some dirty blocks in the SGA.
3. On timeout: DBWn has a timeout value (3 seconds by default) and wakes up whether there are any dirty blocks or not.
4. When a checkpoint is issued.
5. When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers, it signals DBWn to write.
6. When a huge table wants to enter the SGA and Oracle cannot find enough free space, it decides to flush out LRU blocks; if these happen to be dirty blocks, Oracle calls DBWn before flushing them out.
7. When a RAC ping request is made.
8. When a table is DROPped or TRUNCATEd.
9. When a tablespace goes OFFLINE.
10. When a tablespace is made READ ONLY.
11. When a tablespace goes into BEGIN BACKUP mode.
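The list of background processes actually started for the instance (as opposed to all processes Oracle knows how to start) can also be obtained from V$BGPROCESS; the PADDR filter below is a commonly used idiom. A sketch:

```sql
-- PADDR <> '00' filters to background processes that are actually running
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```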
Log Writer (maximum 1) LGWR LGWR writes redo data(redo entries) from redolog buffers to (online) redolog files, sequentially. Redo log buffer works in circular fashion. It means that it overwrites old entries. But before overwriting, old entries must be copies to redo log files. Usually Log writer process (LGWR) is fast enough to mange these issues. Log writer process (LGWR) writes redo entries after certain amount of time to ensure that free space is available for new redo entries. Redolog file has three stages CURRENT, ACTIVE, INACTIVE and this is a cyclic process. Newly created redolog file will be in UNUSED state. When the LGWR is writing to a particular redolog file, that file is said to be in CURRENT status. If the file is filled up completely then a log switch takes place and the LGWR starts writing to the second file (this is the reason every database requires a minimum of 2 redolog groups). The file which is filled up now becomes from CURRENT to ACTIVE. Log writer will write synchronously to the redo log groups in a circular fashion. If any damage is identified with a redo log file, the log writer will log an error in the LGWR trace file and the system Alert Log. Sometimes, when additional redo log buffer space is required, the LGWR will even write uncommitted redo log entries to release the held buffers. LGWR can also

use group commits (the redo entries of multiple committed transactions written together) to write to the redo logs when a database is undergoing heavy write operations. LGWR is invoked in the following scenarios:
1. When a user performs a commit. When a transaction is committed, a System Change Number (SCN) is generated and tagged to it. The log writer puts a commit record in the redo log buffer and writes it to disk immediately, along with the transaction's redo entries. Changes to the actual data blocks are deferred until a convenient time (the Fast-Commit mechanism).
2. After every three seconds.
3. When the redo log buffer is 1/3 full.
4. When the database is shut down.
5. When the DBWn process checks for redo entries and signals LGWR because redo entries have not yet been written (a dirty buffer cannot be written before its redo is on disk).
Checkpoint (maximum 1) CKPT
The checkpoint (CKPT) is a background process. A checkpoint occurs when all modified database buffers in the Oracle SGA are written out to the datafiles by the database writer (DBWn) process. After a checkpoint completes, the CKPT process writes checkpoint information (timestamp and SCN) to the control files and the datafile headers. CKPT stores the SCN individually in the control file for each datafile; this SCN is called the checkpoint SCN. Checkpoints occur AFTER (not during) every redo log switch and also at intervals specified by initialization parameters. Set the parameter LOG_CHECKPOINTS_TO_ALERT=TRUE to observe checkpoint start and end times in the database alert log. Checkpoints can be forced with the ALTER SYSTEM CHECKPOINT; command. A checkpoint event can occur under the following conditions:
1. Whenever the database buffer cache fills up.
2. Whenever a timeout occurs (3 seconds until 9i, 1 second from 10g).
3. When a log switch occurs.
4. When a manual log switch is done: SQL> ALTER SYSTEM SWITCH LOGFILE;
5. On a manual checkpoint: SQL> ALTER SYSTEM CHECKPOINT;
6.
Graceful shutdown of the database.
7. Whenever the BEGIN BACKUP command is issued.
8. When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT (in seconds) exists between the incremental checkpoint and the tail of the log.
9. When the number of OS blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL exists between the incremental checkpoint and the tail of the log, or when the number of buffers specified by the initialization parameter FAST_START_IO_TARGET required to perform roll-forward is reached.
10. From Oracle 9i onwards, when the time specified by the initialization parameter FAST_START_MTTR_TARGET (in seconds) is

reached; this parameter specifies the time allowed for crash recovery. FAST_START_MTTR_TARGET replaces LOG_CHECKPOINT_INTERVAL and FAST_START_IO_TARGET, but those parameters can still be used.
System Monitor (maximum 1) SMON
The System Monitor (SMON) is responsible for instance recovery, applying entries in the online redo log files to the datafiles. It also performs the other activities outlined in the figure shown below.
1. If the database crashed (e.g. a power failure), then the next time the database is restarted SMON observes that it was not shut down gracefully and performs recovery, known as INSTANCE (CRASH) RECOVERY. During crash recovery, before the database is fully open, any transaction that was committed but whose changes are not yet in the datafiles is applied from the redolog files to the datafiles.
2. Any uncommitted transaction that has already updated a table in a datafile is treated as an in-doubt transaction and is rolled back with the help of the before image available in the rollback segments.
3. SMON also cleans up temporary segments that are no longer in use.
4. It also coalesces contiguous free extents in dictionary-managed tablespaces that have PCTINCREASE set to a non-zero value.
5. In a RAC environment, the SMON process of one instance can perform instance recovery for other instances that have failed.
6. SMON wakes up about every 5 minutes to perform housekeeping activities.
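Both the checkpoint SCN that CKPT records per datafile and the presence of background processes such as SMON can be checked from the data dictionary. A hedged sketch, assuming a SYSDBA session:

```sql
-- Checkpoint SCN recorded in the control file for each datafile
SELECT name, checkpoint_change#
FROM   v$datafile;

-- Confirm that SMON and the other mandatory background processes are running
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> HEXTORAW('00');
```

Re-running the first query after ALTER SYSTEM CHECKPOINT; shows the checkpoint SCN advancing.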

Process Monitor (maximum 1) PMON
The Process Monitor (PMON) is a cleanup process that cleans up after failed processes, such as when a user connection is dropped due to a network failure or a user application program terminates abnormally (abends). It performs the tasks shown in the figure below.

1. If a client has an open transaction that is no longer active (the client session is closed), PMON steps in: the transaction becomes an in-doubt transaction and is rolled back.
2. PMON is responsible for performing recovery if a user process fails. It rolls back uncommitted transactions, and any resources locked by the old session are unlocked by PMON.
3. PMON is responsible for cleaning up the database buffer cache and freeing resources that were allocated to the process.
4. PMON also registers information about the instance and dispatcher processes with the Oracle (network) listener.
5. PMON also checks the dispatcher and server processes and restarts them if they have failed.
6. PMON wakes up every 3 seconds to perform housekeeping activities.
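The listener registration that PMON performs can also be triggered on demand rather than waiting for its periodic cycle. A small sketch (in releases through 11g this registration is handled by PMON; later releases delegate it to a separate LREG process):

```sql
-- Ask the instance to register itself with the listener immediately
ALTER SYSTEM REGISTER;
```

This is useful right after starting a listener, so sessions can connect without waiting for the next registration interval.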

Recoverer (maximum 1) RECO [mandatory from 10g]
This process is intended for recovery in distributed databases. The distributed transaction recovery process finds pending distributed transactions and resolves them. All in-doubt transactions in a distributed database setup are recovered by this process; RECO connects to the remote database to resolve pending transactions. Pending distributed transactions are two-phase commit transactions involving multiple databases. The database where the transaction started is normally the coordinator. It asks the other databases involved in the two-phase commit whether they are ready to commit. If a negative reply is received from one of the other sites, the entire transaction is rolled back; otherwise, the distributed transaction is committed on all sites. However, there is a chance that an error (network-related or otherwise) leaves the two-phase commit transaction in a pending state (i.e. neither committed nor rolled back). It is the role of the RECO process to liaise with the coordinator and resolve the pending two-phase commit transaction; RECO will either commit or roll back this transaction.
Memory Monitor process (MMON)
MMON collects statistics to help the database manage itself. It gathers memory statistics (snapshots) and stores this information in the AWR, which is used by the ADDM (Automatic Database Diagnostic Monitor). MMON is also responsible for issuing alerts for metrics that exceed their thresholds.
Processes
You need to understand three different types of processes:
User Process: starts when a database user requests a connection to an Oracle server.
Server Process: establishes the connection to an Oracle instance when a user process requests a connection; it makes the connection for the user process.
Background Processes: these start when an Oracle instance is started up.
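The in-doubt distributed transactions that RECO resolves are exposed in the DBA_2PC_PENDING view. A hedged sketch of inspecting and, if RECO cannot reach the coordinator, manually resolving one (the transaction id shown is purely illustrative):

```sql
-- In-doubt distributed transactions awaiting resolution
SELECT local_tran_id, state, fail_time
FROM   dba_2pc_pending;

-- Manual resolution when automatic recovery is impossible
-- (the id '1.21.17' is a hypothetical example)
COMMIT FORCE '1.21.17';
-- or: ROLLBACK FORCE '1.21.17';
```

Forcing a commit or rollback should be a last resort, since it overrides the coordinator's decision.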

User and server processes The processes shown in the figure are called user and server processes. These processes are used to manage the execution of SQL statements.

A Shared Server Process can share memory and variable processing for multiple user processes. A Dedicated Server Process manages memory and variables for a single user process.

Connecting to an Oracle Instance Creating a Session

System users can connect to an Oracle database through SQLPlus or through an application program like the Internet Developer Suite (the program becomes the system user). This connection enables users to execute SQL statements. The act of connecting creates a communication pathway between a user process and an Oracle Server. As is shown in the figure above, the User Process communicates with the Oracle Server through a Server Process. The User Process executes on the client computer. The Server Process executes on the server computer, and actually executes SQL statements submitted by the system user. The figure shows a one-to-one correspondence between the User and Server Processes. This is called a Dedicated Server connection. An alternative configuration is to use a Shared Server where more than one User Process shares a Server Process.
Sessions: When a user connects to an Oracle server, this is termed a session. The session starts when the Oracle server validates the user for connection. The session ends when the user logs out (disconnects) or if the connection terminates abnormally (network failure or client computer failure). A user can typically have more than one concurrent session, e.g., the user may connect using SQLPlus and also connect using Internet Developer Suite tools at the same time. The limit of concurrent session connections is controlled by the DBA. If a system user attempts to connect and the Oracle Server is not running, the system user receives the Oracle Not Available error message.
User Process
In order to use Oracle, you must obviously connect to the database. This must occur whether you're using SQLPlus, an Oracle tool such as Designer or Forms, or an application program.
This generates a User Process (a memory object) that generates programmatic calls through your user interface (SQLPlus, Integrated Developer Suite, or application program) that creates a session and causes the generation of a Server Process that is either dedicated or shared. Server Process As you have seen, the Server Process is the go-between for a User Process and the Oracle Instance. In a Dedicated Server environment, there is a single Server Process to serve each User Process. In a Shared Server environment, a Server Process can serve several User Processes, although with some performance reduction. Allocation of server process in a dedicated environment versus a shared environment is covered in further detail in the Oracle10g Database Performance Tuning course offered by Oracle Education.
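Whether a given session is being serviced by a dedicated or a shared server process can be checked in the V$SESSION view. A minimal sketch, assuming query access to the dynamic views:

```sql
-- SERVER shows DEDICATED, SHARED, or NONE (an idle shared-server session)
SELECT username, server, program
FROM   v$session
WHERE  type = 'USER';
```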

Optional Background Processes
Archiver (maximum 10) ARC0-ARC9
We cover the Archiver (ARCn) optional background process in more detail because it is almost always used for production systems storing mission-critical information. The ARCn process must be used to recover from loss of a physical disk drive on systems that are "busy" with lots of transactions being completed. When a Redo Log File fills up, Oracle switches to the next Redo Log File. The DBA creates several of these, and the details of creating them are covered in a later module. If all Redo Log Files fill up, then Oracle switches back to the first one and uses them in a round-robin fashion by overwriting ones that have already been used; it should be obvious that the information stored on the files, once overwritten, is lost forever. If the database is in what is termed ARCHIVELOG mode, then as the Redo Log Files fill up, they are individually written to Archived Redo Log Files, and LGWR does not overwrite a Redo Log File until archiving has completed. Thus, committed data is not lost forever and can be recovered in the event of a disk failure. Only the contents of the SGA will be lost if an Instance fails. In NOARCHIVELOG mode, the Redo Log Files are overwritten and not archived. Recovery can only be made to the last full backup of the database files. All committed transactions after the last full backup are lost, and you can see that this could cost the firm a lot of $$$. When running in ARCHIVELOG mode, the DBA is responsible for ensuring that the Archived Redo Log Files do not consume all available disk space! Usually after two complete backups are made, any Archived Redo Log Files for prior backups are deleted. The ARCn process is responsible for writing the online redolog files to the specified archive log destination after a log switch has occurred. ARCn is present only if the database is running in archivelog mode and automatic archiving is enabled.
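Placing a database into ARCHIVELOG mode follows the pattern below. A hedged sketch from a SYSDBA session; it assumes archive destinations are already configured:

```sql
-- Check the current archiving mode
SELECT log_mode FROM v$database;
ARCHIVE LOG LIST;

-- Enable archiving (requires a clean shutdown and a MOUNT-state restart)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```

Switching back with ALTER DATABASE NOARCHIVELOG follows the same shutdown/mount pattern.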
The log writer process is responsible for starting multiple ARCn processes when the workload increases. Until ARCn completes copying a redolog file, that file is not released to the log writer for overwriting. The number of archiver processes invoked initially is specified by the initialization parameter LOG_ARCHIVE_MAX_PROCESSES (2 by default, maximum 10); the actual number of archiver processes in use may vary based on the workload. ARCH processes running on the primary database select archived redo logs and send them to the standby database. Archive log files are used for media recovery (in case of a hard disk failure) and for maintaining an Oracle standby database via log shipping. ARCn also archives the standby redo logs applied by the managed recovery process (MRP).
Coordinated Job Queue Processes (maximum 1000) CJQ0/Jnnn
Job queue processes carry out batch processing. All scheduled jobs are executed by these processes. The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum number of job processes that can run concurrently. These processes are useful in refreshing materialized views. CJQ0 is Oracle's dynamic job queue coordinator. It periodically selects jobs (from JOB$)

that need to be run, as scheduled by the Oracle job queue. The coordinator process dynamically spawns job queue slave processes (J000-J999) to run the jobs. These jobs can be PL/SQL statements or procedures on an Oracle instance. From 11g Release 2, DBMS_JOB and DBMS_SCHEDULER work without setting JOB_QUEUE_PROCESSES; prior to 11gR2 the default value is 0, and from 11gR2 the default value is 1000.
Lock Monitor (maximum 1) LMON
The lock monitor manages global locks and resources. It handles the redistribution of instance locks whenever instances are started or shut down. The lock monitor also recovers instance lock information prior to the instance recovery process, and coordinates with the Process Monitor (PMON) to recover dead processes that hold instance locks.
Lock Manager Daemon (maximum 10) LMDn
LMDn processes manage instance locks that are used to share resources between instances. LMDn processes also handle deadlock detection and remote lock requests.
Global Cache Service (LMS)
In an Oracle Real Application Clusters environment, this process manages resources and provides inter-instance resource control.
Lock processes (maximum 10) LCK0-LCK9
The instance locks that are used to share resources between instances are held by the lock processes.
Block Server Process (maximum 10) BSP0-BSP9
Block server processes provide a consistent-read image of a buffer requested by a process of another instance, in certain circumstances.
Queue Monitor (maximum 10) QMN0-QMN9
This is the advanced queuing time manager process. QMNn monitors the message queues.
Event Monitor (maximum 1) EMN0/EMON
This process is also related to advanced queuing, and allows a publish/subscribe style of messaging between applications.
Dispatcher (maximum 1000) Dnnn
Intended for multi-threaded server (MTS) setups. Dispatcher processes listen for and receive requests from connected sessions and place them in the request queue for further processing.
Dispatcher processes also pick up outgoing responses from the result queue and transmit them back to the clients. Dnnn processes are mediators between the client processes and the shared server processes. The maximum number of dispatcher processes can be specified using the initialization parameter MAX_DISPATCHERS.
Shared Server Processes (maximum 1000) Snnn
Intended for multi-threaded server (MTS) setups. These processes pick up requests from the call request queue, process them, and return the results to a result queue. The number of shared server processes created at instance startup is specified by the initialization parameter SHARED_SERVERS; the maximum is specified by MAX_SHARED_SERVERS.
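The shared server and dispatcher parameters named above can be adjusted dynamically. A sketch with purely illustrative values; size them to the actual workload:

```sql
-- Illustrative values only
ALTER SYSTEM SET SHARED_SERVERS = 5;
ALTER SYSTEM SET MAX_SHARED_SERVERS = 20;
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)';
```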

Parallel Execution Slaves (maximum 1000) Pnnn
These processes are used for parallel processing, such as parallel execution of SQL statements or parallel recovery. The maximum number of parallel processes that can be invoked is specified by the initialization parameter PARALLEL_MAX_SERVERS.
Trace Writer (maximum 1) TRWR
The trace writer writes trace files from an Oracle internal tracing facility.
Input/Output Slaves (maximum 1000) Innn
These processes simulate asynchronous I/O on platforms that do not support it. The initialization parameter DBWR_IO_SLAVES is set for this purpose.
Dataguard Monitor (maximum 1) DMON
The Data Guard broker process. DMON is started when Data Guard is started.
Managed Recovery Process MRP
This process applies archived redo logs to the standby database.
Remote File Server process RFS
The remote file server process on the standby database receives archived redo logs from the primary database.
Wakeup Monitor Process (maximum 1) WMON
This process was available in older versions of Oracle to alert other processes suspended while waiting for an event to occur. It is obsolete and has been removed.
Recovery Writer (maximum 1) RVWR
This process is responsible for writing flashback logs (to the FRA).
Fetch Archive Log (FAL) Server
Services requests for archived redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can run on a primary database, one for each FAL request.
Fetch Archive Log (FAL) Client
Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap in the sequence.
New Background Processes in Oracle 10g
Memory Manager (maximum 1) MMAN
MMAN dynamically adjusts the sizes of the SGA components such as the database buffer cache, large pool, shared pool, and java pool. It is a new process added in Oracle 10g as part of automatic shared memory management.
Memory Monitor (maximum 1) MMON
MMON monitors the SGA and performs various manageability-related background tasks.
Memory Monitor Light (maximum 1) MMNL
A new background process in Oracle 10g; it assists MMON with frequent, lightweight manageability tasks.

Change Tracking Writer (maximum 1) CTWR
CTWR is used by RMAN: it enables optimized incremental backups using block change tracking (faster incremental backups) via a file named the block change tracking file. CTWR (Change Tracking Writer) is the background process responsible for tracking the changed blocks.
ASMB
The ASMB process passes information to and from the cluster synchronization services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.
Re-Balance RBAL
RBAL is the ASM-related process that performs rebalancing of disk resources controlled by ASM.
Actual Rebalance ARBx
ARBx is configured by ASM_POWER_LIMIT.
New Background Processes in Oracle 11g
ACMS - Atomic Controlfile to Memory Server
DBRM - Database Resource Manager
DIA0 - Diagnosability process 0
DIAG - Diagnosability process
FBDA - Flashback Data Archiver
GTX0 - Global Transaction Process 0
KATE - Konductor (Conductor) of ASM Temporary Errands
MARK - Mark Allocation unit for Resync Koordinator (coordinator)
SMCO - Space Manager
VKTM - Virtual Keeper of TiMe process
W000 - Space Management Worker Process
Database
The Oracle database architecture includes both logical and physical structures:
Physical: control files, redo log files, datafiles, operating system blocks.
Logical: tablespaces, segments, extents, data blocks, tables, indexes.
Physical Structure Database Files
An Oracle database consists of physical files. The database itself has:
Datafiles these contain the organization's actual data.
Redo log files these contain a record of changes made to the database, and enable recovery when failures occur.
Control files these contain physical information about the database.

Other key files as noted above include: Parameter file there are two types of parameter files. 1. The init.ora file (also called the PFILE) is a static parameter file. It contains parameters that specify how the database instance is to start up. For example,

some parameters will specify how to allocate memory to the various parts of the system global area.
2. The spfile.ora is a dynamic parameter file. It also stores parameters that specify how to start up a database; however, its parameters can be modified while the database is running.
Password file specifies which *special* users are authenticated to start up/shut down an Oracle instance.
Archived redo log files these are copies of the redo log files and are necessary for recovery in an online, transaction-processing environment in the event of a disk failure.
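The two parameter file types interconvert with simple commands, and spfile parameters are changed with ALTER SYSTEM. A hedged sketch, assuming default file locations:

```sql
-- Create an spfile from an existing init.ora (default locations assumed)
CREATE SPFILE FROM PFILE;

-- Change a parameter both in memory and in the spfile
ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE = BOTH;

-- Which file was the instance started with? (a non-null VALUE means spfile)
SHOW PARAMETER spfile;
```

SCOPE can be MEMORY (runtime only), SPFILE (takes effect at next startup), or BOTH.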

Logical Structure It is helpful to understand how an Oracle database is organized in terms of a logical structure that is used to organize physical objects.

Tablespace: An Oracle 10g database must always consist of at least two tablespaces (SYSTEM and SYSAUX), although a typical Oracle database will have multiple tablespaces. A tablespace is a logical storage facility (a logical container) for storing objects such as tables, indexes, sequences, clusters, and other database objects. Each tablespace has at least one physical datafile that actually stores the tablespace at the operating system level. A large tablespace may have more than one datafile allocated for storing objects assigned to that tablespace. A tablespace belongs to only one database. Tablespaces can be brought online and taken offline for purposes of backup and management, except for the SYSTEM tablespace, which must always be online. Tablespaces can be in either read-only or read-write status.
Datafile: Datafiles are placed within tablespaces and are physical disk objects. A datafile can only store objects for a single tablespace, but a tablespace may have more than one datafile; this happens when a disk drive fills up and the tablespace needs to be expanded onto a new disk drive. The DBA can change the size of a datafile to make it smaller or larger. The file can also grow in size dynamically as the tablespace grows.
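Resizing a datafile, or letting it grow on demand as just described, looks like this. A sketch reusing the example datafile path that appears elsewhere in this document:

```sql
-- Manually resize an existing datafile
ALTER DATABASE DATAFILE
  '/u02/student/dbockstd/oradata/USER350invent01.dbf' RESIZE 100M;

-- Or let it grow automatically, up to a cap
ALTER DATABASE DATAFILE
  '/u02/student/dbockstd/oradata/USER350invent01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 500M;
```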

Segment: When a logical storage object is created within a tablespace, for example an employee table, a segment is allocated to the object. Each segment is a collection of one or more extents. Obviously a tablespace typically has many segments. A segment cannot span tablespaces, but it can span datafiles that belong to a single tablespace.

Extent: Each object has one segment, which is a physical collection of extents. An extent is a collection of blocks.

Extents are simply collections of contiguous disk storage blocks. A logical storage object such as a table or index always consists of at least one extent; ideally, the initial extent allocated to an object will be large enough to store all data that is initially loaded. As a table or index grows, additional extents are added to the segment. A DBA can add extents to segments in order to tune the performance of the system. An extent cannot span a datafile.
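The extents allocated to a particular segment can be listed from the DBA_EXTENTS view. A sketch; the EMPLOYEE table name is a hypothetical placeholder:

```sql
-- Extents allocated to a hypothetical EMPLOYEE table owned by the current user
SELECT extent_id, file_id, block_id, blocks, bytes
FROM   dba_extents
WHERE  owner = USER
AND    segment_name = 'EMPLOYEE'
ORDER  BY extent_id;
```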

Block: The Oracle Server manages data at the smallest unit in what is termed a block or data block. Data are actually stored in blocks. A physical block is the smallest addressable location on a disk drive for read/write operations. An Oracle data block consists of one or more physical blocks (operating system blocks), so the data block, if larger than an operating system block, should be an even multiple of the operating system block size; e.g., if the Linux operating system block size is 2K or 4K, then the Oracle data block should be 2K, 4K, 8K, 16K, etc. in size. This optimizes I/O. The data block size is set at the time the database is created and cannot be changed. It is set with the DB_BLOCK_SIZE parameter. The maximum data block size depends on the operating system.
Tablespace: An Oracle database is comprised of tablespaces. Tablespaces logically organize data that are physically stored in datafiles. A tablespace belongs to only one database, and has at least one datafile that is used to store data for the associated tablespace. Because disk drives have a finite size, a tablespace can span disk drives when datafiles from more than one disk drive are assigned to a tablespace. This enables systems to be very, very large. Datafiles are always assigned to only one tablespace and, therefore, to only one database.
What are Locally Managed and Dictionary Managed Tablespaces?
A tablespace can be categorized as locally managed or dictionary managed based on how extent allocation is managed.
Locally Managed
The extents allocated to a locally managed tablespace are managed through the use of bitmaps. Each bit corresponds to a block or group of blocks (an extent). The bitmap value (on or off) corresponds to whether or not an extent is allocated or free for reuse. Local management is the default for the SYSTEM tablespace beginning with Oracle 10g. If the SYSTEM tablespace is locally managed, the other tablespaces must also be either locally managed or read-only.
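The block size discussed above can be confirmed both instance-wide and per tablespace. A minimal sketch:

```sql
-- Database-wide default block size (DB_BLOCK_SIZE)
SHOW PARAMETER db_block_size;

-- Per-tablespace block size (non-default sizes need a matching buffer cache)
SELECT tablespace_name, block_size
FROM   dba_tablespaces;
```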

Local management reduces contention for the SYSTEM tablespace because space allocation and deallocation operations for other tablespaces do not need to use data dictionary tables. The LOCAL option is the default, so it is normally not specified. With the LOCAL option, you cannot specify any DEFAULT STORAGE, MINIMUM EXTENT, or TEMPORARY clauses.
UNIFORM a specification of UNIFORM means that the tablespace is managed in uniform extents of the SIZE specified. Use UNIFORM to enable exact control over unused space and when you can predict the space that needs to be allocated for an object or objects. Use K, M, G, etc. to specify the extent size in kilobytes, megabytes, gigabytes, etc. The default is 1M; however, you can specify the extent size with the SIZE clause of the UNIFORM clause.
AUTOALLOCATE with a specification of AUTOALLOCATE instead of UNIFORM, the tablespace is system managed and you cannot specify extent sizes. AUTOALLOCATE is the default; this simplifies disk space allocation because the database automatically selects the appropriate extent size. AUTOALLOCATE wastes some space but simplifies management of the tablespace. Tablespaces with AUTOALLOCATE are allocated minimum extent sizes of 64K; dictionary-managed tablespaces have a minimum extent size of two database blocks.
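How each existing tablespace is managed can be read from DBA_TABLESPACES. A minimal sketch:

```sql
-- LOCAL vs DICTIONARY extent management, UNIFORM vs SYSTEM allocation,
-- and AUTO vs MANUAL segment space management
SELECT tablespace_name, extent_management, allocation_type,
       segment_space_management
FROM   dba_tablespaces;
```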

Advantages of Local Management: Basically all of these advantages lead to improved system performance in terms of response time, particularly the elimination of the need to coalesce free extents.
Local management avoids recursive space management operations. These can occur in dictionary-managed tablespaces when consuming or releasing space in an extent results in another operation that consumes or releases space in an undo segment or data dictionary table.
Because locally managed tablespaces do not record free space in data dictionary tables, they reduce contention on these tables.
Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents.
The sizes of extents that are managed locally can be determined automatically by the system.
Changes to the extent bitmaps do not generate undo information because they do not update tables in the data dictionary (except for special cases such as tablespace quota information).
Example CREATE TABLESPACE command this creates a locally managed Inventory tablespace with AUTOALLOCATE management of extents.
CREATE TABLESPACE inventory
  DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Example CREATE TABLESPACE command this creates a locally managed tablespace with

UNIFORM management of extents with extent sizes of 128K.
CREATE TABLESPACE inventory
  DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
Possible Errors
You cannot specify the following clauses when you explicitly specify EXTENT MANAGEMENT LOCAL: the DEFAULT storage clause, MINIMUM EXTENT, and TEMPORARY.
Segment Space Management in Locally Managed Tablespaces
Use the SEGMENT SPACE MANAGEMENT clause to specify how free and used block space within a segment is to be managed. Once established, you cannot alter the segment space management method for a tablespace.
MANUAL: This setting uses free lists to manage free space within segments. Free lists are lists of data blocks that have space available for inserting rows. You must specify and tune the PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters. MANUAL is the default.
AUTO: This setting uses bitmaps to manage free space within segments (it automates the role of free lists). A bitmap describes the status of each data block within a segment with regard to the block's ability to have additional rows inserted. Bitmaps allow Oracle to manage free space automatically. Specify automatic segment space management only for permanent, locally managed tablespaces. AUTO generally delivers better space utilization than MANUAL, and it is self-tuning.

Example CREATE TABLESPACE command this creates a locally managed tablespace with AUTO segment space management.
CREATE TABLESPACE inventory
  DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
Dictionary Managed
With this approach, the data dictionary tables (sys.fet$, sys.uet$) are used to manage extent allocation and deallocation. The DEFAULT STORAGE clause enables you to customize the allocation of extents. This provides increased flexibility, but less efficiency than locally managed tablespaces.

Example this example creates a tablespace using all DEFAULT STORAGE clauses.
CREATE TABLESPACE inventory
  DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf' SIZE 50M
  EXTENT MANAGEMENT DICTIONARY
  DEFAULT STORAGE (
    INITIAL 50K
    NEXT 50K
    MINEXTENTS 2
    MAXEXTENTS 50
    PCTINCREASE 0 );
The tablespace will be in a single, 50M datafile. The EXTENT MANAGEMENT DICTIONARY clause specifies the management. All segments created in the tablespace inherit the default storage parameters unless their storage parameters are specified explicitly to override the defaults. The storage parameters specify the following:
INITIAL size in bytes of the first extent in a segment.
NEXT size in bytes of the second and subsequent segment extents.
PCTINCREASE percent by which each extent after the second grows. SMON periodically coalesces free space in a dictionary-managed tablespace, but only if the PCTINCREASE setting is NOT zero. Use ALTER TABLESPACE <tablespacename> COALESCE to manually coalesce adjacent free extents.
MINEXTENTS number of extents allocated at a minimum to each segment upon creation of the segment.
MAXEXTENTS number of extents allocated at a maximum to a segment; you can specify UNLIMITED.
Altering Dictionary Tablespace Storage Settings
Any of the storage settings for dictionary-managed tablespaces can be modified with the ALTER TABLESPACE command. This only alters the default settings for future segment allocations.
Segment
What is a Segment?
A segment is a set of extents that contains all the data stored for a specific logical storage structure (database object) within a tablespace. For example, when a CREATE TABLE SQL command is issued by an Oracle user process, Oracle allocates a data segment with one or more extents; for each index, Oracle allocates an index segment with one or more extents.
Type of Segment
Objects in an Oracle database such as tables, indexes, clusters, sequences, etc., are comprised of segments. There are several different types of segments.
There are 11 types of segments in Oracle 10g:
1. Table
2. Table partition
3. Cluster
4. Index
5. Index partition
6. Rollback (undo segment)
7. Deferred rollback
8. Temporary
9. Cache
10. Lobsegment
11. Lobindex
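The segment types actually present in a database can be counted from the DBA_SEGMENTS view. A sketch, assuming DBA privileges:

```sql
-- Count the segments of each type currently allocated
SELECT segment_type, COUNT(*) AS segments
FROM   dba_segments
GROUP  BY segment_type
ORDER  BY segments DESC;
```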

These types can be grouped into four segment classes: data segments, index segments, temporary segments, and rollback segments.
Data Segments
Oracle creates a data segment when you create a table or cluster with the CREATE statement.
Table Data Segment: Oracle creates a table data segment when you create a table; it holds all the data of the table. Table segments do not store table rows in any particular order, and they do not store data for clustered or partitioned tables (those use their own segment types). The DBA has almost no control over the location of rows in a table. The segment belongs to a single tablespace.

Table Partition Data Segment: If a table has high concurrent usage, that is, simultaneous access by many different system users (as would be the case for a SALES_ORDER table in an online transaction processing environment), you will be concerned as the DBA with scalability and availability of information. This may lead you to create a table that is partitioned into more than one table partition segment. A partitioned table has a separate segment for each partition. Each partition may reside in a different tablespace. Each partition may have different storage parameters. Oracle Enterprise Edition must have the partitioning option installed in order to create a partitioned table.

Cluster Data Segment: Rows in a cluster segment are stored based on key value columns. Clustering is sometimes used where two tables are related in a strong-weak entity relationship. A cluster may contain rows from two or more tables. All of the tables in a cluster belong to the same segment and have the same storage parameters. Clustered table rows can be accessed by either a hashing algorithm or by indexing.

Index Segment: When an index is created as part of a CREATE TABLE or CREATE INDEX command, an index segment is created. Tables may have more than one index, and each index has its own segment. Each index segment has a single purpose: to speed up the process of locating rows in a table or cluster.

Index-Organized Table: This special type of table has its data stored within the index based on primary key values. All data is retrievable directly from the index structure (a tree structure).

Index Partition Segment: Just as a table can be partitioned, so can an index. The purpose of using a partitioned index is to minimize contention for the I/O path by spreading index input/output across more than one I/O path. Each partition can be in a different tablespace. The partitioning option of Oracle Enterprise Edition must be installed.
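As a sketch of the index-organized table described above, the ORGANIZATION INDEX clause stores the entire table inside its primary key index structure (the table and column names here are illustrative):

```sql
CREATE TABLE order_lookup (
  order_id NUMBER PRIMARY KEY,   -- rows are stored in primary key order
  status   VARCHAR2(10)
) ORGANIZATION INDEX;            -- no separate table segment is created
```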
Undo Segment (Rollback Segment): When a transaction modifies the database, Oracle copies the original data before modifying it. The original copy of the modified data is called undo data and is stored in the undo tablespace.

Temporary Segment: Temporary segments are created when commands and clauses such as CREATE INDEX, SELECT DISTINCT, GROUP BY, and ORDER BY require a temporary work area. Oracle automatically allocates a temporary segment when a sort operation cannot be completed in memory. Oracle can also allocate temporary segments for temporary tables and for indexes created on temporary tables. Temporary tables hold data that exists only for the duration of a transaction or session. Often sort actions require more memory than is available. When this occurs, intermediate results of the sort are written to disk so that the sort operation can continue; this allows information to swap in and out of memory by writing to and reading from disk. Temporary segments store these intermediate sort results.
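While a large sort is spilling to disk, the temporary segments it allocates can be observed from another session. In Oracle 10g the V$TEMPSEG_USAGE view (named V$SORT_USAGE in earlier releases) reports them; this query is a sketch and assumes the privileges to read that view:

```sql
-- Each row shows a temporary segment currently allocated for a
-- sort, hash join, or temporary table.
SELECT username, tablespace, segtype, blocks
FROM   v$tempseg_usage;
```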

LOB Data Segment: Large objects can be stored as one or more columns in a table. Large objects (LOBs) include images, separate text documents, video, sound files, etc. These LOBs are not stored in the table; they are stored as separate segment objects. The column in the table actually holds a "pointer" value that points to the location of the LOB.

Nested Table: A column in one table may consist of another table definition. The inner table is called a "nested table" and is stored as a separate segment. This might be done for a SALES_ORDER table that has SALES_DETAILS (order line rows) stored as a nested table.

Bootstrap Segment: This is a special cache segment created by the sql.bsq script that runs when a database is created. It stores initial data dictionary cache information when a database is opened. This segment cannot be queried or updated and requires no DBA maintenance.

Extents
An extent is a group of contiguous data blocks. One or more extents in turn make up a segment.
When Are Extents Allocated? When you create a table, Oracle allocates to the table's data segment an initial extent of a specified number of data blocks. Although no rows have been inserted yet, the initial extent is reserved for that table's rows. If the initial extent becomes full and more space is required to hold new data, Oracle automatically allocates an incremental extent for that segment. An incremental extent is a subsequent extent of the same or greater size than the previously allocated extent in that segment.
How Are Extents Allocated? Oracle uses different algorithms to allocate extents, depending on whether they are

locally managed or dictionary managed. Prior to Oracle8i, all tablespaces were created as dictionary managed. Dictionary-managed tablespaces rely on data dictionary tables to track space utilization. Beginning with Oracle8i, you could create locally managed tablespaces, which use bitmaps (instead of data dictionary tables) to track used and free space.

Locally Managed Tablespace
In order to develop an understanding of extent allocation to segments, we will review a CREATE TABLESPACE command that creates a locally managed tablespace.

CREATE TABLESPACE data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 20M
EXTENT MANAGEMENT LOCAL
UNIFORM SIZE 40K;

The above CREATE TABLESPACE command specifies that local management is used for extents and that the size of every extent is set through the UNIFORM SIZE parameter as 40K. Since this parameter cannot be overridden, all segments in this tablespace will be allocated extents that are 40K in size.
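Because the tablespace above was created with UNIFORM SIZE 40K, every extent allocated in it should show the same size. One way to verify this, connected as a suitably privileged user, is to query the DBA_EXTENTS dictionary view:

```sql
-- Every row for this tablespace should report the same BYTES value.
SELECT segment_name, extent_id, blocks, bytes
FROM   dba_extents
WHERE  tablespace_name = 'DATA'
ORDER  BY segment_name, extent_id;
```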

A tablespace that manages its extents locally can have either uniform extent sizes or variable extent sizes that are determined automatically by the system. When you create the tablespace, the UNIFORM or AUTOALLOCATE (system-managed) clause specifies the type of allocation.

Dictionary Managed Tablespace
This next CREATE TABLESPACE command creates a dictionary-managed tablespace. Here the DEFAULT STORAGE clause is used to specify the size of extents allocated to segments created within the tablespace. These parameters can be overridden by parameter specifications in the object creation command, for example, a CREATE TABLE command.

CREATE TABLESPACE data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 20M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE (
  INITIAL 128K
  NEXT 40K
  PCTINCREASE 50
  MINEXTENTS 1
  MAXEXTENTS 999 );

INITIAL specifies the initial extent size. A size that is too large here can cause failure if there is no area on the disk drive with sufficient contiguous disk space to satisfy the INITIAL parameter. When a database is built to store information from an older system that is being

converted to Oracle, a DBA may have some information about how large initial extents need to be in general and may specify a larger size, as is done here at 128K. NEXT specifies the size of the next extent; this is termed an incremental extent. This can also cause failure if the size is too large. Usually a smaller value is used, but if the value is too small, segment fragmentation can result. This must be monitored periodically by a DBA, which is why dictionary-managed tablespaces are NOT preferred. PCTINCREASE can be very troublesome: if you set it very high, e.g. 50% as shown here, the segment extent size can increase by 7,655 percent over just 10 extents. Best solution: a single INITIAL extent of the correct size followed by a small value for NEXT and a value of 0 (or a small value such as 5) for PCTINCREASE.
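The geometric growth caused by PCTINCREASE can be illustrated with a quick calculation. Extent n (for n >= 3) is sized at NEXT * (1 + PCTINCREASE/100)^(n-2), so with NEXT = 40K and PCTINCREASE = 50 the sizes climb rapidly:

```sql
-- Illustrative only: computes the approximate sizes of extents 2-10.
SELECT LEVEL + 1                          AS extent_no,
       ROUND(40 * POWER(1.5, LEVEL - 1))  AS approx_size_kb
FROM   dual
CONNECT BY LEVEL <= 9;
-- Extent 2 is 40K, extent 3 is 60K, ... extent 10 is roughly 1025K.
```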

Use smaller default INITIAL and NEXT values for a dictionary-managed tablespace's default storage clause, as these defaults can be overridden during the creation of individual objects (tables, indexes, etc.) where the STORAGE clause is used. The MINEXTENTS and MAXEXTENTS parameters specify the minimum and maximum number of extents allocated by default to segments that are part of the tablespace.

The default storage parameters can be overridden when a segment is created, as illustrated in this next section.

Example of a CREATE TABLE Command
This shows the creation of a table named Orders in the Data01 tablespace. Here the storage parameters specified in the command override the default storage parameters of the tablespace.

CREATE TABLE Orders (
  Order_Id NUMBER(3) PRIMARY KEY,
  Order_Date DATE DEFAULT (SYSDATE),
  Ship_Date DATE,
  Client VARCHAR(3) NOT NULL,
  Amount_Due NUMBER(10,2),
  Amount_Paid NUMBER(10,2) )
PCTFREE 5
PCTUSED 65
STORAGE (
  INITIAL 1M
  NEXT 48K
  PCTINCREASE 5
  MINEXTENTS 1
  MAXEXTENTS UNLIMITED )
TABLESPACE Data01;

Allocation/Deallocation: When a tablespace is initially created, the first datafile (and each subsequent datafile) created to store the tablespace has a header, which may be one or more blocks at the beginning of the file, as shown in the figure below. As segments are created, extended, or altered, free extents are allocated. The figure here shows that extents can vary in size.

This figure would represent either a dictionary-managed tablespace where individual segments are allocated extents of differing sizes, or a locally managed tablespace whose extent size is specified by the EXTENT MANAGEMENT LOCAL AUTOALLOCATE clause; recall that AUTOALLOCATE enables Oracle to decide the appropriate extent size for a segment. As segments are dropped, altered, or truncated, extents are released to become free extents available for reallocation. Over time, segments in a tablespace's datafiles can become fragmented due to the addition of extents, as shown in this figure.

Database Block
What is a Data Block? A data block is the smallest unit of data storage used by a database. In contrast, at the physical operating system level, all data is stored in bytes, and each operating system has its own block size. Oracle requests data in multiples of Oracle data blocks, not operating system blocks. The standard block size is specified by the initialization parameter DB_BLOCK_SIZE. In addition, you can specify up to five nonstandard block sizes. Data block sizes should be a multiple of the operating system's block size, within the maximum limit, to avoid unnecessary I/O.

Standard Block Size: The DB_CACHE_SIZE parameter replaces the DB_BLOCK_BUFFERS parameter used in Oracle 8i and previous RDBMS software releases. This parameter specifies the size of the Database Buffer Cache. The minimum size for DB_CACHE_SIZE is one granule, where a granule is a unit of contiguous virtual memory allocation in RAM. If the total System Global Area (SGA) based on SGA_MAX_SIZE is less than 128MB, then a granule is 4MB; if the total SGA is greater than 128MB, then a granule is 16MB. The default value for DB_CACHE_SIZE is 48MB.

Nonstandard Block Size: If a DBA wishes to specify one or more nonstandard block sizes, the following parameters are set. Oracle data blocks are the smallest units of storage that Oracle can use or allocate. Do not set the cache parameter that corresponds to the standard DB_BLOCK_SIZE value; for example, if the standard block size is 4K, do not use the DB_4K_CACHE_SIZE parameter.
DB_2K_CACHE_SIZE -- parameter for the 2K nonstandard block size.
DB_4K_CACHE_SIZE -- parameter for the 4K nonstandard block size.
DB_8K_CACHE_SIZE -- parameter for the 8K nonstandard block size.
DB_16K_CACHE_SIZE -- parameter for the 16K nonstandard block size.
DB_32K_CACHE_SIZE -- parameter for the 32K nonstandard block size.

Nonstandard Block Size Tablespaces: The BLOCKSIZE parameter is used to create a tablespace with a nonstandard block size. Example: CREATE TABLESPACE special_apps

DATAFILE '/u01/student/dbockstd/oradata/USER350_spec_apps01.dbf' SIZE 20M
BLOCKSIZE 32K;

Here the nonstandard block size specified with the BLOCKSIZE clause is 32K. This command will not execute unless the DB_32K_CACHE_SIZE parameter has already been specified, because buffers of size 32K must already be allocated in the Database Buffer Cache as part of a sub-cache.
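Before the CREATE TABLESPACE above can succeed, a 32K buffer sub-cache must exist. A minimal sketch (the 32M size is illustrative, and the instance must have SGA memory available for the new sub-cache):

```sql
-- Allocate a buffer sub-cache for 32K blocks; until this exists,
-- CREATE TABLESPACE ... BLOCKSIZE 32K fails with an error.
ALTER SYSTEM SET DB_32K_CACHE_SIZE = 32M;
```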

There are some additional rules regarding the use of multiple block sizes: If an object is partitioned and resides in more than one tablespace, all of the tablespaces where the object resides must have the same block size. Temporary tablespaces must use the standard block size. This also applies to permanent tablespaces that have been specified as default temporary tablespaces for system users.

Data Block Contents
This figure shows the components of a data block. The structure is the same regardless of the type of segment to which the block belongs.

Block Header: contains general block information, such as the block address, the type of segment (for example, data or index), and transaction slot information.
Table Directory: used to track the tables to which row data in the block belongs. Data from more than one table may be in a single block if the data are clustered; the Table Directory is only used if the data are clustered.
Row Directory: used to track which rows from a table are in this block. The Row Directory includes addresses for each row or row fragment in the row data area. When space is allocated in the Row Directory to store information about a row, this space is not reclaimed upon deletion of the row, but is reused when new rows are inserted into the block. A block can be empty of rows, but if it once contained rows, then space remains allocated in the Row Directory (2 bytes per row) for each row that ever existed in the block.
Transaction Slots: space that is used when transactions are in progress that will modify rows in the block.
Overhead: the data block header, table directory, and row directory are referred to collectively as overhead. Some block overhead is fixed in size; the total block overhead size is variable. On average, the fixed and variable portions of data block overhead total 84 to 107 bytes.
Data Space (Row Data): stores row data, which is inserted from the bottom up.
Free Space: the space in the middle of a block that can be allocated to either the header or the data space; it is contiguous when the block is first allocated. Free space is used for insertion of new rows and for updates to rows that require additional space (for example, when a trailing null is updated to a non-null value). Whether an insertion actually occurs in a given data block is a function of the current free space in that data block and the value of the space management parameter PCTFREE. Free space may fragment as rows in the block are modified or deleted; SMON coalesces this free space periodically.

Table Data in a Segment: Table data is stored in the form of rows in a data block.

The figure below shows the block header, then the data space (row data), and the free space. Each row consists of columns with associated overhead. The storage overhead is in the form of "hidden" columns, accessible by the DBMS, that specify the length of each succeeding column.

Rows are stored right next to each other with no space in between. Column values are stored right next to each other in a variable-length format. A length field indicates the length of each column value (note the Length Column 1, Length Column 2, etc., entries in the figure). A column length of 0 indicates a null field. Trailing null columns are not stored.
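The variable-length column storage described above can be inspected with the built-in DUMP function, which returns the internal datatype code, the stored length in bytes, and the byte values:

```sql
-- A three-character literal is stored as three bytes plus a length.
SELECT DUMP('ABC') AS stored_form FROM dual;
-- Typ=96 Len=3: 65,66,67
```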

Row Chaining and Migrating
There are two situations where a data row may not fit into a single data block:
1. The row is too large to fit into one data block when it is first inserted. In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG or LONG RAW. Row chaining in these cases is unavoidable.
2. A row that originally fit into one data block has one or more columns updated so that the overall row length increases, and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row. The rowid of a migrated row does not change.

When a row is chained or migrated, I/O performance associated with that row decreases, because Oracle must scan more than one data block to retrieve the information for the row.

Manual Data Block Free Space Management -- Database Block Space Utilization Parameters
Manual data block management requires a DBA to specify how block space is used and when a block is available for new row insertions. This is the default method for data block management for dictionary-managed tablespace objects (another reason for using locally managed tablespaces with UNIFORM extents). Database block space utilization parameters are used to control space allocation for data and index segments. The INITRANS parameter specifies the initial number of transaction slots created when a database block is first allocated to either a data or index segment. These slots store information about the transactions that are making changes to the block at a given point in time. The amount of space allocated for a transaction slot is 23 bytes; if you set INITRANS to 2, then 46 bytes (2 * 23) are pre-allocated in the header, and so on. These slots are in the database block header.

The INITRANS parameter specifies a minimum level of concurrent access. The default is 1 for a data segment and 2 for an index segment. If a DBA specifies INITRANS at 4, for example, this means that 4 transactions can concurrently make modifications to the database block. Setting this to a figure larger than the default can also eliminate the processing overhead that occurs whenever additional transaction slots have to be allocated to a block's header because the number of concurrent transactions exceeds the INITRANS setting. The MAXTRANS parameter specifies the maximum number of concurrent transactions that can modify rows in a database block. The default maximum is 255, which is quite large. This parameter is set to guarantee that there is sufficient space in the block to store data or index data. Example: suppose a DBA sets INITRANS at 4 and MAXTRANS at 10. Initially, 4 transaction slots are allocated in the block header. If 6 system users process concurrent transactions for a given block, then the number of transaction slots increases by 2 to a total of 6. Once this space is allocated in the header, it is not deallocated. What happens if 11 system users attempt to process concurrent transactions for a given block? The 11th system user is denied access (an Oracle error message is generated) until current transactions complete (are either committed or rolled back).

The PCTFREE and PCTUSED Parameters
You, as the DBA, must decide how much free space is needed for data blocks under manual data block management. You set the free space with the PCTFREE and PCTUSED parameters at the time that you create an object such as a table or index.
PCTFREE: The PCTFREE parameter is used at the time an object is created to set the percentage of block space to be reserved for future updates to rows in the block. This figure shows the situation where the PCTFREE parameter is set to 20 (20%). The default value for PCTFREE is 10%.
New rows can be added to a data block until the amount of space remaining is less than PCTFREE. PCTFREE also reserves space for growth of existing rows through the modification of data values. After PCTFREE is met (there is less free space available than the PCTFREE setting), Oracle considers the block full and will not insert new rows into the block.
PCTUSED: The PCTUSED parameter sets the level at which a block can again be considered by Oracle for insertion of new rows (the block is added back to the free list by delete operations). It is like a low-water mark, whereas PCTFREE is a high-water mark. As free space grows (the space allocated to rows in a database block decreases due to deletions), the block can again have new rows inserted, but only if the percentage of the data block in use falls below PCTUSED. New insertions are allowed when the percentage of a block being used falls below PCTUSED, either through row deletions or through updates that reduce column storage. Example: if PCTUSED is set at 40, then the percentage of block space used must drop to 39% or less before row insertions are again made. The system default for PCTUSED is 40. Oracle tries to keep a data block at least PCTUSED full before using new blocks.

The PCTUSED parameter is not set when Automatic Segment Space Management is enabled.
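The four manual block space parameters discussed above are all specified in the object creation command. A minimal sketch using the example values from this discussion (the table name is illustrative):

```sql
-- PCTFREE/PCTUSED control row insertion thresholds; INITRANS/MAXTRANS
-- control the transaction slots in the block header.
CREATE TABLE order_audit (
  audit_id NUMBER,
  action   VARCHAR2(20)
)
PCTFREE  20
PCTUSED  60
INITRANS 4
MAXTRANS 10;
```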

This figure depicts the situation where PCTUSED is set to 60 and PCTFREE is set to 20 (60% and 20% respectively). Both PCTFREE and PCTUSED are calculated as percentages of the available data space; Oracle deducts the space allocated to the block header from the total block size when computing these parameters. Generally PCTFREE plus PCTUSED should add up to 80; the sum of PCTFREE and PCTUSED cannot exceed 100. If PCTFREE is 20 and PCTUSED is 60, this ensures that at least 60% of each block is used while saving 20% for row updates.

Effects of PCTFREE and PCTUSED:
A high (e.g., 90%) PCTFREE has these effects: There is a lot of space for the growth of existing rows in a data block. Performance is improved, since data blocks do not need to be reorganized very frequently and chaining is reduced. Storage space within a data block may not be used efficiently, as there is always some empty space in the data blocks.
A low (e.g., 40%) PCTFREE has these effects (basically the opposite of a high PCTFREE): There is less space for growth of existing rows. Performance may suffer due to the need to reorganize data in data blocks more frequently: Oracle may need to migrate a row that will no longer fit into a data block due to modification of data within the row. If the row will no longer fit into a single database block, as may be the case for very large rows, then database blocks are chained together logically with pointers. This also causes a performance hit, and chaining may increase, resulting in additional input/output operations; it may also lead a DBA to consider the use of a nonstandard block size. Examine the extent of chaining or migration with the ANALYZE command. You may resolve row chaining and migration by exporting the object (table), dropping the object, and then importing the object. Very little storage space within a data block is wasted.
A high PCTUSED has these effects: It decreases performance, because data blocks experience more migrated and chained rows. It reduces wasted storage space by filling each data block more fully.
A low PCTUSED has these effects: Performance improves due to a decrease in migrated and chained rows. Storage space usage is not as efficient due to more unused space in data blocks.

Guidelines for setting PCTFREE and PCTUSED: If data for an object tends to be fairly stable (doesn't change in value very much), not much free space is needed (as little as 5%). If changes occur extremely often and data values are very volatile, you may need as much as 40% free space. Once this parameter is set, it cannot be changed without at least partially recreating the object affected.

Update activity with high row growth: the application uses tables that are frequently updated, affecting row size. Set PCTFREE moderately high and PCTUSED moderately low to allow space for row growth.
PCTFREE = 20 to 25, PCTUSED = 35 to 40, so (100 - PCTFREE) - PCTUSED = 35 to 45.

Insert activity with low row growth: the application has more insertions of new rows with very little modification of existing rows. Set PCTFREE low and PCTUSED at a moderate level to minimize migration and chaining; this avoids row chaining. Each data block has its space well utilized, but once new row insertion stops, there are no more row insertions until a lot of storage space is again available in a data block.
PCTFREE = 5 to 10, PCTUSED = 50 to 60, so (100 - PCTFREE) - PCTUSED = 30 to 45.

Performance of primary importance and disk space readily available: when disk space is abundant and performance is the critical issue, a DBA must ensure no migration or chaining occurs by using very high PCTFREE and very low PCTUSED settings. A lot of storage space will be wasted to avoid migration and chaining.
PCTFREE = 30, PCTUSED = 30, so (100 - PCTFREE) - PCTUSED = 40.

Disk space usage important and performance secondary: the application uses large tables and disk space usage is critical. Here PCTFREE should be very low while PCTUSED is very high; the tables will experience some data row migration and chaining with a performance hit.
PCTFREE = 5, PCTUSED = 90, so (100 - PCTFREE) - PCTUSED = 5.
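The ANALYZE command mentioned earlier records chained and migrated row counts in the data dictionary, which helps in judging whether PCTFREE is set too low (the table name here is illustrative):

```sql
ANALYZE TABLE orders COMPUTE STATISTICS;

-- CHAIN_CNT counts rows that are chained or migrated.
SELECT table_name, num_rows, chain_cnt
FROM   user_tables
WHERE  table_name = 'ORDERS';
```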

Free Lists: When a segment is created, it is created with a free list that is used to track the blocks allocated to the segment that are available for row insertions. A segment can have more than one free list if the FREELISTS parameter is specified in the storage clause when the object is created. If a block's free space falls below PCTFREE, that block is removed from the free list. Oracle improves performance by not considering blocks that are almost full as candidates for row insertions.

Automatic Segment Space Management (ASSM), also called Automatic Data Block Free Space Management
Free space can be managed either automatically or manually. Automatic segment space management was introduced with Oracle 9i. It simplifies the management of the PCTUSED, FREELISTS, and FREELIST GROUPS parameters and generally provides better space utilization where objects vary considerably in row size. This can also yield

improved concurrent access handling for row insertions. A restriction is that you cannot use this approach if a tablespace will contain LOBs.

Segment free and used space is tracked with bitmaps instead of free lists. Bitmap segments contain a bitmap that is used to track the status of each block in a segment with respect to available space. Think of an individual bit as either being "on" to indicate that a block is available or "off" to indicate that it is not. The bitmap is stored in a separate set of blocks called bitmapped blocks. When a new row needs to be inserted into a segment, the bitmap is searched for a candidate block. This search occurs much more rapidly than a free list search, because a bitmap can often be stored entirely in memory, while using a free list requires traversing a chained data structure (linked list). Automatic segment space management can only be enabled at the tablespace level, and only if the tablespace is locally managed. An example CREATE TABLESPACE command is shown here.

CREATE TABLESPACE user_data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 40K
SEGMENT SPACE MANAGEMENT AUTO;

The SEGMENT SPACE MANAGEMENT AUTO clause specifies the creation of bitmapped segments. Automatic segment space management offers the following benefits: Ease of use. Better space utilization, especially for objects with highly varying row sizes. Better run-time adjustment to variations in concurrent access. Better multi-instance behavior in terms of performance and space utilization.

Two types of statements can increase the amount of free space in a database block: DELETE statements that delete rows, and UPDATE statements that update a column value to a smaller value than was previously stored. Both of these statements release space that can be used subsequently by an INSERT statement.
Released space may or may not be contiguous with the main area of free space in a data block. Oracle coalesces the free space of a data block only when an INSERT or UPDATE statement attempts to use a block that contains enough free space to hold a new row piece, and that free space is fragmented so that the row piece cannot be inserted in a contiguous section of the block. Oracle does this compression only in such situations, because otherwise the performance of the database would decrease due to continuous compression of the free space in data blocks.

Managing Temporary Tablespaces

A temporary tablespace contains transient data that persists only for the duration of a session. A temporary segment is created when a SQL statement requires a temporary work area, such as during sorting of data. The sort segment is created by the first statement that uses a temporary tablespace for sorting after startup, and it is released only at shutdown. Pay special attention to the readings on sort operations, and understand how Oracle first attempts to sort row result sets inside the RAM memory of the SGA or PGA. Sorting can occur whenever a SQL statement contains an ORDER BY or GROUP BY clause. If there is no room in the RAM memory region to sort the result set quickly, Oracle will go to the temporary tablespace and complete the sort operation using disk storage. The management of sorting is a very critical part of SQL tuning, because RAM memory sorts are many thousands of times faster than sorts that have to be done inside the temporary tablespace.

Temporary Datafiles (Tempfiles)
A tempfile is a file that is part of an Oracle database. Tempfiles are used with temporary tablespaces for storing temporary data such as sort spill-over or data for global temporary tables. Locally managed temporary tablespaces have temporary datafiles (tempfiles), which are similar to ordinary datafiles, with the following exceptions:
Tempfiles are always set to NOLOGGING mode.
You cannot make a tempfile read-only.
You cannot rename a tempfile.
You cannot create a tempfile with the ALTER DATABASE statement.
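A locally managed temporary tablespace and its tempfile are created with the TEMPFILE keyword. This sketch follows the path naming used elsewhere in these notes; the file name and sizes are illustrative:

```sql
CREATE TEMPORARY TABLESPACE temp
  TEMPFILE '/u01/student/dbockstd/oradata/USER350temp01.dbf' SIZE 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```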

Allocation of Temporary Segments for Queries
Oracle allocates temporary segments as needed during a user session in the temporary tablespace of the user issuing the statement. You specify this tablespace with a CREATE USER or ALTER USER statement using the TEMPORARY TABLESPACE clause. If no temporary tablespace is defined for the user, then the default temporary tablespace is the SYSTEM tablespace. The default storage characteristics of the containing tablespace determine those of the extents of the temporary segment. Because allocation and deallocation of temporary segments occurs frequently, Oracle drops temporary segments when the statement completes (SMON frees the temporary segments). If a default temporary tablespace was not specified at the time a database was created, a DBA can create one by altering the database:

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

After this, new system user accounts are automatically allocated temp as their temporary tablespace. If you ALTER DATABASE to assign a new default temporary tablespace, all system users will automatically be reassigned to the new default tablespace for temporary operations. You cannot take a default temporary tablespace offline; taking a tablespace offline is done only for system maintenance or to restrict access to a tablespace temporarily, and neither of these activities applies to default temporary tablespaces.

Allocation of Temporary Segments for Temporary Tables and Indexes
Oracle allocates segments for a temporary table when the first INSERT into that table is issued. (This can be an internal insert operation issued by CREATE TABLE AS SELECT.) Segments for a temporary table are allocated in the temporary tablespace of the user who created the temporary table. Oracle drops segments for a transaction-specific temporary table at the end of the transaction, and drops segments for a session-specific temporary table at the end of the session. If other transactions or sessions share the use of that temporary table, the segments containing their data remain in the table.

create global temporary table dbatest ( c1 number, c2 number);

Creating Temporary Tables
To create a table named test with column col1 of type VARCHAR2 length 10, col2 of type NUMBER, and col3 of type CLOB, we can use a CREATE TABLE statement:

CREATE TABLE test (col1 VARCHAR2(10), col2 NUMBER, col3 CLOB);

If I now insert data into the table, the data is visible and accessible to all users. In many cases, however, the data needs to reside in a table only temporarily; in that case we can use temporary tables. Temporary tables are useful in applications where a result set is to be buffered. To create a temporary table we use the CREATE GLOBAL TEMPORARY TABLE statement. A temporary table can be of two types based on the ON COMMIT clause setting:
1) ON COMMIT DELETE ROWS specifies that the temporary table is transaction specific. Data persists in the table only until the end of the transaction. If you end the transaction, the database truncates the table (deletes all rows); so if you issue a COMMIT or run DDL, the data inside the temporary table is lost. This is the default option.
2) ON COMMIT PRESERVE ROWS specifies that the temporary table is session specific. Data persists in the table until the end of the session.
If you end the session, the database truncates the table (deletes all rows). If you type exit in SQL*Plus, for example, the data inside the temporary table is lost.

Example of Transaction-Specific Temporary Tables

1) This statement creates a temporary table that is transaction-specific:

SQL> CREATE GLOBAL TEMPORARY TABLE test_temp (col1 number, col2 number)
     ON COMMIT DELETE ROWS;
Table created.

2) Insert a row into the temporary table.

SQL> insert into test_temp values(1,2);
1 row created.

3) Look at the data in the table.

SQL> select * from test_temp;
      COL1       COL2
---------- ----------
         1          2

4) Issue a COMMIT.

SQL> commit;
Commit complete.

5) Now look at the data in the temporary table. Because I created a transaction-specific temporary table (ON COMMIT DELETE ROWS), the data is lost after the commit.

SQL> select * from test_temp;
no rows selected

Example of Session-Specific Temporary Tables

1) Create the session-specific temporary table test_temp2.

CREATE GLOBAL TEMPORARY TABLE test_temp2 (col1 number, col2 number)
ON COMMIT PRESERVE ROWS;

2) Insert data into it and look at the data both before and after a commit.

SQL> insert into test_temp2 values(3,4);
1 row created.

SQL> select * from test_temp2;
      COL1       COL2
---------- ----------
         3          4

SQL> commit;
Commit complete.

SQL> select * from test_temp2;
      COL1       COL2
---------- ----------
         3          4

3) End the session.

SQL> exit;
Disconnected from Oracle Database 10g Enterprise Edition Release - Production
With the Partitioning, OLAP and Data Mining options

4) Connect in a new session and look at the data again.

-bash-3.00$ sqlplus arju/a
SQL*Plus: Release - Production on Tue Jun 10 00:06:27 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release - Production
With the Partitioning, OLAP and Data Mining options

SQL> select * from test_temp2;
no rows selected

Note: Do not be confused by the GLOBAL keyword. Seeing GLOBAL, you might expect a corresponding LOCAL keyword to exist, but it does not. To create a temporary table you must specify the GLOBAL keyword; there is no LOCAL keyword.
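To confirm that a table was created as a global temporary table, one approach (a sketch, assuming the tables created above exist) is to check the TEMPORARY and DURATION columns of USER_TABLES:

```sql
-- TEMPORARY is 'Y' for global temporary tables;
-- DURATION shows SYS$TRANSACTION or SYS$SESSION
SELECT table_name, temporary, duration
FROM   user_tables
WHERE  table_name IN ('TEST_TEMP', 'TEST_TEMP2');
```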

Creating a Locally Managed Temporary Tablespace

Locally managed temporary tablespaces use tempfiles, which do not modify data outside of the temporary tablespace or generate any redo for temporary tablespace data. The following statement creates a temporary tablespace in which each extent is 16M. Each 16M extent (the equivalent of 8192 blocks when the standard block size is 2K) is represented by a bit in the bitmap for the file. The extent management clause is optional for temporary tablespaces because all temporary tablespaces are created with locally managed extents of a uniform size. The Oracle Database default for UNIFORM SIZE is 1M. The AUTOALLOCATE clause is not allowed for temporary tablespaces.

CREATE TEMPORARY TABLESPACE temp
TEMPFILE '/u01/oradata/shops/temp01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;

Altering a Locally Managed Temporary Tablespace

Adding a tempfile to a tablespace:

ALTER TABLESPACE lmtemp
ADD TEMPFILE '/u02/oracle/data/lmtemp02.dbf' SIZE 18M REUSE;

Note: You cannot use the ALTER TABLESPACE statement with the TEMPORARY keyword to change a locally managed permanent tablespace into a locally managed temporary tablespace; a locally managed temporary tablespace must be created with CREATE TEMPORARY TABLESPACE.

ALTER TABLESPACE lmtemp TEMPFILE OFFLINE;
ALTER TABLESPACE lmtemp TEMPFILE ONLINE;

Note: You cannot take a temporary tablespace offline. Instead, you take its tempfile offline. The view V$TEMPFILE displays the online status of a tempfile.

The ALTER DATABASE statement can also be used to alter tempfiles. The following statements take a tempfile offline and bring it back online. They behave identically to the last two ALTER TABLESPACE statements in the previous example.

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' OFFLINE;
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' ONLINE;

The following statement resizes a tempfile:

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' RESIZE 18M;

The following statement drops a tempfile and deletes the operating system file:

ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP INCLUDING DATAFILES;

To rename a tempfile, you take the tempfile offline, use operating system commands to rename or relocate the tempfile, and then use the ALTER DATABASE RENAME FILE command to update the database control files.
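The tempfile rename procedure described above can be sketched as the following sequence; the destination path is an illustrative assumption, and the OS command assumes a Unix-like system:

```sql
-- 1. Take the tempfile offline
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' OFFLINE;

-- 2. Rename/relocate the file at the operating system level, e.g.:
--    $ mv /u02/oracle/data/lmtemp02.dbf /u03/oracle/data/lmtemp02.dbf

-- 3. Update the control files with the new location
ALTER DATABASE RENAME FILE '/u02/oracle/data/lmtemp02.dbf'
                        TO '/u03/oracle/data/lmtemp02.dbf';

-- 4. Bring the tempfile back online
ALTER DATABASE TEMPFILE '/u03/oracle/data/lmtemp02.dbf' ONLINE;
```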

Temporary Tablespace Groups

You can have more than one temporary tablespace online and active. Oracle supports this through temporary tablespace groups; a group is effectively a synonym for a list of temporary tablespaces.

Creating a Tablespace Group

CREATE TEMPORARY TABLESPACE lmtemp2
TEMPFILE '/u02/oracle/data/lmtemp201.dbf' SIZE 50M
TABLESPACE GROUP group1;

Changing Members of a Tablespace Group

The following statement adds a tablespace to an existing group. It creates and adds tablespace lmtemp3 to group1, so that group1 contains tablespaces lmtemp2 and lmtemp3.

CREATE TEMPORARY TABLESPACE lmtemp3
TEMPFILE '/u02/oracle/data/lmtemp301.dbf' SIZE 25M
TABLESPACE GROUP group1;

Example: Suppose two temporary tablespaces named TEMP01 and TEMP02 have been created. This code assigns the tablespaces to a group named TEMPGRP.

SQL> ALTER TABLESPACE temp01 TABLESPACE GROUP tempgrp;
Tablespace altered.

SQL> ALTER TABLESPACE temp02 TABLESPACE GROUP tempgrp;
Tablespace altered.

Example continued: This code changes the database's default temporary tablespace to TEMPGRP. You use the same command that would be used to assign an individual temporary tablespace as the default, because temporary tablespace groups are treated logically the same as an individual temporary tablespace.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE tempgrp;
Database altered.

To drop a tablespace group, first drop all of its members. Drop a member by assigning the temporary tablespace to a group with an empty string.

SQL> ALTER TABLESPACE temp01 TABLESPACE GROUP '';
Tablespace altered.

To assign a temporary tablespace group to a user, the CREATE USER SQL command is the same as for an individual tablespace. In this example user350 is assigned the temporary tablespace group TEMPGRP.

SQL> CREATE USER user350 IDENTIFIED BY secret_password
  2  DEFAULT TABLESPACE users
  3  TEMPORARY TABLESPACE tempgrp;

Tablespace Information for Temporary Tablespaces

You can view the allocation and deallocation of space in a temporary tablespace sort segment using the V$SORT_SEGMENT view. The V$TEMPSEG_USAGE view identifies the current sort users in those segments. You also use different views for viewing information about tempfiles than you would for datafiles: the V$TEMPFILE and DBA_TEMP_FILES views are analogous to the V$DATAFILE and DBA_DATA_FILES views.
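Group membership can be checked with the DBA_TABLESPACE_GROUPS view; this query is a sketch using the group created in the examples above:

```sql
-- Lists each temporary tablespace group and its member tablespaces
SELECT group_name, tablespace_name
FROM   dba_tablespace_groups
ORDER  BY group_name, tablespace_name;
```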

Managing Undo Data

What is an undo (rollback) segment? When a transaction modifies the database, Oracle copies the original data before modifying it. The original copy of the modified data is called undo data, and it is stored in the undo tablespace.

Transactions

A transaction is a collection of SQL data manipulation language (DML) statements treated as a logical unit. Failure of any statement results in the transaction being "undone". If all statements process successfully, SQL*Plus or the programming application issues a COMMIT to make the database changes permanent. Transactions implicitly commit if a user disconnects from Oracle normally; abnormal disconnections result in transaction rollback. The ROLLBACK command is used to cancel (not commit) a transaction that is in progress.

SET TRANSACTION

Transaction boundaries can be defined with the SET TRANSACTION command. There is no performance benefit to setting transaction boundaries, but doing so enables defining savepoints. A savepoint allows a sequence of DML statements in a transaction to be partitioned so you can roll back one or more of them, or commit the DML statements up to the savepoint. Savepoints are created with the SAVEPOINT savepoint_name command. DML statements issued since a savepoint are rolled back with the ROLLBACK TO SAVEPOINT savepoint_name command.

Types of rollbacks:
statement-level rollback
rollback to a savepoint
rollback of a transaction due to user request
rollback of a transaction due to abnormal process termination
rollback of all outstanding transactions when an instance terminates abnormally
rollback of incomplete transactions during recovery
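The savepoint mechanism described above can be sketched as follows; the account table and its rows are hypothetical:

```sql
-- Assume a simple account table exists, for illustration only
INSERT INTO account VALUES (1, 500);
SAVEPOINT after_first_insert;

INSERT INTO account VALUES (2, 300);

-- Undo only the second insert; the first survives
ROLLBACK TO SAVEPOINT after_first_insert;

-- Make the remaining work (the first insert) permanent
COMMIT;
```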

Undo vs. Rollback

In earlier versions of Oracle, the term rollback was used instead of undo, and instead of managing undo segments, the DBA was responsible for managing rollback segments. Rollback segments were one of the primary areas where problems often arose; thus, the conversion to undo management is a significant improvement.

Undo Segments

A database system can run in either manual undo management mode or automatic undo management mode.

(1) Automatic undo management: In automatic undo management mode, undo space is managed in undo tablespaces. To use automatic undo management mode, the database administrator needs only to create an undo tablespace for each instance and set the UNDO_MANAGEMENT initialization parameter to AUTO. Automatic undo management mode is supported under compatibility levels of Oracle9i or higher. Although manual undo management mode is supported, you are strongly encouraged to run in automatic undo management mode.

(2) Manual undo management: In manual undo management mode, undo space is managed through rollback segments. Manual undo management mode is supported under any compatibility level. Manual undo management is the only method available for Oracle 8i and earlier versions of Oracle, and it is the type of management that involves the use of rollback segments.

Undo data: old data values from tables are saved as undo data by writing a copy of the image from a data block on disk to an undo segment. The location of the data as it existed before modification is also stored.

Undo segment header: this stores a transaction table where information about the current transactions using this particular segment is recorded. A serial transaction uses only one undo segment to store all of its undo data. A single undo segment can support multiple concurrent transactions.

Purposes of Undo Segments

Undo segments have three purposes: (1) transaction rollback, (2) transaction recovery, and (3) read consistency.
Transaction Rollback: Old images of modified columns are saved as undo data in undo segments. If a transaction is rolled back because it cannot be committed, or because the application program directs a rollback, the Oracle server uses the undo data to restore the original values by writing the undo data back to the table/index row.

Transaction Recovery: Sometimes an Oracle Instance will fail, and transactions in progress will neither complete nor be committed. Redo logs bring both committed and uncommitted transactions forward to the point of

instance failure. Undo data is used to undo any transactions that were not committed at the point of failure. Recovery is covered in more detail in a later set of notes.

Read Consistency: Many users simultaneously access a database. These users should be hidden from modifications to the database that have not yet been committed. Also, once a system user begins executing a statement, the statement should not see any changes that are committed after the statement begins. Old values stored in undo segments are provided to system users accessing table rows that are in the process of being changed by another system user, in order to provide a read-consistent image of the data.

In the figure shown above, an UPDATE command has a lock on a data block from the EMPLOYEE table and an undo image of the block is written to the undo segment. The update transaction has not yet committed, so any concurrent SELECT statement by a different system user will result in data being displayed from the undo segment, not from the EMPLOYEE table. This read-consistent image is constructed by the Oracle Server. Undo Segment Types A SYSTEM undo segment is created in the SYSTEM tablespace when a database is created. SYSTEM undo segments are used for modifications to objects stored in the SYSTEM tablespace. This type of Undo Segment works identically in both manual and automatic mode. Databases with more than one tablespace must have at least one non-SYSTEM undo segment for manual mode or a separate Undo tablespace for automatic mode. Manual Mode: A non-SYSTEM undo segment is created by a DBA and is used for changes to objects in a non-SYSTEM tablespace. There are two types of non-SYSTEM undo segments: (1) Private and (2) Public. Private Undo Segments: These are brought online by an instance if they are listed in the parameter file. They can also be brought online by issuing an ALTER ROLLBACK SEGMENT segment_name ONLINE command (prior to Oracle 9i, undo segments were named rollback segments and the command has not changed). Private undo segments are used for a single Database Instance. Public Undo Segments: These form a pool of undo segments available in a database. These are used with Oracle Real Application Clusters as a pool of undo segments available to any of the Real Application Cluster instances. You can learn more about public undo segments by studying the Oracle9i Real Application Clusters and Administration manual. Deferred Undo Segments: These are maintained by the Oracle Server so a DBA does not have to maintain them. 
They can be created when a tablespace is brought offline (immediate, temporary, or recovery) and are used for undo transactions when the tablespace is brought back online. They are dropped by the Oracle Server automatically when they are no longer needed. Automatic Undo Management

Oracle provides a fully automated mechanism, referred to as automatic undo management, for managing undo information and space. In automatic undo management mode, undo space is managed in undo tablespaces. To use automatic undo management mode, the database administrator needs only to create an undo tablespace for each instance and set the UNDO_MANAGEMENT initialization parameter to AUTO. The Oracle Server automatically maintains undo data in the undo tablespace: Oracle automatically creates, sizes, and manages undo segments.

Automatic undo segments are named with the convention _SYSSMUn$; for example, _SYSSMU1$, _SYSSMU2$, and so on.

Configuration of an Automatic Undo Tablespace

When a single undo tablespace exists in a database, the UNDO_MANAGEMENT parameter in the initialization file is set to AUTO and Oracle automatically uses that single undo tablespace. If more than one undo tablespace exists (so they can be switched if necessary, although only one can be active), the UNDO_TABLESPACE parameter in the initialization file specifies the name of the undo tablespace to be used by the Oracle Server when an Oracle Instance starts up. If no undo tablespace exists, Oracle will start the database and use the SYSTEM tablespace rollback segment for undo; an alert message is written to the alert file to warn that no undo tablespace is available.

Examples:
UNDO_MANAGEMENT=AUTO or UNDO_MANAGEMENT=MANUAL
UNDO_TABLESPACE=UNDO01

You cannot dynamically change UNDO_MANAGEMENT from AUTO to MANUAL or vice versa. When in MANUAL mode, the DBA must create and manage undo segments for the database. If you have two undo tablespaces, you can alter the system to change the undo tablespace that is in use as follows:

ALTER SYSTEM SET undo_tablespace = UNDO02;

Undo Parameters

To reduce this management overhead, Oracle introduced Automated Undo Management, which allows Oracle to manage all of the undo segments in one tablespace, called the UNDO tablespace. The DBA creates the UNDO tablespace using the CREATE TABLESPACE command with the UNDO keyword. Having created the UNDO tablespace, the DBA defines the following parameters in the database parameter file:

UNDO_TABLESPACE - the name of the undo tablespace
UNDO_MANAGEMENT - AUTO
UNDO_RETENTION - defines the retention period for generated undo; used to reduce "snapshot too old" errors

Undo Mode Parameter

UNDO_MANAGEMENT --- If AUTO, the instance uses automatic undo management. The default is MANUAL.

Specify Tablespace Parameter

UNDO_TABLESPACE --- An optional dynamic parameter specifying the name of an undo tablespace. This parameter should be used only when the database has multiple undo tablespaces and you want to direct the database instance to use a particular undo tablespace.

Undo Retention Parameter

If undo segment data is to be retained a long time, then the undo tablespace will need larger datafiles. After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent-read purposes, long-running queries may require this old undo information to produce older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. For these reasons, it is desirable to retain the old undo information for as long as possible. The retention period is set with the UNDO_RETENTION parameter, which defines the period in seconds. You can set this parameter in the initialization file or you can dynamically alter it with the ALTER SYSTEM command:

ALTER SYSTEM SET UNDO_RETENTION = 43200;

The above command will retain undo segment data for 43,200 seconds, which is 720 minutes (12 hours); the default value is 900 seconds (15 minutes). If the tablespace is too small to store undo segment data for 720 minutes, then the data is not retained; instead, space is recovered by the Oracle Server to be allocated to new active transactions.

ORA-1555 "snapshot too old" error: The database makes its best effort to honor the specified minimum undo retention period, provided that the undo tablespace has space available for new transactions. When available space for new transactions becomes short, the database begins to overwrite expired undo. If the undo tablespace has no space for new transactions after all expired undo is overwritten, the database may begin overwriting unexpired undo information. If any of this overwritten undo information is required for a consistent read in a currently running long query, the query can fail with the "snapshot too old" error message. A DBA can also determine how long to retain undo data to provide consistent reads. If undo data is not retained long enough, and a system user attempts to access data that should be located in an undo segment, then an Oracle error is returned: ORA-1555 "snapshot too old". This means read consistency could not be achieved by Oracle.

Oracle 10g automatically tunes undo retention by collecting database use statistics. Specifying UNDO_RETENTION sets a low threshold so that undo data is retained at a minimum for the threshold value specified, provided there is sufficient undo tablespace capacity.

RETENTION GUARANTEE: To guarantee the success of long-running queries or Oracle Flashback operations, you can enable retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is guaranteed; the database never overwrites unexpired undo data, even if that means transactions fail due to lack of space in the undo tablespace. If retention guarantee is not enabled, the database can overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This option is disabled by default.

For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to honor the minimum retention period specified by UNDO_RETENTION. When space is low, instead of overwriting unexpired undo information, the tablespace auto-extends. If the MAXSIZE clause is specified for an auto-extending undo tablespace, the database may begin to overwrite unexpired undo information once the maximum size is reached. A long-running query reading from those overwritten undo blocks then naturally fails with ORA-01555.

You enable retention guarantee by specifying the RETENTION GUARANTEE clause for the undo tablespace when you create it with either the CREATE DATABASE or CREATE UNDO TABLESPACE statement. Or, you can later specify this clause in an ALTER TABLESPACE statement. You disable retention guarantee with the RETENTION NOGUARANTEE clause.

You can query the RETENTION column of the DBA_TABLESPACES view to determine the retention guarantee setting for the undo tablespace; possible values are GUARANTEE, NOGUARANTEE, and NOT APPLY (used for tablespaces other than the undo tablespace). The TUNED_UNDORETENTION column of the V$UNDOSTAT dynamic performance view can be queried to determine the amount of time undo data is retained for an Oracle database.
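The TUNED_UNDORETENTION column can be inspected with a query along these lines (a sketch):

```sql
-- Shows the automatically tuned retention (in seconds)
-- for recent 10-minute statistics intervals
SELECT TO_CHAR(begin_time, 'yyyy-mm-dd hh24:mi') begin_time,
       tuned_undoretention
FROM   v$undostat
ORDER  BY begin_time;
```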

Status of Undo Segments

When automatic undo management is enabled, there is always a current undo retention period: the minimum amount of time that Oracle Database attempts to retain old undo information before overwriting it. Three statuses of undo segments exist in an undo tablespace:

Active (unexpired): Old undo information whose age is less than the current undo retention period is said to be unexpired. These segments are needed for read consistency even after a transaction commits.

Expired: Old (committed) undo information that is older than the current undo retention period is said to be expired. These segments store undo data that has been committed; all queries for the data are complete and the undo retention period has been reached.

Unused: These segments have space that has never been used.
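The current mix of extent statuses in the undo tablespace can be examined through the DBA_UNDO_EXTENTS view (a sketch; the STATUS column reports ACTIVE, UNEXPIRED, or EXPIRED):

```sql
-- Summarize undo extents by status
SELECT status, COUNT(*) AS extents, SUM(bytes) AS total_bytes
FROM   dba_undo_extents
GROUP  BY status;
```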

Sizing the Undo Tablespace

The minimum size for an undo tablespace is enough space to hold before-image versions of all active transactions that have not been committed or rolled back. When space is inadequate to support changes to uncommitted transactions for rollback operations, the error message ORA-30036: Unable to extend segment by space_qtr in undo tablespace tablespace_name is displayed, and the DBA must increase the size of the undo tablespace.

Initial size: enable automatic extension (use the AUTOEXTEND ON clause with the CREATE TABLESPACE or ALTER TABLESPACE commands) for undo tablespace datafiles so they automatically increase in size as more undo space is needed. After the system stabilizes, Oracle recommends setting the undo tablespace maximum size to about 10% more than the current size. The Undo Advisor software available in Enterprise Manager can be used to calculate the amount of undo retention disk space a database needs.

Handling the ORA-30019 Error

Older application programs may contain PL/SQL code that uses the SET TRANSACTION USE ROLLBACK SEGMENT statement to specify a particular rollback segment when processing large batch transactions. If such a program has not been modified for Automatic Undo Management, this command would normally return an Oracle error: ORA-30019: Illegal rollback segment operation in Automatic Undo mode. You can suppress these errors by setting the UNDO_SUPPRESS_ERRORS parameter to TRUE in the initialization file.

Managing the Undo Tablespace

Creating the Undo Tablespace

You will recall from our earlier studies that an undo tablespace can be created by specifying a clause in the CREATE DATABASE command.

CREATE DATABASE USER350
(... more clauses go here ...)
UNDO TABLESPACE undo01
DATAFILE '/u02/student/dbockstd/oradata/USER350undo01.dbf' SIZE 20M
AUTOEXTEND ON NEXT 1M MAXSIZE 50M
(... more clauses follow the UNDO TABLESPACE clause here ...)

In the example command shown above, the undo tablespace is named UNDO01. If you do not specify an UNDO TABLESPACE clause within the CREATE DATABASE command, but you do set the UNDO_MANAGEMENT parameter to AUTO in the initialization file, then Oracle automatically creates an undo tablespace named SYS_UNDOTBS stored within a file named 'dbu1<oracle_SID>.dbf' located in the $ORACLE_HOME/dbs directory. The initial size of this undo tablespace is operating system dependent and AUTOEXTEND is set to ON.

You can also create an undo tablespace with the CREATE UNDO TABLESPACE command.

CREATE UNDO TABLESPACE undo02
DATAFILE '/u02/student/dbockstd/oradata/USER350undo02.dbf' SIZE 25M REUSE
AUTOEXTEND ON;

Altering the Undo Tablespace

The ALTER TABLESPACE command can be used to modify an undo tablespace. For example, the DBA may need to add an additional datafile to the undo tablespace.

ALTER TABLESPACE undo01
ADD DATAFILE '/u02/student/dbockstd/oradata/USER350undo02.dbf' SIZE 30M
AUTOEXTEND ON;

The DBA can also use the following clauses:
RENAME
DATAFILE [ONLINE | OFFLINE]
BEGIN BACKUP
END BACKUP

If the database contains multiple undo tablespaces, you can optionally specify at startup that you want to use a specific undo tablespace by setting the UNDO_TABLESPACE initialization parameter. Use the ALTER SYSTEM command to switch between undo tablespaces; remember, only one undo tablespace can be active at a time.

ALTER SYSTEM SET UNDO_TABLESPACE=undo03;

Dropping an Undo Tablespace

The DROP TABLESPACE command can be used to drop an undo tablespace that is no longer needed.

DROP TABLESPACE undo02 INCLUDING CONTENTS AND DATAFILES;

The undo tablespace to be dropped cannot be in use. The clause INCLUDING CONTENTS AND DATAFILES causes the contents (segments) and the datafiles at the operating system level to be deleted. If the tablespace is active, you must switch to a new undo tablespace and drop the old one only after all current transactions are complete. The following query will display any active transactions. The PENDING OFFLINE status indicates that an undo segment within the undo tablespace has active transactions. There are no active transactions when the query returns no rows.
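The switch-then-drop procedure described above can be sketched as the following sequence; the undo03 tablespace and its datafile path are illustrative assumptions:

```sql
-- 1. Create the replacement undo tablespace (names are hypothetical)
CREATE UNDO TABLESPACE undo03
  DATAFILE '/u02/oracle/data/undo03.dbf' SIZE 25M AUTOEXTEND ON;

-- 2. Make it the active undo tablespace
ALTER SYSTEM SET UNDO_TABLESPACE = undo03;

-- 3. After all transactions using the old tablespace complete, drop it
DROP TABLESPACE undo02 INCLUDING CONTENTS AND DATAFILES;
```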

SELECT, b.status
FROM v$rollname a, v$rollstat b
WHERE IN (SELECT segment_name FROM dba_segments
                WHERE tablespace_name = 'UNDOTBS1')
AND a.usn = b.usn;

NAME                           STATUS
------------------------------ ---------------
_SYSSMU1$                      ONLINE
_SYSSMU2$                      ONLINE
_SYSSMU3$                      ONLINE
_SYSSMU4$                      ONLINE
_SYSSMU5$                      ONLINE
_SYSSMU6$                      ONLINE
_SYSSMU7$                      ONLINE
_SYSSMU8$                      ONLINE
_SYSSMU9$                      ONLINE
_SYSSMU10$                     ONLINE

10 rows selected.

Undo Data Statistics

The V$UNDOSTAT view displays statistical data that shows how well a database is performing. Each row in the view represents statistics collected for a 10-minute interval. You can use this view to estimate the amount of undo storage space needed for the current workload. If workloads vary considerably throughout the day, then a DBA should conduct estimations during peak workloads. The ssolderrcnt column displays the number of queries that failed with a "snapshot too old" error.

SELECT TO_CHAR(end_time, 'yyyy-mm-dd hh24:mi') end_time, undoblks, ssolderrcnt
FROM v$undostat;

END_TIME           UNDOBLKS SSOLDERRCNT
---------------- ---------- -----------
2009-06-14 11:58          0           0
2009-06-14 11:56          0           0
2009-06-14 11:46          0           0
More...

In order to size an Undo tablespace, a DBA needs three pieces of information. Two are obtained from the initialization file: UNDO_RETENTION and DB_BLOCK_SIZE. The third piece of information is obtained by querying the database: the number of undo blocks generated per second.

SELECT (SUM(undoblks))/SUM((end_time - begin_time) * 86400)
FROM v$undostat;

(SUM(UNDOBLKS))/SUM((END_TIME-BEGIN_TIME)*86400)
------------------------------------------------
                                      .028608781

In this next query, the END_TIME and BEGIN_TIME columns are DATE data, and subtracting them gives a result in days; converting days to seconds is done by multiplying by 86,400, the number of seconds in a day. This value is then multiplied by the size of an undo block, which is the same size as the database block defined by the DB_BLOCK_SIZE parameter. The number of bytes of undo tablespace storage needed is calculated by this query:

SELECT (UR * (UPS * DBS)) + (DBS * 24) AS "Bytes"
FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'),
     (SELECT (SUM(undoblks)/SUM((end_time - begin_time) * 86400)) AS UPS
      FROM v$undostat),
     (SELECT value AS DBS FROM v$parameter WHERE name = 'db_block_size');

Bytes
----------
407391.527

Convert this figure to megabytes of storage by dividing by 1,048,576 (the number of bytes in a megabyte). The undo tablespace needs to be only about 0.38 MB according to this calculation, although this is because the sample database has very few transactions.

Undo Quota

An object called a resource plan can be used to group users and place limits on the amount of resources that can be used by a given group. This may become necessary when long transactions or poorly written transactions consume limited database resources. If the database has no resource bottlenecks, then the allocation of quotas can be ignored.

Sometimes undo data space is a limited resource. A DBA can limit the amount of undo data space used by a group by setting the UNDO_POOL parameter, which defaults to unlimited. If the group exceeds the quota, then new transactions are not processed until old ones complete, and the group members receive the ORA-30027: Undo quota violation failed to get %s (bytes) error message. Resource plans are covered in more detail in a later set of notes.

Undo Segment Information

The following views provide information about undo segments:
DBA_ROLLBACK_SEGS
V$ROLLNAME -- the dynamic performance views only show data for online segments.
V$ROLLSTAT
V$UNDOSTAT
V$SESSION
V$TRANSACTION

This query lists information about undo segments in the SIUE DBORCL database. Note the two segments in the SYSTEM tablespace and the remaining segments in the UNDO tablespace.

COLUMN segment_name FORMAT A15;
COLUMN owner FORMAT A10;
COLUMN tablespace_name FORMAT A15;
COLUMN status FORMAT A10;
SELECT segment_name, owner, tablespace_name, status
FROM dba_rollback_segs;

SEGMENT_NAME    OWNER      TABLESPACE_NAME STATUS
--------------- ---------- --------------- ----------
SYSTEM          SYS        SYSTEM          ONLINE
_SYSSMU1$       PUBLIC     UNDOTBS1        ONLINE
_SYSSMU2$       PUBLIC     UNDOTBS1        ONLINE
_SYSSMU3$       PUBLIC     UNDOTBS1        ONLINE
_SYSSMU4$       PUBLIC     UNDOTBS1        ONLINE





11 rows selected.

The OWNER column above specifies the type of undo segment; SYS means a private undo segment.

This query is a join of the V$ROLLSTAT and V$ROLLNAME views to display statistics on undo segments currently in use by the Oracle Instance. The usn column is a sequence number.

COLUMN name FORMAT A12;
SELECT, s.extents, s.rssize, s.hwmsize, s.xacts, s.status
FROM v$rollname n, v$rollstat s
WHERE n.usn = s.usn;

NAME            EXTENTS     RSSIZE    HWMSIZE      XACTS STATUS
------------ ---------- ---------- ---------- ---------- ----------
SYSTEM                6     385024     385024          0 ONLINE
_SYSSMU1$             2     122880    2220032          0 ONLINE
_SYSSMU2$             3     188416    2285568          0 ONLINE
_SYSSMU3$             3    1171456    2220032          0 ONLINE
_SYSSMU4$             3    1171456    2220032          0 ONLINE
_SYSSMU5$             3    1171456    2220032          0 ONLINE
_SYSSMU6$             3     188416    2220032          0 ONLINE
_SYSSMU7$             3    1171456    2220032          0 ONLINE
_SYSSMU8$             3     188416    2088960          0 ONLINE
_SYSSMU9$             4     253952    2220032          0 ONLINE
_SYSSMU10$            5     319488    2220032          0 ONLINE

11 rows selected.

This query checks the use of an undo segment by any currently active transaction by joining the V$TRANSACTION and V$SESSION views.

SELECT s.username, t.xidusn, t.ubafil, t.ubablk, t.used_ublk
FROM v$session s, v$transaction t
WHERE s.saddr = t.ses_addr;

Flashback Features

Flashback features allow DBAs and users to access database information from a previous point in time. Undo information must be available, so the retention period is important. Example: If an application requires a version of the database that is up to 12 hours old, UNDO_RETENTION must be set to 43200 (seconds) and the RETENTION GUARANTEE clause needs to be specified.

The Oracle Flashback Query option is supplied through the DBMS_FLASHBACK package at the session level. At the object level, Flashback Query uses the AS OF clause to specify the point in time for which data is viewed. Flashback Version Query enables users to query row history through the VERSIONS clause of a SELECT statement.

Example: This SELECT statement retrieves the state of an employee record for an employee named Sue as of 9:30 AM on June 13, 2009, because it was discovered that Sue's employee record was erroneously deleted.

SELECT * FROM employee
AS OF TIMESTAMP TO_TIMESTAMP('2009-06-13 09:30:00', 'YYYY-MM-DD HH:MI:SS')
WHERE name = 'SUE';

This INSERT statement restores Sue's row to the employee table.

INSERT INTO employee
  (SELECT * FROM employee
   AS OF TIMESTAMP TO_TIMESTAMP('2009-06-13 09:30:00', 'YYYY-MM-DD HH:MI:SS')
   WHERE name = 'SUE');

Other Flashback features are covered in later notes dealing with database recovery.
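The session-level use of the DBMS_FLASHBACK package mentioned above can be sketched as follows. This is an illustrative sketch only; the timestamp is an example, and undo data for that time must still be available.

```sql
-- Illustrative sketch: view data as of a past time for this session
BEGIN
  DBMS_FLASHBACK.ENABLE_AT_TIME(
    TO_TIMESTAMP('2009-06-13 09:30:00', 'YYYY-MM-DD HH:MI:SS'));
END;
/

-- Queries in this session now see the database as of the specified time
SELECT * FROM employee WHERE name = 'SUE';

-- Return the session to the present before performing any DML
BEGIN
  DBMS_FLASHBACK.DISABLE;
END;
/
```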

Redo Log Files

What is the redo log? When a transaction modifies the database, the changes are recorded as redo records and stored in Redo Log Files. The redo log contains a history of all changes made to the database. There are two types of redo log files:
1. On-Line Redo Log Files
2. Archived Redo Log Files

On-Line Redo Log Files

How Oracle Database Writes to the Redo Log

The redo log of a database consists of two or more redo log files. The database requires a minimum of two files to guarantee that one is always available for writing while the other is being archived (if the database is in ARCHIVELOG mode). LGWR writes to redo log files in a circular fashion. When the current redo log file fills, LGWR begins writing to the next available redo log file. When the last available redo log file is filled, LGWR returns to the first redo log file and writes to it, starting the cycle again. The figure illustrates the circular writing of the redo log; the numbers next to each line indicate the sequence in which LGWR writes to each redo log file.

Figure: Reuse of Redo Log Files by LGWR
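The circular reuse described above can be sketched as a small simulation. This is a hypothetical model for illustration only, not Oracle code: with three log files, LGWR advances to the next file at each switch and wraps back to file 1 after the last file fills.

```python
# Hypothetical sketch of LGWR's circular reuse of redo log files.
# It only models the write order described above; it is not Oracle code.
def lgwr_write_order(num_files, num_switches):
    """Return the sequence of file indexes (1-based) LGWR writes to."""
    order = []
    current = 0
    for _ in range(num_switches):
        order.append(current + 1)            # write to the current file
        current = (current + 1) % num_files  # wrap to file 1 after the last
    return order

print(lgwr_write_order(3, 7))  # [1, 2, 3, 1, 2, 3, 1]
```

With three files and seven switches, the write order wraps around twice, matching the circular pattern in the figure.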


Status of the Redo Log Files

Active (Current) and Inactive Redo Log Files

Oracle Database uses only one redo log file at a time to store redo records written from the redo log buffer. The redo log file that LGWR is actively writing to is called the current redo log file.

Log files required for instance recovery are categorized as active log files. Log files no longer needed for instance recovery are categorized as inactive log files. When archiving is enabled, active log files cannot be overwritten by LGWR until ARCn has archived their contents.

If you have enabled archiving (the database is in ARCHIVELOG mode), then the database cannot reuse or overwrite an active online log file until one of the archiver background processes (ARCn) has archived its contents. If archiving is disabled (the database is in NOARCHIVELOG mode), then when the last redo log file is full, LGWR continues by overwriting the first available active file.

Log Switches and Log Sequence Numbers

A log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file. However, you can configure log switches to occur at regular intervals, regardless of whether the current redo log file is completely filled. You can also force log switches manually.

Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs and LGWR begins writing to it. When the database archives redo log files, the archived log retains its log sequence number. A redo log file that is cycled back for use is given the next available log sequence number. Each online or archived redo log file is uniquely identified by its log sequence number. During crash, instance, or media recovery, the database applies redo log files in ascending order by using the log sequence numbers of the necessary archived and online redo log files.

Log Switches and Checkpoints

This figure shows commands used to cause Redo Log File switches and Checkpoints.
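The commands commonly used to force a log switch and a checkpoint manually are shown in this sketch; both require administrator privileges.

```sql
ALTER SYSTEM SWITCH LOGFILE;   -- force a log switch
ALTER SYSTEM CHECKPOINT;       -- force a checkpoint
```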

Redo Log File Organization: Multiplexing

The figure shown below provides the general approach to organizing on-line Redo Log Files. Initially, Redo Log Files are created when a database is created, preferably in groups to provide for multiplexing. Additional groups of files can be added as the need arises.

Each Redo Log Group has identical Redo Log Files (however, each Group does not have to have the same number of Redo Log Files). If you multiplex Redo Log Files in Groups, you must have at least two Groups. The Oracle Server needs a minimum of two on-line Redo Log Groups for normal database operation. When redo log files are multiplexed, LGWR concurrently writes the same redo log information to multiple identical redo log files, thereby eliminating a single point of redo log failure. Thus, if Disk 1 crashes as shown in the figure above, none of the Redo Log Files are truly lost because there are duplicates.

A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of the group. Each group member has the same log sequence number and the same size; the members within a group cannot be different sizes. The log sequence number is assigned by the Oracle Server as it writes to a log group, and the current log sequence number is stored in the control files and in the header of every datafile; this enables synchronization between Datafiles and Redo Log Files. The more members a group has, the more disk drives are needed for multiplexed Redo Log Files to be effective.

Each database instance has its own set of Redo Log Files (Groups). Each group (whether or not the files are multiplexed) is called an instance redo thread. Typically one database instance accesses an Oracle database, so the instance has one redo thread. When using Oracle Real Application Clusters, more than one instance can access a single database, so each instance has its own redo thread.

A Redo Log File stores Redo Log Records (also called redo log entries). Each record consists of "vectors" that store information about:
- changes made to a database block.
- undo block data.
- the transaction table of the undo segments.

These enable the protection of rollback information as well as the ability to roll forward for recovery. Each time a Redo Log Record is written from the Redo Log Buffer to a Redo Log File, a System Change Number (SCN) is assigned to the committed transaction.

Log Writer Failure

What if LGWR cannot write to a Redo Log File or Group? Possible failures and the results are:

1. At least one Redo Log File in a Group can be written: unavailable Group members are marked as INVALID, a LGWR trace file is generated, and an entry is written to the alert file. Processing of the database proceeds normally while the invalid Group members are ignored.

2. LGWR cannot write to a Redo Log Group because it is pending archiving (LGWR cannot access the next group at a log switch because the group needs to be archived): database operation halts until the Redo Log Group becomes available (possibly by turning off archiving) or is archived.

3. A Redo Log Group is unavailable due to media failure (all members of the next group are inaccessible to LGWR at a log switch): Oracle Database returns an error and the database instance shuts down. In this case, you may need to perform media recovery on the database from the loss of a redo log file. If the database checkpoint has moved beyond the lost redo log, media recovery is not necessary, because the database has already saved the data recorded in the redo log to the datafiles; you need only drop the inaccessible redo log group. If the database did not archive the bad log, use ALTER DATABASE CLEAR UNARCHIVED LOG to disable archiving of it before the log can be dropped.

4. A Redo Log Group fails while LGWR is writing to the members: Oracle generates an error message and the database instance shuts down. Check whether the disk drive needs to be turned back on or whether media recovery is required.

Sometimes a Redo Log File in a Group becomes corrupted while a database instance is in operation. Database activity halts because archiving cannot continue. Clear the Redo Log Files in a Group (here Group 2) with the statement:

ALTER DATABASE CLEAR LOGFILE GROUP 2;

Where to Store Redo Log Files and Archive Log Files

Guidelines for storing On-line Redo Log Files versus Archived Redo Log Files:

1. Place the members of each Redo Log Group on different disks; this is required to ensure that multiplexing enables recovery in the event of a disk drive crash.
2. If possible, separate On-line Redo Log Files from Archive Log Files; this reduces contention for the I/O path between the ARCn and LGWR background processes.
3. Separate Datafiles from On-line Redo Log Files; this reduces LGWR and DBWn contention. It also reduces the risk of losing both Datafiles and Redo Log Files if a disk crash occurs.

You will not always be able to follow all of the above guidelines; your ability to meet them depends on the availability of a sufficient number of independent physical disk drives.

How large should Redo Log Files be, and how many Redo Log Files are enough? These facts and guidelines help with sizing Redo Log Files. The minimum size for an On-line Redo Log File is 50KB; the maximum size depends on the operating system. The appropriate size depends on the size of the transactions that the database processes. Large batch update transactions require larger Redo Log Files, perhaps 2MB to 5MB in size.
Databases that primarily support online transaction processing (OLTP) can work successfully with smaller Redo Log Files. Set the size large enough so that the On-line Redo Log Files switch about once every 30 minutes. If your log files are 1MB in size and switches occur on average once every 10 minutes, then triple their size. You can set the log switch interval to 30 minutes (a typical value) with the init.ora parameter ARCHIVE_LAG_TARGET, specified in seconds (there are 1800 seconds in 30 minutes):

ARCHIVE_LAG_TARGET = 1800

or set the parameter dynamically:

ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800;

Determine whether LGWR has to wait (meaning you need more groups) by:
- checking the dump file locations for LGWR trace files; the trace files provide information about LGWR waits.
- checking the alertSID.log file for messages indicating that LGWR had to wait for a group because a checkpoint had not completed or a group had not been archived.
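The 30-minute sizing rule of thumb above can be expressed as a quick calculation. This is a hypothetical helper for illustration, not an Oracle utility: it scales the current log size by the ratio of the target switch interval to the observed one.

```python
# Hypothetical helper for the "switch about every 30 minutes" rule of thumb.
# recommended size = current size * (target interval / observed interval)
def recommended_log_size_mb(current_mb, observed_switch_min, target_switch_min=30):
    return current_mb * target_switch_min / observed_switch_min

# 1 MB files switching every 10 minutes -> triple the size, as the notes suggest
print(recommended_log_size_mb(1, 10))  # 3.0
```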

The parameter MAXLOGFILES in the CREATE DATABASE command specifies the maximum number of Redo Log Groups you can have; group numbers range from 1 to MAXLOGFILES. This parameter can be overridden only by recreating the database or the control files. When MAXLOGFILES is not specified, the CREATE DATABASE command uses a default value specific to each operating system; check the operating system documentation.

LGWR writes from the Redo Log Buffer to the current Redo Log File when:
- a transaction commits.
- the Redo Log Buffer is 1/3 or more full.
- there is more than 1MB of changed rows in the Redo Log Buffer.
- prior to DBWn writing modified blocks from the Database Buffer Cache to Datafiles.

Checkpoints also affect Redo Log File usage. During a checkpoint the DBWn background process writes dirty database buffers (buffers that have modified data) from the Database Buffer Cache to datafiles. The CKPT background process updates the control file to reflect that a checkpoint has been successfully completed. If a log switch occurs as a result of a checkpoint, then the CKPT process updates the headers of the datafiles. Checkpoints can occur for all datafiles in the database or only for specific datafiles. A checkpoint occurs, for example, in the following situations:
- when a log switch occurs.
- when an Oracle Instance is shut down with the normal, transactional, or immediate option.
- when forced by setting the initialization parameter FAST_START_MTTR_TARGET, which controls the number of dirty buffers written by DBWn to datafiles.
- when a DBA issues the command to create a checkpoint.
- when the ALTER TABLESPACE [OFFLINE NORMAL | READ ONLY | BEGIN BACKUP] command causes checkpointing of specific datafiles.

Checkpoint information is recorded in the alertSID.log file whenever the LOG_CHECKPOINTS_TO_ALERT initialization parameter is set to TRUE. The default value of FALSE does not log checkpoints.

Creating Redo Log Groups and Members

Adding On-line Redo Log File Groups

This figure shows the ALTER DATABASE command option used to add Redo Log File Groups. This simultaneously adds new log files to the new Group 3.
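A sketch of the command follows; the file paths and size are illustrative, not taken from the figure.

```sql
-- Illustrative sketch: add a new multiplexed Group 3 with two 50M members
ALTER DATABASE ADD LOGFILE GROUP 3
  ('/u01/oradata/redo03a.rdo',
   '/u02/oradata/redo03b.rdo') SIZE 50M;
```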

Adding On-line Redo Log File Members

This figure shows the ALTER DATABASE command options used to add new log file members to existing groups. If the file to be added already exists and is being reused, it must be the same size as the existing members, and you must use the REUSE option in the command immediately after the filename specification.
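A sketch of the command follows; the file paths and group numbers are illustrative.

```sql
-- Illustrative sketch: add one member on a third disk to each existing group
ALTER DATABASE ADD LOGFILE MEMBER
  '/u03/oradata/redo01c.rdo' TO GROUP 1,
  '/u03/oradata/redo02c.rdo' TO GROUP 2;
```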

Relocating and Renaming Redo Log Members

You can use operating system copy commands to relocate redo logs, and then use the ALTER DATABASE statement to make their new names (locations) known to the database. This is useful, for example, if datafiles and a number of redo log files are stored on the same disk and should be separated to reduce contention.

Use the following steps for relocating redo logs. The example used to illustrate these steps assumes:
- The log files are located on two disks: diska and diskb.
- The redo log is duplexed: one group consists of the members /diska/logs/log1a.rdo and /diskb/logs/log1b.rdo, and the second group consists of the members /diska/logs/log2a.rdo and /diskb/logs/log2b.rdo.

The redo log files located on diska must be relocated to diskc. The new filenames will reflect the new location: /diskc/logs/log1c.rdo and /diskc/logs/log2c.rdo.

Steps for Renaming Redo Log Members

1. Shut down the database.

SHUTDOWN

2. Copy the redo log files to the new location. Operating system files, such as redo log members, must be copied using the appropriate operating system commands; see your operating system documentation for more information about copying files. The following example uses UNIX commands to move the redo log members to the new location:

mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo

3. Start up the database and mount it, but do not open it.

CONNECT / AS SYSDBA
STARTUP MOUNT

4. Rename the redo log members. Use the ALTER DATABASE statement with the RENAME FILE clause to rename the database redo log files.

ALTER DATABASE RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';

5. Open the database for normal operation. The redo log alterations take effect when the database is opened.

ALTER DATABASE OPEN;

Dropping Redo Log Groups and Members
Dropping Redo Log File Groups and Files

This is accomplished with the ALTER DATABASE command as shown here:

ALTER DATABASE DROP LOGFILE GROUP 3;

Remember, you must keep at least two groups of On-line Redo Log Files working, and you cannot drop an active (current) Group. Further, the actual operating system files are not deleted when you drop a Group; you must use operating system commands to delete the files that stored the Redo Logs of the dropped Group.

Dropping a Redo Log File Member

Sometimes an individual Redo Log File becomes damaged (invalid). You can use the following command to drop the file. Then use an operating system command to delete the file that stored the invalid Redo Log File, and recreate the Redo Log File.

ALTER DATABASE DROP LOGFILE MEMBER '/u01/student/dbockstd/oradata/USER350redo01a.log';

Changing Redo Log File Sizes

Each Redo Log File member in a Group must be identical in size. If you need to make your Redo Log Files larger, use the following steps.

1. Use the V$LOG view to identify the current active Redo Log Group.

SQL> SELECT group#, status FROM v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 INACTIVE
         3 CURRENT
         4 INACTIVE

2. Drop one or more of the inactive Redo Log Groups, keeping at least two current On-line Redo Log Groups.
3. Use operating system commands to delete the files that stored the dropped Redo Log Files.
4. Recreate the groups with larger file sizes. Continue this sequence until all groups have been resized.

Obtaining Redo Log Group and File Information

Two views, V$LOG and V$LOGFILE, store information about On-line Redo Log Files. The following example queries display information from SIUE's DBORCL database.

SELECT group#, sequence#, bytes/1024, members, status FROM v$log;

    GROUP#  SEQUENCE# BYTES/1024    MEMBERS STATUS
---------- ---------- ---------- ---------- ----------------
         1         37      51200          2 CURRENT
         2         35      51200          2 INACTIVE
         3         36      51200          2 INACTIVE

Possible STATUS values for this view are:
- UNUSED: the Redo Log Group has never been used; this status occurs only for a newly added Redo Log Group.
- CURRENT: the active Redo Log Group.
- ACTIVE: the Redo Log Group is active but not the current Group; it is needed for crash recovery and may be in use for block recovery. It may not yet be archived.
- CLEARING: the log is being recreated after an ALTER DATABASE CLEAR LOGFILE command.
- CLEARING_CURRENT: the current Redo Log Group is being cleared of a closed thread.
- INACTIVE: the Group is not needed for instance recovery.

COLUMN member FORMAT A45;
COLUMN status FORMAT A10;
SELECT member, status FROM v$logfile;

MEMBER                                        STATUS
--------------------------------------------- ----------
/u01/oradata/DBORCL/DBORCLredo3a.rdo
/u02/oradata/DBORCL/DBORCLredo3b.rdo
/u01/oradata/DBORCL/DBORCLredo2a.rdo
/u02/oradata/DBORCL/DBORCLredo2b.rdo
/u01/oradata/DBORCL/DBORCLredo1a.rdo
/u02/oradata/DBORCL/DBORCLredo1b.rdo

6 rows selected.

Possible STATUS values for this view are:

- INVALID: the file cannot be accessed and needs to be dropped and recreated.
- STALE: the contents of the file are incomplete; drop it and recreate it.
- DELETED: the file is no longer in use; you can use operating system commands to delete the associated operating system file.
- blank: the file is in use.

Archived Redo Log Files

What Is the Archived Redo Log?

Filled groups of redo log files can be saved by Oracle Database to one or more offline destinations, known collectively as the archived redo log, or more simply the archive log. The process of turning redo log files into archived redo log files is called archiving. This process is only possible if the database is running in ARCHIVELOG mode. You can choose automatic or manual archiving.

When the database is running in ARCHIVELOG mode, the log writer process (LGWR) cannot reuse and hence overwrite a redo log group until it has been archived. The background process ARCn automates archiving operations when automatic archiving is enabled. The database starts multiple archiver processes as needed to ensure that the archiving of filled redo logs does not fall behind.

You can use archived redo logs to:
- recover a database.
- update a standby database.
- get information about the history of a database using the LogMiner utility.

Choosing Between NOARCHIVELOG and ARCHIVELOG Mode

A production database should always be configured to operate in ARCHIVELOG mode.

In NOARCHIVELOG mode: When you run your database in NOARCHIVELOG mode, you disable archiving of the redo log. The database control file indicates that filled groups are not required to be archived, so when a filled group becomes inactive after a log switch, the group is available for reuse by LGWR. The Redo Log Files are overwritten each time a log switch occurs, but the files are never archived. This mode protects a database from instance failure, but NOT from media failure. Only the most recent changes made to the database, which are stored in the online redo log groups, are available for instance recovery. If a media failure occurs while the database is in NOARCHIVELOG mode, you can only restore the database to the point of the most recent full database backup. You cannot perform tablespace backups in NOARCHIVELOG mode; you can use only whole database backups taken while the database is closed.

In ARCHIVELOG mode: Full On-line Redo Log Files are written by the ARCn process to specified archive locations, either disk or tape; you can create more than one archiver process to improve performance. The database control file tracks which Redo Log File groups are available for reuse (those that have been archived). The DBA can use the last full backup and the Archived Log Files to recover the database. A Redo Log File that has not been archived cannot be reused until it is archived; if the database stops while awaiting archiving to complete, add an additional Redo Log Group.
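One way to confirm which mode a database is running in is the SQL*Plus ARCHIVE LOG LIST command; a minimal sketch (output varies by configuration):

```sql
-- Displays the log mode (ARCHIVELOG or NOARCHIVELOG), the automatic
-- archival setting, the archive destination, and log sequence numbers
ARCHIVE LOG LIST
```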

The archiving of filled groups has these advantages:
- A database backup, together with online and archived redo log files, guarantees that you can recover all committed transactions in the event of an operating system or disk failure.
- If you keep an archived log, you can use a backup taken while the database is open and in normal system use.
- You can keep a standby database current with its original database by continuously applying the original archived redo logs to the standby.

Figure: Redo Log File Use in ARCHIVELOG Mode

While archiving can be set to either manual or automatic, the preferred setting for normal production database operation is automatic. In manual archiving, the DBA must manually archive each On-line Redo Log File.

Performing Manual Archiving

To operate your database in manual archiving mode, issue the command to turn on manual archiving:

ALTER DATABASE ARCHIVELOG MANUAL;

When you operate your database in manual ARCHIVELOG mode, you must archive inactive groups of filled redo log files or your database operation can be temporarily suspended. To archive a filled redo log group manually, connect with administrator privileges and ensure that the database is mounted but not open. Use the ALTER SYSTEM statement with the ARCHIVE LOG clause to manually archive filled redo log files. The following statement archives all unarchived log files:

ALTER SYSTEM ARCHIVE LOG ALL;

When you use manual archiving mode, you cannot specify any standby databases in the archiving destinations.

Specifying the Number of ARCn Processes

The LOG_ARCHIVE_MAX_PROCESSES parameter in the init.ora file specifies how many ARCn processes are started for a database instance. The default is two ARCn processes; you can specify up to 10. Use additional ARCn processes to ensure that automatic archiving of filled redo log files does not fall behind. The LOG_ARCHIVE_MAX_PROCESSES parameter is dynamic and can be changed as shown, but only with the database in MOUNT stage (not OPEN):

ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;

Switching from NOARCHIVELOG to ARCHIVELOG

1. Connect to the database with administrator privileges (AS SYSDBA) and shut down the database instance normally with the command:

SHUTDOWN

Note: You cannot change from ARCHIVELOG to NOARCHIVELOG if any datafiles require media recovery.

2. Back up the database; it is always recommended to back up a database before making any major changes.

3. Edit the init.ora file to add parameters specifying the destinations for archive log files (the next section provides directions on how to specify archive destinations).

4. Start up a new instance in MOUNT stage; do not open the database, because archive status can only be modified in MOUNT stage:

STARTUP MOUNT PFILE=<your pfile location>

5. Issue the command to turn on archiving and then open the database:

ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

6. Shut down the database.

SHUTDOWN IMMEDIATE

7. Back up the database again; this is necessary because the archive status has changed. The previous backup was taken in NOARCHIVELOG mode and is no longer usable.

Specifying Archive Destinations and Names

Archived Redo Log Files can be written to a single disk location, or they can be multiplexed, i.e., written to multiple disk locations. Archiving to a single destination was once accomplished by specifying the LOG_ARCHIVE_DEST initialization parameter in the init.ora file; it has since been replaced by the LOG_ARCHIVE_DEST_n parameter (see below). Multiplexing can be specified for up to 10 locations by using the LOG_ARCHIVE_DEST_n parameters (where n is a number from 1 to 10). This can also be used to duplex the files by specifying values for the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_2 parameters. When multiplexing, you can specify remote disk drives if they are available to the server.

These examples show setting the init.ora parameters for the possible archive destination specifications:

1. Example of a single destination:

LOG_ARCHIVE_DEST = '/u03/student/dbockstd/oradata/archive'

2. Example of duplexed destinations:

LOG_ARCHIVE_DEST_1 = 'LOCATION = /u01/student/dbockstd/oradata/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /u02/student/dbockstd/oradata/archive'

3. Example of multiplexing three archive log destinations (for DBAs who are very risk averse):

LOG_ARCHIVE_DEST_1 = 'LOCATION = /u01/student/dbockstd/oradata/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /u02/student/dbockstd/oradata/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /u03/student/dbockstd/oradata/archive'

The LOCATION keyword specifies an operating system specific path name. Note: If you use a LOG_ARCHIVE_DEST_n parameter, then you cannot use the LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST parameters.

Specify the naming pattern to use for Archived Redo Log Files with the LOG_ARCHIVE_FORMAT parameter in the init.ora file:

LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc

where:
%t = thread number.
%s = log sequence number.
%r = resetlogs ID (a timestamp value).
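The substitution performed by LOG_ARCHIVE_FORMAT can be sketched in Python. This is a hypothetical re-implementation for illustration only; Oracle performs this substitution internally.

```python
# Hypothetical sketch of Oracle's LOG_ARCHIVE_FORMAT substitution:
# %t = thread number, %s = log sequence number, %r = resetlogs ID
def archive_name(fmt, thread, sequence, resetlogs_id):
    return (fmt.replace('%t', str(thread))
               .replace('%s', str(sequence))
               .replace('%r', str(resetlogs_id)))

print(archive_name('arch_%t_%s_%r.arc', 1, 100, 509210197))
# arch_1_100_509210197.arc
```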
This example shows a sequence of Archived Redo Log Files generated using the LOG_ARCHIVE_FORMAT setting above. All of the logs are for thread 1, with log sequence numbers 100, 101, and 102, and resetlogs ID 509210197, indicating the files are from the same database.

/disk1/archive/arch_1_101_509210197.arc,
/disk1/archive/arch_1_102_509210197.arc
/disk2/archive/arch_1_100_509210197.arc,
/disk2/archive/arch_1_101_509210197.arc,
/disk2/archive/arch_1_102_509210197.arc

Understanding Archive Destination Status

Each archive destination has the following variable characteristics that determine its status:
- Valid/Invalid: indicates whether the disk location or service name information is specified and valid.
- Enabled/Disabled: indicates the availability state of the location and whether the database can use the destination.
- Active/Inactive: indicates whether there was a problem accessing the destination.

Several combinations of these characteristics are possible. To obtain the current status and other information about each destination for an instance, query the V$ARCHIVE_DEST view. The characteristics determining a destination's status are shown in Table 7-1. Note that for a destination to be used, its characteristics must be valid, enabled, and active.

Table 7-1 Destination Status

STATUS     Valid  Enabled  Active  Meaning
VALID      True   True     True    The user has properly initialized the destination, which is available for archiving.
INACTIVE   False  n/a      n/a     The user has not provided or has deleted the destination information.
ERROR      True   True     False   An error occurred creating or writing to the destination file; refer to error data.
FULL       True   True     False   Destination is full (no disk space).
DEFERRED   True   False    True    The user manually and temporarily disabled the destination.
DISABLED   True   False    False   The user manually and temporarily disabled the destination following an error; refer to error data.
BAD PARAM  n/a    n/a      n/a     A parameter error occurred; refer to error data.
Viewing Information on Archived Redo Log Files

Information about the status of archiving can be obtained from the V$INSTANCE dynamic performance view. This shows the status for the DBORCL database.

SELECT archiver FROM v$instance;

ARCHIVE
-------
STARTED

Several dynamic performance views contain useful information about archived redo logs, as summarized here:

Dynamic Performance View   Description
V$DATABASE                 Identifies whether the database is in ARCHIVELOG or NOARCHIVELOG mode and whether MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG             Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.
V$ARCHIVE_DEST             Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES        Displays information about the state of the various archive processes for an instance.
V$BACKUP_REDOLOG           Contains information about any backups of archived logs. If you use a recovery catalog, RC_BACKUP_REDOLOG contains similar information.
V$LOG                      Displays all redo log groups for the database and indicates which need to be archived.
V$LOG_HISTORY              Contains log history information such as which logs have been archived and the SCN range for each archived log.
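As a hedged example of using these views, a query such as the following lists archived logs recorded in the control file; the selected columns are standard V$ARCHIVED_LOG columns.

```sql
-- Illustrative: list archived logs recorded in the control file
SELECT name, sequence#, first_time
FROM v$archived_log
ORDER BY sequence#;
```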


Note: A final caution about automatic archiving: Archived Redo Log Files can consume a large quantity of disk space. As you dispose of old copies of database backups, also dispose of the associated Archived Redo Log Files.

Managing Tablespaces and Datafiles

Most Oracle databases have a USERS permanent tablespace, which is used to store objects created by individual users of the database. Here, the USERS tablespace serves as the storage location for tables, indexes, views, and other objects created by students.

All students share the same USERS tablespace.

Many Oracle databases have one or more DATA tablespaces. A DATA tablespace is also permanent and is used to store application data tables, such as those for ORDER ENTRY or INVENTORY MANAGEMENT applications. For large applications, it is common practice to create a dedicated DATA tablespace, named to describe accurately the objects stored in it. An Oracle database with one or more DATA tablespaces will usually also have an accompanying INDEXES tablespace. The purpose of separating tables from their associated indexes is to improve I/O efficiency: the DATA and INDEXES tablespaces are typically placed on different disk drives, providing an I/O path for each so that as tables are updated, the indexes can be updated simultaneously.

CREATE TABLESPACE Command

To create a tablespace you must have the CREATE TABLESPACE privilege. The full CREATE TABLESPACE (and CREATE TEMPORARY TABLESPACE) command syntax is shown here.

CREATE TABLESPACE tablespace
  [DATAFILE clause]
  [MINIMUM EXTENT integer[K|M]]
  [BLOCKSIZE integer [K]]
  [LOGGING|NOLOGGING]
  [DEFAULT storage_clause]
  [ONLINE|OFFLINE]
  [PERMANENT|TEMPORARY]
  [extent_management_clause]
  [segment_management_clause]

As you can see, almost all of the clauses are optional. The clauses are defined as follows:

TABLESPACE: specifies the tablespace name.
DATAFILE: names the one or more datafiles that will comprise the tablespace and includes the full path, for example:

DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 10M

MINIMUM EXTENT: every used extent for the tablespace will be a multiple of this integer value. Use G, M, or K to specify gigabytes, megabytes, or kilobytes.
BLOCKSIZE: specifies a nonstandard block size. This clause can be used only if the DB_CACHE_SIZE parameter is used and at least one DB_nK_CACHE_SIZE parameter is set, and the integer value for BLOCKSIZE must correspond to one of the DB_nK_CACHE_SIZE parameter settings.
LOGGING: the default; all tables, indexes, and partitions within the tablespace have modifications written to the Online Redo Logs.

NOLOGGING: the opposite of LOGGING; used most often when large direct loads of clean data are done during database creation for systems being ported from another file system or DBMS to Oracle.
DEFAULT storage_clause: specifies default storage parameters for objects created inside the tablespace. Individual storage clauses can be used when objects are created to override the specified DEFAULT.
OFFLINE: causes the tablespace to be unavailable after creation.
PERMANENT: a permanent tablespace can hold permanent database objects.
TEMPORARY: a temporary tablespace can hold temporary database objects, e.g., segments created during sorts resulting from ORDER BY clauses or joins of multiple tables. A temporary tablespace cannot be specified for EXTENT MANAGEMENT LOCAL or have the BLOCKSIZE clause specified.
extent_management_clause: specifies how the extents of the tablespace are managed; covered in detail later in these notes.
segment_management_clause: specifies how Oracle tracks used and free space in segments in a tablespace that is using free lists or bitmap objects.
datafile_clause: filename [SIZE integer [K|M]] [REUSE] [AUTOEXTEND ON|OFF]
  filename: includes the path, filename, and file size.
  REUSE: specified to reuse an existing file.
  NEXT: specifies the size of the next extent.
  MAXSIZE: specifies the maximum disk space allocated to the tablespace; usually set in megabytes, e.g., 400M, or specified as UNLIMITED.
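The clauses above can be combined as in this sketch; the tablespace name, path, and sizes are hypothetical examples, not values from these notes.

```sql
-- Hypothetical example combining the DATAFILE, LOGGING, ONLINE,
-- and PERMANENT clauses described above
CREATE TABLESPACE inventory_data
  DATAFILE '/u02/oradata/inventory01.dbf' SIZE 100M
  LOGGING
  ONLINE
  PERMANENT;
```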

Tablespace and Datafile Sizing and Resizing

Tablespaces sometimes need additional space. This can be accomplished by setting the AUTOEXTEND option to enable a tablespace to increase automatically in size. This can be dangerous if a runaway process consumes all available storage space; an advantage is that applications will not abend because a tablespace runs out of storage capacity. AUTOEXTEND can be enabled when the tablespace is initially created or later with the ALTER TABLESPACE command.

CREATE TABLESPACE application_data
  DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 200M
  AUTOEXTEND ON NEXT 48K MAXSIZE 500M;

This query uses the DBA_DATA_FILES view to determine whether AUTOEXTEND is enabled for selected tablespaces in the shop's Oracle database.

SELECT tablespace_name, autoextensible FROM dba_data_files;

TABLESPACE_NAME                AUT
------------------------------ ---
SYSTEM                         YES
DRSYS                          NO
DATA                           NO


ALTER DATABASE DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
  AUTOEXTEND ON MAXSIZE 600M;

These two commands look similar, but the command above sets the maximum size of the datafile, while this second command resizes the datafile immediately:

ALTER DATABASE DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
  RESIZE 600M;

Add a new datafile to a tablespace with the ALTER TABLESPACE command.

ALTER TABLESPACE application_data
  ADD DATAFILE '/u01/student/dbockstd/oradata/USER350data02.dbf' SIZE 200M;

Renaming and Relocating Datafiles

Two methods can be used to rename or relocate datafiles.

Tablespace datafile rename:

SQL> ALTER DATABASE DATAFILE 4 OFFLINE;
Database altered.

SQL> HOST cp -p /u01/app/oracle/oradata/backup/users01.dbf /u03/oradata/users01.dbf

SQL> ALTER TABLESPACE users RENAME DATAFILE
     '/u01/app/oracle/oradata/testdb/users01.dbf'
     TO '/u03/oradata/users01.dbf';
Tablespace altered.

Database datafile rename:

The ALTER DATABASE command can be used to move datafiles by renaming them. The ALTER TABLESPACE method shown above cannot be used if the tablespace is SYSTEM or contains active undo or temporary segments. The ALTER DATABASE command can also be used with the RENAME option. This is the

method that must be used to move the SYSTEM tablespace because it cannot be taken offline. The steps are:

1. Shut down the database.
2. Use an operating system command to move the files.
linux> mv 'OLDFILE.DBF' 'NEWFILE.DBF'
3. Mount the database.
STARTUP MOUNT
4. Execute the ALTER DATABASE RENAME FILE command.
ALTER DATABASE RENAME FILE
'/u01/student/dbockstd/oradata/USER350data01.dbf'
TO '/u02/student/dbockstd/oradata/USER350data01.dbf';
5. Open the database.
ALTER DATABASE OPEN;

Dropping Tablespaces

Occasionally tablespaces are dropped due to database reorganization. A tablespace that contains data cannot be dropped unless the INCLUDING CONTENTS clause is added to the DROP command. Since tablespaces will almost always contain data, this clause is almost always used. A DBA cannot drop the SYSTEM tablespace or any tablespace with active segments. Normally you should take a tablespace offline to ensure no active transactions are being processed. An example command set is:

ALTER TABLESPACE application_data OFFLINE;

DROP TABLESPACE application_data
INCLUDING CONTENTS AND DATAFILES
CASCADE CONSTRAINTS;

The AND DATAFILES clause causes the datafiles to also be deleted. Otherwise, the tablespace is removed from the database as a logical unit, and the datafiles must be deleted with operating system commands. The CASCADE CONSTRAINTS clause drops all referential integrity constraints where objects in one tablespace are constrained/related to objects in another tablespace.

Non-Standard Block Sizes

It may be advantageous to create a tablespace with a nonstandard block size in order to import data efficiently from another database. This also enables transporting tablespaces with unlike block sizes between databases. A block size is nonstandard if it differs from the size specified by the DB_BLOCK_SIZE initialization parameter. The BLOCKSIZE clause of the CREATE TABLESPACE statement is used to specify nonstandard block sizes. In order for this to work, you must have already set DB_CACHE_SIZE and at least one

DB_nK_CACHE_SIZE initialization parameter value to correspond to the nonstandard block size to be used. The DB_nK_CACHE_SIZE initialization parameters that can be used are:

DB_2K_CACHE_SIZE
DB_4K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

Note that the DB_nK_CACHE_SIZE parameter corresponding to the standard block size cannot be used; it is invalid. Use the DB_CACHE_SIZE parameter for the standard block size instead.

Example: these parameters specify a standard block size of 8K with a cache for standard block size buffers of 12M. The 2K and 4K caches will each be configured with cache buffers of 8M. (A DB_8K_CACHE_SIZE parameter could not be used here because 8K is the standard block size.)

DB_BLOCK_SIZE=8192
DB_CACHE_SIZE=12M
DB_2K_CACHE_SIZE=8M
DB_4K_CACHE_SIZE=8M

Example: this creates a tablespace with a block size of 2K (assume the standard block size for the database is 8K).

CREATE TABLESPACE inventory
  DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
  BLOCKSIZE 2K;

Managing Tablespaces with Oracle Managed Files

As you learned earlier, when you use an OMF approach, the DB_CREATE_FILE_DEST parameter in the parameter file specifies that datafiles are to be created and defines their location. The DATAFILE clause to name files is not used because filenames are automatically generated by the Oracle Server, for example, ora_tbs1_2xfh990x.dbf. You can also use the ALTER SYSTEM command to dynamically set this parameter in the SPFILE parameter file.

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u02/student/dbockstd/oradata';

Additional tablespaces are specified with the CREATE TABLESPACE command shown here, which specifies not the datafile name but the datafile size. You can also add datafiles with the ALTER TABLESPACE command.

CREATE TABLESPACE application_data DATAFILE SIZE 100M;

ALTER TABLESPACE application_data ADD DATAFILE;

Setting the DB_CREATE_ONLINE_LOG_DEST_n parameter prevents log files and control files from being located with datafiles; this will reduce I/O contention.

When OMF tablespaces are dropped, their associated datafiles are also deleted at the operating system level.

Tablespace Information in the Data Dictionary

The following data dictionary views can be queried to display information about tablespaces.

Tablespaces: DBA_TABLESPACES, V$TABLESPACE
Datafiles: DBA_DATA_FILES, V$DATAFILE
Temp files: DBA_TEMP_FILES, V$TEMPFILE

You should examine these views in order to familiarize yourself with the information stored in them.
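For example, the tablespace and datafile views can be joined to show each tablespace together with its files and sizes. This query is a sketch only; the column selection is illustrative.

```sql
-- Sketch: listing tablespaces with their datafiles and sizes,
-- using the dictionary views named above.
SELECT t.tablespace_name,
       t.status,
       d.file_name,
       d.bytes/1024/1024 AS size_mb,
       d.autoextensible
FROM   dba_tablespaces t
       JOIN dba_data_files d ON t.tablespace_name = d.tablespace_name
ORDER BY t.tablespace_name;
```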

Control File
What Is a Control File?

A control file is a small binary file that records the physical structure of the database and is needed to start up and operate an Oracle database. The control file includes:

The database name
Names and locations of associated datafiles and redo log files
The timestamp of the database creation
The current log sequence number
Checkpoint information

Properties of Control Files

A control file belongs to only one database. Control files are created at the same time the database is created, based on the CONTROL_FILES parameter in the PFILE. If all copies of the control files for a database are lost or destroyed, then control file recovery must be accomplished before the database can be opened. The control file must be available for writing by the Oracle Database server whenever the database is open. Without the control file, the database cannot be mounted and recovery is difficult. An Oracle database writes continuously to the control file (or to more than one where more than one exists). You must never attempt to modify a control file, as only the Oracle Server should modify this file.

Contents of a Control File

Control files record the following information:

Database name: recorded as specified by the initialization parameter DB_NAME or the

name used in the CREATE DATABASE statement.
Database identifier: recorded when the database is created.
Time stamp of database creation.
Names and locations of datafiles and online redo log files: updated if a datafile or redo log is added to, renamed in, or dropped from the database.
Tablespace information: updated as tablespaces are added or dropped.
Redo log history: recorded during log switches.
Location and status of archived logs: recorded when archiving occurs.
Location and status of backups: recorded by the Recovery Manager utility.
Current log sequence number: recorded when log switches occur.
Checkpoint information: recorded as checkpoints are made.

Multiplexing Control Files

Control files should be multiplexed; this means that more than one identical copy is kept, and each copy is stored on a separate physical disk drive (of course, your server must have multiple disk drives in order to do this). Even if only one disk drive is available, you should still multiplex the control files. This eliminates the need to use database recovery if a copy of a control file is destroyed in a disk crash or through accidental deletion. You can keep up to eight copies of control files; the Oracle Server will automatically update all control files specified in the initialization parameter file, to a limit of eight. More than one copy of a control file can be created by specifying the location and file name in the CONTROL_FILES parameter of the PFILE when the database is created. During database operation, only the first control file listed in the CONTROL_FILES parameter is read, but all control files listed are written to in order to maintain consistency.

Multiplexing Control Files When Using a PFILE

You can also add additional control files.
When using a PFILE, this is accomplished by shutting down the database, copying an existing control file to a new file on a new disk drive, editing the CONTROL_FILES parameter of the PFILE, then restarting the database.
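The PFILE procedure can be sketched as follows; the file names, paths, and SID are illustrative assumptions, not values from these notes.

```sql
-- Sketch of multiplexing a control file when using a PFILE
-- (all paths and the SID db01 are illustrative).
SHUTDOWN IMMEDIATE
-- At the operating system level, copy the existing control file:
--   cp /u01/oradata/db01/control01.ctl /u02/oradata/db01/control02.ctl
-- Then edit initdb01.ora so that it reads:
--   CONTROL_FILES = (/u01/oradata/db01/control01.ctl,
--                    /u02/oradata/db01/control02.ctl)
STARTUP
```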

Multiplexing Control Files When Using an SPFILE

If you are using an SPFILE, the procedure is similar. The difference is that you first name the new control file with the ALTER SYSTEM SET CONTROL_FILES command (scoped to the SPFILE), then shut down the database, create the copy with an operating system command in the third step, and restart the instance.
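The SPFILE procedure can be sketched as follows; the paths and SID are illustrative assumptions.

```sql
-- Sketch of multiplexing a control file when using an SPFILE
-- (all paths are illustrative).
-- Step 1: record both control file names in the SPFILE.
ALTER SYSTEM SET CONTROL_FILES =
  '/u01/oradata/db01/control01.ctl',
  '/u02/oradata/db01/control02.ctl' SCOPE=SPFILE;
-- Step 2: shut down the instance.
SHUTDOWN IMMEDIATE
-- Step 3: create the copy at the operating system level:
--   cp /u01/oradata/db01/control01.ctl /u02/oradata/db01/control02.ctl
-- Step 4: restart; both copies are now maintained.
STARTUP
```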

Creating Control Files

This section describes three ways to create control files:

1. Creating initial control files
2. Creating additional copies, renaming, and relocating control files
3. Creating new control files

Initial Creation of Control Files

The initial control files of an Oracle Database are created when you issue the CREATE

DATABASE statement. The names of the control files are specified by the CONTROL_FILES parameter in the initialization parameter file used during database creation. The filenames specified in CONTROL_FILES should be fully specified and are operating system specific. The following is an example of a CONTROL_FILES initialization parameter:

CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
                 /u02/oracle/prod/control02.ctl,
                 /u03/oracle/prod/control03.ctl)

The CREATE DATABASE statement automatically creates the control files when creating the database. If files with the specified names currently exist at the time of database creation, you must specify the CONTROLFILE REUSE clause in the CREATE DATABASE statement, or else an error occurs. Also, if the size of the old control file differs from the SIZE parameter of the new one, you cannot use the REUSE clause.

Creating Additional Copies, Renaming, and Relocating Control Files

You can create an additional control file copy for multiplexing by copying an existing control file to a new location and adding the file name to the list of control files. Similarly, you can rename an existing control file by copying the file to its new name or location and changing the file name in the control file list. In both cases, to guarantee that control files do not change during the procedure, shut down the database before copying the control file.

To add a multiplexed copy of the current control file or to rename a control file:

1. Shut down the database.
2. Copy an existing control file to a new location, using operating system commands.
3. Edit the CONTROL_FILES parameter in the database initialization parameter file to add the new control file name, or to change the existing control filename.
4. Restart the database.

Control Files - Moving

Procedures for Moving a Control File

1. Shut down the instance.
2. Copy the second control file (or make a copy of the only control file) to a second drive. Additional copies can be made to other drives (two are usually sufficient).
3. Edit the INIT.ORA or the INIT_node_sid.ORA file to reflect the new location(s) for the control file(s) created in step 2. This information is placed into the initialization parameter CONTROL_FILES.
4. Delete the copied control file from the first location.
5. Restart the instance. If a message appears that it cannot locate the control file, check that you used the full path specification in the CONTROL_FILES parameter.
6. Once the instance starts, query the V$PARAMETER view for the value of the parameter named "control_files" to be sure that the changes were successful.

Creating (Fresh) New Control Files

This section discusses when and how to create new control files, for example when all control files have been lost and no backup control file exists.

When to Create New Control Files

It is necessary for you to create new control files in the following situations:

All control files for the database have been permanently damaged and you do not have a

control file backup.

You want to change the database name. For example, you would change a database name if it conflicted with another database name in a distributed environment.

Note: You can change the database name and DBID (internal database identifier) using the DBNEWID utility. See Oracle Database Utilities for information about using this utility.

The CREATE CONTROLFILE Statement

You can create a new control file for a database using the CREATE CONTROLFILE statement. The following statement creates a new control file for the prod database (a database that formerly used a different database name):

CREATE CONTROLFILE
  SET DATABASE prod
  LOGFILE GROUP 1 ('/u01/oracle/prod/redo01_01.log',
                   '/u01/oracle/prod/redo01_02.log'),
          GROUP 2 ('/u01/oracle/prod/redo02_01.log',
                   '/u01/oracle/prod/redo02_02.log'),
          GROUP 3 ('/u01/oracle/prod/redo03_01.log',
                   '/u01/oracle/prod/redo03_02.log')
  RESETLOGS
  DATAFILE '/u01/oracle/prod/system01.dbf' SIZE 3M,
           '/u01/oracle/prod/rbs01.dbs' SIZE 5M,
           '/u01/oracle/prod/users01.dbs' SIZE 5M,
           '/u01/oracle/prod/temp01.dbs' SIZE 5M
  MAXLOGFILES 50
  MAXLOGMEMBERS 3
  MAXLOGHISTORY 400
  MAXDATAFILES 200
  MAXINSTANCES 6
  ARCHIVELOG;

Steps for Creating New Control Files

Complete the following steps to create a new control file.

1. Make a list of all datafiles and redo log files of the database. Ideally you will already have a list of datafiles and redo log files that reflects the current structure of the database. If you have no such list, executing the following statements will produce one.

SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'control_files';

If you have no such lists and your control file has been damaged so that the database cannot be opened, try to locate all of the datafiles and redo log files that constitute the database. Any files not specified in step 5 are not recoverable once a new control file has been created. Moreover, if you omit any of the files that make up the SYSTEM tablespace, you might not be able to recover the database.

2. Shut down the database.

If the database is open, shut down the database normally if possible. Use the IMMEDIATE or ABORT clauses only as a last resort.

3. Back up all datafiles and redo log files of the database.

4. Start up a new instance, but do not mount or open the database:

STARTUP NOMOUNT

5. Create a new control file for the database using the CREATE CONTROLFILE statement. When creating a new control file, specify the RESETLOGS clause if you have lost any redo log groups in addition to control files. In this case, you will need to recover from the loss of the redo logs (step 8). You must specify the RESETLOGS clause if you have renamed the database. Otherwise, select the NORESETLOGS clause.

6. Store a backup of the new control file on an offline storage device.

7. Edit the CONTROL_FILES initialization parameter for the database to indicate all of the control files now part of your database as created in step 5 (not including the backup control file). If you are renaming the database, edit the DB_NAME parameter in your instance parameter file to specify the new name.

8. Recover the database if necessary. If you are not recovering the database, skip to step 9. If you are creating the control file as part of recovery, recover the database. If the new control file was created using the NORESETLOGS clause (step 5), you can recover the database with complete, closed database recovery. If the new control file was created using the RESETLOGS clause, you must specify USING BACKUP CONTROLFILE. If you have lost online or archived redo logs or datafiles, use the procedures for recovering those files.

9. Open the database using one of the following methods:

If you did not perform recovery, or you performed complete, closed database recovery in step 8, open the database normally.

ALTER DATABASE OPEN;

If you specified RESETLOGS when creating the control file, use the ALTER DATABASE statement, indicating RESETLOGS.

ALTER DATABASE OPEN RESETLOGS;

The database is now open and available for use.

Troubleshooting After Creating Control Files

After issuing the CREATE CONTROLFILE statement, you may encounter some errors.

Checking for Missing or Extra Files

After creating a new control file and using it to open the database, check the alert log to see if the database has detected inconsistencies between the data dictionary and the control file, such as a datafile that the data dictionary includes but the control file does not list. If a datafile exists in the data dictionary but not in the new control file, the database creates a placeholder entry in the control file under the name MISSINGnnnn, where nnnn is the file number in decimal. MISSINGnnnn is flagged in the control file as being offline and requiring media recovery. If the actual datafile corresponding to MISSINGnnnn is read-only or offline normal, then you

can make the datafile accessible by renaming MISSINGnnnn to the name of the actual datafile. If MISSINGnnnn corresponds to a datafile that was not read-only or offline normal, then you cannot use the rename operation to make the datafile accessible, because the datafile requires media recovery that is precluded by the results of RESETLOGS. In this case, you must drop the tablespace containing the datafile. Conversely, if a datafile listed in the control file is not present in the data dictionary, then the database removes references to it from the new control file. In both cases, the database includes an explanatory message in the alert log to let you know what was found.

Handling Errors During CREATE CONTROLFILE

If Oracle Database sends you an error (usually error ORA-01173, ORA-01176, ORA-01177, ORA-01215, or ORA-01216) when you attempt to mount and open the database after creating a new control file, the most likely cause is that you omitted a file from the CREATE CONTROLFILE statement or included one that should not have been listed. In this case, you should restore the files you backed up in step 3 and repeat the procedure from step 4, using the correct filenames.

Backup Control Files

Oracle recommends backing up control files every time the physical database structure changes, including:

Adding, dropping, or renaming datafiles.
Adding or dropping a tablespace, or altering the read/write state of a tablespace.
Adding or dropping redo log files or groups.

Use the ALTER DATABASE BACKUP CONTROLFILE statement to back up control files. Example:

ALTER DATABASE BACKUP CONTROLFILE TO '/u02/oradata/backup/control.bkp';

You can also use an SQL statement to produce a trace file (a SQL script written to the trace file) that can be edited and used to reproduce the control file. This is a human-readable format.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/some/arbitrary/path';

Recovering a Control File Using a Current Copy

This section presents ways that you can recover your control file from a current backup or from a multiplexed copy.

What if a Disk Drive Fails? Recovering a Control File

Use the following steps to recover from a disk drive failure when one of the database's control files is located on the failed drive.

Shut down the instance.
Replace the failed drive.

Copy a control file from one of the other disk drives to the new disk drive. Here we assume that u02 is the new disk drive and control02.ctl is the damaged file.

$ cp /u01/oracle/oradata/control01.ctl /u02/oracle/oradata/control02.ctl

Restart the instance. If the new media (disk drive) does not have the same disk drive name as the damaged disk drive, or if you are creating a new copy while awaiting a replacement disk drive, then alter the CONTROL_FILES parameter in the PFILE prior to restarting the database. No media recovery is required. If you are awaiting a new disk drive, you can alter the CONTROL_FILES parameter to remove the name of the control file on the damaged disk drive; this enables you to restart the database.

Recovering from Control File Corruption Using a Control File Copy

This procedure assumes that one of the control files specified in the CONTROL_FILES parameter is corrupted, that the control file directory is still accessible, and that you have a multiplexed copy of the control file.

1. Shut down the instance.
2. Use an operating system command to overwrite the bad control file with a good copy:

% cp /u03/oracle/prod/control03.ctl /u02/oracle/prod/control02.ctl

3. Start SQL*Plus and open the database:

SQL> STARTUP

Recovering from Permanent Media Failure Using a Control File Copy

This procedure assumes that one of the control files specified in the CONTROL_FILES parameter is inaccessible due to a permanent media failure and that you have a multiplexed copy of the control file.

1. Shut down the instance.
2. Use an operating system command to copy the current copy of the control file to a new, accessible location:

% cp /u01/oracle/prod/control01.ctl /u04/oracle/prod/control03.ctl

3. Edit the CONTROL_FILES parameter in the initialization parameter file to replace the bad location with the new location:

CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
                 /u02/oracle/prod/control02.ctl,
                 /u04/oracle/prod/control03.ctl)

4. Start SQL*Plus and open the database:

SQL> STARTUP

If you have multiplexed control files, you can get the database started up quickly by editing the CONTROL_FILES initialization parameter. Remove the bad control file from the CONTROL_FILES setting and you can restart the database immediately. Then you can perform the reconstruction of the bad control file, and at some later time shut down and restart the database after editing the CONTROL_FILES initialization parameter to include the recovered control file.

Dropping Control Files

You may want to drop control files from the database, for example, if the location of a control file is no longer appropriate. Remember that the database should have at least two control files at all times.

1. Shut down the database.
2. Edit the CONTROL_FILES parameter in the database initialization parameter file to delete the old control file name.
3. Restart the database.

Oracle Managed Files Approach

Control files are automatically created with the Oracle Managed Files (OMF) approach during database creation, even if you do not specify file locations/names with the CONTROL_FILES parameter. Of course, it is probably better to specify file locations/names. Recall from your earlier studies that if you wish to use the init.ora file to manage control files, you must use the filenames generated by OMF. The locations are specified by the DB_CREATE_ONLINE_LOG_DEST_n parameter or, if this is not specified, by the DB_CREATE_FILE_DEST parameter. Control file names generated with OMF can be found within the alertSID.log that is automatically generated by the CREATE DATABASE command and maintained by the Oracle Server.

Control File Information

Several dynamic performance views and SQL*Plus commands can be used to obtain information about control files.

V$CONTROLFILE: gives the names and status of control files for an Oracle Instance.
V$DATABASE: displays database information from a control file.
V$PARAMETER: lists the status and location of all parameters.
V$CONTROLFILE_RECORD_SECTION: lists information about the control file record sections.
SHOW PARAMETER CONTROL_FILES: lists the name, status, and location of the control files.

The queries shown here were executed against the DBORCL database used for general instruction in our department.

CONNECT / AS SYSDBA

SELECT name FROM v$controlfile;

NAME
------------------------------------------------------------
/u01/oradata/DBORCL/DBORCLcontrol01.ctl
/u02/oradata/DBORCL/DBORCLcontrol02.ctl
/u03/oradata/DBORCL/DBORCLcontrol03.ctl

SELECT name, value FROM v$parameter WHERE name='control_files';

NAME            VALUE
--------------- --------------------------------------------
control_files   /u01/oradata/DBORCL/DBORCLcontrol01.ctl,
                /u02/oradata/DBORCL/DBORCLcontrol02.ctl,
                /u03/oradata/DBORCL/DBORCLcontrol03.ctl

DESC v$controlfile_record_section;

Name            Null?    Type
--------------- -------- ------------
TYPE                     VARCHAR2(28)
RECORD_SIZE              NUMBER
RECORDS_TOTAL            NUMBER
RECORDS_USED             NUMBER
FIRST_INDEX              NUMBER
LAST_INDEX               NUMBER
LAST_RECID               NUMBER

SELECT type, record_size, records_total, records_used
FROM v$controlfile_record_section
WHERE type='DATAFILE';

TYPE          RECORD_SIZE RECORDS_TOTAL RECORDS_USED
------------- ----------- ------------- ------------
DATAFILE              428           100            4

The RECORDS_TOTAL column shows the number of records allocated for the section that stores information on datafiles. Several dynamic performance views display information from control files, including: V$BACKUP, V$DATAFILE, V$TEMPFILE, V$TABLESPACE, V$ARCHIVE, V$LOG, V$LOGFILE, and others.

Initialization Parameter Files

When an Oracle Instance is started, the characteristics of the Instance are established by parameters specified within the initialization parameter file that is read during startup. In this example, the initialization parameter file is named spfiledb01.ora; however, you can select any name for the parameter file. The database here has an ORACLE_SID value of db01.

There are two types of initialization parameter files:

Static parameter file: This has always existed and is known as the PFILE; it is commonly referred to as the init.ora file. The actual naming convention used is to name the file initSID.ora, where SID is the system identifier (database name) assigned to the database.

Persistent parameter file: This is the SPFILE and is commonly referred to as the spfileSID.ora file.

There are two types of parameters:

Explicit parameters: These have entries in the parameter file.
Implicit parameters: These have no entries in the parameter file, and Oracle uses default values for them.

Initialization parameter files include the following:

Instance parameters.
A parameter to name the database associated with the file.
SGA memory allocation parameters.
Instructions for handling online redo log files.
Names and locations of control files.
Undo segment information.
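To make the contents listed above concrete, a minimal PFILE might look like the following sketch; every name and value here is illustrative, not taken from the course database. An SPFILE can then be generated from such a file with the CREATE SPFILE FROM PFILE command.

```
# Sketch of a minimal initdb01.ora (PFILE); all values are illustrative.
db_name=db01                       # name of the associated database
db_block_size=8192                 # default Oracle block size
sga_target=800M                    # SGA memory allocation
undo_tablespace=UNDOTBS1           # undo segment information
control_files=(/u01/oradata/db01/control01.ctl,
               /u02/oradata/db01/control02.ctl)
```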

PFILE

This is a plain text file. I usually maintain this file either by editing it with the vi editor, or

by FTPing it to my client computer, modifying it with Notepad, and then FTPing it back to the SOBORA2 server. The file is only read during database startup so any modifications take effect the next time the database is started up. This is an obvious limitation since shutting down and starting up an Oracle database is not desirable in a 24/7 operating environment.

The naming convention followed is to name the file initSID.ora, where SID is the system identifier. For example, the PFILE for the departmental SOBORA2 database is named initDBORCL.ora. You can also create an init.ora file by typing commands into a plain text file using an editor such as Notepad.

NOTE: For a Windows operating system, the default location for the init.ora file is C:\Oracle_Home\database.

Basic Initialization Parameters

There are a total of 255 parameters; most are optional, and Oracle will use default settings for them. Here the most commonly specified parameters are sorted according to their category.

DB_BLOCK_SIZE (mandatory): specifies the size of the default Oracle block in the database. At database creation time, the SYSTEM, TEMP, and SYSAUX tablespaces are created with this block size. An 8KB block size is about the smallest you should use for any database, although 2KB and 4KB block sizes are legal values.

DB_CACHE_SIZE and DB_nK_CACHE_SIZE (recommended, optional): DB_CACHE_SIZE specifies the size of the area the SGA allocates to hold blocks of the default size. If the parameter is not specified, then the default is 0 (internally determined by the Oracle Database). If the parameter is specified, then the user-specified value indicates a minimum value for the memory pool. DB_nK_CACHE_SIZE specifies up to four other non-default block sizes, and is useful when transporting a tablespace from another database with a block size other than DB_BLOCK_SIZE.

DB_FILE_MULTIBLOCK_READ_COUNT (recommended): used to minimize I/O during table scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation.
Online transaction processing (OLTP) and batch environments typically have values in the range of 4 to 16 for this parameter. PGA_AGGREGATE_TARGET (recommended) specifies the target aggregate PGA memory available to all server processes attached to the instance. You must set this parameter to enable the automatic sizing of SQL working areas used by memory-intensive SQL operators such as sort, group-by, hash-join, bitmap merge, and bitmap create.

#Cache and I/O
DB_BLOCK_SIZE=8192
DB_CACHE_SIZE=71303168
DB_FILE_MULTIBLOCK_READ_COUNT=16
PGA_AGGREGATE_TARGET=25165824

DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE (recommended): specify the default location and size of the flash recovery area. The flash recovery area contains multiplexed copies of current control files and online redo logs, as well as archived redo logs, flashback logs, and RMAN backups. Specifying DB_RECOVERY_FILE_DEST without also specifying the DB_RECOVERY_FILE_DEST_SIZE initialization parameter is not allowed.

#Recovery
DB_RECOVERY_FILE_DEST=/u01/student/dbockstd
DB_RECOVERY_FILE_DEST_SIZE=536870912

CURSOR_SHARING (optional): setting this to FORCE or SIMILAR allows similar SQL statements to share the shared SQL area in the SGA. The SIMILAR specification does not result in a deterioration of execution plans for the SQL statements. A setting of EXACT allows SQL statements to share the SQL area only if their text matches exactly.

OPEN_CURSORS (recommended): a cursor is a handle or name for a private SQL area, an area in memory in which a parsed statement and other information for processing the statement are kept. Each user session can open multiple cursors, up to the limit set by the initialization parameter OPEN_CURSORS. OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors.

#Cursors and Library Cache
CURSOR_SHARING=SIMILAR
OPEN_CURSORS=300

BACKGROUND_DUMP_DEST, CORE_DUMP_DEST, and USER_DUMP_DEST (all recommended): these three parameters specify where to store trace files generated by Oracle when an internal error is detected. The trace files are generated by background processes, core Oracle processes, and user application processes, respectively. The BACKGROUND_DUMP_DEST also stores the alert log file for Oracle; the alert log file stores information about significant events affecting the database.
#Dump destinations
BACKGROUND_DUMP_DEST='/u01/student/dbockstd/oradata/bdump'
CORE_DUMP_DEST='/u01/student/dbockstd/oradata/cdump'
USER_DUMP_DEST='/u01/student/dbockstd/oradata/udump'

TIMED_STATISTICS (optional) a setting of TRUE causes Oracle to collect and store information about system performance in trace files or for display in the V$SESSTAT and V$SYSSTAT dynamic performance views. Normally the setting is FALSE to avoid the overhead of collecting these statistics.

#Diagnostics and Statistics
TIMED_STATISTICS=TRUE

CONTROL_FILES (mandatory) tells Oracle the location of the control files to be read during database startup and operation. The control files are typically multiplexed (multiple copies).

#Control File Configuration
CONTROL_FILES = ("/u01/student/dbockstd/oradata/USER350control01.ctl",
                 "/u02/student/dbockstd/oradata/USER350control02.ctl")

LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_n (mandatory if running in archive mode): You choose whether to archive redo logs to a single destination or to multiplex the archives. To archive only to a single destination, specify that destination in the LOG_ARCHIVE_DEST initialization parameter. To multiplex the archived logs, you can either archive to up to ten locations (using the LOG_ARCHIVE_DEST_n parameters) or archive only to a primary and secondary destination (using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST). LOG_ARCHIVE_FORMAT specifies the format used to name the system-generated archive log files so they can be read by Recovery Manager to automate recovery.

#Archive
LOG_ARCHIVE_DEST_1='LOCATION= '
#New log archive format for compatibility 10.0 and up.
LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf

SHARED_SERVERS this parameter specifies the number of shared server processes to create when an instance is started. If system load decreases, this minimum number of servers is maintained; therefore, take care not to set SHARED_SERVERS too high at system startup.

DISPATCHERS this parameter configures dispatcher processes in the shared server architecture.

#Shared Server
#Only use these parameters for a Shared Server installation.
#The parameter starts shared server if set > 0.
SHARED_SERVERS=2
#Uncomment and use the first DISPATCHERS parameter if the listener
#is configured for SSL security (listener.ora and sqlnet.ora)
#DISPATCHERS='(PROTOCOL=TCPS)(SER=MODOSE)',
#            '(PROTOCOL=TCPS)(PRE=oracle.aurora.server.SGiopServer)'
DISPATCHERS='(PROTOCOL=TCP)(SER=MODOSE)',
            '(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)',
            '(PROTOCOL=TCP)'

COMPATIBLE (optional) allows a newer version of the Oracle binaries to be installed while restricting the feature set as if an older version were installed. It is used to move forward with a database upgrade while remaining compatible with applications that may fail if run with new software versions. The parameter can be increased as applications are reworked.

DB_NAME (mandatory) specifies the local portion of a database name. The maximum name size is 8 characters. The name must begin with an alphanumeric character. Once set, it cannot be changed without recreating the database. DB_NAME is recorded in the header portion of each datafile, redo log file, and control file.

INSTANCE_NAME (optional) in a Real Application Clusters environment, multiple instances can be associated with a single database service. Clients can override Oracle's connection load balancing by specifying a particular instance by which to connect to the database. INSTANCE_NAME specifies the unique name of this instance. In a single-instance database system, the instance name is usually the same as the database name.

#Miscellaneous
COMPATIBLE=''
DB_NAME=USER350
INSTANCE_NAME=USER350

DB_DOMAIN (recommended) this parameter is used in a distributed database system. DB_DOMAIN specifies the logical location of the database within the network structure. You should set this parameter if this database is or ever will be part of a distributed system.

#Distributed, Replication, and SnapShot
DB_DOMAIN=''

REMOTE_LOGIN_PASSWORDFILE (recommended) specifies the name of the password file that stores user names and passwords for privileged (DBAs, SYS, and SYSTEM) users of the database.

#Security and Auditing
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

JAVA_POOL_SIZE, LARGE_POOL_SIZE, and SHARED_POOL_SIZE (optional) these parameters size the Java pool, large pool, and shared pool. These are automatically sized by Automatic Shared Memory Management (ASMM) if you set the SGA_TARGET initialization parameter. To let Oracle manage memory, set the SGA_TARGET parameter to the total amount of memory for all SGA components. Even if SGA_TARGET is set, you can also set these parameters when you want to manage the cache sizes manually.
The total of these parameters cannot exceed SGA_MAX_SIZE, which specifies a hard upper limit for the entire SGA.

SGA_TARGET (recommended) SGA_TARGET specifies the total size of all SGA components. If SGA_TARGET is specified, then the following memory pools are automatically sized:
Buffer cache (DB_CACHE_SIZE)
Shared pool (SHARED_POOL_SIZE)
Large pool (LARGE_POOL_SIZE)

Java pool (JAVA_POOL_SIZE)

#Pool sizing
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=123232153
#This is the minimum for 10g
SGA_TARGET=134217728
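When sizing the pools manually, the constraint just described (the manually sized pools must not exceed the SGA limit) is easy to check before startup. A hypothetical sketch; the helper name and the sizes are made up for illustration:

```python
def pools_fit(pool_sizes_bytes, sga_limit_bytes):
    """Return True if the manually sized pools sum to no more than the SGA limit."""
    return sum(pool_sizes_bytes.values()) <= sga_limit_bytes

# Hypothetical sizes: 16 MB Java pool, 8 MB large pool, 80 MB shared pool
pools = {"JAVA_POOL_SIZE": 16 * 1024**2,
         "LARGE_POOL_SIZE": 8 * 1024**2,
         "SHARED_POOL_SIZE": 80 * 1024**2}
print(pools_fit(pools, 128 * 1024**2))  # True: 104 MB fits under 128 MB
```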

PROCESSES (recommended) this parameter represents the total number of processes that can simultaneously connect to the database, including background and user processes. The number of background processes is generally about 15, and to this you add the maximum number of concurrent users. There is little or no overhead associated with making PROCESSES too big.

JOB_QUEUE_PROCESSES (recommended, especially to update materialized views) specifies the maximum number of processes that can be created for the execution of jobs per instance. Advanced Queuing uses job queues for message propagation. You can create user job requests through the DBMS_JOB package. Some job queue requests are created automatically; an example is refresh support for materialized views. If you wish to have your materialized views updated automatically, you must set JOB_QUEUE_PROCESSES to a value of one or higher.

#Processes and Sessions
PROCESSES=150
JOB_QUEUE_PROCESSES=10
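The PROCESSES rule of thumb above is simple arithmetic; a hypothetical sketch (the function name is made up):

```python
def processes_setting(max_concurrent_users, background_processes=15):
    """Rule of thumb from the text: roughly 15 background processes
    plus the maximum number of concurrent users."""
    return background_processes + max_concurrent_users

print(processes_setting(135))  # 150, matching the PROCESSES=150 example above
```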

FAST_START_MTTR_TARGET (optional) this specifies the number of seconds the database takes to perform crash recovery of a single instance.

#Redo Log and Recovery
FAST_START_MTTR_TARGET=300

RESOURCE_MANAGER_PLAN (optional) this specifies the top-level resource plan to use for an instance. The Resource Manager loads this top-level plan along with all its descendants (subplans, directives, and consumer groups). If you do not specify this parameter, the Resource Manager is off by default. If you specify a plan name that does not exist within the data dictionary, Oracle returns an error message.

#Resource Manager
RESOURCE_MANAGER_PLAN=SYSTEM_PLAN

SORT_AREA_SIZE (recommended) an often-used parameter to improve sorting performance, SORT_AREA_SIZE specifies (in bytes) the maximum amount of memory Oracle will use for a sort. After the sort is complete, but before the rows are returned, Oracle releases all of the memory allocated for the sort except the amount specified by the SORT_AREA_RETAINED_SIZE parameter. After the last row is returned, Oracle releases the remainder of the memory.

#Sort, Hash Joins, and Bitmap Index memory allocation
SORT_AREA_SIZE=524288

UNDO_MANAGEMENT and UNDO_TABLESPACE (recommended, but actually required for most installations) Automatic Undo Management automates the recovery of segments that handle undo information for transactions. It is recommended to always set the UNDO_MANAGEMENT parameter to AUTO. Specify the name of the UNDO tablespace with the UNDO_TABLESPACE parameter. Only one UNDO tablespace can be active at a time.

#Automatic Undo Management
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=undo1

So, which parameters should you include in your PFILE when you create a database? Here is a listing of the parameters included in a PFILE for a "General Database" created through the Database Configuration Assistant wizard. This PFILE was created by generating an SPFILE and then converting it to a PFILE, so the syntax shown is somewhat different.

PFILE (listing from the initDBORCL.ora file)

DBORCL.__db_cache_size=1207959552
DBORCL.__java_pool_size=16777216
DBORCL.__large_pool_size=16777216
DBORCL.__shared_pool_size=352321536
DBORCL.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/product/10.2.0/db_1/admin/DBORCL/adump'
*.background_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/DBORCL/bdump'
*.compatible=''
*.control_files='/u01/oradata/DBORCL/DBORCLcontrol01.ctl','/u02/oradata/DBORCL/DBORCLcontrol02.ctl','/u03/oradata/DBORCL/DBORCLcontrol03.ctl'
*.core_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/DBORCL/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='DBORCL'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DBORCLXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=1253048320
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1610612736
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/product/10.2.0/db_1/admin/DBORCL/udump'
*.log_archive_format='DBORCL_%s_%t_%r.arc'
*.log_archive_dest_1='LOCATION=/u03/archive/DBORCL'

The example above shows the format for specifying values: keyword = value. Each parameter has a default value that is often operating system dependent. Parameters can generally be specified in any order. Comment lines are marked with the # symbol at the beginning of the comment. Enclose values in quotation marks to include literals. Operating systems such as LINUX are usually case sensitive, so remember this when specifying file names.
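The keyword = value rules above can be illustrated with a small parser sketch. This is a simplification for illustration only (it ignores continuation lines and multi-valued parameters such as CONTROL_FILES), and the function name is made up:

```python
def parse_pfile(text):
    """Parse keyword = value lines; '#' starts a comment, quotes enclose literals."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        # parameter names are case insensitive; strip surrounding quote marks
        params[key.strip().lower()] = value.strip().strip("'\"")
    return params

sample = """
#Cursors and Library Cache
OPEN_CURSORS=300
db_name='USER350'
"""
print(parse_pfile(sample))  # {'open_cursors': '300', 'db_name': 'USER350'}
```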

SPFILE

The SPFILE is a binary file that was introduced with Oracle 9i. You must not modify the file manually, and it must always reside on the server. After the file is created, it is maintained by the Oracle server. The SPFILE enables you to make parameter changes that are persistent across startup and shutdown operations. You can make dynamic changes to Oracle while the database is running; this is the main advantage of using this file. The default location is the $ORACLE_HOME/dbs directory, with a default name of spfileSID.ora.

As shown in the figure above, you can create an SPFILE from an existing PFILE by typing the command shown while using SQL*Plus. Note that the filenames are enclosed in single-quote marks.

Recreating a PFILE

You can also create a PFILE from an SPFILE by exporting the contents through use of the CREATE command. You do not have to specify file names, as Oracle will use the SPFILE associated with the ORACLE_SID for the database to which you are connected.

CREATE PFILE FROM SPFILE;

You would then edit the PFILE and use the CREATE command to create a new SPFILE from the edited PFILE.

How Does the STARTUP Command Use Initialization Parameter Files?

The STARTUP command is used to start up an Oracle database. We've seen two different kinds of initialization parameter files. There is a precedence to which initialization parameter file is read when an Oracle database starts up, as only one of them is used.

First priority: the spfileSID.ora on the server side is used to start up the instance.
Second priority: if the spfileSID.ora is not found, the default SPFILE (spfile.ora) on the server side is used to start the instance.
Third priority: if the default SPFILE is not found, the initSID.ora on the server side is used to start the instance.
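The three-level precedence can be sketched as a simple lookup. This is a hypothetical illustration of the selection order, not Oracle's actual code:

```python
def pick_startup_file(files_present, sid):
    """Return the parameter file Oracle would read, per the precedence above.

    files_present is a set of file names found in the default directory."""
    for candidate in (f"spfile{sid}.ora",  # first priority
                      "spfile.ora",        # second priority: default SPFILE
                      f"init{sid}.ora"):   # third priority
        if candidate in files_present:
            return candidate
    return None  # no parameter file found; startup fails

print(pick_startup_file({"initUSER350.ora", "spfileUSER350.ora"}, "USER350"))
# spfileUSER350.ora -- the SPFILE wins even though a PFILE exists
```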

Modifying SPFILE Parameters

Earlier you read that an advantage of the SPFILE is that certain dynamic parameters can be changed without shutting down the Oracle database. These changes are made, as shown in the figure below, by using the ALTER SYSTEM command. Modifications made in this way change the contents of the SPFILE. If you shut down the database and start it up again, the modifications you previously made will still be in effect because the SPFILE was modified.

The ALTER SYSTEM SET command is used to change the value of instance parameters and has a number of different options, as shown here.

ALTER SYSTEM SET parameter_name = parameter_value
  [COMMENT 'text']
  [SCOPE = MEMORY|SPFILE|BOTH]
  [SID = 'sid'|'*']

where
parameter_name: name of the parameter to be changed
parameter_value: value the parameter is being changed to
COMMENT: a comment to be added into the SPFILE next to the parameter being altered
SCOPE: determines whether the change is made in memory, in the SPFILE, or in both areas
  MEMORY: changes the parameter value only in the currently running instance
  SPFILE: changes the parameter value in the SPFILE only
  BOTH: changes the parameter value in the currently running instance and the SPFILE
SID: identifies the ORACLE_SID for the SPFILE being used
  'sid': specific SID to be used in altering the SPFILE
  '*': uses the default SPFILE

Here is an example script within SQL*Plus that demonstrates how to display current parameter values and to alter those values.

SQL> SHOW PARAMETERS timed_statistics

NAME                 TYPE        VALUE
-------------------- ----------- -----
timed_statistics     boolean     FALSE

SQL> ALTER SYSTEM SET timed_statistics = FALSE
  2  COMMENT = 'temporary setting' SCOPE=BOTH
  3  SID='USER350';

System altered.

You can also use the ALTER SYSTEM RESET command to delete a parameter setting or revert to a default value for a parameter.

SQL> ALTER SYSTEM RESET timed_statistics

  2  SCOPE=BOTH
  3  SID='USER350';

System altered.

SQL> SHOW PARAMETERS timed_statistics

NAME                 TYPE        VALUE
-------------------- ----------- -----
timed_statistics     boolean     FALSE

Starting Up a Database Instance: Stages

Databases can be started up in various states or stages. The diagram shown below illustrates the stages through which a database passes during startup and shutdown.

NOMOUNT: This stage is only used when first creating a database or when it is necessary to recreate a database's control files. Startup includes the following tasks:
Read the spfileSID.ora or spfile.ora or initSID.ora.
Allocate the SGA.
Start the background processes.
Open a log file named alert_SID.log and any trace files specified in the initialization parameter file.

Example startup commands for creating the Oracle database and for the database belonging to USER350 are shown here.

SQL> STARTUP NOMOUNT PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP NOMOUNT PFILE=$HOME/initUSER350.ora

MOUNT: This stage is used for specific maintenance operations. The database is mounted, but not open. You can use this option if you need to:
Rename datafiles.
Enable/disable redo log archiving options.
Perform full database recovery.

When a database is mounted, it is associated with the instance that was started during the NOMOUNT stage. Oracle locates and opens the control files specified in the parameter file, and reads the control files to obtain the names and status of the datafiles and redo log files, but it does not check to verify the existence of these files.

Example startup commands for maintaining the Oracle database and for the database belonging to USER350 are shown here.

SQL> STARTUP MOUNT PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP MOUNT PFILE=$HOME/initUSER350.ora

OPEN: This stage is used for normal database operations. Any valid user can connect to the database. Opening the database includes opening datafiles and redo log files. If any of these files are missing, Oracle returns an error. If errors occurred during the previous database shutdown, the SMON background process will initiate instance recovery. Example commands to start up the database in the OPEN stage are shown here.

SQL> STARTUP PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP PFILE=$HOME/initUSER350.ora

If the database initialization parameter file is in the default location at $ORACLE_HOME/dbs, then you can simply type the command STARTUP and the database associated with the current value of ORACLE_SID will start up.

Startup Command Options: You can force a restart of a running database, aborting the current instance and starting a new normal instance, with the FORCE option.

SQL> STARTUP FORCE PFILE=$HOME/initUSER350.ora

Sometimes you will want to start up the database but restrict connection to users with the RESTRICTED SESSION privilege so that you can perform certain maintenance activities, such as exporting or importing part of the database.

SQL> STARTUP RESTRICT PFILE=$HOME/initUSER350.ora

You may also want to begin media recovery when a database starts, where your system has suffered a disk crash.

SQL> STARTUP RECOVER PFILE=$HOME/initUSER350.ora

On a LINUX server, you can automate startup/shutdown of an Oracle database by making entries in a special operating system file named oratab located in the /var/opt/oracle directory.

IMPORTANT NOTE: If an error occurs during a STARTUP command, you must issue a SHUTDOWN command prior to issuing another STARTUP command.

ALTER DATABASE Command

You can change the stage of a database. This example mounts the database and then opens it in READ ONLY mode.

SQL> startup mount pfile=$HOME/initUSER350.ora
ORACLE instance started.

Total System Global Area   25535380 bytes
Fixed Size                   279444 bytes
Variable Size              20971520 bytes
Database Buffers            4194304 bytes
Redo Buffers                  90112 bytes
Database mounted.

SQL> ALTER DATABASE user350 OPEN READ ONLY;

Database altered.

Restricted Mode

Earlier you learned to start up the database in a restricted mode with the RESTRICT option. If the database is open, you can change to a restricted mode with the ALTER SYSTEM command as shown here. The first command restricts logon to users with restricted privileges. The second command enables all users to connect.

SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;

One of the tasks you may perform during a restricted session is to kill current user sessions prior to performing a task such as the export of objects (tables, indexes, etc.). The ALTER SYSTEM KILL SESSION 'integer1, integer2' command is used to do this. The values of integer1 and integer2 are obtained from the SID and SERIAL# columns in the V$SESSION view. In a full listing of V$SESSION, the first six SID values are for background processes and should be left alone, and you would also notice that the users SYS and USER350 are connected. We can kill the session for the user account named DBOCKSTD.

SQL> SELECT sid, serial#, status, username FROM v$session
  2  WHERE username='DBOCKSTD';

       SID    SERIAL# STATUS   USERNAME
---------- ---------- -------- ------------------------------
       260       1352 INACTIVE DBOCKSTD

SQL> ALTER SYSTEM KILL SESSION '260,1352';

System altered.

Now when DBOCKSTD attempts to select data, the following message is received.

SQL> select * from newspaper;
select * from newspaper
*
ERROR at line 1:
ORA-00028: your session has been killed

When a session is killed, PMON will roll back the user's current transaction, release all table and row locks held, and free all resources reserved for the user.


You can open a database as read-only provided it is not already open in read-write mode. This is useful when you have a standby database that you want to use to enable system users to execute queries while the production database is being maintained.

Database Shutdown

The SHUTDOWN command is used to shut down a database instance. You must be connected as either SYSOPER or SYSDBA to shut down a database.

Shutdown Normal: This is the default shutdown mode.
No new connections are allowed.
The server waits for all users to disconnect before completing the shutdown.
Database and redo buffers are written to disk.
The SGA memory allocation is released and background processes terminate.
The database is closed and dismounted.
The shutdown command is: SHUTDOWN or SHUTDOWN NORMAL

Shutdown Transactional: This prevents client computers from losing work.
No new connections are allowed.
No connected client can start a new transaction.
Clients are disconnected as soon as their current transaction ends.
Shutdown proceeds when all transactions are finished.
The shutdown command is: SHUTDOWN TRANSACTIONAL

Shutdown Immediate: This can cause client computers to lose work.
No new connections are allowed.
Connected clients are disconnected and SQL statements in process are not completed.
Oracle rolls back active transactions.
Oracle closes and dismounts the database.
The shutdown command is: SHUTDOWN IMMEDIATE

Shutdown Abort: This is used if the normal, transactional, or immediate options fail. This is the LEAST favored option because the next startup will require instance recovery and you CANNOT back up a database that has been shut down with the ABORT option.
Current SQL statements are immediately terminated.
Users are disconnected.
Database and redo buffers are NOT written to disk.
Uncommitted transactions are NOT rolled back.

The instance is terminated without closing files.
The database is NOT closed or dismounted.
Database recovery by SMON must occur on the next startup.
The shutdown command is: SHUTDOWN ABORT

Diagnostic Files

These files are used to store information about database activities and are useful tools for troubleshooting and managing a database. There are several types of diagnostic files.

alert_SID.log file
Each Oracle instance generates an alert_SID.log file. As the DBA you need to monitor it on a daily basis.
Default location: the $ORACLE_HOME/rdbms/log directory. A different location can be specified in the initialization parameter file by the BACKGROUND_DUMP_DEST parameter. The location of the alert_DBORCL.log file is: /u01/app/oracle/product/10.2.0/db_1/admin/DBORCL/bdump
The log will grow in size over time as new information is written to it. You will periodically need to delete the file; a new one will be automatically generated.
The alert log stores information about database startup, shutdown, all non-default initialization parameters, the thread used by an instance, the log sequence number for LGWR, information on log switches, the creation of tablespaces and undo segments, the results of all ALTER statements, and various error messages.

Background trace files are generated by background processes such as SMON and PMON when problems occur. The naming convention is: sid_processname_PID.trc. Background trace files are written to the directory specified by the BACKGROUND_DUMP_DEST parameter. Delete these files after you have solved the problem logged by the background process.

User trace files store information about fatal user errors. The naming convention is: sid_ora_PID.trc. User trace files are written to the directory specified by the USER_DUMP_DEST parameter. Delete these files after you have solved the problem generated by the system user. You can enable or disable user tracing with the ALTER SESSION command as shown here.

ALTER SESSION SET SQL_TRACE = TRUE

You can also set the SQL_TRACE = TRUE parameter in the initialization parameter file.
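The trace-file naming conventions above (sid_processname_PID.trc for background processes, sid_ora_PID.trc for user processes) can be sketched with two small helpers; the function names and PIDs are made up for illustration:

```python
def background_trace(sid, process_name, pid):
    """Background trace naming convention: sid_processname_PID.trc"""
    return f"{sid}_{process_name}_{pid}.trc"

def user_trace(sid, pid):
    """User trace naming convention: sid_ora_PID.trc"""
    return f"{sid}_ora_{pid}.trc"

print(background_trace("USER350", "smon", 4321))  # USER350_smon_4321.trc
print(user_trace("USER350", 5150))                # USER350_ora_5150.trc
```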

Environment Variables

Operating System Environment Variables

Oracle makes use of environment variables on the server and client computers in both LINUX and Windows operating systems in order to (1) establish standard locations for files, and (2) make it easier for you to use Oracle.

On LINUX, environment variable values can be displayed by typing the command env at the operating system prompt. To create or set an environment variable value, the syntax is:

VARIABLE_NAME=value
export VARIABLE_NAME

An example of setting the ORACLE_SID database system identifier is shown here:

dbockstd/> ORACLE_SID=USER350
dbockstd/> export ORACLE_SID

The following environment variables in a LINUX environment are used for the server.

HOME
Command: HOME=/u01/student/dbockstd
Use: Stores the location of the home directory for your files for your assigned LINUX account. You can always easily change directories to your HOME by typing the command: cd $HOME
Note: The $ is used as the first character of the environment variable so that LINUX uses the value of the variable as opposed to the actual variable name.

LD_LIBRARY_PATH
Command: LD_LIBRARY_PATH=/u01/app/oracle/product/10.2.0/db_1/lib
Use: Stores the path to the library products used most commonly by you. Here the first entry in the path points to the library products for Oracle that are located in the directory /u01/app/oracle/product/10.2.0/db_1/lib. For multiple entries, you can separate path entries with a colon.

ORACLE_BASE
Command: ORACLE_BASE=/u02/app/oracle
Use: Stores the base directory for the installation of Oracle products. Useful if more than one version of Oracle is loaded on a server. Other than that, this variable does not have much use. We are not using it at SIUE.

ORACLE_HOME
Command: ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
Use: Enables easy changing to the home directory for Oracle products. All directories that you will use are hierarchically below this one. The most commonly used subdirectories are named dbs and rdbms.

ORACLE_SID
Command: ORACLE_SID=USER350 (or the name of your database)
Use: Tells the operating system the system identifier for the database.
One of the databases on the SOBORA2 server is named DBORCL. When you create your own database, you will use a database name assigned by your instructor as the ORACLE_SID system identifier for your database.

ORACLE_TERM

Command: ORACLE_TERM=vt100
Use: In LINUX, this specifies the terminal emulation type. The vt100 is a very old type of emulation for keyboard character input.

PATH
Command: PATH=/u01/app/oracle/product/10.2.0/db_1/bin:/bin:/usr/bin:/usr/local/bin:.
Use: This specifies path pointers to the most commonly used binary files. A critical entry for using Oracle is the /u01/app/oracle/product/10.2.0/db_1/bin entry that points to the Oracle binaries. If you upgrade to a new version of Oracle, you will need to update this path entry to point to the new binaries.
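The LINUX variables above are typically set together in a login profile. A minimal sketch using the course's example paths (adjust the values to your own installation):

```shell
# Hypothetical student settings modeled on the examples above
ORACLE_BASE=/u02/app/oracle
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
ORACLE_SID=USER350
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/local/bin:.
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH

# Confirm the settings
echo "$ORACLE_SID"
echo "$ORACLE_HOME"
```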

Windows Variables

In a Windows operating system environment, environment variables are established by storing entries in the system registry. Your concern here is primarily with the installation of Oracle tools software on a client computer. Windows and Oracle allow, and Oracle recommends, the creation of more than one ORACLE_HOME directory (folder) on a Windows client computer. This is explained in more detail in the installation manuals for the various Oracle software products. Basically, you should use one folder as an Oracle Home for Oracle Enterprise Manager software and a different folder as an Oracle Home for Oracle's Internet Developer Suite; this suite of software includes Oracle's Forms, Reports, Designer, and other tools for developing internet-based applications.

What is the Alert Log?

Each database has a special file named alert_SID.log. The alert log of a database is a chronological log of messages and errors, and includes the following items:
1) All internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock errors (ORA-60) that occur.
2) Administrative operations, such as CREATE, ALTER, and DROP statements and STARTUP, SHUTDOWN, and ARCHIVELOG statements.
3) Messages and errors relating to the functions of shared server and dispatcher processes.
4) Errors occurring during the automatic refresh of a materialized view.
5) The values of all initialization parameters that had nondefault values at the time the database and instance start.

Oracle Database uses the alert log to record these operations as an alternative to displaying the information on an operator's console.

The alert log file destination is specified by BACKGROUND_DUMP_DEST.
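Daily monitoring of the alert log often starts with scanning for the three error classes listed above (ORA-600, ORA-1578, ORA-60). A sketch; the function name and the sample log lines are hypothetical:

```python
import re

def scan_alert_log(lines):
    # Flag the error classes called out above: ORA-600, ORA-1578, ORA-60.
    # Alert logs zero-pad the error numbers, e.g. ORA-00600, ORA-01578.
    pattern = re.compile(r"ORA-0*(600|1578|60)\b")
    return [line for line in lines if pattern.search(line)]

log_lines = [
    "Completed: ALTER DATABASE OPEN",
    "Errors in file /u01/student/dbockstd/oradata/udump/USER350_ora_5150.trc:",
    "ORA-00600: internal error code, arguments: [4194]",
    "ORA-01578: ORACLE data block corrupted (file # 4, block # 26)",
]
for hit in scan_alert_log(log_lines):
    print(hit)
```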