Tuning Overview

List the roles associated with the database tuning process
The DBA, the application developer, management, the system administrator, and the network administrator.

Define the steps associated with the tuning process
Ordered by ROI (Return on Investment), the tuning recommendations are:
A. Do a proper logical design: In practice this often means more tables with fewer rows per table, which in turn means the capability for faster searching. The fewer rows that are stored and must be searched through, regardless of the search technique, the quicker you'll find what you're looking for.
B. Do a proper physical design: Separate the datafiles (tablespaces) onto different disks to avoid I/O contention; use striping.
C. Redesign if necessary: Sometimes the best way to correct the database without extensive tuning efforts is a re-analysis and redesign. Consider redesign when the initial design was hurried, or when multiple databases need to be integrated.
D. Write efficient application code: If some SQL code uses an inefficient search or sort routine, the application will run slowly despite the best efforts of the Oracle optimizer.
E. Rewrite code if necessary: If application code efficiency comes into question, and resources and management permit, re-analyze and rewrite the code.
F. Tune the database memory structures: Oracle can offer substantial improvements through the tuning of its database buffer cache. The shared pool also caches SQL code via its library cache component, and caches the data dictionary through, obviously, the data dictionary cache. The redo log buffer is a separately tunable area in the SGA.
G. Tune OS memory structures if necessary: The swap area can become a bottleneck, since it functions as OS temporary storage, user temporary storage, and the OS virtual memory backing store. The SA and DBA must work together so that the OS provides enough shared memory and semaphores to give the Oracle processes enough breathing room to operate efficiently.
H. Tune database I/O: Database I/O is affected by both the RDBMS and the OS, of course, but tuning the I/O means relocating logical and physical structures to reduce contention. If you reach this point in tuning, you will already have tuned the database buffer cache; now the main focus is simply to adjust the physical design, doing more redesigning if necessary with I/O exclusively in mind.
I. Tune OS I/O if necessary: The OS fulfills all read and write requests by all processes, including the Oracle background processes DBWR and LGWR. An OS typically buffers these requests, performs the reads and writes, and then returns the acknowledgment and data to the process upon completion. File systems are data structures that contain metadata about the files they manage, such as the location of each file's starting sector address, its sector length, its directory tree location, and its attributes (permissions, size, timestamps, etc.). In UNIX, file systems have their own logical block sizes, which correspond to something greater than or equal to the physical block size (512 bytes), usually 8KB by default. The Oracle block size should be at least 8KB, or a multiple of it, such as 16KB.

Other important OS and Oracle I/O tuning issues include read-ahead capabilities, asynchronous I/O, multiblock reads, RAID stripe sizes, disk geometry issues, controller issues, and many more.
J. Tune the network if necessary: A saturated network can cancel out improvements made by database tuning.
K. Tune the clients if necessary.
L. Consider more exotic solutions: Oracle Multithreaded Server ("MTS"), transaction processing (TP) monitors, Oracle Parallel Query and other parallel capabilities, Oracle's clustering capabilities, Oracle's bitmapped indexing, MPP machines, solid state disks, memory-resident (RAM) disks, hardware accelerators, and queuing systems.

Identify tuning goals
There are different ways of determining the goals of a performance tuning effort: consider the application type, and sample the database on various quantitative measures to further define the tuning goals:
a. Throughput: Work per unit time, as measured by transactions per second; higher is obviously better.
b. Response time: The time it takes for an application to respond, measured in milliseconds or seconds; lower is better.
c. Wall time: The elapsed time a program takes to run; lower is better.

In most systems, throughput and response time run counter to one another as tuning goals. If response time is high (bad), throughput might be high (good); if throughput is low (bad), then response time might be low (good). Typically, OLTP systems want low response time or high throughput, in terms of what the application needs; a DSS wants low response time; and a batch system normally wants lower wall times.
Always consider the two central tuning goals:
1. Maximize your return on investment: Invest your time and effort wisely by working on the problems most likely to yield the most improvement.
2. Minimize contention: Bottlenecks are characterized by delays and waits; eliminate or reduce these whenever possible.
Also consider these general-purpose tuning goals:
1. Minimize the number of blocks that need to be accessed; review and rewrite code as necessary.
2. Use caching, prefetching, buffering, and queuing whenever possible to compensate for the electromechanical disadvantage (memory is faster than disk).
3. Minimize the data transfer rate (the time it takes to read or write data); fast disks, RAID, and parallel operations help do this.
4. Schedule programs to run as noncompetitively as possible; they might run concurrently and yet still be noncompetitive for the most part.

List the strengths of different database configurations for recoverability
The two main configurations for running a database are ARCHIVELOG and NOARCHIVELOG. Assuming a recovery operation is needed, the following scenarios could arise:

If the database is in NOARCHIVELOG mode: You benefit from not having to save all the on-line redo log files. Because in ARCHIVELOG mode every on-line redo log file is eventually backed up, archived redo log files can easily fill up a whole disk (10-12 GB) in a matter of hours, so on heavily accessed systems NOARCHIVELOG alleviates disk-space usage. However, the disadvantage of this configuration is that in the event of a failure your only means of recovery is your last backup tape; Oracle cannot help in this case because it does not have a history of previous redo log files.
If the database is in ARCHIVELOG mode, there are two possibilities:
a. Complete recovery: The database is restored and recovered through the application of ALL redo information (this includes both the on-line and the archived redo log files) generated since the last backup. This type of recovery is normally performed when one or more data files or control files are damaged; the damaged files are recovered using all the redo information generated since the last full backup.
b. Incomplete recovery: The database is restored and recovered through the application of only SOME of the redo information generated since the last backup. In simpler terms, an incomplete recovery only rolls back certain transactions without the need to involve ALL the redo log files. This type of recovery is normally used when an on-line redo log file is lost due to hardware failure, or when a user requires recovery to a certain point in time.
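To confirm which of the two configurations a database is currently running in, a quick hedged check against the standard V$DATABASE view:

  -- Returns ARCHIVELOG or NOARCHIVELOG:
  select log_mode from v$database;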

Oracle Alert Log and Trace Files

Describe the location and usefulness of the alert log
The alert log records the commands and command results of major events in the life of the database, e.g. tablespace creation, redo log switches, recovery operations, and database startups. The background processes also write entries to the alert.log file for the database. This file is located in the directory specified by the parameter BACKGROUND_DUMP_DEST in the init.ora file.

Describe the location and usefulness of the background and user process dump files
When a background process is terminated or abnormally aborts an operation, it usually produces a trace file containing the error message causing the failure, dumps of the current process stacks, currently executing cursors, and much other information pertinent to the problem. A background dump file is saved in the directory specified by the BACKGROUND_DUMP_DEST parameter in the init.ora file; these are commonly known as "detached process dumps". A user dump file is saved in the directory specified by the USER_DUMP_DEST parameter in the init.ora file; these are, of course, known as "user process dumps". It is important to collect these files and forward them to Oracle Support, as they may help resolve the problem.
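Both destinations can be read back from a running instance; a minimal hedged check using V$PARAMETER:

  -- Where the alert log and the trace files are written:
  select name, value
  from v$parameter
  where name in ('background_dump_dest', 'user_dump_dest');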

Utilities and Dynamic Performance Views
Collect analysis through:
1. The available dynamic troubleshooting and performance views: All the Oracle tools, such as Server Manager, Enterprise Manager, and UTLBSTAT/UTLESTAT, rely on the V$ dynamic performance views, which they query using SQL scripts, formatting the output that is returned. The V$ views are called dynamic because they are populated at instance startup and are truncated at shutdown; they are based on the internal X$ base tables. Query the table V$FIXED_TABLE to get a listing of the available V$ views. These views are grouped into instance, database, memory, disk, user, session, and contention aspects of performance. The V$ views also form the basis of the standard Oracle tuning scripts; therefore, if UTLBSTAT/UTLESTAT do not give you what you want, you can use Server Manager and the V$ views to either supplement or supplant those utilities.
2. The UTLBSTAT/UTLESTAT report output: This is the most commonly used diagnostic utility. The DBA runs UTLBSTAT before running his or her application or simulation; the utlbstat script builds the beginning tables necessary to collect and store the performance data. Then the DBA runs UTLESTAT, which builds the ending tables and the difference tables, computes the performance differences (deltas) between the utlbstat run and the utlestat run, formats the output data (including comments and some explanations), and writes it to the default file, report.txt.
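A minimal sketch of such a run from SQL*Plus or Server Manager, assuming the scripts live in the standard $ORACLE_HOME/rdbms/admin directory (on most platforms "?" expands to ORACLE_HOME):

  -- Start statistics collection:
  @?/rdbms/admin/utlbstat.sql
  -- ... run the application or simulation to be measured ...
  -- End collection; the deltas are written to report.txt in the current directory:
  @?/rdbms/admin/utlestat.sql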

This file must be interpreted by the DBA. Interpretation of this data means comparing the final figures to more or less established guidelines, either directly or indirectly (by taking some of the output values given and using them as inputs into simple formulas), and categorizing the findings as acceptable or not for that given area of performance, keeping the ROI strategy in mind.
3. Oracle wait events.
4. The appropriate Enterprise Manager tuning tools: The Enterprise Manager "performance pack" is extremely useful. The components in the performance pack help analyze your logical and physical design, your tablespace storage (data files, fragmentation, etc.), a variety of performance issues (throughput, I/O, memory, redo, rollback, etc.), and application events through tracing. They also monitor locks and the top user sessions with regard to resource consumption.
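For the wait events mentioned in item 3, a quick hedged survey of where the instance has been waiting since startup:

  -- Largest cumulative waits first:
  select event, total_waits, time_waited
  from v$system_event
  order by time_waited desc;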

Define the latch types
Latches are used to control access to shared structures. Latches are locks that are held for a very small amount of time; they are implemented using semaphores at the OS level. Before a process gets access to a shared structure protected by a latch, it has to first acquire the latch. The latch could currently be free (that is, no other process is accessing it), in which case the process gets the latch immediately. The process holds the latch for the period of time it requires and then relinquishes it. In case the latch is already acquired by another process, the requesting process has two options:
a. Acquire the latch in immediate mode: If the latch is already being used by another process (as is the case), the process will not wait to acquire the latch; it will continue by taking a different action.
b. Acquire the latch in willing-to-wait mode: If the process fails to acquire the latch on the first try, it will wait and try again. If the system has multiple CPUs, the unsuccessful process will start spinning on the latch, repeatedly trying to acquire it. The number of times the process spins on the latch is defined by the parameter spin_count in the init.ora file; with every spin it tries to acquire the latch. If it has not succeeded by the time spin_count is reached, the process goes to sleep for a specified amount of time, wakes up, and repeats the aforementioned cycle.
To view latches, the following views are used:
a. V$LATCH: Contains all the important statistics related to the performance of the various latches on the system.
b. V$LATCHHOLDER: If the system is currently having latch contention problems, this view can be used to determine which session is currently holding a latch.
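A hedged pair of queries against these two views; the first ranks latches by contention, the second shows who is holding a latch right now:

  -- Which latches are suffering contention:
  select name, gets, misses, sleeps
  from v$latch
  order by misses desc;

  -- Who currently holds a latch (returns no rows when there is no contention):
  select pid, sid, name
  from v$latchholder;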

The important latches
There are approximately 52 types of latches on an Oracle installation; however, the following latches are of significant importance in any tuning job.
1. cache buffers lru chain latch: This latch is responsible for protecting the access paths to the db block buffers in the buffer cache. The buffer cache, whose size is defined by the parameter db_block_buffers, resides in the SGA and contains a cached copy of data read from the data files. The buffer cache is organized into two lists: the dirty list and the LRU list. The dirty list contains the buffers that have been modified but not yet written to disk. The LRU list is comprised of the pinned buffers, the dirty buffers that have not yet been moved to the dirty list, and the free buffers. Pinned buffers are currently being accessed by other processes; dirty buffers contain data that is to be written to disk, and they subsequently get moved to the dirty list; free buffers are the buffers available for use. When a process needs to read data from disk that is not already in the cache, it needs a free buffer to read the new data into, and it scans the LRU list to find one. If there are excessive requests for free buffers in the buffer cache, there will be high access to the LRU list, causing contention for the cache buffers lru chain latch. The basic reason for contention for this latch is a high demand for free buffers: you can optimize the SQL statements to minimize that demand, or increase the db_block_buffers parameter to increase the number of free buffers available on the system. Contention for this latch can also be minimized with the parameter db_block_lru_latches in the init.ora file; by increasing this parameter, the contention can be reduced. The maximum value for this parameter is double the number of CPUs. NOTE: The SGA must fit into contiguous chunks of real memory, so if the buffer cache is enlarged you must ensure there is enough contiguous memory available on the system to service the increase.
2. redo allocation and redo copy latches: These latches control access to the redo log buffer; when a process requires writing to the redo log buffer, one of these latches must be acquired. If the size of the redo information written to the redo log buffer is less than the log_small_entry_max_size parameter, the process uses the redo allocation latch; if the size is greater than this value, the entry is copied using a redo copy latch. The number of redo copy latches is defined by the parameter LOG_SIMULTANEOUS_COPIES; the maximum number available on the system is double the number of CPUs. For a single-CPU system this value is 0, and the redo allocation latch is used. A quick way to check whether there is any contention on the redo log buffer is to check whether there are any waits associated with writing to it, using the V$SYSSTAT view:
  select name, value from v$sysstat where name = 'redo log space requests';
The size of the redo log buffer will have to be increased if the number of waits is too high.
a. Contention for the redo allocation latch: This can be reduced on a multi-CPU system by forcing processes to use a redo copy latch instead: decrease the value of log_small_entry_max_size from its current value so that the redo copy latch is used. Because there can be multiple redo copy latches, the copy is done more efficiently.
b. Contention for the redo copy latch: This can be decreased by either increasing the value of log_small_entry_max_size (so that the redo allocation latch is used) or increasing the value of log_simultaneous_copies (so that more redo copy latches are available). The init.ora parameter log_entry_prebuild_threshold can also be increased so that the data written to the redo log buffer is grouped: a number of write operations can then be combined and written out in one operation, thereby reducing requests for these latches and thus contention.
3. library cache latch: This latch is primarily concerned with control of access to the library cache. The library cache includes the shared SQL area, the private SQL areas, PL/SQL procedures and packages, and other control structures. Contention for this latch occurs when there is a lot of demand for space in the library cache; very high parsing on the system and a heavy demand to open new cursors because of low sharing among processes are some of the common causes. The shared SQL area contains SQL statements that are shared among multiple sessions, so by increasing the sharing of these SQLs, contention for this latch can be avoided:
a. Use code that can be shared by multiple sessions. This can be done by typing the code with the same conventions (all capitals for DML keywords, or just the first letter capitalized): to the parsing engine, even a one-letter difference in case generates a different hash value, and even putting more spaces in a select statement causes the hash to be different.
b. Use bind variables, so that repeated executions of the same select statement share one copy in the shared pool, e.g.:
  select sal from employee where emp_id = :emp_id;
c. Pin frequently used objects, like procedures and packages, in the shared pool; the advantage is that these objects will never be flushed out of the shared pool. Such objects can be identified by:
  select name, executions from v$db_object_cache
  where executions > <threshold value> order by 2 desc;
and pinned in the shared pool with:
  execute dbms_shared_pool.keep('<object name>','P');
To check the objects in the shared pool that are not pinned:
  select name, type, kept, sharable_mem
  from v$db_object_cache
  where kept = 'NO'
  order by sharable_mem desc;

Tuning the Shared Pool

Tune the library cache and the data dictionary cache
The shared pool is a special type of buffer. Whereas a buffer is a "dumb" mechanism, simply providing temporary storage for data on its way between fast memory and slow disk, a cache is a "smart" mechanism, retaining knowledge of whether it already has a given piece of information, so that it can avoid as many unnecessary trips to the disk as possible. When an I/O request is made, the cache checks to see whether it already has the data in memory. If it does, it answers the request itself, returning the requested data; this is known as a hit. If it does not, a trip to the disk is warranted; this is known as a miss. For most cache mechanisms, a 90+ % hit ratio is very good performance.

Caches are generally managed by the LRU (Least Recently Used) algorithm, which ensures that, at any given time, the MRU (Most Recently Used) data is held in cache and the LRU data is aged out. The shared pool is composed of the library cache and the data dictionary cache.
1. The library cache contains all the recently executed SQL statements, as well as stored procedures, functions, packages, triggers, and PL/SQL blocks. When Oracle parses a SQL statement, it allocates a SQL area in the library cache for it by applying a mathematical formula to the alphanumeric text of the SQL statement and using the result to store (and later find) it in the cache; in other words, it uses a hash function. In order for a statement to be reused by another, the statements must be identical: if an extra space or a different-case letter is used in a query, the hash function will not find a match. Another obstacle to sharing is the use of literals in the query; it is recommended that queries use bind variables so that the hashed statement can be reused. A bind variable is normally a host 3GL variable, such as a C integer; the value of the variable can then take on any specified integer, with the benefit of reusing the statement in the library cache. (In the case of DSS, the use of bind variables is limited.)
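To make the hashing point above concrete, a small sketch using the document's own employee example (the literal values are illustrative):

  -- Each literal hashes differently, so each statement gets its own SQL area:
  select sal from employee where emp_id = 7369;
  select sal from employee where emp_id = 7499;
  -- One shared SQL area serves every execution, whatever :emp_id holds:
  select sal from employee where emp_id = :emp_id;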

2. The data dictionary cache portion of the shared pool is sized by the parameter SHARED_POOL_SIZE, which is the ONLY way to (indirectly) size this cache. The objects it serves are held in the SYS and SYSTEM schemas: the X$ tables, the V$ views, the DBA_ views, and the USER_ views.
Because caching and buffering are involved, the shared pool is memory-intensive as a by-product. The library cache and the data dictionary cache are not separately configurable; diagnosing and tuning the library cache so that it performs well should have the side effect of helping the performance of the data dictionary cache, because they coexist in the shared pool.

Size the shared pool appropriately
To improve the performance of the library cache:
1. Measure the shared pool (library cache) hit ratio:
  select gethitratio from v$librarycache where namespace = 'SQL AREA';
There are also a couple of ways to measure data dictionary cache performance. One is to query the V$ROWCACHE view, computing the sum of all the GETMISSES divided by the sum of all the GETS:
  select sum(getmisses)/sum(gets) "DC_MISS_RATIO" from v$rowcache;
The other is to use the data dictionary section of report.txt (UTLBSTAT/UTLESTAT) to compute a similar DC_MISS_RATIO. If either of these two methods yields a DC_MISS_RATIO > .15, increase the SHARED_POOL_SIZE (and retest).
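Another commonly used hedged check on the library cache is the reload rate: reloads are misses on execution, and a frequently quoted guideline is that they should stay well under 1% of pins:

  -- Library cache reload ratio across all namespaces:
  select sum(pins) "EXECUTIONS", sum(reloads) "MISSES",
         sum(reloads)/sum(pins) "RELOAD_RATIO"
  from v$librarycache;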

2. Minimize unnecessary parse calls from within applications: Parsing is CPU intensive. Cursor opening and closing should be carefully placed in the application to facilitate reuse of the private SQL areas for multiple SQL statements. To determine whether your application might be inefficient in this regard, run SQL TRACE/TKPROF and examine whether the count column for Parse is near the value for Execute (or Fetch); if so, the application is reparsing for almost every execute (or fetch). A trace sketch follows this list. The parameter OPEN_CURSORS in init.ora might need to be modified to allow sufficient allocation of cursor space (private SQL areas).
3. Maximize reuse of those statements that must be parsed: As mentioned, SQL statements must be identical to be reused, so establish some conventions, like "always code SQL statements in uppercase". Furthermore, except for DSS applications, use bind variables when appropriate.
4. Pin frequently used program objects in memory: In Oracle, a cursor, trigger, procedure, or package can be held in memory using a special shared pool package, DBMS_SHARED_POOL. To create this package you must run the dbmspool.sql script; you might also need to run prvtpool.sql (check the version on your platform to see whether this script needs to be run). Run the(se) script(s) as SYS. To pin an object:
  EXECUTE DBMS_SHARED_POOL.KEEP('<object name>');
To unpin the object:
  EXECUTE DBMS_SHARED_POOL.UNKEEP('<object name>');
To determine whether an object was pinned:
  select substr(name,1,25), kept from v$db_object_cache;
5. Minimize fragmentation in the library cache: Unless your application is guarded against fragmentation, you will commonly receive ORA-04031 errors (not enough contiguous free space). One way to avoid this fragmentation is to pin frequently used objects in memory, as above.

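For the parse-call check in step 2 above, a hedged sketch of turning tracing on and formatting the result (the file names are hypothetical):

  -- In the session to be measured:
  alter session set sql_trace = true;
  -- ... exercise the application code ...
  alter session set sql_trace = false;
  -- Then, from the OS shell, format the trace file written to USER_DUMP_DEST:
  --   tkprof prod_ora_1234.trc parse_report.txt sys=no
  -- and compare the Parse and Execute counts in parse_report.txt.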
6. Tune the shared pool reserved space: For less frequently used large objects, you can use the init.ora parameter SHARED_POOL_RESERVED_SIZE to set aside a shared pool "reserved area"; essentially, you guarantee that your necessary large objects will find space. In order to do this you need an idea of what constitutes a "large object", so take the following steps:
a. To determine the size of a particular object you want to include in the reserved area, use:
  select substr(name,1,25) "NAME", sharable_mem
  from v$db_object_cache
  where name = '<object name>';
b. Set SHARED_POOL_RESERVED_MIN_ALLOC to your specification.
c. To determine the size you need for SHARED_POOL_RESERVED_SIZE, use:
  select sum(sharable_mem) from v$db_object_cache
  where sharable_mem >= <SHARED_POOL_RESERVED_MIN_ALLOC>;
d. Set SHARED_POOL_RESERVED_SIZE to the output of the last query plus a fudge factor of 10%; that is, to what would be the maximum number of bytes of your largest objects simultaneously loaded.
You can also set CURSOR_SPACE_FOR_TIME to TRUE to prevent SQL areas associated with cursors from aging out of the library cache before they have been closed by a program.

NOTE: Do not change the value of CURSOR_SPACE_FOR_TIME to TRUE if RELOADS in V$LIBRARYCACHE always shows a 0 value, if you are using Oracle Forms or SQL*Forms, or if you use dynamic SQL.

Tune the shared pool reserved space
Rather than using the large pool, you can reserve an area within the shared pool for large objects via the SHARED_POOL_RESERVED_SIZE parameter in init.ora. The "reserved size" is set aside for the shared pool entries of large objects (such as large packages).

Pin objects in the shared pool
Rather than reserving space in the shared pool, you may wish to selectively "pin" packages in memory. The KEEP procedure in the DBMS_SHARED_POOL package designates the packages to pin in the shared pool:
  alter procedure APPOWNER.ADD_CLIENT compile;
  execute DBMS_SHARED_POOL.KEEP('APPOWNER.ADD_CLIENT','P');
Pinning packages in memory immediately after starting the database increases the likelihood that a large enough section of contiguous free space is available in memory. Pinning of packages is more related to application management than application tuning, but it can have a performance impact.

Describe the User Global Area (UGA) and session memory considerations; configure the large pool
The large pool is used when Oracle requests large contiguous areas of memory within the shared pool (such as during use of the multithreaded server). By default this pool is not created; to create a large pool, set a value (in bytes) for the LARGE_POOL_SIZE parameter in init.ora. You can specify the minimum allocation size for an object in the large pool via the LARGE_POOL_MIN_ALLOC parameter in init.ora (this parameter defaults to 16KB and is obsolete in Oracle 8i).

Tuning the Buffer Cache

Describe how the buffer cache is managed
The single most important tuning change you can make to improve the performance of your Oracle system is to properly set the size of the database buffer cache. The database buffer cache is the cache structure in the SGA that holds in-memory copies of the Most Recently Used (MRU) Oracle data blocks. The TWO parameters that determine its size are:
1. DB_BLOCK_SIZE: The size of an Oracle block. This can range from 2KB (2048 bytes) to 64KB (65536 bytes); for performance, generally the higher the better. If your database has already been created with a relatively small block size, consider rebuilding it if that is feasible for your application:
a. Shut down the instance.
b. Do a full export of your database (if feasible).
c. Increase DB_BLOCK_SIZE in your init.ora.
d. Start up the instance.
e. Reimport the database as SYS.
2. DB_BLOCK_BUFFERS: The number of Oracle blocks to be held in memory; each buffer equals one block. The size of the database buffer cache is:
  DB_BLOCK_BUFFERS x DB_BLOCK_SIZE
This should be sufficiently high to yield an efficient hit ratio, but not so high as to cause OS paging. The last thing you want is the SGA being paged in and out of memory by the OS: paging blocks in on demand is Oracle's job when it comes to DB_BLOCK_BUFFERS, and you don't want the OS paging underneath Oracle! You must also be careful to take into consideration non-Oracle application memory requirements and the OS requirements; your database buffer cache, along with the shared pool, should fit comfortably in real (available core) memory.
The name "database buffer cache" is somewhat of a misnomer, not to mention a little confusing, in that the cache is a special kind of buffer, so "buffer cache" is actually redundant. The real point is that it caches Oracle blocks; it differs from the shared pool in that it caches data and not programs. A user process, or application, always reads from (and writes to) the database buffer cache; Oracle always reads Oracle blocks into the database buffer cache before passing them on to user processes. The following are the steps in the buffer management of an I/O request:
a. The user selects data (requests a block).
b. The server looks in the database buffer cache for it.

c. If the server finds it (through the hash function) in the LRU list, it returns it; it is finished.
d. If it doesn't find it, it reads the block in from the datafile on disk and attaches it (using the hash function) to the MRU or LRU end of the LRU list, as appropriate.
e. If the user does not modify the block, that is the end of it.
f. If the user does modify it, DBWR eventually writes the block (a dirty buffer) back to its location in the datafile on disk.
Buffers can be free (clean), dirty, current, or read-consistent (rollback). A free buffer is one that has yet to be used since instance startup, or one that has been used and is now available. A dirty buffer is one that has been used but has not yet been flushed, or written out by DBWR on checkpoint. A current buffer is one used in service of an INSERT, UPDATE, or DELETE; read-consistent buffers serve SELECT statements and rollback. By their very nature, current buffers more often than not become dirty.
Blocks read in service of full table scans are placed at the LRU end of the LRU buffer chain; however, you can still cache whole tables at the MRU end of the chain. Indexes are accessed one block at a time, but full table scans can have multiple blocks read with one request; set the number of blocks (the batch size) with:
  DB_FILE_MULTIBLOCK_READ_COUNT = <number of blocks>

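For the whole-table caching mentioned above, a hedged sketch (the table name is hypothetical); the CACHE attribute places blocks read by full scans of the table at the MRU end of the LRU list:

  -- Mark a small, frequently scanned lookup table for caching:
  alter table lookup_codes cache;
  -- The same effect for a single query, via a hint:
  select /*+ FULL(c) CACHE(c) */ * from lookup_codes c;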
Calculate and tune the buffer cache hit ratio
Because memory I/O is several orders of magnitude faster than disk I/O (nanoseconds versus milliseconds), you want I/O requests to be satisfied by memory as often as possible: you want blocks, on average, to be fetched 90% of the time from the database buffer cache (data block buffer cache) rather than from the datafile. The hit ratio is summed up as 1 - (physical reads / logical reads). One way to calculate it:
  select 1-(P.value/(D.value+C.value)) "CACHE HIT RATIO"
  from v$sysstat P, v$sysstat D, v$sysstat C
  where P.name = 'physical reads'
  and D.name = 'db block gets'
  and C.name = 'consistent gets';
Here "physical reads" is the number of blocks read from disk, "db block gets" is the number of reads of current copies of blocks in the cache, and "consistent gets" is the number of reads of read-consistent (rollback) copies of blocks in the cache. You also want to minimize latch contention: the LRU buffer chain is locked through latch mechanisms, just like those throughout the Oracle kernel and the library cache (in the shared pool). As with any latch approach, you must have enough of them, because latches (or spin locks) contain no queuing mechanisms as semaphores do.

Tune the buffer cache hit ratio by adding or removing buffers
If the database buffer cache hit ratio is less than .90, increase db_block_buffers and rerun the query. Increasing this parameter requires an instance shutdown and startup, but it is not as complicated as changing db_block_size. There is also an option that simulates the effect of adding more buffers to the database: the X$KCBRBH table. First shut down the instance and set the parameter DB_BLOCK_LRU_STATISTICS = <number of buffers you would want>; once the instance is started up again, let your application run for a reasonable amount of time, just as you would for UTLBSTAT.sql/UTLESTAT.sql, and then examine X$KCBRBH.
Further buffer cache objectives: create multiple buffer pools; size multiple buffer pools; monitor buffer cache usage; make appropriate use of table caching; diagnose LRU latch contention; avoid free list contention.

Tuning the Redo Log Buffer
Determine if processes are waiting for space in the redo log buffer; size the redo log buffer appropriately; reduce redo operations.

Database Configuration and I/O Issues
Diagnose appropriate use of the SYSTEM, RBS, TEMP, DATA and INDEX tablespaces; use locally managed tablespaces to avoid space management issues; detect I/O problems; ensure that the files are distributed to minimize I/O contention, and use appropriate types of devices.

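One hedged way to see whether datafile I/O is evenly spread across disks is to rank the files by physical reads and writes:

  -- Physical I/O per datafile since instance startup:
  select df.name, fs.phyrds, fs.phywrts
  from v$datafile df, v$filestat fs
  where df.file# = fs.file#
  order by (fs.phyrds + fs.phywrts) desc;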
Use striping where appropriate; tune checkpoints; tune DBWn process I/O.
The parameter LOG_CHECKPOINT_TIMEOUT specifies an upper bound on the time a buffer can be dirty in the cache before DBWn must write it to disk. The default value for LOG_CHECKPOINT_TIMEOUT is 1800; if you set it to 60, then no buffer remains dirty in the cache for more than 60 seconds.

Using Oracle Blocks Efficiently
Determine the appropriate block size; optimize space usage within blocks; detect and resolve row migration; monitor and tune indexes.

Optimizing Sort Operations

Identify the SQL operations that require sorting
Your first and best strategy is to avoid sorts when possible, because they consume a substantial amount of CPU time. Sorting can take place fully in memory, and that is the desired case; however, it is more likely to spill over to disk sorting, especially with large tables, which can be extremely time-consuming despite even the best physical design of a database. When sorting cannot be avoided, which is most often the case, it needs to be tuned.

The second best strategy is to sort in memory as much as possible and sort on disk only when absolutely necessary. This implies allocating sufficient temporary disk space (in effect, the TEMP tablespace) and separating this space physically from the rest of the Oracle datafiles, rollback segments, and redo logs. Other options include using a fast third-party sorting utility, such as SyncSort, in which case you can choose to sort the data at the OS level and then create the index with the NOSORT option; or using Oracle's Parallel Query Option (PQO) with SQL*Loader to load the data in parallel. OS-level sorting doesn't usually buy you anything, unless you happen to already have a sorted copy of the data, because you are only trading RDBMS sorting for OS sorting, which isn't much of a trade.
Who generates sorts? The CREATE INDEX statement obviously requires a sort operation on the index key to enable the creation of the B*tree structure; ALTER INDEX ... REBUILD likewise requires the same sort. ORDER BY and GROUP BY usually require sorts, although an ORDER BY on an indexed column uses the already sorted index in most circumstances. The DISTINCT qualifier must use a sorting technique to eliminate duplicate column values (again, unless it is used on a column that already has a unique index). Likewise, a UNION must eliminate duplicate rows, whereas a UNION ALL, by definition, allows duplicate rows; because it doesn't eliminate duplicates, it doesn't require sorting, so UNION ALL is a recommended substitute for the UNION operation where possible. If primary keys are enforced on the two UNIONed tables, there won't be any duplicates to start with. Similarly, INTERSECT and MINUS require some duplicate elimination, though nowhere near the burden of a UNION operation. IN and NOT IN can require sorting, especially if they are in support of nested subqueries. A join operation requires sorts of whatever tables do not already have existing indexes on the join key; the more usual situation, though, is for tables to be joined on primary keys (which already have unique indexes), negating the need for sorting any of the tables. The following list sums up the SQL commands and operators that can trigger sorts:
a. CREATE INDEX, ALTER INDEX ... REBUILD
b. ORDER BY, GROUP BY
c. DISTINCT
d. UNION, INTERSECT, MINUS
e. IN, NOT IN
The primary parameters affecting sort operations are:
1. SORT_AREA_RETAINED_SIZE: The maximum amount of memory to be used for an in-memory sort.
2. SORT_AREA_SIZE: The maximum amount of memory to be used for an external (disk) sort operation.
If a sort operation requires more than SORT_AREA_RETAINED_SIZE for an in-memory sort, it attempts to perform the sort within SORT_AREA_SIZE as an external disk sort, allocating a temporary segment in the process. If the sort operation requires further memory, it splits the sort burden into multiple sort runs and allocates multiple temporary segments for that purpose; the server process sorts one segment at a time and returns the merge of the sorted segments as the result. The sort that is currently executing is known as the active sort; a join sort is a sort in support of a join operation. Any active sort requires SORT_AREA_SIZE; any join sort requires SORT_AREA_RETAINED_SIZE. These memory allocations are not stored in the SGA shared pool; they are part of the UGA, except with MTS, where they are part of the SGA shared pool because the UGA is relocated there anyway. In general, set SORT_AREA_SIZE = SORT_AREA_RETAINED_SIZE; higher values haven't yielded better performance, and the optimal value is 1MB. These settings hold true only for the dedicated server; MTS and PQO require some special considerations. For PQO, each parallel query server requires SORT_AREA_SIZE, and the two sets of parallel servers can be working at once, so for PQO plan for:
a. SORT_AREA_SIZE x 2 x (degree of parallelism)
b. SORT_AREA_RETAINED_SIZE x (degree of parallelism) x (number of sorts > 2)
If you are using MTS, set SORT_AREA_RETAINED_SIZE much smaller than SORT_AREA_SIZE. Using EXPLAIN PLAN, you can see that many SQL statements require multiple sorts within their execution plans.

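A hedged sketch of the EXPLAIN PLAN check mentioned above; it assumes the PLAN_TABLE has been created (e.g. by running utlxplan.sql) and uses the document's employee table with a hypothetical deptno column:

  explain plan set statement_id = 'sort_check' for
  select distinct deptno from employee order by deptno;

  -- Sort steps show up as SORT in the OPERATION column:
  select operation, options
  from plan_table
  where statement_id = 'sort_check';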
As a guideline, you can set SORT_AREA_RETAINED_SIZE = SORT_AREA_SIZE / (number of expected concurrent sorts), but not less than 1/10 x SORT_AREA_SIZE. The number of expected concurrent sorts can be calculated roughly as twice the number of concurrent queries.
Temporary (sort) segments must be created when a sort cannot take place fully in memory: as discussed, when the sort operation's memory requirements exceed SORT_AREA_RETAINED_SIZE, it requires the allocation of a temporary segment and attempts to work within SORT_AREA_SIZE. Temporary tablespaces are created with the CREATE or ALTER TABLESPACE ... TEMPORARY syntax. A temporary tablespace cannot contain any permanent objects and consists solely of a single sort segment, created initially on the fly by the first sort requiring it; this segment grows in extents as sort concurrency and operation size increase. A useful guide in setting the storage parameters for the extents is INITIAL = NEXT = (max size as prescribed by the datafile or disk) / (number of expected concurrent sorts). This is one case where you don't want one large extent sized just below the datafile size, which would normally be a good recommendation for general-use (permanent) tablespaces. Also set INITIAL = NEXT to some multiple of SORT_AREA_SIZE plus at least one block for the overhead of the extent header, because you wouldn't want any single sort requiring more than one extent. Set PCTINCREASE to 0, because you don't want any surprises such as increasingly large NEXT extents. Because concurrency plays a factor here, due to the random nature of concurrent access, you can afford to have a few sorts stored in the same extents; besides, barring actual sizing techniques, having equal-sized extents is a fair approach, and it works well with random size requirements (no single sort need is too far from the average).
In the SGA, a memory structure known as the Sort Extent Pool (SEP) tracks the extents that make up the single sort segments belonging to the temporary tablespaces. When sort space is requested by a process, this pool offers free extents (those that were allocated and used by an earlier running process, and are now free but not deallocated) for reuse, much like the capability of reusing buffers in the database buffer cache. The V$SORT_SEGMENT view contains information such as the number of users of the temporary sort segments; you can use it to determine efficiency (hits) and to help size your extents properly.
Oracle also offers the capability of having sorts bypass the database buffer cache; this is called sort direct writes. Each sort operation can have its own memory buffers and write them directly to disk. The size of the buffers is set by the init.ora parameter SORT_WRITE_BUFFER_SIZE (32-64KB), and the number of buffers by SORT_WRITE_BUFFERS (2-8). Each regular (serial) sort operation requires a sort direct writes buffer of:
  (SORT_WRITE_BUFFERS x SORT_WRITE_BUFFER_SIZE) + SORT_AREA_SIZE
For PQO, each (parallel) sort requires:
  ((SORT_WRITE_BUFFERS x SORT_WRITE_BUFFER_SIZE) + SORT_AREA_SIZE) x 2 x (degree of parallelism)
The init.ora parameter SORT_DIRECT_WRITES determines whether sorting uses the database buffer cache. If set to FALSE, sort writes are buffered in the database buffer cache before being written back out to disk; these are normal sort buffer writes. If set to TRUE, sort writes are always sort direct writes. If set to AUTO, the default, the sort direct writes buffer is used whenever SORT_AREA_SIZE >= 10 x the sort direct writes buffer. VLDBs, DSSs, and data warehouses should normally have this set to TRUE (or at least left at AUTO).
Ensure that sorting is done in memory when possible; reduce the number of I/Os required for the sort runs; allocate temporary space appropriately.

Tuning Rollback Segments
Use the dynamic performance views to check rollback segment performance; reconfigure and monitor rollback segments; define the number and sizes of rollback segments; appropriately allocate rollback segments to transactions.

Index-Organized Tables C.List possible causes of contention Use Oracle utilities to detect lock contention Resolve contention in an emergency Prevent Locking Problems SQL Issues and Tuning Considerations for Different Applications Identify the Role of the DBA in application tuning Use optimizer modes to enhance SQL statement performance In general. Star transformations F. you must keep statistics current To use cost-based optimization for a statement. collect statistics for the table accessed by the statement and enable cost-based optimization using one of these methods : . Partitioned Tables B. This is specially true for large queries with multiple joins or multiple indexes.always use the cost-based optimization approach. Reverse Indexes D. Parallel Execution E. A. Star Joins The cost-based approach genrally chooses an execution plan that is as good as or better than the plan chosen by the rule-based approach. The rule-based approach is available for the benfit of existing applications. To maintain the effectivnes of cost-based optimizer. but new optimizer functionality uses the cost-based approach. The following features are only available with cost-based optimization.

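Assuming the document's employee table, statistics can be gathered (or refreshed) with ANALYZE:

  -- Exact statistics; use ESTIMATE STATISTICS SAMPLE n PERCENT for very large tables:
  analyze table employee compute statistics;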
To use cost-based optimization for a statement, collect statistics for the tables accessed by the statement and enable cost-based optimization using one of these methods:
I. Make sure the OPTIMIZER_MODE init.ora parameter is set to its default value of CHOOSE.
II. To enable cost-based optimization for your session only, issue ALTER SESSION SET OPTIMIZER_MODE with the ALL_ROWS or FIRST_ROWS option.
III. To enable cost-based optimization for an individual SQL statement, use any hint other than RULE.
You should eventually migrate your existing applications to the cost-based approach, because eventually the rule-based approach will no longer be available in the Oracle Server. Oracle still supports rule-based optimization, but you should design new applications to use cost-based optimization. You should also use cost-based optimization for data warehousing applications, because the cost-based optimizer supports new and enhanced features for DSS.

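A hedged sketch of methods II and III above (table and bind variable from the document's earlier example):

  -- Method II: switch the session to cost-based optimization:
  alter session set optimizer_mode = first_rows;
  -- Method III: any hint other than RULE invokes the cost-based
  -- optimizer for a single statement:
  select /*+ all_rows */ sal from employee where emp_id = :emp_id;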
Manage stored outlines to store execution paths as a series of hints
An outline consists primarily of a set of hints that is equivalent to the optimizer's results for the execution plan generation of a particular SQL statement. When Oracle creates an outline, Plan Stability examines the optimization results using the same data used to generate the execution plan; that is, Oracle uses the input to the execution plan to generate the outline, and not the execution plan itself.
Oracle uses one of two scenarios when compiling SQL statements and matching them with outlines. The first scenario: if you disable outline use by setting the system/session parameter USE_STORED_OUTLINES to FALSE, Oracle does not attempt to match SQL text to outlines. The second scenario involves two matching steps. First, if you specify that Oracle must use a particular outline category, only outlines in that category are candidates for matching. Second, if the SQL text of the incoming statement exactly matches the SQL text in an outline in that category, Oracle considers both texts identical and uses the outline; Oracle considers any difference a mismatch, including spacing changes, carriage return variations, embedded hints, or even differences in comment text. These rules are identical to the rules for cursor matching. This ensures Oracle does not use an execution plan compiled under one category to execute a SQL statement that Oracle should compile under a different category. The only effect outlines have on caching execution plans is that the outline's category name is used in addition to the SQL text to identify whether the plan is in the cache. Oracle retains execution plans in the cache and only recreates them if they become invalid or if the cache is not large enough to hold all of them. Unless you remove them, Oracle retains outlines indefinitely.
How Oracle stores outlines: Oracle stores outline data in the OL$ table and hint data in the OL$HINTS table. Oracle can automatically create outlines for all SQL statements, or you can create them for specific SQL statements; in either case, the outlines derive their input from the rule-based or cost-based optimizers. Oracle creates stored outlines automatically when you set the parameter CREATE_STORED_OUTLINES to TRUE; when activated, Oracle creates outlines for all executed SQL statements. You can also create stored outlines for specific statements using the CREATE OUTLINE statement. You can access information about outlines and the related hint data through the USER_OUTLINES and USER_OUTLINE_HINTS views.
To use stored outlines when Oracle compiles a SQL statement, set the system parameter USE_STORED_OUTLINES to TRUE or to a category name. If you set USE_STORED_OUTLINES to TRUE, Oracle uses outlines in the DEFAULT category; if you specify a category, Oracle uses outlines in that category until you reset USE_STORED_OUTLINES to FALSE. If you specify a category name and Oracle does not find an outline in that category that matches the SQL statement, Oracle searches for an outline in the DEFAULT category.

Identify the demands of online transaction processing (OLTP)
OLTP applications are high-throughput, insert/update-intensive systems, characterized by growing volumes of data that several hundred users access concurrently. Typical OLTP applications are airline reservation systems and banking applications. The key goals of OLTP systems are availability (7x24), speed (throughput), concurrency, and recoverability. In these types of databases you must avoid excessive use of indexes and clusters, because these structures slow down insert and update activity. The following elements are crucial for OLTP systems: rollback segments; indexes, clusters and hashing; discrete transactions; data block size; the shared pool; dynamic allocation of space to tables and rollback segments; transaction processing monitors and the multi-threaded server; well-tuned SQL statements; integrity constraints; client/server architecture; dynamically changeable initialization parameters; procedures, packages and functions.

Identify the demands of decision support systems (DSS)
Decision support or data warehousing applications typically convert large amounts of information into user-defined reports; they perform queries on the large amounts of data gathered from OLTP applications. Decision makers use these applications to determine what strategies the organization should take; an example of a decision support system is a marketing tool that determines the buying patterns of consumers based on information gathered from demographic studies. The key goals of a data warehousing (DSS) system are response time, accuracy, and availability.
The key to performance in a DSS is properly tuned queries and proper use of indexes. One way to improve response time in a DSS is to use parallel execution: this feature enables multiple processes to work simultaneously on a single SQL statement. By spreading the processing over many processes, Oracle can execute complex statements more quickly than if only a single server process handled them. Symmetric multiprocessing (SMP), clustered, or massively parallel systems gain the largest performance benefits from parallel execution, because operations can be effectively spread among many CPUs on a single system.

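For the parallel execution feature just described, a hedged sketch (the sales table and the degree of parallelism are hypothetical):

  -- Ask for four parallel query servers for this full scan:
  select /*+ FULL(s) PARALLEL(s, 4) */ count(*) from sales s;
  -- Or set a default degree of parallelism on the table itself:
  alter table sales parallel (degree 4);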
Reconfigure systems on a temporary basis for particular needs. The following issues are crucial in implementing and tuning DSS: Materialized Views. This is because operations can be effectivley spread among many CPUs on a single system. Symmetric multiprocessing (SMP). give me a break !!! Managing the Mixed Workload Describe the features of Database Resource Manager Limit the use of resources using Database Resource Manager Tuning with Oracle Expert Describe the features of Oracle Expert Multithreaded Server Tunning Issues .clusters and hashing. Parallel Execution. Cluster. Indexes ( B*Tree and Bitmap). PL/SQL functions in SQL statements. By spreading the processing over many processes. or massively parallel systems gain the largest performance benefits from parallel execution. Try and apply the previous two steps for a temporary basis. Using hints in queries. clustered.The key to performance in a DSS is properly tuned queries and proper use of indexes. Oracle can execute complex statements more quickly than if only a single server processed them. Data block size. Star query. hashing. The optimizer. One way to improve the response time in DSS is to use Parallel execution. This feature enables multiple processes to simultaneously process a single SQL statement.

Multithreaded Server Tuning Issues

Identify issues associated with managing users in a multithreaded server environment
Normally the default RDBMS configuration (also known as dedicated server) results in a one-to-one mapping of user processes to server processes. Just as there is an SGA, there is also a User Global Area (UGA), which contains user session information, sort areas, and private SQL areas. With MTS, the UGA is moved up into the shared pool; the remaining process-specific memory is retained in the Process Global Area (PGA) and holds information that cannot be shared. In this way, the total amount of memory required when using MTS is not really more than with a dedicated server, just redistributed. However, in an MTS environment you do have to increase the SHARED_POOL_SIZE. To help size the UGA that is relocated to the SGA, use:
  select sum(value)
  from v$sesstat se, v$statname sn
  where sn.name = 'max session memory'
  and se.statistic# = sn.statistic#;
This yields the maximum amount of UGA session memory used since instance startup. You may wish to sample this result often, and then increment the SHARED_POOL_SIZE by this amount. (From Oracle 8 you can use the statistic 'session uga memory max'.)
Diagnose and resolve performance issues with multithreaded server processes; configure the multithreaded server environment to optimize performance.

. as long as reparsing is kept low. The default setting for CLOSE_CACHE_OPEN_CURSOR is FALSE. CLOSE_CACHED_OPEN_CURSOR The parameter SESSION_CACHED_CURSORS can be set to the expected maximum number of session cursors to be cached in the users memory area. Two init. set this parameter when statements are frequently reused. Make sure these two parameters do not conflict. Optionally.With MTD you have some control over the distribution of server versus user memory.Optionally set this to TRUE if SQL statements are rarely reused. SESSION_CACHED_CURSORS 2.ora parameters that affect user memory are: 1. setting both to TRUE or FALSE seems to be the best approach. This helps offload server memory requirements at the expense of increasing user memory. meaning that cursors are not closed on COMMIT.
