AWR Reports

Introduction
Differences between AWR and Statspack
Workload Repository Reports
AWR Snapshots and Baselines
Quick Summary on AWR Sections
Reading the AWR Report
- Header Section
- Cache Sizes
- Load Profile
- Instance Efficiency
- Shared Pool Statistics
- Top 5 Timed Foreground Events
- Common Wait Events
- Time Model Statistics
- Operating System Statistics
- Foreground Wait Class and Foreground Wait Events
- Background Wait Events
- Service Statistics
- SQL Information Section (Generate Execution Plan for given SQL statement)
- Instance Activity Stats Section
- I/O Stats Section (Tablespace IO Stats, File IO Stats)
- Buffer Pool Statistics Section
- Advisory Statistics Section (Instance Recovery Stats, Buffer Pool Advisory, PGA Reports: PGA Aggr Target Stats, PGA Aggr Target Histogram, PGA Memory Advisory, Shared Pool Advisory, SGA Target Advisory)
- Buffer Waits Statistics
- Enqueue Statistics
- UNDO Statistics Section
- Latch Statistics
- Segment Statistics (Segments by Logical Reads, Physical Reads, Physical Read Requests, UnOptimized Reads, Direct Physical Reads, Physical Writes, Physical Write Requests, Direct Physical Writes, Table Scans, DB Blocks Changes, Row Lock Waits, ITL Waits, Buffer Busy Waits)
Retrieve SQL and Execution Plan from AWR Snapshots
Get Data from ASH
Moving AWR Information
Links with AWR Analyzer

AWR periodically gathers and stores system activity and workload data, which is then analyzed by ADDM. Every layer of Oracle is equipped with instrumentation that gathers information on workload, which is then used to make self-managing decisions. AWR is the place where this data is stored. AWR looks periodically at the system performance (by default every 60 minutes) and stores the information found (by default up to 7 days). AWR runs by default, and Oracle states that it does not add a noticeable level of overhead. A background server process (MMON) takes snapshots of the in-memory database statistics (much like STATSPACK) and stores this information in the repository. MMON also provides Oracle with a server-initiated alert feature, which notifies database administrators of potential problems (out of space, max extents reached, performance thresholds, etc.). The information is stored in the SYSAUX tablespace, and it is the basis for all self-management decisions.

To access Automatic Workload Repository through Oracle Enterprise Manager Database Control: on the Administration page, select the Workload Repository link under Workload. From the Automatic Workload Repository page,

you can manage snapshots or modify AWR settings.
o To manage snapshots, click the link next to Snapshots or Preserved Snapshot Sets. On the Snapshots or Preserved Snapshot Sets pages, you can:
  + View information about snapshots or preserved snapshot sets (baselines).
  + Perform a variety of tasks through the pull-down Actions menu, including creating additional snapshots, creating preserved snapshot sets from an existing range of snapshots, or creating an ADDM task to perform analysis on a range of snapshots or a set of preserved snapshots.
o To modify AWR settings, click the Edit button. On the Edit Settings page, you can set the Snapshot Retention period and the Snapshot Collection interval.

Most informative sections of the report

I find the following sections most useful:
- Summary
- Top 5 timed events
- Top SQL (by elapsed time, by gets, sometimes by reads)

When viewing an AWR report, always check the corresponding ADDM report for actionable recommendations. ADDM is a self-diagnostic engine designed from the experience of Oracle's best tuning experts. It analyzes AWR data automatically after each AWR snapshot and makes specific performance recommendations.

Both the snapshot frequency and retention time can be modified by the user. To see the present settings, you could use:
select snap_interval, retention from dba_hist_wr_control;

SNAP_INTERVAL       RETENTION
------------------- -------------------
+00000 01:00:00.0   +00007 00:00:00.0

or

select dbms_stats.get_stats_history_availability from dual;
select dbms_stats.get_stats_history_retention from dual;

This SQL shows that the snapshots are taken every hour and the collections are retained for 7 days. If you want to extend that retention period, you can execute:
execute dbms_workload_repository.modify_snapshot_settings(
    interval  => 60,       -- in minutes; current value retained if NULL
    retention => 43200);   -- in minutes (= 30 days); current value retained if NULL

In this example the retention period is specified as 30 days (43200 min) and the interval between each snapshot is 60 min.

Differences between AWR and STATSPACK

1) Statspack snapshot purges must be scheduled manually; AWR snapshots are purged automatically by MMON every night. MMON, by default, tries to keep one week's worth of AWR snapshots available. If AWR detects that the SYSAUX tablespace is in danger of running out of space, it will free space in SYSAUX by automatically deleting the oldest set of snapshots. If this occurs, AWR will initiate a server-generated alert to notify administrators of the out-of-space condition. When the Statspack tablespace runs out of space, Statspack simply quits working.

2) The AWR repository holds all of the statistics available in STATSPACK, as well as some additional statistics which are not.

3) STATSPACK does not store the Active Session History (ASH) statistics, which are available in the AWR dba_hist_active_sess_history view.

4) STATSPACK does not store history for the newer metric statistics introduced in Oracle. The key AWR views here are dba_hist_sysmetric_history and dba_hist_sysmetric_summary.

5) The AWR also contains views such as dba_hist_service_stat, dba_hist_service_wait_class and dba_hist_service_name, which store history for cumulative performance statistics tracked for specific services.

6) The latest version of STATSPACK included with Oracle contains a set of specific tables which track the history of statistics reflecting the performance of the Oracle Streams feature. These tables are stats$streams_capture, stats$streams_apply_sum, stats$buffered_subscribers, stats$rule_set, stats$propagation_sender, stats$propagation_receiver and stats$buffered_queues. The AWR does not contain specific tables that reflect Oracle Streams activity; therefore, if a DBA relies heavily on the Oracle Streams feature, it would be useful to monitor its performance using the STATSPACK utility.

7) Statspack snapshots must be run by an external scheduler (dbms_jobs, CRON, etc.). AWR snapshots are scheduled every 60 minutes by default. During snapshot processing, MMON transfers an in-memory version of the statistics to the permanent statistics tables.

8) ADDM captures a much greater depth and breadth of statistics than Statspack does.

Workload Repository Reports

Oracle provides two main scripts to produce workload repository reports. They are similar in format to the Statspack reports and give the option of HTML or plain-text output. The two reports give essentially the same output, but awrrpti.sql allows you to select a single instance. The reports can be generated as follows:
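Since AWR keeps the ASH history that STATSPACK lacks, a quick way to see what the instance was mostly waiting on during a snapshot range is to aggregate the samples in dba_hist_active_sess_history. A minimal sketch (the snap_id range is illustrative; sessions running on CPU have a NULL event):

```sql
-- Top wait events recorded by ASH history for a snapshot range.
SELECT nvl(event, 'ON CPU') AS event,
       count(*)             AS samples
FROM   dba_hist_active_sess_history
WHERE  snap_id BETWEEN 954 AND 956
GROUP  BY nvl(event, 'ON CPU')
ORDER  BY samples DESC;
```

Because the dba_hist view retains roughly one sample every 10 seconds per active session, the sample counts approximate where database time was spent in that window.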

@$ORACLE_HOME/rdbms/admin/awrrpt.sql
@$ORACLE_HOME/rdbms/admin/awrrpti.sql

The scripts prompt you to enter the report format (html or text), the start snapshot id, the end snapshot id and the report name. Like Statspack, the script shows all the AWR snapshots available and asks for two specific ones as interval boundaries.

There are other scripts too; here is the full list:

REPORT NAME                                     SQL Script
Automatic Workload Repository Report            awrrpt.sql
Automatic Database Diagnostics Monitor Report   addmrpt.sql
ASH Report                                      ashrpt.sql
AWR Diff Periods Report                         awrddrpt.sql
AWR Single SQL Statement Report                 awrsqrpt.sql
AWR Global Report                               awrgrpt.sql
AWR Global Diff Report                          awrgdrpt.sql

AWR Snapshots and Baselines

You can create a snapshot manually using:

EXEC dbms_workload_repository.create_snapshot;

You can see what snapshots are currently in the AWR by using the DBA_HIST_SNAPSHOT view, as seen in this example:

SELECT snap_id,
       to_char(begin_interval_time,'dd/MON/yy hh24:mi') Begin_Interval,
       to_char(end_interval_time,'dd/MON/yy hh24:mi') End_Interval
FROM dba_hist_snapshot
ORDER BY 1;

SNAP_ID BEGIN_INTERVAL  END_INTERVAL
------- --------------- ---------------
    954 30/NOV/05 03:01 30/NOV/05 04:00
    955 30/NOV/05 04:00 30/NOV/05 05:00
    956 30/NOV/05 05:00 30/NOV/05 06:00
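If you prefer to skip the interactive prompts entirely, the dbms_workload_repository package also exposes pipelined table functions (awr_report_text and awr_report_html) that return the finished report. A sketch, with the dbid, instance and snapshot values as placeholders:

```sql
-- Generate an AWR text report directly, without the awrrpt.sql prompts.
-- 750434027 / 1 / 954 / 956 are placeholders: take dbid and instance from
-- V$DATABASE and V$INSTANCE, and the snap ids from DBA_HIST_SNAPSHOT.
SELECT output
FROM   TABLE(dbms_workload_repository.awr_report_text(
                 l_dbid     => 750434027,
                 l_inst_num => 1,
                 l_bid      => 954,
                 l_eid      => 956));
```

This is handy for spooling reports from scripts or generating them for a remote instance whose snapshots are in the same repository.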

    957 30/NOV/05 06:00 30/NOV/05 07:00
    958 30/NOV/05 07:00 30/NOV/05 08:00
    959 30/NOV/05 08:00 30/NOV/05 09:00

Each snapshot is assigned a unique snapshot ID that is reflected in the SNAP_ID column. The END_INTERVAL_TIME column displays the time that the actual snapshot was taken.

Sometimes you might want to drop snapshots manually. The dbms_workload_repository.drop_snapshot_range procedure can be used to remove a range of snapshots from the AWR. This procedure takes two parameters, low_snap_id and high_snap_id, as seen in this example:

EXEC dbms_workload_repository.drop_snapshot_range(low_snap_id=>1107, high_snap_id=>1108);

The following workload repository views are available:

* V$ACTIVE_SESSION_HISTORY - Displays the active session history (ASH) sampled every second.
* V$METRIC - Displays metric information.
* V$METRICNAME - Displays the metrics associated with each metric group.
* V$METRIC_HISTORY - Displays historical metrics.
* V$METRICGROUP - Displays all metrics groups.
* DBA_HIST_ACTIVE_SESS_HISTORY - Displays the history contents of the active session history.
* DBA_HIST_BASELINE - Displays baseline information.
* DBA_HIST_DATABASE_INSTANCE - Displays database environment information.
* DBA_HIST_SNAPSHOT - Displays snapshot information.
* DBA_HIST_SQL_PLAN - Displays SQL execution plans.
* DBA_HIST_WR_CONTROL - Displays AWR settings.

Finally, you can use the following query to identify the occupants of the SYSAUX tablespace:

select substr(occupant_name,1,40), space_usage_kbytes from v$sysaux_occupants;

AWR Automated Snapshots

Oracle uses a scheduled job, GATHER_STATS_JOB, to collect AWR statistics. This job is created, and enabled automatically, when you create a new Oracle database. To see this job, use the DBA_SCHEDULER_JOBS view as seen in this example:

SELECT a.job_name, a.enabled, c.window_name, c.schedule_name, c.start_date, c.repeat_interval
FROM dba_scheduler_jobs a, dba_scheduler_wingroup_members b, dba_scheduler_windows c
WHERE job_name='GATHER_STATS_JOB'

AND a.schedule_name=b.window_group_name
AND b.window_name=c.window_name;

You can disable this job using the dbms_scheduler.disable procedure, as seen in this example:

Exec dbms_scheduler.disable('GATHER_STATS_JOB');

And you can enable the job using the dbms_scheduler.enable procedure, as seen in this example:

Exec dbms_scheduler.enable('GATHER_STATS_JOB');

AWR Baselines

It is frequently a good idea to create a baseline in the AWR. A baseline is defined as a range of snapshots that can be used to compare to other pairs of snapshots. The Oracle database server will exempt the snapshots assigned to a specific baseline from the automated purge routine. Thus, the main purpose of a baseline is to preserve typical runtime statistics in the AWR repository, allowing you to run the AWR snapshot reports on the preserved baseline snapshots at any time and compare them to recent snapshots contained in the AWR. This allows you to compare current performance (and configuration) to established baseline performance, which can assist in determining database performance problems.

Creating baselines

You can use the create_baseline procedure contained in the dbms_workload_repository stored PL/SQL package to create a baseline, as seen in this example:

EXEC dbms_workload_repository.create_baseline (start_snap_id=>1109, end_snap_id=>1111, baseline_name=>'EOM Baseline');

Baselines can be seen using the DBA_HIST_BASELINE view, as seen in the following example:

SELECT baseline_id, baseline_name, start_snap_id, end_snap_id
FROM dba_hist_baseline;

BASELINE_ID BASELINE_NAME START_SNAP_ID END_SNAP_ID
----------- ------------- ------------- -----------
          1 EOM Baseline           1109        1111

In this case, the column BASELINE_ID identifies each individual baseline that has been defined. The name assigned to the baseline is listed, as are the beginning and ending snapshot IDs.

Removing baselines

The pair of snapshots associated with a baseline are retained until the baseline is explicitly deleted. You can remove a baseline using the dbms_workload_repository.drop_baseline procedure, as seen in this example that drops the "EOM Baseline" we just created:

EXEC dbms_workload_repository.drop_baseline (baseline_name=>'EOM Baseline', cascade=>FALSE);

Note that the cascade parameter will cause all associated snapshots to be removed if it is set to TRUE; otherwise, the snapshots will be

cleaned up automatically by the AWR automated processes.
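Rather than hard-coding snapshot IDs as in the examples above, the baseline boundaries can be fetched from dba_hist_snapshot first. A sketch in PL/SQL (the baseline name is arbitrary):

```sql
-- Create a baseline covering the two most recent snapshots.
DECLARE
  l_start NUMBER;
  l_end   NUMBER;
BEGIN
  -- The two highest snap_ids delimit the most recent interval.
  SELECT min(snap_id), max(snap_id)
    INTO l_start, l_end
    FROM (SELECT snap_id FROM dba_hist_snapshot ORDER BY snap_id DESC)
   WHERE rownum <= 2;

  dbms_workload_repository.create_baseline(
      start_snap_id => l_start,
      end_snap_id   => l_end,
      baseline_name => 'Last Interval Baseline');
END;
/
```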

Quick Summary on AWR Sections
This section contains detailed guidance for evaluating each section of an AWR report.

Report Summary Section: This gives an overall summary of the instance during the snapshot period, and it contains important aggregate summary information.
- Cache Sizes: This shows the size of each SGA region after AMM has changed them. This information can be compared to the original init.ora parameters at the end of the AWR report.
- Load Profile: This section shows important rates expressed in units of per second and transactions per second.
- Instance Efficiency Percentages: With a target of 100%, these are high-level ratios for activity in the SGA.
- Shared Pool Statistics: This is a good summary of changes to the shared pool during the snapshot period.
- Top 5 Timed Events: This is the most important section in the AWR report. It shows the top wait events and can quickly show the overall database bottleneck.

Wait Events Statistics Section: This section shows a breakdown of the main wait events in the database, including foreground and background database wait events as well as time model, operating system, service, and wait class statistics.
- Time Model Statistics: Time model statistics report how database-processing time is spent. This section contains detailed timing information on particular components participating in database processing.
- Wait Class:
- Wait Events: This section provides more detailed wait event information for foreground user processes, including the Top 5 wait events and many other wait events that occurred during the snapshot interval.
- Background Wait Events: This section is relevant to the background process wait events.
- Operating System Statistics: The stress on the Oracle server is important, and this section shows the main external resources including I/O, CPU, memory, and network usage.
- Service Statistics: The service statistics section gives information about how particular services configured in the database are operating.
- Service Wait Class Stats:

SQL Statistics Section: This section displays top SQL, ordered by important SQL execution metrics.
- SQL Ordered by Elapsed Time: Includes SQL statements that took significant execution time during processing.
- SQL Ordered by CPU Time: Includes SQL statements that consumed significant CPU time during processing.
- SQL Ordered by Gets: These SQL statements performed a high number of logical reads while retrieving data.
- SQL Ordered by Reads: These SQL statements performed a high number of physical disk reads while retrieving data.
- SQL Ordered by Executions:

- SQL Ordered by Parse Calls: These SQL statements experienced a high number of reparsing operations.
- SQL Ordered by Sharable Memory: Includes SQL statement cursors that consumed a large amount of SGA shared pool memory.
- SQL Ordered by Version Count: These SQL statements have a large number of versions in the shared pool for some reason.
- Complete List of SQL Text:

Instance Activity Stats: This section contains statistical information describing how the database operated during the snapshot period.
- Instance Activity Stats - Absolute Values: This section contains statistics that have absolute values, not derived from end and start snapshots.
- Instance Activity Stats - Thread Activity: This section reports log switch activity.

I/O Stats Section: This section shows the all-important I/O activity for the instance, broken down by tablespace and data file, and includes buffer pool statistics.
- Tablespace IO Stats
- File IO Stats

Buffer Pool Statistics Section

Advisory Statistics Section: This section shows details of the advisories for the buffer cache, shared pool, PGA and Java pool.
- Instance Recovery Stats:
- Buffer Pool Advisory:
- PGA Aggr Summary: PGA Aggr Target Stats; PGA Aggr Target Histogram; and PGA Memory Advisory.
- Shared Pool Advisory:
- SGA Target Advisory
- Streams Pool Advisory
- Java Pool Advisory

Wait Statistics Section
- Buffer Wait Statistics: This important section shows buffer cache wait statistics.
- Enqueue Activity: This important section shows how enqueues operate in the database. Enqueues are special internal structures which provide concurrent access to various database resources.

Undo Statistics Section
- Undo Segment Summary: This section gives a summary of how undo segments are used by the database.
- Undo Segment Stats: This section shows detailed history information about undo segment activity.

Latch Statistics Section: This section shows details about latch statistics. Latches are a lightweight serialization mechanism used to single-thread access to internal Oracle structures.
- Latch Activity
- Latch Sleep Breakdown
- Latch Miss Sources
- Parent Latch Statistics
- Child Latch Statistics

Segment Statistics Section: This section provides details about hot segments, using the following criteria:
- Segments by Logical Reads: Includes top segments which experienced a high number of logical reads.
- Segments by Physical Reads: Includes top segments which experienced a high number of disk physical reads.
- Segments by Row Lock Waits: Includes segments that had a large number of row locks on their data.
- Segments by ITL Waits: Includes segments that had large contention for the Interested Transaction List (ITL). ITL contention can be reduced by increasing the INITRANS storage parameter of the table.
- Segments by Buffer Busy Waits: These segments have the largest number of buffer waits caused by their data blocks.

Dictionary Cache Stats Section: This section exposes details about how the data dictionary cache is operating.

Library Cache Section: Includes library cache statistics describing how shared library objects are managed by Oracle.

Memory Statistics Section
- Process Memory Summary
- SGA Memory Summary: This section provides summary information about various SGA regions.
- SGA Breakdown Difference:

Streams Statistics Section
- Streams CPU/IO Usage
- Streams Capture
- Streams Apply
- Buffered Queues
- Buffered Subscribers
- Rule Set
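For segments that show up under "Segments by ITL Waits", the remedy mentioned above (raising INITRANS) looks like this in practice. A sketch with hypothetical owner and object names; note that ALTER TABLE ... INITRANS only affects newly formatted blocks, so a move/rebuild is needed to apply it to existing ones:

```sql
-- Raise INITRANS on a segment reported under "Segments by ITL Waits".
ALTER TABLE app_owner.orders INITRANS 8;           -- new blocks only
ALTER TABLE app_owner.orders MOVE INITRANS 8;      -- rewrites existing blocks
ALTER INDEX app_owner.orders_pk REBUILD INITRANS 8; -- indexes are invalidated by MOVE
```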

Reading the AWR Report

The main sections in an AWR report include:

AWR Report Header: This section shows basic information about the report: which database and instance it covers, when the snapshots were taken, and for how long. For example:

WORKLOAD REPOSITORY report for

DB Name   DB Id       Instance  Inst num  Startup Time     Release     RAC
FGUARD    750434027   FGUARD    1         03-Jul-13 21:07  11.2.0.2.0  NO

Platform          Host Name                CPUs  Cores  Sockets  Memory (GB)
Linux x86 64-bit  atl.frauddb.fiservipo    16    8      2        11.72

             Snap Id   Snap Time            Sessions  Cursors/Session
Begin Snap:  20813     08-Jul-13 00:00:19   267
End Snap:    20854     09-Jul-13 15:54:14   278
Elapsed:     2,393.91 (mins)
DB Time:     3,689.46 (mins)

Elapsed Time: It represents the snapshot window, or the time between the two snapshots.
DB Time: Represents the activity on the database. If DB Time is greater than Elapsed Time, it means that the database has a high workload.

Cache Sizes: The size of each SGA region at the beginning and end of the snapshot:

Cache Sizes        Begin   End
Buffer Cache:      1,520M  1,120M   Std Block Size:  8K
Shared Pool Size:  1,344M  1,296M   Log Buffer:      8,632K

Load Profile: The load profile provides an at-a-glance look at some specific operational statistics. You can compare these statistics with a baseline snapshot report to determine whether database activity has changed. Values for these statistics are presented in two formats: the value per second (for example, how much redo was generated per second) and the value per transaction (for example, how much redo was generated per transaction).

The Load Profile lists, per second and per transaction (and, for some metrics, per exec and per call): DB Time(s), DB CPU(s), Redo size, Logical reads, Block changes, Physical reads, Physical writes, User calls, Parses, Hard parses, W/A MB processed, Logons, Executes, Rollbacks and Transactions.

Where:

DB Time(s): The amount of time Oracle has spent performing database user calls. Note it does not include background processes.
DB CPU(s): The amount of CPU time spent on user calls. As with DB Time, it does not include background processes. The value is in microseconds.
Redo size: The amount of DML happening in the DB. If you see an increase here, then more DML statements are taking place (meaning your users are doing more INSERTs, UPDATEs, and DELETEs than before).
Logical reads: Calculated as Consistent Gets + DB Block Gets = Logical Reads.
Block changes: The number of blocks modified during the sample interval. An increase here also means more DML is taking place.
Physical reads: The number of requests for a block that caused a physical I/O.
Physical writes: The number of physical writes performed.
User calls: Indicates how many user calls have occurred during the snapshot period. This value can give you some indication of whether usage has increased.
Parses: The total of all parses, both hard and soft.
Hard parses: Those parses requiring a completely new parse of the SQL statement; we want a low number here. A hard parse rate of greater than 100 per second indicates a very high amount of hard parsing on the system. High hard parse rates cause serious performance issues and must be investigated. A high hard parse rate is usually accompanied by latch contention on the shared pool and library cache latches: check whether waits for 'latch free' appear in the top-5 wait events, and if so, examine the latching sections of the report. Possible reasons for excessive hard parses may be a small shared pool, or bind variables not being used.
Soft parses: Not listed, but derived by subtracting the hard parses from parses. A soft parse reuses a previous hard parse and hence consumes far fewer resources.
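Each Load Profile line is just the delta of a cumulative statistic between the two snapshots, divided by the elapsed seconds or by the transaction count. The underlying counters live in dba_hist_sysstat, so any single metric can be recomputed by hand. A sketch (the snap ids and statistic name are illustrative):

```sql
-- Delta of 'redo size' between snapshots 954 and 956; divide by elapsed
-- seconds or by the transaction count to reproduce the report's rates.
SELECT e.stat_name,
       e.value - b.value AS delta_value
FROM   dba_hist_sysstat b,
       dba_hist_sysstat e
WHERE  b.snap_id         = 954
AND    e.snap_id         = 956
AND    b.dbid            = e.dbid
AND    b.instance_number = e.instance_number
AND    b.stat_name       = 'redo size'
AND    e.stat_name       = 'redo size';
```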

Sorts: The number of sorts occurring in the database.
Logons: The number of logons during the interval.
Executes: How many statements we are executing per second / per transaction.
Transactions: How many transactions per second we process.

The per-second statistics show you the changes in throughput (i.e., whether the instance is performing more work per second). Similarly, looking at the per-transaction statistics allows you to identify changes in the application characteristics by comparing them to the corresponding statistics from the baseline report. For example:
• A significant increase in 'redo size', 'block changes' and 'pct of blocks changed per read' would indicate the instance is performing more inserts/updates/deletes.
• An increase in the 'redo size' without an increase in the number of 'transactions per second' would indicate a changing transaction profile.

In this example:
• Comparing the number of Physical reads per second to the number of Physical writes per second shows the physical read to physical write ratio is very high. Typical OLTP systems have a read-to-write ratio of 10:1 or 5:1.
• This system is busy, with 84 User calls per second.

Additionally, the load profile section provides the percentage of blocks that were changed per read, the percentage of recursive calls that occurred, the percentage of transactions that were rolled back, and the number of rows sorted per sort operation.

Instance Efficiency Percentages (Target 100%)

These statistics include several buffer-related ratios, including the buffer hit percentage and the library hit percentage. Shared pool memory usage statistics are also included in this section.

Buffer Nowait %: This is the percentage of time that the instance made a call to get a buffer (all buffer types are included here) and that buffer was made available immediately, meaning it didn't have to wait for the buffer (hence "Buffer Nowait"). If the ratio is low, a (hot) block or blocks may be being contended for; check the Buffer Wait Statistics section of the report for more detail on which type of block is involved.

Buffer Hit %: Also known as the buffer-cache hit ratio. It shows the percentage of times a particular block was found in the buffer cache instead of requiring a physical I/O (a read from disk). Ideally more than 95 percent.

The remaining ratios (Library Hit, Soft Parse, Redo NoWait, In-Memory Sort, Execute to Parse, Parse CPU to Parse Elapsd, Latch Hit and % Non-Parse CPU) are discussed one by one below.
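The buffer-cache hit ratio can also be computed directly from the cumulative counters in v$sysstat, using the textbook formula (it suffers the same caveats discussed in this section):

```sql
-- Buffer-cache hit ratio from v$sysstat:
-- 100 * (1 - physical reads / (db block gets + consistent gets))
SELECT round(100 * (1 - phy.value / (cur.value + con.value)), 2) AS buffer_hit_pct
FROM   v$sysstat cur,
       v$sysstat con,
       v$sysstat phy
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';
```

Note this query uses counters cumulative since instance startup; AWR computes the same ratio over the deltas between the two snapshots.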

Library Hit %: This ratio, also known as the library-cache hit ratio, gives the percentage of pin requests that result in pin hits. A pin hit occurs when the SQL or PL/SQL code to be executed is already in the library cache and is valid to execute. Optimally this ratio should be high; as a minimum, aim for the 95% range. If the "Library Hit ratio" is low, it could be indicative of a shared pool that is too small (SQL is prematurely pushed out of the shared pool), or just as likely, that the system did not make correct use of bind variables in the application. A lower ratio could also indicate that some other issue is causing SQL not to be reused, in which case a smaller shared pool may only be a band-aid that masks a library latch problem. If the soft parse ratio is also low, check whether there's a parsing issue, and look at the shared-pool and library-cache latches in the Latch sections of the report for indications of contention on these latches.

Soft Parse %: This ratio gives the percentage of parses that were soft, as opposed to hard. A soft parse occurs when a session attempts to execute a SQL statement and a usable version of the statement is already in the shared pool; all data (such as the optimizer execution plan) pertaining to the statement in the shared pool is equally applicable to the statement currently being issued. A high soft parse rate could be anywhere in the rate of 300 or more per second. Unnecessary soft parses also limit application scalability; optimally, a SQL statement should be soft-parsed once per session and executed many times.

Redo NoWait %: This ratio indicates the amount of redo entries generated for which there was space available in the redo log; the instance didn't have to wait to use the redo log if this is 100%. The redo-log space-request statistic is incremented when an Oracle process attempts to write a redo-log entry but there is not sufficient space remaining in the online redo log. Thus, a value close to 100 percent for the redo nowait ratio indicates minimal time spent waiting for redo logs to become available, either because the logs are not filling up very often or because the database is able to switch to a new log quickly whenever the current log fills up. If your alert log shows that you are switching logs frequently (that is, more than once every 15 minutes), you may be able to reduce the amount of switching by increasing the size of the online redo logs. If the log switches are not frequent, check the disks on which the redo logs reside to see why the switches are not happening quickly. If these disks are not overloaded, they may be slow, which means you could put the files on faster disks.

In-Memory Sort %: This ratio gives the percentage of sorts that were performed in memory, rather than requiring a disk-sort segment to complete the sort. Optimally, in an OLTP environment, this ratio should be high. Disk sorts are a form of thrashing which degrades performance tremendously; setting the PGA_AGGREGATE_TARGET (or SORT_AREA_SIZE) initialization parameter effectively will eliminate this problem.

Execute to Parse %: The more times your SQL statement can reuse an existing plan, the higher your Execute to Parse ratio is. If you run some SQL and it has to be parsed every time you execute it (because no plan exists for this statement), then your percentage would be 0%. If the value is negative, it means that the number of parses is larger than the number of executions. This is very BAD!! One way to increase your parse ratio is to use bind variables. Another cause for a negative execute-to-parse ratio is a shared pool that is too small, so queries are aging out of the shared pool and need to be reparsed; in that case you may also see "latch free" as one of your top wait events, so check the Wait Events list for evidence of this event.

Parse CPU to Parse Elapsd %: Generally, this is a measure of how available your CPU cycles were for SQL parsing. If this is low, non-CPU parse time (for example, time spent waiting for latches) is significant.

A note on the Buffer Hit ratio: Although historically known as one of the most important statistics to evaluate, this ratio can sometimes be misleading. A high buffer hit ratio (say, 99 percent) normally indicates that the cache is adequately sized, but this assumption may not always be valid. For example, frequently executed SQL statements that repeatedly refer to a small number of buffers via indexed lookups can create a misleadingly high buffer hit ratio: iterative access to these buffers artificially inflates the value. This inflation makes tuning the buffer cache a challenge. Conversely, a low buffer hit ratio does not necessarily mean the cache is too small; it may be that potentially valid full-table scans are artificially reducing what is otherwise a good ratio. If the ratio is genuinely low, the BUFFER_CACHE may be too small, with data being aged out before it can be used. Sometimes you can identify a too-small buffer cache by the appearance of the write complete waits event, which indicates that hot blocks (that is, blocks that are still being modified) are aging out of the cache while they are still needed. When these buffers are read, they are placed at the most recently used (MRU) end of the buffer cache.

Latch Hit %: This is the ratio of the total number of latch misses to the number of latch gets for all latches. A low value for this ratio indicates a latching problem, whereas a high value is generally good. However, as the data is rolled up over all latches, a high latch hit ratio can artificially mask a low get rate on a specific latch. Cross-check this value with the Top 5 Wait Events to see if 'latch free' is in the list, and refer to the Latch sections of the report. A Latch Hit % of less than 99 percent is usually a big problem.

% Non-Parse CPU: Another useful standard of comparison is the proportion of parse time that was not CPU-related, given by the following ratio: (parse time CPU) / (parse time elapsed). A low value for this ratio could mean that the non-CPU-related parse time was spent waiting for latches, which might indicate a parsing or latching problem.

Interpreting the ratios in this section can be slightly more complex than it may seem at first glance. While high values for the ratios are generally good (indicating high efficiency), such values can be misleading: your system may be doing something efficiently that it would be better off not doing at all. Similarly, low values aren't always bad. For example, a low in-memory sort ratio (indicating a low percentage of sorts performed in memory) would not necessarily be a cause for concern in a decision-support (DSS) environment, where user response time is less critical than in an online transaction processing (OLTP) environment.

Before you jump to any conclusions about your soft parse ratio, be sure to compare it against the actual hard and soft parse rates shown in the Load Profile. If those rates are low (for example, 1 parse per second), parsing may not be a significant issue in your system. Ideally, the soft parse ratio should be greater than 95 percent. When the soft parse ratio falls much below 80 percent, investigate whether you can share SQL by using bind variables, or force cursor sharing by using the init.ora parameter cursor_sharing.

A hard parse, on the other hand, occurs when the current SQL statement is either not in the shared pool or not there in a shareable form. An example of the latter case would be when the SQL statement in the shared pool is textually identical to the current statement, but the tables referred to in the two statements resolve to physically different tables. Hard parsing is an expensive operation and should be kept to a minimum in an OLTP environment. The aim is to parse once, execute many times.

Also check the "Shared Pool Statistics": if the "End" value is in the high 95%-100% range, this is an indication that the shared pool needs to be increased (especially if the "Begin" value is much smaller).

Example: Evaluating the Instance Efficiency Percentages Section

In one sample report, the Soft Parse % was 100 (marked "A"), alongside a Buffer Hit % of about 95 and an Execute to Parse % of about 88.

Shared Pool Statistics          Begin     End
                                ------   ------
             Memory Usage %:     28.80    29.04 (B)
    % SQL with executions>1:     75.03    76.xx
  % Memory for SQL w/exec>1:     84.96    83.xx

Observations:
• The 100% soft parse ratio (A) indicates the system is not hard-parsing. However, the system is soft parsing a lot, rather than only re-binding and re-executing the same cursors, as the Execute to Parse % is very low (A). Also, the CPU time used for parsing (A) is only 58% of the total elapsed parse time (see Parse CPU to Parse Elapsd). This may also imply some resource contention during parsing (possibly related to the latch free event?).
• There seems to be a lot of unused memory in the shared pool (only 29% is used) (B). If there is insufficient memory allocated to other areas of the database (or OS), this memory could be redeployed.

Another Sample Analysis

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   98.56       Redo NoWait %:   99.99
            Buffer  Hit   %:   99.91    In-memory Sort %:   99.84
            Library Hit   %:   99.96        Soft Parse %:   88.75
         Execute to Parse %:   52.11         Latch Hit %:   99.37

The following ratios should be above 90% in a database: Buffer Nowait, Buffer Hit, Library Hit, Redo NoWait, In-memory Sort, Soft Parse, Latch Hit and Non-Parse CPU.

Basically, when you're evaluating the Instance Efficiency Percentages, you need to keep in mind the characteristics of your application: whether it is query-intensive or update-intensive, whether it involves lots of sorting, and so on. In a DSS environment, for example, response time is less critical than in an online transaction processing (OLTP) environment.

The execute to parse ratio should be very high in an ideal database. It is basically a measure of the number of times a SQL statement is executed versus the number of times it is parsed. The ratio will move higher as the number of executes goes up while the number of parses either goes down or remains the same; it will be close to zero if the number of executes and parses are almost equal, and negative if executes are lower but parses are higher.

Shared Pool Statistics:

Memory Usage % = the shared pool usage. If Memory Usage % is too large, like 90%, it could mean that your shared pool is too small; if the percentage is around 50, it could mean that your shared pool is too large.

% SQL with executions>1 = shows the % of SQL statements executed more than once. This should be very near 100; if we get a low number here, the DB is not using shared SQL statements, maybe because bind variables are not being used.

% Memory for SQL w/exec>1 = from the memory space allocated to cursors, shows which % has been used by cursors executed more than once.

Example:

Shared Pool Statistics          Begin     End
                                ------   ------
             Memory Usage %:     42.61    43.53
    % SQL with executions>1:     94.44    94.98
  % Memory for SQL w/exec>1:     75.79    76.07

Another example, where we have used 73.86 percent of our shared pool, and out of that almost 94 percent is being re-used:

Shared Pool Statistics          Begin     End
                                ------   ------
             Memory Usage %:     73.86    75.42
    % SQL with executions>1:     93.33    93.64
  % Memory for SQL w/exec>1:     92.86    94.33

***Please see the following NOTES on shared pool issues:
Understanding and Tuning the Shared Pool
Diagnosing and Resolving Error ORA-04031 [NOTE:146599.1]
SCRIPT TO SUGGEST MINIMUM SHARED POOL SIZE [NOTE:105813.1]

Top 5 Timed Foreground Events

This section provides insight into what events the Oracle database is spending most of its time on (see wait events). Each wait event is listed, along with the number of waits, the time waited (in seconds), the average wait per event (in milliseconds), and the associated wait class.

This is one of the most important sections of the report: it shows which events are the biggest contributors to total wait time. When you are trying to eliminate bottlenecks on your system, your report's Top 5 Timed Events section is the first place to look, and you should use the HIGHEST WAIT TIMES to guide the investigation. Wait analysis should be done with respect to Time(s): there could be millions of waits, but if they amount to only a second or so, who cares? Time is the important component. (If you turn off the timed statistics parameter, the Time(s) column will not appear.)

Event                     Waits     Time (s)  Avg wait (ms)  % DB time  Wait Class
------------------------ --------- --------- -------------- ---------- ----------
direct path write temp      12,442       247             16      25.58  User I/O
DB CPU                                   221
log file sync                6,933         7              1       2.46  Commit
db file sequential read     70,657         6              0       2.07  User I/O
direct path read           193,833         4              0       1.83  User I/O

It's critical to look into this section. As you will see, there are several different types of waits, so let's discuss the most common ones in the next section.

Common WAIT EVENTS

If you want a quick instance-wide wait event status, you can use the following query:

select event, total_waits, time_waited
from V$system_event
where event NOT IN ('pmon timer', 'smon timer', 'rdbms ipc reply', 'parallel deque wait',
                    'virtual circuit', '%SQL*Net%', 'client message', 'NULL event')
order by time_waited desc;

EVENT                     TOTAL_WAITS   TIME_WAITED
------------------------  -----------   -----------
db file sequential read      35051309      15965640
latch free                    1373973       1913357
db file scattered read        2958367       1840810
enqueue                          2837        370871
buffer busy waits              444743        252664
log file parallel write        146221        123435

1. DB File Scattered Read. This generally happens during a full scan of a table or a Fast Full Index Scan. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that your table may have missing indexes, that statistics are not updated, or that your indexes are not used. Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are necessary when you see these waits. Try to cache small tables to avoid reading them in over and over again, since a full table scan is put at the cold end of the LRU (Least Recently Used) list. You can use the report to help identify the query in question and fix it.

The appearance of the 'db file scattered read' and 'db file sequential read' events may not necessarily indicate a problem, as IO is a normal activity on a healthy instance. However, they can indicate problems if any of the following circumstances are true:
• The data-access method is bad (that is, the SQL statements are poorly tuned), resulting in unnecessary or inefficient IO operations
• The IO system is overloaded and performing poorly
• The IO system is under-configured for the load
• IO operations are taking too long

If you have a high wait time for this event, you either need to reduce the cost of I/O (e.g. by getting faster disks or by distributing your I/O load better), or you need to reduce the amount of full table scanning by tuning the SQL statements. The init.ora parameter db_file_multiblock_read_count specifies the maximum number of blocks read in one such I/O. Typically, this parameter should have values of 4-16, independent of the size of the database, with higher values needed for smaller Oracle block sizes.

If this Wait Event is a significant portion of Wait Time then a number of approaches are possible:

o Find which SQL statements perform Full Table or Fast Full Index scans and tune them to make sure these scans are necessary and not the result of a suboptimal plan. The V$SQL_PLAN view can help:

For Full Table scans:

select sql_text
from v$sqltext t, v$sql_plan p
where t.hash_value = p.hash_value
  and p.operation = 'TABLE ACCESS'
  and p.options = 'FULL'
order by p.hash_value, t.piece;

For Fast Full Index scans:

select sql_text
from v$sqltext t, v$sql_plan p
where t.hash_value = p.hash_value
  and p.operation = 'INDEX'
  and p.options = 'FULL SCAN'
order by p.hash_value, t.piece;

o In cases where such multiblock scans occur from optimal execution plans, it is possible to tune the size of the multiblock I/Os issued by Oracle by setting the instance parameter DB_FILE_MULTIBLOCK_READ_COUNT so that:

DB_BLOCK_SIZE x DB_FILE_MULTIBLOCK_READ_COUNT = max_io_size of system

Query tuning should be used to optimize online SQL to use indexes, and join orders should be checked for multiple table joins.

2. DB File Sequential Read. The sequential read event identifies Oracle reading blocks one after the other; it is typically an index read followed by a table read, because the index lookup tells exactly which block to go to. It is normal for this number to be large for a high-transaction, well-tuned system, but it can indicate problems in some circumstances. It could mean that a lot of index reads/scans are going on, which could indicate poor joining order of tables or un-selective indexes in your SQL, or waiting for writes to TEMP space (direct loads, Parallel DML (PDML) such as parallel updates). It is related to memory starvation and to non-selective index use. Check to ensure that index scans are necessary. It could also be that the indexes are fragmented: rebuilding an index will compact it and result in visiting fewer blocks. The DB_CACHE_SIZE will also be a determining factor in how often these waits show up: the buffer cache could be full of multiple versions of the same buffer, causing great inefficiency. In that case, you need to reduce the data density on the table driving this consistent read or increase the DB_CACHE_SIZE. Depending on the problem it may also help to tune PGA_AGGREGATE_TARGET: problematic hash-area joins should show up in the PGA memory, but they're also memory hogs that can cause high wait numbers for sequential reads (they can also show up as direct path read/write waits).

To investigate whether this is an I/O problem, first examine the SQL Ordered by Physical Reads section of the report, to see if it might be helpful to tune the statements with the highest resource usage. Then look at the report's I/O Statistics, and at the average time per read in the Tablespace and File I/O sections, to determine whether there is a potential I/O bottleneck, and examine the OS I/O statistics for corresponding symptoms. If many I/O-related events appear high in the Wait Events list, reexamine the host hardware for disk bottlenecks and check the host-hardware statistics for indications that a disk reconfiguration may be of benefit. Block reads are fairly inevitable, so the aim should be to minimize unnecessary I/O: I/O for sequential reads can be reduced by tuning SQL calls that result in full table scans and by using the partitioning option for large tables.

3. Free Buffer Waits. When a session needs a free buffer and cannot find one, it posts the database writer process asking it to flush dirty blocks (there is no place to put a new block). This normally indicates that there is a substantial amount of DML (insert/update/delete) being done and that the Database Writer (DBWR) is not writing quickly enough. Waits in this category may indicate that you need to increase the DB_BUFFER_CACHE, if all your SQL is tuned. Free buffer waits could also indicate that unselective SQL is causing data to flood the buffer cache with index blocks, leaving none for this particular statement that is waiting for the system to process. You should correlate this wait statistic with other known issues within the report, such as inefficient SQL. To address it, tune the code to generate fewer dirty blocks, consider accelerating incremental checkpointing or shortening the checkpoint interval, use more DBWR processes, or increase the number of physical disks and get faster I/O.
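The multiblock-read sizing rule quoted above is a one-line calculation. A sketch, assuming for illustration a 1 MB maximum I/O size and 8 KB blocks (check your platform's actual max_io_size):

```python
# Sketch of the sizing rule from the text:
# DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT = max_io_size of the system.
max_io_size = 1_048_576    # assumed platform max I/O size: 1 MB (illustrative)
db_block_size = 8_192      # 8 KB Oracle blocks

db_file_multiblock_read_count = max_io_size // db_block_size
print(db_file_multiblock_read_count)  # -> 128
```

With a smaller block size (say 4 KB) the same I/O size allows a proportionally larger read count, which is why the text says smaller block sizes need higher values.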

4. Buffer Busy Waits. A buffer busy wait happens when multiple processes concurrently want to modify the same block in the buffer cache. When a DML (insert/update/delete) occurs, Oracle writes information into the block, including all users who are "interested" in the state of the block (the Interested Transaction List, ITL); the same happens when a select for update is executed. This typically happens during massive parallel inserts if your tables do not have free lists, and it can happen if you have too few rollback segments. Buffer busy waits should not be greater than 1 percent. Check the Buffer Wait Statistics section (or V$WAITSTAT) to find out what the wait is on:

• Buffer Busy Wait on Segment Header: add freelists (if inserts) or freelist groups (especially with RAC), or use ASSM and Locally Managed Tablespaces (LMTs).
• Buffer Busy Wait on Undo Header: add rollback segments or increase the size of the undo segment area (automatic undo).
• Buffer Busy Wait on Undo Block: commit more often (but not too much), and use larger rollback segments or a larger undo area.
• Buffer Busy Wait on Data Block: separate 'hot' data; fix queries to reduce the block's popularity; use smaller blocks (fewer records then fall within a single block, so it's not as "hot"); increase initrans and/or maxtrans (this one's debatable); reduce records per block; potentially use reverse key indexes and partition busy tables. You can also increase the pctfree on the table where the block exists: this writes the ITL information up to the number specified by maxtrans (when there are not enough slots built with the initrans that is specified), which creates the space in the block to allow multiple ITL slots.
• Buffer Busy Wait on Index Block: rebuild the index, partition the index, or use a reverse key index.

5. Latch Free. Latches are low-level queuing mechanisms (more accurately referred to as mutual exclusion mechanisms) used to protect shared memory structures in the system global area (SGA). Latches are like locks on memory that are very quickly obtained and released; they are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch free miss is recorded. Note that Oracle's latching mechanism is not FIFO, unlike an enqueue, whose queuing mechanism is FIFO (first in, first out). Most latch problems are related to the failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache contention issues (cache buffers LRU chain), and hot blocks in the buffer cache (cache buffers chains). There are also latch waits related to bugs; check MetaLink for bug reports if you suspect this is the case. When latch miss ratios are greater than 0.5 percent, you should investigate the issue. If latch free waits are in the Top 5 Wait Events or high in the complete Wait Events list, look at the latch-specific sections of the report to see which latches are contended for. If the problem is a hot block, you can move data to another block to avoid it.

6. Enqueue. An enqueue is a lock that protects a shared resource, such as data in a record, to prevent two people from updating the same data at the same time. An enqueue includes a queuing mechanism, which is FIFO. Enqueue waits usually point to the ST enqueue, the HW enqueue, the TX4 enqueue, and the TM enqueue.

The ST enqueue is used for space management and allocation for dictionary-managed tablespaces: use Locally Managed Tablespaces (LMTs), or try to preallocate extents, or at least make the next extent larger for problematic dictionary-managed tablespaces. HW enqueues are used with the high-water mark of a segment; manually allocating the extents can circumvent this wait.

Enqueue - ST: use LMTs, or pre-allocate large extents.
Enqueue - HW: pre-allocate extents above the high-water mark.
Enqueue - TX (transaction): increase initrans and/or maxtrans (TX4) on the table or index; fix locking issues if TX6; check application (transaction management) locking of tables.
Enqueue - TM: index foreign keys; check the application's transaction management.

TX4s are the most common enqueue waits, and they are usually the result of one of three issues. The first issue is duplicates in a unique index: you need to commit or roll back to free the enqueue. The second is multiple updates to the same bitmap index fragment: since a single bitmap fragment may contain multiple rowids, you need to issue a commit or rollback to free the enqueue when multiple users are trying to update the same fragment. The third, and most likely, issue is when multiple users are updating the same block: if there are no free ITL slots, a block-level lock can occur. You can easily avoid this scenario by increasing the initrans and/or maxtrans to allow multiple ITL slots, and/or by increasing the pctfree on the table. Finally, TM enqueues occur during DML to prevent DDL on the affected object; if you have foreign keys, be sure to index them to avoid this general locking issue.

7. Log File Sync. A log file sync happens each time a commit (or rollback) takes place: when a user commits or rolls back, the LGWR flushes the session's redo from the log buffer to the redo logs, and the log file sync process must wait for this to successfully complete. This event can therefore indicate excessive commits. If there are a lot of waits in this area, examine your application to see if you are committing too frequently (or at least more than you need to): try to commit more records at a time (a batch of 50 instead of one at a time, for example) and use BULK operations. To reduce these wait events, put the redo logs on faster disks, or alternate redo logs on different physical disks (with no other DB files, etc.) to reduce the archiving effect on LGWR. Don't use RAID 5, since it is very slow for applications that write a lot; potentially consider using file system direct I/O or raw devices, which are very fast at writing information. You might even consider using solid-state disks, for their high speed. Large wait times for this event can also be caused by having too few CPU resources available for the redo log writer process.

8. Log Buffer Space. The session is waiting for space in the log buffer: you are writing the log buffer faster than LGWR can write it to the redo logs. (Space becomes available only after LGWR has written the current contents of the log buffer to disk.) This typically happens when applications generate redo faster than LGWR can write it. Look at increasing the log buffer size, or put the log files on faster disks. The associated event, 'log buffer parallel write', is used by the redo log writer process, and it will indicate whether your actual problem is with the log file I/O.

9. Log File Switch. All commit requests are waiting for "logfile switch (archiving needed)" or "logfile switch (chkpt. incomplete)", either because the logs fill very quickly or because log switches are too slow:
- log file switch (checkpoint incomplete): may indicate excessive db files or a slow I/O subsystem. DBWR may be too slow because of I/O; you may need to add more or larger redo logs, and may potentially need to add database writers if the DBWR is the problem.
- log file switch (archiving needed): indicates archive files are written too slowly; ensure that the archive disk is not full or slow.
- log file switch completion: may need more log files per group, or an increase in the size of the redo log files.
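The commit-batching advice above is just arithmetic on the number of log file sync waits: every commit forces one synchronous log flush, so batching 50 rows per commit cuts the number of flushes by a factor of 50. A sketch (row counts are illustrative):

```python
import math

# Each COMMIT triggers one 'log file sync' wait (a synchronous LGWR flush).
# Committing in batches divides the number of flushes by the batch size.
rows = 10_000
batch_size = 50            # the "batch of 50" suggestion from the text

syncs_row_by_row = rows                        # one commit per row
syncs_batched = math.ceil(rows / batch_size)   # one commit per batch

print(syncs_row_by_row, syncs_batched)  # -> 10000 200
```

The trade-off, as the text notes, is that you should not batch so much that undo usage or data-loss exposure on failure becomes a problem.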

10. global cache cr request (OPS/RAC): This wait event shows the amount of time that an instance has waited for a requested data block for a consistent read, where the transferred block has not yet arrived at the requesting instance. In a perfectly tuned, non-OPS/RAC database, I/O wait events would be the top wait events; but since we are avoiding I/Os with RAC and OPS, the 'global cache cr request' wait event often takes the place of I/O wait events. In some cases 'global cache cr request' may be perfectly normal, if large buffer caches are used and the same data is being accessed concurrently on multiple instances. See Note 157766.1 'Sessions Wait Forever for 'global cache cr request' Wait Event in OPS or RAC'.

11. CPU time: This is not really a wait event (hence, the new name), but rather the amount of CPU time used during the snapshot window. In a heavily loaded system, if the CPU time event is the biggest event, that could point to some CPU-intensive processing (for example, forcing the use of an index when a full scan should have been used). Block accesses are also known as Buffer Gets and Logical I/Os. When CPU Other is a significant component of total Response Time, the next step is to find the SQL statements that access the most blocks; the report lists such SQL statements in the section SQL ordered by Gets.

12. library cache pin: The session wants to pin an object in memory in the library cache for examination, ensuring no other processes can update the object at the same time. This happens when you are compiling or parsing a PL/SQL object or a view. Library cache latch contention may be caused by not using bind variables, and is due to excessive parsing of SQL statements.

13. DB File Parallel Read: This wait event is used when Oracle performs parallel reads from multiple datafiles into non-contiguous buffers in memory (PGA or buffer cache). This is done during recovery operations, or when buffer prefetching is being used as an optimization (i.e. instead of performing multiple single-block reads), or during regular activity when a session batches many single block I/O requests together and issues them in parallel. If you are doing a lot of partition activity then expect to see this wait event; it could be a table or index partition. If this wait is an important component of Wait Time, follow the same guidelines as for 'db file sequential read'.

14. Idle Event: There are several idle wait events listed after the output that you can ignore. Idle events are generally listed at the bottom of each section and include such things as SQL*Net message to/from client and other background-related timings. Idle events are listed in the stats$idle_event table.

15. SQL*Net message to client: Indicates that the server (foreground process) is sending a message to the client. This event can be used to identify throughput issues over a network, especially for distributed databases with slow database links.

16. SQL*Net more data to client: The instance is sending a lot of data to the client: the event happens when Oracle writes multiple data buffers (sized per SDU) in a single logical network call. You can decrease this time by having the client bring back less data; maybe the application doesn't need to bring back as much data as it is.

17. Log File Parallel Write: The wait occurs in the log writer (LGWR) as part of the normal activity of copying records from the REDO log buffer to the current online log; it occurs while waiting for writes of REDO records to the REDO log files to complete. Even though the writes may be issued in parallel, LGWR needs to wait for the last I/O to be on disk before the parallel write is considered complete, so the wait time depends on the time it takes the OS to complete all requests (the actual wait time is the time taken for all the outstanding I/O requests to complete). Log file parallel write waits can be reduced by moving the log files to faster disks and/or separate disks where there will be less contention.

18. direct path reads / direct path writes: Could happen if you are doing a lot of parallel query activity, sorting activity, or hash joins. The session has issued asynchronous I/O requests that bypass the buffer cache and is waiting for them to complete. These wait events often involve temporary segments.

19. direct path writes: You won't see these unless you are doing some appends or data loads.

20. PX qref latch: Can often mean that the Producers are producing data more quickly than the Consumers can consume it. Maybe we could increase parallel_execution_message_size to try to eliminate some of these waits, or we might decrease the degree of parallelism; if the system workload is high, consider decreasing the degree of parallelism. If you have DEFAULT parallelism on your object, you can decrease the value of PARALLEL_THREADS_PER_CPU. Have in mind DEFAULT degree = PARALLEL_THREADS_PER_CPU * # CPU's.

21. enq: TX - row lock contention: Oracle keeps data consistent with the help of its locking mechanism. When a particular row is being modified by a process, either through an Update/Delete or Insert operation, Oracle tries to acquire a lock on that row. Only when the process has acquired the lock can it modify the row; otherwise the process waits for the lock. The lock is released whenever a COMMIT is issued by the process which has acquired it. Once the lock is released, processes waiting on this event can acquire the lock on the row and perform their DML operation.
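The default-degree rule quoted above is easy to sanity-check. A sketch, with an assumed (commonly seen) PARALLEL_THREADS_PER_CPU of 2 and the 16-CPU box that appears later in this report:

```python
# DEFAULT degree = PARALLEL_THREADS_PER_CPU * number of CPUs (rule from the text).
parallel_threads_per_cpu = 2   # assumed parameter value; check your instance
cpu_count = 16                 # e.g. the 16-CPU host shown in the OS statistics

default_degree = parallel_threads_per_cpu * cpu_count
print(default_degree)  # -> 32
```

Lowering PARALLEL_THREADS_PER_CPU therefore lowers the default degree for every object using DEFAULT parallelism at once, which is why the text suggests it as a global lever.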

22. direct path read temp / direct path write temp: This wait event shows TEMP file activity (sorts, hashes, bitmap operations), usually sorting to TEMP; it could also be insert append, parallel query or hash joins. Check the PGA parameter, or the sort area and hash area parameters: you might want to increase them. Adjust PGA_AGGREGATE_TARGET to fix it.

23. wait for a undo record: Usually seen only during recovery of large transactions; look at turning off parallel undo recovery.

24. undo segment extension: The session is waiting for an undo segment to be extended or shrunk. If excessive, tune undo.

25. Control File Parallel Write: The session has issued multiple I/O requests in parallel to write blocks to all control files, and is waiting for all of the writes to complete.

26. Control File Sequential Read: The session is waiting for blocks to be read from a control file.

27. log file sequential read: The session is waiting for blocks to be read from the online redo log into memory. This primarily occurs at instance startup and when the ARCH process archives filled online redo logs.

28. DB File Parallel Write: The process, typically DBWR, has issued multiple I/O requests in parallel to write dirty blocks from the buffer cache to disk, and is waiting for all requests to complete.

29. write complete waits: The session is waiting for a requested buffer to be written to disk; the buffer cannot be used while it is being written.

30. Library Cache load lock: The session is waiting for the opportunity to load an object, or a piece of an object, into the library cache. (Only one process can load an object or a piece of an object at a time.)

Time Model Statistics

Oracle Database time model related statistics are presented next. The time model allows you to see a summary of where the database is spending its time: the report presents the various time related statistics (such as DB CPU) and how much total time was spent in the mode of operation represented by each statistic, ordered by % of DB time descending. You should not expect the % of DB Time to add up to 100%, because there is overlap among statistics. Statistics including the word "background" measure background process time, and so do not contribute to the DB time statistic.

If parsing time is very high, or if hard parsing is significant, you must investigate further to resolve the problem. If SQL execute elapsed time is much greater than DB CPU time, you probably have I/O issues. Generally you want SQL processing time high, and parsing and other overhead low.

Here is an example of the time model statistics report:

Total time in database user-calls (DB Time): 20586.1s

Statistic Name                                   Time (s)   % of DB Time
sql execute elapsed time                        19,640.77          95.41
DB CPU                                           6,767.04          32.87
parse time elapsed                                 859.19           4.17
hard parse elapsed time                             73.70           0.36
PL/SQL execution elapsed time                       68.04           0.33
hard parse (sharing criteria) elapsed time           9.02           0.04
connection management call elapsed time              6.14           0.03
repeated bind elapsed time                           4.43           0.02
PL/SQL compilation elapsed time                      3.98           0.02
hard parse (bind mismatch) elapsed time              3.20           0.02
sequence load elapsed time                           1.03           0.01
failed parse elapsed time                            0.62           0.00
DB time                                         20,586.04
background elapsed time                            935.35
background cpu time                                 17.22

Another example:

Statistic Name                                   Time (s)   % of DB Time
sql execute elapsed time                        12,416.64          86.45
DB CPU                                           9,223.70          64.22
parse time elapsed                                 935.61           6.51
hard parse elapsed time                            884.73           6.16
DB time                                         14,363.00
background elapsed time                            821.02
background cpu time                                226.01

In the above example, the lion's share of database time (86.45%) was spent on executing SQL, which is a good sign. In total there were 14,363 seconds of database time; 9,223.70 seconds of CPU time was used for all user sessions, just under 65% of database resources. The total parse time was 935.61 seconds, of which 884.73 seconds was hard parsing. The total wait event time can be calculated as 14363 - 9223.70 = 5139.3 seconds. The rest of the statistics are tiny in this case.

Operating System Statistics

This part of the report provides some basic insight into OS performance, and into the OS configuration too. The report may vary depending on the OS platform that your database is running on. Here is an example from a Linux system:

*TIME statistic values are diffed. All others display actual values. End Value is displayed if different.
Ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name.

Statistic                          Value       End Value
BUSY_TIME                      1,831,901
IDLE_TIME                     14,961,003
IOWAIT_TIME                      212,850
NICE_TIME                              8
SYS_TIME                         475,021
USER_TIME                      1,356,880
LOAD                                   1               1
RSRC_MGR_CPU_WAIT_TIME                 0
VM_IN_BYTES                       26,948
VM_OUT_BYTES                       1,024
PHYSICAL_MEMORY_BYTES      1,432,336,768
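The derivation in the analysis above (total wait time = DB time minus DB CPU) is just subtraction over the time-model rows, using the numbers from the example:

```python
# Reproduce the arithmetic from the analysis above:
# total wait event time = DB time - DB CPU time.
db_time_s = 14_363.00   # total database time from the example
db_cpu_s = 9_223.70     # CPU time used by all user sessions

wait_time_s = db_time_s - db_cpu_s
cpu_share = 100 * db_cpu_s / db_time_s

print(f"wait time: {wait_time_s:.1f}s")   # -> 5139.3s
print(f"CPU share: {cpu_share:.0f}%")     # just under 65%, as the text says
```

The same subtraction works for any snapshot pair, since both statistics are measured over the same interval.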

In this example output, we have 16 CPUs (8 cores in 2 sockets) on the box. The Operating System Statistics - Detail section then shows, for each snapshot time in the interval, the load average together with %busy, %user, %sys, %idle and %iowait, so you can see how CPU utilization evolved over the reporting period.

Foreground Wait Class and Foreground Wait Events

Closely associated with the time model section of the report are the Foreground Wait Class and Foreground Wait Event statistics sections. Within Oracle, the duration of a large number of operations (e.g. writing to disk or to the control file) is metered. These are known as wait events, because each of these operations requires the system to wait for the event to complete. Thus, the execution of some database operation (e.g. a SQL query) will have a number of wait events associated with it. Wait classes define "buckets" that allow for summation of various wait times; each wait event is assigned to one of these buckets (for example System I/O or User I/O). These buckets allow one to quickly determine which subsystem is the likely suspect in a performance problem (e.g. the disk subsystem, the network, or the cluster). We can try to determine which wait events are causing us problems by looking at the wait class and wait event reports generated from AWR.
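The wait-class buckets can also be inspected directly, outside a report, through V$SYSTEM_WAIT_CLASS. A sketch; TIME_WAITED is in centiseconds and cumulative since startup:

```sql
-- Sketch: which wait class has accumulated the most wait time.
SELECT wait_class,
       total_waits,
       ROUND(time_waited / 100, 1) AS time_waited_s
FROM   v$system_wait_class
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;
```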

Here is an example of the wait class report section. Time units in the report are abbreviated: s = second, cs = centisecond (100th of a second), ms = millisecond (1000th of a second) and us = microsecond (1000000th of a second). The report is ordered by wait time descending; a %Timeouts value of 0 indicates the value was below 0.5%, and a null value is truly 0. A header such as "Captured Time accounts for 107.5% of Total DB time" again reflects the overlap between buckets, alongside the Total FG Wait Time and DB CPU time for the interval.

For each wait class (DB CPU, User I/O, Commit, Other, Concurrency, Network, Application, System I/O, Configuration, Cluster) the report shows the number of waits, %Timeouts, total wait time (s), average wait (ms), waits per transaction and % DB time. For example:

Foreground Wait Class                DB/Inst: A109/a1092  Snaps: 2009-2010
                                          %Time  Total Wait   Avg wait   Waits
Wait Class                    Waits       -outs    Time (s)       (ms)    /txn
-------------------- -------------- ---------- ----------- ---------- -------
System I/O                    8,142         .0          25          3    10.0
...

In this report the System I/O wait class has the largest number of waits (a total of 25 seconds) with an average wait of 3 milliseconds. The wait event report then provides some insight into the detailed wait events behind these classes.

Foreground Wait Events

Wait events are normal occurrences, but if a particular subsystem is having a problem performing (e.g. the disk subsystem), this fact will appear in the form of one or more wait events with an excessive duration. Here is an example of the wait event report (we have eliminated some of the bulk of this report, because it can get quite long). The section is sorted by wait time descending with idle events listed last; only events with at least 0.001 s of total wait time are shown, and as before a %Timeouts of 0 means below 0.5% while null is truly 0. For each event the report shows the waits, %Timeouts, total wait time (s), average wait (ms), waits per transaction and % DB time. Typical top events in this example include direct path write temp, direct path read, log file sync, db file sequential read, direct path read temp, db file scattered read, latch: shared pool, library cache: mutex X, cursor: pin S wait on X, latch free and os thread startup.
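Outside of an AWR report, a similar ranking can be approximated from V$SYSTEM_EVENT. A sketch; values are cumulative since instance startup, with the idle class excluded:

```sql
-- Sketch: top non-idle wait events by accumulated wait time.
SELECT *
FROM  (SELECT event,
              wait_class,
              total_waits,
              ROUND(time_waited_micro / 1e6, 1) AS time_waited_s,
              ROUND(time_waited_micro / 1000 /
                    NULLIF(total_waits, 0), 1)  AS avg_wait_ms
       FROM   v$system_event
       WHERE  wait_class <> 'Idle'
       ORDER  BY time_waited_micro DESC)
WHERE  ROWNUM <= 15;
```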

The remainder of the event list continues with smaller contributors (enq: RO - fast object reuse, Disk file operations I/O, enq: KO - fast object checkpoint, PX qref latch, latch: parallel query alloc buffer, latch: cache buffers chains, SQL*Net more data to client, enq: TX - index contention, library cache load lock, buffer busy waits, row cache lock, control file sequential read, and so on), ending with the idle events such as SQL*Net message from client, jobq slave wait and the various PX Deq waits.

Example Foreground Wait Events
                                          %Time  Total Wait   Avg wait   Waits
Event                         Waits       -outs    Time (s)       (ms)    /txn
---------------------------- ---------- -------- ----------- ---------- ------
control file parallel write       1,220       .0          18         15    1.6
...

In this example our control file parallel write waits (which occur during writes to the control file) are taking up 18 seconds in total, with an average wait of 15 milliseconds per wait. Additionally we can see 1.6 waits per transaction (or 15ms * 1.6 per transaction = 24ms per transaction). This event is usually not a big contributor.

The idle wait events appear at the bottom of both sections and can generally safely be ignored. Typically these types of events record the time while the client is connected to the database but no requests are being made to the server.

Background Wait Events

Background wait events are those not associated with a client process. They indicate waits encountered by system and non-system processes. Examples of background system processes are LGWR and DBWR; an example of a non-system background process would be a parallel query slave. Note that it is possible for a wait event to appear in both the foreground and background wait event statistics, for example the enqueue and latch free events. The background report has the same columns and ordering rules as the foreground one; in this example it is topped by log file parallel write (roughly 615 s of total wait time at 3 ms average, about 71% of background time), followed by events such as db file async I/O submit, os thread startup, control file parallel write, db file sequential read, latch: shared pool and latch: call allocation.

The list continues with many small background waits (latch free, ARCH wait on ATTACH, Log archive I/O, row cache lock, control file sequential read, db file parallel read, Disk file operations I/O, log file sequential read, direct path sync, ADR block file read/write, log file sync, enq: PR - contention, latch: cache buffers chains, db file single write, LGWR wait for redo copy, Data file init write, latch: row cache objects, latch: session allocation, db file scattered read, reliable message, direct path write, rdbms ipc reply, log file single write, library cache: mutex X, and so on), each contributing well under a second of average wait. At the bottom come the idle events: rdbms ipc message, PX Idle Wait, DIAG idle wait, Space Manager: slave idle wait, pmon timer, Streams AQ: qmn slave and coordinator idle waits, shared server idle wait, dispatcher timer, smon timer, class slave wait and SQL*Net message from client.

Service Statistics

A service is a grouping of processes. Users may be grouped under SYS$USERS, background processes under SYS$BACKGROUND, and application logins (single user) may be grouped under a service with that user name. The section is ordered by DB time and shows, for each service name (in this example the FGUARD application service, FGUARDXDB, SYS$USERS and SYS$BACKGROUND), the DB Time (s), DB CPU (s), Physical Reads (K) and Logical Reads (K) attributable to that service.

SQL Information Section

Next in the report we find several reports that present SQL statements that might be improved by tuning. If any SQL statement appears in the top 5 statements in two or more of the areas below, it is a prime candidate for tuning. The sections are:

- SQL Ordered by Elapsed Time - CPU time plus IO and other waits
- SQL Ordered by CPU Time - heavy processing, sorting, hashing
- SQL Ordered by Buffer Gets - high logical IO
- SQL Ordered by Disk Reads - high physical IO; indicates physical issues (disk, block size)
- SQL Ordered by Executions - may indicate loop issues
- SQL Ordered by Parse Calls - informational
- SQL Ordered by Sharable Memory - memory issues
- SQL Ordered by Version Count - may indicate unsafe bind variables
- SQL Ordered by Cluster Wait Time - RAC block and interconnect waits

Let's try to see what these mean.
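Between two AWR snapshots, the same "SQL ordered by ..." lists can be rebuilt from the workload repository views (licensed with the Diagnostics Pack). A sketch; :begin_snap and :end_snap are placeholders for your snapshot range:

```sql
-- Sketch: top SQL by elapsed time between two AWR snapshots.
SELECT *
FROM  (SELECT s.sql_id,
              ROUND(SUM(s.elapsed_time_delta) / 1e6, 1) AS elapsed_s,
              ROUND(SUM(s.cpu_time_delta)     / 1e6, 1) AS cpu_s,
              SUM(s.executions_delta)                   AS executions
       FROM   dba_hist_sqlstat s
       WHERE  s.snap_id >  :begin_snap
       AND    s.snap_id <= :end_snap
       GROUP  BY s.sql_id
       ORDER  BY SUM(s.elapsed_time_delta) DESC)
WHERE  ROWNUM <= 10;
```

Swap the ORDER BY column (buffer_gets_delta, disk_reads_delta, parse_calls_delta) to reproduce the other rankings.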

SQL Ordered by Elapsed Time

Total Elapsed Time = CPU Time + Wait Time. If a SQL statement appears in the total elapsed time area of the report, its CPU time plus any other wait times made it pop to the top of the pile. This is the area you will need to examine first, and probably the one that will be reported by the users or application support ("the application is slow"). Excessive elapsed time can be due to excessive CPU usage or excessive wait times. In conjunction with excessive elapsed time, check whether this piece of SQL is also a high consumer under Total CPU Time; otherwise check its wait times and Total Disk Reads. From a consumer perspective the finer details don't matter: this section is a gate opener, and often you will need to examine the other sections to find the cause.

SQL Ordered by CPU Time

When a statement appears in the Total CPU Time area, it used excessive CPU cycles during its processing. Excessive CPU processing time can be caused by sorting, excessive function usage or long parse times. Indicators that you should be looking at this section for SQL tuning candidates include high CPU percentages in the service section for the service associated with this SQL. (A hint: if the SQL is uppercase it probably comes from a user or application; if it is lowercase it usually comes from internal or background processes.) To reduce total CPU time, reduce sorting by using composite indexes that can cover the sort, and use bind variables to reduce parse times.

SQL Ordered by Buffer Gets

High total buffer gets mean a SQL statement is reading a lot of data from the db block buffers. Generally speaking, buffer gets (AKA logical IO, or LIO) are OK, except when they become excessive; and a LIO may have incurred a PIO (physical IO) in order to get the block into the buffer in the first place. To get a block from the db block buffers we have to latch it (i.e. acquire a serialization device, in order to prevent someone from modifying the data structures we are currently reading from the buffer). Although latches are much less persistent than locks, a latch is still a serialization device, and serialization devices inhibit scalability: the more you use them, the less concurrency you get. The old saying holds true: reduce the logical IO first, and the physical IO (disk reads) will largely take care of itself. Thus, to reduce excessive buffer gets, optimize the SQL to use appropriate indexes and reduce full table scans. You can also look at improving the indexing strategy and consider deploying partitioning (a licensed option). Note that by lowering buffer gets you will also require less CPU usage and less latching, so in most cases reducing buffer gets results in improved performance; its importance should not be underestimated.

SQL Ordered by Disk Reads

High total disk reads mean a SQL statement is reading a lot of data from disks rather than being able to access it in the db block buffers. Total disk reads are typified by high physical reads and a low buffer cache hit ratio, normally together with high IO wait times. In an OLTP system, excessive disk reads are undesirable and do cause performance issues. They can indicate issues with wait times (a busy or over-saturated SAN, slower underlying storage, low capacity in the HBAs or other hardware) or simply too much physical IO associated with table scans or sub-optimal indexes. High physical reads right after a server reboot are expected, as the cache is cold and data must be fetched from disk. To reduce excessive disk reads: use indexes and optimize the SQL to avoid excessive full table scans; consider increasing the db buffer cache to allow more buffers and reduce ageing; and consider partitioning. The Tablespace and File IO statistics sections of the AWR report, plus operating system diagnostic tools as simple as iostat, can help in identifying these issues.
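For the statements currently cached in the shared pool, the same candidates can be found in V$SQL. A sketch; BUFFER_GETS and DISK_READS are cumulative per child cursor:

```sql
-- Sketch: statements with the most logical IO, plus gets per execution.
SELECT *
FROM  (SELECT sql_id,
              buffer_gets,
              disk_reads,
              executions,
              ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
       FROM   v$sql
       ORDER  BY buffer_gets DESC)
WHERE  ROWNUM <= 10;
```

Ordering by disk_reads instead reproduces the "SQL Ordered by Disk Reads" view of the same data.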

SQL Ordered by Executions

High total executions need to be reviewed to see whether they are genuine executions or loops in SQL code. In general, statements with a high number of executions are usually being properly reused. However, statements with a high number of executions and a high number of logical and/or physical reads are candidates for review, to be sure they are not being executed multiple times when a single execution would serve. If the database has excessive physical and logical reads or excessive IO wait times, look at the SQL statements that show excessive executions together with high physical and logical reads. When excessive executions are not genuine, there is often an unnecessary loop in PL/SQL, Java or C#; situations also occur where scheduler (e.g. autosys) jobs erroneously fire duplicate code.

SQL Ordered by Parse Calls

Whenever a statement is issued by a user or process, regardless of whether it is already in the SQL pool, it undergoes a parse; as explained under Parsing, the parse can be a hard parse or a soft parse. Excessive parse calls usually go together with excessive executions. If the parse ratios in the report header are low, look here and in the version count area. One-to-one parse/execution ratios may be caused by unsafe bind variables, by a shared pool that is too small to retain cursors between executions, or by poor coding that simply does not reuse cursors.

SQL Ordered by Version Count

High version counts are usually due to unsafe bind variables, multiple identical-schema databases, security mismatches or overly large SQL statements that join many tables; they can also be caused by bind variable mismatches or Oracle bugs. If a statement uses what are known as unsafe bind variables, it will be re-parsed each time it is executed.

SQL Ordered by Sharable Memory

Sharable Memory refers to the Shared Pool memory area in the SGA, so this section of the AWR report shows the SQL statement cursors that consumed the largest amount of the shared pool for their storage. In general, a high value for Sharable Memory does not necessarily imply there is an issue. It simply means that:

1. these SQL statements are big or complex and Oracle has to keep lots of information about them, and/or
2. a big number of child cursors exist for those parent cursors.

In case of point 2, it may be due to poor coding such as bind variable mismatches, security mismatches or overly large SQL statements that join many tables. Usually large statements result in excessive parsing, recursion and large CPU usage. In a DSS or DW environment large complex statements are normal; in an OLTP database they are usually the result of over-normalization of the database design, attempts to use an OLTP system as a DW, or simply poor coding techniques.
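Both symptoms — many child cursors and a large shared pool footprint — can be checked outside the report in V$SQLAREA. A sketch:

```sql
-- Sketch: cursors with many versions or a large shared pool footprint.
SELECT *
FROM  (SELECT sql_id,
              version_count,
              ROUND(sharable_mem / 1024 / 1024, 2) AS sharable_mb,
              parse_calls,
              executions
       FROM   v$sqlarea
       ORDER  BY version_count DESC)
WHERE  ROWNUM <= 10;
```

On 10g and later, V$SQL_SHARED_CURSOR can then tell you why a given parent cursor accumulated so many children.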

The SQL stored in the shared pool SQL area (library cache) is reported in several ways:

- SQL ordered by Gets: reports the contents of the SQL area ordered by the number of buffer gets, and can be used to identify the most CPU-heavy SQL (high buffer gets generally correlates with heavy CPU usage). The statements of interest are those with a large number of gets per execution, especially if the number of executions is also high. Many DBAs feel that if the data is already contained within the buffer cache, the query should be efficient; this could not be further from the truth. Retrieving more data than needed, even from the buffer cache, requires CPU cycles and interprocess IO: CPU time to fetch the unnecessary data, file IO resources to fetch it, buffer resources to hold it, and additional CPU time to process the query once the data is retrieved into the buffer. (Conversely, the cost of a physical I/O is not 10,000 times that of a logical I/O; it is actually in the neighborhood of 67 times, and almost zero if the data is stored in the UNIX buffer cache.)

- SQL ordered by Reads: reports the contents of the SQL area ordered by the number of reads from the data files, and can be used to identify SQL causing IO bottlenecks. Possible reasons for high reads per execution are use of unselective indexes that require large numbers of blocks to be fetched where such blocks are not cached well in the buffer cache, index fragmentation, a large clustering factor in the index, etc.

- SQL ordered by Executions: reports the contents of the SQL area ordered by the number of query executions. It is primarily useful in identifying the most frequently used SQL within the database, so that it can be monitored for efficiency. Generally speaking, a small performance increase on a frequently used query provides greater gains than a moderate performance increase on an infrequently used query.

- SQL ordered by Parse Calls: shows the number of times a statement was parsed as compared to the number of times it was executed. One-to-one parse/execution ratios may indicate that:
  - bind variables are not being used;
  - cursor_sharing is set to EXACT (this should NOT be changed without considerable testing on the part of the client);
  - the shared pool is too small, so the parsed cursor is not retained long enough for multiple executions.

Generate Execution Plan for given SQL statement

If you have identified one or more problematic SQL statements, you may want to check the execution plan.
Remember the "Old Hash Value" from the report above (1279400914), then execute the script to generate the execution plan:

sqlplus perfstat/perfstat
SQL> @?/rdbms/admin/sprepsql.sql
Enter the Hash Value, in this example: 1279400914

SQL Text
~~~~~~~~
create table test as select * from all_objects

Known Optimizer Plan(s) for this Old Hash Value
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shows all known Optimizer Plans for this database instance, and the
Snap Id's they were first found in the shared pool. A Plan Hash Value
will appear multiple times if the cost has changed.
-> ordered by Snap Id

         First           First        Plan
 Snap Id Snap Time       Hash Value   Cost
-------- --------------- ----------- ------
       6 14 Nov 04 11:26  1386862634     52

Plans in shared pool between Begin and End Snap Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shows the Execution Plans found in the shared pool between the begin and
end snapshots specified. The values for Rows, Bytes and Cost shown below
are those which existed at the time the first-ever snapshot captured this
plan - these values often change over time, and so may not be indicative
of current values.
-> Rows indicates Cardinality, PHV is Plan Hash Value
-> ordered by Plan Hash Value

--------------------------------------------------------------------------------
| Operation                      | PHV/Object Name     | Rows | Bytes|   Cost |
--------------------------------------------------------------------------------
|CREATE TABLE STATEMENT          |----- 1386862634 ----|      |      |     52 |
|LOAD AS SELECT                  |                     |      |      |        |
| VIEW                           |                     |   1K | 216K |     44 |
|  FILTER                        |                     |      |      |        |
|   HASH JOIN                    |                     |   1K | 151K |     38 |
|    TABLE ACCESS FULL           |USER$                |   29 |  464 |      2 |
|    TABLE ACCESS FULL           |OBJ$                 |   3K | 249K |     35 |
|   TABLE ACCESS BY INDEX ROWID  |IND$                 |    1 |    7 |      2 |
|    INDEX UNIQUE SCAN           |I_IND1               |    1 |      |      1 |
|   NESTED LOOPS                 |                     |    5 |  115 |     16 |
|    INDEX RANGE SCAN            |I_OBJAUTH1           |    1 |   10 |      2 |
|    FIXED TABLE FULL            |X$KZSRO              |    5 |   65 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |    1 |   26 |     14 |
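On 10g and later, where statements are identified by SQL_ID rather than the old hash value, a plan captured in AWR can also be displayed with DBMS_XPLAN. A sketch; &sql_id is a placeholder substitution variable:

```sql
-- Sketch: display all AWR-captured plans for one SQL_ID.
SELECT plan_table_output
FROM   TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));
```

Passing a specific plan hash value as the second argument narrows the output to a single plan.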

The plan output continues with a long run of repeated FIXED TABLE FULL X$KZSPR steps (each with Rows 1, Bytes 26, Cost 14) and finishes with:

| VIEW                           |                     |    1 |   13 |      2 |
| FAST DUAL                      |                     |    1 |      |      2 |
--------------------------------------------------------------------------------

Instance Activity Stats Section

This section provides us with a number of various statistics (such as how many DBWR checkpoints occurred, or how many consistent gets occurred during the snapshot). These are the statistics that the summary information is derived from. A list of the statistics maintained by the RDBMS kernel can be found in Appendix C of the Oracle Reference manual for the version being utilized. Here is a partial example of the report:

Instance Activity Stats for DB: PHS2  Instance: phs2  Snaps: 100 -104

Statistic                        Total        per Second     per Trans
-------------------------------- ------------ -------------- ---------
CPU used by this session         ...          ...            ...
DBWR checkpoints                 33           0.0            0.0
...

The example lists statistics such as CPU used by this session, CR blocks created, DBWR checkpoints, DBWR checkpoint buffers written, DBWR buffers scanned, DBWR free buffers found, branch and leaf node splits, consistent gets, db block changes, db block gets, hot buffers moved to head of LRU, logons cumulative, physical reads, physical writes, session logical reads, sorts (disk, memory and rows), table fetch continued row, and table scans (long and short tables), each reported as a total, per second and per transaction.

Instance Activity Terminology

- Session Logical Reads: all reads satisfied from memory (the buffer cache); includes both consistent gets and db block gets.
- Consistent Gets: reads of a block that is in the cache in consistent-read mode. These are not to be confused with the consistent read (CR) copies of a block built in the buffer cache (usually the current version is read).
- Db Block Gets: blocks gotten in current mode in order to be changed; these MUST be the CURRENT block and not a CR copy.
- Db Block Changes: the db block gets (above) that were actually changed.
- Physical Reads: blocks not read from the cache — from disk, disk cache or O/S cache. (There are also "physical reads direct", which bypass the buffer cache when using Parallel Query and are not counted in the hit ratios.)

CPU USED BY THIS SESSION, PARSE TIME CPU and RECURSIVE CPU USAGE are useful to diagnose CPU saturation on the system (usually a query tuning issue). The formula to calculate the CPU usage breakdown is:

Service (CPU) Time = other CPU + parse time CPU
Other CPU = "CPU used by this session" - parse time CPU

Note that some releases do not correctly store the parse time CPU data and can show huge numbers.
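The breakdown above can be computed directly from V$SYSSTAT. A sketch; values are cumulative since startup and reported in centiseconds:

```sql
-- Sketch: Service (CPU) Time split into other CPU and parse time CPU.
SELECT MAX(CASE WHEN name = 'CPU used by this session'
                THEN value END)                          AS total_cpu,
       MAX(CASE WHEN name = 'parse time cpu'
                THEN value END)                          AS parse_cpu,
       MAX(CASE WHEN name = 'CPU used by this session' THEN value END)
     - MAX(CASE WHEN name = 'parse time cpu'           THEN value END)
                                                         AS other_cpu
FROM   v$sysstat
WHERE  name IN ('CPU used by this session', 'parse time cpu');
```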

The individual statistics most worth understanding:

- recursive cpu usage: can be high if large amounts of PL/SQL are being processed. If most of the work done in PL/SQL is procedural processing (rather than executing SQL), a high recursive CPU usage can actually indicate a potential tuning effort: identify your complete set of PL/SQL, including stored procedures, find the pieces with the highest CPU load and optimize them. It is outside the scope of this document to go into detail on this.

- parse time cpu: parsing SQL statements is a heavy operation that should be avoided by reusing SQL statements as much as possible, so that frequently executed cursors are re-executed instead of re-parsed. In precompiler programs, unnecessary parsing of implicit SQL statements can be avoided by increasing the cursor cache (MAXOPENCURSORS parameter) and by reusing cursors. In programs using Oracle Call Interface, you need to write the cursor-management code yourself. The v$sql view contains PARSE_CALLS and EXECUTIONS columns that can be used to identify SQL that is parsed often or executed only once per parse.

- other cpu: the source of other CPU is primarily the handling of buffers in the buffer cache. It can generally be assumed that the CPU time spent by a SQL statement is approximately proportional to the number of buffer gets for that statement; sort SQL statements by buffer gets in v$sql, and start tuning from the top of that list (in your report, look at the part 'SQL ordered by Gets for DB'). If this is not the case, the v$sql view also contains a CPU_TIME column which directly shows the CPU time associated with executing each statement.

- RECURSIVE CALLS: occur because of cache misses and segment extension (segments include tables, indexes, rollback segments and temporary segments). In general, if recursive calls is greater than 30 per process, the data dictionary cache should be optimized and segments should be rebuilt with storage clauses that have few large extents. Note that PL/SQL can generate extra recursive calls which may be unavoidable.

- DBWR CHECKPOINTS: the number of checkpoint messages that were sent to DBWR, not necessarily the total number of actual checkpoints that took place. During a checkpoint there is a slight decrease in performance, since data blocks are being written to disk and that causes I/O. If the number of checkpoints is reduced, the performance of normal database operations improves, but recovery after instance failure is slower.

- DBWR TIMEOUTS: the number of timeouts when DBWR had been idle since the last timeout; these are the times that DBWR looked for buffers to idle-write.

- DBWR BUFFERS SCANNED: the number of buffers looked at when scanning the LRU portion of the buffer cache for dirty buffers to make clean. Divide by "dbwr lru scans" to find the average number of buffers scanned. This count includes both dirty and clean buffers, and the scans include scans for reasons other than make-free-buffer requests; the average buffers scanned may differ from the average scan depth because write batches can fill up before a scan completes.

- DIRTY BUFFERS INSPECTED: the number of times a foreground process, looking for a buffer to reuse, encountered a dirty buffer that had aged out through the LRU queue. Any value here indicates that DBWR is not fully keeping up with the foregrounds.

- FREE BUFFER INSPECTED: the number of buffers skipped over from the end of the LRU queue in order to find a free buffer. The difference between this and "dirty buffers inspected" is the number of buffers that could not be used because they were busy, needed to be written after rapid aging out, or were being read/written.

- REDO LOG SPACE REQUESTS: indicates how many times a user process waited for space in the redo log buffer. Try increasing the init.ora parameter LOG_BUFFER so that zero redo log space requests are made.

- REDO BUFFER ALLOCATION RETRIES: the total number of retries necessary to allocate space in the redo buffer. Retries are needed either because the redo writer has gotten behind, or because an event (such as a log switch) is occurring; early writing may be needed to commit transactions, to be able to write a database buffer, or to switch logs. This should be close to zero if LGWR is keeping up.
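A quick health check for the redo statistics above can be run against V$SYSSTAT. A sketch; a retries-to-entries ratio above roughly 1% is a common rule of thumb for log buffer pressure:

```sql
-- Sketch: redo buffer allocation retries relative to redo entries.
SELECT ROUND(100 * retries.value / NULLIF(entries.value, 0), 4)
         AS retry_pct
FROM   v$sysstat retries,
       v$sysstat entries
WHERE  retries.name = 'redo buffer allocation retries'
AND    entries.name = 'redo entries';
```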

com . If this is not the case. less IO. and where possible. .xxxx. In some cases (i. IO Activity Input/Output (IO) statistics for the instance are listed in the following sections/formats: . . If the number of full table scans is high the application should be tuned to effectively use Oracle indexes. filename. another option is to compress the indexes so that they require less space and hence. .SUMMED DIRTY QUEUE LENGTH: the sum of the lruw queue length after every write request completes.TABLE FETCH BY CONTINUED ROW: indicates the number of rows that are chained to another block.Tablespace I/O Stats for DB: Ordered by total IO per tablespace. From this report you can determine if the tablespace or datafiles are suffering from sub-standard performance in terms of IO response from the disk sub-system. If the statistic "Buffer Waits" for a tablespace is greater than 1000.Table Scans (short t ables) is the number of full table scans performed on tables with less than 5 database blocks. if they exist.init.xxxx'.If the tablespace contains indexes.The queries against the contents of the owning tablespace should be examined and tuned so that less data is retrieved. Indexes. . should be used on long tables if less than 10-20% (depending on parameter settings and CPU count) of the rows from the table are returned.REDO WASTAGE: Number of bytes "wasted" because redo blocks needed to be written before they are completely full. you may want to consider tablespace reorganization in order to spread tables within it across another tablespaces. (divide by write requests to get average queue length after write completion) . check the db_file_multiblock_read_count parameter setting.File I/O Stats for DB: Ordered alphabetically by tablespace. If a datafile consistently has average read times of 20 ms or greater then: .Table Scans (long t ables) is the total number of full table scans performed on tables with more than 5 database blocks. 
This includes rows that were accessed using an index and rows that were accessed using the statement where rowid = 'xxxxxxxx.TABLE FETCH BY ROWID: the number of rows that were accessed by a rowid. but the ANALYZE table command should be used to further investigate the chaining. or to switch logs . . . . PDFmyURL.ora parameter LOG_BUFFER so that zero Redo Log Space Requests are made. Note that Oracle considers average read times of greater than 20 ms unacceptable. You may also need to tweak optimizer_index_caching and optimizer_index_cost_adj. I/O Stats Section The IO Stats stats report provides information on tablespace IO performance.e. should be eliminated by rebuilding the table. It may be too high.The contents of that datafile should be redistributed across several disks/logical volumes to more easily accommodate the load. tables with long columns) this is unavoidable. Early writing may be needed to commit transactions. to be able to write a database buffer. It is optimal to perform full table scans on short tables rather than using indexes.
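The SUMMED DIRTY QUEUE LENGTH note above can be turned into a quick calculation. This is a minimal sketch; the statistic values are hypothetical, not taken from any report in this document:

```python
# Hypothetical Instance Activity Stats values (not from any report above).
summed_dirty_queue_length = 5_280  # sum of LRU-W queue lengths after each write
write_requests = 1_200             # DBWR write requests in the interval

# Average dirty queue length after write completion; a persistently large
# value suggests DBWR cannot keep up with dirty buffer generation.
avg_queue_len = summed_dirty_queue_length / write_requests
print(round(avg_queue_len, 2))  # 4.4
```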

Tablespace IO Stats ordered by IOs (Reads + Writes) desc
[Example report omitted; one row per tablespace (TEMPFG, FG_DATA, FG_DATA_ARCH, FG_INDX, SYSAUX, SYSTEM, UNDOTBS1) with columns Reads, Av Reads/s, Av Rd(ms), Av Blks/Rd, Writes, Av Writes/s, Buffer Waits, Av Buf Wt(ms).]

If the tablespace IO report seems to indicate that a tablespace has IO problems, we can then use the file IO stats report, which allows us to drill down into the datafiles of the tablespace in question and determine what the problem might be. It may be that the datafiles need to be distributed across more disk sets.

File IO Stats ordered by Tablespace, File
[Example report omitted; one row per datafile with the same columns as above.]
[Remaining File IO Stats rows omitted.]

Buffer Pool Statistics Section

The buffer pool statistics report follows. It provides a summary of the buffer pool configuration and usage statistics. The buffer statistics are comprised of two sections:
- Buffer Pool Statistics: This section can have multiple entries if multiple buffer pools are allocated. The P column identifies the pool for the standard block size (D: default, K: keep, R: recycle) and the default pools for the other block sizes (2k, 4k, 8k, 16k, 32k); the remaining columns are Number of Buffers, Pool Hit %, Buffer Gets, Physical Reads, Physical Writes, Free Buff Wait, Writ Comp Wait and Buffer Busy Waits. A baseline of the database's buffer pool statistics should be available to compare with the current report's buffer pool statistics; a change in that pattern unaccounted for by a change in workload should be a cause for concern. Also check the Buffer Pool Advisory to identify whether increasing db_cache_size would help to reduce physical reads.
- Checkpoint Activity

Advisory Statistics Section

In this section we receive several recommendations on changes that we can make, and how each would affect the overall performance of the DB.


Instance Recovery Stats

The instance recovery stats report provides information related to instance recovery. By analyzing this report, you can determine roughly how long your database would have required to perform crash recovery during the reporting period. Here is an example of this report (B: Begin Snapshot, E: End Snapshot):

   Targt  Estd   Recovery  Actual     Target     Log Sz     Log Ckpt Timeout
   MTTR   MTTR   Estd IOs  Redo Blks  Redo Blks  Redo Blks  Redo Blks
B  0      17     3186      40426      88123      995328     88123
E  0      16     3068      34271      35669      995328     35669

Buffer Pool Advisory

The buffer pool advisory report answers the question: how big should you make your database buffer cache? It provides an extrapolation of the benefit or detriment that would result if you added or removed memory from the database buffer cache. These estimates are based on the current size of the buffer cache and the number of logical and physical IOs encountered during the reporting period. This report can be very helpful in "rightsizing" your buffer cache. Here is an example of the output of this report:

[Example report omitted; only rows with estimated physical reads > 0 are displayed, ordered by block size and buffers for estimate, with columns P, Size for Est (M), Size Factor, Buffers (thousands), Est Phys Read Factor, Estimated Phys Reads (thousands), Est Phys Read Time, Est % DB time for Rds.]
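The advisory rows can be read as simple multipliers: the estimated physical reads at each candidate cache size are the current physical reads scaled by the row's Est Phys Read Factor (factor 1.00 is the current size). A small sketch of that extrapolation; the advisory rows and read counts below are illustrative, not the report's actual values:

```python
# Hypothetical buffer pool advisory rows: (Size Factor, Est Phys Read Factor).
advisory = [(0.50, 1.26), (0.75, 1.10), (1.00, 1.00), (1.25, 0.96), (1.50, 0.94)]
current_phys_reads = 4_000_000  # physical reads observed during the interval

# Extrapolate: estimated reads at each candidate size = current reads * factor.
estimates = {size: round(current_phys_reads * factor) for size, factor in advisory}
for size in sorted(estimates):
    print(f"cache x{size:.2f}: ~{estimates[size]:,} estimated physical reads")
```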

In this example we currently have 1.488 GB allocated to the SGA (represented by the size factor column with a value of 1.00). It appears that if we were to reduce the memory allocated to the SGA to half of its current size (freeing the memory to the OS for other processes), we would incur an increase of just a few more physical reads in the process.

PGA Reports

The PGA reports provide some insight into the health of the PGA. They will indicate if PGA sort operations occurred completely in memory, or if some of those operations were written out to disk.
- The PGA Aggr Target Stats report provides information on the configuration of the PGA Aggregate Target parameter during the reporting period.
- The PGA Aggr Target Histogram report provides information on the size of various operations (e.g. sorts).
- The PGA Memory Advisory, much like the buffer pool advisory report, provides some insight into how to properly size your PGA via the PGA_AGGREGATE_TARGET database parameter.

Here we show these reports:

PGA Aggr Target Stats
B: Begin Snap, E: End Snap (rows identified with B or E contain data which is absolute, i.e. not diffed over the interval)
- Auto PGA Target: actual workarea memory target
- W/A PGA Used: amount of memory used for all workareas (manual + auto)
- %PGA W/A Mem: percentage of PGA memory allocated to workareas
- %Auto W/A Mem: percentage of workarea memory controlled by Auto Memory Mgmt
- %Man W/A Mem: percentage of workarea memory under manual control

[Example report omitted; columns PGA Aggr Target(M), Auto PGA Target(M), PGA Mem Alloc(M), W/A PGA Used(M), %PGA W/A Mem, %Auto W/A Mem, %Man W/A Mem, Global Mem Bound(K).]

PGA Aggr Target Histogram

Optimal executions are purely in-memory operations.

[Example report omitted; columns Low Optimal, High Optimal, Total Execs, Optimal Execs, 1-Pass Execs, M-Pass Execs, for workarea size buckets ranging from 2K up to 128M.]

PGA Memory Advisory

When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0.

[Example report omitted; columns PGA Target Est (MB), Size Factr, W/A MB Processed, Estd Extra W/A MB Read/Written to Disk, Estd PGA Cache Hit %, Estd PGA Overalloc Count, Estd Time.]
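The advisory rule quoted above — minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0 — can be expressed directly. The advisory rows below are hypothetical, not the report's actual values:

```python
# Hypothetical PGA Memory Advisory rows:
# (PGA Target Est (MB), Estd PGA Cache Hit %, Estd PGA Overalloc Count).
rows = [
    (200, 22, 1128),
    (400, 53, 796),
    (800, 75, 172),
    (1600, 100, 0),
    (2400, 100, 0),
    (3200, 100, 0),
]

# Per the report's advice: the smallest target whose estimated
# over-allocation count has dropped to zero.
candidates = [target for target, hit_pct, overalloc in rows if overalloc == 0]
print(f"minimum pga_aggregate_target candidate: {min(candidates)} MB")
```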

The shared pool advisory report provides assistance in right sizing the Oracle shared pool. Much like the PGA Memory Advisory or the Buffer Pool Advisory report, it provides some insight into what would happen should you add or remove memory from the shared pool. This can help you reclaim much needed memory if you have over-allocated the shared pool, and can significantly improve performance if you have not allocated enough memory to the shared pool.

Other advisories here:
- SGA Target Advisory = helps with SGA_TARGET settings.
- Streams Pool Advisory = only if Streams is used; if you are getting spills, it indicates the pool is too small.
- Java Pool Advisory = only if you are using internal Java; similar to the PL/SQL area in the library caches.

Here is an example of the shared pool advisory report (rows omitted); its columns are:

Shared Pool Size(M), SP Size Factr, Est LC Size (M), Est LC Mem Obj,
Est LC Time Saved (s), Est LC Time Saved Factr, Est LC Load Time (s),
Est LC Load Time Factr, Est LC Mem Obj Hits

SGA Target Advisory

The SGA target advisory report is somewhat of a summation of all the advisory reports previously presented in the AWR report. It helps you determine the impact of changing the setting of the SGA target size in terms of overall database performance. The report uses a value called DB Time as a measure of the increase or decrease in performance relative to the memory change made. The report also summarizes an estimate of the physical reads associated with each listed SGA size. Here is an example of the SGA target advisory report:

SGA Target   SGA Size   Est DB     Est Physical
Size (M)     Factor     Time (s)   Reads
[Rows omitted; SGA sizes range from 528 M (factor 0.5) through 2.112 M (factor 2.0).]

In this example our SGA Target size is currently set at 1056 MB. We can see from this report that if we increased the SGA target size to 2112 MB, we would see almost no performance improvement (about a 98 second improvement overall). In this case we may determine that adding so much memory to the database is not cost effective, and that the memory can be better used elsewhere.

Buffer Wait Statistics

The buffer wait statistics report helps you drill down on specific buffer wait events and where the waits are occurring. In the following report we find that the 13 buffer busy waits we saw in the buffer pool statistics report earlier are attributed to data block waits. We might then want to pursue tuning remedies to these waits if the waits are significant enough. Here is an example of the buffer wait statistics report:

Buffer Wait Statistics  DB/Inst: AULTDB/aultdb1  Snaps: 91-92
-> ordered by wait time desc, waits desc

Class                 Waits  Total Wait Time (s)  Avg Time (ms)
------------------  -------  -------------------  -------------
data block               13                    0              1
undo header               1                    0             10

Enqueue Statistics

An enqueue is simply a locking mechanism. The Enqueue Activity report provides information on enqueues (higher-level Oracle locking) that occur. This section is very useful and must be used when the wait event "enqueue" is listed in the "Top 5 timed events". As with other reports, if you see high levels of wait times in these reports, you might dig further into the nature of the enqueue and determine the cause of the delays. Here is an example of this report section:

Enqueue Activity  DB/Inst: AULTDB/aultdb1  Snaps: 91-92
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc

[Rows omitted; for each Enqueue Type (Request Reason) the report lists Requests, Succ Gets, Failed Gets, Waits, Wt Time (s) and Av Wt Time(ms). Example rows include PS-PX Process Reservation, WF-AWR Flush, TT-Tablespace, MW-MWIN Schedule, CF-Controlfile Transaction, TA-Instance Undo, US-Undo Segment and UL-User-defined.]

The action to take depends on the lock type that is causing the most problems. The most common lock waits are generally for:
- TX (Transaction Lock): Generally due to application concurrency mechanisms. The TX lock is acquired when a transaction initiates its first change and is held until the transaction does a COMMIT or ROLLBACK. It is used mainly as a queuing mechanism so that other resources can wait for the transaction to complete.
- TM (DML enqueue): Generally due to application issues, particularly if foreign key constraints have not been indexed. This lock/enqueue is

acquired when performing an insert, update, or delete on a parent or child table.
- ST (Space management enqueue): Usually caused by too much space management activity occurring on the database (such as small extent sizes, lots of sorting, etc.). For example: CREATE TABLE AS SELECT on large tables on busy instances, or several sorts spilling to disk.

The P1, P2 and P3 values of the wait event tell what the enqueue may have been waiting on; V$SESSION_WAIT and V$LOCK give more data about enqueues. The meaning of each parameter per enqueue type can be listed with:

column parameter1 format a15
column parameter2 format a15
column parameter3 format a15
column lock format a8
select substr(name,1,7) as "lock", parameter1, parameter2, parameter3
  from v$event_name
 where name like 'enq%';

(For BF we get node#, parallelizer# and bloom#.)

Undo Statistics Section

The undo segment summary report provides basic information on the performance of undo tablespaces. Undo information is provided in the following sections:
- Undo Segment Summary
- Undo Segment Stats

The examples below show typical performance problems related to undo (rollback) segments.

Undo Segment Summary for DB: S901  Instance: S901  Snaps: 2-3
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed

       Undo      Num   Max Qry   Max Tx     Snapshot   Out of   uS/uR/uU/
TS#    Blocks    Trans  Len (s)  Concurcy   Too Old    Space    eS/eR/eU
---    -------   -----  -------  --------   --------   ------   -----------
1      20.284    1.964        8        12          0        0   0/0/0/0/0/0

The description of the view V$UNDOSTAT in the Oracle Database Reference guide provides some insight as to the column definitions.

Undo Segment Stats for DB: S901  Instance: S901  Snaps: 2-3
-> ordered by Time desc

                  Undo     Num   Max Qry  Max Tx   Snap      Out of   uS/uR/uU/
End Time          Blocks   Trans  Len (s)  Concy   Too Old   Space    eS/eR/eU
------------      -------  -----  -------  ------  -------   ------   -----------
12-Mar 16:11      18.723   1.756        8      12        0        0   0/0/0/0/0/0
12-Mar 16:01       1.561     208        3      12        0        0   0/0/0/0/0/0

This section provides a more detailed look at the statistics in the previous section by listing the information as it appears in each snapshot. Should the client encounter SMU problems, monitoring this view every few minutes would provide more useful information.

Use of UNDO_RETENTION can potentially increase the size of the undo segments for a given period of time, so the retention period should not be arbitrarily set too high. The UNDO tablespace still must be sized appropriately. The following calculation can be used to determine how much space a given undo segment will consume given a set value of UNDO_RETENTION:

Undo Segment Space Required = (undo_retention_time * undo_blocks_per_second)

As an example, an UNDO_RETENTION of 5 minutes (the default) with 50 undo blocks/second (8k block size) will generate:

Undo Segment Space Required = (300 seconds * 50 blocks/second * 8K/block) = 120 M

The retention information (transaction commit time) is stored in every transaction table block and each extent map block. When the retention period has expired, SMON will be signaled to perform undo reclaims, done by scanning each transaction table for undo timestamps and deleting the information from the undo segment extent map. Only during extreme space constraint issues will the retention period not be obeyed.
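The undo sizing formula above works out as follows; the values are the ones from the worked example (5 minutes of retention, 50 undo blocks per second, 8K blocks):

```python
undo_retention = 300       # seconds (UNDO_RETENTION of 5 minutes)
undo_blocks_per_sec = 50   # undo blocks generated per second
block_size_kb = 8          # 8K database block size

# Undo Segment Space Required = retention * blocks/second * block size
undo_space_kb = undo_retention * undo_blocks_per_sec * block_size_kb
print(f"{undo_space_kb} KB")  # 120000 KB, i.e. the 120 M of the example
```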
Latch Statistics Section

The latch activity report provides information on Oracle's low-level locking mechanism called a latch. From this report you can determine if Oracle is suffering from latching problems, and if so, which latches are causing the greatest amount of contention on the system. Latch information is provided in the following three sections:
- Latch Activity
- Latch Sleep Breakdown
- Latch Miss Sources

Latch contention generally indicates resource contention and supports indications of it in other sections. It is indicated by a Pct Miss of greater than 1.0% or a relatively high value in Avg Sleeps/Miss. This information should be checked whenever the "latch free" wait event or other latch wait events experience long waits. While each latch can indicate contention on some resource, the more common latches to watch are:

cache buffers chains = This latch protects the hash chains of cache buffers and is used for each access to cache buffers. Contention for it can often only be reduced by reducing the amount of access to cache buffers. Using the X$BH fixed table can identify if some hash chains have many buffers associated with them. Often a single hot block, such as an index root block, can cause contention for this latch; contention on it confirms a hot block issue.

cache buffers lru chain = The buffer cache has a set of chains of LRU blocks, each protected by one of these latches. Contention for this latch can often be reduced by increasing the db_block_lru_latches parameter or by reducing the amount of access to cache buffers.

shared pool = The shared pool latch is heavily used during parsing, in particular during hard parsing of literal SQL. If your application is written so that it generally uses literals instead of bind variables, you will have high contention on this latch. Applications that are coded to parse once per cursor and execute multiple times will almost completely avoid contention for the shared pool latch. Setting the cursor_sharing parameter in init.ora to the value 'force' reduces hard parsing and some of the contention for this latch. Contention on this latch in conjunction with reloads in the SQL Area of the library cache section indicates that the shared pool is too small. See Note 62143.1 'Understanding and Tuning the Shared Pool' for an excellent discussion of this topic.

library cache = The library cache latch is heavily used during both hard and soft parsing. If you have high contention for this latch, your application should be modified to avoid parsing if at all possible. Setting the cursor_sharing parameter in init.ora to the value 'force' provides some reduction in the library cache latch needs for hard parses, and setting session_cached_cursors sufficiently high provides some reduction in the library cache latch needs for repeated soft parsing within a single session. There is also minor contention for this latch involved in executing SQL statements.

row cache = The row cache latch protects the data dictionary information, such as information about tables and columns. During hard parsing this latch is used extensively. The cursor_sharing parameter can be used to completely avoid the row cache latch lookup during parsing, and the parameter session_cached_cursors might need to be set.

This section is particularly useful for determining latch contention on an instance. Here is a partial example of the latch activity report (it is quite long):

                                          Pct    Avg  Wait              Pct
                                   Get    Get   Slps  Time    NoWait NoWait
Latch Name             Requests   Miss  /Miss    (s) Requests   Miss
---------------------- --------  -----  -----  ----- -------- ------
ASM allocation              122    0.0    N/A      0        0    N/A

[Further latch activity rows omitted: ASM map headers, ASM map load waiting lis, ASM map operation freeli, ASM map operation hash t, ASM network background l, AWR Alerted Metric Eleme, Consistent RBA, FAL request queue, FAL subheap alocation, FIB s.o chain latch, FOB s.o list latch, JS broadcast add buf lat, JS broadcast drop buf la, ...]

In this example our database does not seem to be experiencing any major latch problems: the wait times on the latches are 0, and our get miss pct (Pct Get Miss) is 0 also. There is also a latch sleep breakdown report, which provides some additional detail if a latch is being constantly moved into the sleep cycle, and the latch miss sources report provides a list of latches that encountered sleep conditions. These reports can be of further assistance when trying to analyze which latches are causing problems with your database, as latch sleeps can cause additional performance issues.

Segments Statistics Sections

This is a series of reports that let you identify objects that are heavily used. It contains several sub-sections like:
Segments by Logical Reads
Segments by Physical Reads
Segments by Physical Read Requests
Segments by UnOptimized Reads
Segments by Optimized Reads
Segments by Direct Physical Reads
Segments by Physical Writes
Segments by Physical Write Requests
Segments by Direct Physical Writes
Segments by Table Scans
Segments by DB Blocks Changes
Segments by Row Lock Waits

Segments by ITL Waits
Segments by Buffer Busy Waits

Segments by Logical Reads and Segments by Physical Reads

The "segments by logical reads" and "segments by physical reads" reports provide information on the database segments (tables, indexes) that are receiving the largest number of logical or physical reads. These reports can help you find the "hot" objects in the database. You may want to review those objects and determine why they are hot, and whether there are any tuning opportunities available on them (e.g. partitioning) or on the SQL accessing them. For example, if an object is showing up on the physical reads report, it may be that an index is needed on that object. Here is an example of the segments by logical reads report:

Segments by Logical Reads
[Example report omitted; the header gives the total logical reads and notes that captured segments account for 99.3% of the total. Columns: Owner, Tablespace Name, Object Name, Subobject Name, Obj. Type, Logical Reads, % Total; top segments include SIGNATURES, SIGNATORY, DIBATCH and the FLOWDOCUMENT/FLOWDOCUMENT_ARCH partitions.]

Segments by Physical Reads
Queries using these segments should be analyzed to check whether any full table scans (FTS) are happening on them; if FTS is happening, proper indexes should be created to eliminate it. Most of these SQL statements can be found under the section SQL Statistics -> SQL ordered by Reads.
[Example report omitted; captured segments account for 86.4% of total physical reads, with the same layout as the logical reads report.]
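The % Total column in these segment reports is simply each segment's reads divided by the instance-wide total, and the "captured segments account for" figure is the same ratio over the sum of the listed segments. A sketch with hypothetical numbers (not the report's actual values):

```python
# Hypothetical totals for a "Segments by Logical Reads" report.
total_logical_reads = 393_056_567     # instance-wide total for the interval
segment_reads = {                     # captured top segments
    "SIGNATURES": 320_134_604,
    "SIGNATORY": 30_176_000,
    "FLOWDOCUMENT": 6_246_000,
}

# Per-segment % Total, as shown in the report's last column.
for name, reads in segment_reads.items():
    print(f"{name}: {100 * reads / total_logical_reads:.2f}% of total")

# The "captured segments account for N% of total" header line.
captured_pct = 100 * sum(segment_reads.values()) / total_logical_reads
print(f"captured segments account for {captured_pct:.1f}% of total")
```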

Several segment-related reports appear providing information on:
•Segments with ITL waits
•Segments with Row lock waits
•Segments with buffer busy waits
•Segments with global cache buffer waits
•Segments with CR blocks received
•Segments with current blocks received

Segments by Physical Read Requests
[Example report omitted; captured segments account for 60.7% of total physical read requests, led by SIGNATURES, SIGNATORY, FLOWDOCUMENT_ARCH, ACCOUNT and the FDSTATE400 partition.]

Segments by UnOptimized Reads
[Example report omitted; captured segments account for 60.7% of total unoptimized read requests, with the same top segments.]

Segments by Direct Physical Reads

[Example rows omitted; captured segments account for 86.3% of total direct physical reads. Some rows show "** MISSING **" owner/tablespace entries (e.g. ** MISSING: 501101/4223360 **) for segments, typically temporary or dropped ones, that could not be resolved in the data dictionary when the snapshot was captured.]

Segments by Physical Writes
[Example report omitted; top segments include UN_FD_ACCTSERIALROUTSEQNUMBER, PROCESSLOG, I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST, SIGNATURES and HISTGRM$.]

Segments by Physical Write Requests
[Example report omitted.]

Segments by Direct Physical Writes
[Example report omitted.]

Segments by Table Scans
[Example report omitted; captured segments account for 95.6% of total table scans, led by the FLOWDOCUMENT partitions (FDSTATE400, FDSTATE300, FDSTATE200), SIGNATORY and SIGNATURES.]

Segments by DB Blocks Changes
% of Capture shows the % of DB block changes for each top segment compared with the total DB block changes for all segments captured by the snapshot.
[Example report omitted; top segments include PROCESSLOG, PK_PROCESSLOG, I_H_OBJ#_COL#, HISTGRM$ and FK_PROCESSLOG_PROCESSID.]

Segments by Row Lock Waits
[Example report omitted; top segments are RRA_OWNER objects such as RRA_SRF_IX_04, RRA_PROCSTATUS_IX_03, RRA_REPTRN_PK, TRRA_BALANCE_STATUS and RRA_PROCSTATUS_IX_02.]

This statistic displays segment details based on the total row lock waits that happened during the snapshot interval. The data is sorted on the "Row Lock Waits" column in descending order. It provides information about the segments on which most database locking is happening. DML statements using these segments should be analysed further to check the possibility of reducing concurrency due to row locking.

Segments by ITL Waits

Owner      Tablespace Name  Object Name         Obj. Type  ITL Waits  % of Capture
RRA_OWNER  RRA_INDEX        RRA_MSGBLOBS_IX_01  INDEX             23         27.06
RRA_OWNER  RRA_INDEX        RRA_TRN_IX_08       INDEX             15         17.65
RRA_OWNER  RRA_INDEX        RRA_INT_CLOB_IX_01  INDEX             10         11.76
RRA_OWNER  RRA_INDEX        RRA_TRN_IX_05       INDEX             10         11.76
RRA_OWNER  RRA_INDEX        RRA_TRN_IX_10       INDEX              8          9.41

Whenever a transaction modifies a segment block, it first adds its transaction id to the Interested Transaction List (ITL) of the block. The size of this list is a block-level configurable parameter; based on its value, that many ITL slots are created in each block. An ITL wait happens when the number of transactions trying to update the same block at the same time is greater than the number of ITL slots. In this scenario, the first transaction that acquires a lock on the block is able to proceed, whereas the other transactions wait for the first transaction to finish. Total waits happening in this example are very low (23 is the maximum), hence it is not recommended to increase the ITL parameter value here.

Segments by Buffer Busy Waits

[Sample table garbled in extraction; the top segments are the RRA_OWNER indexes RRA_REPTRN_PK, RRA_REPBAL_IX_03, RRA_PROCSTATUS_IX_03, RRA_MSGINB_PK and RRA_PROCSTATUS_PK, with the columns Owner, Tablespace Name, Object Name, Subobject Name, Obj. Type, Buffer Busy Waits and % of Capture.]

Buffer busy waits happen when more than one transaction tries to access the same block at the same time.
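When ITL waits on a segment really are significant, the number of ITL slots can be raised by increasing INITRANS and rebuilding the segment so existing blocks pick up the new value. A hedged sketch, reusing an index name from the example above (the value 4 is only illustrative):

```sql
-- Increase the initial ITL slot count for an index.
-- INITRANS only applies to newly formatted blocks, so
-- rebuild the index to apply it to existing blocks.
alter index rra_owner.rra_msgblobs_ix_01 initrans 4;
alter index rra_owner.rra_msgblobs_ix_01 rebuild;
```

Since each ITL slot consumes space in every block, this trade-off is only worth making for segments that actually show ITL contention.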

If there is more than one instance of a process continuously polling the database by executing the same SQL (to check if there are any records available for processing), the same block is read concurrently by all the instances of the process, and this results in the buffer busy wait event.

Retrieve SQL and Execution Plan from AWR Snapshots

This is a simple script that can help you to collect SQL statements executed since yesterday (configurable) that contain a specific value in their text:

col parsed format a6
col sql_text format a40
set lines 200
set pages 300
select sql_text,
       parsing_schema_name as parsed,
       elapsed_time_delta/1000/1000 as elapsed_sec,
       stat.snap_id,
       to_char(snap.end_interval_time,'dd.mm hh24:mi:ss') as snaptime,
       txt.sql_id
  from dba_hist_sqlstat stat, dba_hist_sqltext txt, dba_hist_snapshot snap
 where stat.sql_id = txt.sql_id
   and stat.snap_id = snap.snap_id
   and snap.begin_interval_time >= sysdate-1
   and lower(sql_text) like '%&sql_test%'
   and parsing_schema_name not in ('SYS','SYSMAN','MDSYS','WKSYS')
 order by elapsed_time_delta asc;

This will show something like:

Enter value for sql_test: delete
old   7:    and lower(sql_text) like '%&sql_test%'
new   7:    and lower(sql_text) like '%delete%'

SQL_TEXT                                 PARSED ELAPSED_SEC SNAP_ID SNAPTIME       SQL_ID
---------------------------------------- ------ ----------- ------- -------------- -------------
DELETE FROM WWV_FLOW_FILE_OBJECTS$ WHERE APEX_0     .484942     688 23.08 16:00:54 8rpn8jtjnuu73
SECURITY_GROUP_ID = 0                    30200

Then with that sql_id, we can retrieve the execution plan from the snapshots:

select plan_table_output from table (dbms_xplan.display_awr('&sqlid'));

Enter value for sqlid: 8rpn8jtjnuu73
old   1: select plan_table_output from table (dbms_xplan.display_awr('&sqlid'))
new   1: select plan_table_output from table (dbms_xplan.display_awr('8rpn8jtjnuu73'))

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID 8rpn8jtjnuu73
--------------------
DELETE FROM WWV_FLOW_FILE_OBJECTS$ WHERE SECURITY_GROUP_ID = 0

Plan hash value: 358826532

-------------------------------------------------------------------------------------
| Id  | Operation          | Name                       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |                            |       |       |     1 (100)|
|   1 |  DELETE            | WWV_FLOW_FILE_OBJECTS$     |       |       |            |
|   2 |   INDEX RANGE SCAN | WWV_FLOW_FILES_SGID_FK_IDX |     1 |   202 |     0   (0)|
-------------------------------------------------------------------------------------

Get Data from ASH

If you need to quickly check a performance problem on your DB, ASH is great here. We sample the wait events of active sessions every second into the ASH buffer. It is accessed most comfortably with the Enterprise Manager GUI from the Performance page (button "ASH Report" there). Or with little effort from the command line like this:

-----------------------------------------
-- Top 10 CPU consumers in last 5 minutes
-----------------------------------------
select *
  from (select session_id, session_serial#, count(*)
          from v$active_session_history
         where session_state = 'ON CPU'
           and sample_time > sysdate - interval '5' minute
         group by session_id, session_serial#
         order by count(*) desc)
 where rownum <= 10;

--------------------------------------------
-- Top 10 waiting sessions in last 5 minutes
--------------------------------------------
select *

  from (select session_id, session_serial#, count(*)
          from v$active_session_history
         where session_state = 'WAITING'
           and sample_time > sysdate - interval '5' minute
         group by session_id, session_serial#
         order by count(*) desc)
 where rownum <= 10;

These 2 queries should spot the most incriminating sessions of the last 5 minutes. But who is that and what SQL was running?

--------------------
-- Who is that SID?
--------------------
set lines 200
col username for a10
col osuser for a10
col machine for a10
col program for a10
col resource_consumer_group for a10
col client_info for a10
select serial#, username, osuser, machine, program,
       resource_consumer_group, client_info
  from v$session
 where sid = &sid;

--------------------------
-- What did that SID do?
--------------------------
select distinct sql_id, session_serial#
  from v$active_session_history
 where sample_time > sysdate - interval '5' minute
   and session_id = &sid;

-------------------------------------------
-- Retrieve the SQL from the Library Cache:
-------------------------------------------
col sql_text for a80
select sql_text
  from v$sql
 where sql_id = '&sqlid';
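Along the same lines, ASH also records the wait event of each sample, so the same buffer can show what a suspect session was actually waiting on. A minimal sketch in the style of the queries above:

```sql
---------------------------------------------
-- What was that SID waiting on? (last 5 min)
---------------------------------------------
select event, count(*)
  from v$active_session_history
 where session_state = 'WAITING'
   and sample_time > sysdate - interval '5' minute
   and session_id = &sid
 group by event
 order by count(*) desc;
```

The counts are one-second samples, so they approximate the time the session spent on each event during the interval.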

Moving AWR Information

Enterprise Manager allows administrators to transfer Automatic Workload Repository snapshots to other workload repositories for offline analysis. Oracle contains a package, DBMS_SWRF_INTERNAL, that provides AWR snapshot export and import. This is accomplished by the administrator specifying a snapshot range and extracting the AWR data to a flat file. The flat file is then loaded into a user-specified staging schema in the target database. To complete the transfer, the data is copied from the staging schema into the target repository's SYS schema. All data in snapshot ranges that does not conflict with existing data is loaded; if the snapshot range already exists in the SYS or staging schema, the data being imported is ignored. The data in the SYS schema is then used as the source for the ADDM analysis.

The example below exports a snapshot range starting with 101 and ending at 105 to the output dump file 'awr_export_wmprod1_101_105' in the directory '/opt/oracle/admin/awrdump/wmprod1':

BEGIN
  DBMS_SWRF_INTERNAL.AWR_EXTRACT(
    DMPFILE => 'awr_export_wmprod1_101_105',
    DMPDIR  => '/opt/oracle/admin/awrdump/wmprod1',
    BID     => 101,
    EID     => 105);
END;
/

We then use the AWR_LOAD procedure to load the data into our target repository staging schema:

BEGIN
  DBMS_SWRF_INTERNAL.AWR_LOAD(
    SCHNAME => 'foot',
    DMPFILE => 'awr_export_wmprod1_101_105',
    DMPDIR  => '/opt/oracle/admin/awrdump/wmprod1');
END;
/

The last step is to transfer the data from our staging schema (FOOT) to the SYS schema for analysis:

BEGIN
  DBMS_SWRF_INTERNAL.MOVE_TO_AWR(SCHNAME => 'foot');
END;
/

Links with AWR Analyzer

- Excel Performance Analyzer (Perfsheet v2.0)
- Statspack Analyzer
- http://www.oraperf.com (web based application; it processes plain-text Statspack or AWR reports)
- http://www.txmemsys.com (STATSPACK and AWR Viewer software)
- http://www.dbapool.com (Analyze your AWR or Statspack)
- Scripts: Download Tool to Analyze AWR or Statspack Reports
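Before choosing the BID and EID values for an extract like the one above, it helps to see which snapshots the repository currently holds. A simple sketch:

```sql
-- List the snapshots currently kept in the workload repository
select snap_id,
       to_char(begin_interval_time,'dd.mm.yyyy hh24:mi') as snap_begin,
       to_char(end_interval_time,'dd.mm.yyyy hh24:mi')   as snap_end
  from dba_hist_snapshot
 order by snap_id;
```

Pick a BID/EID pair that brackets the period of interest; the range must lie within the retention window (7 days by default).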
