4th Qtr 2010
I personally prefer to have 35 days of retention, so that it can cover the whole month. If you can afford to store this data for longer periods, then I would strongly suggest you do that. In any case, to trend data for a specified period, you will need to retain the AWR data for that period.
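On databases where you manage the repository yourself, retention (and the snapshot interval) can be changed with the documented DBMS_WORKLOAD_REPOSITORY package. A minimal sketch, assuming 35-day retention and hourly snapshots; both parameters are in minutes, and you should pick values that fit your own storage budget:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 35 * 24 * 60,   -- 35 days, expressed in minutes (50400)
    interval  => 60);            -- take a snapshot every hour
END;
/

The current settings can be checked afterwards in DBA_HIST_WR_CONTROL.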
Benefits of AWR
The following list includes some key benefits of Automatic Workload Repository:
Easy to find a recent spike in load
Helpful in capacity planning
Design Load testing based on current capacity and load
Easy to write queries against AWR tables
SQL statistics history
Easy to find if the SQL execution plan for a particular SQL statement changed recently

I have detailed the following AWR scripts that I use in my day-to-day tasks. These are based upon the DBA_HIST* tables and trend the same data that a set of AWR reports would have otherwise provided. You can modify the scripts to trend the data for particular hours, days, weeks or for the whole data available, based on the AWR retention you have set for that database.
System Event Trending
The following script can be very helpful when you want to see how an event trended
over a period of time. The script takes two arguments—date and event name.
alter session set nls_date_format='dd-mon-yy';
set lines 150 pages 100 echo off feedback off
col date_time heading 'Date time|mm/dd/yy_hh_mi_hh_mi' for a25
col event_name for a26
col waits for 99,999,999,999 heading 'Waits'
col time for 99,999 heading 'Total Wait|Time(sec)'
col avg_wait_ms for 99,999 heading 'Avg Wait|(ms)'
prompt "Enter the date in DD-Mon-YY Format:"
WITH system_event AS (
select
  sn.begin_interval_time begin_interval_time,
  sn.end_interval_time end_interval_time,
  se.event_name event_name,
  se.total_waits e_total_waits,
  lag(se.total_waits,1) over (order by se.snap_id) b_total_waits,
  se.total_timeouts e_total_timeouts,
  lag(se.total_timeouts,1) over (order by se.snap_id) b_total_timeouts,
  se.time_waited_micro e_time_waited_micro,
  lag(se.time_waited_micro,1) over (order by se.snap_id) b_time_waited_micro
from dba_hist_system_event se,
     dba_hist_snapshot sn
where trunc(sn.begin_interval_time) = '&Date'
and se.snap_id = sn.snap_id
and se.dbid = sn.dbid
and se.instance_number = sn.instance_number
and se.dbid = (select dbid from v$database)
and se.instance_number = (select instance_number from v$instance)
and se.event_name = '&event_name'
)
select
  to_char(se1.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24_mi')||
  to_char(se1.END_INTERVAL_TIME,'_hh24_mi') date_time,
  se1.event_name,
  se1.e_total_waits - nvl(se1.b_total_waits,0) waits,
  (se1.e_time_waited_micro - nvl(se1.b_time_waited_micro,0)) / 1000000 time,
  ((se1.e_time_waited_micro - nvl(se1.b_time_waited_micro,0)) / 1000) /
    (se1.e_total_waits - nvl(se1.b_total_waits,0)) avg_wait_ms
from system_event se1
where (se1.e_total_waits - nvl(se1.b_total_waits,0)) > 0
and nvl(se1.b_total_waits,0) > 0
/
SQL> @event_response
"Enter the date in DD-Mon-YY Format:"
Enter value for date: 29-jan-10
old 15: trunc(sn.begin_interval_time) ='&Date'
new 15: trunc(sn.begin_interval_time) ='29-jan-10'
Enter value for event_name: db file sequential read
old 21: and se.event_name='&event_name'
new 21: and se.event_name='db file sequential read'

Date time                                                Total Wait  Avg Wait
mm/dd/yy_hh_mi_hh_mi    EVENT_NAME                Waits  Time(sec)   (ms)
----------------------- ------------------------- ---------- ---------- -------
01/29/10_01_00_02_00    db file sequential read      551,356      4,500       8
01/29/10_02_00_03_00    db file sequential read    1,114,616      7,921       7
01/29/10_03_00_04_00    db file sequential read      764,481      5,926       8
01/29/10_04_00_05_00    db file sequential read      845,195      6,633       8
01/29/10_05_00_06_00    db file sequential read    1,385,501      8,501       6
01/29/10_06_00_07_00    db file sequential read    3,785,824     14,703       4
01/29/10_07_00_08_00    db file sequential read    2,393,513      6,996       3
01/29/10_08_00_09_00    db file sequential read    2,590,092      6,273       2
01/29/10_09_00_10_00    db file sequential read    2,322,715      5,390       2
01/29/10_10_00_11_00    db file sequential read    2,806,934      6,913       2
01/29/10_11_00_12_00    db file sequential read    2,691,573      3,501       1
01/29/10_12_00_13_00    db file sequential read    1,737,420      3,349       2
01/29/10_13_00_14_00    db file sequential read      489,453      2,297       5
01/29/10_14_00_15_00    db file sequential read      791,114      2,842       4
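The heart of the script above is the LAG() analytic function: AWR stores cumulative wait counters per snapshot, and LAG() turns them into per-interval deltas. The same technique can be sketched in miniature outside Oracle; the example below uses SQLite from Python, with a hypothetical hist_event table standing in for DBA_HIST_SYSTEM_EVENT:

```python
# Sketch of the cumulative-counter-to-delta technique, assuming a made-up
# hist_event table in place of DBA_HIST_SYSTEM_EVENT (SQLite, not Oracle).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE hist_event (
    snap_id INTEGER, event_name TEXT,
    total_waits INTEGER, time_waited_micro INTEGER)""")

# Cumulative values, as AWR would record them at each snapshot.
rows = [
    (1, "db file sequential read", 1000,  8000000),
    (2, "db file sequential read", 1600, 12800000),
    (3, "db file sequential read", 2600, 17800000),
]
con.executemany("INSERT INTO hist_event VALUES (?, ?, ?, ?)", rows)

# LAG(...) fetches the previous snapshot's value; the difference is the
# activity inside that interval, and micro/1000/waits gives avg wait (ms).
sql = """
SELECT snap_id,
       total_waits - LAG(total_waits, 1) OVER (ORDER BY snap_id) AS waits,
       (time_waited_micro - LAG(time_waited_micro, 1) OVER (ORDER BY snap_id))
         / 1000.0
         / (total_waits - LAG(total_waits, 1) OVER (ORDER BY snap_id))
         AS avg_wait_ms
FROM hist_event
WHERE event_name = 'db file sequential read'
"""
for snap_id, waits, avg_ms in con.execute(sql):
    print(snap_id, waits, avg_ms)
```

As in the Oracle script, the first snapshot has no predecessor, so its delta is NULL; the production script filters such rows out with the nvl()-based predicates.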
Load Profile Trending
The first page of the AWR report gives us a lot of information about database behavior, such as whether the database is more read intensive or write intensive (depending upon the Physical Reads/sec or Physical Writes/sec statistics), whether it does lots of logical IO/sec, has a high OLTP component (as derived from a higher number of Transactions/sec), or does a lot of parsing (hard or soft). By looking at a one-hour-interval report you can get some idea, but if you can look at the trend for the whole day, you will get a broader picture, as some databases trend from an OLTP workload during the day to a DSS workload during off hours.

The Load Profile section also helps to determine if load has changed over time compared to the baseline (i.e., an AWR report from when the system was healthy). There is no good or best value for these statistics. Although these numbers vary by database and application, when the number of Logons/sec is more than 10, or the database has a higher hard parse rate (>100/sec or so), this could be an indication of an underlying configuration or application design issue that manifests itself as a performance issue. In this regard, the number of Logical Reads/sec is also a good statistic to look at.

The script below trends the physical reads/sec.
alter session set nls_date_format='dd-mon-yy';
set lines 130 pages 1000 echo off feedback off
col stat_name for a25
col date_time for a20
col BEGIN_INTERVAL_TIME for a20
col END_INTERVAL_TIME for a20
prompt "Enter the date in DD-Mon-YY Format and Stats you want to trend like 'redo size','physical reads','physical writes','session logical reads' etc."
WITH sysstat AS (
select
  sn.begin_interval_time begin_interval_time,