8. Connect as hr/hr and gather statistics for all objects in the HR schema using the
DBMS_STATS package, saving the current statistics first and then restoring the
original statistics.
a. Connect as HR and create a table to hold statistics in that schema.
$ sqlplus hr/hr
SQL> execute -
dbms_stats.create_stat_table('HR','MY_STATS');
b. Save the current schema statistics into your local statistics table.
SQL> execute -
dbms_stats.export_schema_stats('HR','MY_STATS');
c. Analyze all objects under the HR schema.
SQL> execute dbms_stats.gather_schema_stats('HR');
d. Remove all schema statistics from the dictionary and restore the original
statistics you saved in step b.
SQL> execute dbms_stats.delete_schema_stats('HR');
SQL> execute -
dbms_stats.import_schema_stats('HR','MY_STATS');
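As an optional sanity check (not part of the original steps), you can compare the LAST_ANALYZED timestamps on HR's tables before and after the import; after step d they should revert to the timestamps recorded when you saved the statistics:

```sql
-- Hypothetical check, run while connected as HR: after importing the saved
-- statistics, LAST_ANALYZED should match the original (pre-gather) values.
select table_name, num_rows, last_analyzed
from   user_tables
order  by table_name;
```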
Workshop Scenario
At the beginning of each scenario, the database is shut down and then restarted. Perform the
following steps:
Make sure that the job scheduler is set to collect statistics every 10 minutes.
Start the workload generator and note the time. If the workload generator is still running
against the database, stop the generator and restart it.
Allow time for some statistics to be generated (at least 20 minutes). Shorter time periods
make it more difficult to determine where problems exist.
After an adequate period of time, run the script spreport.sql. Choose start and end
snapshots that fall within the period during which the workload generator was running.
Name the report after the scenario; for example, use reportscn1.txt for Scenario 1,
reportscn5.txt for Scenario 5, and so on.
Look for the generated report in the directory from which SQL*Plus was started.
When resizing memory components, do not allocate more memory than is actually required
to meet the stated objectives. For example, there is little point in resizing the shared
pool to 500 MB. The purpose of the workshop is to be as realistic as possible.
• Collect a snapshot.
• Provide a workload by executing the workload generator.
• Collect a snapshot.
• Generate a report.
Workshop Scenario 1
Waits recorded on the shared pool and library cache latches could be indicative of a
shared pool that is too small. However, before rushing out and increasing
SHARED_POOL_SIZE, it is advisable to determine why the pool is too small. Some
reasons are listed below:
• Many SQL statements stored in the SQL area are executed only once, and they differ
only in the literals in the WHERE clause. A case could be made here for using bind
variables, or for setting CURSOR_SHARING. To identify these SQL statements, examine
the report created by STATSPACK, or query V$SQL using the LIKE operator in the WHERE
clause to collect information about similar SQL statements.
• Examine what packages are loaded using the query shown below:
select * from v$db_object_cache
where sharable_mem > 10000
and (type='PACKAGE' or type='PACKAGE BODY' or
type='FUNCTION' or type='PROCEDURE')
and KEPT='NO';
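The V$SQL approach mentioned above can be sketched as follows. This is only an illustration: the prefix length (60 characters) and the threshold (more than 10 versions) are arbitrary choices, not values from the workshop.

```sql
-- Group near-identical statements by their leading text. Many distinct
-- cursors sharing the same prefix, each executed once, suggest literals
-- where bind variables (or CURSOR_SHARING) would help.
select substr(sql_text, 1, 60) as sql_prefix, count(*) as versions
from   v$sql
where  executions = 1
group  by substr(sql_text, 1, 60)
having count(*) > 10
order  by versions desc;
```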
• Collect a snapshot.
• Provide a workload by executing the workload generator.
• Collect a snapshot.
• Generate a report.
Workshop Scenario 2
The first indication that the buffer cache is too small is waits on the free buffer
waits event. Waits on the cache buffers lru chain latch might also indicate that the
buffer cache is too small, or that the DBWR process is not able to keep up with the
workload. To determine which problem is causing the latch contention, examine the
number of writes in the file statistics found in the STATSPACK report.
On the front page of the STATSPACK report, the section named “Instance Efficiency
Percentages” lists the important ratios of the instance. For this scenario, the values of
“Buffer Hit%:” are of interest.
Values for the “Buffer Hit%:” depend on individual systems. Ideally, this value should
be close to 100 percent; however, there might be several reasons why this goal cannot be
realized. A low percentage indicates that many blocks are being read from disk into the
database buffer cache.
Before increasing the size of the database buffer cache, you should examine which SQL
statements are being run against the database. You are looking for statements that cause
a high number of buffer gets, and for how many times these statements are executed. If a
statement is executed only a few times, it may not be worth the effort of tuning.
However, a statement that is executed many times and has a high number of buffer gets is
an ideal candidate for SQL tuning.
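One way to find such candidates is to sort V$SQL by buffer gets; this is a sketch, and the ordering and the gets-per-execution calculation are judgment calls rather than part of the workshop script:

```sql
-- Statements with the highest logical I/O; a high GETS_PER_EXEC combined
-- with a high EXECUTIONS count marks a good SQL tuning candidate.
select buffer_gets, executions,
       buffer_gets / executions as gets_per_exec,
       substr(sql_text, 1, 80) as sql_text
from   v$sql
where  executions > 0
order  by buffer_gets desc;
```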
• Collect a snapshot.
• Provide a workload by executing the workload generator.
• Collect a snapshot.
• Generate a report.
Workshop Scenario 3
Waits on the log buffer space event are an indication that your log buffer is too small.
On the first page of the STATSPACK report there is a section named “Instance Efficiency
Percentages.” Note the value of the “Redo NoWait %” statistic. While this statistic’s
ideal value of 100 percent is seldom achieved, a value noticeably below 100 percent
indicates that the Redo Log Buffer is not correctly sized. If no memory is available,
consider reducing the amount of redo created by using NOLOGGING in appropriate
statements.
Query the data dictionary in order to determine the current size of the Redo Log Buffer.
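A sketch of that dictionary query; either view should report the same size, and both names are real Oracle dynamic performance views:

```sql
-- Current size of the redo log buffer, in bytes.
select value from v$parameter where name = 'log_buffer';
select bytes from v$sgastat   where name = 'log_buffer';
```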
In order to determine an estimate on the increase required, examine the amount of redo that is
generated; this is found on the first page of the STATSPACK report, under the heading “Load
Profile.”
Edit the init.ora file to set a new value for the LOG_BUFFER parameter, then bounce
(shut down and restart) the database.