Frequent commit statements clear out the buffer, so a smaller buffer can suffice. LOG_BUFFER has a minimum size of 64 KB. The value of redo buffer allocation retries should be near 0 and should not be greater than 1% of redo entries.
5. The wait may be caused by the log buffer being too small, or by checkpointing or log switching. In that case increase the size of the log buffer, or improve the checkpoint or archiving process.
6. A growing SECONDS_IN_WAIT value for the log buffer space event indicates that a log switch did not occur. It also indicates that buffers are being filled faster than LGWR can write them, and can indicate disk I/O contention on the log files.
7. The redo buffer allocation retries statistic shows the number of times a user process waited for space in the redo log buffer to copy new entries over entries that have already been written to disk.
8. If DBWn has not completed checkpointing a file when LGWR needs the file again, LGWR has to wait, and the message "Checkpoint not complete" appears in the alert file.
9. Check the frequency of checkpoints and set appropriate values for LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT.
10. The archiver may not be able to write to the archived redo log destination, or may not archive fast enough. Confirm that the archive device is not full and add redo log groups; in this case you can also start multiple archiver processes.
11. DB_BLOCK_CHECKSUM set to TRUE adds performance overhead.
12. Ways to avoid logging bulk operations in the redo log:
a) Direct path loading without archiving does not generate redo.
b) Direct path loading with archiving can use NOLOGGING mode.
c) Direct-load INSERT can use NOLOGGING mode.
d) Some SQL statements can use NOLOGGING mode.
e) NOLOGGING applies to tables, tablespaces, and indexes.
f) It does not record changes to data in the redo log buffer (some minimal logging is still carried out, for operations such as extent allocation).
g) NOLOGGING mode does not apply to every operation on the object for which the attribute is set in the CREATE command for a table, index, or tablespace.
13. JAVA_SOFT_SESSIONSPACE_LIMIT: limits the size of Java memory usage in a session. If a session exceeds this size, a warning is written to an RDBMS trace file. The default is 1 MB.
14. JAVA_MAX_SESSIONSPACE_SIZE: if a user's session-duration Java state attempts to exceed this size, the session is killed with an out-of-memory failure. This limit is purposely set very high so that it is not normally visible. If a user-invoked Java program is not self-limiting in its memory usage, this setting places a hard limit on the amount of session space made available to it.
15. SIZING THE SGA FOR JAVA: allow about 8 KB of shared pool per loaded class. A medium-sized Java application may use about 50 MB of shared pool memory; the default of 20 MB should be adequate for typical Java stored procedure usage.
16. MULTIPLE I/O SLAVES:
a) Provide nonblocking, asynchronous I/O requests.
b) Are deployed by the DBWn process.
c) Are typically not recommended if asynchronous I/O is available.
d) The naming convention for slave processes is ORA_Innn_SID.
It may be necessary to turn off asynchronous I/O if the platform's asynchronous I/O code has bugs or is inefficient; asynchronous I/O can be disabled by device type. Multiple DBWn processes are useful on SMP systems with large numbers of CPUs, but multiple DBWn processes cannot be used concurrently with multiple I/O slaves. Tune DBWn I/O by looking at the value of the free buffer waits event.
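The checks in points 5-7 above, and the free buffer waits tuning just mentioned, can be sketched as dictionary queries. This is a sketch only: the statistic and event names below are as listed in V$SYSSTAT and V$SYSTEM_EVENT; verify them against your Oracle release before relying on the numbers.

```sql
-- Redo buffer allocation retries should stay near 0 and well
-- under 1% of redo entries (point 7 above).
SELECT r.value AS retries,
       e.value AS redo_entries,
       ROUND(100 * r.value / e.value, 4) AS retry_pct
FROM   v$sysstat r, v$sysstat e
WHERE  r.name = 'redo buffer allocation retries'
AND    e.name = 'redo entries';

-- 'log buffer space' waits point at an undersized log buffer or
-- slow checkpoint/log switching; 'free buffer waits' point at DBWn I/O.
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('log buffer space', 'free buffer waits');
```

Run the queries twice over a known interval and compare the deltas rather than the cumulative values, since V$ statistics accumulate from instance startup.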
Chapter 12
1. Factors affecting the selection:
a. Rows read in groups.
b. SELECT or DML statements on clustered tables.
c. Table size.
d. Row size, row grouping, and block size.
e. Small or large transactions.
f. Use of parallel queries to load or for SELECT statements.
2. To enhance performance, you can use the following data access methods:
a. CLUSTERS.
b. INDEXES (B-tree, bitmap, function-based).
c. INDEX-ORGANIZED TABLES.
d. MATERIALIZED VIEWS.
1. CLUSTERS:
1. Disk I/O is reduced and access time improves for joins of clustered tables.
2. Each cluster key value is stored only once for all rows with the same key value; clusters therefore use less storage space.
3. But full table scans are generally slower on clustered tables than on nonclustered tables.
4. INDEX CLUSTERS store null values. There is only one entry for each key value in the cluster index, so a cluster index is likely to be smaller than a normal index on the same set of key values.
5. HASH CLUSTERS: for equality searches that use the cluster key, a hash cluster can provide greater performance gains than an index cluster, because there is only one segment to scan (no index access is needed).
6. SITUATIONS WHERE CLUSTERS ARE USEFUL:

CRITERION                                   INDEX   HASH
Uniform key distribution                    YES     YES
Evenly distributed key values (sequence)            YES
Rarely updated key                          YES     YES
Often joined master-detail tables           YES
Queries using equality predicate on key             YES
Predictable number of key values                    YES

7. WHEN NOT TO USE CLUSTERS:
a) If full table scans are executed often on one of the clustered tables: that table is stored on more blocks than if it had been created alone.
b) If the data for all rows of a cluster key value exceeds one or two Oracle blocks: to access an individual row in a clustered table, the Oracle server reads all blocks containing rows with the same key value.
8. WHEN NOT TO USE HASH CLUSTERS:
a) If the table is constantly growing and it is impractical to rebuild a new, larger hash cluster.
b) If your application often performs full table scans and you had to allocate a great deal of space to the hash cluster in anticipation of the table growing.
2. B-TREE INDEXES: typically improve the performance of queries that select a small percentage of rows from a table. Indexes are always balanced, and they grow from the bottom up. As rows are added, the leaf block fills; when the leaf block is full, the Oracle server splits it into two blocks, putting 50% of the block's contents into the original leaf block and 50% into a new leaf block. The more levels an index has, and the more deleted rows it contains, the less efficient it may be. If more than 15% of the rows have been deleted, the index should be rebuilt.
RESTRICTION: parallel DML is not supported during online index building. If you specify ONLINE and then issue a parallel DML statement, an error is returned.
COMPRESS: eliminates repeated occurrences of key column values and may thereby reduce storage. Use an integer to specify the prefix length (the number of prefix columns to compress). For unique indexes, the valid range of
prefix length values is from 1 to the number of key columns minus 1 (the default). If many values (such as a category name) are the same, the name is written only once, followed by the rowids of the other occurrences.
3. BITMAP INDEXES: perform best when the column has few distinct values. DML statements do not perform well with bitmap indexes, so do not use a bitmap index where DML activity is high.
PERFORMANCE CONSIDERATIONS:
1. Bitmap indexes use little storage space: one entry per distinct key value.
2. They work very fast with multiple predicates (OR, AND) on low-cardinality columns.
3. They are typically suited to large, read-only systems; DML statements slow down performance, so they are not for OLTP environments.
4. Bitmap indexes store null values; B*Tree indexes do not.
5. Parallel query, parallel DML, and parallelized CREATE statements work with bitmap indexes.

B-TREE INDEXES                              BITMAP INDEXES
Suitable for high-cardinality columns       Suitable for low-cardinality columns
Updates on keys relatively inexpensive      Updates on keys relatively expensive
Inefficient using AND/OR in queries         Efficient using AND/OR in queries
Row-level locking                           Bitmap-segment-level locking
More storage                                Less storage
Useful for OLTP                             Useful for DSS

6. For inserts of sequence-generated keys, a REVERSE key index improves performance. The disadvantage is that when application statements specify ranges, the explain plan shows a full table scan, because the index is not usable for a range scan.
7. INDEX-ORGANIZED TABLES (IOTs): IOTs are suitable for frequent data access through the primary key. An IOT keeps each key value and the rest of its row together. A primary key constraint is mandatory for an IOT.
BENEFITS OF IOTs: no duplication of the primary key values (index and table columns are stored together), therefore less storage is required; and faster key-based access for queries involving exact match, range searches, or both.
8. IOTs AND HEAP TABLES: compared to heap tables, IOTs have:
1. Faster key-based access to table data. 2.
Reduced storage requirements.
3. Secondary indexes on other columns, by means of logical rowids.
4. Logical rowids give the fastest possible access to rows in IOTs, using two methods: 1. a physical guess, whose access time equals that of physical rowids; 2. access without the guess (or after an incorrect guess), which performs similarly to primary key access of the IOT.
5. The guess is based on knowledge of the file and block in which a row resides. This information is accurate when the index is created, but changes if the leaf block splits. If the guess is wrong and the row no longer resides in the specified block, the remaining portion of the logical rowid (UROWID) entry, the primary key, is used to get the row.
6. Small index entries can be stored in a leaf block. This is not necessarily the case for IOTs, because they store full rows, or at least as much of each row as fits without exceeding the limit set by PCTTHRESHOLD. Storing large entries in index leaf blocks slows down index searches and scans. You can specify that rows go into an overflow area by setting a threshold value that represents a percentage of the block size.
7. PCTTHRESHOLD: if a row exceeds the size calculated from this value, all columns after the column named in the INCLUDING clause are moved to the overflow segment. If OVERFLOW is not specified, rows exceeding the threshold are rejected. PCTTHRESHOLD defaults to 50 and must be between 1 and 50.
8. RESTRICTIONS: an IOT must have a primary key; it cannot have unique constraints and cannot be clustered.
9. MAPPING TABLE: to create a bitmap index on an IOT, a mapping table is used. A bitmap index on an IOT is similar to a bitmap index on a heap table. There is one mapping table per index-organized table, and it is used by all bitmap indexes created on that IOT. Movement of rows in the index-organized table invalidates the guess data block addresses in some of the mapping table's logical rowid entries, but the index-organized table can still be accessed using the primary key.
To rebuild the mapping table, use the ALTER TABLE command with the MAPPING TABLE UPDATE BLOCK REFERENCES clause.
10. MATERIALIZED VIEWS: a materialized view stores both the definition of a view and the rows resulting from its execution. Index and partition the materialized view like any other table to improve performance. Fast refreshes apply only the changes made since the last refresh. Two types of fast refresh are available:
1. Fast refresh using a materialized view log: all changes to the base tables are captured in a materialized view log and then applied to the materialized view.
2. Fast refresh using rowid ranges: a materialized view can be fast-refreshed after a direct path load, based on the rowids of the new rows. Direct loader logs are required for this refresh type.
A view defined with a refresh type of FORCE refreshes with the fast mechanism if that is possible, or else uses a complete refresh. FORCE is the default refresh type.
Chapter 15
1. DATABASE RESOURCE MANAGER: 1. Manages mixed workloads. 2. Controls system performance.
2. Using the Resource Manager, the DBA can:
1. Guarantee groups of users a minimum amount of processing resources, regardless of the load on the system and the number of users.
2. Distribute available processing resources by allocating percentages of CPU time to different users and applications.
3. Limit the degree of parallelism that a set of users can use.
4. Configure an instance to use a particular plan for allocating resources; a DBA can dynamically change the plan.
3. DATABASE RESOURCE MANAGEMENT CONCEPTS:
1. RESOURCE CONSUMER GROUP: defines a set of users who have similar requirements for resource use, and also specifies a resource allocation method for allocating CPU among sessions. A user can be assigned to multiple consumer groups, but only one group can be active at a time for a session, and either the user or the DBA can switch the session's consumer group.
2. RESOURCE PLAN: resource allocations are specified in a resource plan. Resource plans contain resource plan directives, which specify the resources to be allocated to each resource consumer group.
3. RESOURCE PLAN DIRECTIVES: 1. Assign consumer groups or subplans to a resource plan. 2. Allocate resources among the consumer groups in the plan by specifying parameters for each resource allocation method.
4. RESOURCE ALLOCATION METHODS:

METHOD        RESOURCE          RECIPIENT
Round-robin   CPU to sessions   Groups
Emphasis      CPU to groups     Plans
Absolute      Parallel degree   Plans

1. ROUND-ROBIN: allocates CPU among the sessions in a group in circular order.
2. ABSOLUTE: the only resource allocation method for limiting the degree of parallelism is the absolute method.
3. EMPHASIS: currently the only resource allocation method for allocating CPU among resource consumer groups. It determines how much emphasis is given to sessions in different consumer groups in a resource plan. CPU usage is assigned levels from 1 to 8, with level 1 having the highest priority. Sessions in consumer groups with non-zero percentages at higher-priority levels always get the first opportunity to run.
CPU resources are distributed at a given level based on the specified percentages. The percentage of CPU specified for a consumer group is a maximum for how much that group can use at that level. If any CPU resources are left after all consumer groups at a given level have had an opportunity to run, the remaining CPU resources fall through to the next level. The sum of percentages at any given level must be less than or equal to 100. Any level that has no plan directives explicitly specified defaults to 0% for all subplans and consumer groups. The Database Resource Manager percentage limits are not hard limits: they act as limits only when system throughput is at its maximum. Below that, consumer groups can be given the resources they demand, even beyond the specified limit, provided no higher-priority groups are requesting the spare resources. The Resource Manager first seeks to maximize throughput, and only then to prioritize among the consumer groups. A resource consumer group is a set of users treated as a collective unit by the Resource Manager; resource plan directives assign resources to consumer groups as a whole, not to individual sessions. To administer the Resource Manager you must have the ADMINISTER_RESOURCE_MANAGER privilege, which is granted or revoked through the DBMS_RESOURCE_MANAGER_PRIVS package; it cannot be granted or revoked with a SQL statement.
DBMS_RESOURCE_MANAGER procedures:
1. CREATE_PLAN: names a resource plan and specifies its allocation method.
2. UPDATE_PLAN: updates a resource plan's comment.
3. DELETE_PLAN: deletes a resource plan and its directives.
4. DELETE_PLAN_CASCADE: deletes a resource plan and all of its descendants.
5. CREATE_CONSUMER_GROUP: names a resource consumer group and specifies its allocation method.
6. UPDATE_CONSUMER_GROUP: updates a consumer group's comment.
7. DELETE_CONSUMER_GROUP: deletes a consumer group.
8. CREATE_PLAN_DIRECTIVE: specifies the resource plan directives that allocate resources to consumer groups within a plan.
9. UPDATE_PLAN_DIRECTIVE: updates plan directives.
10. DELETE_PLAN_DIRECTIVE: deletes plan directives.
11. CREATE_PENDING_AREA: creates a pending area (scratch area) within which changes can be made to a plan schema.
12. VALIDATE_PENDING_AREA: validates the pending changes to a plan schema.
13. CLEAR_PENDING_AREA: clears all pending changes from the pending area.
14. SUBMIT_PENDING_AREA: submits all changes to a plan schema.
15. SET_INITIAL_CONSUMER_GROUP: sets the initial consumer group for a user.
16. SWITCH_CONSUMER_GROUP_FOR_SESS: switches the consumer group of a specific session.
17. SWITCH_CONSUMER_GROUP_FOR_USER: switches the consumer group of all sessions belonging to a specific user.
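The pending-area workflow (procedures 11-14) can be sketched as a single PL/SQL block. The plan, group, and percentage values below (DAYTIME_PLAN, OLTP_GROUP, 80/20) are illustrative only; note that a plan that will be activated must also include a directive for the built-in OTHER_GROUPS group.

```sql
BEGIN
  -- All schema changes happen inside a pending (scratch) area.
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OLTP_GROUP',
    comment        => 'Interactive users');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAYTIME_PLAN',
    comment => 'Favour OLTP work during the day');

  -- Level 1 (cpu_p1): 80% to OLTP; unused CPU falls through.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'OLTP_GROUP',
    comment          => 'OLTP first',
    cpu_p1           => 80);

  -- Every active plan needs a directive for OTHER_GROUPS.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'Everything else',
    cpu_p1           => 20);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

The plan is then activated with the RESOURCE_MANAGER_PLAN initialization parameter (ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME_PLAN'); the caller needs the ADMINISTER_RESOURCE_MANAGER privilege described above.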