With the metric data files that you provide, our Oracle Cloud Systems Hub engineers can analyze system and database performance and overall health.
STEP 1. DISCOVERY
By engaging with our sales teams to request this FREE service, you identify which systems are viable for a check-up.

STEP 2. COLLECTION
You collect diagnostic information from your systems via an SQL script and pass it on for analysis.

STEP 3. ANALYSIS
Our Solution Engineer teams digest the data and construct a presentation that points out the problems in, and solutions for, your database systems.
Copyright © 2021, Oracle and/or its affiliates. All rights reserved.
What will I receive?
An analysis of the system will be provided in a presentation, along with relevant database best-practice recommendations and, if needed, a proposal for a system upgrade or refresh.
CURRENT ENVIRONMENT:
- 3 Exadata database machines:
  - X4-2 Full Rack (with X6 and X7 upgrades)
  - X6-2 Half Rack with virtualization
  - X7-2 Eighth Rack for the FCCMDB instance
- Database version for all instances is aligned at 12.1.0.2.0
- Out of all databases analyzed, only 3 have serious performance issues: DBM01, PROD, OBIEREPO

Database Name    Database Version
DBM01            12.1.0.2.0
HYPERNDB         12.1.0.2.0
IBPSFSS          12.1.0.2.0
SOA              12.1.0.2.0
HYPEDEV          12.1.0.2.0
OBIEREPO         12.1.0.2.0
OFSAPROD         12.1.0.2.0
FBNOFSA          12.1.0.2.0
TESTEBS          12.1.0.2.0
PROD             12.1.0.2.0
FCCMUAT          12.1.0.2.0
FBNSIT           12.1.0.2.0
MEDATDB          12.1.0.2.0
OFSADEV          12.1.0.2.0
EBSTST           12.1.0.2.0
OFSAUAT          12.1.0.2.0
OBIEPROD         12.1.0.2.0
DISC             12.1.0.2.0
IBPSPMH          12.1.0.2.0
AGENCYDB         12.1.0.2.0
HOST CPU
[Chart: per-host CPU utilization (%) for serverq008/baseq044, serverq008/baseq042, serverp009/basep051, serverp011/basep085, serverp009/basep055, serverp005/basep031, serverq008/baseq046, serveri013/basei112, serverp009/basep053, serverq003/baseq018, serverq001/baseq001]
Overall host CPU utilization can be seen in the chart:
- 73% / 49% / 38% Peak CPU
- 20% - 25% Average CPU
- 2% - 6% Low CPU
- 0% - 5% Database CPU
DATABASE MEMORY
[Chart: per-database memory utilization in GB for the analyzed hosts]
Database memory utilization varies between 16 and 72 GB RAM as shown in the chart.

HOST MEMORY
Overall host memory utilization can be seen in the chart:
- Memory used by databases: 79%
- OS and processes: 15%
- Memory free: 11%
[Chart: storage capacity breakdown in TB; pie: Database used 64%, Database free 26%]
Exadata Storage (TB):
- Total raw capacity: 943.37
- Cell Failure Coverage*: Disk
- Available raw capacity: 943.37
- Available space after redundancy: 471.68
- Database used: 314.83
- Database free: 129.57
- FRA used: 23.46
- FRA free: 22.68
- DBFS allocated: 3.82
*for the databases analyzed – not total on the machine
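The "available space after redundancy" figure is consistent with halving the raw capacity, as ASM normal (two-way) redundancy would do. A minimal sketch of that arithmetic, assuming normal redundancy and using the table's figures (the percentage calculation over the allocated pie is our reconstruction, not the analysis tooling):

```python
# Storage arithmetic behind the table above.
# Assumption: ASM NORMAL (two-way) redundancy, which halves raw capacity.
raw_tb = 943.37
usable_tb = raw_tb / 2  # 471.685, matching the table's 471.68 after rounding down

allocations_tb = {  # TB, taken from the analysis table
    "database_used": 314.83,
    "database_free": 129.57,
    "fra_used": 23.46,
    "fra_free": 22.68,
    "dbfs_allocated": 3.82,
}
total_allocated = sum(allocations_tb.values())
# Share of each allocation in the pie chart, rounded to whole percent
shares = {k: round(100 * v / total_allocated) for k, v in allocations_tb.items()}
print(shares["database_used"], shares["database_free"])  # 64 26, as in the pie
```

Note that the allocated total (≈494 TB) exceeds the post-redundancy figure, which suggests the FRA and DBFS live in separate disk groups; the table does not say, so the sketch only reproduces the published numbers.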
WRITE IOPS / READ IOPS
[Charts: per-database I/O rates, with "Read Iops Max" and "Read MB Max" legends; write IOPS peak around 24,798.5, read IOPS peak around 150,128.6, read MB peak around 3,469.1; capacity gauge: Consumed 5.816, Unused 3.814]
[Chart: top wait events by impact - Backup: MML Write Backup piece, DB CPU, db file sequential read, direct path read, imm op, io done, resmgr: cpu quantum]

Concurrency: A high number of wait events is visible here, caused by sessions being blocked while waiting on the same resource.

CPU: DB CPU wait events do not in themselves indicate performance degradation. A high percentage of DB CPU means that the database is busy processing user-level calls.

BIGGEST ISSUES DETECTED:
- 19% backup contention
- 18% slow storage IO
- 6% direct path reads
⇒ 43%
Total performance degradation due to storage
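The 43% headline figure is simply the sum of the three storage-related shares listed above. A trivial check (the percentages and category labels are taken from the analysis):

```python
# Sum the storage-related shares of degradation reported above.
storage_issues = {
    "backup contention": 19,
    "slow storage IO": 18,
    "direct path reads": 6,
}
total = sum(storage_issues.values())
print(f"{total}% of performance degradation attributable to storage")  # 43%
```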
[Chart: User IO wait counts per host (baseq044, baseq042, basep051, basep085, basep055, basep031, baseq046, basei112, basep053, baseq018, baseq001)]

User IO mainly consists of the events below:
- "direct path read", "direct path write", "direct path read temp"
  https://docs.oracle.com/database/121/TGDBA/pfgrf_instance_tune.htm#TGDBA94485
- "direct path write temp" waits are typically caused by sort operations that cannot be completed in memory, spilling over to disk.
  https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=132051023512846&parent=EXTERNAL_SEARCH&sourceId=PROBLEM&id=1576956.1&_afrWindowMode=0&_adf.ctrl-state=1brk1dwyfh_4
- "db file sequential read" is one of the most common wait events.
  https://docs.oracle.com/database/121/TGDBA/pfgrf_instance_tune.htm#TGDBA94485
- "db file scattered read": a scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan.
  https://docs.oracle.com/database/121/TGDBA/pfgrf_instance_tune.htm#TGDBA94479
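As an illustration only (not part of the analysis tooling), the event names discussed above can be bucketed by their Oracle wait class. The mapping below covers just the events named in this report; a real classifier would read the wait class from the database itself:

```python
# Hypothetical sketch: bucket Oracle wait event names into the wait classes
# discussed in this report. Only the events named in the text are mapped.
USER_IO_EVENTS = {
    "direct path read", "direct path write", "direct path read temp",
    "direct path write temp", "db file sequential read", "db file scattered read",
}
CONCURRENCY_EVENTS = {
    "enq: TX - row lock contention", "library cache lock", "latch free",
}

def wait_class(event: str) -> str:
    """Return the wait class for an event name, defaulting to 'Other'."""
    if event in USER_IO_EVENTS:
        return "User I/O"
    if event in CONCURRENCY_EVENTS:
        return "Concurrency"
    return "Other"

print(wait_class("db file scattered read"))          # User I/O
print(wait_class("enq: TX - row lock contention"))   # Concurrency
```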
The following graphic shows the concurrency and application waits on each of the most used databases in the analysis.
[Chart: concurrency and application waits per host (baseq044, baseq042, basep051, basep085, basep055, basep031, baseq046, basei112, baseq018)]
These wait events are caused either by sessions running user application code (for example, row-level locking or explicit lock commands) or by waits for internal database resources (library cache locks or latches).
- "enq: TX – row lock contention": reducing this type of wait typically involves altering application code to avoid the contention scenario(s). To resolve this type of event, you can follow the recommendations in My Oracle Support Doc ID 1966048.1.

Other: Wait events resulting from unusual errors. Always check the alert log for internal errors and abnormal behavior.
Recommendations
Our advice on what can be improved or needs further investigation:
- Consider increasing the SGA by ~10GB on basep051 for a 32% decrease in Physical Reads.
- Consider increasing the SGA by ~4GB and the PGA by ~0.8GB on basep085 for a 10% and 35% decrease in Physical Reads and MB Read/Written to disk, respectively.
- Consider increasing the SGA by ~10GB on baseq046 for a 60-61% decrease in Physical Reads.
- Consider increasing the PGA by ~1.1GB on baseq044 for a 17% increase in MB R/W performance.
Issues detected (count) and relevant Exadata features:
- High CPU Utilization (3): IORM, RDMA, PMEM, RoCE, Smart Flash Cache, Exadata Smart Scan
- Active Sessions Surge (5)
- Waits on Network (3): Exafusion, Smart Fusion Block Transfer, RDMA, PMEM, RoCE
- Waits on Cluster (7)
- Waits on Commit (1): RDMA, PMEM, RoCE
- Increased Redo Generation Rate (4)
- Application generated waits (8): Faster Performance for Large Analytic Queries and Large Loads
Solution proposal
An all-in-one solution to all the problems discovered above.