Upgrade Migrate Consolidate 19c Compress
Copyright © 2022, Oracle and/or its affiliates. Updated: June 30, 2023
Roy F. Swonger
Vice President, Database Upgrade, Utilities & Patching
@RoyFSwonger | royfswonger

Mike Dietrich
https://mikedietrichde.com | @mikedietrichde | mikedietrich

Daniel Overby Hansen
https://dohdatabase.com | @dohdatabase | dohdatabase

Rodrigo Jorge
https://dbarj.com.br/en | @rodrigojorgedba | rodrigoaraujorge

William Beauregard
william-beauregard-3053791

https://MikeDietrichDE.com/videos
I’m working into a deal in LAD Customer, migrating about aprox 70 db oracle database version 7 (supporting
OLD application developed on Oracle Forms) distributed around all country, my customer is want to
modernized their application to 3 layer architecture (using an J2EE app) and move all data to an oracle DB
central repository.
We are intend to recommend best architecture and sizing but we do not know Oracle 7 database.
Don't do this | Outdated Versions
GA: 29-NOV-2006
[Support roadmap chart, 2010-2029: Premier Support, Waived Extended Support, Paid Extended Support, Market Driven Support, and Limited Error Correction windows for Oracle 11.2, Oracle 12.2.0.1, Oracle 18 (12.2.0.2), Oracle 19 (12.2.0.3), and Oracle 21]
http://www.oracle.com/us/support/library/lifetime-support-technology-069183.pdf
Basic Facts | Upgrade vs Migration vs Patching
Oracle 18.6.0 Oracle 19.12.0 Oracle 11.2.0.3 Oracle 19.12.0 Oracle 19.3.0 Oracle 19.12.0
19.10.0
Watch on YouTube
Basic Facts | Patch versus Upgrade
UPGRADE: Oracle 18.6.0 → Oracle 19.10.0, using autoupgrade.jar or dbupgrade
Patching before Oracle Database 12.2
Base Release
Bundle Patch 1
Optimizer / off
Patch Set Update 1 Patch Set Update 1
Security Fixes Regression Fixes Security Fixes Regression Fixes Functional Fixes
Bundle Patch 2
Security Fixes Regression Fixes Security Fixes Regression Fixes Functional Fixes
Patching since Oracle Database 12.2
Base Release
Release Update 1
Optimizer / off Functional Fixes
Base Release
Release Update 1
Optimizer / off Functional Fixes
Release Update 2
Optimizer / off Functional Fixes
Base Release
Release Update 2
Optimizer / off Functional Fixes
At the
same date
Security Fixes Regression Fixes
Patching | Release Update Revision 2 (RUR)
Base Release
Release Update 3
Timeline Example
19.3.1 1. Revision
Timeline | October 2019
19.3.2 2. Revision
Timeline Example | January 2020
19.3.2 19.4.2
Timeline Example | April 2020
18c 18.6.0 18.7.0 18.8.0 18.9.0 18.10.0 18.11.0 18.12.0 18.13.0 18.14.0
Patching
18.5.1 18.6.1 18.7.1 18.8.1 18.9.1 18.10.1 18.11.1 18.12.1 18.13.1
End 18c
19c 19.3.0 19.4.0 19.5.0 19.6.0 19.7.0 19.8.0 19.9.0 19.10.0 19.11.0 19.12.0 19.13.0 19.14.0 19.15.0 19.16.0 19.17.0
19.3.1 19.4.1 19.5.1 19.6.1 19.7.1 19.8.1 19.9.1 19.10.1 19.11.1 19.12.1 19.13.1 19.14.1 19.15.1 19.16.1
19.3.2 19.4.2 19.5.2 19.6.2 19.7.2 19.8.2 19.9.2 19.10.2 19.11.2 19.12.2 19.13.2 19.14.2 19.15.2
19.12.1
18c 18.6.0 18.7.0 18.8.0 18.9.0 18.10.0 18.11.0 18.12.0 18.13.0 18.14.0
19c 19.3.0 19.4.0 19.5.0 19.6.0 19.7.0 19.8.0 19.9.0 19.10.0 19.11.0 19.12.0 19.13.0 19.14.0 19.15.0 19.16.0 19.17.0
19.9.0 19.9.1
Attention | The RUR Trap - With Example Numbers
200 225
Further Information
Overview of Database Patch Delivery Methods for 12.2.0.1 and greater (Doc ID 2337415.1)
Welcome to the Jungle
Patching Basics
The Database Patching Process
Tips for Patching Success
Security
Source: https://www.nytimes.com/2019/07/22/business/equifax-settlement.html?module=inline
Security | What happens if you don't upgrade
Twitter
Now what?
Do I need to upgrade to Oracle Database 23c?
Basic Facts | Insight into the Patching Process
• Backports are specific for a release and patch level, e.g. 19.10.0
Merge patch
• Two or more bug fixes combined
Bundle patch
• A collection of many bug fixes on top of a release or another bundle patch
• Examples:
• PSU - Patch Set Update
• BP - Bundle Patch
• RU - Release Update
• RUR - Release Update Revision
Oracle Database - Overview of Database Patch Delivery Methods for 12.2.0.1 and later (MOS Note: 2337415.1)
Basic Facts | What Is In A Patch
$ORACLE_HOME/OPatch
Basic Facts | How To Apply A Patch
[oracle]$ $ORACLE_HOME/OPatch/opatch apply
SQL> STARTUP
[oracle]$ $ORACLE_HOME/OPatch/datapatch -verbose
New or Cloned
Oracle Home,
Home 19.10.0
Recommendation | Standby-First Patch Apply
Redo Transport
Redo Transport
Primary Standby
New or Cloned
Oracle Home
19.10.0
[oracle]$ $ORACLE_HOME/OPatch/opatch apply
DGMGRL> convert database PROD2 to snapshot standby;
[oracle]$ $ORACLE_HOME/OPatch/datapatch -verbose
DGMGRL> convert database PROD2 to physical standby;
DGMGRL> switchover to PROD2;
Recommendation | Standby-First Patch Apply
Oracle Patch Assurance - Data Guard Standby-First Patch Apply (Doc ID 1265700.1)
Recommendation | RAC Rolling Patching
$ $ORACLE_HOME/OPatch/opatch apply
$ $ORACLE_HOME/OPatch/opatchauto apply
$ $ORACLE_HOME/OPatch/datapatch -verbose
Basic Facts | Insight into the Patching Process
• Standby-First installable
$ opatch lsinventory
• Tells you what is installed in your software home
Pro Tip: Upload opatch lsinventory
with your SRs
DBMS_QOPATCH package provides access to
• Patches installed in the Oracle Home
• MOS Note: 1530108.1 - FAQ on Queryable Patch Inventory
• https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_QOPATCH.html#GUID-10BC18AA-EC71-44B7-A3E7-2E4D06FA2DC4
CDB_REGISTRY_SQLPATCH
• View detailing information about SQL patches installed in the database
• https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/DBA_REGISTRY_SQLPATCH.html#GUID-F6D2BC12-606C-41FE-B9E8-F3702CD32F89
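A quick way to see what datapatch has applied to the database is a dictionary query; a sketch (column list trimmed, the same view exists as CDB_REGISTRY_SQLPATCH in a CDB, and the DBMS_QOPATCH call assumes the package is available):
SQL> SELECT patch_id, patch_type, action, status, description
       FROM dba_registry_sqlpatch
      ORDER BY action_time;
SQL> SELECT dbms_qopatch.get_opatch_lsinventory FROM dual;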
Patching Strategies
Patching Basics
The Database Patching Process
Tips for Patching Success
The Database Patching Process
https://www.oracle.com/security-alerts/
Do you need to apply this bundle?
Highest single OJVM Base Score per Security Alert since January 2015
[Chart: highest single OJVM base score per quarterly Security Alert, Jan 2015 - Oct 2021, ranging from 3.1 up to 10.0, with many quarters at 9.0 or above]
Check | OJVM
Is OJVM installed?
or
RAC
Important:
Disable JAVAVM in PDB$SEED to ensure provisioned PDBs have
JAVAVM disabled out of the box
• See also:
• JAVAVM and XML Cleanup in the database
Pro Tip: MS Windows port requires separate JDK patching, unfortunately
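A simple dictionary check for whether the OJVM component is present (a sketch, not the official check):
SQL> SELECT comp_id, version, status
       FROM dba_registry
      WHERE comp_id IN ('JAVAVM', 'CATJAVA');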
Recommendation | Find out more on the Upgrade Blog
Watch on YouTube
Patch Download Assistant | Find the newest RU
Always use the Patch Download Assistant (MOS Note: 2118136.2)
Download an RU with the Patch Download Assistant MOS Note: 2118136.2
Watch on YouTube
opatch | Conflict Check and Patch Apply
Watch on YouTube
The Database Patching Process
Watch on YouTube
Patching a Container Database
Watch on YouTube
The Database Patching Process
18c
Since Oracle 18c, you can install and patch at the same time
• For GI homes
$ mkdir /u01/app/grid/1990
$ cd /u01/app/grid/1990
$ unzip LINUX.X64_193000_grid_home.zip
$ unzip p31750108_19000_Linux_x86-64.zip
18c
Since Oracle 18c, you can install and patch at the same time
• For DB homes
$ mkdir /u01/app/oracle/product/19
$ cd /u01/app/oracle/product/19
$ unzip LINUX.X64_193000_db_home.zip
$ unzip p31771877_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19/patch
18c
• With multiple patches, you need to separate the subdirectories as otherwise the patch xml file gets
overwritten, and patches won't be found
Watch on YouTube
Patching Strategies
Patching Basics
The Database Patching Process
Tips for Patching Success
Summary | Tips for Patching Success
AutoUpgrade
11.2.0.4
12.2.0.1
12.1.0.2
18c
12.2.0.1
18c
19c
ONE BUTTON APPROACH 19c
21c
1. CONFIG FILE → 2. ANALYZE → 3. UPGRADE (optionally, convert to PDB)
Get started | Quick Start Guide
Simple overview
Read it, try it
Download from oracle.com
Photo by Pablo Heimplatz on Unsplash
Check
Before Upgrade
Supportability | OS Certification
Supportability | OS Certification
Platform Certification | Linux x86-64
Older 19c
11.2.0.4 → 19c
12.1.0.2 → 19c
12.2.0.1 → 19c
18c → 19c
Database Upgrade | Supported Releases
Older 19c
• So - it depends
Database Upgrade | Intermediate Upgrades
11.2.0.3 → 18c → 19c
Oracle Linux 6 → Oracle Linux 7
Database Upgrade | Intermediate Upgrades
11.2.0.3 → 11.2.0.4 → 19c
Oracle Linux 5 → Oracle Linux 6 → Oracle Linux 7
Oracle 19c | Installation
Gold Image
1. Create ORACLE_HOME directory
2. Download image
3. Unpack into ORACLE_HOME
4. ./runInstaller
5. root.sh
Oracle 19c | RPM Installation
RPM
APEX upgrade
• Not part of the database upgrade
• MOS Note: 1088970.1 - Master Note APEX Upgrades
APEX certification
• Minimum APEX Version for Oracle 19c: APEX 18.2
• MOS Note: 1344948.1 - APEX Database and Web Server Certification
Download the newest APEX
• https://www.oracle.com/tools/downloads/apex-v191-downloads.html
Upgrade 19c | Speed it up
Check when dictionary stats have been gathered the last time
SELECT
to_char(max(end_time),'dd-mon-yy hh24:mi') latest, operation
FROM
dba_optstat_operations
WHERE
operation in ('gather_dictionary_stats', 'gather_fixed_objects_stats')
GROUP BY
operation;
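If the timestamps returned above are old, a hedged sketch of gathering both in advance of the upgrade:
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;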
Upgrade 19c | Gather Stats In Advance
[Chart: upgrade phase runtime with dictionary and fixed objects statistics gathered in advance vs. during the upgrade]
Gathering stats in advance saves 12 minutes
Operating System | Recommendations
One command
$ java -jar autoupgrade.jar -config cdb1.cfg -mode deploy
JAVA, JAR FILE, AGENTS, DBUA, ENTERPRISE MANAGER, EXTRA LICENSE
• Java 8 required
• Part of Oracle Home since 12.1.0.2
• 3 MB jar file
AutoUpgrade | Need And Don't Need
Need: JAVA, JAR FILE — Don't need: AGENTS, DBUA, ENTERPRISE MANAGER, EXTRA LICENSE
• No agents to install
• No extra license
AutoUpgrade | Blog Post Series
https://mikedietrichde.com/2019/04/29/the-new-autoupgrade-utility-in-oracle-19c/
AutoUpgrade
Essentials
Download
Configure
Analyze
Check
Upgrade
Upgrade
$ java -jar autoupgrade.jar -config CDB1.cfg -mode deploy
Monitor
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+---------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|        MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+---------------+
| 101|   CDB1|PREFIXUPS|EXECUTING|RUNNING|20/11/24 13:38|13:39:26|Remaining 12/13|
+----+-------+---------+---------+-------+--------------+--------+---------------+
Progress
-----------------------------------
Start time:     20/11/24 13:38
Elapsed (min):  13
Last update:    2020-11-24T13:48:52.139
Stage:          DBUPGRADE
Operation:      EXECUTING
Status:         RUNNING
Stage summary:
  SETUP       <1 min
  GRP         <1 min
  PREUPGRADE  <1 min
  PRECHECKS   <1 min
  PREFIXUPS    8 min
  DRAIN       <1 min
  DBUPGRADE    3 min (IN PROGRESS)
Error Details:
None
Success
upg> Job 101 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]
---- Drop GRP at your convenience once you consider it is no longer needed ----
Drop GRP from CDB1: drop restore point AUTOUPGRADE_9212_CDB1122010
And it includes:
• Recompilation (utlrp.sql)
• Time zone file upgrade
• Postupgrade fixups
• ... and so much more
Download
Configure
Analyze
Check
Upgrade
Watch on YouTube
Many Databases
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=CDB1
upg2.source_home=/u01/app/oracle/product/11.2.0.4
upg2.target_home=/u01/app/oracle/product/19
upg2.sid=DB11204
...
upgn.source_home=/u01/app/oracle/product/12.1.0.2
upgn.target_home=/u01/app/oracle/product/19
upgn.sid=HR
PFILE
# Example: global_del_during.ora
optimizer_features_enable

Add parameters to all databases after upgrade
global.add_after_upgrade_pfile=/home/oracle/global_add_after.ora
# Example: global_add_after.ora
deferred_segment_creation=false
_cursor_obsolete_threshold=1024
_sql_plan_directive_mgmt_control=0
_use_single_log_writer=true
Many Databases
Different Servers
PFILE
Shell Scripts
Restore Point
Underscores
Recompilation
Time Zone
Parallel
Monitoring
Watch on YouTube
Underscores
• Default behavior: underscores and events will be kept
Recompilation
Example:
upg1.tune_setting=utlrp_pdb_in_parallel=3,utlrp_threads_per_pdb=4
AutoUpgrade will recompile:
• Three PDBs at a time
• Four threads per PDB
Time Zone
• Default behavior: time zone adjustment happens post upgrade
• Database will be restarted several times
• Important when you use "Downgrade" as fallback strategy, as the time zone can't be downgraded
Many Databases
Different Servers
PFILE
Shell Scripts
Restore Point
Underscores
Recompilation
Time Zone
Parallel
Monitoring
Monitor via browser:
<au_global_log_dir>/cfgtoollogs/upgrade/auto/state.html
Refreshes automatically
autoupgrade.jar
-deploy
AutoUpgrade | Best Practice
autoupgrade.jar
-analyze
-deploy
autoupgrade.jar
autoupgrade.jar autoupgrade.jar
• Performance feature
PREFIXUPS POSTCHECKS
DBUPGRADE
Proactive Fixups | New Flow
DBUPGRADE
PDBS POSTCHECKS POSTFIXUPS
+--------+----------+--------+
|Database| Stage|Progress|
+--------+----------+--------+
|PDB$SEED| DBUPGRADE| 91 %|
| PDB01|POSTFIXUPS| 0 %|
| PDB02| DBUPGRADE| 20 %|
| PDB03|POSTFIXUPS| 25 %|
| PDB04|POSTFIXUPS| 75 %|
| PDB05|POSTFIXUPS| 10 %|
| PDB06| DBUPGRADE| 6 %|
| PDB07| DBUPGRADE| 91 %|
| PDB08| DBUPGRADE| 91 %|
| PDB09| DBUPGRADE| 91 %|
+--------+----------+--------+
CDB$ROOT upgrade
PDB1 upgrade
DEFAULT
PDB2 upgrade
make_pdbs_available=false
PDB3 upgrade
PDB4 upgrade
PDB 2 available
PDB 1 available
PDB 4 available
patch1.source_home=/u01/app/oracle/product/19.0.0.0/dbhome_19_15_0
patch1.target_home=/u01/app/oracle/product/19.0.0.0/dbhome_19_16_0
patch1.sid=DB19
Hot clone
Refreshable clone
Patching RAC
Proactive fixups
Distributed upgrade
...
What's missing
Windows
RAC rolling
Data Guard standby-first
Watch on YouTube
• Faster
• More flexible
upg1.sid=CDB12102
upg1.target_cdb=CDB19
upg1.pdbs=pdb1
upg1.source_home=/u01/app/oracle/product/12102
upg1.target_home=/u01/app/oracle/product/19
Watch on YouTube
Rename a PDB
upg1.pdbs=pdb1
upg1.target_pdb_name.pdb1=sales
Current limitations:
https://dohdatabase.com/how-to-upgrade-a-single-pdb
Disconnect at will
Upgrade
Open
• non-CDB Migration
• Clone and upgrade a non-CDB into a PDB
12.2.0.1 19c
CDB CDB
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=CSOURCE
upg1.pdbs=PDB1
upg1.target_cdb=CTARGET
upg1.source_dblink.PDB1=CLONEPDB
upg1.target_pdb_name.PDB1=PDBC
upg1.target_pdb_copy_option.PDB1=file_name_convert=('CSOURCE/PDB1', 'CTARGET/PDBC')
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=CSOURCE
upg1.pdbs=PDB1
upg1.target_cdb=CTARGET
upg1.source_dblink.PDB1=CLONEPDB 60
upg1.target_pdb_name.PDB1=PDBC
upg1.target_pdb_copy_option.PDB1=file_name_convert=('CSOURCE/PDB1', 'CTARGET/PDBC')
upg1.start_time=+1h30m
2022-04-30T23:12:39.483201+02:00
PDB3(3):Media Recovery Complete (CDB2)
Completed: ALTER PLUGGABLE DATABASE PDB3 REFRESH
2022-04-30T23:13:36.973819+02:00
ALTER PLUGGABLE DATABASE PDB3 REFRESH
60 sec
2022-04-30T23:13:39.371222+02:00
PDB3(3):Serial Media Recovery started
2022-04-30T23:13:39.527020+02:00
PDB3(3):Media Recovery Complete (CDB2)
Completed: ALTER PLUGGABLE DATABASE PDB3 REFRESH
Watch on YouTube
12.2.0.1 19c
non-CDB CDB
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=NONCDB
upg1.target_cdb=CTARGET
upg1.source_dblink.NONCDB=CLONENONCDB
upg1.target_pdb_name.NONCDB=PDBNONCDB
upg1.target_pdb_copy_option.NONCDB=file_name_convert=('NONCDB', 'CTARGET/PDBNONCDB')
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=NONCDB
upg1.target_cdb=CTARGET
upg1.source_dblink.NONCDB=CLONENONCDB 300
upg1.target_pdb_name.NONCDB=PDBNONCDB
upg1.target_pdb_copy_option.NONCDB=file_name_convert=('NONCDB', 'CTARGET/PDBNONCDB')
upg1.start_time=+45m
[Company intro infographic: 1 location, 360°, 650, solutions for the IT department, development tools, 13% women / 87% men, "…unique chance to make an IT career!"]
260 #ITmadeInChreis5
about me
261 #ITmadeInChreis5
current situation
- till end of 2023: migrating all databases from old OS to Exadata compute nodes
- starting 2024: consolidating all databases to Multitenant and Oracle’s next generation
long-term release
262 #ITmadeInChreis5
test setup
- no additional options
263 #ITmadeInChreis5
preparation
grant permissions
264 #ITmadeInChreis5
migration - the config file
265 #ITmadeInChreis5
migration - autoupgrade start
266 #ITmadeInChreis5
migration – in progress
Source Runtime/Min
TEST40 (108) 26
TEST42 (107) ongoing
TEST41 (106) ongoing
267 #ITmadeInChreis5
migration – “finished”
Source Runtime/Min
TEST40 (108) 26
TEST42 (107) 226 (~3.5h)
TEST41 (106) 1770 (29h)
268 #ITmadeInChreis5
errors during migration
2nd error: user profile with idle_time defined; an idle session was killed because of idle_time setting
Tip: think about these kinds of errors before starting a long-running migration
269 #ITmadeInChreis5
summary
• very comfortable to use (does everything automatically)
• without extra license costs
• stable
• perfect for pre-migration tests
• easy syntax
270 #ITmadeInChreis5
AutoUpgrade to a New Server
• STARTUP UPGRADE
upg1.source_home=/tmp
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
• -mode upgrade
Pro tip: Find more details in the blog post Oracle AutoUpgrade between two servers
Watch on YouTube
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
upg1.target_cdb=CDB2
• AutoUpgrade
Command
java -jar autoupgrade.jar -config DB19.cfg -mode deploy
Command
java -jar autoupgrade.jar -config DB19.cfg -mode deploy
Watch on YouTube
BEFORE AFTER
defer_standby_log_shipping=yes defer_standby_log_shipping=no
BOSTON CHICAGO
DB DB
DB_BOSTON DB_fra27d
Watch on YouTube
A word of advice:
If defer_standby_log_shipping=yes,
all remote log archive destinations are deferred
12.2.0.1 CDB (Primary) → 19c CDB

$ cat PDB1.cfg
upg1.sid=CDB122
upg1.target_sid=CDB19
upg1.pdbs=PDB1
...

SQL> show pdbs
CON_NAME   OPEN MODE
PDB1       READ WRITE
For:
• Non-CDB to PDB conversion
• Unplug-plug upgrade
12.2.0.1 CDB (Primary):
NAME
--------------------------------------------------------------------------------
+DATA/DB_BOSTON/DD934E8207292138E053E801000A8351/DATAFILE/system.269.1103046537
+DATA/DB_BOSTON/DD934E8207292138E053E801000A8351/DATAFILE/sysaux.270.1103046537
+DATA/DB_BOSTON/DD934E8207292138E053E801000A8351/DATAFILE/undotbs1.268.1103046537
+DATA/DB_BOSTON/DD934E8207292138E053E801000A8351/DATAFILE/users.273.1103046827

12.2.0.1 CDB (Standby):
NAME
--------------------------------------------------------------------------------
+DATA/DB_FRA27D/DD934E8207292138E053E801000A8351/DATAFILE/system.265.1103050007
+DATA/DB_FRA27D/DD934E8207292138E053E801000A8351/DATAFILE/sysaux.266.1103050007
+DATA/DB_FRA27D/DD934E8207292138E053E801000A8351/DATAFILE/undotbs1.267.1103050009
+DATA/DB_FRA27D/DD934E8207292138E053E801000A8351/DATAFILE/users.269.1103050009
19c CDB (Standby)
• Plug in PDB; the standby will resolve the aliases and find the real file locations
From the alert log:
Recovery scanning directory +DATA/DB_BOSTON/... for any matching files
Deleted Oracle managed file +DATA/DB_BOSTON/...
Successfully added datafile 37 to media recovery
Datafile #37: +DATA/DB_FRA27D/.../DATAFILE/users.269.1103050009
Watch on YouTube
1
UPGRADE GRID INFRASTRUCTURE
2
UPGRADE DATABASE
• Registered and managed through srvctl
• SPFile in ASM
• CLUSTER_DATABASE=FALSE
• srvctl configuration
• Supports RAC and RAC One Node
• No extra configuration
• Just connect to one node and AutoUpgrade takes care of the rest
blogs.oracle.com
Node 1
Node 2
DBUPGRADE POSTCHECKS POSTFIXUPS
• Performance feature
NODE 1
CLUSTER_DATABASE=FALSE
NODE 2
NODE 1
PDB1 PDB5
CDB$ROOT
PDB2 PDB6
CLUSTER_DATABASE=FALSE CLUSTER_DATABASE=TRUE
NODE 2
PDB3 PDB7
PDB4 PDB8
+--------+----------+--------+----+
|Database| Stage|Progress|Node|
+--------+----------+--------+----+
|PDB$SEED| DBUPGRADE| 91 %| au1|
| PDB01|POSTFIXUPS| 0 %| au1|
| PDB03|POSTFIXUPS| 0 %| au1|
| PDB04|POSTFIXUPS| 0 %| au1|
| PDB05|POSTFIXUPS| 0 %| au1|
| PDB02| DBUPGRADE| 91 %| au2|
| PDB06| DBUPGRADE| 91 %| au2|
| PDB07| DBUPGRADE| 91 %| au2|
| PDB08| DBUPGRADE| 91 %| au2|
| PDB09| DBUPGRADE| 91 %| au2|
+--------+----------+--------+----+
$ cat RACDB.cfg
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/RACDB
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=RACDB
upg1.tune_setting=distributed_upgrade=true
Watch on YouTube
CDB$ROOT upgrade
PDB1 upgrade
DEFAULT
PDB2 upgrade
make_pdbs_available=false
PDB3 upgrade
PDB4 upgrade
PDB 2 available
PDB 1 available
PDB2 upgrade
make_pdbs_available=true
PDB3 upgrade
PDB4 upgrade
PDB 4 available
CDB$ROOT upgrade
PDB1 upgrade
$ cat DB12.cfg
global.keystore=/etc/oracle/keystores/autoupgrade/DB12
...
$ ls -l /etc/oracle/keystores/autoupgrade/DB12
• add
• delete
• list
• save
• help
• exit
TDE> save
$ ls -l /etc/oracle/keystores/autoupgrade/DB12
AUTOLOGIN
Workaround
upg1.source_tns_admin_dir=/u01/app/oracle/admin/DB12/tns_admin
upg1.target_tns_admin_dir=/u01/app/oracle/admin/DB12/tns_admin
$ cat /tmp/au-pfile-tde.txt
WALLET_ROOT='/etc/oracle/keystores/$ORACLE_SID'
TDE_CONFIGURATION='KEYSTORE_CONFIGURATION=FILE'
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
global.keystore=/u01/app/oracle/admin/autoupgrade/keystore
upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/DB12
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
upg1.target_cdb=CDB2
Start upgrade
$ java -jar autoupgrade.jar -config DB12.cfg -mode deploy
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
global.keystore=/u01/app/oracle/admin/autoupgrade/keystore
upg1.log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade/PDB1
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=CDB1
upg1.target_cdb=CDB2
upg1.pdbs=PDB1
Start upgrade
$ java -jar autoupgrade.jar -config PDB1.cfg -mode deploy
Watch on YouTube
Watch on YouTube
Source:
19c Grid Infrastructure and Database Upgrade steps for Exadata Database Machine
running on Oracle Linux (Doc ID 2542082.1)
• Well-known API
• Flexibility
• Simplicity
• Upgrade-on-demand HTTPS
ORDS autoupgrade.jar
Requirement:
• Oracle REST Data Services (ORDS) 22.1.0 or later
Watch on YouTube
Config file:
global.autoupg_log_dir=/home/oracle/logs
upg1.source_home=/u01/app/product/11
upg1.target_home=/u01/app/product/19
upg1.sid=UPGR
upg1.log_dir=/home/oracle/logs
upg1.restoration=no

Equivalent JSON:
{
  "global": {
    "autoupg_log_dir": "/home/oracle/logs"
  },
  "jobs": [{
    "source_home": "/u01/app/product/11",
    "target_home": "/u01/app/product/19",
    "sid": "UPGR",
    "log_dir": "/home/oracle/logs",
    "restoration": "no"
  }]
}
Callouts: Endpoint API, Equal API method, Input file type, Input JSON file, Ignore verification of the certificate, AutoUpgrade configuration, Task Identifier

Equivalent command line:
$ java -jar autoupgrade.jar -config UPGR.cfg -mode analyze

Response:
{
  "taskid": "job_2022_04_27_05.17.24.146_0",
  "status": "submitted",
  "message": "",
  "link": "https://localhost:8443/ords/autoupgrade/task?taskid=job_2022_04_27_05.17.24.146_0",
  "config": {
    "global": {
      "autoupg_log_dir": "/home/oracle/logs"
    },
    "jobs": [
      {
        "source_home": "/u01/app/oracle/product/11.2.0.4",
        "target_home": "/u01/app/oracle/product/19",
        "sid": "UPGR",
        "log_dir": "/home/oracle/logs",
        "restoration": "no"
      }
    ]
  }
}
$ curl -k https://localhost:8443/ords/autoupgrade/tasks
{
"total_tasks": 1,
"tasks": [
{
"mode": "analyze",
"taskid": "job_2022_04_27_05.17.24.146_0",
"config": {
"jobs": [
{
"source_home": "/u01/app/oracle/product/11.2.0.4",
"sid": "UPGR"
}
]
},
"link": "https://localhost:8443/ords/autoupgrade/task?taskid=job_2022_04_27_05.17.24.146_0"
}
]
}
Task Identifier
REST API | Get Specific Task
$ curl -k 'https://localhost:8443/ords/autoupgrade/task?taskid=job_2022_04_27_05.17.24.146_0'
{
"taskid": "job_2022_04_27_05.17.24.146_0",
"status": "finished", Task Identifier
"message": "",
"link": "https://localhost:8443/ords/autoupgrade/task?taskid=job_2022_04_27_05.17.24.146_0",
"config": {
"global": {
"autoupg_log_dir": "/home/oracle/logs"
},
"jobs": [ Task Status
{
"source_home": "/u01/app/oracle/product/11.2.0.4",
"target_home": "/u01/app/oracle/product/19",
"sid": "UPGR",
"log_dir": "/home/oracle/logs",
"restoration": "no"
}
]
}
}
$ curl -k 'https://localhost:8443/ords/autoupgrade/console?taskid=job_2022_04_27_05.17.24.146_0'
AutoUpgrade is not fully tested on OpenJDK 64-Bit Server VM, Oracle recommends to use Java HotSpot(TM)
AutoUpgrade 22.2.220324 launched with default internal options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 Non-CDB(s) will be analyzed
Job 100 database upgr
Job 100 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
{
"taskid": "job_2022_04_27_05.17.24.146_0",
"status": "submitted",
"message": "",
"link": "https://localhost:8443/ords/autoupgrade/task?taskid=job_2022_04_27_05.17.24.146_0",
"config": {
"global": {
"autoupg_log_dir": "/home/oracle/logs"
},
"jobs": [
{
"source_home": "/u01/app/oracle/product/11.2.0.4",
"target_home": "/u01/app/oracle/product/19",
"sid": "UPGR",
"log_dir": "/home/oracle/logs",
"restoration": "no"
}
]
}
}
$ curl -k 'https://localhost:8443/ords/autoupgrade/log?taskid=job_2022_04_27_05.17.24.146_0'
{
"logs": [
...,
{
"filename": "cfgtoollogs/upgrade/auto/status/status.html",
"link":
"https://localhost:8443/ords/autoupgrade/log?taskid=job_2022_04_27_05.17.24.146_0&name=cfgtoollogs/upgrade/auto/status/status.html"
},
{
"filename": "cfgtoollogs/upgrade/auto/status/status.log",
"link":
"https://localhost:8443/ords/autoupgrade/log?taskid=job_2022_04_27_05.17.24.146_0&name=cfgtoollogs/upgrade/auto/status/status.log"
},
{
"filename": "cfgtoollogs/upgrade/auto/status/progress.json",
"link":
"https://localhost:8443/ords/autoupgrade/log?taskid=job_2022_04_27_05.17.24.146_0&name=cfgtoollogs/upgrade/auto/status/progress.json"
}
]
}
Watch on YouTube
Watch on YouTube
Watch on YouTube
• tmux
• screen
DBUA is not!
To override a fixup, edit the config file and specify your custom checklist:
upg1.checklist=../prechecks/<sid>_checklist.cfg

1. Analyze
2. Edit checklist (../prechecks/<sid>_checklist.cfg)
3. Edit config file
4. Deploy
...
Database DB12
+--------------------+---------+--------+
| FixUp Name| Severity|Run Fix?|
+--------------------+---------+--------+
|OLD_TIME_ZONES_EXIST| WARNING| NO|
| POST_DICTIONARY|RECOMMEND| YES|
| POST_UTLRP|RECOMMEND| YES|
| TIMESTAMP_MISMATCH| WARNING| YES|
+--------------------+---------+--------+
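A minimal sketch of the override flow described above; the config file name DB12.cfg and the job-relative checklist path are assumptions:
$ java -jar autoupgrade.jar -config DB12.cfg -mode analyze
$ vi <autoupg_log_dir>/DB12/<job#>/prechecks/db12_checklist.cfg    # e.g. change a fixup's "Run Fix?" value
$ java -jar autoupgrade.jar -config DB12.cfg -mode deploy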
Watch on YouTube
If you revert or restore in any other way, you must tell AutoUpgrade
SHUTDOWN IMMEDIATE
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT grpt;
SHUTDOWN IMMEDIATE
STARTUP MOUNT;
ALTER DATABASE OPEN RESETLOGS;
DROP RESTORE POINT grpt;
AutoUpgrade | What if ... you need to flash back?
• Default behavior:
• AutoUpgrade creates GRP except for
- Standard Edition 2
- restoration=no
• GRP will be kept
• GRP needs to be removed manually except for
- drop_grp_after_upgrade=yes will only remove it when upgrade completed successfully
• /etc/oratab
• Clusterware registration
• Moving files
• PFile
• SPFile
• Password file
• Etc.
Watch on YouTube
If you revert or restore in any other way, you must tell AutoUpgrade
rm -rf /u01/app/oracle/cfgtoollogs/autoupgrade
rm -rf /u01/app/oracle/admin/DB1/upglogs
https://dohdatabase.com/is-autoupgrade-resumable/
Watch on YouTube
AutoUpgrade | What if ... AutoUpgrade fails
ERROR1400.ERROR = UPG-1400
ERROR1400.CAUSE = Database upgrade failed with errors
Omit the error code and get a list of all error codes
$ java -jar autoupgrade.jar -error_code
ERROR1000.ERROR = UPG-1000
ERROR1000.CAUSE = It was not possible to create the data file where the jobsTable is being written or there was a
problem during the writing, it might be thrown due to a permission error or a busy resource scenario
ERROR1001.ERROR = UPG-1001
ERROR1001.CAUSE = There was a problem reading the state file perhaps there was corruption writing the file and in
the next write it might be fixed
.
.
AutoUpgrade | What if ... AutoUpgrade fails
Watch on YouTube
"After qualifying the new AutoUpgrade
tool on a representative portion of our
database landscape we found that tool
was doing a great job and is production-
ready. In our automation tool we have
removed a lot of “home-grown” code
and replaced it with AutoUpgrade
functionality.
Daniel Overby Hansen
Since August 2019 all upgrades at
Lead Developer SimCorp have been executed using the
SimCorp A/S - Denmark
AutoUpgrade tool."
After Upgrade
Important things to
do after the upgrade completed
successfully
Photo by Ales Krivec on Unsplash
Things to do right after upgrade
• Check retention
SQL> select dbms_stats.get_stats_history_retention from dual;
• Default: 31 days
• Adjust setting
SQL> exec dbms_stats.alter_stats_history_retention(10);
• Example: 10 days
Time Zone | Facts
Upgrade and potential adjustment of TIMESTAMP WITH TIME ZONE data type
• Use utltz* scripts or DBMS_DST directly
Time zone can't be downgraded
Time zone upgrade may take long
Files are in $ORACLE_HOME/oracore/zoneinfo
Default time zone file
Recommendation
• Apply most recent time zone patch before upgrade
• MOS Note:412160.1
• Not RAC rolling!
BEGIN
DBMS_DST.FIND_AFFECTED_TABLES (AFFECTED_TABLES => 'SYS.DST$AFFECTED_TABLES',
LOG_ERRORS => TRUE,
LOG_ERRORS_TABLE => 'SYS.DST$ERROR_TABLE',
PARALLEL => TRUE);
EXCEPTION ...
Time Zone | Issues and Workaround
New in 21c - Documentation: Oracle 21c Database Globalization Support Guide, Chapter 4.7.1
Post Upgrade | Unified Audit Trail
• To convert:
SQL> EXEC DBMS_AUDIT_MGMT.TRANSFER_UNIFIED_AUDIT_RECORDS;
"
Oracle Multimedia is desupported in Oracle Database 19c, and the implementation is
removed.
• Blog post: Simple migration from Oracle multimedia to secure-file blob data type
Oracle 19c | Streams Removal
"
Starting in Oracle Database 19c (19.1), Oracle Streams is desupported. Oracle
GoldenGate is the replication solution for Oracle Database.
"
Oracle continues to support the DBMS_JOB package. However, you must grant the
CREATE JOB privilege to the database schemas that submit DBMS_JOB jobs.
"
Oracle continues to support the DBMS_JOB package. However, you must grant the
CREATE JOB privilege to the database schemas that submit DBMS_JOB jobs.
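For example (the schema name is a placeholder):
SQL> GRANT CREATE JOB TO app_batch_user;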
"
After an upgrade, or after other database configuration changes, Oracle strongly
recommends that you regather fixed object statistics after you have run
representative workloads on Oracle Database.
Database 19c Upgrade Guide, chapter 7
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => '"SYS"."GATHER_FIXED_OBJECTS_STATS_ONE_TIME"',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.GATHER_FIXED_OBJECTS_STATS; END;',
start_date => SYSDATE+7,
auto_drop => TRUE,
comments => 'Gather fixed objects stats after upgrade - one time'
);
DBMS_SCHEDULER.ENABLE (
name => '"SYS"."GATHER_FIXED_OBJECTS_STATS_ONE_TIME"'
);
END;
/
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
-n 4 -e \
-C 'PDB$SEED' \
-b sched_gfos -d /home/oracle/sched_gfos/ sched_gfos.sql
upg1.after_action=/home/oracle/sched_gfos/sched_gfos.sh
Primary database
https://dohdatabase.com/2020/11/26/how-to-upgrade-with-data-guard/
Photo by Joshua Sortino on Unsplash
preupgrade.jar
• Download the newest from MOS Note: 884522.1
• You need to do everything by yourself
• Not supported anymore from 21c
• java -jar autoupgrade.jar -preupgrade "target_version=21"
dbupgrade
• Can be used without parameters
• Wrapper for catctl.pl with default settings
Works also days after the upgrade without losing any changes
• If timezone file was upgraded, same timezone file must be present in old Oracle Home
Fallback | Database Downgrade
In addition,
• Recompile object (utlrp)
• Run datapatch
• Gather stats (dictionary, fixed objects and optimizer)
• Grid Infrastructure and /etc/oratab
Fallback | Database Downgrade
Watch on YouTube
Fallback | Database Downgrade
Examples:
• New table is not dropped, but truncated
• New index is not dropped
• Generally, dropping is avoided
Fallback | Flashback Database
Puts the database back into the state it had before the upgrade
SHUTDOWN IMMEDIATE
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT grpt;
SHUTDOWN IMMEDIATE
STARTUP MOUNT;
ALTER DATABASE OPEN RESETLOGS;
DROP RESTORE POINT grpt;
Fallback | Flashback Database
• Default behavior:
• AutoUpgrade creates GRP except for
- Standard Edition 2
- restoration=no
• GRP will be kept
• GRP needs to be removed manually except for
- drop_grp_after_upgrade=yes will only remove it when upgrade completed successfully
Watch on YouTube
Read-only
A database upgrade does not touch user data
Start upgrade
Pro tip: Works for SE2 and databases
in NOARCHIVELOG mode
Read-write
To restore
Watch on YouTube
Primary database
11.2.0.4 → exported to dump file with e.g. VERSION=11.2.0.4 → copied over the network → imported into 19c database
• Documentation
Want to Know More?
Things that don't matter
• Amount of user data

Oracle Text                    00:00:58        Oracle Text                    00:00:58
Oracle XML Database            00:04:09        Oracle XML Database            00:04:09
Oracle Database Java Packages  00:00:33        Oracle Database Java Packages  00:00:33
Oracle Multimedia              00:07:43        Gathering Statistics           00:02:43
Gathering Statistics           00:04:53

Note: This is an excerpt from the alert.log - these parameters will be set implicitly during a STARTUP UPGRADE
Why Upgrade is Different | Upgrade Mode
• Minimum 1
• Maximum 8
Non-CDB
• Default 4
$ dbupgrade -n 2
Worker 1 Worker 2
Now idle
Phase 3 Script 1
.
.
Phase n Script n
Contention
RUNTIME (MINUTES)
48
32
25
23
• Minimum 4
• Maximum unlimited
CDB
• Default CPU count
$ dbupgrade -n 4
• Minimum 1
• Maximum 8
CDB
• Default 2
$ dbupgrade -N 2
CDB
PDBs upgraded simultaneously = Total number of processors (n) / Processors per PDB (N)
PDB$SEED PDB1
CDB$ROOT
$ dbupgrade -n 4 -N 2
PDB$SEED PDB1
PDB2 PDB4
PDB3 PDB5
PDB6 PDB7
CDB$ROOT
$ dbupgrade -n 4 -N 2
Scale by upgrading
more PDBs simultaneously
PDB$SEED PDB1
CDB$ROOT
Higher release
Non-CDB
Single Tenant
Multitenant
"
Oracle Multimedia is desupported in Oracle Database 19c, and the implementation is
removed.
• Blog post: Simple migration from Oracle multimedia to secure-file blob data type
Most components can be removed online
Container database
OWM CONTEXT
• 3 user created PDBs
• PDB$SEED DV SDO
XOQ JAVAVM
APS XDK
Remove component after component
JAVAVM
JServer JAVA VM
DV APS
Data Vault Analytical Workspace
SDO
Spatial Data Option
DV APS
Data Vault Analytical Workspace
XDK
Oracle XDK
All Components - OWM - CONTEXT - DV / OLS - APS / XOQ - SDO - ORDIM - JAVAVM / XML
(OLAP)
DV APS
Data Vault Analytical Workspace
XDK
Oracle XDK
ORDIM XDK
Oracle Multimedia Oracle XDK
CDB$ROOT
Upgrade Time
Components in all containers, CDB$ROOT only, or nowhere
Everything everywhere:        47.4
Everything in CDB$ROOT only:  42.1
Everything removed:           36.3
Components
Before Upgrade
Upgrade Core
Database Upgrade
After Upgrade
System Tasks Database Checks User-Defined Tasks
AutoUpgrade | Checks Overview
Components
amd_exists
Configurations
apex_manual_upgrade
awr_expired_snapshots case_insensitive_auth Spatial
em_present auto_login_wallet_required
exf_rul_exists cdb_only_support
disk_space_for_recovery_area
Misc
javavm_status compatible_grp
db_block_size
ols_sys_move compatible_not_set
default_resource_limit network_acl_priv
… dictionary_stats
flash_recovery_area_setup new_time_zones_exist
dir_symlinks_exist
min_archive_dest_size parameter_deprecated
enable_local_undo
tablespaces pending_dst_session
invalid_objects_exist
tempts_notempfile
max_string_size_on_db underscore_events
… …
…
Upgrade Core
Database Upgrade
• Commands toolbox
upg>
Understanding the
directory structure
AutoUpgrade | Understanding the directory structure
One job
Second job
• /cfgtoollogs
• ./upgrade/auto General Logs + State Files
• /database_1
• ./job_number
•./prechecks HTML Report
•./preupgrade
•./prefixups
•./drain
•./dbupgrade Upgrade Logs
•./postupgrade
• ./temp AU Temp files
• /database_2
• …
• ANALYZE
[PRECHECKS]
• FIXUPS
[PRECHECKS,PREFIXUPS]
• DEPLOY
[GRP, PREUPGRADE, PRECHECKS, PREFIXUPS, DRAIN, DBUPGRADE, POSTCHECKS, POSTFIXUPS, POSTUPGRADE, NONCDBTOPDB, SYSUPDATES]
• UPGRADE
[DBUPGRADE,POSTCHECKS,POSTFIXUPS,SYSUPDATES]
• POSTFIXUPS
[POSTFIXUPS]
• /cfgtoollogs
./upgrade/auto
• /database_1
./job_number
./prechecks
./preupgrade
./prefixups
Depends on the execution mode
./drain
./dbupgrade
./postupgrade
./temp
• /database_2
./job_number
…
Reports: cdb18x_preupgrade.html, cdb18x_preupgrade.log
Upgrade logs
AutoUpgrade control files, state files, status files
CON_ID COUNT(*)
--------- ----------
1 51
3 6359
4 6356
5 6356
• utlrp.sql
• Calls utlprp.sql with CPU_COUNT – 1
• Creates CPU_COUNT – 1 parallel jobs
• Recompilation happens PDB after PDB
• Attempts to compile ALL invalid objects
• utlprp.sql
• Used to override the default parallel degree
• Example
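The example itself did not survive the slide export; a hedged sketch of running utlprp.sql with an explicit degree of parallelism:
SQL> @?/rdbms/admin/utlprp.sql 8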
Postpone recompilation
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=CDB1
upg1.run_utlrp=no
upg1.after_action=/database/scripts/compile_my_way.sh
• But you can't postpone PDB$SEED's recompilation
• CDB$ROOT recompiles partially already, too
Modify utlprp.sql
• Makes sense only when you have a lot of INVALID user objects
• Force recompilation to compile ONLY oracle-maintained objects
• Backport available soon
Default Use
Deprecated/desupported • as few as possible
Underscores/events • not longer than needed
Applications
SQL> select name, value
from v$parameter
where substr(name, 0, 1) = '_' or name='event';
VALUE UPDATE_COMMENT
________ ___________________________________________________________
1024 04-03-2021 Daniel: MOS 2431353.1, evaluate after upgrade
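To retire an underscore parameter once it is no longer needed, a sketch (parameter name taken from the example above):
SQL> ALTER SYSTEM RESET "_cursor_obsolete_threshold" SCOPE=SPFILE;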
• COMPATIBLE
• Enables features
• Always use the default value 19.0.0 in Oracle 19c
• OPTIMIZER_FEATURES_ENABLE
• Just reverts to the parameters used in a previous release
• Avoid using it if possible
• This is not a Swiss Army knife!
• You will turn off a lot of great features
"
Modifying the OPTIMIZER_FEATURES_ENABLE parameter generally is strongly discouraged
and should only be used as a short term measure at the suggestion of Oracle Global Support.
Use Caution if Changing the OPTIMIZER_FEATURES_ENABLE Parameter After an Upgrade (Doc ID 1362332.1)
"
You can allow the Oracle Database instance to automatically manage and tune
memory for you.
Database 19c, Database Administrator’s Guide, chapter 6
4. Issues
[DBT-11211] The Automatic Memory Management option is not allowed when the total physical memory is greater than 4GB.
INS-35178 : The Automated Memory management option is not allowed when the Total physical memory is greater than 4GB
1. ASM instances
Overview
Check:
SQL> set serverout on
SQL> exec dbms_optim_bundle.GetBugsForBundle;

Output:
19.10.0.0.210119DBRU:
Bug: 29487407, fix_controls: 29487407
Bug: 30998035, fix_controls: 30998035
Bug: 30786641, fix_controls: 30786641
Bug: 31444353, fix_controls: 31444353
Bug: 30486896, fix_controls: 30486896
Bug: 28999046, fix_controls: 28999046
Bug: 30902655, fix_controls: 30902655
Bug: 30681521, fix_controls: 30681521
Bug: 29302565, fix_controls: 29302565
Bug: 30972817, fix_controls: 30972817
...

Enable:
begin
  dbms_optim_bundle.enable_optim_fixes(
    action                     => 'ON',
    scope                      => 'BOTH',
    current_setting_precedence => 'YES');
end;
/
Refresh manually:
• Before and after upgrade
• Before (source) and after (target) logical migration
• After major application upgrades
Gather manually
SQL> BEGIN
DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
DBMS_STATS.GATHER_SCHEMA_STATS('SYSTEM');
END;
/
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
-l /tmp \
-b gatherstats -- \
--x"begin dbms_stats.gather_schema_stats('SYS'); dbms_stats.gather_schema_stats('SYSTEM'); end;"
"
After an upgrade, or after other database configuration changes, Oracle strongly
recommends that you regather fixed object statistics after you have run
representative workloads on Oracle Database.
Database 19c Upgrade Guide, chapter 7
What is it?
SQL> SELECT owner, table_name
FROM dba_tab_statistics
WHERE object_type = 'FIXED TABLE';
OWNER TABLE_NAME
________ _______________
SYS X$KQFTA
SYS X$KQFVI
SYS X$KQFVT
SYS X$KQFDT
SYS X$KQFCO
SYS X$KQFOPT
SYS X$KYWMPCTAB
...
Always gather fixed objects stats when the system is warmed up - after your representative workload
Check out Best Practices for Gathering Optimizer Statistics with Oracle Database 19c
Pro tip: Automated stats gathering only gathers fixed objects stats if they are completely missing
Statistics | Gather Stats Before Upgrade
Dictionary and fixed objects
[Chart: upgrade phase runtime with dictionary and fixed objects statistics gathered in advance vs. during the upgrade]
Gathering stats in advance saves 12 minutes
Statistics | Good Stats During Upgrade
DURATION REDUCTION
Gathered schema and cluster index stats 13 min 41 sec 3.4 % to previous
This example has been done with one of the tiny Hands-On Lab databases
Statistics | Good Stats During Upgrade
DURATION REDUCTION
Gathered schema and cluster index stats 52 min 25 sec 0.5 % to previous
The system statistics describe hardware characteristics such as I/O and CPU
"
performance and utilization.
System statistics enable the query optimizer to more accurately estimate I/O and
CPU costs when choosing execution plans.
Database 19c SQL Tuning Guide, chapter 10
... in most cases you should use the defaults and not gather system statistics.
... if the workload is mixed or you are not in a position to test the effect of using
EXADATA system statistics, then stick to the defaults even on this platform.
References:
OCCUPANT_NAME SPACE_USAGE_KBYTES
------------------------------ ------------------
SM/ADVISOR 5901376
...
1. SYSAUX Tablespace Grows Rapidly After Upgrading Database to 12.2.0.1 or Above Due To Statistics
Advisor (Doc ID 2305512.1)
2. How To Purge Optimizer Statistics Advisor Old Records From 12.2 Onwards (Doc ID 2660128.1)
• Check retention
SQL> select dbms_stats.get_stats_history_retention from dual;
• Default: 31 days
• Adjust setting
SQL> exec dbms_stats.alter_stats_history_retention(10);
• Example: 10 days
Testing
Best Practices
In
95%
of all cases, "upgrade problem" in reality is a performance issue
after upgrade. Or not database related.
80% - 90%
0%
• Pay-as-you-go
CloneDB
• Copy-on-write
• Uses image copies of data files stored on NFS, delta is written locally
• Clone still has access to Exadata storage features
Pro tip: Cool blog post on PDB sparse clone
Should you refresh object statistics when you upgrade to Oracle 19c?
• It is not required
• But especially when you upgrade from 11.2, histograms can change
• Avoid gradual change of plans when stats become stale
• Better regather object statistics as soon as possible
"
When you transport optimizer statistics between databases, you must use
DBMS_STATS to copy the statistics to and from a staging table, and tools to make
the table contents accessible to the destination database.
Database 19c SQL Tuning Guide, chapter 17
• Schema
• Table
• Database (rare)
• Dictionary and fixed objects (rare)
Pro tip: You can read more about transporting statistics in the SQL Tuning Guide
• No data
• Subset of data
• Superior performance
Watch on YouTube
• The optimizer does not use statistics stored in a user-owned table - only from dictionary
• You can transfer to a higher version - potentially the stats table must be upgraded
SQL> EXEC DBMS_STATS.IMPORT_SCHEMA_STATS ( ...
• If enabled, imported statistics will be added as pending stats until you publish them
"We have adopted this method for stats. We migrated a 60 TB database from AIX to Exadata using cross-platform transportable tablespace without stats."
You can lock statistics to prevent them from changing.
Database 19c SQL Tuning Guide, chapter 15
Known pitfalls
• Original synopses on disk can easily consume hundreds of GB
• By default, transporting stats with DBMS_STATS does not include synopses
New efficient algorithm for synopses can drastically reduce space consumption
ORDERS Table Partition level stats are gathered & synopsis created
Must-Do
• Changed partitions won't be eligible for new stats generation
SQL> exec dbms_stats.set_database_prefs('INCREMENTAL_STALENESS','USE_STALE_PERCENT');
Optional
• Adjust the stale percentage – default: 10%
SQL> exec dbms_stats.set_database_prefs('STALE_PERCENT','15');
Two options
• Coexistence of old and new synopses (default)
• Regathering of synopses
Recommendation
• Use regathering as coexistence leads to issues
1. 3. 5.
Collect Analyze Manage
2. 4. 6.
Compare Tune Test
SQL Tuning Set | Definition
An SQL Tuning Set (STS) enables you to group SQL statements and related
"
metadata in a single database object, which you can use to meet your tuning goals.
SQL statement
Context
Statistics
Plans
SQL> BEGIN
DBMS_SQLSET.CREATE_SQLSET (
sqlset_name => 'UPG_STS_1',
description => 'For upgrade - from source'
);
END;
/
STS
SQL> DECLARE
       begin_id number;
       end_id   number;
       cur      sys_refcursor;
     BEGIN
       SELECT min(snap_id), max(snap_id) INTO begin_id, end_id
       FROM dba_hist_snapshot;
       ...
       dbms_sqltune.load_sqlset('UPG_STS_1', cur);
       close cur;
     END;
     /
Pro tip: Consider excluding other internal schemas like DBSNMP, ORACLE_OCM, LBACSYS, WMSYS, XDB, SYSTEM
SQL> BEGIN
DBMS_SQLSET.CAPTURE_CURSOR_CACHE_SQLSET(
sqlset_name => 'UPG_STS_1',
time_limit => 900,
repeat_interval => 60,
capture_option => 'MERGE',
capture_mode => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
basic_filter => 'parsing_schema_name not in (''SYS'')',
sqlset_owner => NULL,
recursive_sql => 'HAS_RECURSIVE_SQL');
END;
/ STS
SQL Tuning Sets can also be accessed by way of database server APIs and
"
command-line interfaces. Usage of any subprograms in the DBMS_SQLSET package
to manage SQL Tuning Sets is part of the EE and EE-ES offerings.
1. 3. 5.
Collect Analyze Manage
2. 4. 6.
Compare Tune Test
1. AWR snapshot
2. Execute workload
3. AWR snapshot
4. Upgrade
Compare AWR report
from two different periods
5. AWR snapshot
6. Execute workload
7. AWR snapshot
8. Compare
1. 3. 5.
Collect Analyze Manage
2. 4. 6.
Compare Tune Test
"
plans and statistics by running the SQL statements both in isolation and in a serial manner in before-change and after-change environments.
SPA functionality is well integrated with existing SQL Tuning Set (STS), SQL Tuning
Advisor, and SQL Plan Management functionality.
Oracle Database Real Application Testing Data Sheet
COMPARE
Upgrade
Plans
SPA
Test execution
Pro tip: For migrations,
Before upgrade After upgrade import STS into target database
SPA | Regressed Report
From production
workload From test
execution
SPA | Regressed Report
SPA | Regressed Report
SPA | Regressed Report
SPA | Continuous Improvement
19c
COMPARE
Implement COMPARE
change SPA
Test execution
SPA | Regressed Report
Don't have a license for Real Application Testing?
Use OCI!
1. 3. 5.
Collect Analyze Manage
2. 4. 6.
Compare Tune Test
SQL Tuning Advisor is SQL diagnostic software in the Oracle Database Tuning Pack.
" ...
Types of findings:
A SQL profile is a database object that contains auxiliary statistics specific to a SQL
"
statement.
...
The corrected statistics in a SQL profile can improve optimizer cardinality estimates,
which in turn leads the optimizer to select better plans.
Database 19c SQL Tuning Guide, chapter 26
5. Transparent to application
• Does not require application changes
2. Verify the profile – it doesn't get used by the optimizer in the live environment
SQL> alter session set sqltune_category='TEST_ENV';
Transfer
If the staging table is migrated together with the user data, you can skip this step
Use Data Pump to transfer that single table
SQL> CREATE DATABASE LINK src_link ... ;
Pro tip: You can also import from dump file if there is no network connectivity to the source database
Load
Finally, load the profiles from the staging table into the data dictionary
SQL> BEGIN
       DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF (
         staging_table_name   => 'STAGING',
         staging_schema_owner => 'SQLPROFILES',
         replace              => TRUE);
     END;
     /
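For reference, the matching source-side Prepare and Extract steps would look roughly like this sketch, reusing the STAGING table and SQLPROFILES owner from above:
SQL> EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'STAGING', schema_name => 'SQLPROFILES');
SQL> EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(staging_table_name => 'STAGING', staging_schema_owner => 'SQLPROFILES');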
6. Apply recommendations
Real World Example | SQL Tuning Advisor in Action
SPOOL sql_tune_f4k19gvr3nu38.txt
SPOOL OFF;
6. Act on findings
• Follow 5 statistics recommendations to gather stats on 5 tables
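Acting on such a statistics finding usually comes down to one DBMS_STATS call per affected table; a sketch with placeholder schema and table names:
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SALES', tabname => 'ORDERS');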
• Documentation:
https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_SQLDIAG.html#GUID-0F29CD05-
6BF3-4EEB-90F5-E2465865C255
• Useful scripts, e.g., create_sql_patch.sql:
http://kerryosborne.oracle-guy.com/2013/06/06/sql-gone-bad-but-plan-not-changed/
Using DBMS_SQLDIAG_INTERNAL:
BEGIN
  SYS.DBMS_SQLDIAG_INTERNAL.i_create_patch(
    sql_text  => 'select * from big_table',
    hint_text => 'PARALLEL(big_table,10)',
    name      => 'big_table_sql_patch');
END;
/

Using DBMS_SQLDIAG:
DECLARE
  l_patch_name VARCHAR2(4000);
BEGIN
  l_patch_name := SYS.DBMS_SQLDIAG.create_sql_patch(
    sql_text  => 'select * from big_table',
    hint_text => 'PARALLEL(big_table,10)',
    name      => 'big_table_sql_patch');
END;
/
Watch on YouTube
1. 3. 5.
Collect Analyze Manage
2. 4. 6.
Compare Tune Test
"SQL plan management uses a mechanism called a SQL plan baseline, which is a set of accepted plans that the optimizer is allowed to use for a SQL statement. ..."
[Diagram: SQL plan baseline capture and use - OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE captures repeatable SQL into the plan history (filter on schema, SQL text, module, action); OPTIMIZER_USE_SQL_PLAN_BASELINES=TRUE lets the optimizer use enabled/accepted/fixed plans from the baseline; after something changes (upgrade, new statistics, new parameters) the new plan goes into the plan history and is accepted only if its performance is better]
Plan History: Plan A (Accepted), Plan C (Accepted), Plan D (Accepted), Plan B (Not accepted), Plan D (Not accepted)
• Space budget is 10 %

PARAMETER_NAME                      PARAMETER_VALUE
__________________________________  _______________
AUTO_CAPTURE_ACTION
AUTO_CAPTURE_MODULE
AUTO_CAPTURE_PARSING_SCHEMA_NAME
AUTO_CAPTURE_SQL_TEXT
AUTO_SPM_EVOLVE_TASK                OFF
AUTO_SPM_EVOLVE_TASK_INTERVAL       3600
AUTO_SPM_EVOLVE_TASK_MAX_RUNTIME    1800
PLAN_RETENTION_WEEKS                53
SPACE_BUDGET_PERCENT                10

Change retention
SQL> exec DBMS_SPM.CONFIGURE('plan_retention_weeks', 5);

Change space consumption
SQL> exec DBMS_SPM.CONFIGURE('space_budget_percent', 5);
SQL> DECLARE
cnt number;
BEGIN
cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET('UPG_STS_1');
END;
/
Plan History
Plan A Plan C Plan B
Accepted Accepted Not accepted
Automatically
accepted
SPM | What if ... literals
Load into
Test Plan baseline
19c Plans fromSQL plan baseline
tuning
Automatically
accepted
SQL> DECLARE
plans_loaded NUMBER;
filter VARCHAR2(255);
BEGIN
filter := 'sql_id=''czzzubf8fjz96'' AND plan_hash_value=''1165613724''';
plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_SQLSET (
sqlset_name => 'UPG_STS_1',
Plan baseline basic_filter => filter
);
END;
/
Transfer
If the staging table is migrated together with the user data, you can skip this step
Use Data Pump to transfer that single table
SQL> CREATE DATABASE LINK src_link ... ;
Pro tip: You can also import from dump file if there is no network connectivity to the source database
Load
Finally, load the baselines from the staging table into the data dictionary
SQL> DECLARE
       l_count NUMBER;
     BEGIN
       l_count := DBMS_SPM.UNPACK_STGTAB_BASELINE (
         table_name  => 'SPB_STAGING',
         table_owner => 'SPM');
     END;
     /
Method | Restricts plan usage | Applies hints | Improves cardinality estimates
1. 3. 5.
Collect Analyze Manage
2. 4. 6.
Compare Tune Test
You can use Database Replay to capture a workload on the production system and
"
replay it on a test system with the exact timing, concurrency, and transaction
characteristics of the original workload.
This enables you to test the effects of a system change without affecting the
production system.
Database 19c Testing Guide, chapter 9
Test
system
Workload
capture
11.2.0.4 19c
Replay
analysis Preprocess
Capture files
Replay files
1. Platform independent
3. Per-PDB capture/replay
1. 3. 5.
Collect Analyze Manage
2. 4. 6.
Compare Tune Test
ENABLED:
  ...
    parameter => 'ALTERNATE_PLAN_BASELINE',
    value     => 'AUTO'
  );
END;
/

DISABLED:
  ...
    parameter => 'ALTERNATE_PLAN_BASELINE',
    value     => 'EXISTING'
  );
END;
/
Parameter optimizer_adaptive_plans
• Default: TRUE
• Adjust join methods, bitmap pruning and parallel distribution methods during runtime after parsing
Parameter optimizer_adaptive_statistics
• Default: FALSE
• Create dynamic statistics, SQL Plan Directives and do automatic reoptimization
Recommendation
Parameter _sql_plan_directive_mgmt_control
Recommendation
Parameter _cursor_obsolete_threshold
Recommendation
Parameter optimizer_real_time_statistics
Recommendation
• Until 19.9.0
• _optimizer_gather_stats_on_conventional_dml=FALSE
• _optimizer_use_stats_on_conventional_dml=FALSE
• From 19.10.0 on: optimizer_real_time_statistics=FALSE
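A sketch of how the two underscore parameters are usually set on releases up to 19.9.0 (double quotes are required for underscore parameters; use SCOPE=SPFILE plus a restart if a dynamic change is not possible in your environment):
SQL> alter system set "_optimizer_gather_stats_on_conventional_dml"=FALSE scope=both;
SQL> alter system set "_optimizer_use_stats_on_conventional_dml"=FALSE scope=both;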
Parameter deferred_segment_creation
Recommendation
Slides
Recording
Want to Know More?
"
Desupport of Non-CDB Oracle Databases
12.1.0.2
12.2.0.1 Single tenant
Max. 1 PDB
18c
ExaCS
DBCS EE-HP 4096 PDBs
Included
DBCS EE-EP
Multitenant | Architecture
Non-CDB database
Memory
SGA
Background processes
PMON, SMON, CKPT, DBWR, LGWR, ...
Files
Redo logs, control files, archived logs, flashback logs, ...
Memory
SGA
Container database
Background processes
PMON, SMON, CKPT, DBWR, LGWR, ...
Files
Redo logs, control files, archived logs, flashback logs, ...
Data dictionary views: every container has its own DBA_, ALL_ and USER_ views; CDB$ROOT additionally provides the CDB_ views, which aggregate across all containers
catcon.pl
• MOS Note: 1932340.1 - How to execute sql scripts in Multitenant environment (catcon.pl)
• catcon.pl runs a SQL script in CDB$ROOT and in every PDB
MIGRATE
to multitenant architecture
1. 2. 3. 4.
Create Upgrade Plug In Convert
Recommendation
• Use AL32UTF8
National character set AL16UTF16
Recommendation
• Use Local Undo mode
SQL> startup upgrade
SQL> alter database local undo on;
• Each PDB has its own UNDO tablespace
Allows use of
• Flashback Pluggable Database
• Hot Cloning
... and more
Recommendation
Be aware!
Be really aware!
• Blog post
Recommendation
Be aware
COMPATIBLE=19.0.0
MIGRATE
to multitenant architecture
1. 2. 3. 4.
Create Upgrade Plug In Convert
Recommendation
2. Convert to PDB
MIGRATE
to multitenant architecture
1. 2. 3. 4.
Create Upgrade Plug In Convert
BEGIN
  IF dbms_pdb.check_plug_compatibility('/tmp/DB19.xml') THEN
    dbms_output.put_line('PDB compatible? ==> Yes');
  ELSE
    dbms_output.put_line('PDB compatible? ==> No');
  END IF;
END;
/
Pro tip: You can generate a manifest file of a remote database via a database link
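The manifest file itself is created in the source database (opened read-only) with DBMS_PDB.DESCRIBE - a minimal sketch using the same file name as above:
SQL> exec DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/DB19.xml');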
TYPE MESSAGE
______________________________________________________________________________________________________________
ERROR '19.9.0.0.0 Release_Update' is installed in the CDB but no release updates are installed in the PDB
ERROR DBRU bundle patch 201020: Not installed in the CDB but installed in the PDB
ERROR PDB's version does not match CDB's version: PDB's version 12.2.0.1.0. CDB's version 19.0.0.0.0.
WARNING CDB parameter compatible mismatch: Previous '12.2.0' Current '19.0.0'
WARNING PDB plugged in is a non-CDB, requires noncdb_to_pdb.sql be run.
• Data files
• Components
• Parameters
• Services
• Patch level
• Time zone
... and more
• Optional
MIGRATE
to multitenant architecture
1. 2. 3. 4.
Create Upgrade Plug In Convert
1. Open PDB
SQL> alter pluggable database DB19 open;
SQL> alter session set container=DB19;
2. Run noncdb_to_pdb.sql
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
3. Restart PDB
SQL> alter pluggable database DB19 close;
SQL> alter pluggable database DB19 open;
Pro tip: As of 12.2 the script noncdb_to_pdb.sql is re-runnable
• Requires downtime
• Irreversible
Watch on YouTube
MIGRATION
options
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Plug-in NoCopy: set the source non-CDB read-only, generate the manifest file (XML), plug into the CDB using the manifest file, re-using the data files (SYSTEM, SYSAUX, DATA)
Command
java -jar autoupgrade.jar -config DB19.cfg -mode deploy
No fallback
• Data files are re-used
Fast option
Cross-platform
• Potentially roll off patches before unplug
• But can't go across Endian format
Watch on YouTube
MIGRATION
options
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Plug-in Copy: set the source non-CDB read-only, generate the manifest file (XML), plug into the CDB using the manifest file, copying the data files (SYSTEM, SYSAUX, DATA)
• OMF
FILE_NAME_CONVERT=NONE
Prerequisites:
• Source must be 12.1.0.2 or newer
• Block size must match
• Blog post
Command
java -jar autoupgrade.jar -config DB19.cfg -mode deploy
Fallback option
• Original data files are preserved
Cross-platform
• Potentially roll off patches before unplug
• But can't go across Endian format
MIGRATION
options
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Create
Non-CDB pluggable
database
Downtime
CDB
Data Pump into a CDB of the same or higher release
SYSTEM
Data files SYSAUX
SYSTEM
DATA
SYSAUX
DMP
DATA
Create
Non-CDB pluggable
database
Downtime
SYSTEM
Data files SYSAUX
SYSTEM
DATA
SYSAUX
DATA
Fallback option
• Original database is preserved
MIGRATION
options
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Data Pump
Full Transportable
Export/Import
New, empty
Tablespace
Meta plug-in
data transfer 19c PDB
Source database (users, tables, indexes, triggers, PL/SQL ...)
12.1.0.2
SYSTEM SYSTEM
Tablespaces (DATA): Level 0 image file backup, Level 1 incremental backups, final incremental backup
Set tablespaces read-only before the final incremental backup
Convert data files of a different endian format on restore
Blog posts:
• Overview
• Step-by-step
Fallback option
• Original database is preserved
MIGRATION
options
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Create
Switch over
pluggable
Non-CDB
database
GoldenGate replication
SYSTEM
Data files SYSAUX
SYSTEM
DATA
SYSAUX
DATA
Migration:
• From lower version
• Across endianness
Fallback option
• Original database is preserved
• Optionally, reverse replication for fallback even after go-live
Zero downtime
Plug-in Plug-in
Data Pump Transportable GoldenGate
NoCopy Copy
New 11.2.0.4
PDB
non-CDB
19c
Downtime
COMPATIBLE=19.0.0
Data Pump via dump file
COMPATIBLE=12.1.0
CDB 12.1.0.2
Complete overview
• Install matching higher patch into CDB's home, or roll back the patch from the non-CDB
• non-CDB has a different DB_BLOCK_SIZE than CDB
• Use matching DB_nK_CACHE_SIZE parameters in CDB's spfile
Pitfalls | Overview
Every migration
• Is an architectural change
• Requires downtime
• Requires a fallback
STANDBY
database
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Three options
Option: reuse the standby data files
• Upgrade PROD to the 19.10.0 Oracle Home; the standby (STBDY, also on 19.10.0) follows via redo apply, then both are shut down
• Generate the manifest file (xml) and a list of the standby data files in ASM; create ASM aliases on the standby
• Plug in as PDB1 and run noncdb_to_pdb.sql (runtime varies)
• MAA Team:
MOS Note: 2273304.1
Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary
Database of a Data Guard Configuration
Three options
Source
Target
Primary
SQL> CREATE PLUGGABLE DATABASE ... STANDBYS=NONE;
Redo
Target
Standby
PDB created
Data files missing
RMAN> restore pluggable database ... from service ... ;
...
SQL> alter pluggable database enable recovery;
Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)
Three options
STANDBY
database
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Works seamlessly
1. Ensure STANDBY_FILE_MANAGEMENT=AUTO
• Optionally, configure DB_FILE_NAME_CONVERT as well
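A sketch of the two settings (DB_FILE_NAME_CONVERT is a static parameter, so it needs SCOPE=SPFILE and a restart; the directory pair is just an example):
SQL> alter system set standby_file_management=AUTO scope=both;
SQL> alter system set db_file_name_convert='/u01/source/','/u01/target/' scope=spfile;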
STANDBY
database
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Data Pump
Full Transportable
Export/Import
Source Target
Primary
Tablespace plug-in
Redo
via redo apply
Tablespaces Target
Restore Standby
Step by Step Process of Migrating non-CDBs and PDBs Using ASM for File Storage (Doc ID 1576755.1)
STANDBY
database
GoldenGate
Transportable
Data Pump
Plug-in Copy
Plug-in NoCopy
Supported configuration
Multitenant
Create
pluggable
Non-CDB shut down
Read-only CDB
Using
manifest file
Export TDE keys from the non-CDB, generate the manifest file; plug in re-using the data files, then import the TDE keys into the CDB
SYSTEM
Data files SYSAUX XML
DATA
Encrypted
Data Pump
• Full support for TDE
• Export from and import into encrypted databases
• Dump files can be encrypted as well
• Use ENCRYPTION_PWD_PROMPT to avoid typing password in cleartext on command line
Transportable Tablespaces
• Same endian migration: Full support for TDE
• Use ENCRYPTION_PWD_PROMPT
• Cross endian migration: No support for TDE
GoldenGate
• Full support for TDE
TDE | Full House
Multitenant Upgrades
when you adopted the CDB architecture
CDB Upgrades | Options
-n: total number of processors
• Minimum 4, maximum unlimited, default: CPU count
$ dbupgrade -n 4
-N: processors per PDB
• Minimum 1, maximum 8, default: 2
$ dbupgrade -N 2
PDBs upgraded simultaneously = total number of processors (n) / processors per PDB (N)
PDB$SEED PDB1
PDB2 PDB4
PDB3 PDB5
PDB6 PDB7
CDB$ROOT
$ dbupgrade -n 4 -N 2
Scale by upgrading
more PDBs simultaneously
PDB$SEED PDB1
CDB$ROOT
Higher release
Non-CDB
Single Tenant
Multitenant
• Faster
• More flexible
upg1.sid=CDB12102
upg1.target_cdb=CDB19
upg1.pdbs=pdb1
upg1.source_home=/u01/app/oracle/product/12102
upg1.target_home=/u01/app/oracle/product/19
Watch on YouTube
Rename a PDB
upg1.pdbs=pdb1
upg1.target_pdb_name.pdb1=sales
Current limitations:
https://dohdatabase.com/how-to-upgrade-a-single-pdb
3. Disconnect at will
• Making Use Deferred PDB Recovery and the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)
Blog posts: Upgrading in the cloud with refreshable PDB clones; Refreshable Clones for Upgrades
Faster Upgrades | CPU Utilization
Chart: CPU utilization over the upgrade run vs. ideal utilization
Total upgrade time: 4 hours 8 minutes
Faster Upgrades | CPU Utilization
• Gather dictionary and fixed objects stats in advance (7 days)
Chart: CPU utilization with dictionary and fixed objects statistics gathered before the upgrade
Gathering stats in advance saves 12 minutes
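Gathering those statistics shortly before the upgrade is a one-liner each (sketch; run with DBA privileges in the database to be upgraded):
SQL> exec DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;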
Faster Upgrades | CPU Utilization
Upgrade CDB$ROOT
• Remove components
• AutoUpgrade automatically assigns 8 parallel processes to CDB$ROOT upgrade
Chart: CPU utilization with all components installed vs. all components removed
13 minutes faster
Faster Upgrades | CPU Utilization
Chart: CPU utilization with 32 parallel processes vs. 54 parallel processes
upg1.catctl_options=-n 54
Pro tip: Remember to increase PROCESSES dramatically
26 minutes faster
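Increasing the PROCESSES limit before such a highly parallel upgrade could look like this (sketch; 2000 is only an example value, and the static parameter requires a restart):
SQL> alter system set processes=2000 scope=spfile;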
Faster Upgrades | CPU Utilization
Chart: CPU utilization with all optimizations combined - all components installed vs. all components removed
48 minutes faster
• Recompilation (utlrp) already highly parallelized
Faster Upgrades | Conclusion
• Remove components
Multitenant and Time Zone Patching
Time Zone | Multitenant DST Version and Patching
• Patching $ORACLE_HOME
• Containers need to be "TZ upgraded"
• PDBs and CDB$ROOT can stay on different
TZ values
Check script:
perl catcon.pl -n 1 -S -d $ORACLE_HOME/rdbms/admin -l /home/oracle -b tz_check_PDBs utltz_upg_check.sql
perl catcon.pl -n 1 -s -d $ORACLE_HOME/rdbms/admin -l /home/oracle -b tz_check_ROOT_SEED utltz_upg_check.sql
This will restart the database twice, first in UPGRADE mode, then in normal mode
• Exclusive locks may happen
Apply script:
SQL> alter pluggable database all open;
perl catcon.pl -n 1 -S -d $ORACLE_HOME/rdbms/admin -l /home/oracle -b tz_apply_PDBs utltz_upg_apply.sql
perl catcon.pl -n 1 -s -d $ORACLE_HOME/rdbms/admin -l /home/oracle -b tz_apply_ROOT_SEED utltz_upg_apply.sql
How to patch all your PDBs with a new time zone patch?
Success?
Remarks
Customer
Project
Constraints
Preparation
Upgrade
Success?
Remarks
Customer Motivation
Project 2017 • Developers want Oracle 12.2 features
Upgrade
Success?
Remarks
Upgrade
Success?
Remarks
Performance
Warehouse
Customer
Parallel upgrade: catctl.pl unfolds its full power when upgrading many PDBs at the same time
Project 2017
• 50 PDBs upgraded in less than 4 hours
Constraints
When we encounter issues, we fix them before going live
Preparation
• Follow their projects on: https://mobiliar.ch/db-blog
Upgrade
Success?
100% Multitenant Consolidation reached on Oct 1, 2019
Remarks
Preparation
Upgrade
Success?
Remarks
Preparation
Migration
Success?
Remarks
Constraints
Preparation
Migration
Success?
Remarks
Preparation Pre-Actions
• Lock the app user on source PDB
Migration • Deactivate the app service on the PDB
• Create DB Link for remote clone
Success? • Remove PDB from Cloud Control
Success?
Remarks
Remarks
Preparation
Migration
Success?
Remarks
Upgrade
Success?
Remarks
Upgrade
Success?
Remarks
Constraints
Constraints
Preparation
Upgrade
Success?
Remarks
Upgrade
Success?
Remarks
Alain Fuhrer
Head IT Database Services
La Mobilière
Bern, Switzerland
Chapter 6
Migration Strategies
Simplicity Downtime
Techniques include:
• Data Pump
• Transportable Tablespaces
• Full Transportable Export/Import
• Data Guard
• Incremental Backups
• Oracle GoldenGate
We will give you detailed insights!
Data Pump | Transportable | Data Guard | RMAN | GoldenGate
Advantages
• Ease of use
• Universal
• Change structures, character set, and much more
• Platform independent
• Architecture independent
• Works across versions
• Backwards compatible
Consideration
• Duration for large amounts of data and complex structures
Documentation
• Oracle Database 19c Utilities Guide
Data Pump | Dump File Mode: exported to dump file, copied over the network, imported into database
Data Pump | Network Mode: imported directly over the network via database link
Data Pump | Mode Comparison
Dump file mode: expdp and impdp, DBMS_DATAPUMP and DBMS_METADATA; requires disk space for dump files
Network mode: impdp only, DBMS_DATAPUMP and DBMS_METADATA; no extra disk space needed
Data Pump | Prerequisites
No diagnostics
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "METAL"."ALBUMS" 988.8 KB 28069 rows
. . imported "METAL"."BANDS" 3.444 MB 37723 rows
. . imported "METAL"."REVIEWS" 66.47 MB 21510 rows
All diagnostics (e.g. with LOGTIME=ALL and METRICS=YES)
16-OCT-20 17:26:57.158: Processing object type SCHEMA_EXPORT/TABLE/TABLE
16-OCT-20 17:26:58.262: Startup took 1 seconds
16-OCT-20 17:26:58.264: Startup took 1 seconds
16-OCT-20 17:26:59.082: Completed 3 TABLE objects in 1 seconds
16-OCT-20 17:26:59.082: Completed by worker 1 1 TABLE objects in 1 seconds
16-OCT-20 17:26:59.082: Completed by worker 2 1 TABLE objects in 0 seconds
16-OCT-20 17:26:59.082: Completed by worker 3 1 TABLE objects in 0 seconds
16-OCT-20 17:26:59.313: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
16-OCT-20 17:27:01.943: . . imported "METAL"."ALBUMS" 988.8 KB 28069 rows in 2 seconds using external_table
16-OCT-20 17:27:03.778: . . imported "METAL"."BANDS" 3.444 MB 37723 rows in 2 seconds using external_table
16-OCT-20 17:27:12.644: . . imported "METAL"."REVIEWS" 66.47 MB 21510 rows in 13 seconds using external_table
Importing as BasicFiles
10-OCT-20 21:43:21.848: W-3 . . imported "SCHEMA"."TABLE" 31.83 GB 681025 rows in 804 seconds using direct_path
Importing as SecureFiles
15-OCT-20 18:16:48.663: W-13 . . imported "SCHEMA"."TABLES" 31.83 GB 681025 rows in 261 seconds using external_table
1. Ensure STANDBY_FILE_MANAGEMENT=AUTO
• Optionally, configure DB_FILE_NAME_CONVERT as well
Exported from 19c to a dump file with e.g. VERSION=11.2.0.4, copied over the network, imported into the 11.2.0.4 database
Limitations:
--CONNECT SYSTEM
-- new object type path: SCHEMA_EXPORT/USER
CREATE USER "TPCC" IDENTIFIED BY VALUES
'S:F9E9DD2D0A8D0AEA2ACB9000FD1EDE144005661F7A9AE2BD6951DE396931;BB4954843B02D85D'
DEFAULT TABLESPACE "TPCCTAB"
TEMPORARY TABLESPACE "TEMP";
-- new object type path: SCHEMA_EXPORT/SYSTEM_GRANT
GRANT UNLIMITED TABLESPACE TO "TPCC";
-- new object type path: SCHEMA_EXPORT/ROLE_GRANT
GRANT "CONNECT" TO "TPCC";
GRANT "RESOURCE" TO "TPCC";
-- new object type path: SCHEMA_EXPORT/TABLESPACE_QUOTA
DECLARE
TEMP_COUNT NUMBER;
SQLSTR VARCHAR2(200);
BEGIN
SQLSTR := 'ALTER USER "TPCC" QUOTA UNLIMITED ON "TPCCTAB"';
EXECUTE IMMEDIATE SQLSTR;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -30041 THEN
SQLSTR := 'SELECT COUNT(*) FROM USER_TABLESPACES
WHERE TABLESPACE_NAME = ''TPCCTAB'' AND CONTENTS = ''TEMPORARY''';
EXECUTE IMMEDIATE SQLSTR INTO TEMP_COUNT;
IF TEMP_COUNT = 1 THEN RETURN;
ELSE RAISE;
END IF;
ELSE
RAISE;
END IF;
END;
/
Enables you to start Data Pump Export and Import directly from PL/SQL
DECLARE
JOBHNDL NUMBER;
BEGIN
JOBHNDL := SYS.DBMS_DATAPUMP.OPEN(
operation => 'EXPORT',
job_mode => 'SCHEMA',
remote_link => NULL,
job_name => NULL,
version => NULL,
ena_sec_roles => 0);
SYS.DBMS_DATAPUMP.ADD_FILE(
handle => JOBHNDL,
filename => 'demo_exp.log',
directory => 'DMPDIR',
filesize => NULL,
filetype => 3,
reusefile => NULL);
SYS.DBMS_DATAPUMP.SET_PARALLEL(
handle => JOBHNDL,
degree => 8);
...
...
...
Problem: Too many parallel workers are used in a Data Pump job, depleting the resources of the CDB
MAX_DATAPUMP_JOBS_PER_PDB - default: 100; set to AUTO: 50 % of SESSIONS
MAX_DATAPUMP_PARALLEL_PER_JOB - default: 50; set to AUTO: 50 % of SESSIONS
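Assuming these two 19c parameters are what the slide refers to, capping them could look like this (sketch):
SQL> alter system set max_datapump_jobs_per_pdb='AUTO' scope=both;
SQL> alter system set max_datapump_parallel_per_job='AUTO' scope=both;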
Data Pump | Transportable | Data Guard | RMAN | GoldenGate
Source
database
Configure:
• Redo transport
• Redo apply
Optional: switchover
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE ... ;
RMAN> RESTORE STANDBY CONTROLFILE ... ;
RMAN> RESTORE DATABASE ... ;
RMAN> RECOVER DATABASE UNTIL ... ;
Build the target standby database from a backup of the source (primary) database; redo apply keeps it synchronized
• Preferred solution
• Seamless switchover with properly configured application
Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1)
PLATFORM_NAME ENDIAN_FORMAT
____________________________________ ________________
Apple Mac OS (x86-64) Little
HP IA Open VMS Little
HP Open VMS Little
HP Tru64 UNIX Little
Linux IA (32-bit) Little
Linux IA (64-bit) Little
Linux x86 64-bit Little
Microsoft Windows IA (32-bit) Little
Microsoft Windows IA (64-bit) Little
Microsoft Windows x86 64-bit Little
Solaris Operating System (x86) Little
Solaris Operating System (x86-64) Little
Redo apply, TB/Day:    11.2.0.4   12.1.0.2   12.2     12.2 MIRA 2x   12.2 MIRA 4x
Batch                  57         57         57       115            226
OLTP                   14         15         15       29             60
Source: Redo Apply Best Practices – Oracle Data Guard and Active Data Guard
Connection, TB/Day / Gbps:  11.2.0.4   12.1.0.2   12.2     12.2 MIRA 2x   12.2 MIRA 4x
Batch                       57 / 6     57 / 6     57 / 6   115 / 11       226 / 22
OLTP                        14 / 2     15 / 2     15 / 2   29 / 3         60 / 6
Source: Redo Apply Best Practices – Oracle Data Guard and Active Data Guard
Source
11.2.0.4
Disconnect
Final
standby
synchronization
Downtime
Upgrade
Target
11.2.0.4
19c
DOWNTIME
alter system flush redo to ... confirm apply;
Source
Redo apply
database Target
Downtime database
Optionally, upgrade
Redo apply
Future
standby
Upgrade
Success?
Remarks
Customer Case | Payback
Remarks
Customer Case | Payback
Constraints
Database 14TB
Preparation Downtime: less than 8hrs
Upgrade
Network "bottleneck"
Success?
• Remedy: Special IB cabled connection from V1 to X2-2
Remarks
Customer Case | Payback
Constraints
Preparation
Upgrade
Success?
Remarks
InfiniBand cable
Customer Case | Payback
Preparation
RMAN Restore
Upgrade 64 parallel channels
Success?
Remarks
InfiniBand cable
Customer Case | Payback
Preparation
Upgrade
Success?
Remarks
Customer Case | Payback
Remarks
Customer Case | Payback
Constraints
Physical standby as migration vehicle was the key technique
Preparation • Allows several test runs
• Copy time does not account for downtime
Upgrade
Success?
Remarks
2020?
Customer Case | Payback
Constraints
Preparation
Upgrade
Success?
Remarks
2020?
Migration Strategies
PHYSICAL LOGICAL
REDO
SQL
SQL
DELETE FROM CUSTOMERS WHERE ...
REDO
Switchover
UPDATE TRANSACTIONS SET ...
MANUAL: Data Guard broker must be disabled
DBMS_ROLLING: Data Guard broker can be enabled - Recommended
86 INSTRUCTIONS OR CHECKS
Upgrade database
Record completion of user upgrade of PROD2
Scan LADs for presence of PROD1 destination
Test if PROD1 is reachable using configured TNS service
Call broker to enable redo transport to PROD2
SQL> exec dbms_rolling.switchover; Archive all current online redo logs
SQL> exec dbms_rolling.finish_plan; Archive all current online redo logs
Stop logical standby apply
Start logical standby apply
Wait until apply lag has fallen below 600 seconds
Notify Data Guard broker that switchover to logical
standby database is starting
Log post-switchover instructions to events table
Switch database to a logical standby
Notify Data Guard broker that switchover to logical
standby database has completed
Wait until end-of-redo has been applied
...
Rolling Upgrade | Demo
Upgrade
Users are still connected to the primary while SQL apply keeps the logical standby in sync
Primary: SID PROD, DB_UNIQUE_NAME PROD1, host bm1  |  Logical standby: SID PROD, DB_UNIQUE_NAME PROD2, host bm2
Rolling Upgrade | Demo
Switchover: the upgraded 19c database becomes the new primary and users connect there; the former primary is flashed back, restarted in the 19c Oracle Home and becomes the standby database applying redo
Watch on YouTube
• Upgrade happens on CDB level - when you switchover - the entire CDB switches over
• Adding new PDBs in primary after instantiating logical standby is possible, but
cumbersome
Step-by-step instructions in
Exadata Cloud Database 19c Rolling Upgrade
With DBMS_ROLLING (Doc ID 2832235.1)
Data Pump | Transportable | Data Guard | RMAN | GoldenGate
Source
database
Downtime
RMAN> RESTORE DATABASE ... ;
RMAN> RECOVER DATABASE ... ;
Target
database
• Well-known process
recover database;
alter database open resetlogs;
11.2.0.4
Downtime
RMAN> RECOVER DATABASE ...
19c
Level 0
Level 1
Source
database Target
Downtime primary
Archive logs, Level 0 and Level 1 backups
SQL> ALTER DATABASE OPEN RESETLOGS;
Configure:
• redo transport
• Redo apply
Archive logs, Level 0 and Level 1 backups
RMAN> RECOVER DATABASE ... ;
Target
standby
Crash
BEFORE AFTER
Level 0
Data Pump | Transportable | Data Guard | RMAN | GoldenGate
Big-endian Little-endian
Source: https://en.wikipedia.org/wiki/Endianness
RMAN: supported in the newest version of the Perl scripts | DBMS_FILE_TRANSFER: not supported in Perl scripts version 4
PLATFORM_NAME ENDIAN_FORMAT
____________________________________ ________________
AIX-Based Systems (64-bit) Big
Apple Mac OS Big
HP-UX (64-bit) Big
HP-UX IA (64-bit) Big
IBM Power Based Linux Big
IBM zSeries Based Linux Big
Linux OS (S64) Big
Solaris[tm] OE (32-bit) Big
Solaris[tm] OE (64-bit) Big
Stored in SYSTEM tablespace | Stored in user tablespaces (TBS1, TBS2)
• Database version
• Database architecture (non-CDB / CDB)
Data Pump
Full Transportable
Export/Import
New, empty
Tablespace
Meta plug-in
data transfer 19c PDB
Source database (users, tables, indexes, triggers, PL/SQL ...)
12.1.0.2
SYSTEM SYSTEM
Watch on YouTube
Data Pump
Full Transportable
Export/Import
New, empty
Tablespace
Meta plug-in
data transfer 19c PDB
Source database (users, tables, indexes, triggers, PL/SQL ...)
12.1.0.2
SYSTEM SYSTEM
Tablespaces (DATA): Level 0 image file backup, Level 1 incremental backups, final incremental backup
Set tablespaces read-only before the final incremental backup
Convert data files of a different endian format on restore
Watch on YouTube
Data Pump
Full Transportable
Export/Import
Source Target
Primary
Tablespace plug-in
Redo
via redo apply
Tablespaces Target
Restore Standby
Recommendation
• Keep production CDB on AL32UTF8
• Provision temporary CDB with desired character set
• Create new empty PDB in temporary CDB
• Clone custom PDB to production CDB
Database Creation | Backup / Recovery | TDE | PERL Scripts
Enable Block Change Tracking on source for incremental backups
SQL> SELECT status, filename FROM V$BLOCK_CHANGE_TRACKING;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
• Conversion on destination is usually faster than on source
• Requires
• Enterprise Edition (on-prem)
• Enterprise Edition Extreme Performance (DBCS)
• Exadata
Linux:
• Source: 10.2.0.3 or newer
• Target: 11.2.0.4 or newer
Other:
• Source: 12.1.0.1 or newer
• Target: 12.1.0.1 or newer
• Or use older scripts:
11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
Automate
• To ensure consistency and avoid human error
Clean-up procedure
• In case of failure and rollback
• To repeat tests
• Offline source database afterwards
• Significantly speed up metadata export and import phase
19c
DIRECTORY=DP_DIR
DUMPFILE=tts.dmp
LOGFILE=logfile.log
TTS_CLOSURE_CHECK=TEST_MODE
TRANSPORT_TABLESPACES=(TTS)
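Before the export, the transport set can also be validated directly in the database (sketch; TTS is the tablespace name from the parameter file above):
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('TTS', TRUE);
SQL> SELECT * FROM transport_set_violations;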
19c
Step-by-Step
1. Setup   2. Prepare & Level-0   3. Roll Forward   4. Final Incremental Backup   5. Transport   6. Clean Up
TBS2
TBS1
TBS2
Staging
DBMS_FILE_TRANSFER
xttdriver.pl -S
xttdriver.pl -G
Convert
xtt.properties
tablespaces=TBS1,TBS2
platformid=13
src_scratch_location=/NFS_backups/
dest_scratch_location=/NFS_backups/
dest_datafile_location=+DATA
asm_home=/u01/app/19/grid
asm_sid=+ASM1
parallel=16
rollparallel=2
res.txt
scp
Level 0
TBS1
TBS2
Staging
Level 0
xttdriver.pl --backup
Level 0
TBS1 TBS1
TBS2 TBS2
Staging
Level 0
xttdriver.pl --restore
res.txt
scp
Level 1
TBS1
TBS2
Staging
Level 1
xttdriver.pl --backup
Level 1
TBS1 TBS1
TBS2 TBS2
Staging
Level 1
xttdriver.pl --restore
res.txt
scp
Level 1 Level 1
TBS1 TBS1
TBS2 TBS2
Staging
Level 1 Level 1
Read Only
TBS1
TBS2
Read Only
res.txt
scp
TBS1 TBS1
TBS2 TBS2
Staging
Read Only Level 1 Level 1
$ impdp SYSTEM \
    NETWORK_LINK=v121 \
    FULL=Y \
    TRANSPORTABLE=ALWAYS \
    TRANSPORT_DATAFILES='+DATA/tbs1.dbf' \
    TRANSPORT_DATAFILES='+DATA/tbs2.dbf'
RMAN validation of the transported tablespaces (TBS1, TBS2), source tablespaces still read-only
Customer Customer
• One of the top healthcare insurance providers in the United States
Project 2017
• Over 50,000 employees, over $50 BILLION annual revenue
Constraints
Oracle Partner
Preparation
• Centric Consulting, a management and technology consulting company
Migration • Oracle Platinum Partner
Success?
Remarks
Customer Case | Health Care
Customer Source
• AIX 5.3, Oracle Database 11.2.0.3, DB on filesystem
Project 2017
Constraints Target
Preparation • Exadata running Oracle Linux, Database 12.1.0.2, RAC/ASM
Migration
Enterprise data warehouse & operational data store
Success? • Critical for day-to-day operations
• Minimizing downtime is critical
Remarks
• Data Guard for DR
Customer Case | Health Care
Remarks
Customer Case | Health Care
Customer Workarounds
• 40 identical directories, each held a complete XTTS PERL script installation
Project 2017
• Distribute >530 tablespaces into 40 tablespace groups
Constraints • Run 40 jobs concurrently with PARALLEL=2, or 80 files at a time
Preparation
Result: ~800 MB/sec throughput
Migration
• Reduced prepare phase from 27 days to 6 days
Success?
Remarks
Customer Case | Health Care
Customer Environment
Project 2017
Constraints
Preparation
Migration
Remarks
Customer Case | Health Care
Success?
Remarks
Customer Case | Health Care
Project 2017
Constraints
Preparation
Migration
Success?
Remarks
Downtime window: 18 hours
Customer Case | Health Care
Success?
Remarks
Pushing the limits
https://www.oracle.com/technetwork/database/exadata/exacc-x7-ds-4126773.pdf
Restore/Recover
Rebuild Meta
Restore Level 1
Restore/Recover
dbmigusera.pl
Inc Backup Transport
Customized PERL scripts
for Recovery Appliance
Rebuild Meta
Customize xtt.properties
# SBT parameter configuration to be used for restore and recover operations
sbtlibparms="SBT_LIBRARY=/u01/app/oracle/product/12.2.0.1/dbhome_1/lib/libra.so, …')"
sourceplatid=2
dbid=4173218531
retrycount=2
Restore/Recover
Restore/Recover
Custom TTS
Customized
Restore/Recover Transportable
Tablespaces Scripts
Rebuild Meta
Check script
• Checks for objects in SYSTEM tablespace
• Size of database
• Tablespaces
• Object count meta objects
• And more …
CHECK
AWR extract
Plan Capture
No downtime required
Create
• Profiles
• Dummy tablespaces
Restore/Recover
• Temporary tablespaces
• Users
Transport • Directories
Drop
Rebuild Meta
• Dummy tablespaces
Create
• Grants, Roles, Directories, Functions for
tables/indexes
From dfcopy.txt (xtt PERL):
• Generate TRANSPORT_DATAFILES strings
Restore/Recover
Transport Downtime!
Rebuild Meta
Metadata export
• function, package, procedure, database_link,
sequence, view, synonym
Transport
Important
Rebuild Meta • GATHER_SCHEMA_STATS('SYS')
• GATHER_SCHEMA_STATS('SYSTEM')
TTS Import
• Afterwards tablespaces are attached
Restore/Recover
Transport
Rebuild Meta
TTS
Import
Meta Import
• Rebuilds:
• Functions
Restore/Recover • Packages
• Procedures
Transport
• Database links
Rebuild Meta
• Sequences
• Views
• Synonyms
Metadata
Import
Clean up
• Drop GRP
• Other treatments
Run check script
• Comparison Before/After
CHECK CHECK
2022
TRADEWARE
PERSONAL – SYMPATHIC – STRONG
Founded
1990
Company form
Public limited company,
Locations
privately owned Thalwil
St. Gallen
Employees Basel
28 Bern (planned)
© Tradeware AG
THOMAS GOOP
Senior Consultant
Engineered Systems, HA/DR, Backup/Recovery,
Migration
© Tradeware AG
PROJECT
© Tradeware AG
SOME FIGURES
- Source
- AIX 7 on P9 HW, Oracle RDBMS 19.13
- Target
- Exadata X8, Oracle RDBMS 19.13, Single Instance
- Database
- 40+ TB
- Storage Mirroring for DR
- Data Guard as DR backup, async
© Tradeware AG
CHALLENGES
© Tradeware AG
PREPARATION
- Getting a RA
- Reading Mike’s «Migrating Very Large Databases»
- Distributing the TBS based on size into 16 RA XTTS PERL homes
© Tradeware AG
ENVIRONMENT
Site A Site B
X8-8
Backup X6-2
Restore X8-8 Data Guard
X8M-2
X6-2 X8M-2
X8-2 EF
X8-2 EF
X8-2 EF X8M-2 HC
X8-2 EF X8M-2 HC
X6-2 HC X8-2 EF X8M-2 HC
X6-2 HC X8-2 EF X8M-2 HC
X6-2 HC X8-2 EF X8M-2 HC
© Tradeware AG
IMPROVEMENTS
© Tradeware AG
MIGRATION
Online
- Full backup 26h Restore 8h
- 1st Incremental 10h Recover 2h
- 2nd incremental 3h Recover 1h
Read-Only
- Final Incremental 1h Recover 0.5h
- Expdp 4h Impdp 2h
© Tradeware AG
Different
MIGRATION
techniques
Data Pump | Transportable | Data Guard | RMAN | GoldenGate
Configure extract
• Extremely flexible
• DDL replication
• Truncate
• Sequences
SOURCE_OBJECT_NAME INSTANTIATION_SCN
------------------------------------
TCUSTMER           829723224
TCUSTORD           829723223
New Replicat parameter:
DBOPTIONS ENABLE_INSTANTIATION_FILTERING
Bandwidth:
Integrated Extract - only the changes to tables that are being captured will be sent to the Extract
process itself
GoldenGate requires minimal database supplemental logging, which does not impose a significant overhead
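Minimal supplemental logging is enabled and checked like this (sketch):
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> SELECT supplemental_log_data_min FROM v$database;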
Generate report:
• Check prerequisites
• Database characteristics
• Find database objects of interest
• Extract/replicat statistics
• Check database readiness
Source
database
Downstream
database
Redo
transport Extract
GoldenGate
Target
database Replicat
GoldenGate
Optional Target Target
replication
primary standby
Redo
apply
GoldenGate
replication Target
database
Start
Backup
Level 0
backup
completed
Target database
Schema A
Target database
Schema B
Target database
Schema C
New
database
Bidirectional
replication
Supports:
• Oracle Database 11.2.0.4 and higher
Pay license
Traditional
OCI GoldenGate
Integrated Extract → Trail files → Integrated Replicat (transported over SQL*Net)
Watch on YouTube
GoldenGate moves petabytes of real-time data per day at Web scale
Get started for $1.34 per OCPU per hour
84%
of Fortune 100
use GoldenGate
CPU Usage
Pay only for what you use
• Scaling happens online / no downtime
• Per-second billing
4
OCI GoldenGate
0
Dynamic Auto-Scale
Automatically scale up to 3X
Scaling with zero downtime
Development / Trials
• Start with 1 OCPU and Auto-scale on
Bandwidth:
Integrated Extract - only the changes to tables that are being captured will be sent to the Extract
process itself
"
Starting in Oracle Database 19c (19.1), Oracle Streams is desupported. Oracle
GoldenGate is the replication solution for Oracle Database.
Further Information
Different
MIGRATION
techniques
Transient
duplicate
Standby
Logical
Golden
RMAN
RMAN
Guard
Pump
FTEX
FTEX
Data
Data
Gate
Incr.
Incr.
TTS
Simplicity Simple Simple Complex Simple Simple Complex Moderate Moderate Complex
Downtime Significant Near Zero Near Zero Significant Low Significant Significant Low Zero
Version Change + + + + + +
Same-Endianness OS Change + (+) (+) (+) (+) + + + +
Big/Little Endianness OS Change + + + + +
Same Hardware + + + + + + + +
Hardware Exchange + + + + + + + + +
non-CDB to CDB/PDB + + + + +
Encrypt + + + + + +
Fallback After Go-Live / Upgrade + +
Character Set Change + +
Simplicity Downtime
Performance Stability | After Migration
Performance Stability
Tips, Tricks & Underscores - Thursday 4 March 2021
• https://mikedietrichde.com/2019/12/06/great-license-news-spatial-and-graph-machine-learning/
12.1
Parameter:
• MAX_STRING_SIZE=EXTENDED
• Can be used on PDB level
12.1
Step by step:
ALTER SYSTEM set MAX_STRING_SIZE=EXTENDED scope=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE
STARTUP
12.1
Caution!
• For new tables:
• Up to 3964 bytes will be stored in a regular VARCHAR2
• Above this limit, data will be stored in an inline SecureFile LOB
• See: http://www.ludovicocaldara.net/dba/extended-data-types-storage/
• _scalar_type_lob_storage_threshold=4000 by default
• Inline SecureFile LOBs do not support NOLOGGING
• For existing tables:
• Row chaining
• Workaround: DBMS_REDEFINITION or Online Table Move
12.1
Performance
• Extended VARCHAR2 can save roundtrips
• See: https://blog.dbi-services.com/12c-extended-datatypes-better-than-clob/
Statistics Statistics
---------------------------------------------- ----------------------------------------------
5 recursive calls 4 recursive calls
0 db block gets 0 db block gets
136 consistent gets 28 consistent gets
80 physical reads 0 physical reads
0 redo size 0 redo size
16310 bytes sent via SQL*Net to client 90721 bytes sent via SQL*Net to client
11890 bytes received via SQL*Net from client 380 bytes received via SQL*Net from client
52 SQL*Net roundtrips to/from client 2 SQL*Net roundtrips to/from client
0 sorts (memory) 0 sorts (memory)
0 sorts (disk) 0 sorts (disk)
10 rows processed 10 rows processed
12.1
Potential pitfall
• COMPATIBLE < 19.0.0
• SecureFile LOB requires at least 16 blocks in an extent
• COMPATIBLE ≥ 19.0.0
• SecureFile LOB requires at least 32 blocks in an extent
Example:
• DB_BLOCK_SIZE: 16K
• Tablespace uniform extent size: 128k
→ One extent contains only 8 blocks
→ No SecureFile LOB can be created
12.1
Rename:
SQL> ALTER DATABASE
MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf'
TO '/u01/oracle/rbdb1/user01.dbf';
Relocate to ASM:
SQL> ALTER DATABASE
MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf'
TO '+DATA';
12.1
12.1
Only works for data files that belong to the current container
Watch on YouTube
Online
Table Move
12.2
Move table:
SQL> alter table lots_of_data move online tablespace users;
In parallel:
SQL> alter table lots_of_data move online tablespace users parallel 4;
12.2
12.2
12.2
Compress:
SQL> alter table lots_of_data
move online tablespace users
row store compress advanced;
Uncompress:
SQL> alter table lots_of_data
move online tablespace users
nocompress;
12.2
Caution:
• ROWIDs change
Watch on YouTube
Online Convert to Partitioned Table | Overview NEW IN
12.2
Convert:
SQL> alter table lots_of_data
modify partition by hash (object_id) partitions 8
online
update indexes (i_lots_of_data global);
Watch on YouTube
DBMS_REDEFINITION
"
You can redefine tables online with the DBMS_REDEFINITION package.
... it is accessible to both queries and DML during much of the redefinition process.
Typically, the table is locked in the exclusive mode only during a very small window ...
Database 19c Administrator's Guide
Diagram: the interim table is kept in sync while you make your changes; the exclusive lock is needed only for the final, very small window
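A minimal sketch of the classic DBMS_REDEFINITION flow (schema APP, table SALES and interim table SALES_INTERIM are just example names; the interim table with the desired new structure must already exist):
SQL> exec DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'SALES');
SQL> exec DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'SALES', 'SALES_INTERIM');
SQL> DECLARE
       l_errors PLS_INTEGER;
     BEGIN
       -- copy indexes, triggers, constraints and privileges to the interim table
       DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'SALES', 'SALES_INTERIM',
                                               num_errors => l_errors);
     END;
     /
SQL> exec DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'SALES', 'SALES_INTERIM');
SQL> exec DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'SALES', 'SALES_INTERIM');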
18c
Since Oracle 18c, you can install and patch at the same time
• For GI homes
$ mkdir /u01/app/grid/1990
$ cd /u01/app/grid/1990
$ unzip LINUX.X64_193000_grid_home.zip
$ unzip p31750108_19000_Linux_x86-64.zip
18c
Since Oracle 18c, you can install and patch at the same time
• For DB homes
$ mkdir /u01/app/oracle/product/1990
$ cd /u01/app/oracle/product/1990
$ unzip LINUX.X64_193000_db_home.zip
$ unzip p31771877_190000_Linux-x86-64.zip -d /u01/app/oracle/product/1990/patch
18c
• With multiple patches, you need to separate the subdirectories as otherwise the patch xml file gets
overwritten and patches won't be found
18c
Setup
1. Install as usual
[oracle@hol ~]$ cd /u01/app/oracle/product/ROOH19/
[oracle@hol ROOH19]$ cd bin
Watch on YouTube
Important directories
cd $(orabaseconfig)
/u01/app/oracle
cd $(orabasehome)
/u01/app/oracle/homes/OraDB19Home2
Warning: Oracle Data Pump is exporting from a database that supports long
identifiers to a version that does not support long identifiers.
• https://mikedietrichde.com/2018/07/09/export-with-data-pump-and-long-identifiers/
• Database links
21
Numeric operation:
SQL> alter system set cpu_count='8/2' scope=both;
Other parameters:
SQL> alter system set sga_target=sga_max_size scope=both;
Combination:
SQL> alter system set shared_pool_size='sga_target*0.2' scope=both;
Expression Based Parameters | Overview NEW IN
21
Environment variable:
SQL> alter system set cpu_count='$NUMBER_OF_PROCESSORS/2';
21
PFile:
*.cpu_count=(${NUMBER_OF_PROCESSORS} / 2)
*.aq_tm_processes=MIN(40, PROCESSES*0.1)
*.job_queue_processes=processes
Documentation: Syntax
Expression Based Parameters | Demo
Watch on YouTube
Data Guard
19
Documentation: Restore Point Replication and Automatic Flashback and DML redirect
Watch on YouTube
12.2
12.2
Pro tip: Get all the details in the blog post How to
Documentation: Concept Stop Hardcoding Your TDE Keystore Password
Watch on YouTube
shutdown
startup
administer key management set keystore open force keystore identified by S3cr3t container=all;
5. Autologin Keystore
$ ls -lrt /u01/app/oracle/admin/CDB2/wallet/tde/
18c
18c
Watch on YouTube
19
Documentation
Gradual Password Rollover | Overview NEW IN
19
(TYPE=(DATABASE));(CLIENT ADDRESS=((PROTOCOL=tcp)(HOST=10.0.1.225)(PORT=24983)));
(LOGON_INFO=((VERIFIER=12C-OLD)(CLIENT_CAPABILITIES=O5L_NP,O7L_MR,O8L_LI)));
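Gradual password rollover in a sketch (profile and user names are examples; PASSWORD_ROLLOVER_TIME is given in days):
SQL> ALTER PROFILE app_profile LIMIT PASSWORD_ROLLOVER_TIME 1;
SQL> ALTER USER app_user IDENTIFIED BY "New_Password_2022";
-- during the rollover period the old and the new password are both accepted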
19
Watch on YouTube
Oracle 12.1.0.2
Oracle 12.2.0.1
Oracle 18c
Oracle 19c
DBMS_PRIVILEGE_CAPTURE
CREATE_CAPTURE
• Create a privilege capture analysis policy
ENABLE_CAPTURE
• Enable the analysis policy; run it for a given period of time
DISABLE_CAPTURE
• Stop the privilege analysis run
GENERATE_RESULT
• Populate dictionary views with analysis results
DROP_CAPTURE / DELETE_RUN
• Drop it if it is no longer needed, or delete the results of this run only
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name        => 'tuning_privs',
    description => 'analyze tuning privs',
    type        => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT,
    condition   => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'')=''SMITH''');
END;
/
BEGIN
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('tuning_privs');
END;
/
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('tuning_privs');
END;
/
BEGIN
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('tuning_privs');
END;
/
DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT
USERNAME SYS_PRIV
-------- ------------------------------
SMITH ADMINISTER ANY SQL TUNING SET
SMITH ADMINISTER DATABASE TRIGGER
SMITH ADMINISTER RESOURCE MANAGER
SMITH ADMINISTER SQL TUNING SET
SMITH ALTER ANY ASSEMBLY
SMITH ON COMMIT REFRESH
...
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DELETE_RUN('tuning_privs');
END;
/
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE('tuning_privs');
END;
/
Characteristics:
Documentation: Concept
Watch on YouTube
2.
Methods
3.
Scenarios
CONCEPTS
FALLBACK
Returns the database to a previous
release without losing changes
• Tablespace header
compatible=12.1.0 compatible=19.0.0
On plug-in:
COMPATIBLE
• Enables features
• Changes on-disk structures
OPTIMIZER_FEATURES_ENABLE
• Just reverts to the parameters used in a previous release
• Avoid using it if possible
• This is not a Swiss Army knife!
• You will turn off a lot of great features
"
Modifying the OPTIMIZER_FEATURES_ENABLE parameter generally is strongly discouraged
and should only be used as a short term measure at the suggestion of Oracle Global Support.
Use Caution if Changing the OPTIMIZER_FEATURES_ENABLE Parameter After an Upgrade (Doc ID 1362332.1)
CDB1 CDB2
12.2.0.1 19.13.0
timezlrg_26.dat timezlrg_32.dat
CDB1 CDB2
12.2.0.1 19.13.0
timezlrg_26.dat timezlrg_32.dat
Patch: timezlrg_32.dat
$ ls -l $ORACLE_HOME/oracore/zoneinfo
...
11.2.0.4 timezone_14.dat
...
12.1.0.2 timezone_18.dat
...
timezone_25.dat
12.2.0.1 timezone_26.dat
timezone_27.dat
timezone_28.dat
timezone_29.dat
timezone_30.dat
18 timezone_31.dat
19 timezone_32.dat
timezone_33.dat
timezone_34.dat
21 timezone_35.dat
timezone_36.dat
Workaround:
Create database creation scripts with DBCA
Backup
Flashback
2. Downgrade
Methods Data Pump
GoldenGate
3.
Scenarios
4. 5.
Practice Summary
BACKUP
Read-only
A database upgrade does not touch user data
Start upgrade
Pro tip: Works for SE2 and databases in NOARCHIVELOG mode
Read-write
To restore
FLASHBACK
• Requires:
• Enterprise Edition
• ARCHIVELOG mode
• 10-20 GB for Flashback Logs
• COMPATIBLE must not be changed
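AutoUpgrade creates a guaranteed restore point for you (see the GRP defaults later in this chapter); the manual equivalent before an upgrade is a sketch like this (the restore point name is just an example):
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
SQL> SELECT name, guarantee_flashback_database FROM v$restore_point;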
FLASHBACK
DOWNGRADE
• No data loss
• Requires:
• COMPATIBLE must not be changed
• Time zone file version must match
Examples:
• New dictionary tables are not dropped, but truncated
• New indexes are not dropped
• Generally, dropping is avoided
Downgrade reverts only the data dictionary
to a state compatible with a previous release
19c 11.2.0.4
19c 12.1.0.2
19c 18c
21c 12.2.0.1
21c 18c
21c 19c
CDB-only architecture
DOWNGRADE
SQL> @?/rdbms/admin/catdwgrd.sql
SQL> shutdown immediate
DATA PUMP
• Requires:
• Time zone file version must match
DATA PUMP
impdp ...
Exported from 19c to a dump file with e.g. VERSION=11.2.0.4, copied over the network, imported into the 11.2.0.4 database
Pro tip: A shared NFS mount point eliminates the network copy
GOLDENGATE
Move back to the old system via GoldenGate:
1. Start GoldenGate Capture
2. Data Pump via dump file: expdp ... version=11.2.0.4
3. Trail files
4. Apply changes
5. Switch clients back when synchronization has caught up
Pro tip: If you keep your old database in place, you can skip the Data Pump step
Bidirectional replication
various methods to
SUMMARY
Data Loss x x
Phased migration x
Documentation
1.
Intro
2.
Methods
Upgrade
3. Upgrade with PDB Conversion
Scenarios Unplug / Plug / Upgrade
4. 5.
Practice Summary
Unplug-Plug
PDB to PDB
Conversion
non-CDB to PDB
Upgrade
non-CDB to non-CDB
CDB to CDB
• Default behavior:
• AutoUpgrade creates GRP except for
- Standard Edition 2
- restoration=no
• GRP will be kept
• GRP needs to be removed manually except for
- drop_grp_after_upgrade=yes will only remove it when upgrade completed successfully
• /etc/oratab
• Clusterware registration
• Moving files
• PFile
• SPFile
• Password file
• Etc.
Watch on YouTube
FLASHBACK
PRIMARY:
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database ...;
SQL> alter database open resetlogs;
STANDBY:
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database ...;
SQL> alter database recover managed standby database ...;
In Oracle Database 19c you can EXPORT CONFIGURATION and IMPORT CONFIGURATION in Data Guard CLI (DGMGRL)
DB REDO
DB
DB_BOSTON DB_fra25c
DB REDO
DB
DB_BOSTON DB_fra25c
Watch on YouTube
• 18
• 12.2
• 12.1.0.2
• Dictionary statistics
• Gather immediately after downgrade
• Optimizer statistics
• Regather stale statistics
PRIMARY STANDBY
SQL> startup downgrade
$ ./dbdowngrade
Wait for all redo to be applied
Restart database in lower release Oracle Home Restart database in lower release Oracle Home
SQL> @catrelod
SQL> @utlrp
$ datapatch -verbose
In Oracle Database 19c you can EXPORT CONFIGURATION and IMPORT CONFIGURATION in Data Guard CLI (DGMGRL)
Stop database (all instances) and start one instance in higher release Oracle Home
SQL> alter system set cluster_database=false scope=spfile sid='*';
Downgrade
$ ./dbdowngrade
$ datapatch -verbose
BOSTON CHICAGO
DB REDO
DB
DB_BOSTON DB_fra25c
DB REDO
DB
DB_BOSTON DB_fra2wv
Watch on YouTube
...
• Bug 30072483
Unplug-Plug
PDB to PDB
Conversion
non-CDB to PDB
Upgrade
non-CDB to non-CDB
CDB to CDB
Source database
12.2.0.1
Target database
19c COMPATIBLE=12.2.0
Command
java -jar autoupgrade.jar -config DB19.cfg -mode deploy
COMPATIBLE=12.2.0
CDB 12.2.0.1
NON-CDB PDB
12.2.0.1 12.2.0.1
@$ORACLE_HOME/rdbms/admin/pdb_to_noncdb.sql
NON-CDB PDB
12.2.0.1 19.11
COMPATIBLE=12.2.0
Transportable TTS
• Blog post
COMPATIBLE=12.2.0
Watch on YouTube
Unplug-Plug
PDB to PDB
Conversion
non-CDB to PDB
Upgrade
non-CDB to non-CDB
CDB to CDB
CDB1 CDB2
12.2.0.1 19.13.0
COMPATIBLE=12.2.0 COMPATIBLE=19.1.0
Source and target CDB must have the identical time zone files present
• Check time zone file in source and target CDBs
select * from V$TIMEZONE_FILE;
Source and target CDB must have the identical time zone files present
• Apply time zone patch to CDB$ROOT
SQL> start $ORACLE_HOME/rdbms/admin/utltz_upg_check.sql
SQL> start $ORACLE_HOME/rdbms/admin/utltz_upg_apply.sql
• Attention: Restart will happen
CDB1 CDB2
12.2.0.1 19.13.0
COMPATIBLE=12.2.0 COMPATIBLE=12.2.0
start ?/rdbms/admin/catrelod.sql
Downgrade-Unplug-Plug | PDB Recompilation
Recompilation
• Start utlrp.sql
@$ORACLE_HOME/rdbms/admin/utlrp.sql
Upgrade to 19c
AutoUpgrade UPGR
Convert to PDB
AutoUpgrade UPGR CDB2
Migrate to 19c
Full transportable export/import FTEX CDB2 / PDB2
Unplug-plug upgrade
AutoUpgrade PDB3 CDB2
Upgrade to 19c
AutoUpgrade DB12
Then launch the workshop and access it within your browser (noVNC link)
No installation of any tool is required if you use the Green Button lab
• Instructions are in the browser inside the lab
• 100+ videos
• No marketing
• No buzzword
• All tech
Link
https://MikeDietrichDE.com
https://DOHdatabase.com
https://www.dbarj.com.br/en
THANK
YOU
Webinars:
https://MikeDietrichDE.com/videos
YouTube channel:
OracleDatabaseUpgradesandMigrations
THANK
YOU
HANDS-ON LAB
Instructions
Live Labs
Guided tour
THANK
YOU