
What’s New In Oracle 19c Database

A brief Presentation on some of its important features

Presented by Amit Kurani


Agenda

❖ Oracle Database 19c brief introduction.


❖ Oracle Database 19c Supported OS (Operating Systems).
❖ Long term support.
❖ Important features.
❖ New V$ views.
❖ Deprecated parameters.
❖ Q & A.

Oracle Database 19c offers market-leading performance, scalability,
reliability, and security, both on-premises and in the cloud. And since
it is the Long Term Release of the Oracle Database 12c and 18c
family of products, it provides the highest level of release stability
and the longest time frame for support and bug fixes.

Oracle Database 19c is the final, and therefore 'long term support',
release of the Oracle Database 12c family of products (which
includes Oracle Database 18c). 'Long term support' means that
Oracle Database 19c comes with 4 years of Premier Support and a
minimum of 3 years of Extended Support.

Oracle Database 19c Supported OS (Operating Systems)

• Oracle Linux 7.4 or later
• Red Hat Enterprise Linux 7.4 or later
• SUSE Linux Enterprise Server 12 SP3 or later
• Oracle Solaris 11.4 or later
• Windows Server 2019
• Windows Server 2016
• Windows Server 2012 R2
• Windows 10 64-bit (RAC is not supported)
• Windows 8.1 64-bit (RAC is not supported)
• AIX 7.1 TL5 SP1 or later, AIX 7.2 TL2 SP1 or later
Direct upgrade path for a 19c upgrade

Direct upgrade to Oracle 19c is supported from 11.2.0.4, 12.1.0.2, 12.2.0.1, and 18c; older releases must first be upgraded to one of these intermediate versions.

New AutoUpgrade Tool
❖ Oracle Database AutoUpgrade allows DBAs to upgrade one
or many databases without human intervention.

❖ The AutoUpgrade utility identifies issues before upgrades,
deploys upgrades, performs post-upgrade actions, and
brings up the upgraded Oracle Database.

❖ Currently supports 12.2 and 18c.

See MOS Note 2485457.1 for more details
Auto-Upgrade for Oracle Database

What is AutoUpgrade?

The Oracle Database AutoUpgrade utility is a small new command-line tool which allows you
to upgrade your databases in an unattended way – a hands-free upgrade. The idea of
the tool is to run the prechecks against multiple databases, fix 99% of the potential issues, set a
restore point in case something goes wrong – and then upgrade your databases. And of course,
do the post-upgrade steps, recompilation, and time zone adjustment.

It includes automatic retry and fallback, the possibility to schedule upgrades for future points in
time, and the ability to set, change, or remove initialization parameters as desired.
It saves time and money by upgrading hundreds of databases with one command, replacing
bespoke, high-maintenance upgrade solutions.

The only thing you need to provide is a config file in text format.
Note:-
You get AutoUpgrade when you install Oracle Database 19c (19.3) or newer. Or – and this is the
recommended source – you can download the most recent version from MOS Note 2485457.1.

Example of config file
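A minimal sketch of such a config file, assuming a single 12.2 database with SID ORCL (all paths, names, and the file name config.cfg are illustrative placeholders):

global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.sid=ORCL
upg1.start_time=NOW
upg1.restoration=yes

AutoUpgrade is then typically run twice, first in analyze mode and then in deploy mode:

java -jar autoupgrade.jar -config config.cfg -mode analyze
java -jar autoupgrade.jar -config config.cfg -mode deploy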

Automatic Indexing
The automatic indexing feature automates the index management tasks in an Oracle database. Automatic indexing automatically
creates, rebuilds, and drops indexes in a database based on the changes in application workload, thus improving database
performance. The automatically managed indexes are also known as auto indexes.

Functionality
❖ Runs the automatic indexing process in the background periodically at a predefined time interval.

❖ The index management task is now dynamically performed by the database itself via a task that executes in the background
every 15 minutes.

❖ Analyzes the application workload and accordingly creates new indexes and drops existing underperforming indexes to
improve database performance.

❖ Rebuilds the indexes that are marked unusable due to table partitioning maintenance operations, such as ALTER TABLE MOVE.

❖ Provides PL/SQL APIs for configuring automatic indexing in a database and generating reports related to automatic indexing
operations.

Note:-
This feature is currently restricted to Enterprise Edition on Engineered Systems.
The DBMS_AUTO_INDEX package is used to manage the automatic indexing feature, for example to enable automatic
indexing in a database and create any new auto indexes as visible indexes so that they can be used in SQL statements.

The on-off switch for automatic indexing is controlled using the AUTO_INDEX_MODE property, which has the
following allowed values.

• IMPLEMENT : Turns on automatic indexing. New indexes that improve performance are made visible and
available for use by the optimizer.
• REPORT ONLY : Turns on automatic indexing, but new indexes remain invisible.
• OFF : Turns off automatic indexing.
The automatic indexing process runs in background every 15 minutes and performs the following operations:
1. Identifies auto index candidates based on the usage of table columns in SQL statements.
2. Creates invisible auto indexes for the auto index candidates.
3. The invisible auto indexes are validated against SQL statements.

A. Indexes are marked visible if performance is improved.

B. Indexes are marked unusable if the performance of SQL statements is not improved.
C. If performance degrades only for some SQL statements, the indexes are marked visible, except for those statements, which are blacklisted.

4. The auto indexes that are not used for a long period are deleted (373 days by default).
How to Configure?

❖ EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

Disable automatic indexing in a database, so that no new auto indexes are created, and the existing auto indexes are disabled:

❖ EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

Set retention period for unused auto indexes to 90 days:

❖ EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '90');

Set retention period for unused non-auto indexes to 60 days:

❖ EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL', '60');

Define tablespace TBSAUTO to store auto indexes:

❖ EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE', 'TBSAUTO');

Allocate 5% of the tablespace for auto indexes:

❖ EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SPACE_BUDGET', '5');
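The activity of the automatic indexing task can be reviewed afterwards with the package's reporting function; a minimal sketch (the parameterless call reports the last 24 hours):

SET LONG 1000000 PAGESIZE 0
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;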


Real-Time Statistics
Oracle database 19c introduced real-time statistics to reduce the chances that stale statistics will adversely affect optimizer
decisions when generating execution plans.

Oracle database 12.1 introduced online statistics gathering for bulk loads. This feature allowed the database to gather a subset of
statistics during CTAS and some direct path insert operations. Real-time statistics take this further, allowing statistics to be
gathered as part of some conventional path operations.

Oracle introduced the new parameters "_optimizer_gather_stats_on_conventional_dml" and
"_optimizer_use_stats_on_conventional_dml", which are TRUE by default.
How do real-time statistics work?

❖ By default "_optimizer_gather_stats_on_conventional_dml" is TRUE, so real-time statistics gathering kicks in automatically.

❖ When a conventional DML operation is modifying a table, Oracle Database dynamically computes values for the
most essential statistics, provided the above parameter is on.
❖ Consider the example of a table with heavy insert activity and a growing row count. Real-time statistics keep track of the
increasing row count as rows are being inserted. If the optimizer performs a hard parse of a new query, then the optimizer can
use the real-time statistics to obtain a more accurate cost estimate.
❖ DBA_TAB_COL_STATISTICS and DBA_TAB_STATISTICS have a NOTES column; the value STATS_ON_CONVENTIONAL_DML
indicates that real-time statistics have been used.
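A quick way to see which column statistics were gathered by conventional DML, as a sketch (filter on the owner of interest as needed):

SELECT table_name, column_name, notes
FROM   dba_tab_col_statistics
WHERE  notes = 'STATS_ON_CONVENTIONAL_DML';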
No more stale statistics in 19c
There is an odd contradiction that we all encounter for most databases, especially if they are predominantly
used during the business day. Here is how that contradiction comes to be – it is in the way that we obtain and
use optimizer statistics on those databases. The contradiction runs like this:

• To minimize service disruption, we gather statistics at a quiet time, for example, in the middle of the night.
• We then use those statistics during the business day whilst user activity is at its highest.
• Highest user activity will typically mean the highest frequency of data changes.
• Hence the statistics are at their peak accuracy when no-one is using them to optimize queries, and they are
at their least accurate when everyone is using them to optimize queries!

Having great statistics that are not used, and poor statistics that are used all the time seems a nonsensical way to do things,
but there really hasn’t been any alternative unless you were prepared to consume precious server resources to collect
statistics whilst your users are performing their transactions. I have known customers to do this in the past, but of course, it
takes careful monitoring and management to ensure that the slicing up of the resource “pie” is done appropriately to keep
the right balance between business needs and background database tasks.

19c brings a nifty solution to this contradiction by having the database automatically maintain details about how data is
changing in a table via a feature called Real Time Statistics.
Enable the Real time Statistics

--- optimizer online stats gathering for conventional DML


ALTER SYSTEM SET "_optimizer_gather_stats_on_conventional_dml"= TRUE;

--- use optimizer statistics gathered for conventional DML


ALTER SYSTEM SET "_optimizer_use_stats_on_conventional_dml"= TRUE;

Disable the Real time Statistics

--- optimizer online stats gathering for conventional DML


ALTER SYSTEM SET "_optimizer_gather_stats_on_conventional_dml"= FALSE;

--- use optimizer statistics gathered for conventional DML


ALTER SYSTEM SET "_optimizer_use_stats_on_conventional_dml"= FALSE;

Use of a hint with Real-time Statistics

The NO_GATHER_OPTIMIZER_STATISTICS hint prevents the collection of real-time statistics.
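For example, to suppress real-time statistics collection for a single conventional DML statement (table names are illustrative):

INSERT /*+ NO_GATHER_OPTIMIZER_STATISTICS */ INTO sales_history
SELECT * FROM sales;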

SQL Quarantine
Oracle Database 19c introduced an extension of Runaway Query Management called SQL Quarantine.

Cancelling a runaway query is helpful to prevent wasting system resources, but if that problem query is run repeatedly, it could
still result in a considerable amount of wasted resources. SQL Quarantine solves this problem by quarantining cancelled SQL
statements, so they can't be run multiple times.

With Oracle Resource Manager, we had a way to limit and regulate use of resources like CPU and I/O as well as we had the ability
to prevent the execution of any long running queries which exceeded a defined threshold.

So we could ‘cancel’ or terminate a SQL query which was running longer than a defined threshold of say 20 minutes.

All that was good – but nothing prevented that same query from being executed again and again, each time running for 20
minutes and only then getting terminated, wasting 20 minutes of resources on every execution.

New in Oracle 19c is the concept of SQL Quarantine: if a particular SQL statement exceeds the specified resource limit
(set via Oracle Resource Manager), then the Resource Manager terminates the execution of that statement and "quarantines"
the plan.
Example→
https://oracle-base.com/articles/19c/sql-quarantine-19c#prerequisites

Note:-
This feature is only available on Engineered Systems such as Exadata and Exadata Cloud Service. There is a workaround for testing by enabling the
"_exadata_feature_on" initialisation parameter. Don't use this on a real instance or you will be breaking your license agreement.
Using the DBMS_SQLQ package subprograms, you can perform the following operations related to quarantine
configurations:

• Create a quarantine configuration for an execution plan of a SQL statement
• Specify quarantine thresholds in the quarantine configuration
• Enable or disable a quarantine configuration
• Delete a quarantine configuration
• Transfer quarantine configurations from one database to another

Using the DBMS_SQLQ.ALTER_QUARANTINE procedure, thresholds can be set based on:

• CPU time
• Elapsed time
• I/O in megabytes
• Number of physical I/O requests
• Number of logical I/O requests

At a very high level, this is what you need to set up SQL Quarantine:
• Create a resource plan that limits SQL execution time to a specific number of seconds. SQL statements that exceed this limit, i.e. run
longer than X seconds, will be terminated.
• Run a SQL statement that exceeds this limit of X seconds.
• Quarantine the SQL statement using the DBMS_SQLQ package, as shown in the sketch below. You can check the DBA_SQL_QUARANTINE view to see quarantined SQL
statements.
• Run the SQL statement again. If the statement runs with the same execution plan, then the statement is terminated right away with:
ORA-56955: quarantined plan used.
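A minimal sketch of quarantining by SQL ID and setting an elapsed-time threshold (the SQL ID and the 20-second value are illustrative):

DECLARE
  l_name VARCHAR2(128);
BEGIN
  -- create a quarantine configuration for the plans of this SQL ID
  l_name := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(sql_id => '7ujv4y0bshfxq');
  -- quarantine executions exceeding 20 seconds of elapsed time
  DBMS_SQLQ.ALTER_QUARANTINE(
    quarantine_name => l_name,
    parameter_name  => 'ELAPSED_TIME',
    parameter_value => '20');
END;
/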
Automatic Database Diagnostic Monitor (ADDM) Support for Pluggable Databases (PDBs)

• This feature is available from Oracle 19c database.


• You can now use Automatic Database Diagnostic Monitor (ADDM) analysis for pluggable
databases (PDBs) in a multitenant environment. ADDM analysis at a PDB level enables you
to tune a PDB effectively for better performance.
• Starting with Oracle Database 12c, ADDM is enabled by default in the root container of a
multitenant container database (CDB). Starting with Oracle Database 19c, you can also use
ADDM in a pluggable database (PDB).
• ADDM is enabled by default in a CDB root.
• ADDM does not work in a PDB by default, because automatic AWR snapshots are disabled
by default in a PDB. To use ADDM in a PDB, you must enable automatic AWR snapshots in
the PDB, as shown in the sketch after this list.
• A user whose current container is the CDB root can view ADDM results for the entire CDB.
The ADDM results can include information about multiple PDBs. ADDM results related to a
PDB are not included if the PDB is unplugged.
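A minimal sketch of enabling automatic AWR snapshots inside a PDB (the PDB name and the 60-minute interval are illustrative):

ALTER SESSION SET CONTAINER = pdb1;
ALTER SYSTEM SET awr_pdb_autoflush_enabled = TRUE;
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60);

Once snapshots are being taken, ADDM can be run in the PDB, for example via the DBMS_ADDM package.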
PDB-Level ADDM Restrictions
Unlike in a non-CDB, ADDM does not report the following issues in a PDB, because these issues apply to a CDB as a
whole and do not apply to an individual PDB
I/O problems due to:
• undersized buffer cache
• undersized streams pool
• excessive temporary writes
• excessive checkpoint writes
• excessive undo writes
• excessive PQ checkpoint writes
• excessive truncate writes
• excessive tablespace DDL checkpoint
• I/O capacity limit
SQL hard parsing issues due to:
• cursor aging
• out-of-memory failed parse
SGA sizing issues
ADDM also does not report the following issues in a PDB, because these issues cannot be resolved at a PDB level:
• Cluster messaging related issues, such as network latency, congestion, contention, and lost blocks
• Log file switch waits on archiving and on checkpoint incomplete
• Too many free-buffer waits
• Contention on log buffer waits, waits due to CPU bottleneck, operating system VM paging, session slot wait event, CPU
quantum wait event, and RMAN related wait events, such as PQ queued wait event, PGA limit wait event, and I/O queue wait.
Active Data Guard : DML Redirection
• Allows DML to be executed against the standby database
• On execution the DML operation is passed to the primary database
• The primary database performs the operations and generates the redo
• The redo is then applied to the standby and control returned to the
application
• The transaction is fully ACID (Atomicity, Consistency, Isolation, Durability)
compliant
• Allows for incidental updates to be run against the standby, increasing the
overall flexibility of the disaster recovery architecture.

Active Data Guard DML Redirection
You can run DML operations on Active Data Guard standby databases. This enables you to run read-mostly applications, which
occasionally execute DMLs, on the standby database.

DML operations on a standby can be transparently redirected to and run on the primary database. This includes DML
statements that are part of PL/SQL blocks. The Active Data Guard session waits until the corresponding changes are shipped to
and applied to the Active Data Guard standby. Read consistency is maintained during the DML operation and the standby
database on which the DML is run can view its uncommitted changes. However, all the other standby database instances can
view these changes only after the transaction is committed.

How to configure?

To configure automatic redirection of DML operations for all standby sessions in an Active Data Guard environment:

❖ ADG_REDIRECT_DML=TRUE

To configure automatic redirection of DML operations for the current session, use the following command:

❖ ALTER SESSION ENABLE ADG_REDIRECT_DML;

Note → Active Data Guard requires an extra license.


Automatic flashback of the standby database when the primary database is flashed back
Prior to Oracle 19c, using the Flashback Database feature on a primary database in a Data Guard
environment would result in a standby database that is no longer in sync with the primary database. In previous
releases, in order to ensure both primary and standby were synced again, a manual procedure to flash back the
standby database was required.

Starting with Oracle 19c, a standby database that is in a mounted state can automatically follow the primary
database after a RESETLOGS operation on the primary. This simplifies standby management after a RESETLOGS
operation on the primary.

The MRP performs the following actions:

• detects the new incarnation
• flashes back the standby, or the PDB on the standby, to the same point in time as that of the primary, or the
PDB on the primary
• restarts the standby recovery and moves the standby to the new branch of redo

The flashback operation will succeed only when the standby database has sufficient flashback data. If you do
not want the standby to automatically follow the primary, either keep the standby database in OPEN mode or
stop the MRP process on the standby.

A new parameter (_standby_auto_flashback) is introduced, which enables the standby database to be flashed
back automatically when Flashback Database is performed on the primary database.
Oracle 19c Restore Point Replication From Primary To Standby

• The process of flashing back a physical standby to a point in time that was captured on the primary is
simplified by automatically replicating restore points from primary to standby.
• These restore points are called Replicated Restore Points and have the restore point name suffixed with
"_PRIMARY".
• Irrespective of whether a restore point on the primary database is a guaranteed restore point or a normal
restore point, the corresponding replicated restore point is always a normal restore point.
• Restore points are replicated only when the COMPATIBLE initialization parameter of both the primary database
and the standby database is set to 19.0.0 or higher, and the primary database is open.
• A restore point that is created on a primary database while the primary is in mount mode is not replicated. This
restriction exists because restore point information is replicated through the redo.
• When you delete a restore point on the primary, the corresponding replicated restore point on the
standby is also deleted.
• The managed redo process (MRP) manages the creation and maintenance of replicated restore
points. If restore points are created on the primary database when MRP is not running, then these
restore points are replicated to the standby database after MRP is started.
Clear Flashback Logs Periodically for Increased Fast Recovery Area (FRA) Size

DBAs cannot manage the flashback logs in the fast recovery area directly, other than by setting the flashback retention target or using
guaranteed restore points. Nevertheless, you can manage fast recovery area space as a whole to maximize the space available for retention
of flashback logs. Previously, the only quick way to purge flashback logs was to turn Flashback off and on again.

Starting with Oracle Database Release 19c, Oracle Database monitors flashback logs in the fast recovery area and automatically deletes
flashback logs that are beyond the retention period, without waiting for an FRA space crunch. When the retention target is reduced,
flashback logs that are beyond the retention period are deleted immediately.

Duplicate standby DB from active primary database using Compression:

In 19c you can duplicate a standby database from an active primary database using the COMPRESSION feature, which sends the
database blocks over the network in a compressed format, saving bandwidth and time.
Command example:

run {
  allocate channel disk1 type disk;
  allocate auxiliary channel aux1 type disk;
  duplicate target database for standby from active database USING COMPRESSED BACKUPSET
    spfile
    parameter_value_convert 'orcl','orcls'
    set db_name='orcl'
    set db_unique_name='orcls';
}
New features in DBCA in silent mode

• Creation of a duplicate of an Oracle Database using the createDuplicateDB command in DBCA silent mode

• Ability to create a PDB by cloning a remote PDB using DBCA in silent mode

• Ability to relocate a PDB to another CDB using DBCA in silent mode
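A sketch of the first option, createDuplicateDB (all names, passwords, and connect strings are illustrative placeholders):

dbca -silent -createDuplicateDB \
  -gdbName orcl.example.com -sid orcls \
  -primaryDBConnectionString srchost:1521/orcl.example.com \
  -sysPassword welcome1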

New data pump features supporting resource usage restrictions

During Data Pump export and import, the use of resources can be restricted. This is achieved
with two new parameters: MAX_DATAPUMP_JOBS_PER_PDB and MAX_DATAPUMP_PARALLEL_PER_JOB.

• The default value of MAX_DATAPUMP_JOBS_PER_PDB is 100; values of 0-250 determine the maximum number of
concurrent Data Pump jobs per PDB.
• The default value of MAX_DATAPUMP_PARALLEL_PER_JOB is 50; values of 1-1024 determine the maximum number of
parallel processes allowed for each Oracle Data Pump job.

Oracle Data Pump Test Mode for Transportable Tablespaces (TTS)

• You can more easily determine how long an export takes, and discover unforeseen issues not reported by the
closure check.
• Test mode for Transportable Tablespaces (TTSs) performs a metadata-only export test using TTS or full
transportable export or import.
• It also removes the requirement for the source database tablespaces to be in read-only mode.
• The resulting export dump file is not available for use by Data Pump Import.
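A sketch of invoking the test mode with expdp via the TTS_CLOSURE_CHECK parameter (directory, dump file, and tablespace names are illustrative):

expdp system directory=dp_dir dumpfile=tts_test.dmp \
  transport_tablespaces=users_data \
  tts_closure_check=test_mode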
Oracle Network Log File Segmentation

Configuring listener.log file rotation – 19c

One annoying thing that happens from time to time on a database listener, especially with
client/server applications, is that it starts rejecting some connections when the listener.log file
grows beyond 2 GB. In versions between 11g and 18c, you can monitor this using a custom
shell script or by scheduling jobs via the scheduler.

In Oracle 19c, Oracle has introduced two new parameters for listener administration. We can configure log rotation within
Oracle scope, without any additional configuration on the OS side or shell scripts.

LOG_FILE_NUM_<listener_name> –>

The LOG_FILE_NUM_listener_name diagnostic parameter of the listener.ora file specifies the number of log file
segments. If you do not define this parameter, the feature will not work and the file will grow indefinitely.

LOG_FILE_SIZE_<listener_name> –>

The LOG_FILE_SIZE_listener_name diagnostic parameter of the listener.ora file specifies the size of each log file segment. This
parameter is in MB and the default is 300 MB.
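A minimal sketch for a listener named LISTENER (five segments of 100 MB each; both values are illustrative), added to listener.ora:

LOG_FILE_NUM_LISTENER = 5
LOG_FILE_SIZE_LISTENER = 100

A listener reload (lsnrctl reload) afterwards picks up the change without restarting the listener.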
Passwords Removed from Oracle Database Schema/Accounts

What are schema only accounts? These are users (schemas) in the database, that have no password.
Since they have no password, these schemas cannot connect directly to the database (this is the whole purpose of the schema
only account).

Without being able to connect directly to the schema only accounts, the schema itself is more secure. In general you don't
want users or applications connecting directly with the schema account.

No more workarounds are needed to properly secure the objects and data of a schema.

• Most of the Oracle Database supplied schema-only accounts now have their passwords removed to prevent users from
authenticating to these accounts.
• This enhancement does not affect the sample schemas. Sample schemas are still installed with their default passwords.
• Administrators can still assign passwords to the default schema-only accounts. Oracle recommends changing the schemas
back to a schema-only account afterward.
• The benefit of this feature is that administrators no longer have to periodically rotate the passwords for these Oracle
Database provided schemas. This feature also reduces the security risk of attackers using default passwords to hack into
these accounts.
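A sketch of creating a schema-only account and temporarily assigning it a password (the user name and password are illustrative):

CREATE USER app_owner NO AUTHENTICATION;
-- temporarily allow direct logon, e.g. for maintenance
ALTER USER app_owner IDENTIFIED BY "StrongTemp#1";
-- return the account to schema-only afterward, as Oracle recommends
ALTER USER app_owner NO AUTHENTICATION;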

Privilege Analysis Features
• Privilege Analysis (PA) dynamically analyzes privilege and role usage for database users and application service accounts at
run time.
• PA helps to further harden the application by identifying unused privileges and roles based upon the actual usage of the
roles and privileges by a user or from within the application.

Understanding the set of unused roles and privileges is key to identifying the minimum set of privileges the application needs to
run. Reports generated by PA are richer and reflect the actual privileges and roles used or unused by users and applications
during runtime.

Static role/privilege analysis tools can only show which roles and privileges are granted to users. Understanding the actual
usage of roles and privileges is essential to implementing a least-privilege model for all database accounts and reduces your
application attack surface.

Steps To Set Up Privilege Analysis →

❖ Create: DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE
❖ Enable: DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE
❖ Wait: leave the policy enabled for a period of time: hours, days, weeks, months.
❖ Disable: DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE
❖ Generate reports: DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT
❖ Take action wherever appropriate
❖ Drop: DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE
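A minimal sketch of these steps for a database-wide capture (the capture name is illustrative):

BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'db_wide_cap',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('db_wide_cap');
END;
/

-- ... after hours, days, or weeks of representative workload ...

BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('db_wide_cap');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('db_wide_cap');
END;
/

The results can then be queried from views such as DBA_USED_PRIVS and DBA_UNUSED_PRIVS.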
EASY CONNECT PLUS - EZCONNECT
Oracle Database 19c introduces a significantly enhanced connection string syntax called Easy Connect Plus.

By reducing the need for external configuration files, Easy Connect Plus makes it easier to use features such as TLS
connections, wallets, load balancing, connection timeouts, and tuned network buffer sizes. With this feature there is no
need to have tnsnames.ora or sqlnet.ora files present, or to set TNS_ADMIN.

Easy Connect Plus can be used in JDBC and .NET applications. It is also usable in applications and drivers built on the C
language Oracle Call Interface (OCI) such as the Oracle Database drivers for Python, Node.js, PHP, Ruby, and Go.
Easy connect background

An Easy Connect string includes a host name, an optional port, and the database service name:

database_host[:port][/service_name]

Examples
1. Using a protocol:

tnsping tcp://joords1:1521/sales.us.example.com

converts into the following descriptor:

(DESCRIPTION=
  (ADDRESS=(PROTOCOL=tcp)(HOST=joords1)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=sales.us.example.com)))
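2. Using name-value parameters. Easy Connect Plus accepts a '?' followed by '&'-separated settings; a sketch with illustrative host, service, and values:

sqlplus scott@'tcp://joords1:1521/sales.us.example.com?connect_timeout=60&retry_count=3'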
Orachk & Exachk Support for Encrypting Collection Files
• Oracle ORAchk and Oracle EXAchk diagnostic collection files may contain sensitive data. Starting in this release, you can
encrypt and decrypt diagnostic collection ZIP files and protect them with a password.
• Oracle ORAchk and Oracle EXAchk generate reports and show only the most critical checks by default.
• The critical checks are those that have the most severe potential effect
• In earlier releases of Oracle ORAchk and Oracle EXAchk, reports contained over one hundred checks and thus made the
analysis more time-consuming. With the most critical checks, you can analyze the reports efficiently, and quickly resolve
critical problems and prevent downtime or performance issues.

Note:-This feature is available only on Linux and Solaris platforms.

Hybrid Partitioned Tables

In a hybrid partitioned table, some partitions exist in the database and some partitions are hosted external to the database, maybe
on a normal file system, on ACFS, or even on a Hadoop File System.

• Using this feature we can move older, non-active partitions of a table to possibly cheaper storage hosted outside the
database, while the active data is hosted in partitions stored inside the Oracle database.
• External partitions may reside on standard file systems, HDFS or Object Storage (coming soon).
• This model supports the placement of data for long-term archiving on lower-cost storage whilst allowing its continued online
access.
• Supports long-term access to infrequently accessed data for regulatory requirements; see the sketch below.
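A minimal DDL sketch, assuming a directory object sales_arch_dir points at the external files (all names are illustrative):

CREATE TABLE sales_hpt (
  sale_year NUMBER(4),
  amount    NUMBER
)
EXTERNAL PARTITION ATTRIBUTES (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY sales_arch_dir
  ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
  REJECT LIMIT UNLIMITED
)
PARTITION BY RANGE (sale_year) (
  -- older data lives outside the database
  PARTITION p_archive VALUES LESS THAN (2019) EXTERNAL LOCATION ('sales_old.csv'),
  -- active data stays as a regular internal partition
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
);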
Finer granularity Supplemental Logging

Supplemental logging was designed and implemented for Logical Standby or full database replication requirements. This adds
unnecessary overhead in environments where only a subset of tables is being replicated. Fine-grained supplemental logging
provides a way for partial database replication users to disable supplemental logging for uninteresting tables so that even
when supplemental logging is enabled in database or schema level, there is no supplemental logging overhead for
uninteresting tables.

Use of this feature can significantly reduce the overhead in terms of resource usage and redo generation in cases where only
some of the tables in the database require supplemental logging, such as in an Oracle GoldenGate partial replication configuration.

If both the source and mining databases are at redo compatibility 19 or higher and the value of ENABLE_GOLDENGATE_REPLICATION is
TRUE, then Oracle Database 19c and higher provides the advantage of reduced supplemental logging overhead for Oracle
GoldenGate.

How to enable→

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;
Database altered.

SQL> SELECT supplemental_log_data_min, supplemental_log_data_sr FROM v$database;

SUPPLEMENTAL_LOG_DATA_MIN SUPPLEMENTAL_LOG_DATA_SR
------------------------------ ------------------------------
IMPLICIT YES
HIGH-FREQUENCY STATISTICS
The automatic optimizer statistics collection job which calls DBMS_STATS package runs in predefined maintenance windows and
these maintenance windows are open once a day during which various jobs including the gathering of statistics is performed.

For volatile tables, statistics can go stale between two consecutive executions of such automatic statistics collection jobs. The
presence of stale statistics could potentially cause performance problems, because the optimizer would choose sub-optimal execution
plans.

The new feature introduced in Oracle 19c called High-Frequency Automatic Optimizer Statistics Collection complements the
standard automatic statistics collection job.

By default, the high-frequency statistics collection occurs every 15 minutes and as such there is less possibility of having stale
statistics even for those tables where data is changing continuously.

The DBMS_STATS.SET_GLOBAL_PREFS procedure is used to enable and disable the high-frequency statistics gather task as well as
change the execution interval (default 15 minutes) and the maximum run time (60 minutes).

To enable HIGH-FREQUENCY STATISTICS (AUTO_TASK_INTERVAL is specified in seconds; 300 means every 5 minutes):

EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');

EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300');
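The maximum run time of the task can be adjusted in the same way (a sketch; 600 seconds is illustrative):

EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_MAX_RUN_TIME','600');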
Root Scripts Automation Support for Oracle Database Installation
Starting with Oracle Database 19c, the database installer, or setup wizard, provides options to set up permissions
to run the root configuration scripts automatically, as required, during a database installation.
You continue to have the option to run the root configuration scripts manually.
Setting up permissions for root configuration scripts to run without user intervention can simplify database
installation and help avoid inadvertent permission errors.

Zero-Downtime patching Oracle Grid Infrastructure 19c on Linux
In the past, when we were patching Grid Infrastructure (GI) in rolling mode, the database instances had to be shut down on
the node being patched.

Starting with Oracle Database 19c Release Update (RU) 19.8, Oracle announced that Oracle RAC database instances can keep
running and accessible to the database users during Oracle Grid Infrastructure patching.

Zero-Downtime Oracle Grid Infrastructure Patching enables patching of Oracle Grid Infrastructure without interrupting database
operations. Patches are applied out-of-place and in a rolling fashion, with one node being patched at a time, while the
database instances on the node remain operational. Zero-downtime Oracle Grid Infrastructure Patching supports Oracle Real
Application Clusters (Oracle RAC) databases on clusters with two or more nodes. It significantly increases database availability
by allowing customers to perform a rolling patch of Oracle Grid Infrastructure without interrupting database operations on the
node being patched and without impacting capacity or performance on those database instances.

New ALTER SYSTEM Clause FLUSH PASSWORDFILE_METADATA_CACHE

• The ALTER SYSTEM clause FLUSH PASSWORDFILE_METADATA_CACHE refreshes the metadata cache with the latest
details of the database password file.

• Querying the V$PASSWORDFILE_INFO view retrieves the latest details of the database password file.

• This functionality is useful when the database password file name or location is changed, and the metadata cache
needs to be refreshed with the details of the updated database password file.
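For example:

SQL> ALTER SYSTEM FLUSH PASSWORDFILE_METADATA_CACHE;
SQL> SELECT * FROM v$passwordfile_info;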

What is Desupported in Oracle 19c

• Oracle Multimedia is desupported. Oracle recommends storing multimedia content in SecureFiles LOBs
and using third-party products for image processing and conversion.

• The Oracle Streams feature is desupported. Oracle recommends using the Oracle GoldenGate product to replace
all replication features of Oracle Streams.

• DBMS_JOB jobs are converted to DBMS_SCHEDULER jobs in Oracle Database 19c. DBMS_JOB has already been
deprecated since 12.2.

Q&A

Thank You

