Oracle Cloud
Using Oracle Autonomous Database Serverless
F61232-39
April 2024
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S.
Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed, or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software," "commercial computer software documentation," or "limited rights
data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation
of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated
software, any programs embedded, installed, or activated on delivered hardware, and modifications of such
programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and
limitations specified in the license contained in the applicable contract. The terms governing the U.S.
Government's use of Oracle cloud services are defined by the applicable contract for such services. No other
rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle®, Java, MySQL, and NetSuite are registered trademarks of Oracle and/or its affiliates. Other names
may be trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Contents
Preface
Audience xxvii
Documentation Accessibility xxvii
Related Documents xxvii
Conventions xxviii
1 Overview
What's New 1-1
April 2024 1-1
March 2024 1-2
February 2024 1-3
January 2024 1-3
About Autonomous Database 1-4
What is Oracle Autonomous Database? 1-4
Key features 1-5
About Autonomous Database Workload Types 1-8
About Oracle Autonomous Data Warehouse 1-8
About Oracle Autonomous Transaction Processing 1-9
About Autonomous JSON Database 1-10
About Oracle APEX Application Development 1-11
Autonomous Database Region Availability 1-11
Always Free Autonomous Database 1-11
Upgrade an Always Free Autonomous Database to a Paid Instance 1-14
Get Help and Contact Support 1-15
Post Questions on Forums 1-15
Open a Support Ticket 1-16
Service Level Objectives (SLOs) 1-16
RTO and RPO Objectives 1-17
Built-In Tool Availability 1-17
Zero-Regression Service Level Objective 1-18
How Is Autonomous Database Billed? 1-19
Autonomous Database Billing Summary 1-19
Compute Models in Autonomous Database 1-21
ECPU Compute Model Billing Information 1-22
OCPU Compute Model Billing Information 1-27
Oracle Autonomous Database Serverless Features Billing 1-31
Oracle Autonomous Database Serverless Billing for Database Tools 1-38
2 Quick Start
Before You Begin 2-1
Autonomous Database 15 Minute Quick Start 2-2
3 Tasks
Provision an Autonomous Database 3-2
Connect to Autonomous Database 3-2
About Connecting to an Autonomous Database Instance 3-3
Secure Connections to Autonomous Database 3-4
Connect to Autonomous Database Through a Firewall 3-7
Using Application Continuity 3-7
Connect to Autonomous Database Using a Client Application 3-8
About Connecting to Autonomous Database Using a Client Application 3-9
Prepare for Oracle Call Interface, ODBC, and JDBC OCI Connections with Wallets
(mTLS) 3-9
Prepare for Oracle Call Interface, ODBC, and JDBC OCI Connections Using TLS
Authentication 3-12
Prepare for JDBC Thin Connections 3-13
Connect Microsoft .NET, Visual Studio Code, and Visual Studio with a Wallet
(mTLS) 3-14
Connect Microsoft .NET, Visual Studio Code, and Visual Studio Without a Wallet 3-15
Connect Node.js and Other Scripting Languages (mTLS) 3-17
Connect Node.js and Other Scripting Languages Without a Wallet 3-20
Connect Power BI and Microsoft Data Tools to Autonomous Database 3-22
Connect Python Applications to Autonomous Database 3-23
Download Database Connection Information 3-33
Download Client Credentials (Wallets) 3-34
Wallet README File 3-37
View TNS Names and Connection Strings for an Autonomous Database Instance 3-38
Connect to Autonomous Database Using Oracle Database Tools 3-40
Connect Oracle SQL Developer with a Wallet (mTLS) 3-41
Connect Oracle SQL Developer Without a Wallet 3-43
Connect SQL*Plus with a Wallet (mTLS) 3-45
Connect SQL*Plus Without a Wallet 3-46
Connect Oracle SQLcl Cloud with a Wallet (mTLS) 3-47
Connect Oracle SQLcl Cloud Without a Wallet 3-49
Connect with Built-In Oracle Database Actions 3-51
About Database Actions (SQL Developer Web) 3-52
Access Database Actions as ADMIN 3-52
Provide Database Actions Access to Database Users 3-53
Required Roles to Access Tools from Database Actions 3-54
Access Database Actions, Oracle APEX, Oracle REST Data Services, and
Developer Tools Using a Vanity URL 3-55
Connect with JDBC Thin Driver 3-55
JDBC Thin Connections with a Wallet (mTLS) 3-56
JDBC Thin TLS Connections Without a Wallet 3-64
Preparing for Oracle Call Interface Connections 3-65
Oracle Call Interface (OCI) Connections and Wallets (mTLS) 3-66
Oracle Call Interface (OCI) Connections with TLS Authentication 3-66
Predefined Database Service Names for Autonomous Database 3-66
Connect with Oracle Analytics Desktop 3-67
Connect with Oracle Analytics Cloud 3-67
Connection and Networking Options and Features 3-67
Using ACLs, VCNs, and Private Endpoints with Autonomous Database 3-68
Connect with Oracle Cloud Infrastructure FastConnect 3-71
Access Autonomous Database with VCN Transit Routing 3-71
Access Autonomous Database with Service Gateway 3-71
Use Database Resident Connection Pooling with Autonomous Database 3-71
Manage Users 3-72
Create Users on Autonomous Database 3-73
Create Users on Autonomous Database with Database Actions 3-74
Create Users on Autonomous Database - Connecting with a Client Tool 3-75
Unlock User Accounts on Autonomous Database 3-76
About User Passwords on Autonomous Database 3-76
Manage the Administrator Account on Autonomous Database 3-77
Remove Users 3-79
Manage User Profiles with Autonomous Database 3-80
Manage Password Complexity on Autonomous Database 3-82
Gradual Database Password Rollover for Applications 3-83
Manage User Roles and Privileges 3-84
Manage Users and User Roles on Autonomous Database - Connecting with
Database Actions 3-84
Manage User Privileges on Autonomous Database - Connecting with a Client Tool 3-85
Create and Update User Accounts for Oracle Machine Learning 3-87
Create User 3-87
Add Existing Database User Account to Oracle Machine Learning Components 3-88
Use Identity and Access Management (IAM) Authentication with Autonomous
Database 3-89
About Identity and Access Management (IAM) Authentication with Autonomous
Database 3-90
Prerequisites for Identity and Access Management (IAM) Authentication on
Autonomous Database 3-92
Enable Identity and Access Management (IAM) Authentication on Autonomous
Database 3-93
Create Identity and Access Management (IAM) Groups and Policies for IAM Users 3-94
Add IAM Users on Autonomous Database 3-96
Add IAM Roles on Autonomous Database 3-97
Create IAM Database Password for IAM Users 3-98
Connect to Autonomous Database with Identity and Access Management (IAM)
Authentication 3-99
Configure IAM Proxy Authentication 3-104
Disable Identity and Access Management (IAM) Authentication on Autonomous
Database 3-106
Notes for Using Autonomous Database Tools with Identity and Access
Management (IAM) Authentication 3-107
Use Azure Active Directory (Azure AD) with Autonomous Database 3-107
About Integrating Oracle Autonomous Database with Microsoft Azure AD 3-107
Enable Azure AD Authentication on Autonomous Database 3-113
Map Oracle Database Schemas and Roles for Azure AD 3-122
Azure AD Client Configuration and Access for Autonomous Database 3-123
Use Microsoft Active Directory with Autonomous Database 3-123
Prerequisites to Configure CMU with Microsoft Active Directory on Autonomous
Database 3-124
Configure CMU with Microsoft Active Directory on Autonomous Database 3-126
Kerberos Authentication for CMU with Microsoft Active Directory 3-128
Add Microsoft Active Directory Roles on Autonomous Database 3-132
Add Microsoft Active Directory Users on Autonomous Database 3-133
Tools Restrictions with Active Directory on Autonomous Database 3-134
Connect to Autonomous Database with Active Directory User Credentials 3-134
Verify Active Directory User Connection Information with Autonomous Database 3-135
Remove Active Directory Users and Roles on Autonomous Database 3-136
Disable Active Directory Access on Autonomous Database 3-136
Configure Kerberos Authentication with Autonomous Database 3-137
About Kerberos Authentication 3-137
Components of the Kerberos Authentication System 3-138
Enable Kerberos Authentication on Autonomous Database 3-139
Disable Kerberos Authentication on Autonomous Database 3-141
Notes for Kerberos Authentication on Autonomous Database 3-141
Create Credentials or Configure Policies and Roles 3-142
Manage Credentials 3-142
Create Credentials to Access Cloud Services 3-142
List Credentials 3-143
Delete Credentials 3-144
Use Vault Secret Credentials 3-144
Use Vault Secret Credentials with Oracle Cloud Infrastructure Vault 3-145
Use Vault Secret Credential with Azure Key Vault 3-148
Use Vault Secret Credential with AWS Secrets Manager 3-150
Use Vault Secret Credential with GCP Secret Manager 3-152
Refresh Vault Secret Credentials 3-154
Update Vault Secret Credentials 3-155
Configure Policies and Roles to Access Resources 3-155
Use Resource Principal to Access Oracle Cloud Infrastructure Resources 3-156
Use Amazon Resource Names (ARNs) to Access AWS Resources 3-161
Use Azure Service Principal to Access Azure Resources 3-171
Use Google Service Account to Access Google Cloud Platform Resources 3-177
Load Data 3-185
View Data Pump Jobs and Import Data with Data Pump 3-186
About Data Loading 3-186
Load Data with Oracle Database Actions 3-187
Load Data from Files in the Cloud 3-188
Create Credentials and Copy Data into an Existing Table 3-189
Create Credentials and Load Data Pump Dump Files into an Existing Table 3-191
Load JSON on Autonomous Database 3-192
Monitor and Troubleshoot Loads 3-205
Examples for Loading Data into Autonomous Database 3-206
Import Data Using Oracle Data Pump on Autonomous Database 3-210
Export Your Existing Oracle Database to Import into Autonomous Database 3-210
Import Data Using Oracle Data Pump Version 18.3 or Later 3-211
Import Data Using Oracle Data Pump (Versions 12.2.0.1 and Earlier) 3-215
Access Log Files for Data Pump Import 3-218
Oracle Data Pump Import and Table Compression 3-219
Load Data from Local Files with Oracle Database Actions 3-219
Load Data into Existing Autonomous Database Table with Oracle Database Actions 3-220
Load Data or Query Data from Files in a Directory 3-223
Load Data from Local Files Using SQL*Loader 3-224
Use Data Pipelines for Continuous Load and Export 3-225
About Data Pipelines on Autonomous Database 3-225
Create and Configure Pipelines 3-229
Test Pipelines 3-235
Control Pipelines (Start, Stop, Drop, or Reset a Pipeline) 3-236
Monitor and Troubleshoot Pipelines 3-239
Use Oracle Maintained Pipelines 3-243
The Data Load Page 3-244
Transform Data Using Data Studio 3-325
The Data Transforms Page 3-325
Link Data 3-382
Link to Object Storage Data 3-382
Query External Data 3-384
Access ORC, Parquet or Avro file types 3-386
Access partitioned object storage data 3-389
Querying External Partitioned Data 3-403
Query Hybrid Partitioned Data 3-405
Query External Data Pump Dump Files 3-407
Query Big Data Service Hadoop (HDFS) Data from Autonomous Database 3-409
Integrate Data Lake metadata with Data Catalog 3-409
Query External Data with AWS Glue Data Catalog 3-431
Validate External Data 3-439
Validate External Partitioned Data 3-440
Validate Hybrid Partitioned Data 3-441
View Logs for Data Validation 3-442
Use Database Links 3-443
Create Database Links from Autonomous Database to Another Autonomous
Database 3-444
Create Database Links to an Oracle Database that is not an Autonomous
Database 3-457
Create Database Links to Non-Oracle Databases 3-472
Create Database Links from Other Databases to Autonomous Database 3-492
Drop Database Links 3-494
Call Web Services 3-494
Submit an HTTP Request 3-495
Submit an HTTP Request to a Public Host 3-495
Submit an HTTP Request to a Private Host 3-496
UTL_HTTP Submit an HTTP Request to Private Site with a Proxy 3-497
Use Credential Objects to Set HTTP Authentication 3-499
Move Files 3-501
Create and Manage Directories 3-501
Create Directory in Autonomous Database 3-501
Drop Directory in Autonomous Database 3-503
List Contents of Directory in Autonomous Database 3-503
Access Network File System from Autonomous Database 3-504
Load Data from Directories in Autonomous Database 3-510
Copy Files Between Object Store and a Directory in Autonomous Database 3-511
Bulk Operations for Files in the Cloud 3-512
About Bulk File Operations 3-513
Bulk Copy Files in Cloud Object Storage 3-513
Bulk Move Files Across Cloud Object Storage 3-514
Bulk Download Files from Cloud Object Storage 3-515
Bulk Upload Files to Cloud Object Storage 3-516
Bulk Delete Files from Cloud Object Storage 3-517
Monitor and Troubleshoot Bulk File Loads 3-517
Notes for Bulk File Operations 3-519
Replicate Data 3-520
Replicate Data with Oracle GoldenGate 3-520
Export Data 3-520
Export Data to Object Store as CSV, JSON, or XML using EXPORT_DATA 3-522
Export JSON Data to Cloud Object Storage 3-522
Export Data as CSV to Cloud Object Storage 3-524
Export Data as Parquet to Cloud Object Storage 3-526
Export Data as XML to Cloud Object Storage 3-528
File Naming for Text Output (CSV, JSON, Parquet, or XML) 3-530
Export Data to a Directory Location 3-533
Export Data as CSV to a Directory 3-533
Export Data as JSON to a Directory 3-535
Export Data as Parquet to a Directory 3-537
Export Data as XML to a Directory 3-539
Export Data to a Directory as Oracle Data Pump Files 3-540
Export Data with Data Pump to an Autonomous Database Directory 3-542
Use Data Pump to Create a Dump File Set on Autonomous Database 3-543
Move Dump File Set from Autonomous Database to Your Cloud Object Store 3-545
Export Data with Data Pump to Object Store 3-546
Use Oracle Data Pump to Export Data to Object Store Using CREDENTIAL
Parameter (Version 19.9 or Later) 3-547
Use Oracle Data Pump to Export Data to Object Store Setting
DEFAULT_CREDENTIAL Property 3-550
Export Data to Object Store as Oracle Data Pump Files using EXPORT_DATA 3-553
Download Dump Files, Run Data Pump Import, and Clean Up Object Store 3-555
Access Log Files for Data Pump Export 3-556
Oracle GoldenGate Capture for Oracle Autonomous Database 3-556
Data Sharing 3-556
Use Cloud Links for Read Only Data Access on Autonomous Database 3-557
About Cloud Links on Autonomous Database 3-557
Grant Cloud Links Access for Database Users 3-562
Register or Unregister a Data Set 3-563
Register a Data Set with Authorization Required 3-566
Register a Data Set with Offload Targets for Data Set Access 3-568
Use Cloud Links from a Read Only Autonomous Database Instance 3-572
Find Data Sets and Use Cloud Links 3-572
Revoke Cloud Links Access for Database Users 3-573
Monitor and View Cloud Links Information 3-574
Define a Virtual Private Database Policy to Secure a Registered Data Set 3-574
Notes for Cloud Links 3-576
Use Pre-Authenticated Request URLs for Read Only Data Access on Autonomous
Database 3-577
About Pre-Authenticated Request (PAR) URLs on Autonomous Database 3-578
Generate a PAR URL for a Table or a View 3-579
Generate a PAR URL with a Select Statement 3-580
Define a Virtual Private Database Policy to Secure PAR URL Data 3-581
List PAR URLs 3-582
Use a PAR URL to Access Data 3-583
Invalidate PAR URLs 3-584
Monitor and View PAR URL Usage 3-585
Notes for Using PAR URLs to Share Data 3-585
Share Data with Data Studio 3-585
The Data Share Tool 3-585
Develop 3-619
Create Applications with Oracle APEX 3-620
About Oracle APEX 3-621
Access Oracle APEX Administration Services 3-622
Create Oracle APEX Workspaces in Autonomous Database 3-623
Access Oracle APEX App Builder 3-624
Create Oracle APEX Developer Accounts 3-625
Load Data from the Cloud into Oracle APEX 3-626
Use JSON Data with Oracle APEX 3-627
Use Web Services with Oracle APEX 3-628
Send Email from Oracle APEX 3-630
Control Oracle APEX Upgrades 3-632
Access Oracle APEX, Oracle REST Data Services, and Developer Tools Using a
Vanity URL 3-635
Oracle APEX Limitations on Autonomous Database 3-635
Use JSON with Autonomous Database 3-637
Work with Simple Oracle Document Access (SODA) in Autonomous Database 3-638
Work with JSON Documents Using SQL and PL/SQL APIs on Autonomous
Database 3-640
Load JSON Documents with Autonomous Database 3-640
Import SODA Collection Data Using Oracle Data Pump Version 19.6 or Later 3-641
Develop RESTful Services in Autonomous Database 3-644
About Oracle REST Data Services in Autonomous Database 3-645
Access RESTful Services and SODA for REST 3-645
Develop with Oracle REST Data Services on Autonomous Database 3-646
Use SODA for REST with Autonomous Database 3-647
Access Oracle REST Data Services, Oracle APEX, and Developer Tools Using a
Vanity URL 3-653
About Customer Managed Oracle REST Data Services on Autonomous Database 3-653
Use Oracle Database API for MongoDB 3-654
Configure Access for MongoDB and Enable MongoDB 3-654
User Management for MongoDB 3-658
Create a Test Autonomous Database User for MongoDB 3-659
Connect MongoDB Applications to Autonomous Database 3-660
Send Email and Notifications on Autonomous Database 3-665
Send Email on Autonomous Database 3-666
Send Slack Notifications from Autonomous Database 3-675
Send Microsoft Teams Notifications from Autonomous Database 3-678
User Defined Notification Handler for Scheduler Jobs 3-681
About User Defined Notification Handler for Scheduler Jobs 3-682
Create a Job Notification Handler Procedure 3-682
Register the Job Handler Notification Procedure 3-684
Trigger the Job Handler Notification Procedure 3-685
De-Register the Job Handler Notification Procedure 3-687
Oracle Extensions for IDEs 3-687
Use Oracle Cloud Infrastructure Toolkit for Eclipse 3-688
Use Oracle Developer Tools for Visual Studio 3-689
Use Oracle Developer Tools for VS Code 3-690
Use Oracle Java on Autonomous Database 3-690
Enable Oracle Java 3-690
Check Oracle Java Version 3-691
Load Java classes and JAR Files into Autonomous Database 3-691
Notes for Oracle Java on Autonomous Database 3-692
Test Workloads with Oracle Real Application Testing 3-692
About Oracle Real Application Testing 3-693
Capture-Replay Workloads between Autonomous Databases 3-693
Capture-Replay Workloads between non-Autonomous and Autonomous Databases 3-699
DBA_CAPTURE_REPLAY_STATUS View 3-701
Manage and Store Files in a Cloud Code Repository with Autonomous Database 3-702
About Cloud Code Repositories with Autonomous Database 3-703
Initialize a Cloud Code Repository 3-704
Create and Manage a Cloud Code Repository 3-705
Create and Manage Branches in a Cloud Code Repository 3-706
Export Schema Objects to the Cloud Code Repository Branch 3-708
Use File Operations with a Cloud Code Repository 3-709
Use SQL Install Operations with a Cloud Code Repository 3-710
Invoke User Defined Functions 3-711
About User Defined Functions 3-712
Steps to Invoke OCI Cloud Functions as SQL Functions 3-712
Steps to Invoke AWS Lambda as SQL Functions 3-718
Steps to Invoke Azure Function as SQL Functions 3-722
Invoke External Procedures as SQL Functions 3-725
Analyze 3-741
Build Reports and Dashboards Using Oracle Analytics 3-741
Create Dashboards, Reports, and Notebooks 3-742
Create Dashboards and Reports to Analyze and Visualize Your Data 3-742
Create Notebooks, Workspaces, and Projects with Oracle Machine Learning
Notebooks 3-744
Spatial Analytics with Oracle Spatial 3-746
About Oracle Spatial with Autonomous Database 3-746
Oracle Spatial Limitations with Autonomous Database 3-747
The Data Analysis Tool 3-747
Searching and obtaining information about Analytic Views 3-751
Creating Analytic Views 3-752
Working with Analyses 3-763
Viewing Analyses 3-763
Workflow to build Analyses 3-763
Creating Analyses 3-764
Creating Reports 3-764
Using Calculation Templates 3-775
Oracle Autonomous Database Add-in for Excel 3-782
Oracle Autonomous Database add-on for Google Sheets 3-807
The Catalog Page 3-835
Registering Cloud Links to access data 3-836
Gathering Statistics for Tables 3-837
Editing Tables 3-837
About the Catalog Page 3-846
Sorting and Filtering Entities 3-850
Searching for Entities 3-851
Viewing Entity Details 3-859
The Data Insights Page 3-862
About Insights 3-862
Generate Insights and View Reports 3-863
Use Oracle OLAP with Autonomous Database 3-864
Manage the Service 3-864
Lifecycle Operations 3-865
Provision Autonomous Database 3-866
Start Autonomous Database 3-871
Stop Autonomous Database 3-871
Restart Autonomous Database 3-872
Rename Autonomous Database 3-872
Update the Display Name for an Autonomous Database Instance 3-875
Terminate an Autonomous Database Instance 3-876
Change Autonomous Database Operation Mode to Read/Write, Read-Only, or
Restricted 3-876
Schedule Start and Stop Times for an Autonomous Database Instance 3-879
Concurrent Operations on Autonomous Database 3-881
Updating Compute and storage limits 3-882
Add CPU or Storage Resources or Enable Auto Scaling 3-883
Remove CPU or Storage Resources or Disable Auto Scaling 3-885
Use Auto Scaling 3-887
Update Billing Model, License Type, or Update Instance to a Paid Instance 3-892
View and Update Your License and Oracle Database Edition on Autonomous
Database (ECPU Compute Model) 3-893
View and Update Your License and Oracle Database Edition on Autonomous
Database (OCPU Compute Model) 3-895
Update to ECPU Billing Model on Autonomous Database 3-897
Update Always Free Instance to Paid with Autonomous Database 3-898
Upgrade Autonomous JSON Database to Autonomous Transaction Processing 3-899
Upgrade APEX Service to Autonomous Transaction Processing 3-900
Clone and Move an Autonomous Database 3-900
About Cloning on Autonomous Database 3-901
Prerequisites for Cloning 3-903
Clone an Autonomous Database Instance 3-903
Clone an Autonomous Database from a Backup 3-908
Cross Tenancy Cloning 3-915
Clone Autonomous Database to Change Workload Type 3-921
Notes for Cloning Autonomous Database 3-922
Move an Autonomous Database to a Different Compartment 3-924
Use Refreshable Clones with Autonomous Database 3-925
About Refreshable Clones on Autonomous Database 3-926
Create a Refreshable Clone for an Autonomous Database Instance 3-932
View Refreshable Clones for an Autonomous Database Instance 3-936
Edit Automatic Refresh Policy for Refreshable Clone 3-937
Refresh a Refreshable Clone on Autonomous Database 3-938
Disconnect a Refreshable Clone from the Source Database 3-939
Reconnect a Refreshable Clone to the Source Database 3-941
Create a Cross Tenancy Refreshable Clone 3-942
Use the API 3-944
Refreshable Clone Notes 3-944
Manage Autonomous Database Built-in Tools 3-945
About Autonomous Database Built-in Tools 3-946
View Autonomous Database Built-in Tools Status 3-948
Configure Autonomous Database Built-in Tools 3-949
Configure Autonomous Database Built-in Tools when you Provision or Clone an
Instance 3-954
Notes for Autonomous Database Built-in Tools 3-958
Use Autonomous Database Events 3-959
About Events Based Notification and Automation on Autonomous Database 3-960
Critical Events on Autonomous Database 3-960
Information Events on Autonomous Database 3-962
Autonomous Database Event Types 3-965
Get Notified of Autonomous Database Events 3-968
Manage Time Zone File Updates on Autonomous Database 3-969
About Time Zone File Update Options 3-969
Use AUTO_DST_UPGRADE Time Zone File Option 3-971
Use AUTO_DST_UPGRADE_EXCL_DATA Time Zone File Option 3-973
Monitor Autonomous Database Availability 3-974
Monitor Regional Availability of Autonomous Databases 3-976
Manage and View Notifications 3-976
About Work Requests 3-977
View Patch and Maintenance Window Information, Set the Patch Level 3-977
About Scheduled Maintenance and Patching 3-978
View Maintenance Event History 3-979
View Patch Details 3-980
Set the Patch Level 3-981
View Maintenance Status Notifications 3-982
View and Manage Customer Contacts for Operational Issues and Announcements 3-983
View Oracle Cloud Infrastructure Operations Actions 3-984
DBA_OPERATOR_ACCESS View 3-985
Monitor and Manage Performance 3-985
Monitor the Performance of Autonomous Database 3-986
Use Database Actions to Monitor Activity and Utilization 3-986
Use Database Actions to Monitor Active Session History Analytics and SQL
Statements 3-995
Monitor Autonomous Database with Performance Hub 3-996
Manage Concurrency and Priorities on Autonomous Database 3-999
Database Service Names for Autonomous Data Warehouse 3-1000
Database Service Names for Autonomous Transaction Processing and
Autonomous JSON Database 3-1001
Idle Time Limits 3-1002
Service Concurrency 3-1002
Change MEDIUM Service Concurrency Limit (ECPU Compute Model) 3-1007
Change MEDIUM Service Concurrency Limit (OCPU Compute Model) 3-1010
Predefined Job Classes with Oracle Scheduler 3-1015
Manage CPU/IO Shares on Autonomous Database 3-1015
Manage Runaway SQL Statements on Autonomous Database 3-1017
Use Fast Ingest on Autonomous Database 3-1018
Monitor the Performance of Autonomous Database with Oracle Management Cloud 3-1019
Monitor Performance with Autonomous Database Metrics 3-1019
View Metrics for an Autonomous Database Instance 3-1020
View Metrics for Autonomous Databases in a Compartment 3-1022
Autonomous Database Metrics and Dimensions 3-1023
Use Database Management Service to Monitor Databases 3-1023
Perform SQL Tracing on Autonomous Database 3-1024
Configure SQL Tracing on Autonomous Database 3-1024
Enable SQL Tracing on Autonomous Database 3-1026
Disable SQL Tracing on Autonomous Database 3-1027
View Trace File Saved to Cloud Object Store on Autonomous Database 3-1027
View Trace Data in SESSION_CLOUD_TRACE View on Autonomous Database 3-1029
Service Console Replacement with Database Actions 3-1029
Available Metrics: oci_autonomous_database 3-1030
Use Sample Data Sets in Autonomous Database 3-1039
Use the Oracle Autonomous Database Free Container Image 3-1040
About the Autonomous Database Free Container Image 3-1041
Oracle Autonomous Database Free Container Image License 3-1041
Oracle Autonomous Database Free Container Image Features 3-1042
Oracle Autonomous Database Free Container Image Recommendations and
Restrictions 3-1042
Container Registry Locations for Oracle Autonomous Database Free Container Image 3-1044
Start Autonomous Database Free Container Image 3-1045
Perform Database Operations using adb-cli 3-1046
Connect to an Autonomous Database Free Container Image 3-1047
ORDS/APEX/Database Actions 3-1047
Available TNS Aliases 3-1047
Setup a Wallet and Connect 3-1048
Setup TLS Walletless Connection and Connect 3-1049
Migrate Data Between Autonomous Database Free Containers 3-1049
4 Tutorials
Load and Analyze Your Data with Autonomous Database 4-1
Self-service Tools for Everyone Using Oracle Autonomous Database 4-1
Manage and Monitor Autonomous Database 4-2
Integrate, Analyze and Act on All Data using Autonomous Database 4-2
Load the Autonomous Database with Data Studio 4-2
QuickStart Java applications with Autonomous Database 4-3
5 Features
Data Lake Capabilities 5-1
About SQL Analytics on Data Lakes 5-2
Use Data Studio to Load and Link Data 5-4
Query Data Lakehouse Using SQL 5-6
Advanced Analytics 5-8
Collaborate Using Common Metadata 5-9
Spark on Autonomous Database 5-10
Data Studio 5-10
High Availability 5-14
Use Application Continuity on Autonomous Database 5-15
About Application Continuity on Autonomous Database 5-15
Configure Application Continuity on Autonomous Database 5-16
Client Configuration for Continuous Availability on Autonomous Database 5-22
Notes for Application Continuity on Autonomous Database 5-27
Use Standby Databases with Autonomous Data Guard for Disaster Recovery 5-28
About Standby Databases 5-29
Enable Autonomous Data Guard 5-41
Add a Cross-Region Standby Database 5-43
Perform a Switchover 5-47
Automatic Failover with a Standby Database 5-50
Perform a Manual Failover 5-51
Convert Cross Region Peer to Snapshot Standby 5-54
Update Standby to Use a Backup Copy Peer 5-60
Disable a Cross Region Standby Database 5-61
Events and Notifications for a Standby Database 5-65
Use the API 5-66
Autonomous Data Guard Notes 5-66
Autonomous Database Cross-Region Paired Regions 5-69
Backup and Restore Autonomous Database Instances 5-69
About Backup and Recovery on Autonomous Database 5-70
Edit Automatic Backup Retention Period on Autonomous Database 5-71
Create Long-Term Backups on Autonomous Database 5-74
View Backup Information and Backups on Autonomous Database 5-77
Restore and Recover your Autonomous Database 5-77
Backup and Restore Notes 5-80
Use OraMTS Feature 5-81
About OraMTS Recovery Service 5-82
Prerequisites to Enable OraMTS Recovery Service on Autonomous Database 5-82
Enable OraMTS Recovery Service on an Autonomous Database 5-83
Disable OraMTS Recovery Service on an Autonomous Database 5-84
Use Backup-Based Disaster Recovery 5-84
About Backup-Based Disaster Recovery 5-85
View Backup-Based Disaster Recovery Peer 5-88
Add a Cross-Region Disaster Recovery Peer 5-89
Update Disaster Recovery Type 5-92
Disable a Cross-Region (Remote) Peer 5-93
Perform a Switchover to a Backup Copy Peer 5-94
Perform a Failover 5-97
Convert Cross Region Backup-Based Disaster Recovery Peer to Snapshot
Standby 5-99
Use the API 5-100
Backup-Based Disaster Recovery Events 5-100
Track Table Changes with Flashback Time Travel 5-100
About Flashback Time Travel 5-101
Enable Flashback Time Travel for a Table 5-102
Disable Flashback Time Travel for a Table 5-102
Modify the Retention Time for Flashback Time Travel 5-103
Purge Historical Data for Flashback Time Travel 5-103
View Flashback Time Travel Information 5-104
Notes for Flashback Time Travel 5-104
Security 5-105
Security Features Overview 5-106
Security and Authentication in Oracle Autonomous Database 5-107
Configuration Management 5-108
Data Encryption 5-108
Data Access Control 5-109
Auditing Overview on Autonomous Database 5-112
Assessing the Security of Your Database and its Data 5-113
Regulatory Compliance Certification 5-114
Security Testing Policies 5-115
Manage Encryption Keys on Autonomous Database 5-115
About Master Encryption Key Management on Autonomous Database 5-116
Prerequisites to Use Customer-Managed Encryption Keys on Autonomous
Database 5-118
Use Customer-Managed Encryption Keys with Vault Located in Local Tenancy 5-124
Use Customer-Managed Encryption Key Located in a Remote Tenancy 5-126
Switch to Oracle-Managed Encryption Keys on Autonomous Database 5-127
View History for Customer-Managed Encryption Keys on Autonomous Database 5-127
Notes for Using Customer-Managed Keys with Autonomous Database 5-128
Configure Network Access with Access Control Rules (ACLs) and Private Endpoints 5-131
Configure Network Access with Access Control Rules 5-131
Configure Network Access with Private Endpoints 5-138
Update Network Options to Allow TLS or Require Only Mutual TLS (mTLS)
Authentication on Autonomous Database 5-152
Audit Autonomous Database 5-156
About Auditing Autonomous Database 5-156
Register Oracle Data Safe on Autonomous Database 5-159
Extend Audit Record Retention with Oracle Data Safe on Autonomous Database 5-160
View and Manage Oracle Data Safe Audit Trails on Autonomous Database 5-160
View and Manage Audit Policies with Oracle Data Safe on Autonomous Database 5-161
Generate Audit Reports with Data Safe on Autonomous Database 5-161
Rotate Wallets for Autonomous Database 5-161
About Wallet Rotation 5-161
Rotate Wallets with Immediate Rotation 5-162
Rotate Wallets with Grace Period 5-163
Use Oracle Database Vault with Autonomous Database 5-166
Oracle Database Vault Users and Roles on Autonomous Database 5-166
Enable Oracle Database Vault on Autonomous Database 5-167
Disable Oracle Database Vault on Autonomous Database 5-168
Disable User Management with Oracle Database Vault on Autonomous Database 5-168
Enable User Management with Oracle Database Vault on Autonomous Database 5-169
IAM Policies for Autonomous Database 5-169
IAM Permissions and API Operations for Autonomous Database 5-169
Policy Details for Autonomous Database 5-173
Policies to Manage Autonomous Databases 5-175
Use Oracle Data Safe 5-176
About Data Safe 5-176
Register Data Safe 5-178
Use Data Safe Features 5-178
Break Glass Access for SaaS on Autonomous Database 5-179
About Break Glass Access on Autonomous Database 5-179
Enable Break Glass Access 5-181
Disable Break Glass Access 5-183
Notes for Break Glass Access 5-184
Encrypt Data While Exporting or Decrypt Data While Importing 5-184
Encrypt Data While Exporting to Object Storage 5-185
Decrypt Data While Importing from Object Storage 5-189
Performance Monitor and Management 5-194
Use Operations Insights on Autonomous Database 5-195
Manage Optimizer Statistics on Autonomous Database 5-195
Manage Optimizer Statistics with Data Warehouse Workloads 5-195
Manage Optimizer Statistics with Transaction Processing and JSON Database
Workloads 5-196
Manage Automatic Indexing on Autonomous Database 5-197
Manage Automatic Partitioning on Autonomous Database 5-198
About Automatic Partitioning 5-199
How Automatic Partitioning Works 5-199
Configure Automatic Partitioning 5-200
Use Automatic Partitioning 5-202
Generate Automatic Partitioning Reports 5-203
Example Automatic Partitioning Scenarios 5-204
Data Dictionary Views for Automatic Partitioning 5-209
Graph Analytics using Graph Studio 5-210
Use Oracle Graph Analytics 5-211
Build Applications Using Graph APIs 5-211
Use Oracle Machine Learning with Autonomous Database 5-212
OML Machine Learning 5-212
Creating Analytic Views 5-214
Use Select AI to Generate SQL from Natural Language Prompts 5-225
Usage Guidelines 5-225
About SQL Generation 5-226
Use DBMS_CLOUD_AI to Configure AI Profiles 5-227
Configure DBMS_CLOUD_AI Package 5-227
Use OpenAI 5-229
Use Cohere 5-230
Use Azure OpenAI Service 5-230
Create and Set an AI Profile 5-230
Use AI Keyword to Enter Prompts 5-231
Examples of Using Select AI 5-233
Terminology 5-254
JSON Document Stores 5-255
Cache Messages with Singleton Pipes and Using Pipes for Persistent Messaging 5-255
Cache Messages with Singleton Pipes 5-255
About Caching Messages with Singleton Pipes 5-256
Automatic Refresh of Cached Message with a Cache Function 5-259
Create an Explicit Singleton Pipe 5-260
Create an Explicit Singleton Pipe with a Cache Function 5-262
Use Persistent Messaging with Messages Stored in Cloud Object Store 5-263
About Persistent Messaging with DBMS_PIPE 5-263
Create an Explicit Persistent Pipe and Send a Message 5-265
Retrieve a Persistent Message on Same Database 5-268
Retrieve a Persistent Message by Creating a Pipe on a Different Database 5-269
Remove a Persistent Pipe 5-271
Query Text in Object Storage 5-272
About Using a Text Index to Query Text on Object Storage 5-272
Create a Text Index on Object Storage Files 5-273
Drop an Index on the Cloud Storage Files 5-274
Text Index Reference Table 5-274
Monitor Text Index Creation 5-275
Use Oracle Workspace Manager on Autonomous Database 5-276
About Using Oracle Workspace Manager on Autonomous Database 5-276
Enable Oracle Workspace Manager on Autonomous Database 5-277
Disable Oracle Workspace Manager on Autonomous Database 5-277
Use and Manage Elastic Pools on Autonomous Database 5-278
About Elastic Pools 5-279
About Elastic Pool Billing 5-281
About Elastic Pools with Autonomous Data Guard Enabled 5-284
About Elastic Pool Leader and Member Operations 5-285
Create, Join, or Manage an Elastic Pool 5-286
Join an Existing Elastic Pool 5-287
Change the Elastic Pool Shape 5-288
Create or Join an Elastic Pool While Provisioning or Cloning an Instance 5-289
List Elastic Pool Members 5-292
Remove Pool Members from an Elastic Pool 5-292
As Pool Leader Remove Members from an Elastic Pool 5-293
Notes for Leaving an Elastic Pool 5-293
Terminate an Elastic Pool 5-293
Elastic Pool Notes 5-294
Migrate Oracle Databases to Autonomous Database 5-294
Migration Prerequisites 5-295
Recommended Migration Methods 5-295
6 Reference
Previous Feature Announcements 6-1
2023 What's New 6-2
December 2023 6-2
November 2023 6-3
October 2023 6-3
September 2023 6-4
August 2023 6-5
July 2023 6-6
June 2023 6-6
May 2023 6-8
April 2023 6-9
March 2023 6-10
January 2023 6-11
February 2023 6-12
What's New 2022 6-13
December 2022 6-13
November 2022 6-14
October 2022 6-15
September 2022 6-16
August 2022 6-16
July 2022 6-17
June 2022 6-18
May 2022 6-19
April 2022 6-20
March 2022 6-20
February 2022 6-22
January 2022 6-22
2021 What's New 6-23
December 2021 6-24
November 2021 6-25
October 2021 6-26
September 2021 6-26
August 2021 6-27
July 2021 6-28
June 2021 6-28
May 2021 6-29
April 2021 6-30
March 2021 6-31
February 2021 6-32
January 2021 6-33
2020 What's New 6-33
December 2020 6-33
November 2020 6-34
October 2020 6-34
September 2020 6-35
August 2020 6-35
July 2020 6-36
June 2020 6-36
May 2020 6-37
April 2020 6-37
March 2020 6-38
February 2020 6-39
January 2020 6-39
2019 What's New 6-40
December 2019 6-41
November 2019 6-41
October 2019 6-42
September 2019 6-42
August 2019 6-43
July 2019 6-43
June 2019 6-44
May 2019 6-44
April 2019 6-45
March 2019 6-46
February 2019 6-47
January 2019 6-47
2018 What's New 6-47
October 2018 6-48
September 2018 6-48
August 2018 6-48
July 2018 6-48
June 2018 6-49
May 2018 6-49
April 2018 6-49
Sample Star Schema Benchmark (SSB) Queries and Analytic Views 6-50
Star Schema Benchmark Queries 6-50
Star Schema Benchmark Analytic Views 6-53
Autonomous Database Supplied Package Reference 6-56
DBMS_CLOUD Package 6-57
DBMS_CLOUD Endpoint Management 6-57
DBMS_CLOUD Subprograms and REST APIs 6-58
DBMS_CLOUD URI Formats 6-142
DBMS_CLOUD Package Format Options 6-151
DBMS_CLOUD Package Format Options for EXPORT_DATA 6-163
DBMS_CLOUD Avro, ORC, and Parquet Support 6-169
DBMS_CLOUD Exceptions 6-181
DBMS_CLOUD_ADMIN Package 6-182
Summary of DBMS_CLOUD_ADMIN Subprograms 6-182
DBMS_CLOUD_ADMIN Exceptions 6-223
DBMS_CLOUD_AI Package 6-223
Summary of DBMS_CLOUD_AI Subprograms 6-223
DBMS_CLOUD_FUNCTION Package 6-235
Summary of DBMS_CLOUD_FUNCTION Subprograms 6-235
DBMS_CLOUD_LINK Package 6-245
DBMS_CLOUD_LINK Overview 6-246
Summary of DBMS_CLOUD_LINK Subprograms 6-246
DBMS_CLOUD_LINK_ADMIN Package 6-253
DBMS_CLOUD_LINK_ADMIN Overview 6-253
Summary of DBMS_CLOUD_LINK_ADMIN Subprograms 6-253
DBMS_CLOUD_MACADM Package 6-257
CONFIGURE_DATABASE_VAULT Procedure 6-258
DISABLE_DATABASE_VAULT Procedure 6-258
DISABLE_USERMGMT_DATABASE_VAULT Procedure 6-259
ENABLE_DATABASE_VAULT Procedure 6-259
ENABLE_USERMGMT_DATABASE_VAULT Procedure 6-260
DBMS_CLOUD_NOTIFICATION Package 6-260
DBMS_CLOUD_NOTIFICATION Overview 6-260
Summary of DBMS_CLOUD_NOTIFICATION Subprograms 6-261
DBMS_DATA_ACCESS Package 6-267
DBMS_DATA_ACCESS Overview 6-267
DBMS_DATA_ACCESS Security Model 6-268
Summary of DBMS_DATA_ACCESS Subprograms 6-268
DBMS_PIPE Package 6-271
DBMS_PIPE Overview for Singleton Pipes 6-272
Summary of DBMS_PIPE Subprograms for Singleton Pipes 6-272
DBMS_PIPE Overview for Persistent Messaging Pipes 6-280
Summary of DBMS_PIPE Subprograms for Persistent Messaging 6-281
DBMS_CLOUD_PIPELINE Package 6-291
Summary of DBMS_CLOUD_PIPELINE Subprograms 6-291
DBMS_CLOUD_PIPELINE Attributes 6-297
DBMS_CLOUD_PIPELINE Views 6-299
DBMS_CLOUD_REPO Package 6-300
DBMS_CLOUD_REPO Overview 6-300
DBMS_CLOUD_REPO Data Structures 6-301
DBMS_CLOUD_REPO Subprogram Groups 6-301
Summary of DBMS_CLOUD_REPO Subprograms 6-303
DBMS_DCAT Package 6-324
Data Catalog Users and Roles 6-324
Required Credentials and IAM Policies 6-325
Summary of Connection Management Subprograms 6-328
Summary of Synchronization Subprograms 6-332
Summary of Data Catalog Views 6-340
DBMS_MAX_STRING_SIZE Package 6-354
CHECK_MAX_STRING_SIZE Function 6-355
MODIFY_MAX_STRING_SIZE Procedure 6-355
DBMS_AUTO_PARTITION Package 6-356
CONFIGURE Procedure 6-357
VALIDATE_CANDIDATE_TABLE Function 6-359
RECOMMEND_PARTITION_METHOD Function 6-360
APPLY_RECOMMENDATION Procedure 6-362
REPORT_ACTIVITY Function 6-363
REPORT_LAST_ACTIVITY Function 6-364
CS_RESOURCE_MANAGER Package 6-364
LIST_CURRENT_RULES Function 6-364
LIST_DEFAULT_RULES Function 6-365
REVERT_TO_DEFAULT_VALUES Procedure 6-366
UPDATE_PLAN_DIRECTIVE Procedure 6-367
CS_SESSION Package 6-368
SWITCH_SERVICE Procedure 6-368
DBMS_SHARE Package 6-370
Summary of Share Producer Subprograms 6-370
Summary of Share Consumer Subprograms 6-411
Summary of Share Producer Views 6-433
Summary of Share Consumer Views 6-450
DBMS_SHARE Constants 6-454
Autonomous Database for Experienced Oracle Database Users 6-463
About Autonomous Database for Experienced Oracle Database Users 6-464
Data Warehouse Workload with Autonomous Database 6-464
Transaction Processing and JSON Database Workloads with Autonomous
Database 6-467
Autonomous Database Views 6-468
Track Table and Partition Scan Access with Autonomous Database Views 6-469
Track Oracle Cloud Infrastructure Resources, Cost and Usage Reports with
Autonomous Database Views 6-470
Pipeline Views 6-480
DBMS_CLOUD_AI Views 6-484
DBMS_CLOUD_FUNCTION Views 6-486
DBMS_CLOUD_LINK Views 6-487
DBMS_DATA_ACCESS Views 6-490
Oracle Cloud Infrastructure Logging Interface Views 6-491
Stripe Views on Autonomous Database 6-498
Autonomous Database – Oracle Database Features 6-511
Always Free Autonomous Database – Oracle Database 21c 6-512
Always Free Autonomous Database Oracle Database 21c Features 6-513
Always Free Autonomous Database Oracle Database 21c Notes 6-517
Autonomous Database RMAN Recovery Catalog 6-517
Use Autonomous Database as an RMAN Recovery Catalog 6-517
Notes for Users Migrating from Other Oracle Databases 6-518
Initialization Parameters 6-519
SQL Commands 6-528
Data Types 6-531
PL/SQL Packages Notes for Autonomous Database 6-532
Oracle XML DB 6-537
Oracle Text 6-538
Oracle Flashback 6-539
Oracle Database Real Application Security 6-539
Oracle LogMiner 6-540
Choose a Character Set for Autonomous Database 6-540
Database Features Unavailable in Autonomous Database 6-542
Logical Partition Change Tracking and Materialized Views 6-543
About Logical Partition Change Tracking 6-543
Using Logical Partition Change Tracking 6-544
Example: Logical Partition Change Tracking 6-547
Migrating MySQL and Third-Party Databases to Autonomous Database 6-553
Migrating Amazon Redshift to Autonomous Database 6-553
Autonomous Database Redshift Migration Overview 6-554
Connect to Amazon Redshift 6-554
Connect to Autonomous Database 6-557
Start the Cloud Migration Wizard 6-558
Review and Finish the Amazon Redshift Migration 6-564
Use Generated Amazon Redshift Migration Scripts 6-565
Perform Post Migration Tasks 6-567
Migrating MySQL to Autonomous Database 6-567
Migrating Microsoft SQL Server or Sybase Adaptive Server to Autonomous Database 6-568
Migrating IBM DB2 (UDB) Database to Autonomous Database 6-568
Migrating Teradata Database to Autonomous Database 6-568
SODA Collection Metadata 6-568
SODA Collection Metadata on Autonomous Database 6-568
SODA Default Collection Metadata on Autonomous Database 6-569
SODA Customized Collection Metadata on Autonomous Database 6-570
Cloud Support and Identity Information 6-572
Obtain Tenancy Details 6-572
Database Name 6-573
Region 6-573
Tenancy OCID 6-573
Database OCID 6-573
Compartment OCID 6-573
Outbound IP Address 6-574
Availability Domain 6-574
Preface
This document describes how to manage, monitor, and use Oracle Autonomous Database
and provides references to related documentation.
• Audience
• Documentation Accessibility
• Related Documents
• Conventions
Audience
This document is intended for Oracle Cloud users who want to manage and monitor Oracle
Autonomous Database.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Related Documents
Depending on the region and when you provisioned your database, and in some cases
depending on your provisioning choice, the Oracle Database version for your Autonomous
Database is either Oracle Database 19c or Oracle Database 21c.
If you have Oracle Database 19c, then many database concepts and features of this service
are further documented here:
Oracle Database 19c
If you are using Always Free Autonomous Database with Oracle Database 21c, then many
concepts and features of this service are further documented here:
Oracle Database 21c
For additional information, see these Oracle resources:
• Welcome to Oracle Cloud Infrastructure
Conventions
The following text conventions are used in this document:
boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace: Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
1
Overview
April 2024
Live Data Sharing Using Database Actions in Autonomous Database: Use Data Sharing to share data, as of the latest database commit, with Autonomous Databases in the same region. A Data Sharing recipient always views the latest data. See Overview of the Data Share Tool for more information.
Updated Version of Oracle Autonomous Database Free Container Image: A new version of the Oracle Autonomous Database Free Container Image is available with updates and new features. Use the Oracle Autonomous Database Free Container Image to run Autonomous Database in a container in your own environment, without requiring access to Oracle Cloud Infrastructure Console or to the internet. See Use the Oracle Autonomous Database Free Container Image for more information.
March 2024
Refreshable Clone Automatic Refresh Option: When you enable the automatic refresh option, Autonomous Database automatically refreshes a refreshable clone with data from the source database. See Refreshable Clone with Automatic Refresh Enabled and Edit Automatic Refresh Policy for Refreshable Clone for more information.
Invoke External Procedures: You can invoke external procedures written in the C language from Autonomous Database as SQL functions. You do not install external procedures on Autonomous Database; to use an external procedure, the procedure is hosted remotely on a VM running in an Oracle Cloud Infrastructure Virtual Cloud Network (VCN). See Invoke External Procedures as SQL Functions for more information.
Network File System (NFS) NFSv4 Support: You can attach a Network File System to a directory location in your Autonomous Database. This allows you to load data from Oracle Cloud Infrastructure File Storage in your Virtual Cloud Network (VCN) or from any other Network File System in on-premises data centers. Depending on the version of the Network File System you want to access, both NFSv3 and NFSv4 are supported. See Access Network File System from Autonomous Database for more information.
Support for Regular Expressions in DBMS_CLOUD Procedures: The format option regexuri is supported with the following DBMS_CLOUD procedures: COPY_COLLECTION, COPY_DATA, CREATE_EXTERNAL_TABLE, CREATE_EXTERNAL_PART_TABLE, and CREATE_HYBRID_PART_TABLE. See DBMS_CLOUD Package Format Options for more information.
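For example, a hedged sketch of loading all monthly CSV files whose names match a regular expression; the table, credential, and Object Storage URI are hypothetical:

-- Sketch only: names and bucket URI are placeholders.
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/sales_2024_([0-9]{2})\.csv',
    format          => JSON_OBJECT('type' VALUE 'csv', 'regexuri' VALUE TRUE)
  );
END;
/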
Encrypt or Decrypt Data: For greater security you can encrypt data that you export to Object Storage. When data on Object Storage is encrypted, you can decrypt it when you import it or when you use it in an external table. See Encrypt Data While Exporting or Decrypt Data While Importing for more information.
JOB_QUEUE_PROCESSES Initialization Parameter: You can set the JOB_QUEUE_PROCESSES initialization parameter. Setting this parameter to 0 disables Scheduler jobs that are not supplied by Oracle. See JOB_QUEUE_PROCESSES for more information.
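For instance, assuming a session with ADMIN privileges, disabling and later restoring customer Scheduler jobs might look like this (the nonzero value is illustrative):

-- Disable Scheduler jobs that are not supplied by Oracle
ALTER SYSTEM SET job_queue_processes = 0;

-- Later, restore job processing with a nonzero value of your choosing
ALTER SYSTEM SET job_queue_processes = 20;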
Pre-Authenticated Request URLs Access Information: You can generate and manage Pre-Authenticated Request (PAR) URLs for data on Autonomous Database, and Autonomous Database provides views that allow you to monitor PAR URL usage. See Monitor and View PAR URL Usage for more information.
UTL_HTTP Allows HTTP_PROXY Connections: With UTL_HTTP, when the Autonomous Database instance is on a private endpoint you can call UTL_HTTP.SET_PROXY, and both HTTPS and HTTP_PROXY connections are allowed. See Submit an HTTP Request to a Private Host and PL/SQL Packages Notes for Autonomous Database for more information.
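A minimal sketch, assuming a hypothetical proxy host and private target URL:

DECLARE
  req  UTL_HTTP.REQ;
  resp UTL_HTTP.RESP;
BEGIN
  -- Route subsequent requests in this session through the proxy
  UTL_HTTP.SET_PROXY(proxy => 'www-proxy.example.com:80');
  req  := UTL_HTTP.BEGIN_REQUEST('http://private-host.example.com/status');
  resp := UTL_HTTP.GET_RESPONSE(req);
  DBMS_OUTPUT.PUT_LINE('HTTP status: ' || resp.status_code);
  UTL_HTTP.END_RESPONSE(resp);
END;
/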
February 2024
Pre-Authenticated Request URLs for Read Only Data Access: You can generate and manage Pre-Authenticated Request (PAR) URLs for data on Autonomous Database. See Use Pre-Authenticated Request URLs for Read Only Data Access on Autonomous Database for more information.
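As a hedged sketch, generating a PAR URL for a table might look like the following; the schema object is hypothetical, and the parameter names assume the DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL signature described in the reference chapter:

DECLARE
  par_url CLOB;
BEGIN
  DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL(
    schema_name        => 'ADMIN',
    schema_object_name => 'SALES',   -- hypothetical table or view
    expiration_minutes => 120,
    result             => par_url
  );
  DBMS_OUTPUT.PUT_LINE(par_url);
END;
/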
Replicate Backups to a Cross-Region Disaster Recovery Peer: You can enable replication of automatic backups from a primary database to a cross-region disaster recovery peer. Up to seven days of automatic backups are available in the remote region when this feature is enabled. See Replicating Backups to a Cross-Region Autonomous Data Guard Standby and Replicating Backups to a Cross-Region Backup-Based Disaster Recovery Peer for more information.
Select AI with OCI Generative AI: Autonomous Database can interact with AI service providers. Select AI now supports OCI Generative AI, Azure OpenAI Service, OpenAI, and Cohere. This feature lets LLMs work with Oracle Database by generating SQL from natural language prompts, so you can talk to your database. See Use Select AI to Generate SQL from Natural Language Prompts for more information.
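A hedged sketch of the flow, with hypothetical profile, credential, and table names:

-- Create and set an AI profile, then ask a question in natural language.
BEGIN
  DBMS_CLOUD_AI.CREATE_PROFILE(
    profile_name => 'MY_AI_PROFILE',
    attributes   => '{"provider": "openai",
                      "credential_name": "OPENAI_CRED",
                      "object_list": [{"owner": "ADMIN", "name": "SALES"}]}');
  DBMS_CLOUD_AI.SET_PROFILE(profile_name => 'MY_AI_PROFILE');
END;
/

SELECT AI how many sales were recorded last month;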
January 2024
ECPU Compute Model for All Workload Types: Autonomous Database offers two compute models when you create or clone an instance: ECPU and OCPU. Previously only the Data Warehouse and Transaction Processing workload types supported the ECPU compute model. Now all workload types support the ECPU compute model, including JSON and APEX. See Compute Models in Autonomous Database for more information.
Manage Time Zone File Updates: Autonomous Database provides several options to automatically update time zone files. See Manage Time Zone File Updates on Autonomous Database for more information.
Custom Kerberos Service Name Support for Kerberos Authentication: You can use a custom Kerberos service name when you configure Kerberos to authenticate Autonomous Database users. With a custom service name you can use the same keytab files on multiple Autonomous Database instances when you enable Kerberos authentication (you do not need to create and upload different keytab files for each Autonomous Database instance). See Configure Kerberos Authentication with Autonomous Database for more information.
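As a sketch only: Kerberos is enabled through DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION, and the parameter key for the custom service name below is our assumption based on this feature description; the Object Storage URI and credential are hypothetical:

BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
    type   => 'KERBEROS',
    params => JSON_OBJECT(
      'location_uri'          VALUE 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o',
      'credential_name'       VALUE 'OBJ_STORE_CRED',
      'kerberos_service_name' VALUE 'oracle'  -- assumed key for the custom service name
    )
  );
END;
/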
Clone Database to Change Workload Type: You can create a clone of an Autonomous Database and select a different workload type for the cloned database. See Clone Autonomous Database to Change Workload Type for more information.
About Autonomous Database
backing up the database, patching and upgrading the database, and growing or shrinking the
database. Autonomous Database is a completely elastic service.
At any time you can scale either the compute or the storage capacity up or down. When you make resource changes for your Autonomous Database instance, the resources automatically grow or shrink without any downtime or service interruption.
Autonomous Database is built upon Oracle Database, so that the applications and tools that
support Oracle Database also support Autonomous Database. These tools and applications
connect to Autonomous Database using standard SQL*Net connections. The tools and
applications can either be in your data center or in a public cloud. Oracle Analytics Cloud and
other Oracle Cloud services provide support for Autonomous Database connections.
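For example, after downloading and unzipping the instance wallet, and with TNS_ADMIN pointing to the wallet directory, a SQL*Plus user connects with one of the predefined service aliases from the wallet's tnsnames.ora (the database name mydb here is hypothetical):

sqlplus admin@mydb_high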
Autonomous Database also includes the following:
• Oracle APEX: a low-code development platform that enables you to build scalable,
secure enterprise apps with world-class features.
• Oracle REST Data Services (ORDS): a Java Enterprise Edition based data service that
makes it easy to develop modern REST interfaces for relational data and JSON
Document Store.
• Database Actions: a web-based interface that uses Oracle REST Data Services to provide development, data tools, administration, and monitoring features for Autonomous Database.
• Oracle Machine Learning Notebooks Early Adopter: an enhanced web-based notebook platform for data engineers, data analysts, R and Python users, and data scientists. You can write code and text, create visualizations, and perform data analytics including machine learning. In Oracle Machine Learning, notebooks are available in a project within a workspace, where you can create, edit, delete, copy, move, and even save notebooks as templates.
Key Features
• Managed: Oracle simplifies end-to-end management of the database:
– Provisioning new databases
– Growing or shrinking storage and compute resources
– Patching and upgrades
– Backup and recovery
• Fully elastic scaling: Scale compute and storage independently to fit your database
workload with no downtime:
– Size the database to the exact compute and storage required
– Scale the database on demand: Independently scale compute or storage
– Shut off idle compute to save money
• Auto scaling: Allows your database to use more CPU and IO resources or to use
additional storage automatically when the workload or storage demand requires
additional resources:
Oracle Database in an environment that is tuned and optimized for data warehouse
workloads.
To get started, you create an Autonomous Database with workload type Data Warehouse and specify the number of ECPUs (OCPUs if your database uses OCPUs) and the storage capacity in TB for the Autonomous Database.
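For illustration only, a minimal sketch of this provisioning step using the OCI Python SDK; the compartment OCID, database name, and password are placeholder assumptions, and the exact parameters depend on your tenancy:

import oci

# Load credentials from the default OCI configuration file (~/.oci/config).
config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# Placeholder values: replace the compartment OCID, name, and password.
details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",
    db_name="ADWDEMO",
    db_workload="DW",            # Data Warehouse workload type
    compute_model="ECPU",
    compute_count=2,             # minimum number of ECPUs is 2
    data_storage_size_in_tbs=1,
    admin_password="ExamplePassw0rd#123",
)

response = db_client.create_autonomous_database(details)
print(response.data.lifecycle_state)  # typically PROVISIONING at first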
You can use Autonomous Database with Oracle Analytics Cloud or Oracle Analytics Desktop
to easily create visualizations and projects that reveal trends in your company’s data and help
you answer questions and discover important insights about your business.
The following figure shows the Autonomous Database architecture with related components
for analytics and data warehousing.
[Figure: Autonomous Database architecture for analytics and data warehousing, showing client tools (Oracle Analytics Desktop, SQL Developer, Oracle Machine Learning, Database Actions, REST Data Services, Oracle APEX, and 3rd-party BI on Oracle Cloud Infrastructure), data integration services (Oracle Cloud Infrastructure Data Integration Services, Oracle DI Platform Cloud, 3rd-party DI on Oracle Cloud Infrastructure, and on-premises 3rd-party BI), Oracle Autonomous Database, and Oracle Object Storage Cloud.]
No matter what kind of data your applications use, whether JSON or something else, you can
take advantage of all Oracle Database features. This is true regardless of the kind of Oracle
Autonomous Database you use.
JSON data is stored natively in the database. In a SODA collection on Autonomous Database, JSON data is stored in Oracle's native binary format, OSON.
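As a brief sketch of working with a SODA collection from python-oracledb (SODA requires Thick mode and the SODA_APP privileges; the connection details and collection name here are placeholder assumptions):

import getpass
import oracledb

oracledb.init_oracle_client()  # SODA requires python-oracledb Thick mode

# Assumes TNS_ADMIN points at your unzipped wallet directory.
password = getpass.getpass("Password: ")
connection = oracledb.connect(user="admin", password=password, dsn="mydb_high")

soda = connection.getSodaDatabase()
collection = soda.createCollection("mycollection")  # documents stored as OSON
collection.insertOne({"name": "Alice", "city": "Boston"})
doc = collection.find().filter({"name": "Alice"}).getOne()
print(doc.getContent())
connection.commit()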
Government Cloud
See Oracle Cloud Infrastructure Government Cloud for information about availability in
Government Cloud regions.
Note:
Always Free Autonomous Databases provisioned with Oracle Database 21c
do not support Oracle Cloud Infrastructure Identity and Access Management
(IAM) authentication and authorization.
Autonomous Data Guard Not Available for Always Free Autonomous Database
Autonomous Data Guard is not available with Always Free Autonomous Databases. See Use
Standby Databases with Autonomous Data Guard for Disaster Recovery for more
information.
Tools Configuration Options Not Available for Always Free Autonomous Database
Always Free Autonomous Database does not provide configuration options for Autonomous
Database tools and does not allow you to disable Autonomous Database tools. For example,
you cannot specifically disable HTTP access to Oracle APEX and REST Data Services on
Always Free databases.
Supplemental Logging and Oracle GoldenGate Extract are Not Available for Always
Free Autonomous Database
The following are not available for Always Free Autonomous Databases:
• Supplemental logging
• Oracle GoldenGate Extract
See Supplemental Logging and Configuring Extract to Capture from an Autonomous
Database for more information.
Note:
After an Always Free Autonomous Database has been stopped and is later
started, you may need to reconnect to the database from SQL*Net database
clients. You can use the same Oracle Wallet and database user credentials
to reconnect.
Note:
For details on upgrading your Always Free APEX Service to an Oracle APEX
Application Development paid instance, see Upgrading Always Free APEX
Service to a Paid Version.
If your account has finished a trial without upgrading to paying status, you can
continue using Always Free databases but you cannot upgrade Always Free instances
to paid instances until the account is first upgraded to paying status. See Upgrade
Your Free Oracle Cloud Promotion for more information.
If your Oracle Cloud Infrastructure account is in a trial period or has paying status and the
Oracle Database version for the Always Free database is Oracle Database 19c, then you can
upgrade the Always Free database to a paid instance as follows:
1. From the Oracle Cloud Infrastructure left navigation menu, click Oracle Database and then, depending on your workload, click one of: Autonomous Data Warehouse, Autonomous JSON Database, or Autonomous Transaction Processing.
2. On the Autonomous Databases page select an Always Free Autonomous Database from
the links under the Display name column.
3. On the Autonomous Database Details page, from the More actions drop-down list select
Upgrade instance to paid.
4. Click Upgrade instance to paid.
When you upgrade from an Always Free instance to a paid instance you get a paid instance
of the same workload type with the minimum CPU and minimum storage available for that
workload type. After you upgrade you can scale up CPU and storage resources to fit your
needs.
For example:
• Data Warehouse workload type: When you upgrade to paid from an Always Free
instance, the upgrade process provides a paid Autonomous Database instance with
workload type Data Warehouse, with 2 ECPUs and 1 TB of database storage.
• Transaction Processing workload type: When you upgrade to paid from an Always Free
instance, the upgrade process provides a paid Autonomous Database instance with
workload type Transaction Processing, with 2 ECPUs and 1 TB of database storage.
1. On the Oracle Cloud Infrastructure Console, open the Help menu and, under Request Help, click Create Support Request.
2. Enter the following:
• Issue Summary: Enter a title that summarizes your issue. Avoid entering
confidential information.
• Describe Your Issue: Provide a brief overview of your issue.
– Include all the information that support needs to route and respond to your
request.
See Obtain Tenancy Details for details on obtaining Autonomous
Database information.
– Include troubleshooting steps taken and any available test results.
• Select the severity level for this request.
3. Click Create Support Request.
The following terms apply to the Availability Service Level Objective for the Oracle
Autonomous Database Built-in Tools listed in this table:
• “HTTP Error Rate” applies separately to each Database Built-in Tool and means the percentage value that corresponds to: (i) the total number of failed HTTP calls made to the applicable tool with a status of “Bad Gateway” or “Service Unavailable” in a minute period during a calendar month, divided by (ii) the total number of HTTP calls made to the tool in that minute period.
• “Monthly Uptime Percentage” is calculated by subtracting from 100% the average of the HTTP Error Rate for each minute period during the applicable calendar month.
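As an illustration of this calculation, a small Python sketch (the per-minute error rates are made-up sample values, one per minute period in the month):

# One HTTP Error Rate value (a percentage) per minute period in the month.
per_minute_error_rates = [0.0] * 43198 + [50.0, 100.0]  # two degraded minutes

monthly_uptime = 100.0 - (sum(per_minute_error_rates) / len(per_minute_error_rates))
print(f"Monthly Uptime Percentage: {monthly_uptime:.4f}%")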
The following terms apply to the Availability Service Level Objective for Oracle
Database API for MongoDB:
• A "Connection through Oracle Database API for MongoDB" is a direct connection
established from any tool or application to the Cloud Service using Oracle
Database API for MongoDB.
• “Monthly Uptime Percentage” is calculated by subtracting from 100% the percentage of minutes during the calendar month in which the applicable Cloud Service was “Unavailable”.
• “Unavailable” means a minute period when: (i) no connection through Oracle
Database API for MongoDB is or can be established and (ii) all continuous
attempts, at least five, to establish such a connection fail.
level option, you can test the patches on these instances before the patches are applied to
your production database. If you see issues in your test or pre-prod database, you can file
service requests to get the issues addressed before the patch is applied to your production
database.
Oracle provides a service level objective of zero regressions in your production database.
"Regression" in this document is described as issues introduced by patches or updates to the
Autonomous Database performed during the maintenance window announced on your
database console.
After a patch is applied to your database on the Early patch level, if you find and report an
issue on that database through a service request, Oracle will use commercially reasonable
efforts to address the problem so that the same issue does not occur in your production
database.
See Set the Patch Level for more information.
Note:
OCPU is a legacy billing metric and has been retired for Autonomous
Data Warehouse (Data Warehouse workload type) and Autonomous
Transaction Processing (Transaction Processing workload type).
Oracle recommends using ECPUs for all new and existing Autonomous
Database deployments. See Oracle Support Document 2998742.1 for
more information.
• ECPU usage (License Included): SKU Name: Oracle Autonomous Transaction Processing – ECPU. Part Number: B95702. Metric: ECPU per hour. Minimum increments allowed: 1 ECPU; the minimum number of ECPUs is 2. Cost report: in the product/Description column, find "Oracle Autonomous Transaction Processing – ECPU". Usage report: in the product/resource column, find PIC_ATP_ECPU_LI.
• ECPU usage (Bring your own license): SKU Name: Oracle Autonomous Transaction Processing – ECPU – BYOL. Part Number: B95704. Metric: ECPU per hour. Minimum increments allowed: 1 ECPU; the minimum number of ECPUs is 2. Cost report: in the product/Description column, find "Oracle Autonomous Transaction Processing – ECPU – BYOL". Usage report: in the product/resource column, find PIC_ATP_ECPU_BYOL.
• Database Storage usage: SKU Name: Oracle Autonomous Database Storage for Transaction Processing. Part Number: B95706. Metric: Gigabyte Storage Capacity Per Month. Minimum increments allowed: 1 GB; the minimum storage size is 20 GB. Cost report: in the product/Description column, find "Oracle Autonomous Database Storage for Transaction Processing". Usage report: in the product/resource column, find PIC_ADB_STORAGE_ATP.
• Backup Storage usage: SKU Name: Oracle Autonomous Database Storage. Part Number: B95754. Metric: Gigabyte Storage Capacity Per Month. Minimum increments allowed: 1 GB; backups are billed per GB. Cost report: in the product/Description column, find "Database Backup Storage". Usage report: in the product/resource column, find PIC_ADB_STORAGE.
See Cost and Usage Reports Overview for details on the cost and usage reports.
• ECPU usage (License Included): SKU Name: Oracle Autonomous Data Warehouse – ECPU. Part Number: B95701. Metric: ECPU per hour. Minimum increments allowed: 1 ECPU; the minimum number of ECPUs is 2. Cost report: in the product/Description column, find "Oracle Autonomous Data Warehouse – ECPU". Usage report: in the product/resource column, find PIC_ADW_ECPU_LI.
• ECPU usage (Bring your own license): SKU Name: Oracle Autonomous Data Warehouse – ECPU – BYOL. Part Number: B95703. Metric: ECPU per hour. Minimum increments allowed: 1 ECPU; the minimum number of ECPUs is 2. Cost report: in the product/Description column, find "Oracle Autonomous Data Warehouse – ECPU – BYOL". Usage report: in the product/resource column, find PIC_ADW_ECPU_BYOL.
• Database Storage usage: SKU Name: Oracle Autonomous Database Storage. Part Number: B95754. Metric: Gigabyte Storage Capacity Per Month. Minimum increments allowed: 1 TB (1024 GB). Cost report: in the product/Description column, find "Oracle Autonomous Database Storage". Usage report: in the product/resource column, find PIC_ADB_STORAGE.
• Backup Storage usage: SKU Name: Oracle Autonomous Database Storage. Part Number: B95754. Metric: Gigabyte Storage Capacity Per Month. Minimum increments allowed: 1 GB; backups are billed per GB. Cost report: in the product/Description column, find "Oracle Autonomous Database Backup Storage". Usage report: in the product/resource column, find PIC_ADB_STORAGE.
See Cost and Usage Reports Overview for details on the cost and usage reports.
• ECPU usage (License Included): SKU Name: Autonomous JSON Database – ECPU. Part Number: B99708. Metric: ECPU per hour. Minimum increments allowed: 1 ECPU; the minimum number of ECPUs is 2. Cost report: in the product/Description column, find "Autonomous JSON Database – ECPU". Usage report: in the product/resource column, find PIC_AJD_ECPU_LI.
• Database Storage usage: SKU Name: Oracle Autonomous Database Storage for Transaction Processing. Part Number: B95706. Metric: Gigabyte Storage Capacity Per Month. Minimum increments allowed: 1 GB; the minimum storage size is 20 GB. Cost report: in the product/Description column, find "Oracle Autonomous Database Storage for Transaction Processing". Usage report: in the product/resource column, find PIC_ADB_STORAGE_ATP.
• Backup Storage usage: SKU Name: Oracle Autonomous Database Storage. Part Number: B95754. Metric: Gigabyte Storage Capacity Per Month. Minimum increments allowed: 1 GB; backups are billed per GB. Cost report: in the product/Description column, find "Database Backup Storage". Usage report: in the product/resource column, find PIC_ADB_STORAGE.
See Cost and Usage Reports Overview for details on the cost and usage reports.
• ECPU usage (License Included): SKU Name: APEX Service – ECPU. Part Number: B99709. Metric: ECPU per hour. Minimum increments allowed: 1 ECPU; the minimum number of ECPUs is 2. Cost report: in the product/Description column, find "APEX Service – ECPU". Usage report: in the product/resource column, find PIC_APEX_ECPU_LI.
• Database Storage usage: SKU Name: Oracle Autonomous Database Storage for Transaction Processing. Part Number: B95706. Metric: Gigabyte Storage Capacity Per Month. Minimum increments allowed: 1 GB; the minimum storage size is 20 GB. Cost report: in the product/Description column, find "Oracle Autonomous Database Storage for Transaction Processing". Usage report: in the product/resource column, find PIC_ADB_STORAGE_ATP.
• Backup Storage usage: SKU Name: Oracle Autonomous Database Storage. Part Number: B95754. Metric: Gigabyte Storage Capacity Per Month. Minimum increments allowed: 1 GB; backups are billed per GB. Cost report: in the product/Description column, find "Database Backup Storage". Usage report: in the product/resource column, find PIC_ADB_STORAGE.
See Cost and Usage Reports Overview for details on the cost and usage reports.
Note:
OCPU is a legacy billing metric and has been retired for Autonomous Data
Warehouse (Data Warehouse workload type) and Autonomous Transaction
Processing (Transaction Processing workload type). Oracle recommends using
ECPUs for all new and existing Autonomous Database deployments. See Oracle
Support Document 2998742.1 for more information.
• OCPU usage (License Included): SKU Name: Oracle Autonomous Transaction Processing. Part Number: B90453. Metric: OCPU per hour. Minimum increments allowed: 1 OCPU. Cost report: in the product/Description column, find "Oracle Autonomous Transaction Processing". Usage report: in the product/resource column, find PIC_ATP_COMPUTE.
• OCPU usage (Bring your own license): SKU Name: Oracle Autonomous Transaction Processing – BYOL. Part Number: B90454. Metric: OCPU per hour. Minimum increments allowed: 1 OCPU. Cost report: in the product/Description column, find "Oracle Autonomous Transaction Processing – BYOL". Usage report: in the product/resourceID column, find PIC_ATP_COMPUTE_BYOL.
• Database Storage usage: SKU Name: Oracle Autonomous Transaction Processing - Exadata Storage. Part Number: B90455. Metric: Terabyte Storage Capacity Per Month. Minimum increments allowed: 1 TB. Cost report: in the product/Description column, find "Oracle Autonomous Transaction Processing - Exadata Storage". Usage report: in the product/resource column, find PIC_ATP_EXADATA_STORAGE.
See Cost and Usage Reports Overview for details on the cost and usage reports.
• OCPU usage (License Included): SKU Name: Oracle Autonomous Data Warehouse. Part Number: B89040. Metric: OCPU per hour. Minimum increments allowed: 1 OCPU. Cost report: in the product/Description column, find "Oracle Autonomous Data Warehouse". Usage report: in the product/resource column, find PIC_ADWC_COMPUTE.
• OCPU usage (Bring your own license): SKU Name: Oracle Autonomous Data Warehouse – BYOL. Part Number: B89039. Metric: OCPU per hour. Minimum increments allowed: 1 OCPU. Cost report: in the product/Description column, find "Oracle Autonomous Data Warehouse – BYOL". Usage report: in the product/resource column, find PIC_ADWC_COMPUTE_BYOL.
• Database Storage usage: SKU Name: Oracle Autonomous Data Warehouse - Exadata Storage. Part Number: B89041. Metric: Terabyte Storage Capacity Per Month. Minimum increments allowed: 1 TB. Cost report: in the product/Description column, find "Oracle Autonomous Data Warehouse - Exadata Storage". Usage report: in the product/resource column, find PIC_ADWC_EXADATA_STORAGE.
See Cost and Usage Reports Overview for details on the cost and usage reports.
• OCPU usage (License Included): SKU Name: Oracle Autonomous JSON Database. Part Number: B92212. Metric: OCPU per hour. Minimum increments allowed: 1 OCPU. Cost report: in the product/Description column, find "Oracle Autonomous JSON Database". Usage report: in the product/resource column, find PIC_AJD_COMPUTE.
• Database Storage usage: SKU Name: Oracle Autonomous Transaction Processing - Exadata Storage. Part Number: B90453. Metric: Terabyte Storage Capacity Per Month. Minimum increments allowed: 1 TB. Cost report: in the product/Description column, find "Oracle Autonomous JSON Database - Exadata Storage". Usage report: in the product/resource column, find PIC_ATP_EXADATA_STORAGE.
Note:
In the OCPU model, 60 days of automatic backups are included in the cost of
database storage and have no additional charge. Long-term backups are
billed additionally, as database storage (Part Number: B90453).
See Cost and Usage Reports Overview for details on the cost and usage reports.
• OCPU usage (License Included): SKU Name: Oracle APEX Application Development. Part Number: B92911. Metric: OCPU per hour. Minimum increments allowed: 1 OCPU. Cost report: in the product/Description column, find "Oracle APEX Application Development". Usage report: in the product/resource column, find PIC_ADBS_APEX_LI.
• Database Storage usage: SKU Name: Oracle Autonomous Transaction Processing - Exadata Storage. Part Number: B90453. Metric: Terabyte Storage Capacity Per Month. Minimum increments allowed: 1 TB. Cost report: in the product/Description column, find "Oracle Autonomous Transaction Processing - Exadata Storage". Usage report: in the product/resource column, find PIC_ATP_EXADATA_STORAGE.
Note:
In the OCPU model, 60 days of automatic backups are included in the cost of
database storage and have no additional charge. Long-term backups are billed
additionally, as database storage (Part Number: B90453).
See Cost and Usage Reports Overview for details on the cost and usage reports.
If you then delete 1 TB of data, your allocated storage remains at 4.9 TB and you are billed for 5 TB until you perform a shrink operation. When you perform a shrink operation, Autonomous Database may be able to reduce your allocated storage back to 3.9 TB. After the shrink operation completes and your allocated storage (3.9 TB) is once again below your reserved base storage (4 TB), you are once again billed for your reserved base storage of 4 TB. See Shrink Storage for more information.
• Autonomous Data Guard Standby – Local (Same-Region)
Local Autonomous Data Guard peer databases incur the additional cost of the base
ECPUs and the storage of the Primary database, including any auto-scaled storage
usage, billed on the Primary database itself. Auto-scaled ECPUs of the Primary database
are not billed for additionally on the local Autonomous Data Guard peer database. The
number of base ECPUs is specified by the number of ECPUs as shown in the ECPU
count on the Oracle Cloud Infrastructure Console.
For example, if you enable a local Autonomous Data Guard peer on a source database
with:
– 2 (base) ECPUs with Compute auto-scaling enabled, consuming about 4 ECPUs per hour
– 1 TB of (base) storage with storage auto-scaling, consuming a total of 2 TB of
database storage
For the local Autonomous Data Guard peer you are billed for an additional 2 ECPUs
(your base ECPU selection), plus an additional 2 TB of storage (that is, the same amount
of storage reserved for the source Primary with auto-scale, on the Primary database).
When the Primary database is stopped, neither the primary nor the peer database is
billed for ECPU.
• Autonomous Data Guard Standby – Remote (Cross-Region)
Autonomous Data Guard cross-region peer databases incur the additional cost of the
base ECPUs and twice (2x) the storage of the Primary database, including any auto-
scaled storage usage, billed on the remote peer database. Auto-scaled ECPUs of the
Primary are not billed for additionally on the remote peer database. The number of base
ECPUs is specified by the number of ECPUs as shown in the ECPU count on the Oracle
Cloud Infrastructure Console.
For example, if you enable a cross-region Autonomous Data Guard peer for a source
database with:
– 2 (base) ECPUs with Compute auto-scaling enabled, consuming about 4 ECPUs per hour
– 1 TB of (base) storage with storage auto-scaling, consuming a total of 2 TB of
database storage
For the cross-region Autonomous Data Guard peer you are billed an additional 2 ECPUs
(your base ECPU selection), plus 4 TB of storage (that is 2x the storage reserved for the
source Primary with auto-scaling, billed on the remote peer database).
When the Primary database is stopped, neither the primary nor the peer database is billed for ECPU.
When the option Enable cross-region backup replication to disaster recovery peer is
selected, you are billed for twice (2x) the backup storage size required for the replicated
backups, billed to the remote standby.
Remote refreshable clones are billed for twice (2x) the amount of storage as their source
database.
For example, if you create a 2 ECPU remote refreshable clone from a source database
with:
– 4 ECPUs
– 1 TB of storage with storage auto-scaling and consuming 2 TB of storage
For the remote refreshable clone you are billed for 2 ECPUs (that is, the refreshable clone's ECPU selection) and 4 TB of storage (that is, 2x the storage reserved for the source database).
Starting or stopping the source database does not affect the refreshable clone; the refreshable clone can be started or stopped independently.
• Snapshot Standby for Remote (Cross-Region) Disaster Recovery
Snapshot standby ECPU usage is billed based on the base ECPU count and any
additional ECPU usage if compute auto-scaling is enabled. The number of base ECPUs
is specified by the number of ECPUs, as shown in the ECPU count on the Oracle Cloud
Infrastructure Console.
Snapshot standby storage usage is billed based on the storage of the snapshot standby
plus (1x) the storage of the source primary database.
For example, if you have a 2 ECPU and 3 TB snapshot standby from a source database
with:
– 4 ECPUs
– 1 TB of storage with storage auto-scaling and consuming 2 TB of storage
Your snapshot standby is billed for 2 ECPUs (that is, the snapshot standby's ECPU selection) and 3 TB + 2 TB = 5 TB of database storage (that is, the storage reserved on the snapshot standby plus the storage reserved on its source database).
• Elastic Pools: An elastic pool allows you to provision 4 times the ECPU count that you
have as your pool size. For example, if you have a pool with pool size of 128 ECPUs, you
can provision up to 512 ECPUs in this pool. In other words, when your pool size is 128
ECPUs, your pool capacity is four times the pool size (in this example, 512 ECPUs).
Databases that belong to a pool are not billed individually for compute. The compute billing for all the pool members and the leader happens through the leader. In other words, individual members of an elastic pool are not billed for compute as long as they are part of the pool, no matter what the workload type of the pool member is. For example, when a pool member with workload type Data Warehouse is added to a pool, its compute usage is billed to the pool leader at the Transaction Processing compute usage rate. Storage billing, on the other hand, continues to be billed to the individual Autonomous Database instances, whether or not they are part of a pool.
Let’s assume you have an elastic pool with pool size of 128 ECPUs. Given the pool size,
the pool capacity is 512 ECPUs (pool capacity = 4x pool size). For this sample, here are
some common billing questions and answers:
– What is the maximum number of Autonomous Database instances allowed in this
pool? A total of 512 Autonomous Database instances with 1 ECPU each (elastic pool
members or the leader can have an individual ECPU allocation as low as 1 ECPU).
– What happens if the aggregated ECPU utilization high watermark of the pool is greater than the pool size? If the aggregated ECPU utilization high watermark is less than or equal to the pool size in a given billing hour, the hourly charge is in the amount of the pool size. If the aggregated ECPU utilization high watermark is more than the pool size but less than or equal to 2x the pool size in a given billing hour, the hourly charge is in the amount of 2x the pool size. If the aggregated ECPU utilization high watermark is more than 2x the pool size in a given billing hour, the hourly charge is in the amount of 4x the pool size. A sketch of this tiering appears after this list.
For example, let’s assume 512 Autonomous Database instances with 1 ECPU each are in an elastic pool with a pool size of 128 ECPUs. If the aggregated ECPU utilization high watermark of these databases is 100 ECPUs between 1pm-2pm, and 250 ECPUs between 2pm-3pm, then the billing is 128 ECPU hours between 1pm-2pm and 256 ECPU hours between 2pm-3pm.
See About Elastic Pool Billing for more information.
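A minimal Python sketch of this tiering, using the numbers from the example above (the function is illustrative only, not an Oracle API):

def elastic_pool_hourly_charge(high_watermark_ecpus, pool_size):
    # Charged at pool size, 2x pool size, or 4x pool size (the pool capacity),
    # depending on the aggregated ECPU utilization high watermark for the hour.
    if high_watermark_ecpus <= pool_size:
        return pool_size
    if high_watermark_ecpus <= 2 * pool_size:
        return 2 * pool_size
    return 4 * pool_size

print(elastic_pool_hourly_charge(100, 128))  # 128 ECPU hours (1pm-2pm)
print(elastic_pool_hourly_charge(250, 128))  # 256 ECPU hours (2pm-3pm)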
over 4 TB in a given hour, say to 4.9 TB, you are billed for 5 TB of storage from that hour
onward.
If you then delete 1 TB of data, your allocated storage remains at 4.9 TB and you are billed for 5 TB until you perform a shrink operation. When you perform a shrink operation, Autonomous Database may be able to shrink (reduce) your allocated storage back to 3.9 TB. After the shrink operation completes and your allocated storage (3.9 TB) is once again below your reserved base storage (4 TB), you are once again billed for your reserved base storage of 4 TB. See Shrink Storage for more information.
• Autonomous Data Guard Standby Local (same-region)
Local Autonomous Data Guard peer databases incur the additional cost of the base
OCPUs and the storage of the Primary database, including any auto-scaled storage
usage, billed on the Primary database itself. Auto-scaled OCPUs of the Primary database
are not billed for additionally on the local Autonomous Data Guard peer database. The
number of base OCPUs is specified by the number of OCPUs, as shown in the OCPU
count on the Oracle Cloud Infrastructure Console.
When the Primary database is stopped, neither the primary nor the peer database is
billed for OCPU.
• Autonomous Data Guard Standby Remote (cross-region)
Autonomous Data Guard cross-region standby databases incur the additional cost of the
base OCPUs and twice (2x) the storage of the Primary database, including any auto-
scaled storage usage, billed on the remote peer database. Auto-scaled OCPUs of the
Primary are not billed for additionally on the remote peer database. The number of base
OCPUs is specified by the number of OCPUs, as shown in OCPU count on the Oracle
Cloud Infrastructure Console.
When the Primary database is stopped, neither the primary nor the peer database is
billed for OCPU.
When the option Enable cross-region backup replication to disaster recovery peer is selected, you are billed, as database storage on the remote standby database, for twice (2x) the 7-day replicated backup storage size, rounded up to the nearest TB.
• Backup-Based Disaster Recovery Local (same-region) Backup Copies
There are no additional costs for local Backup-Based Disaster Recovery, other than the cost of keeping automatic backups.
• Backup-Based Disaster Recovery Remote (cross-region) Backup Copies
Billing for a cross-region Backup-Based Disaster Recovery with OCPUs is twice (2x) the
amount of storage required for backups replicated to the remote region, billed as
database storage to the remote peer, rounded up to the nearest TB.
When the option Enable cross-region backup replication to disaster recovery peer is selected, you are billed, as database storage on the remote peer database, for twice (2x) the replicated backup storage size, rounded up to the nearest TB.
• Refreshable Clone Local (same-region)
Local refreshable clones have their own configurable OCPU selection, and so they are
billed for OCPU based on the user-selected OCPU (with or without auto scaling); they do
not get billed additionally over the OCPU selection. The number of OCPUs is specified by the number of OCPUs as shown in the OCPU count on the Oracle Cloud Infrastructure Console.
Local refreshable clones are billed for the same amount of storage as their source
database.
Starting or stopping the source database does not affect the refreshable clone.
The refreshable clone can be started or stopped independently.
• Refreshable Clone Remote (cross-region)
Remote refreshable clones have their own configurable OCPU selection, and so
they are billed for OCPU based on the user-selected OCPU (with or without auto
scaling); they do not get billed additionally over the OCPU selection. The number
of OCPUs is specified by the number of OCPUs as shown in the OCPU count on
the Oracle Cloud Infrastructure Console.
Remote refreshable clones are billed for twice (2x) the amount of storage as their
source database.
Starting or stopping the source database does not affect the refreshable clone.
The refreshable clone can be started or stopped independently.
• Snapshot Standby for Remote (cross-region) disaster recovery
Snapshot standby OCPU usage is billed based on the base OCPU count and any
additional OCPU usage if compute auto-scaling is enabled. The number of base
OCPUs is specified by the OCPUs as shown in the OCPU count on the Oracle
Cloud Infrastructure Console.
Snapshot standby storage usage is billed based on the storage of the snapshot
standby plus (1x) the storage of the source primary database.
• You do not pay for a tool's ECPU allocation if you do not use the tool.
• The VMs associated with a tool are provisioned when you start using a tool. For example,
if Graph Studio is disabled, billing does not begin when you enable the tool. Billing begins
when you start using Graph Studio.
• The ECPU count specifies the compute resource allocation that is dedicated to the tool.
The value you enter for ECPU count applies in addition to the database ECPU count you
specify for the instance. After you start using a tool with a specified ECPU count, you are
billed per ECPU hour reserved from the time the built-in tool is launched.
• Billing stops for a built-in tool's allocated ECPUs when the built-in tool is disabled, the instance stops or is terminated, or when the built-in tool is idle for more than the specified Maximum idle time. The default maximum idle times depend on the built-in database tool:
– Graph Studio: 240 minutes
– Oracle Machine Learning User Interface: 60 minutes
– Data Transforms: 10 minutes
For example, assume you allocate 4 ECPUs as the base ECPU count for the Autonomous Database instance, with compute auto-scaling disabled, and 2 ECPUs for Graph Studio with the maximum idle time set to 30 minutes. If you use Graph Studio for 30 minutes between 2 pm and 3 pm, your bill for that period (2-3 pm) is 4 ECPU hours for the database plus 2 ECPU hours for the Graph Studio usage (30 minutes of use plus the 30-minute maximum idle time), for a total of 6 ECPU hours, assuming no other tools are used in this period.
As another example, assume you allocate 6 ECPUs as the base ECPU count for the Autonomous Database instance, with compute auto-scaling disabled, and you configure 4 ECPUs for Oracle Machine Learning (OML) with the maximum idle time set to 15 minutes. When the database is open and available for the hour between 2 and 3 pm, and you use OML for 15 minutes starting at 2 pm, then with the specified 15-minute maximum idle time you are charged for 4 ECPUs of OML compute usage for 30 minutes plus 6 ECPUs for the database compute resources for the full hour, for a total of 8 ECPU hours, assuming no other tools are used in this period.
Note: billing is per minute.
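The arithmetic in these two examples can be sketched as follows (a simplification for illustration; it assumes the tool launches and then idles out within a single one-hour window):

def tool_billing_ecpu_hours(db_ecpus, tool_ecpus, usage_minutes, max_idle_minutes):
    # Database ECPUs are billed for the full hour; the tool's ECPUs are billed
    # from launch until it is stopped or exceeds its maximum idle time.
    tool_minutes = min(usage_minutes + max_idle_minutes, 60)
    return db_ecpus + tool_ecpus * (tool_minutes / 60)

print(tool_billing_ecpu_hours(4, 2, 30, 30))  # 6.0 (Graph Studio example)
print(tool_billing_ecpu_hours(6, 4, 15, 15))  # 8.0 (OML example)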
Connect to Autonomous Database
• Clients connecting with TLS do not need to worry about wallet rotation. Wallet rotation is
a regular procedure for mTLS connections.
• TLS connections can be faster: TLS authentication can provide reduced connection latency compared to mTLS.
• TLS and mTLS connections are not mutually exclusive. Mutual TLS (mTLS)
authentication is enabled by default and always available. When you enable TLS
authentication, you can use either mTLS or TLS authentication.
• Using TLS authentication does not compromise the fully encrypted end-to-end
communication between a client and Autonomous Database.
• About Mutual TLS (mTLS) Authentication
Using Mutual Transport Layer Security (mTLS), clients connect through a TCPS (Secure
TCP) database connection using standard TLS 1.2 with a trusted client certificate
authority (CA) certificate. With mutual authentication both the client application and
Autonomous Database authenticate each other. Autonomous Database uses mTLS
authentication by default.
• About TLS Authentication
Using Transport Layer Security (TLS), clients connect through a TCPS (Secure TCP)
database connection using standard TLS 1.2. A client uses its list of trusted Certificate
Authorities (CA)s to validate the server’s CA root certificate. If the issuing CA is trusted,
the client verifies that the certificate is authentic. This allows the client and Autonomous
Database to establish the encrypted connection before exchanging any messages.
[Figure: a client computer connecting to Oracle Autonomous Database over mTLS, with a wallet/keystore on each side of the connection.]
• If the client is connecting with managed ODP.NET or ODP.NET Core versions 19.13 or
21.4 (or above) using TLS authentication, the client can connect without providing a
wallet.
There are network access prerequisites for TLS connections. See Network Access
Prerequisites for TLS Connections for more information.
db2022adb_high =
  (description=
    (address=(protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))
    (connect_data=(service_name=example_high.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))
Your firewall must allow access to servers within the .oraclecloud.com domain using port
1522. To connect to Autonomous Database, depending upon your organization's network
configuration, you may need to use a proxy server to access this port or you may need to
request that your network administrator open this port.
Note:
By default Application Continuity is disabled.
See Use Application Continuity on Autonomous Database for more information on Application
Continuity.
Prepare for Oracle Call Interface, ODBC, and JDBC OCI Connections with Wallets
(mTLS)
Preparing for any type of Oracle Call Interface (OCI) connection with mTLS authentication
requires the installation of client software, downloading client credentials, and configuring
certain files and environment variables.
This topic covers the steps to prepare an application to connect using mTLS authentication
with a wallet that you download from an Autonomous Database instance. See Prepare for
Oracle Call Interface, ODBC, and JDBC OCI Connections Using TLS Authentication for
information on the steps to prepare for TLS authentication with these connection types.
5. Create the TNS_ADMIN environment variable and set it to the location of the
credentials file.
Use this environment variable to change the directory path of Oracle Net Services
configuration files from the default location of ORACLE_HOME\network\admin to
the location of the secure folder containing the credentials file you saved in Step 2.
Set the TNS_ADMIN environment variable to the directory where the unzipped
credentials files are, not to the credentials file itself.
For example, on UNIX/Linux, set TNS_ADMIN to the full path of the directory where you unzipped the client credentials:
export TNS_ADMIN=/home/adb_credentials
On Windows:
set TNS_ADMIN=d:\myapp\adb_credentials
and tnsnames.ora files. Connections through an HTTP proxy are only available with Oracle
Client software version 12.2.0.1 or later.
Note:
To avoid manual updates in sqlnet.ora and tnsnames.ora files, you can use
SQLcl and specify the HTTP proxy on the command line. See Connect Oracle
SQLcl Cloud with a Wallet (mTLS) for more information.
1. Add the following line to the sqlnet.ora file to enable connections through an HTTP
proxy:
SQLNET.USE_HTTPS_PROXY=on
2. Add the HTTP proxy hostname and port to the connection definitions in tnsnames.ora.
You need to add the https_proxy and https_proxy_port parameters in the address
section of connection definitions. For example, the following sets the HTTP proxy to
proxyhostname and the HTTP proxy port to 80; replace these values with your HTTP
proxy information:
ADB1_high =
  (description=
    (address=
      (https_proxy=proxyhostname)(https_proxy_port=80)
      (protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com)
    )
    (connect_data=(service_name=adb1_high.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes))
  )
Note:
Configuring sqlnet.ora and tnsnames.ora for the HTTP proxy may not be enough
depending on your organization's network configuration and security policies. For
example, some networks require a username and password for the HTTP proxy. In
such cases contact your network administrator to open outbound connections to
hosts in the oraclecloud.com domain using port 1522 without going through an
HTTP proxy.
2. Copy the entries in the tnsnames.ora file provided in the Autonomous Database
wallet to your existing tnsnames.ora file.
Prepare for Oracle Call Interface, ODBC, and JDBC OCI Connections Using
TLS Authentication
Preparing for any type of Oracle Call Interface (OCI) connection with TLS
authentication requires the installation of client software and configuring certain files
and environment variables.
Oracle Call Interface (OCI) clients support TLS authentication without a wallet if you
are using the following client versions:
• Oracle Instant Client/Oracle Database Client 19.13 - only on Linux x64
• Oracle Instant Client/Oracle Database Client 19.14 (or later) and 21.5 (or later) -
only on Linux x64 and Windows
See Update your Autonomous Database Instance to Allow both TLS and mTLS
Authentication for information on allowing TLS connections.
1. Install Oracle Instant Client.
a. Go to the Oracle Instant Client page and click Download Now: Oracle Instant Client
b. On the Oracle Instant Client Downloads page, select your platform.
For example, under Instant Client for Linux, select the Instant Client for
Linux x86-64 architecture (for this example, to download the Linux x86-64
version).
c. Under Version 19.14.0.0.0 (Requires glibc 2.14), select an Instant Client
package to download.
d. If you are building a language API or driver from source code, you may also
need to download the Instant Client SDK Package version 19.14: Oracle
Instant Client
e. Unzip the base package you selected. If you also download the SDK, unzip it
in the same directory.
f. On Linux, create a symbolic link if it does not exist. For example:
cd /home/myuser/instantclient_19_14
ln -s libclntsh.so.19.1 libclntsh.so
If there is no other Oracle software on your system that will be impacted, add
Instant Client to the runtime link path. For example:
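One common approach, assuming the standard Instant Client installation steps (adjust the path to your Instant Client directory):
sudo sh -c "echo /home/myuser/instantclient_19_14 > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig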
Alternatively set the library path in each shell that runs your application. For example:
export LD_LIBRARY_PATH=/home/myuser/
instantclient_19_14:$LD_LIBRARY_PATH
Note:
The Linux Instant Client download files are available as .zip files or .rpm
files. You can use either version.
2. If you have not already done so, enable TLS connections on your Autonomous Database
instance.
See Update your Autonomous Database Instance to Allow both TLS and mTLS
Authentication for details.
3. Run Your Application
a. Update your application to connect using your database username, your password,
and the Oracle Net connect name given in the unzipped tnsnames.ora file. For
example, user, adb_user, password, and db2022adb_low as the connect string.
b. Alternatively, change the connect string in tnsnames.ora to match the string used
by your application.
c. Run your application.
Allowing TLS connections to Autonomous Database does not disallow mutual TLS (mTLS)
connections. Both Mutual TLS (mTLS) and TLS connections are valid when an Autonomous
Database instance is configured to allow TLS connections. See Connect Node.js and other
Scripting Languages (mTLS) for information on connecting using mutual TLS (mTLS) with a
wallet.
In this case, update your sqlnet.ora file by adding the following:
If your Java Virtual Machine (JVM) is configured to cache DNS address lookups, your
application may be using only one IP address to connect to your Autonomous
Database, resulting in lower throughput. To prevent this you can change your JVM's
networkaddress.cache.ttl value to 0, so that every connection request does a new
DNS lookup. This ensures that different threads in your application are distributed over
multiple load balancers.
To change the networkaddress.cache.ttl value for all applications, or in your
application, do one of the following:
• Configure the security policy to set the value for all applications:
Set networkaddress.cache.ttl=0 in the file $JAVA_HOME/jre/lib/security/
java.security
• Set the following property in your application code:
java.security.Security.setProperty("networkaddress.cache.ttl", "0");
Connect Microsoft .NET, Visual Studio Code, and Visual Studio with a Wallet
(mTLS)
Oracle Autonomous Database supports connectivity to the Microsoft .NET
Framework, .NET Core, Visual Studio, and Visual Studio Code.
Oracle Data Provider for .NET (ODP.NET) provides run-time ADO.NET data access to
a database.
ODP.NET has the following driver types:
• Unmanaged ODP.NET for .NET Framework applications
• Managed ODP.NET for .NET Framework applications
• ODP.NET Core for .NET Core applications
The following minimum versions are required:
• ODP.NET: 12.1 (starting with the April 2022 WinDBBP; managed ODP.NET only), 18 (base release or later), 19.4 (or later, except for 19.10), or 21 (base release or later).
Oracle Developer Tools for Visual Studio and Oracle Developer Tools for VS Code provide
database application design-time support in the Microsoft development environment,
including tools for managing Autonomous Databases in the Oracle Cloud.
Oracle Developer Tools for VS Code provides database application design-time support in
Visual Studio Code.
These software components are available as a free download from the following sites:
• Managed ODP.NET and ODP.NET Core: NuGet Gallery
• Unmanaged ODP.NET: Oracle Data Access Components Downloads
• Oracle Developer Tools for Visual Studio Code: VS Code Marketplace
• Oracle Developer Tools for Visual Studio: Oracle Developer Tools for Visual Studio
Oracle recommends using the latest provider and tools version with Oracle Autonomous
Database.
Click the following link for instructions to download, install, and configure these components
for use with mutual TLS (mTLS) and wallets:
• Developing .NET Applications for Oracle Autonomous Database
To use mutual TLS (mTLS) authentication to connect to Autonomous Database, you must
download a wallet. See Download Client Credentials (Wallets) for information downloading
wallets.
ODP.NET connectivity to Oracle Autonomous Database supports mutual TLS (mTLS) with
wallets as well as TLS. See Connect Microsoft .NET, Visual Studio Code, and Visual Studio
Without a Wallet for information on using TLS connections.
To learn more about using Oracle Autonomous Database and .NET, try the free .NET
Development with Oracle Autonomous Database Quick Start. This lab walks you through
setting up a .NET web server on Oracle Cloud Infrastructure that connects to Oracle
Autonomous Database. Next, the lab guides developing and deploying a simple ASP.NET
Core web application that uses all these components. By the end, you will have a live,
working website on the Internet.
Connect Microsoft .NET, Visual Studio Code, and Visual Studio Without a Wallet
Oracle Autonomous Database supports connectivity to the Microsoft .NET Framework, .NET
Core, Visual Studio, and Visual Studio Code using TLS authentication without a wallet.
Oracle Data Provider for .NET (ODP.NET) provides run-time ADO.NET data access to
Autonomous Database. ODP.NET has the following driver types:
• Unmanaged ODP.NET for .NET Framework applications
• Managed ODP.NET for .NET Framework applications
• ODP.NET Core for .NET Core applications
Oracle Developer Tools for Visual Studio and Oracle Developer Tools for VS Code provide
database application design-time support in the Microsoft development environment,
including tools for managing Autonomous Databases in the Oracle Cloud.
Oracle Developer Tools for VS Code provides database application design-time support in
Visual Studio Code.
These software components are available as a free download from the following sites:
• Managed ODP.NET and ODP.NET Core: NuGet Gallery
• Unmanaged ODP.NET: Oracle Data Access Components Downloads
• Oracle Developer Tools for Visual Studio Code: VS Code Marketplace
• Oracle Developer Tools for Visual Studio: Oracle Developer Tools for Visual Studio
Oracle recommends using the latest provider and tools version with Oracle
Autonomous Database.
When you connect using TLS authentication with managed ODP.NET and ODP.NET
Core you do not need to deploy the Oracle wallet or the Oracle network configuration
files sqlnet.ora or tnsnames.ora with your application. Instead, you supply the
data source attribute, a TLS connection string, with the configuration information in the
ODP.NET connection.
To use TLS connections with Managed ODP.NET and ODP.NET Core, do the
following:
1. Obtain managed ODP.NET or ODP.NET Core versions 19.13 or 21.4 (or above).
Lower level versions do not support TLS connections with Oracle Autonomous
Database.
2. Enable TLS connections on your Autonomous Database instance. See Update
your Autonomous Database Instance to Allow both TLS and mTLS Authentication
for details.
3. After you enable TLS connections, supply a TLS connection string in the ODP.NET
data source to connect to an Autonomous Database instance. See View TNS
Names and Connection Strings for an Autonomous Database Instance for details
on viewing or copying TLS connection strings.
Notes for the TLS connection string:
• The TLS connection string uses quotation marks around the distinguished
name. If you store the TLS connection string in a .NET string, add a backslash
escape sequence before each quotation mark (for example, \" ). This
allows .NET to recognize the quotation mark as part of the TLS connection
string.
• Verify that the connection string includes
(SECURITY=(SSL_SERVER_DN_MATCH=TRUE)) to ensure that the client matches
the server DN. If not specified, add this to the connect string. For example:
(description=
(retry_count=20)(retry_delay=3)(address=(protocol=tcps)
(port=1521)
(host=HOSTNAME))(connect_data=(service_name=SERVICE_NAME))
(security=(ssl_server_dn_match=true)))
Allowing TLS connections to Autonomous Database does not disallow mutual TLS
(mTLS) connections. Both Mutual TLS (mTLS) and TLS connections are valid when an
Autonomous Database instance is configured to allow TLS connections. See Connect
Microsoft .NET, Visual Studio Code, and Visual Studio with a Wallet (mTLS) for
information on connecting using mutual TLS (mTLS) with a wallet.
To learn more about using Oracle Autonomous Database and .NET, try the free .NET
Development with Oracle Autonomous Database Quick Start. This lab walks you
through setting up a .NET web server on Oracle Cloud Infrastructure that connects to Oracle
Autonomous Database. Next, the lab guides developing and deploying a simple ASP.NET
Core web application that uses all these components. By the end, you will have a live,
working website on the Internet.
Note:
Oracle Call Interface (OCI) clients support mTLS authentication with a wallet if you
are connecting using the following client versions:
• Oracle Instant Client/Oracle Database Client: 18.19 (or later), 19.2 (or later), or
21 (base release or later).
For details on connecting Node.js or other scripting languages without a wallet, see Connect Node.js, and Other Scripting Languages Without a Wallet.
cd /home/myuser/instantclient_18_5
ln -s libclntsh.so.18.1 libclntsh.so
If there is no other Oracle software on your system that will be impacted, add
Instant Client to the runtime link path. For example:
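One common approach, assuming the standard Instant Client installation steps (adjust the path to your Instant Client directory):
sudo sh -c "echo /home/myuser/instantclient_18_5 > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig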
Alternatively set the library path in each shell that runs your application. For
example:
export LD_LIBRARY_PATH=/home/myuser/
instantclient_18_5:$LD_LIBRARY_PATH
Note:
The Linux Instant Client download files are available as .zip files
or .rpm files. You can use either version.
Enable Oracle Network Connectivity and Obtain the Security Credentials (Oracle
Wallet)
1. Obtain client security credentials to connect to Autonomous Database. You obtain a zip
file containing client security credentials and network configuration settings required to
access your database. You must protect this file and its contents to prevent unauthorized
database access. Obtain the client security credentials file as follows:
• ADMIN user: Click Database connection. See Download Client Credentials
(Wallets).
• Other user (non-administrator): Obtain the Oracle Wallet from the administrator for
your Autonomous Database.
2. Extract the client credentials (wallet) files:
a. Unzip the client credentials zip file.
b. If you are using Instant Client, make a network/admin subdirectory hierarchy under the Instant Client directory if necessary, then move the files to this subdirectory. For example, depending on the architecture of your client system and where you installed Instant Client, the files should be in the directory:
C:\instantclient_12_2\network\admin
or
/home/myuser/instantclient_18_5/network/admin
or
/usr/lib/oracle/18.5/client64/lib/network/admin
If you are using a full Oracle Client, move the files to $ORACLE_HOME/network/admin.
c. Alternatively, put the unzipped wallet files in a secure directory and set the TNS_ADMIN
environment variable to that directory name.
Note:
From the zip file, only these files are required: tnsnames.ora, sqlnet.ora,
cwallet.sso, and ewallet.p12.
3. If you are behind a proxy follow the steps in “Connections with an HTTP Proxy”, in
Prepare for Oracle Call Interface, ODBC, and JDBC OCI Connections with Wallets
(mTLS).
Note:
Oracle Call Interface (OCI) clients support TLS authentication without a
wallet if you are using the following client versions:
• Oracle Instant Client/Oracle Database Client 19.13 - only on Linux x64
• Oracle Instant Client/Oracle Database Client 19.14 (or later) and 21.5 (or
later) - only on Linux x64 and Windows
cd /home/myuser/instantclient_19_14
ln -s libclntsh.so.19.1 libclntsh.so
If there is no other Oracle software on your system that will be impacted, add Instant
Client to the runtime link path. For example:
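One common approach, assuming the standard Instant Client installation steps (adjust the path to your Instant Client directory):
sudo sh -c "echo /home/myuser/instantclient_19_14 > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig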
Alternatively set the library path in each shell that runs your application. For example:
export LD_LIBRARY_PATH=/home/myuser/
instantclient_19_14:$LD_LIBRARY_PATH
Note:
The Linux Instant Client download files are available as .zip files or .rpm
files. You can use either version.
– Visual Studio with the SQL Server Integration Services extension, which also
supports SQL Server Data Tools
– Power BI On-premises Data gateway
– Power BI Desktop
• Install and configure ODP.NET on the Windows client
• Validate the MS tool connection to Autonomous Database
Below are step-by-step connectivity tutorials for each of the MS tools listed above. These
tutorials show how to connect these MS tools with either on-premises Oracle databases or
Oracle Autonomous Database:
• Power BI Desktop: Connect to Oracle Database
• Power BI Service: Connect to Oracle Database
• SQL Server Data Tools and Integration Services: Connect to Oracle Database
• SQL Server Analysis Services: Connect to Oracle Database
• SQL Server Reporting Services: Connect to Oracle Database
Note:
Oracle recommends you keep up to date with Python and python-oracledb
driver releases.
Collecting oracledb
  Downloading oracledb-1.0.3-cp310-cp310-win_amd64.whl (1.0 MB)
     ---------------------------------------- 1.0/1.0 MB 1.8 MB/s eta 0:00:00
Collecting cryptography>=3.4
  Downloading cryptography-37.0.4-cp36-abi3-win_amd64.whl (2.4 MB)
     ---------------------------------------- 2.4/2.4 MB 3.5 MB/s eta 0:00:00
Collecting cffi>=1.12
  Downloading cffi-1.15.1-cp310-cp310-win_amd64.whl (179 kB)
     ---------------------------------------- 179.1/179.1 kB 5.4 MB/s eta 0:00:00
Collecting pycparser
  Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
     ---------------------------------------- 118.7/118.7 kB 7.2 MB/s eta 0:00:00
Installing collected packages: pycparser, cffi, cryptography, oracledb
Successfully installed cffi-1.15.1 cryptography-37.0.4 oracledb-1.0.3 pycparser-2.21
• In the case where you do not have permission to write to system directories,
include the --user option. For example:
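python -m pip install oracledb --user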
• If a binary package is not available for your platform, running pip will
download the source package instead. The source is compiled and the
resulting binary is installed.
See Installing python-oracledb for additional options and tips.
3. If you want to use the python-oracledb driver in Thick mode, install Oracle Client
software.
By default, python-oracledb runs in Thin mode which connects directly to Oracle
Database. Thin mode does not require Oracle Client libraries. However, some
additional functionality is available when python-oracledb runs in Thick mode.
Note:
See Oracle Database Features Supported by python-oracledb for information
on supported features in python-oracledb Thin and Thick modes. Not all of the
features shown in this link are available with Autonomous Database.
Python-oracledb uses Thick mode when you use either the Oracle Instant client libraries
or the Oracle Database Client libraries and you call oracledb.init_oracle_client() in
your Python code.
When you install Oracle Client Software, there are differences in required minimum
versions for mTLS and TLS connections, as follows:
• Mutual TLS (mTLS) Connections:
– If your database is on a remote computer, then download the free Oracle Instant
Client “Basic” or “Basic Light” package for your operating system architecture.
Use a supported version: Oracle Instant Client: 18.19 (or later), 19.2 (or later),
or 21 (base release or later).
– Alternatively, you can use the Full Oracle Database client libraries when they are
available on your system (including Full Oracle Database Client: Oracle
Database Client: 18.19 (or later), 19.2 (or later), or 21 (base release or later).
• TLS Connections: Oracle Call Interface (OCI) clients support TLS authentication
without a wallet if you are using the following client versions:
– Oracle Instant Client/Oracle Database Client 19.14 (or later) and 21.5 (or later) -
only on Linux x64 and Windows
– Alternatively, you can use the Full Oracle Database client libraries when they are
available on your system, including Full Oracle Database Client 19.14 (or later)
and 21.5 (or later).
Note:
You must select TLS in the TLS Authentication drop-down to obtain
the TLS connection strings before you copy a connection string
(when the value is Mutual TLS the connection strings have different
values and do not work with TLS connections).
c. Copy the Connection String for the database service you want to use with your
application.
See View TNS Names and Connection Strings for an Autonomous Database
Instance for more information.
# Thin mode (default): connects directly, no Oracle Client libraries required.
# 'password' is assumed to already hold the ADMIN password, for example
# read with getpass.getpass().
import oracledb

cs='''(description = (retry_count=20)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=xxx.oraclecloud.com))
    (connect_data=(service_name=xxx.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))'''

connection = oracledb.connect(
    user="admin",
    password=password,
    dsn=cs)

# Thick mode: the same connection string, after loading the Oracle Client
# libraries with oracledb.init_oracle_client().
cs='''(description = (retry_count=20)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=xxx.oraclecloud.com))
    (connect_data=(service_name=xxx.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))'''

oracledb.init_oracle_client()

connection = oracledb.connect(
    user="admin",
    password=password,
    dsn=cs)
3. Perform this step only if you want to connect in Thin mode: Run Python Application with python-oracledb Thin Mode with a Wallet (mTLS)
4. Perform this step only if you want to connect in Thick mode: Run Python Application with python-oracledb Thick Mode with a Wallet (mTLS)
Note:
Protect the wallet.zip file and its contents to prevent unauthorized
database access.
Run Python Application with python-oracledb Thin Mode with a Wallet (mTLS)
By default, python-oracledb uses Thin mode to connect directly to your Autonomous
Database instance.
In Thin mode only two files from the wallet zip are needed:
• tnsnames.ora: Maps net service names used for application connection strings
to your database services.
• ewallet.pem: Enables SSL/TLS connections in Thin mode.
To connect in Thin mode:
1. Move tnsnames.ora and ewallet.pem files to a location on your system.
Linux
For example, on Linux:
/opt/OracleCloud/MYDB
Windows
For example, on Windows:
C:\opt\OracleCloud\MYDB
Linux
For example, on Linux to connect as the ADMIN user using oracledb.connect with the
db2024_low network service name (the service name is found in tnsnames.ora):
connection = oracledb.connect(
    config_dir="/opt/OracleCloud/MYDB",
    user="admin",
    password=password,
    dsn="db2024_low",
    wallet_location="/opt/OracleCloud/MYDB",
    wallet_password=wallet_pw)
Windows
For example, on Windows to connect as the ADMIN user using oracledb.connect
with the db2024_low network service name (the service name is found in
tnsnames.ora):
connection = oracledb.connect(
    config_dir=r"C:\opt\OracleCloud\MYDB",
    user="admin",
    password=password,
    dsn="db2024_low",
    wallet_location=r"C:\opt\OracleCloud\MYDB",
    wallet_password=wallet_pw)
The use of a 'raw' string r"..." means that backslashes are treated literally, so they serve as directory separators rather than escape characters.
As shown in this example, wallet_location and config_dir are set to the same
directory (and the directory contains tnsnames.ora and ewallet.pem).
Specifying the same directory for these files is not required.
If you are behind a firewall, you can tunnel TLS/SSL connections through a proxy
using HTTPS_PROXY in the connect descriptor or by setting connection attributes.
Successful connection depends on specific proxy configurations. Oracle does not
recommend using a proxy in a production environment, due to the possible impact on
performance.
In Thin mode you can specify a proxy by adding the https_proxy and https_proxy_port parameters. For example:
connection = oracledb.connect(
    config_dir="/opt/OracleCloud/MYDB",
    user="admin",
    password=password,
    dsn="db2024_low",
    wallet_location="/opt/OracleCloud/MYDB",
    wallet_password=wallet_pw,
    https_proxy='myproxy.example.com',
    https_proxy_port=80)
connection = oracledb.connect(
    config_dir=r"C:\opt\OracleCloud\MYDB",
    user="admin",
    password=password,
    dsn="db2024_low",
    wallet_location=r"C:\opt\OracleCloud\MYDB",
    wallet_password=wallet_pw,
    https_proxy='myproxy.example.com',
    https_proxy_port=80)
Run Python Application with python-oracledb Thick Mode with a Wallet (mTLS)
By default, python-oracledb runs in Thin mode which connects directly to Oracle Database.
Additional python-oracledb features are available when the driver runs in Thick mode.
Note:
Thick mode requires that the Oracle Client libraries are installed where you run
Python. You must also call oracledb.init_oracle_client() in your Python code.
In Thick mode the following three files from the wallet zip file are required:
• tnsnames.ora: Contains the net service names used for application connection strings
and maps the strings to your database services.
• sqlnet.ora: Specifies the SQL*Net client side configuration.
• cwallet.sso: Contains the auto-open SSO wallet.
To connect in Thick mode:
1. Place the files tnsnames.ora, sqlnet.ora, and cwallet.sso on your system.
Use one of two options to place these files on your system:
• If you are using Instant Client, move the files to a network/admin subdirectory hierarchy under the Instant Client directory. For example, depending on the architecture of your client system and where you installed Instant Client, the files should be placed in a directory location such as:
/home/myuser/instantclient_19_21/network/admin
or
/usr/lib/oracle/19.21/client64/lib/network/admin
If you are using the full Oracle Client, then on Linux, for example, move the files to $ORACLE_HOME/network/admin.
• Alternatively, move the files to any accessible directory.
For example, on Linux move the files to the directory /opt/OracleCloud/MYDB
and edit sqlnet.ora to change the wallet location directory to the directory
containing the cwallet.sso file.
For example, on Linux edit sqlnet.ora as follows:
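WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY = "/opt/OracleCloud/MYDB")))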
When the configuration files are not in the default location, your application
needs to indicate where they are, either with the config_dir parameter in the
call oracledb.init_oracle_client() or by setting the TNS_ADMIN
environment variable.
Note:
Neither of these settings is needed, and you do not need to edit sqlnet.ora, if you put all the configuration files in the network/admin directory.
oracledb.init_oracle_client()

connection = oracledb.connect(
    user="admin",
    password=password,
    dsn="db2024_low")
When the configuration files are in a directory outside of the Instant Client configuration directory, set the config_dir parameter when you call oracledb.init_oracle_client().
Linux
For example, on Linux to connect as the ADMIN user using the db2024_low network
service name:
oracledb.init_oracle_client(config_dir="/opt/OracleCloud/MYDB")

connection = oracledb.connect(
    user="admin",
    password=password,
    dsn="db2024_low")
Windows
For example, on Windows to connect as the ADMIN user using the db2024_low network
service name:
oracledb.init_oracle_client(config_dir=r"C:\opt\OracleCloud\MYDB")

connection = oracledb.connect(
    user="admin",
    password=password,
    dsn="db2024_low")
The use of a 'raw' string r"..." means that backslashes are treated literally, so they serve as directory separators rather than escape characters.
If you are behind a firewall, you can tunnel TLS/SSL connections through a proxy using
HTTPS_PROXY in the connect descriptor or by setting connection attributes. Successful
connection depends on specific proxy configurations. Oracle does not recommend using a
proxy in a production environment, due to the possible impact on performance.
In Thick mode you can specify a proxy by editing the sqlnet.ora file and adding the line:

SQLNET.USE_HTTPS_PROXY=on

Then add the https_proxy and https_proxy_port parameters to the address section of the connect descriptor in tnsnames.ora. For example:

mydb_high=(description=
    (address=(https_proxy=myproxy.example.com)(https_proxy_port=80)
    (protocol=tcps)(port=1522)(host=...)
If you are not an Autonomous Database administrator and your application requires a
wallet to connect, then your administrator should provide you with the client
credentials. You can also view TNS names and connection strings for your database.
• Download Client Credentials (Wallets)
To download client credentials you can use the Oracle Cloud Infrastructure
Console or Database Actions.
• Wallet README File
The wallet README file contains the wallet expiration information and details for
Autonomous Database tools and resources.
• View TNS Names and Connection Strings for an Autonomous Database Instance
From the Database Connection page on the Oracle Cloud Infrastructure Console
you can view Autonomous Database TNS names and connection strings.
Note:
The password you provide when you download the wallet protects the
downloaded Client Credentials wallet.
For commercial regions, the wallet password complexity for the password you supply
requires the following:
• Minimum of 8 characters
• Minimum of 1 letter
• Minimum of 1 numeric character or 1 special character
For US Government regions, the wallet password complexity requires all of the
following:
• Minimum of 15 characters
• Minimum of 1 lowercase letter
• Minimum of 1 uppercase letter
• Minimum of 1 numeric character
• Minimum of 1 special character
To download client credentials from the Oracle Cloud Infrastructure Console:
1. Navigate to the Autonomous Database details page.
2. Click Database connection.
3. On the Database connection page select the Wallet type:
• Instance wallet: Wallet for a single database only; this provides a database-specific wallet.
• Regional wallet: Wallet for all Autonomous Databases for a given tenant and
region (this includes all service instances that a cloud account owns).
Note:
Oracle recommends you provide a database-specific wallet, using Instance
wallet, to end users and for application use whenever possible. Regional
wallets should only be used for administrative purposes that require potential
access to all Autonomous Databases within a region.
Note:
When you use Database Actions to download a wallet there is no Wallet type
option on the Download Client Credentials (Wallet) page and you always
download an instance wallet. If you need to download the regional wallet click
Database connection on the Oracle Cloud Infrastructure Console.
The wallet zip file contains the following files:
cwallet.sso: Auto-open SSO wallet.
ewallet.p12: PKCS12 file. The PKCS12 file is protected by the wallet password provided while downloading the wallet.
ewallet.pem: Encoded certificate file used to authenticate with the certificate authority (CA) server certificate.
keystore.jks: Java keystore file. This file is protected by the wallet password provided while downloading the wallet.
ojdbc.properties: Contains the wallet-related connection property required for JDBC connections. This file should be in the same path as tnsnames.ora.
README: Contains wallet expiration information and links for Autonomous Database tools and resources. See Wallet README File for information on the contents of the README file.
sqlnet.ora: SQL*Net client-side configuration.
tnsnames.ora: Network configuration file storing connect descriptors.
truststore.jks: Java truststore file. This file is protected by the wallet password provided while downloading the wallet.
• If you rename your Autonomous Database instance, the tools links change and the
old links no longer work. To obtain valid tools links you must download a new
Wallet zip file with an updated README file. The SODA drivers link is a resource
link and this link does not change when you rename an instance.
• The README in a regional wallet does not contain the Autonomous Database tools
and resources links.
Note:
See Update your Autonomous Database Instance to Allow both TLS and
mTLS Authentication for information on allowing TLS connections.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and then, depending on your workload, click one of: Autonomous Data Warehouse, Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
Note:
Versions of SQL Developer before 18.2 require that you enter a
Keystore Password.
Note:
If you are using Microsoft Active Directory, then for Username enter the Active
Directory "AD_domain\AD_username" (you may include double quotes), and for the
Password, enter the password for the Active Directory user. See Use Microsoft
Active Directory with Autonomous Database for more information.
Note:
See Update your Autonomous Database Instance to Allow both TLS and mTLS
Authentication for information on allowing TLS connections.
See View TNS Names and Connection Strings for an Autonomous Database Instance for information on viewing and copying connection strings.
2. Start Oracle SQL Developer and in the connections panel, right-click Connections
and select New Database Connection....
3. Choose the Connection Type Custom JDBC.
4. Enter the following information:
• Name: Enter the name for this connection.
• Username: Enter the database username. You can either use the default
administrator database account ADMIN provided as part of the service or create
a new schema, and use it.
• Password: Enter the password for the database user.
• Connection Type: Select Custom JDBC.
• Custom JDBC URL: Enter the following:
jdbc:oracle:thin:@ followed by the connection string you copied in step one.
For example, a sample value for the Custom JDBC URL field is:
jdbc:oracle:thin:@(description= (retry_count=20)(retry_delay=3)
    (address=(protocol=tcps)(port=1521)(host=adb-preprod.us-phoenix-1.oraclecloud.com))
    (connect_data=(service_name=u9adutfb2_fmexample1_medium.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))
When you copy the connection string, the values for region and databasename are
for your Autonomous Database instance.
5. Click Connect to connect to the database.
Note:
If you are using Microsoft Active Directory, then for Username enter the Active
Directory "AD_domain\AD_username" (you may include double quotes), and for the
Password, enter the password for the Active Directory user. See Use Microsoft
Active Directory with Autonomous Database for more information.
Enter password:
Last Successful login time: Wed Nov 18 2020 12:36:56 -08:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.5.0.0.0
SQL>
Note:
See Update your Autonomous Database Instance to Allow both TLS and
mTLS Authentication for information on allowing TLS connections.
To install and configure the client and connect to the Autonomous Database using
SQL*Plus with TLS authentication, do the following:
1. Prepare for Oracle Call Interface (OCI) connections.
You can use TLS authentication without a wallet in SQL*Plus if you are using the
following client versions:
• Oracle Instant Client/Oracle Database Client 19.13 - only on Linux x64
• Oracle Instant Client/Oracle Database Client 19.14 (or later) and 21.5 (or later)
- only on Linux x64 and Windows
See Prepare for Oracle Call Interface, ODBC, and JDBC OCI Connections Using
TLS Authentication for more information.
2. Copy a connection string for the Autonomous Database.
To connect with TLS authentication copy a TLS connection string. On the
Database Connection page, under TLS Authentication, select TLS to view the
connection strings for connecting with TLS authentication.
See View TNS Names and Connection Strings for an Autonomous Database Instance for information on viewing and copying connection strings.
See Predefined Database Service Names for Autonomous Database for
information on the different databases services for each connection string.
3. Connect using a database user and password, and provide the connection string
you copied in Step 2.
On UNIX/Linux start sqlplus with the connection string, enclosed in quotes on the
command line, as follows:
sqlplus username/password@'my_connect_string'
On Windows, start sqlplus with the database user and password with the copied
connection string, as follows (as compared to UNIX/Linux, on Windows do not surround
the connection string with quotes):
sqlplus username/password@my_connect_string
For example (line breaks added in the connection string for clarity):
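sqlplus admin/password@(description= (retry_count=20)(retry_delay=3)
  (address=(protocol=tcps)(port=1521)(host=xxx.oraclecloud.com))
  (connect_data=(service_name=xxx.adb.oraclecloud.com))
  (security=(ssl_server_dn_match=yes)))

The host and service name values shown are placeholders for the values in your copied connection string.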
Note:
If you are connecting to an Autonomous Database instance using Microsoft Active
Directory credentials, then connect using an Active Directory user name in the form
of "AD_domain\AD_username" (double quotes must be included), and Active
Directory user password. See Use Microsoft Active Directory with Autonomous
Database for more information.
sql -oci
When connecting using Oracle Call Interface, the Oracle Wallet is transparent to
SQLcl.
SQLcl with a JDBC Thin Connection
To connect using a JDBC Thin connection, first configure the SQLcl cloud
configuration and then connect to the database.
1. Start SQLcl with the /nolog option:

sql /nolog
2. Configure the SQLcl session to use a proxy host and your Oracle Wallet:
SQL> set cloudconfig -proxy=proxyhost:port directory/client_credentials.zip
For example:
sql /nolog
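SQL> set cloudconfig -proxy=myproxy.example.com:80 /opt/OracleCloud/MYDB/client_credentials.zip

The proxy host, port, and wallet path shown are placeholders; omit the -proxy option if you do not connect through a proxy.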
Note:
If you are connecting to Autonomous Database using Microsoft Active Directory
credentials, then connect using an Active Directory user name in the form of
"AD_domain\AD_username" (double quotes must be included), and Active
Directory user password. See Use Microsoft Active Directory with Autonomous
Database for more information.
For more information on the connection types specified in tnsnames.ora, see Manage Concurrency and Priorities on Autonomous Database.
For information on SQLcl, see Oracle SQLcl.
Note:
See Update your Autonomous Database Instance to Allow both TLS and mTLS
Authentication for information on allowing TLS connections.
You can use SQLcl version 4.2 or later with Autonomous Database. Download SQLcl from
oracle.com.
If you use JDBC Thin Driver, prepare for JDBC Thin connections. See Prepare for JDBC Thin
Connections.
To connect using a JDBC Thin Driver with TLS authentication, do the following to connect to
the database.
1. Copy a connection string for the Autonomous Database.
sql username/password@'my_connect_string'
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 -
Production
Version 19.12.0.1.0
SQL>
On Windows, start sql with the /nolog option and then connect with the copied
connection string, as follows (as compared to UNIX/Linux, on Windows do not
surround the connection string with quotes):
(security=(ssl_server_dn_match=yes)))
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.1.0
SQL>
Note:
If you are connecting to Autonomous Database using Microsoft Active Directory
credentials, then connect using an Active Directory user name in the form of
"AD_domain\AD_username" (double quotes must be included), and Active
Directory user password. See Use Microsoft Active Directory with Autonomous
Database for more information.
See About Database Actions in Using Oracle Database Actions for more information.
Note:
If your Autonomous Database is configured to use a Private Endpoint, then
you can only access Database Actions from clients in the same Virtual Cloud
Network (VCN).
See Configuring Network Access with Private Endpoints for more
information.
2. Select a quick link to go directly to a quick link action or select View all database
actions to access the full Database Actions Launchpad.
For example, click SQL to use a SQL Worksheet. On the SQL Worksheet you can use
the Consumer Group drop-down list to select the consumer group to run your SQL or
PL/SQL code. See Executing SQL Statements in the Worksheet Editor for more
information.
Depending on your browser, if the Console cannot access the database as ADMIN you will
be prompted for your database ADMIN username and password.
The user's card shows the user status with one of the following colors: green (Open), blue (Locked), or red (Expired).
The default view is Card View. You can select either grid view or card view by
clicking the Card View or Grid View icons.
3. A URL is displayed in the user's card only if the user is REST Enabled. It provides the URL to access Database Actions. Click the copy icon to copy the URL to the clipboard.
Access Database Actions, Oracle APEX, Oracle REST Data Services, and
Developer Tools Using a Vanity URL
By default you access Oracle APEX apps, REST endpoints, and developer tools on
Autonomous Database using the oraclecloudapps.com domain name. You can optionally
configure a vanity URL or custom domain name that is easy to remember to help promote
your brand identity.
After you acquire a desired domain name and matching SSL certificate from a vendor of your
choice, deploy an Oracle Cloud Infrastructure Load Balancer in your Virtual Cloud Network
(VCN) using your Autonomous Database as the backend. Your Autonomous Database
instance must be configured with a private endpoint in the same VCN. See Configuring
Network Access with Private Endpoints for more information.
To learn more, see the following:
• Introducing Vanity URLs for APEX and ORDS on Oracle Autonomous Database
• Automate Vanity URL Configuration Using Terraform
Note:
If you use TLS (instead of mTLS) for your connections using JDBC Thin
Driver with JDK8u162 or higher, a wallet is not required.
Using a JDBC URL Connection String with JDBC Thin Driver and Wallets
The connection string is found in the file tnsnames.ora which is part of the client
credentials download. The tnsnames.ora file contains the predefined service names. Each
service has its own TNS alias and connection string.
A sample entry, with dbname_high as the TNS alias and a connection string in
tnsnames.ora follows:
dbname_high= (description=
    (address=(protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))
    (connect_data=(service_name=dbname_high.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))
Set the location of tnsnames.ora with the property TNS_ADMIN in one of the following
ways:
• As part of the connection string (only with the 18.3 or newer JDBC driver)
• As a system property, -Doracle.net.tns_admin
• As a connection property (OracleConnection.CONNECTION_PROPERTY_TNS_ADMIN)
Using the 18.3 JDBC driver, the connection string includes the TNS alias and the
TNS_ADMIN connection property.
DB_URL="jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/Users/test/wallet_dbname"

DB_URL="jdbc:oracle:thin:@dbname_high?TNS_ADMIN=C:\\Users\\test\\wallet_dbname"
Note:
If you are using 12.2.0.1 or older JDBC drivers, then the connection string
contains only the TNS alias. To connect using older JDBC drivers:
• Set the location of the tnsnames.ora, either as a system property with
-Doracle.net.tns_admin or as a connection property
(OracleConnection.CONNECTION_PROPERTY_TNS_ADMIN).
• Set the wallet or JKS related connection properties in addition to
TNS_ADMIN.
For example, in this case you set the TNS alias in the DB_URL without the
TNS_ADMIN part as:
DB_URL="jdbc:oracle:thin:@dbname_high"
See Predefined Database Service Names for Autonomous Database for more details.
DB_URL="jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/Users/test/wallet_dbname"
Note:
If you are using Microsoft Active Directory with a database, then in the sample
source code update the username with the Active Directory username and
update the password with the Active Directory user password. See Use
Microsoft Active Directory with Autonomous Database for more information.
3. Set the wallet location: The properties file ojdbc.properties is pre-loaded with the
wallet related connection property.
oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))
Note:
You do not modify the file ojdbc.properties. The value of TNS_ADMIN
determines the wallet location.
4. Compile and Run: Compile and run the sample to get a successful connection. Make sure you have oraclepki.jar, osdt_core.jar, and osdt_cert.jar in the classpath. For example:

java -classpath ./lib/ojdbc8.jar:./lib/ucp.jar:./lib/oraclepki.jar:./lib/osdt_core.jar:./lib/osdt_cert.jar:. UCPSample
Note:
The auto-login wallet included in the Autonomous Database client credentials zip file removes the need for your application to use username/password authentication.

DB_URL="jdbc:oracle:thin:@dbname_high?TNS_ADMIN=/Users/test/wallet_dbname"
Note:
If you are using Microsoft Active Directory with Autonomous Database,
then make sure to change the sample source code to use the Active
Directory username and the Active Directory user password. See Use
Microsoft Active Directory with Autonomous Database for more
information.
3. Set JKS related connection properties: Add the JKS related connection properties to the ojdbc.properties file. The keystore and truststore passwords are the password you specified when you downloaded the client credentials .zip file.
To use SSL connectivity instead of Oracle Wallet, specify the keystore and truststore files and their respective passwords in the ojdbc.properties file as follows:
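javax.net.ssl.trustStore=${TNS_ADMIN}/truststore.jks
javax.net.ssl.trustStorePassword=<truststore_password>
javax.net.ssl.keyStore=${TNS_ADMIN}/keystore.jks
javax.net.ssl.keyStorePassword=<keystore_password>

The <truststore_password> and <keystore_password> values are placeholders for the password you supplied when downloading the wallet.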
Note:
Make sure to comment out the wallet-related property in ojdbc.properties. For example:
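#oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))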
4. Compile and Run: Compile and run the sample to get a successful connection. For
example:
DB_URL="jdbc:oracle:thin:@dbname_high"
Note:
If you are using Microsoft Active Directory with Autonomous Database, then
update the sample source code to use the Active Directory username and
Active Directory user password. See Use Microsoft Active Directory with
Autonomous Database for more information.
3. Set the wallet location: Add the OraclePKIProvider at the end of the provider list
in the file java.security (this file is part of your JRE install located
at $JRE_HOME/jre/lib/security/java.security) which typically looks
like:
security.provider.14=oracle.security.pki.OraclePKIProvider
4. Compile and Run: Compile and run the sample to get a successful connection. Make sure to have oraclepki.jar, osdt_core.jar, and osdt_cert.jar in the classpath. Also, you need to pass the connection properties. Update the properties with the location where tnsnames.ora and wallet files are located.

java -classpath ./lib/ojdbc8.jar:./lib/ucp.jar:./lib/oraclepki.jar:./lib/osdt_core.jar:./lib/osdt_cert.jar:.
-Doracle.net.tns_admin=/users/test/wallet_dbname
-Doracle.net.ssl_server_dn_match=true
-Doracle.net.ssl_version=1.2 (Not required for 12.2)
-Doracle.net.wallet_location="(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/users/test/wallet_dbname)))"
UCPSample
Note:
These are Windows system examples. Add a \ continuation character if you are setting -D properties on multiple lines on UNIX (Linux or a Mac).
DB_URL="jdbc:oracle:thin:@dbname_high"
Note:
If you are using Microsoft Active Directory with Autonomous Database, then
update the sample source code to use the Active Directory username and
Active Directory user password. See Use Microsoft Active Directory with
Autonomous Database for more information.
3. Compile and Run: Compile and run the sample to get a successful connection. You
need to pass the connection properties as shown. Update the properties with the location
where tnsnames.ora and JKS files are placed. If you want to pass these connection
properties programmatically then refer to DataSourceForJKS.java. For example:
java
-Doracle.net.tns_admin=/users/test/wallet_dbname
-Djavax.net.ssl.trustStore=truststore.jks
-Djavax.net.ssl.trustStorePassword=**********
-Djavax.net.ssl.keyStore=keystore.jks
-Djavax.net.ssl.keyStorePassword=************
-Doracle.net.ssl_server_dn_match=true
-Doracle.net.ssl_version=1.2 // Not required for 12.2
db2022adb_high =
    (description=
        (address=
            (https_proxy=proxyhostname)(https_proxy_port=80)
            (protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com)
        )
        (connect_data=(service_name=db2022adb_high.adb.oraclecloud.com))
        (security=(ssl_server_dn_match=yes))
    )
Notes:
• JDBC Thin client versions earlier than 18.1 do not support connections
through HTTP proxy.
• Successful connection depends on specific proxy configurations and the
performance of data transfers would depend on proxy capacity. Oracle
does not recommend using this feature in Production environments
where performance is critical.
• Configuring tnsnames.ora for the HTTP proxy may not be enough
depending on your organization's network configuration and security
policies. For example, some networks require a username and password
for the HTTP proxy.
• In all cases, contact your network administrator to open outbound
connections to hosts in the oraclecloud.com domain using the relevant
port without going through an HTTP proxy.
JDBC Thin Driver Connection Prerequisites for TLS Connections Without a Wallet
Applications that use JDBC Thin driver support TLS and mutual TLS (mTLS)
authentication. Connecting to an Autonomous Database instance with TLS
authentication requires database credentials (username and password) and provides a
secure connection, but does not require that you download Oracle wallets or Java
KeyStore (JKS) files.
Note:
See Update your Autonomous Database Instance to Allow both TLS and
mTLS Authentication for information on allowing TLS connections.
Using a JDBC URL TLS Connection String for JDBC Thin Driver Without a Wallet
To connect the database using JDBC Thin Driver with TLS without a wallet, you provide a
connection string. Each database service has its own TNS Name and connection string.
To run an application using the JDBC Thin Driver with TLS authentication without a wallet:
1. Copy a connection string for the Autonomous Database.
To connect with TLS authentication copy a TLS connection string. On the Database
Connection page, under TLS Authentication, select TLS to view the connection strings
for connecting with TLS authentication.
See View TNS Names and Connection Strings for an Autonomous Database Instance for information on viewing and copying connection strings.
See Predefined Database Service Names for Autonomous Database for information on
the different databases services for each connection string.
2. Set the DB_URL parameter.
Use the following format for the DB_URL parameter:
DB_URL=jdbc:oracle:thin:@my_connect_string
For example:
DB_URL=jdbc:oracle:thin:@(description= (retry_count=20)(retry_delay=3)
    (address=(protocol=tcps)(port=1521)(host=adb.region.oraclecloud.com))
    (connect_data=(service_name=u9adutfb2ba8x4d_database_medium.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))
When WALLET_LOCATION is used, Oracle Net Services automatically uses the wallet.
The wallet is used transparently to the application. See Prepare for Oracle Call
Interface, ODBC, and JDBC OCI Connections with Wallets (mTLS) for information on
setting WALLET_LOCATION.
Note:
See Update your Autonomous Database Instance to Allow both TLS and
mTLS Authentication for information on allowing TLS connections.
See Prepare for Oracle Call Interface, ODBC, and JDBC OCI Connections Using TLS
Authentication to prepare for Oracle Call Interface connections.
When a wallet is required and you set WALLET_LOCATION parameter in the
sqlnet.ora file, Oracle Net Services finds the location of the wallet and uses the
wallet (use of the wallet is transparent to the application).
Note:
For all of these options connections to your Autonomous Database use
certificate-based authentication and Secure Sockets Layer (SSL).
When you provision or clone your Autonomous Database you specify one of the following
network access options:
• Allow secure access from everywhere: This option assigns a public endpoint (public IP and hostname) to your database. With this selection you have two options:
– The database is accessible from all IP addresses: this is the default option when you
provision or clone Autonomous Database.
– Select Configure access control rules: This option lets you restrict access by
defining access control rules in an Access Control List (ACL). By specifying an ACL,
the database will be accessible from a whitelisted set of IP addresses or VCNs.
If you configure your database with the Allow secure access from everywhere option,
you can add, modify, or remove ACLs after you provision or clone the database.
See Overview of Restricting Access with ACLs for more information.
• Virtual cloud network: This option assigns a private endpoint, private IP, and hostname
to your database. Specifying this option allows traffic only from the VCN you specify;
access to the database from all public IPs or VCNs is blocked. This allows you to define
security rules, ingress/egress, at the Network Security Group (NSG) level and to control
traffic to your Autonomous Database.
See Configure Private Endpoints When You Provision or Clone an Instance for more
information.
Note:
After you provision or clone your database, you can change the selection you make,
to either Allow secure access from everywhere or Virtual cloud network for the
database instance.
See Change from Public to Private Endpoints with Autonomous Database for more
information.
See Change from Private to Public Endpoints with Autonomous Database for more
information.
Specifying an access control list blocks all IP addresses that are not in the ACL list
from accessing the database. After you specify an access control list, the database
only accepts connections from addresses on the access control list and the database
rejects all other client connections.
Depending on where the client machines that connect to your database are located, you have the following options with ACLs:
• If your client machines connect to your database through the public internet, then
you can use ACLs to specify the client machine's public IP addresses or their
public CIDR blocks. In this case only the specified public IP addresses can access
your database.
• If the client machines reside in an Oracle Cloud Infrastructure Virtual Cloud Network (VCN), you can configure a Service Gateway to connect to your database. In this case, you can specify the VCN in your ACL; this allows all client machines in that VCN to access your database and blocks all other connections. Furthermore, you can specify the VCN and a list of private IP addresses or CIDR blocks in that VCN. This allows only those client machines with the specified IP addresses or CIDR blocks to access your database and blocks all other connections.
See VCNs and Subnets for details on Virtual Cloud Networks (VCN).
See Access to Oracle Services: Service Gateway for details on setting up a
Service Gateway.
• If you have on-premises clients that connect to your database through Transit Routing, you can specify the VCN and also the private IP addresses or CIDR blocks of these on-premises clients to allow them to access your database.
See Transit Routing: Private Access to Oracle Services for details on Transit
Routing.
• You can use these options together to set multiple rules to allow access from
different types of clients. Multiple rules do not exclude each other.
See Configuring Network Access with Access Control Rules (ACLs) for the steps for
configuring network access with ACLs, either when you provision or clone your
database, or whenever you want to add, modify or remove ACLs.
Database Resident Connection Pooling (DRCP) provides a connection pool in your database that enables a significant reduction in the key database resources required to support many client connections, including when the database needs to scale for many simultaneous connections.
When you connect to Autonomous Database you choose either a dedicated or a pooled connection, depending on the values specified in the tnsnames.ora configuration file:
example_high= (description=
    (address=(protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))
    (connect_data=(service_name=example_high.oraclecloud.com)(SERVER=POOLED))
    (security=(ssl_server_dn_match=yes)))
When you connect with (SERVER=POOLED) specified in the tnsnames.ora file you
obtain a connection from DRCP.
For Autonomous Database, note the following for working with Database Resident
Connection Pools (DRCP):
• DRCP is enabled by default; however using DRCP is optional. To choose a pooled
connection specify SERVER=POOLED in tnsnames.ora. If you do not specify
SERVER=POOLED, you connect with a dedicated connection.
• You cannot start or stop DRCP.
See Using Database Resident Connection Pool for more information.
Manage Users
Describes administration tasks for managing database users on Autonomous
Database.
• Create Users on Autonomous Database
There are several options to create users on Autonomous Database. You can use
Oracle Database Actions Database Users card or use client-side tools that
connect to the database to create database users.
• Remove Users on Autonomous Database
To remove users from your database, connect to the database as the ADMIN user
using any SQL client tool.
5. Set a value for the Quota on tablespace DATA for the user.
6. If you want to grant roles for the new user, click the Granted Roles tab and select the
roles for the user. For example, select DWROLE and CONNECT.
7. Click Create User.
Database Actions shows the User Created confirmation message.
See Manage User Roles and Privileges on Autonomous Database for more information on
granting roles and adding or updating privileges for a user.
See The Database Users Page for detailed information on Database Actions Database
Users.
If you provide Web Access for the new user, then you need to send a URL to the new user.
See Provide Database Actions Access to Database Users for more information.
The administrator needs to provide the credentials wallet to the new user for client-side
access. See Connect to Autonomous Database for more information on client-side access
credentials.
Note:
Autonomous Database requires strong passwords; the password you specify must
meet the default password complexity rules. See About User Passwords on
Autonomous Database for more information.
See Create Oracle APEX Workspaces in Autonomous Database for information on creating
APEX workspaces.
See Create and Update User Accounts for Oracle Machine Learning Components on
Autonomous Database to add user accounts for Oracle Machine Learning Notebooks.
Note:
IDENTIFIED with the EXTERNALLY clause is not supported with Autonomous
Database.
In addition, IDENTIFIED with the BY VALUES clause is not allowed.
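For example, as the ADMIN user (the user name and password shown are placeholders; the password must meet the password complexity rules):

CREATE USER new_user IDENTIFIED BY "Strong#Password1234";
GRANT CREATE SESSION TO new_user;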
This creates new_user with connect privileges. This user can now connect to the
database and run queries. To grant additional privileges to users, see Manage User
Roles and Privileges on Autonomous Database.
The administrator needs to provide the credentials wallet to the user new_user. See
Connect to Autonomous Database for more information on client credentials.
Note:
Autonomous Database requires strong passwords; the password you specify
must meet the default password complexity rules. See About User
Passwords on Autonomous Database for more information.
See Provide Database Actions Access to Database Users to add users for Database
Actions.
See Create Oracle APEX Workspaces in Autonomous Database for information on
creating APEX workspaces.
See Create and Update User Accounts for Oracle Machine Learning Components on
Autonomous Database to add user accounts for Oracle Machine Learning
components.
See SQL Language Reference for information on the ALTER USER command.
To change the password complexity rules and password parameter values you can alter the
default profile or create a new profile and assign it to users. See Manage User Profiles with
Autonomous Database for more information.
The following are the Autonomous Database default profile password parameter values:
See Manage User Profiles with Autonomous Database for information on using CREATE USER
or ALTER USER with a profile clause.
See SQL Language Reference for information on the ALTER USER command.
Note:
You can also use Database Actions to change the password for the ADMIN
user. See Manage Users and User Roles on Autonomous Database -
Connecting with Database Actions for more information.
The password for the default administrator account, ADMIN, has the same password
complexity rules mentioned in the section Create Users on Autonomous Database -
Connecting with a Client Tool.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and then, depending on your workload, click one of: Autonomous Data Warehouse, Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
Use the following steps to unlock the ADMIN account by updating the ADMIN
password:
1. On the Details page, from the More actions drop-down list, select Administrator
password.
2. On the Administrator password page enter the new password and confirm.
3. Click Update.
This operation unlocks the ADMIN account if it was locked.
The password for the default administrator account, ADMIN, has the same password
complexity rules mentioned in the section Create Users on Autonomous Database -
Connecting with a Client Tool.
Note:
Using an Oracle Cloud Infrastructure vault secret for the ADMIN password is only
supported with the APIs.
Oracle Cloud Infrastructure Vault secrets are credentials that you use with Oracle Cloud
Infrastructure services. Storing secrets in a vault provides greater security than you might
achieve storing them elsewhere, such as in code or in configuration files. By calling database
APIs you can use secrets from the Vault Service to set the ADMIN password. The vault
secret password option is available when you create or clone an Autonomous Database
instance, or when you set or reset the ADMIN password.
You create secrets using the Oracle Cloud Infrastructure Console, CLI, or API.
Notes for using a vault secret to set or reset the ADMIN password:
• In order for Autonomous Database to reach the secret in a vault, the following conditions
must apply:
– The secret must be in current or previous state.
– If you specify a secret version in the API call, the specified secret version is used. If
you do not specify a secret version, the call uses the latest secret version.
– You must have the proper user group policy that allows READ access to the specific
secret in a given compartment. For example:
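Allow group DBUsers to read secret-bundles in compartment testing_compartment

The group and compartment names shown are placeholders.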
• The password stored in the secret must conform to Autonomous Database password
requirements.
See the following for more information:
• Overview of Vault
• CreateAutonomousDatabase
• UpdateAutonomousDatabaseDetails
• About User Passwords on Autonomous Database
• Managing Secrets
Note:
This removes all user_name objects and the data owned by user_name is
deleted.
You can also remove a database user with Oracle Database Actions. To remove a user, from the Database Users page, click the three-dot menu icon to open the context menu for a user, and then select Delete. See Manage Users and User Roles on Autonomous Database - Connecting with Database Actions for more information on using Oracle Database Actions Database Users.
Note:
Autonomous Database has restrictions on the profile clause. See SQL
Commands for information on CREATE PROFILE and ALTER PROFILE
restrictions.
1. To add or alter a profile, as the ADMIN user run either CREATE PROFILE or ALTER
PROFILE. For example:
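-- The profile name and limit values are illustrative
CREATE PROFILE new_profile LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30;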
If you are not the ADMIN user, then you must have CREATE PROFILE privilege to
run CREATE PROFILE. If you run ALTER PROFILE, then you must have ALTER
PROFILE privilege.
2. Use the new or altered profile with a CREATE USER or ALTER USER command. For
example:
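-- Placeholder user name and password; the password must meet the complexity rules
CREATE USER new_user IDENTIFIED BY "Strong#Password1234" PROFILE new_profile;
GRANT CREATE SESSION TO new_user;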
This creates new_user with profile new_profile and with connect privileges. The
new_user can now connect to the database and run queries. To grant additional
privileges to users, see Manage User Privileges on Autonomous Database -
Connecting with a Client Tool.
See CREATE PROFILE for information on using CREATE PROFILE or ALTER PROFILE.
You can import existing profiles created in other environments using Oracle Data Pump
Import (impdp). Any existing profile association with database users is preserved after
importing into Autonomous Database. When a newly created user, created from an Oracle
Data Pump import, attempts to log in for the first time, the login is handled as follows:
• The password complexity restrictions are the same as the restrictions for any user on
Autonomous Database.
• If the user's password violates the password complexity requirements, the account is
expired with a 30-day grace period. In this case the user is required to change their
password before the grace period ends.
Note:
Profile assignments for users with profile ORA_PROTECTED_PROFILE cannot be
modified.
The following users share the ORA_PROTECTED_PROFILE profile and the profile assignment of
these users cannot be changed:
• ADBSNMP
• ADB_APP_STORE
• ADMIN
• DCAT_ADMIN
• GGADMIN
• RMAN$VPC
When you create or alter a profile, you can specify a Password Verification Function (PVF) to
manage password complexity. See Manage Password Complexity on Autonomous Database
for more information.
• Manage Password Complexity on Autonomous Database
You can create a Password Verify Function (PVF) and associate the PVF with a profile to
manage the complexity of user passwords.
• Gradual Database Password Rollover for Applications
Note:
The minimum password length for a user specified PVF is 8 characters and
must include at least one upper case letter, one lower case letter and one
numeric character. The minimum password length for the DEFAULT profile is
12 characters (the DEFAULT profile uses the CLOUD_VERIFY_FUNCTION PVF).
The password cannot contain the username.
Oracle recommends using a minimum password length of 12 characters. If
you define a profile's PVF, and set the minimum password length to less than
12 characters, then tools such as Oracle Database Security Assessment
Tool (DBSAT) and Qualys report this as a database security risk.
For example, to specify a PVF for a profile, use the following command:
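-- Illustrative: assigns the Oracle-supplied STIG verify function to a profile;
-- substitute your own profile and PVF names as needed
ALTER PROFILE new_profile LIMIT PASSWORD_VERIFY_FUNCTION ORA12C_STIG_VERIFY_FUNCTION;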
If the profile is created or altered by any user other than the ADMIN user, then you
must grant the EXECUTE privilege on the PVF. If you create a PVF and the password
check fails, the database reports the ORA-28219 error.
You can specify an Oracle supplied PVF, from one of the following:
• CLOUD_VERIFY_FUNCTION (this is the default password verification function for
Autonomous Database):
This function checks for the following requirements when users create or modify
passwords:
– The password must be between 12 and 30 characters long and must include
at least one uppercase letter, one lowercase letter, and one numeric character.
– The password cannot contain the username.
– The password cannot be one of the last four passwords used for the same
username.
– The password cannot contain the double quote (") character.
– The password must not be the same password that is set less than 24 hours
ago.
• ORA12C_STIG_VERIFY_FUNCTION
This function checks for the following requirements when users create or modify
passwords:
– The password has at least 15 characters.
– The password has at least 1 lower case character and at least 1 upper case
character.
Gradual database password rollover allows the database password of the application user to be altered while allowing the older password to remain valid for the time specified by the PASSWORD_ROLLOVER_TIME limit. During the rollover period, the application instance can use either the old password or the new password to connect to the database server. When the rollover time expires, only the new password is allowed.
See Managing Gradual Database Password Rollover for Applications for more
information.
Note:
If you want to manage the user's account settings, for example to provide Web Access for Database Actions or to lock the user's account, you can do this from the User tab.
This displays the Granted Roles tab with a list of available roles and selection boxes. For
each role, you can check Granted to grant the role, Admin to permit the user to grant the
role to other users, and Default to use the default settings for Granted and Admin.
5. Select the roles you want to grant to the user.
For example, select CONNECT and DWROLE.
For each role, you can select Granted to grant the role, Admin to permit the user to
grant the role to other users, and Default to use the default settings for Granted and
Admin. A new user is granted CONNECT and RESOURCE roles when Web Access is
selected.
2. As the ADMIN user grant DWROLE. For example, the following command grants
DWROLE to the user adb_user:
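GRANT DWROLE TO adb_user;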
Note:
Granting UNLIMITED TABLESPACE privilege allows a user to use all the
allocated storage space. You cannot selectively revoke tablespace
access from a user with the UNLIMITED TABLESPACE privilege. You can
grant selective or restricted access only after revoking the privilege.
Create User
An administrator creates new user accounts and user credentials for Oracle Machine
Learning in the User Management interface.
Note:
You must have the administrator role to access the Oracle Machine Learning User
Management interface.
Note:
With a new database user, an administrator needs to issue grant commands
on the database to grant table access to the new user for the tables
associated with the user's Oracle Machine Learning notebooks.
Note:
You must have the ADMIN role to access the Oracle Machine Learning User
Management interface.
Note:
Initially, the Role field shows the role None for existing database users. After
adding a user the role Developer is assigned to the user.
7. Select a user by clicking a name in the User Name column. For example, select ANALYST1.
Selecting the user shows the Oracle Machine Learning Edit User page.
8. Enter a name in the First Name field. (Optional)
9. Enter the last name of the user in the Last Name field. (Optional)
10. In the Email Address field, enter the email ID of the user.
Making any change on this page adds the existing database user with the required
privileges as an Oracle Machine Learning component user.
11. Click Save.
This grants the required privileges to use the Oracle Machine Learning application. In Oracle
Machine Learning this user can then access any tables the user has privileges to access in
the database.
Note:
Autonomous Database integration with Oracle Cloud Infrastructure IAM is
supported in commercial regions with identity domains as well as in the legacy IAM,
which does not include identity domains. IAM with identity domains was introduced
with new Oracle Cloud Infrastructure tenancies that were created after November 8,
2021. Autonomous Database supports users and groups in default and non-default
identity domains.
• About Identity and Access Management (IAM) Authentication with Autonomous Database
You can enable an Autonomous Database instance to use Oracle Cloud Infrastructure
(IAM) authentication and authorization for users.
• Prerequisites for Identity and Access Management (IAM) Authentication on Autonomous
Database
Describes the prerequisites for enabling IAM user access on Autonomous Database.
• Enable Identity and Access Management (IAM) Authentication on Autonomous Database
Describes the steps to enable IAM user access on Autonomous Database.
• Create Identity and Access Management (IAM) Groups and Policies for IAM Users
Describes the steps to write policy statements for an IAM group to enable IAM user
access to Oracle Cloud Infrastructure resources, specifically Autonomous Database
instances.
Note:
Autonomous Database integration with Oracle Cloud Infrastructure IAM is
supported in commercial regions with identity domains as well as in the
legacy IAM, which does not include identity domains. IAM with identity
domains was introduced with new Oracle Cloud Infrastructure tenancies that
were created after November 8, 2021. Autonomous Database supports users
and groups in default and non-default identity domains.
Oracle Cloud Infrastructure IAM integration with Autonomous Database supports the
following:
• IAM Database Password Authentication
Note:
Always Free Autonomous Databases provisioned with Oracle Database 21c do not
support Oracle Cloud Infrastructure Identity and Access Management (IAM)
authentication and authorization.
Note:
Any supported 12c and above database client can be used for IAM database
password access to Autonomous Database.
An Oracle Cloud Infrastructure IAM database password allows an IAM user to log in to an
Autonomous Database instance as Oracle Database users typically log in with a user name
and password. The user enters their IAM user name and IAM database password. An IAM
database password is a different password than the Oracle Cloud Infrastructure Console
password. Using an IAM user with the password verifier, you can log in to Autonomous Database with any supported database client.
For password verifier database access, you create the mappings for IAM users and OCI
applications to the Autonomous Database instance. The IAM user accounts themselves are
managed in IAM. The user accounts and user groups can be in either the default domain or
in a custom, non-default domain.
• A client application or tool can request the database token from IAM for the user
and can pass the database token through the client API. Using the API to send the
token overrides other settings in the database client. Using IAM tokens requires
the latest Oracle Database client 19c (at least 19.16). Some earlier clients (19c
and 21c) provide a limited set of capabilities for token access. Oracle Database
client 21c does not fully support the IAM token access feature.
• If the application or tool does not support requesting an IAM database token
through the client API, the IAM user can first use Oracle Cloud Infrastructure
command line interface (CLI) to retrieve the IAM database token and save it in a
file location. For example, to use SQL*Plus and other applications and tools using
this connection method, you first obtain the database token using the Oracle Cloud
Infrastructure (OCI) Command Line Interface (CLI). If the database client is
configured for IAM database tokens, when a user logs in with the slash login form,
the database driver uses the IAM database token that has been saved in a default
or specified file location.
• A client application or tool can use an Oracle Cloud Infrastructure IAM instance
principal or resource principal to get an IAM database token, and use the IAM
database token to authenticate itself to an Autonomous Database instance.
• IAM users and OCI applications can request a database token from IAM with
several methods, including using an API-key. See Configuring a Client Connection
for SQL*Plus That Uses an IAM Token for an example. See About Authenticating
and Authorizing IAM Users for an Oracle Autonomous Database for a description
of other methods such as using a delegation token within an OCI cloud shell.
To enable IAM user access, run DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION and
specify OCI_IAM as the authentication type. For example:
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
type => 'OCI_IAM' );
END;
/
By default, the force parameter is false. When another external authentication method is
already enabled and force is false, DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION
reports an error.
If you want to disable the external authentication that is currently enabled and use IAM
authentication instead, include the force parameter set to TRUE.
For example:
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
type => 'OCI_IAM',
force => TRUE );
END;
/
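You can verify that IAM is the identity provider by querying the identity_provider_type
parameter. The following query is a sketch; it assumes your user can read V$PARAMETER:

SELECT name, value FROM v$parameter
 WHERE name = 'identity_provider_type';

The output shows: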
NAME VALUE
---------------------- -------
identity_provider_type OCI_IAM
Create Identity and Access Management (IAM) Groups and Policies for IAM
Users
Describes the steps to write policy statements for an IAM group to enable IAM user
access to Oracle Cloud Infrastructure resources, specifically Autonomous Database
instances.
A policy is a group of statements that specifies who can access particular resources,
and how. Access can be granted for the entire tenancy, databases in a compartment,
or individual databases. This means you write a policy statement that gives a specific
group a specific type of access to a specific type of resource within a specific
compartment.
Note:
Defining a policy is required to use IAM tokens to access Autonomous
Database. A policy is not required when using IAM database passwords to
access Autonomous Database.
To enable Autonomous Database to allow IAM users to connect to the database using
IAM tokens:
1. Perform Oracle Cloud Infrastructure Identity and Access Management
prerequisites by creating a group and adding users to the group.
For example, create the group sales_dbusers.
See Managing Groups for more information.
2. Write policy statements to enable access to Oracle Cloud Infrastructure resources.
a. In the Oracle Cloud Infrastructure console click Identity & Security.
b. Under Identity & Security click Policies.
c. To write a policy, click Create Policy.
d. On the Create Policy page, enter a Name and a Description.
e. On the Create Policy page, select Show manual editor.
For example, to create a policy that limits members of the DBUsers group to accessing
Autonomous Databases only in the compartment testing_compartment:
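A policy statement of the following form works (a sketch; DBUsers and
testing_compartment are placeholder names):

Allow group DBUsers to use autonomous-database-family in compartment testing_compartment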
For example, to create a policy that limits group access to a single database in a
compartment:
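A sketch that adds a where clause pinning the policy to one database OCID (the OCID
value is a placeholder):

Allow group DBUsers to use autonomous-database-family in compartment testing_compartment where target.database.id = 'your_Autonomous_Database_instance_OCID'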
• Policies can allow IAM users to access Autonomous Database instances across
the entire tenancy, in a compartment, or can limit access to a single Autonomous
Database instance.
• You can use either an instance principal or a resource principal to retrieve database
tokens and establish a connection from your application to an Autonomous Database
instance. If you use an instance principal or resource principal, you must map a
dynamic group. You cannot map instance and resource principals exclusively; you can
only map them through a shared mapping, by placing the instance or resource principal
in an IAM dynamic group.
You can create Dynamic Groups and reference dynamic groups in the policies you
create to access Oracle Cloud Infrastructure. See Configure Policies and Roles to
Access Resources and Managing Dynamic Groups for details.
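For example, a CREATE USER statement of the following form maps an IAM group to a
shared database global user (a sketch; sales_group and db_sales_group are illustrative
names):

CREATE USER sales_group IDENTIFIED GLOBALLY AS
  'IAM_GROUP_NAME=db_sales_group';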
This creates a shared global user mapping. The mapping, with global user
sales_group, is effective for all users in the IAM group. Thus, anyone in the
db_sales_group can log in to the database using their IAM credentials (through
the shared mapping of the sales_group global user).
The following example shows how to accomplish this for a non-default domain:
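A sketch, assuming the non-default domain is named sales_domain (the domain name
prefixes the group name):

CREATE USER sales_group IDENTIFIED GLOBALLY AS
  'IAM_GROUP_NAME=sales_domain/db_sales_group';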
3. If you want to create additional global user mappings for other IAM groups or users,
follow these steps for each IAM group or user.
Note:
Database users that are not IDENTIFIED GLOBALLY can continue to login as before,
even when the Autonomous Database is enabled for IAM authentication.
The following example shows how to create the user by specifying a non-default domain,
sales_domain:
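A sketch; peter_fitch is a placeholder IAM user name, and IAM_PRINCIPAL_NAME is the
attribute that maps an individual IAM user:

CREATE USER peter_fitch IDENTIFIED GLOBALLY AS
  'IAM_PRINCIPAL_NAME=sales_domain/peter_fitch';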
2. Set database authorization for Autonomous Database roles with CREATE ROLE or
ALTER ROLE statements and include the IDENTIFIED GLOBALLY AS clause,
specifying the IAM group name.
Use the following syntax to map a global role to an IAM group:
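A sketch of the documented form (role_name and iam_group_name are placeholders):

CREATE ROLE role_name IDENTIFIED GLOBALLY AS
  'IAM_GROUP_NAME=iam_group_name';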
The following example shows how to create the role by specifying a non-default
domain, sales_domain:
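A sketch, assuming the domain sales_domain and an IAM group named WidgetSalesGroup:

CREATE ROLE widget_sales_role IDENTIFIED GLOBALLY AS
  'IAM_GROUP_NAME=sales_domain/WidgetSalesGroup';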
4. If you want an existing database role to be associated with an IAM group, then use
ALTER ROLE statement to alter the existing database role to map the role to an IAM
group. Use the following syntax to alter an existing database role to map it to an
IAM group:
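A sketch of the form (existing_role and iam_group_name are placeholders):

ALTER ROLE existing_role IDENTIFIED GLOBALLY AS
  'IAM_GROUP_NAME=iam_group_name';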
If you want to add additional global role mappings for other IAM groups, follow these
steps for each IAM group.
Note:
If your Autonomous Database instance is in Restricted Mode, only the users with
the RESTRICTED SESSION privilege such as ADMIN can connect to the database.
You can use an Oracle Cloud Infrastructure IAM database token to access an Autonomous
Database instance with supported clients, including the following:
• JDBC-Thin with support for IAM Token Authentication is supported with the following:
– JDBC version 19.13.0.0.1 (or later): See JDBC and UCP Downloads for JDBC
drivers.
– JDBC version 21.4.0.0.1 (or later): See JDBC and UCP Downloads for JDBC drivers.
See Support for IAM Token-Based Authentication for more information.
• SQL*Plus and Oracle Instant Client: Supported with SQL*Plus and Instant Client on Linux,
versions 19.13 or later and versions 21.4 or later.
See Identity and Access Management (IAM) Token-Based Authentication for more
information.
• The database client can also be configured to retrieve a database token using the IAM
username and IAM database password.
See Client Connections That Use a Token Requested by an IAM User Name and
Database Password for more information.
• .NET clients (latest version, for Linux or Windows). .NET software components are
available as a free download.
Note:
You cannot configure native network encryption when passing an IAM token.
Only Transport Layer Security (TLS) by itself is supported, not native network
encryption or native network encryption with TLS.
Configuring a Client Connection for SQL*Plus That Uses an IAM Database Password
You can configure SQL*Plus to use an IAM database password.
• As the IAM user, log in to the Autonomous Database instance by using the following
syntax:
CONNECT user_name@db_connect_string
Enter password: password
In this specification, user_name is the IAM user name. There is a limit of 128 bytes
for the combined domain_name/user_name.
The following example shows how IAM user peter_fitch can log in to an
Autonomous Database instance.
sqlplus /nolog
connect peter_fitch@db_connect_string
Enter password: password
Some special characters will require double quotation marks around user_name
and password. For example:
"peter_fitch@example.com"@db_connect_string
If the security token has expired, a window will appear so the user can log
in to OCI again. This generates the security token for the user. OCI CLI
will use this refreshed token to get the db-token.
• Retrieving a db-token with a delegation token: When you log in to the
cloud shell, the delegation token is automatically generated and placed in
the /etc directory. To get this token, run the following command in the
cloud shell:
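A minimal sketch, assuming the Cloud Shell's OCI CLI version supports IAM database
tokens:

oci iam db-token get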
c. The database client can also be configured to retrieve a database token using the
IAM username and IAM database password.
See Client Connections That Use a Token Requested by an IAM User Name and
Database Password for more information.
See Required Keys and OCIDs for more information.
4. Ensure that you are using the latest release updates for the Oracle Database client
releases 19c and 21c.
This configuration only works with the Oracle Database client release 19c or 21c.
5. Follow the existing process to download the wallet from the Autonomous Database and
then follow the directions for configuring it for use with SQL*Plus.
a. Confirm that DN matching is enabled by looking for SSL_SERVER_DN_MATCH=ON in
sqlnet.ora.
Note:
Partial or full DN matching is required when sending a token from the
database client to Autonomous Database. If Autonomous Database is using
a private endpoint, you need to specify a host value for the connect string
parameter. Using an IP address for the host parameter in the connect string
will not work with DN matching and the IAM token will not be sent to the
database.
See Private Endpoints Configuration Examples on Autonomous Database
for configuration information on how to set the host parameter when using a
private endpoint.
b. Configure the database client to use the IAM token by adding TOKEN_AUTH=OCI_TOKEN
to the sqlnet.ora file. Because you will be using the default locations for the
database token file, you do not need to include the token location.
The TOKEN_AUTH and TOKEN_LOCATION values in the tnsnames.ora connect strings take
precedence over the sqlnet.ora settings for that connection. For example, for the
connect string, assuming that the token is in the default location (~/.oci/db-token for
Linux):
(description=
  (retry_count=20)(retry_delay=3)
  (address=(protocol=tcps)(port=1522)(host=example.us-phoenix-1.oraclecloud.com))
  (connect_data=(service_name=aaabbbccc_exampledb_high.example.oraclecloud.com))
  (security=(ssl_server_dn_match=yes)(TOKEN_AUTH=OCI_TOKEN)))
After the connect string is updated with the TOKEN_AUTH parameter, the IAM user can log in to
the Autonomous Database instance by running the following command to start SQL*Plus.
You can include the connect descriptor itself or use the name of the descriptor from the
tnsnames.ora file.
connect /@exampledb_high
Or:
connect /@(description=
  (retry_count=20)(retry_delay=3)
  (address=(protocol=tcps)(port=1522)(host=example.us-phoenix-1.oraclecloud.com))
  (connect_data=(service_name=aaabbbccc_exampledb_high.example.oraclecloud.com))
  (security=(ssl_server_cert_dn="CN=example.uscom-east-1.oraclecloud.com,
    OU=Oracle BMCS US, O=Example Corporation, L=Redwood City, ST=California, C=US")
  (TOKEN_AUTH=OCI_TOKEN)))
The database client is already configured to get a db-token because TOKEN_AUTH has
already been set, either through the sqlnet.ora file or in a connect string. The
database client gets the db-token and signs it using the private key and then sends
the token to the Autonomous Database. If an IAM user name and IAM database
password are specified instead of slash /, then the database client will connect using
the password instead of using the db-token.
Use Instance Principal to Access Autonomous Database with Identity and Access
Management (IAM) Authentication
After the ADMIN user enables Oracle Cloud Infrastructure IAM on Autonomous
Database, an application can access the database through an Oracle Cloud
Infrastructure IAM database token using an instance principal.
See Accessing the Oracle Cloud Infrastructure API Using Instance Principals for more
information.
At this stage, the IAM user can log in to the database instance using the proxy. For example,
to connect using a password verifier:
CONNECT peterfitch[hrapp]@connect_string
Enter password: password
To connect using a token:
CONNECT [hrapp]/@connect_string
CONNECT peterfitch[hrapp]/password\!@connect_string
SHOW USER;
--The output should be "USER is HRAPP"
SELECT SYS_CONTEXT('USERENV','AUTHENTICATION_METHOD') FROM DUAL;
--The output should be "PASSWORD_GLOBAL_PROXY"
SELECT SYS_CONTEXT('USERENV','PROXY_USER') FROM DUAL;
--The output should be "PETERFITCH_SCHEMA"
SELECT SYS_CONTEXT('USERENV','CURRENT_USER') FROM DUAL;
--The output should be "HRAPP"
CONNECT [hrapp]/@connect_string
SHOW USER;
--The output should be USER is "HRAPP "
SELECT SYS_CONTEXT('USERENV','AUTHENTICATION_METHOD') FROM DUAL;
--The output should be "TOKEN_GLOBAL_PROXY"
SELECT SYS_CONTEXT('USERENV','PROXY_USER') FROM DUAL;
--The output should be "PETERFITCH_SCHEMA"
SELECT SYS_CONTEXT('USERENV','CURRENT_USER') FROM DUAL;
--The output should be "HRAPP"
For example:
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_EXTERNAL_AUTHENTICATION;
END;
/
2. If you also want to update access to IAM from the resource, in this case the Autonomous
Database instance, you may need to remove or modify the IAM group and the policies
you set up to allow access to IAM from the Autonomous Database instance.
Notes for Using Autonomous Database Tools with Identity and Access Management
(IAM) Authentication
Provides notes for using Autonomous Database tools with IAM authentication enabled.
• Oracle APEX is not supported for IAM users with Autonomous Database. See Create
Oracle APEX Workspaces in Autonomous Database for information on using regular
database users with Autonomous Database.
• Database Actions is not supported for IAM users with Autonomous Database. See
Provide Database Actions Access to Database Users for information on using regular
database users with Autonomous Database.
• Oracle Machine Learning Notebooks and other components are not supported for IAM
Authorized users with Autonomous Database. See Add Existing Database User Account
to Oracle Machine Learning Components for information on using regular database users
with Autonomous Database.
Azure AD users and applications can log in with Azure AD Single Sign On (SSO)
credentials to access the database. This is done with an Azure AD OAuth2 access
token that the user or application first requests from Azure AD. This OAuth2 access
token contains the user identity and database access information and is then sent to
the database. Refer to the Microsoft article Passwordless authentication
options for Azure Active Directory for information about configuring multi-factor and
passwordless authentication.
You can perform this integration in the following Oracle Database environments:
• On-premises Oracle Database release 19.18 and later
• Oracle Autonomous Database on Shared Exadata Infrastructure
• Oracle Autonomous Database on Dedicated Exadata Infrastructure
• Oracle Base Database Service
• Oracle Exadata Cloud Service (Oracle ExaCS)
The instructions for configuring Azure AD use the term "Oracle Database" to
encompass these environments.
This type of integration enables the Azure AD user to access an Oracle Autonomous
Database instance. Azure AD users and applications can log in with Azure AD Single
Sign On (SSO) credentials to get an Azure AD OAuth2 access token to send to the
database.
The Azure AD administrator creates and registers Oracle Autonomous Database with
Azure AD. Within Azure AD, this is called an app registration, which is short for
application registration. This is the digital information that Azure AD must know about
the software that is using Azure AD. The Azure AD administrator also creates
application (app) roles for the database app registration in Azure AD. App roles
connect Azure users, groups, and applications to database schemas and roles. The
Azure AD administrator assigns Azure AD users, groups, or applications to the app
roles. These app roles are mapped to a database global schema or a global role or to
both a schema and a role. An Azure AD user, group, or application that is assigned to
an app role will be mapped to a database global schema, global role, or to both a
schema and a role. An Oracle global schema can also be mapped exclusively to an
Azure AD user. An Azure AD guest user (non-organization user) or an Azure AD
service principal (application) can only be mapped to a database global schema
through an Azure AD app role. An Oracle global role can only be mapped from an
Azure app role and cannot be mapped from an Azure user.
Oracle Autonomous Database tools including Oracle APEX, Database Actions, Oracle
Graph Studio, and Oracle Database API for MongoDB are not compatible with using
Azure AD tokens to connect with the database.
Tools and applications that are updated to support Azure AD tokens can authenticate
users directly with Azure AD and pass the database access token to the Oracle
Autonomous Database instance. You can configure existing database tools such as
SQL*Plus to use an Azure AD token from a file location. In these cases, Azure AD
tokens can be retrieved using tools like Microsoft PowerShell or Azure CLI and put into
a file location. Azure AD OAuth2 database access tokens are issued with an
expiration time. The Oracle Database client driver ensures that the token is in a
valid format and that it has not expired before passing it to the database. The token is
scoped for the database, which means that there is information in the token about the
database where the token will be used. The app roles the Azure AD principal was
assigned to in the database Azure AD app registration are included as part of the
access token. The directory location for the Azure AD token should have only enough
permission for the user to write the token file and for the database client to retrieve it
(for example, read and write only by the user). Because the token allows access to the
database, it should be protected within the file system.
Azure AD users can request a token from Azure AD using a number of methods that open an
Azure login window where they enter their Azure AD credentials.
Oracle Autonomous Database accepts tokens representing the following Azure AD principals:
• Azure AD user, who is a registered user in the Azure AD tenancy
• Guest user, who is registered as a guest user in the Azure AD tenancy
• Service, which is the registered application connecting to the database as itself with the
client credential flow (connection pool use case)
Oracle Autonomous Database supports the following Azure AD authentication flows:
• Authorization code, most commonly used for human users (not applications) to
authenticate to Azure AD in a client environment with a browser
• Client credentials, which are for database applications that connect as themselves (and
not the end-user)
• On-Behalf-Of (OBO), where an application requests an access token on behalf of a
logged-in user to send to the database
• Resource owner password credential (ROPC), which is not recommended for production
use, but can be used in test environments where a pop-up browser user authentication
would be difficult to incorporate. ROPC needs the Azure AD user name and password
credential to be part of the token request call.
• Architecture of Oracle Database Integration with Microsoft Azure AD
Microsoft Azure Active Directory access tokens follow the OAuth 2.0 standard with
extensions.
• Azure AD Users Mapping to an Oracle Database Schema and Roles
Microsoft Azure users must be mapped to an Oracle Database schema and have the
necessary privileges (through roles) before being able to authenticate to the Oracle
Database instance.
• Use Cases for Connecting to an Oracle Database Using Azure AD
Oracle Database supports several use cases for connecting to the database.
The following diagram is a generalized flow diagram for the OAuth 2.0 standard, using the
OAuth2 token. See Authentication flow support in MSAL in the Microsoft Azure AD
documentation for more details about each supported flow.
[Figure: Azure AD authorization code flow among the database application or client,
Azure AD, and the Oracle Database instance. The numbered steps in the figure correspond
to the steps described below.]
The authorization code flow is an OAuth2 standard and is described in detail as part of
the standards. There are two steps in the flow. The first step authenticates the user
and retrieves the authorization code. The second step uses the authorization code to
get the database access token.
1. The Azure AD user requests access to the resource, the Oracle Database
instance.
2. The database client or application requests an authorization code from Azure AD.
3. Azure AD authenticates the Azure AD user and returns the authorization code.
4. The helper tool or application uses the authorization code with Azure AD to
exchange it for the OAuth2 token.
5. The database client sends the OAuth2 access token to the Oracle database. The
token includes the database app roles the user was assigned to in the Azure AD
app registration for the database.
6. The Oracle Database instance uses the Azure AD public key to verify that the
access token was created by Azure AD.
Both the database client and the database server must be registered with the app
registrations feature in the Azure Active Directory section of the Azure portal. The
database client must be registered with Azure AD app registration. Permission must
also be granted to allow the database client to get an access token for the database.
In Microsoft Azure, an Azure AD administrator can assign users, groups, and applications to
the database app roles.
Exclusively mapping an Azure AD user to a database schema requires the database
administrator to create a database schema when the Azure AD user joins the organization or
is authorized to the database. The database administrator must also modify the privileges
and roles that are granted to the database schema to align them with the tasks the Azure AD
user is assigned to. When the Azure AD user leaves the organization, the database
administrator must drop the database schema so that an unused account is not left on the
database. Using the database app roles enables the Azure AD administrator to control
access and roles by assigning users to app roles that are mapped to global schemas and
global roles. This way, user access to the database is managed by Azure AD administrators
and database administrators do not need to create, manage, and drop schemas for every
user.
An Azure AD user can be mapped to a database schema (user) either exclusively or through
an app role.
• Creating an exclusive mapping between an Azure AD user and an Oracle Database
schema. In this type of mapping, the database schema must be created for the Azure AD
user. Database privileges and roles that are needed by the Azure AD user must be
granted to the database schema. The database schema not only must be created when
the Azure AD user is authorized to the database, but the granted privileges and roles
must be modified as the Azure AD roles and tasks change. Finally, the database schema
must be dropped when the Azure AD user leaves the organization.
• Creating a shared mapping between an Azure AD app role and an Oracle Database
schema. This type of mapping, which is more common than exclusive mappings, is for
Azure AD users who have been assigned directly to the app role or who are members of an
Azure AD group that is assigned to the app role. The app role is mapped to an Oracle
Database schema (shared schema mapping). Shared schema mapping allows multiple
Azure AD users to share the same Oracle Database schema so a new database schema
is not required to be created every time a new user joins the organization. This
operational efficiency allows database administrators to focus on database application
maintenance, performance, and tuning tasks instead of configuring new users, updating
privileges and roles, and removing accounts.
In addition to database roles and privileges being granted directly to the mapped global
schema, additional roles and privileges can be granted through mapped global roles.
Different Azure AD users mapped to the same shared global schema may need different
privileges and roles. Azure app roles can be mapped to Oracle Database global roles. Azure
AD users who are assigned to the app role or are a member of an Azure AD group that is
assigned to the app role will be granted the Oracle Database global role when they access
the database.
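For example, sketches of the two mapping styles (all names are placeholders; AZURE_ROLE
and AZURE_USER are assumed here as the mapping attributes, analogous to the IAM
attributes shown earlier):

CREATE USER shared_hr_schema IDENTIFIED GLOBALLY AS 'AZURE_ROLE=HR_APP';
CREATE ROLE hr_analyst IDENTIFIED GLOBALLY AS 'AZURE_ROLE=HR_ANALYST';
CREATE USER peter_fitch IDENTIFIED GLOBALLY AS 'AZURE_USER=peter.fitch@example.com';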
The following diagram illustrates the different types of assignments and mappings that are
available.
1. In a browser, go to https://jwt.io/.
2. Copy and paste the token string into the Encoded field.
3. Check the Decoded field, which displays information about the token string.
Near or at the bottom of the field, you will see a claim named ver, which indicates
either of the following versions:
• "ver": "1.0"
• "ver": "2.0"
Related Topics
• Enabling Microsoft Azure AD v2 Access Tokens
To enable the Microsoft Azure AD v2 access token, you must configure it to use
the upn attribute from the Azure portal.
To enable Azure AD authentication, run DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION
with the type AZURE_AD and supply the tenant_id, application_id, and application_id_uri
parameters:
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
    type   => 'AZURE_AD',
    params => JSON_OBJECT('tenant_id'          VALUE 'tenant_id',
                          'application_id'     VALUE 'application_id',
                          'application_id_uri' VALUE 'application_id_uri'),
    force  => TRUE);
END;
/
For example:
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
    type   => 'AZURE_AD',
    params => JSON_OBJECT('tenant_id'          VALUE '29981886-6fb3-44e3-82',
                          'application_id'     VALUE '11aa1a11-aaa',
                          'application_id_uri' VALUE 'https://example.com/111aa1aa'),
    force  => TRUE);
END;
/
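As with OCI IAM, you can verify the provider with a query against V$PARAMETER (a sketch;
assumes read access to the parameter views):

SELECT name, value FROM v$parameter
 WHERE name = 'identity_provider_type';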
NAME VALUE
---------------------- --------
identity_provider_type AZURE_AD
2. In the Azure Active Directory admin center page, from the left navigation bar, select
Azure Active Directory.
3. In the MS - App registrations page, select App registrations from the left
navigation bar.
4. Select New registration.
The Register an application window appears.
5. In the Register an application page, enter the following Oracle Database instance
registration information:
• In the Name field, enter a name for the Oracle Database instance connection
(for example, Example Database).
• Under Supported account types, select the account type that matches your
use case.
– Accounts in this organizational directory only (tenant_name only -
Single tenant)
– Accounts in any organizational directory (Any Azure AD directory -
Multitenant)
– Accounts in any organizational directory (Any Azure AD directory -
Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
– Personal Microsoft accounts only
6. Bypass the Redirect URI (Optional) settings. You do not need to create a redirect
URI because Azure AD does not need one for the database server.
7. Click Register.
After you click Register, Azure AD displays the app registration's Overview pane, which
will show the Application (client) ID under Essentials. This value is a unique identifier for
the application in the Microsoft identity platform. Note the term Application refers to the
Oracle Database instance.
8. Register a scope for the database app registration.
A scope is a permission to access the database. Each database will need a scope so that
clients can establish a trust with the database by requesting permission to use the
database scope. This allows the database client to get access tokens for the database.
a. In the left navigation bar, select Expose an API.
b. Under Set the App ID URI, in the Application ID URI field, enter the app ID URI for
the database connection using the following format, and then click Save:
your_tenancy_url/application_(client)_id
In this specification:
• your_tenancy_url must include https as the prefix and the fully qualified domain
name of your Azure AD tenancy.
• application_(client)_id is the ID that was generated when you registered the
Oracle Database instance with Azure AD. It is displayed in the Overview pane of
the app registration.
For example:
https://sales_west.example.com/1aa11111-1a1z-1a11-1a1a-11aa11a1aa1a
• Scope name specifies a name for the scope. Enter the following name:
session:scope:connect
This name can be any text. However, a scope name must be provided.
You will need to use this scope name later when you give consent to the
database client application to access the database.
• Who can consent specifies the necessary permissions. Select Admins
and users, or for higher restrictions, Admins only.
• Admin consent display name describes the scope's purpose (for
example, Connect to Example Database), which only administrators can see.
• Admin consent description is a more detailed description of the scope's purpose
(for example, Connect to Example Database), which only administrators can see.
• User consent display name is a short description of the purpose of the scope
(for example, Connect to Example Database), which users can see if you
specify Admins and users in Who can consent.
• User consent description is a more detailed description of the purpose of the
scope (for example, Connect to Example Database), which users can see if you
specify Admins and users in Who can consent.
• State enables or disables the connection. Select Enabled.
After you complete these steps, you are ready to add one or more Azure app roles, and then
perform the mappings of Oracle schemas and roles.
Related Topics
• Quickstart: Register an application with the Microsoft identity platform
• Display name is the displayed name of the role (for example, HR App
Schema). You can include spaces in this name.
• Value is the actual name of the role (for example, HR_APP). Ensure that this
setting matches exactly the string that is referenced in the database mapping
to a schema or role. Do not include spaces in this name.
• Description provides a description of the purpose of this role.
• Do you want to enable this app role? enables you to activate the role.
6. Click Apply.
The app role appears in the App roles pane.
6. From this list, select the users and groups that you want to assign to the app role, and
then click Select.
7. In the Add assignment window, select Select a role to display a list of the app roles that
you have created.
8. Select the app role and then select Select.
9. Click Assign.
8. Select Grant admin consent for tenancy to grant consent for the tenancy users, then
select Yes in the confirmation dialog box.
Related Topics
• Configure the admin consent workflow
Note:
If Autonomous Database is using a private endpoint, you need to specify a host
value for the connect string parameter. Using an IP address for the host parameter
in the connect string will not work with DN matching and the Azure AD token will not
be sent to the database.
See Private Endpoints Configuration Examples on Autonomous Database for
configuration information on how to set the host parameter when using a private
endpoint.
See Configuring Azure AD Client Connections to the Oracle Database for more information.
When the Active Directory servers are on a private endpoint, see Prerequisites to Configure
CMU with Microsoft Active Directory on Autonomous Database, where you set the database
property ROUTE_OUTBOUND_CONNECTIONS.
Note:
See Use Azure Active Directory (Azure AD) with Autonomous Database for
information on using Azure Active Directory with Autonomous Database. The CMU
option supports Microsoft Active Directory servers but does not support the Azure
Active Directory service.
The integration of Autonomous Database with Centrally Managed Users (CMU) provides
integration with Microsoft Active Directory. CMU with Active Directory works by mapping
Oracle database global users and global roles to Microsoft Active Directory users and groups.
The following are required prerequisites to configure the connection from Autonomous
Database to Active Directory:
• You must have Microsoft Active Directory installed and configured. See AD DS Getting
Started for more information.
• You must create an Oracle service directory user in Active Directory. See Connecting to
Microsoft Active Directory for information on the Oracle service directory user account.
• An Active Directory system administrator must have installed Oracle password filter on
the Active Directory servers, and set up Active Directory groups with Active Directory
users to meet your requirements.
Note:
This is not required if you are using Kerberos authentication for CMU Active
Directory. See Kerberos Authentication for CMU with Microsoft Active Directory
for more information.
If you use password authentication with CMU Active Directory for Autonomous Database,
you must use the included utility opwdintg.exe to install the Oracle password filter on
Active Directory, extend the schema, and create three new ORA_VFR groups for three
types of password verifier generation. See Connecting to Microsoft Active Directory for
information on installing the Oracle password filter.
• You need the CMU configuration database wallet, cwallet.sso, and the CMU
configuration file, dsi.ora, to configure CMU for your Autonomous Database:
– If you have configured CMU for an on-premise database, you can obtain these
configuration files from your on-premise database server.
– If you have not configured CMU for an on-premise database, you need to create
these files. Then you upload the configuration files to the cloud to configure CMU on
your Autonomous Database instance. You can validate the wallet and the dsi.ora
by configuring CMU for an on-premise database and verifying that an Active
Directory user can successfully log on to the on-premise database with these
configuration files.
For details on the wallet file for CMU, see Create the Wallet for a Secure Connection and
Verify the Oracle Wallet.
For details on the dsi.ora file for CMU, see Creating the dsi.ora File.
For details on configuring Active Directory for CMU and troubleshooting CMU for
on-premise databases, see How To Configure Centrally Managed Users For
Database Release 18c or Later Releases (Doc ID 2462012.1).
• Port 636 of the Active Directory servers must be open to Autonomous Database in
Oracle Cloud Infrastructure. This allows Autonomous Database to access the
Active Directory servers.
• When the Active Directory servers are on a public endpoint:
– The Active Directory servers must be accessible from Autonomous Database
through the public internet.
– You can also extend your on-premise Active Directory to Oracle Cloud
Infrastructure where you can set up Read Only Domain Controllers (RODCs)
for the on-premise Active Directory. This allows you to use the RODCs in
Oracle Cloud Infrastructure to authenticate and authorize on-premise Active
Directory users for access to Autonomous Databases.
See Extend Active Directory integration in Hybrid Cloud for more information.
Note:
When you perform the configuration steps, connect to the database as the
ADMIN user.
For example:
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
    type   => 'CMU',
    params => JSON_OBJECT(
      'location_uri'    value 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
      'credential_name' value 'my_credential_name'));
END;
/
Oracle recommends that you store the CMU configuration files in a private bucket in your
Object Store.
In this example, namespace-string is the Oracle Cloud Infrastructure object storage
namespace and bucketname is the bucket name. See Understanding Object Storage
Namespaces for more information.
The credential_name you use in this step is the credentials to access the Object Store.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
If the location_uri is a pre-authenticated URL or a pre-signed URL, then supplying a
credential_name is not required.
The procedure creates a directory object named CMU_WALLET_DIR in your database and
copies the CMU configuration files from the Object Store location to the directory object.
This procedure also sets the database property CMU_WALLET to the value
'CMU_WALLET_DIR' and sets the LDAP_DIRECTORY_ACCESS parameter to PASSWORD
to enable access from the Autonomous Database instance to Active Directory.
4. After you enable CMU authentication, remove the CMU configuration files including the
database wallet cwallet.sso and the CMU configuration file dsi.ora from Object
Store. You can use local Object Store methods to remove these files or use
DBMS_CLOUD.DELETE_OBJECT to delete the files from Object Store.
5. When Active Directory servers are on a private endpoint, perform additional configuration
steps to provide access to the private endpoint.
a. Set the database property ROUTE_OUTBOUND_CONNECTIONS to the value
'PRIVATE_ENDPOINT'.
See Enhanced Security for Outbound Connections with Private Endpoints for more
information.
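For example, a statement of the following form sets the property (a sketch, run as the
ADMIN user):

ALTER DATABASE PROPERTY SET ROUTE_OUTBOUND_CONNECTIONS = 'PRIVATE_ENDPOINT';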
b. Validate that the CMU-AD configuration file dsi.ora includes host names. When
ROUTE_OUTBOUND_CONNECTIONS is set to 'PRIVATE_ENDPOINT', IP addresses cannot be
specified in dsi.ora.
Note for CMU with Active Directory on Autonomous Database:
• Only "password authentication" and Kerberos are supported for CMU with Autonomous
Database. When you are using CMU authentication with Autonomous Database, other
CMU authentication methods such as PKI are not supported.
See Configuring Centrally Managed Users with Microsoft Active Directory for more
information on configuring CMU with Microsoft Active Directory.
Note:
When implementing both the Kerberos authentication and CMU-AD for
authorization, Oracle recommends implementing Kerberos authentication
first, and then adding CMU-AD authorization.
a. Obtain the Kerberos configuration file krb.conf and the service key table file
v5srvtab for your environment. For more information on these files and the steps
required to obtain them, see Configuring Kerberos Authentication.
b. Copy the Kerberos configuration files krb.conf and v5srvtab to a bucket in your
Object Store.
This step differs depending on the Object Store you use.
If you are using Oracle Cloud Infrastructure Object Store, see Putting Data into
Object Storage for details on uploading files.
c. Run DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION to enable Kerberos
external authentication.
For example:
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
    type   => 'KERBEROS',
    params => JSON_OBJECT(
      'location_uri'    value 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
      'credential_name' value 'my_credential_name'));
END;
/
Note:
Oracle recommends that you store the Kerberos configuration files in a
private bucket in your Object Store.
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
    type   => 'CMU',
    params => JSON_OBJECT(
      'location_uri'    value 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
      'credential_name' value 'my_credential_name'));
END;
/
Oracle recommends that you store the CMU configuration files in a private
bucket in your Object Store.
In this example, namespace-string is the Oracle Cloud Infrastructure object
storage namespace and bucketname is the bucket name. See Understanding
Object Storage Namespaces for more information.
The credential_name you use in this step is the credentials to access the
Object Store.
3. After completing Step 1 and Step 2, verify that the configuration of Kerberos
authentication with CMU-AD is complete.
a. Log in to the Autonomous Database instance as an Active Directory user so that the
SYS_CONTEXT information is populated in your USERENV.
b. Query the USERENV authentication method with SYS_CONTEXT.
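For example, using the same query shown earlier in this chapter:

SELECT SYS_CONTEXT('USERENV','AUTHENTICATION_METHOD') FROM DUAL;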
SYS_CONTEXT('USERENV','AUTHENTICATION_METHOD')
----------------------------------------------
KERBEROS_GLOBAL
When you have Kerberos authentication configured and enabled without CMU-AD, this
query returns: KERBEROS. See Configure Kerberos Authentication with Autonomous
Database for more information.
When you have CMU-AD authentication configured without Kerberos, this query returns:
PASSWORD_GLOBAL. See Prerequisites to Configure CMU with Microsoft Active Directory on
Autonomous Database for more information.
Notes for using Kerberos authentication with CMU-AD:
• You do not need to add the password filter when using Kerberos authentication with
CMU-AD. See Prerequisites to Configure CMU with Microsoft Active Directory on
Autonomous Database for more information.
• Adding or removing Active Directory users is supported, in the same manner as with
CMU with Active Directory when you are using password authentication. See Add
Microsoft Active Directory Users on Autonomous Database for more information.
• The existing restrictions about authenticating against the Autonomous Database Built-in
Tools with CMU with Active Directory password also apply to CMU with Active Directory
with Kerberos authentication. See Tools Restrictions with Active Directory on
Autonomous Database for more information.
• Use DBMS_CLOUD_ADMIN.DISABLE_EXTERNAL_AUTHENTICATION to disable CMU-AD with
Kerberos authentication. See DISABLE_EXTERNAL_AUTHENTICATION Procedure for
more information.
• When the CMU-AD servers are on a private endpoint, to use CMU-AD with Kerberos
authentication the server host name used for generating the key tab must be set to the
For example:
CREATE ROLE widget_sales_role IDENTIFIED GLOBALLY AS
  'CN=widget_sales_group,OU=sales,DC=production,DC=example,DC=com';
In this example all members of the widget_sales_group are authorized with the
database role widget_sales_role when they log in to the database.
3. Use GRANT statements to grant the required privileges or other roles to the global
role.
For example:
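A sketch using the role from the surrounding example:

GRANT DWROLE TO widget_sales_role;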
DWROLE is a predefined role that has common privileges defined. See Manage User
Privileges on Autonomous Database - Connecting with a Client Tool for
information on setting common privileges for Autonomous Database users.
4. If you want an existing database role to be associated with an Active Directory group,
then use the ALTER ROLE statement to alter the existing database role to map it to an
Active Directory group. Use the following syntax to alter an existing database role to
map it to an Active Directory group:
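A sketch of the form (existing_role and the DN values are placeholders):

ALTER ROLE existing_role IDENTIFIED GLOBALLY AS
  'CN=AD_group_name,OU=org_unit,DC=production,DC=example,DC=com';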
5. If you want to create additional global role mappings for other Active Directory groups,
follow these steps for each Active Directory group.
See Configuring Authorization for Centrally Managed Users for more information on
configuring roles with Microsoft Active Directory.
The integration of Autonomous Database with Active Directory works by mapping Microsoft
Active Directory users and groups directly to Oracle database global users and global roles.
To add global users for Active Directory groups or users on Autonomous Database:
1. Log in as the ADMIN user to the database that is configured to use Active Directory (the
ADMIN user has the required CREATE USER and ALTER USER system privileges that you
need for these steps).
2. Set database authorization for Autonomous Database users with CREATE USER or ALTER
USER statements and include the IDENTIFIED GLOBALLY AS clause, specifying the DN of
an Active Directory user or group.
Use the following syntax to map a directory user to a database global user:
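A sketch of the form (the DN values are placeholders):

CREATE USER global_user IDENTIFIED GLOBALLY AS
  'CN=AD_user_name,OU=org_unit,DC=production,DC=example,DC=com';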
Use the following syntax to map a directory group to a database global user:
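A sketch using the group referenced in the text that follows:

CREATE USER widget_sales IDENTIFIED GLOBALLY AS
  'CN=widget_sales_group,OU=sales,DC=production,DC=example,DC=com';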
This creates a shared global user mapping. The mapping, with global user widget_sales,
is effective for all users in the Active Directory group. Thus, anyone in the
widget_sales_group can log in to the database using their Active Directory credentials
(through the shared mapping of the widget_sales global user).
3. If you want Active Directory users to use an existing database user, own its schema, and
own its existing data, then use ALTER USER to alter an existing database user to map the
user to an Active Directory group or user.
• Use the following syntax to alter an existing database user to map it to an Active
Directory user:
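A sketch of the form (existing_user and the DN values are placeholders):

ALTER USER existing_user IDENTIFIED GLOBALLY AS
  'CN=AD_user_name,OU=org_unit,DC=production,DC=example,DC=com';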
4. If you want to create additional global user mappings for other Active Directory
groups or users, follow these steps for each Active Directory group or user.
See Configuring Authorization for Centrally Managed Users for more information on
configuring users with Microsoft Active Directory.
Note:
Do not log in using a Global User name. Global User names do not have a
password and connecting with a Global User name will not be successful.
You must have a global user mapping in your Autonomous Database in order
to log in to the database. You cannot log in to the database with only global
role mappings.
CONNECT "AD_DOMAIN\AD_USERNAME"/
AD_USER_PASSWORD@TNS_ALIAS_OF_THE_AUTONOMOUS_DATABASE;
For example:
CONNECT "production\pfitch"/password@adbname_medium;
You need to include double quotes when the Active Directory domain is included along
with the username, as with this example: "production\pfitch".
In this example, the Active Directory username is pfitch in domain production. The
Active Directory user is a member of widget_sales_group group which is identified by its
DN 'CN=widget_sales_group,OU=sales,DC=production,DC=example,DC=com'.
After configuring CMU with Active Directory on Autonomous Database and setting up Active
Directory authorization, with global roles and global users, you can connect to your database
using any of the connection methods described in Connect to Autonomous Database. When
you connect, if you want to use an Active Directory user then use Active Directory user
credentials. For example, provide a username in this form, "AD_DOMAIN\AD_USERNAME"
(double quotes must be included), and use your AD_USER_PASSWORD for the password.
If your Autonomous Database instance is in Restricted Mode, this mode only allows users
with the RESTRICTED SESSION privilege to connect to the database. The ADMIN user has this
privilege. You can use restricted access mode to perform administrative tasks such as
indexing, data loads, or other planned activities. See Change Autonomous Database
Operation Mode to Read/Write, Read-Only, or Restricted for more information.
CONNECT "production\pfitch"/password@exampleadb_medium;
SHOW USER;
USER is "WIDGET_SALES"
You can also display the DN (Distinguished Name) of the Active Directory user. For
example, you can verify this centrally managed user's enterprise identity.
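A standard SYS_CONTEXT query returns it:

SELECT SYS_CONTEXT('USERENV','ENTERPRISE_IDENTITY') FROM DUAL;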
SYS_CONTEXT('USERENV','ENTERPRISE_IDENTITY')
----------------------------------------------------------------------
cn=Peter Fitch,ou=sales,dc=production,dc=examplecorp,dc=com
For example, the Active Directory authenticated user identity is captured and audited
when the user logs on to the database.
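The following query displays it:

SELECT SYS_CONTEXT('USERENV','AUTHENTICATED_IDENTITY') FROM DUAL;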
SYS_CONTEXT('USERENV','AUTHENTICATED_IDENTITY')
----------------------------------------------------------------------
production\pfitch
See Verifying the Centrally Managed User Logon Information for more information.
Note:
To run this procedure you must be logged in as ADMIN user or have the
EXECUTE privilege on DBMS_CLOUD_ADMIN.
For example:
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_EXTERNAL_AUTHENTICATION;
END;
/
REALM: Any realm supported by your KDC. REALM must always be in uppercase.
To enable Kerberos authentication for your Autonomous Database, you must keep your
Kerberos configuration files (krb.conf) and service key table file (v5srvtab) ready. For
more information on these files and the steps to obtain them, see Configuring Kerberos
Authentication.
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
    type   => 'KERBEROS',
    params => JSON_OBJECT(
      'location_uri'    value 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
      'credential_name' value 'my_credential_name'));
END;
/
Note:
Oracle recommends that you store the Kerberos configuration files in a
private bucket in your Object Store.
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
    type   => 'KERBEROS',
    params => JSON_OBJECT(
      'location_uri'          value 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
      'credential_name'       value 'my_credential_name',
      'kerberos_service_name' value 'oracle'));
END;
/
3. After you enable Kerberos authentication, remove the configuration files krb.conf and
v5srvtab from Object Store. You can use local Object Store methods to remove these
files or use DBMS_CLOUD.DELETE_OBJECT to delete the files from Object Store.
See Navigate to Oracle Cloud Infrastructure Object Storage and Create Bucket for more
information on Object Storage.
See ENABLE_EXTERNAL_AUTHENTICATION Procedure for more information.
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_EXTERNAL_AUTHENTICATION;
END;
/
This disables the Kerberos authentication (or any external authentication scheme
specified) for Oracle Autonomous Database.
See DISABLE_EXTERNAL_AUTHENTICATION Procedure for more information.
Manage Credentials
You can create credentials, list credentials, or delete credentials in your Autonomous
Database.
• Create Credentials to Access Cloud Services
To access services in the Cloud, such as Cloud Object Store, you first need to
create credentials in your Autonomous Database.
• List Credentials
DBMS_CLOUD provides the ability to store credentials using the procedure
DBMS_CLOUD.CREATE_CREDENTIAL. You can list credentials from the view
ALL_CREDENTIALS.
• Delete Credentials
DBMS_CLOUD provides the ability to store credentials using the procedure
DBMS_CLOUD.CREATE_CREDENTIAL. You can remove credentials with
DBMS_CLOUD.DROP_CREDENTIAL.
1. Store your Cloud Object Storage credentials using DBMS_CLOUD.CREATE_CREDENTIAL. For
example:
SET DEFINE OFF
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'user_name@example.com',
    password        => 'password');
END;
/
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name for all data loads.
For detailed information about the parameters, see CREATE_CREDENTIAL Procedure.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
Note:
Some tools like SQL*Plus and SQL Developer use the ampersand character (&)
as a special character. If you have the ampersand character in your password
use the SET DEFINE OFF command in those tools as shown in the example to
disable the special character and get the credential created properly.
2. With the credential you created in Step 1, you can access Object Store or other cloud
resources from Autonomous Database using a procedure such as
DBMS_CLOUD.COPY_DATA, DBMS_CLOUD.EXPORT_DATA, DBMS_CLOUD_PIPELINE if you are
using a Data Pipeline, or other procedures that require DBMS_CLOUD credentials.
List Credentials
DBMS_CLOUD provides the ability to store credentials using the procedure
DBMS_CLOUD.CREATE_CREDENTIAL. You can list credentials from the view ALL_CREDENTIALS.
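For example, the following query lists the stored credentials:

SELECT credential_name, username, comments
  FROM all_credentials;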
CREDENTIAL_NAME  USERNAME               COMMENTS
---------------  ---------------------  --------------------------------------------------------
ADB_TOKEN        user_name@example.com  {"comments":"Created via DBMS_CLOUD.create_credential"}
DEF_CRED_NAME    user_name@example.com  {"comments":"Created via DBMS_CLOUD.create_credential"}
Delete Credentials
DBMS_CLOUD provides the ability to store credentials using the procedure
DBMS_CLOUD.CREATE_CREDENTIAL. You can remove credentials with
DBMS_CLOUD.DROP_CREDENTIAL.
For example, to remove the credential named DEF_CRED_NAME, run the following
command:
BEGIN
  DBMS_CLOUD.DROP_CREDENTIAL('DEF_CRED_NAME');
END;
/
For more information about the DBMS_CLOUD procedures and parameters, see
DBMS_CLOUD Subprograms and REST APIs.
Note:
Operations that use Oracle Data Pump support vault secret credentials (for
example impdp and expdp). Oracle Data Pump support with vault secret
credentials is limited to Oracle Cloud Infrastructure Swift URIs and Oracle
Cloud Infrastructure Native URIs. See DBMS_CLOUD URI Formats for more
information.
Topics
Prerequisites to Create Vault Secret Credentials with Oracle Cloud Infrastructure Vault
Describes the required prerequisite steps to use vault secret credentials with Oracle Cloud
Infrastructure Vault secrets.
To create vault secret credentials where the secret is stored in Oracle Cloud Infrastructure
Vault, first perform the required prerequisites.
1. Create a vault and create a secret in the vault with Oracle Cloud Infrastructure Vault.
For more information, see the instructions for creating a vault and a secret, Managing
Vaults and Overview of Key Management.
2. Set up a dynamic group to provide access to the secret in the Oracle Cloud Infrastructure
Vault.
Create a dynamic group for the Autonomous Database instance where you want to
create a vault secret credential:
a. In the Oracle Cloud Infrastructure console click Identity & Security.
b. Under Identity click Domains and select an identity domain (or create a new identity
domain).
c. Under Identity domain, click Dynamic groups.
d. Click Create dynamic group and enter a Name, a Description, and a rule.
For example, use a rule that matches your Autonomous Database instance or its
compartment:
resource.id = 'your_Autonomous_Database_instance_OCID'
resource.compartment.id = 'your_Compartment_OCID'
e. Click Create.
3. Write policy statements for the dynamic group to enable access to Oracle Cloud
Infrastructure resources (secrets).
a. In the Oracle Cloud Infrastructure Console click Identity & Security and
click Policies.
b. To write policies for the dynamic group you created in the previous step, click
Create Policy, and enter a Name and a Description.
c. Use the Show manual editor option of Policy Builder to create a policy.
For example, to allow access to the dynamic group to read a specific secret in
a compartment:
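A sketch of such a statement (the dynamic group name, compartment, and secret OCID
are placeholders):

Allow dynamic-group your_dynamic_group to read secret-bundles in compartment your_compartment where target.secret.id = 'your_secret_OCID'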
For example, to allow access to the dynamic group to read all secrets in a
compartment:
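A sketch, again with placeholder names:

Allow dynamic-group your_dynamic_group to read secret-bundles in compartment your_compartment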
This allows you to store a secret in Oracle Cloud Infrastructure Vault and use the secret with
the credential you create to access cloud resources or to access other databases.
To create vault secret credentials where the secret is stored in Oracle Cloud Infrastructure
Vault:
1. Enable resource principal authentication to provide access to a secret in the Oracle
Cloud Infrastructure Vault.
See Enable Resource Principal to Access Oracle Cloud Infrastructure Resources for
more information.
2. Create a dynamic group and define policies to allow your Autonomous Database to
access secrets in an Oracle Cloud Infrastructure Vault.
See Prerequisites to Create Vault Secret Credentials with Oracle Cloud Infrastructure
Vault for more information.
3. Use DBMS_CLOUD.CREATE_CREDENTIAL to create a vault secret credential.
For example:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OCI_SECRET_CRED',
    params => JSON_OBJECT(
      'username'  value 'SCOTT',
      'secret_id' value 'ocid1.vaultsecret.oc1.iad.example..aaaaaaaauq5ok5nq3bf2vwetkpqsoa'));
END;
/
Where:
• username: is the username of the original credential. It can be the username of any
type of username/password credential such as the username of an OCI Swift user,
the username required to access a database with a database link, and so on.
• secret_id: is the vault secret ID. For example, when you store the password
mysecret in a secret in Oracle Cloud Infrastructure Vault, the secret_id value is the
vault secret OCID.
To create a vault secret credential you must have EXECUTE privilege on the DBMS_CLOUD
package.
See CREATE_CREDENTIAL Procedure for more information.
4. Use the credential to access a cloud resource.
For example:
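A minimal sketch, assuming an existing bucket (the namespace and bucket names are placeholders), lists objects with the new credential:

SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
    'OCI_SECRET_CRED',
    'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/');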
Note:
Every 12 hours the secret (password) is refreshed from the content in the
Oracle Cloud Infrastructure Vault. If you change the secret value in the
Oracle Cloud Infrastructure Vault, it can take up to 12 hours for the
Autonomous Database instance to pick up the latest secret value.
Create Vault Secret Credentials with Azure Key Vault

To create vault secret credentials where the secret is stored in Azure Key Vault, first enable Azure service principal authentication and grant the service principal access to the secret with a Key Vault access policy:
d. Under Principal, enter the name of the service principal in the search field and select
the appropriate result.
e. Click Next.
f. Under Review + create, review the access policy changes and click Create to save
the access policy.
g. Back on the Access policies page, verify that your access policy is listed.
See Assign a Key Vault access policy for more information.
BEGIN DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'AZURE_SECRET_CRED',
params => JSON_OBJECT(
'username' value 'azure_user',
'secret_id' value 'sales-secret',
'azure_vault_name' value 'azure_keyvault_name' ));
END;
/
Where:
• username: is the username of the original credential. It can be the username of any
type of username/password credential.
• secret_id: is the secret name.
• azure_vault_name: is the name of the vault where the secret is located.
To create a vault secret credential you must have EXECUTE privilege on the DBMS_CLOUD
package.
See CREATE_CREDENTIAL Procedure for more information.
3. Use the credential to access a cloud resource.
For example:
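A minimal sketch (the storage account and container names are placeholders) lists objects with the new credential:

SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
    'AZURE_SECRET_CRED',
    'https://azurestorageaccount.blob.core.windows.net/containername/');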
Note:
Every 12 hours the secret (password) is refreshed from the content in the
Azure Key Vault. If you change the secret value in the Azure Key Vault, it can
take up to 12 hours for the Autonomous Database instance to pick up the
latest secret value.
Create Vault Secret Credentials with AWS Secrets Manager

To create vault secret credentials where the secret is stored in AWS Secrets Manager, first enable AWS principal authentication with DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH. For example:

BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
    provider => 'AWS',
    username => 'adb_user',
    params => JSON_OBJECT('aws_role_arn' value 'arn:aws:iam::123456:role/AWS_ROLE_ARN'));
END;
/

Next, use DBMS_CLOUD.CREATE_CREDENTIAL to create a vault secret credential. For example:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'AWS_SECRET_CRED',
    params => JSON_OBJECT(
      'username' value 'access_key',
      'secret_id' value 'arn:aws:secretsmanager:region:account-ID:secret:secret_name' ));
END;
/
Where:
• username: is the username of the original credential. It can be the username of
any type of username/password credential.
• secret_id: is the vault secret AWS ARN.
To create a vault secret credential you must have EXECUTE privilege on the
DBMS_CLOUD package.
See CREATE_CREDENTIAL Procedure for more information.
3. Use the credential to access a cloud resource.
For example:
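A minimal sketch (the bucket name is a placeholder) lists objects in an S3 bucket with the new credential:

SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
    'AWS_SECRET_CRED',
    'https://aws-bucket-01.s3.amazonaws.com/');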
Note:
Every 12 hours the secret (password) is refreshed from the content in the
AWS Secrets Manager. If you change the secret value in the AWS Secrets
Manager, it can take up to 12 hours for the Autonomous Database instance
to pick up the latest secret value.
Create Vault Secret Credentials with GCP Secret Manager

1. Create a secret in GCP Secret Manager.
See Secret Manager and Create and access a secret using Secret Manager for more
information.
2. Enable Google Service Account authentication to provide access to GCP Secret
Manager.
On the Google Cloud console, grant the principal authentication credential read access to the secret.
a. Go to the Secret Manager page in the Google Cloud console.
b. On the Secret Manager page, click the checkbox next to the name of the secret.
c. If it is not already open, click Show Info Panel to open the panel.
d. In the info panel, click Add Principal.
e. In the New principals text area, enter the service account name to add.
f. In the Select a role dropdown, choose Secret Manager and then Secret Manager
Secret Accessor.
See Enable Google Service Account and Find the GCP Service Account Name and
Manage access to secrets for more information.
3. Use DBMS_CLOUD.CREATE_CREDENTIAL to create a vault secret credential. For example:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'GCP_SECRET_CRED',
    params => JSON_OBJECT(
      'username' value 'gcp_user1',
      'secret_id' value 'my-secret',
      'gcp_project_id' value 'my-sample-project-191923' ));
END;
/
Where:
• username: is the username of the original credential. It can be the username of any type of username/password credential.
• secret_id: is the secret name.
• gcp_project_id: is the ID of the Google Cloud Platform project where the secret resides.
Note:
Every 12 hours the secret (password) is refreshed from the content in the
GCP Secret Manager. If you change the secret value in the GCP Secret
Manager, it can take up to 12 hours for the Autonomous Database instance
to pick up the latest secret value.
You can refresh a vault secret credential at any time with DBMS_CLOUD.REFRESH_VAULT_CREDENTIAL. This immediately refreshes the secret value from the vault. For example:

BEGIN
  DBMS_CLOUD.REFRESH_VAULT_CREDENTIAL(
    credential_name => 'VAULT_SECRET_CRED');
END;
/
1. Use DBMS_CLOUD.UPDATE_CREDENTIAL to update a vault secret credential. For example, to update the secret_id attribute:

BEGIN
  DBMS_CLOUD.UPDATE_CREDENTIAL(
    credential_name => 'VAULT_SECRET_CRED',
    attribute => 'secret_id',
    value => 'new_secret_id'
  );
END;
/
To update a vault secret credential you must have EXECUTE privilege on the DBMS_CLOUD
package.
See UPDATE_CREDENTIAL Procedure for more information.
2. Use the updated credential to access a cloud resource.
For example:
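A minimal sketch (the namespace and bucket names are placeholders) lists objects with the updated credential:

SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
    'VAULT_SECRET_CRED',
    'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/');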
Enable Resource Principal to Access Oracle Cloud Infrastructure Resources

1. Create an Oracle Cloud Infrastructure dynamic group for the Autonomous Database instance, with a matching rule such as one of the following:
resource.id = '<your_Autonomous_Database_instance_OCID>'

ALL {resource.compartment.id = '<your_Compartment_OCID>'}
See Managing Dynamic Groups for more information on creating a dynamic group
and creating rules to add resources to the group.
2. Write policy statements for the dynamic group to enable access to Oracle Cloud
Infrastructure resources.
a. In the Oracle Cloud Infrastructure console click Identity and Security and
click Policies.
b. To write policies for a dynamic group, click Create Policy, and enter a Name
and a Description.
c. Use the Policy Builder to create a policy.
For example, to create a policy that allows the dynamic group to manage buckets and objects in Object Store in a tenancy:
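Policy statements of the following form grant this access (the dynamic group name is a placeholder):

Allow dynamic-group adb_dyn_group to manage buckets in tenancy
Allow dynamic-group adb_dyn_group to manage objects in tenancy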
d. Click Create.
See Managing Policies for more information on policies.
Note:
The resource principal token is cached for two hours. Therefore, if you
change the policy or the dynamic group, you have to wait for two hours to
see the effect of your changes.
To enable resource principal for the ADMIN user, run:

EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();

To enable resource principal for a specified database user, and to allow that user to enable resource principal for other users, include the username and grant_option parameters. For example:

BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(
    username => 'adb_user',
    grant_option => TRUE);
END;
/
After you run this command, adb_user can enable resource principal for another user. For
example, if you connect as adb_user, you can run the following command:
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(
    username => 'adb_user2');
END;
/
To verify that the resource principal credential is enabled, query the DBA_CREDENTIALS view. For example:

SELECT owner, credential_name FROM dba_credentials
   WHERE credential_name = 'OCI$RESOURCE_PRINCIPAL' AND owner = 'ADMIN';

OWNER CREDENTIAL_NAME
----- ----------------------
ADMIN OCI$RESOURCE_PRINCIPAL

To disable resource principal for the ADMIN user, run:

EXEC DBMS_CLOUD_ADMIN.DISABLE_RESOURCE_PRINCIPAL();

After you disable resource principal, the query on DBA_CREDENTIALS returns:

No rows selected
To remove access to the resource principal credential for a specified database user,
include the username parameter. This denies the specified user access to the
OCI$RESOURCE_PRINCIPAL credential.
For example:
BEGIN
  DBMS_CLOUD_ADMIN.DISABLE_RESOURCE_PRINCIPAL(
    username => 'adb_user');
END;
/
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name => 'CHANNELS',
    credential_name => 'OCI$RESOURCE_PRINCIPAL',
    file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
    format => json_object('delimiter' value ',')
  );
END;
/
If you compare the steps required to access Object Storage, as shown in Create Credentials and Copy Data into an Existing Table, notice that Step 1, creating credentials, is not required when you use resource principal because you are using the system-defined OCI$RESOURCE_PRINCIPAL credential.
Use Amazon Resource Names (ARNs) to Access AWS Resources

You can use Amazon Resource Name (ARN) based credentials to access AWS resources from Autonomous Database.
In AWS, the role ARN is the identifier for the provided access and can be viewed on the AWS
console. For added security, when the AWS administrator configures the role, policies, and
trust relationship for the AWS account, they must also configure an External ID in the role's
trust relationship.
The External ID provides additional protection for assuming roles. The AWS administrator configures the External ID as one of the following, based on the Autonomous Database instance:
• The compartment OCID
• The database OCID
• The tenancy OCID
On AWS, the role can only be assumed by trusted users that are identified by the External ID
included in the request URL, where the supplied External ID in the request matches the
External ID configured in the role's trust relationship.
Note:
Setting the External ID is required for security.
(Figure: the AWS administrator configures the AWS policy and role, which the database user and DBMS_CLOUD then use to access AWS resources.)
Note:
Depending on your existing AWS configuration and the External ID you use,
you do not need to create a new role and policy for each Autonomous
Database instance. If you already have an AWS role containing the
necessary policy to access a resource, for example to access S3 cloud
storage, you can modify the trust relationship to include the details in Step 3.
Likewise, if you already have a role with the necessary trust relationship, you
can use that role to access all of your databases in an OCI compartment or
tenancy if you use an external ID that specifies the compartment OCID or
tenancy OCID.
From the AWS Management Console or using the APIs, an AWS administrator
performs the following steps:
1. Create a policy. In the policy you specify permissions for accessing AWS resources such
as S3 buckets.
See Creating an IAM policy to access Amazon S3 resources for more information.
2. Create a role and attach the policy to the role.
a. Access the AWS Management Console and choose Identity and Access
Management (IAM).
b. Click Create role.
When you use the database OCID as the External ID, the policy's trust relationship
only trusts the Autonomous Database instance specified with the OCID. If you use a
compartment OCID, the policy's trust relationship trusts all the Autonomous
Database instances in the compartment and you can use the same role ARN to grant
access to AWS resources to any Autonomous Database in the specified
compartment. Likewise, if you use the tenancy OCID, you can use the same role
ARN to grant access to AWS resources to any Autonomous Database in the
specified tenancy.
Previously in Step 2 you set the trust relationship External ID to the temporary value
0000.
On AWS you configure the trust relationship External ID value to match one of the
following:
• When the external_id_type is database_ocid, on AWS you configure the role's trust relationship External ID to be the Database OCID.
The Database OCID is available by running the following query:

SELECT cloud_identity FROM v$pdbs;

The DATABASE_OCID value is included in the CLOUD_IDENTITY information this query returns.
See How to use trust policies with IAM role and How to use an external ID when
granting access to your AWS resources to a third party for more information.
After the ARN role configuration is finished, you can enable ARN on the instance. See
Perform Autonomous Database Prerequisites to Use Amazon ARNs for more
information.
To enable AWS principal authentication for a specified user, run DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH with the provider value AWS. For example:

BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
    provider => 'AWS',
    username => 'adb_user',
    params => JSON_OBJECT(
      'aws_role_arn' value 'arn:aws:iam::123456:role/AWS_ROLE_ARN'));
END;
/
If you want the specified user to have privileges to enable ARN credentials for
other users, set the params parameter grant_option to TRUE.
For example:
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
    provider => 'AWS',
    username => 'adb_user',
    params => JSON_OBJECT(
      'aws_role_arn' value 'arn:aws:iam::123456:role/AWS_ROLE_ARN',
      'grant_option' value TRUE ));
END;
/
After you run this command, adb_user has privileges to enable ARN credentials for other
users.
For example, if you connect as adb_user, you can run the following command:
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
    provider => 'AWS',
    username => 'adb_user2');
END;
/
To obtain the AWS user ARN for your Autonomous Database instance, query the CLOUD_INTEGRATIONS view. For example:

SELECT param_value FROM cloud_integrations
   WHERE param_name = 'aws_user_arn';

PARAM_VALUE
--------------------------------------------
arn:aws:iam::account-ID:user/username
The view CLOUD_INTEGRATIONS is available to the ADMIN user or to a user with DWROLE
privileges.
The AWS administrator uses the aws_user_arn value when configuring the AWS role's
trust relationship with the role and policies on the AWS system. Providing this value
grants permission on the AWS side for DBMS_CLOUD to access AWS resources.
After you enable ARN on the Autonomous Database instance by running
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH, the credential named AWS$ARN is available to
use with any DBMS_CLOUD API that takes a credential as the input. In addition to the credential named AWS$ARN, you can also create additional credentials with ARN parameters to access AWS resources. See Create Credentials with ARN Parameters to Access AWS Resources for more information.
1. Create credentials with ARN parameters using DBMS_CLOUD.CREATE_CREDENTIAL. For example:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_ARN',
    params => JSON_OBJECT(
      'aws_role_arn' value 'arn:aws:iam::123456:role/AWS_ROLE_ARN',
      'external_id_type' value 'database_ocid')
  );
END;
/
This operation creates the credentials in the database in an encrypted format. You
can use any name for the credential name.
For detailed information about the parameters, see CREATE_CREDENTIAL
Procedure.
2. Use a DBMS_CLOUD procedure to access an Amazon resource with the ARN
credentials.
For example, use DBMS_CLOUD.LIST_OBJECTS.
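A minimal sketch (the bucket name is a placeholder):

SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
    'DEF_CRED_ARN',
    'https://aws-bucket-01.s3.amazonaws.com/');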
1. Use DBMS_CLOUD.UPDATE_CREDENTIAL to update the aws_role_arn attribute of an ARN based credential. For example:

BEGIN
  DBMS_CLOUD.UPDATE_CREDENTIAL(
    credential_name => 'DEF_CRED_ARN',
    attribute => 'aws_role_arn',
    value => 'new_ARN_value');
END;
/

This updates the aws_role_arn attribute to the new value new_ARN_value for the credential named DEF_CRED_ARN.
2. Use DBMS_CLOUD.UPDATE_CREDENTIAL to update the external_id_type attribute value of an ARN based credential. For example:

BEGIN
  DBMS_CLOUD.UPDATE_CREDENTIAL(
    credential_name => 'DEF_CRED_ARN',
    attribute => 'external_id_type',
    value => 'compartment_ocid');
END;
/
Note:
To use Autonomous Database with Azure service principal authentication you need
a Microsoft Azure account. See Microsoft Azure for details.
For example, to enable Azure service principal authentication for a specified user:

BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
    provider => 'AZURE',
    username => 'adb_user',
    params => JSON_OBJECT('azure_tenantid' value 'azure_tenantID'));
END;
/

If you want the specified user to have privileges to enable Azure service principal authentication for other users, set the params parameter grant_option to TRUE. For example:
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider => 'AZURE',
username => 'adb_user',
params => JSON_OBJECT('grant_option' value TRUE,
'azure_tenantid' value
'azure_tenantID'));
END;
/
After you run this command, adb_user can enable Azure service principal for
another user. For example, if you connect as adb_user, you can run the following
command:
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider => 'AZURE',
username => 'adb_user2');
END;
/
1. Query the CLOUD_INTEGRATIONS view to obtain the Azure consent URL and application name. For example:

SELECT param_name, param_value FROM cloud_integrations;

PARAM_NAME         PARAM_VALUE
-----------------  --------------------------------------------------------------------
azure_tenantid     29981886-6fb3-44e3-82ab-d870b0e8e7eb
azure_consent_url  https://login.microsoftonline.com/f8cdef31-91255a/oauth2/v2.0/authorize?client_id=d66f1b5-1250d5445c0b&response_type=code&scope=User.read
azure_app_name     ADBS_APP_OCID1.AUTONOMOUSDATABASE.REGION1.SEA.ANZWKLJSZLYNB3AAWLYL3JVC4ICEXLB3ZG6WTCX735JSSY2NRHOBU4DZOOVA
The view CLOUD_INTEGRATIONS is available to the ADMIN user or to a user with the DWROLE role.
2. In a browser, open the Azure consent URL, specified by the azure_consent_url
parameter.
For example, copy and enter the URL in your browser:
https://login.microsoftonline.com/f8cdef31-91255a/oauth2/v2.0/authorize?
client_id=d66f1b5-1250d5445c0b&response_type=code&scope=User.read
The Permissions requested page opens and shows a consent request, similar to the
following:
Note:
To work with Azure Blob Storage, you need an Azure storage account. If
you do not have an Azure storage account, create a storage account.
See Create a storage account for more information.
a. On the Microsoft Azure console, under Azure services, select Storage accounts.
b. Under Storage accounts, click the storage account you want to grant service
principal access to.
c. On the left, click Access Control (IAM).
d. From the top area, click + Add → Add role assignment.
e. In the search area enter text to narrow the list of roles that you see. For example,
enter Storage Blob to show the available roles that contain Storage Blob.
f. Select one or more roles as appropriate for the access you want to grant. For
example, select Storage Blob Data Contributor.
g. Click Next.
h. On the Add role assignment page, under Members, click + Select members.
i. Under Select members, in the select field, enter the azure_app_name listed in Step 1
(the param_value column of the CLOUD_INTEGRATIONS view).
j. Select the application.
For example, click
ADBS_APP_OCID1.AUTONOMOUSDATABASE.REGION1.SEA.ANZWKLJSZLYNB3AAWLYL3JVC4IC
EXLB3ZG6WTCX735JSSY2NRHOBU4DZOOVA
k. Click Select.
l. Click Review + assign.
5. Click Review + Assign again.
After assigning a role you need to wait, as role assignments may take up to five minutes
to propagate in Azure.
This example shows steps to grant roles for accessing Azure Blob Storage. If you want to
provide access for other Azure services you need to perform equivalent steps for the
additional Azure services to allow the Azure application (the service principal) to access the
Azure service.
For example, you can list objects with DBMS_CLOUD.LIST_OBJECTS and the AZURE$PA credential (the storage account and container names shown are placeholders):

SELECT * FROM DBMS_CLOUD.LIST_OBJECTS('AZURE$PA',
    'https://azurestorageaccount.blob.core.windows.net/containername/');

OBJECT_NAME BYTES CHECKSUM                 CREATED              LAST_MODIFIED
----------- ----- ------------------------ -------------------- --------------------
trees.txt   58    aCB1qMOPVobDLCXG+2fcvg== 2022-04-07T23:03:01Z 2022-04-07T23:03:01Z
If you compare the steps required to access object storage, as shown in Create Credentials and Copy Data into an Existing Table, notice that Step 1, creating credentials, is not required because you are using the Azure service principal credential AZURE$PA.
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_PRINCIPAL_AUTH(
provider => 'AZURE',
username => 'adb_user');
END;
/
When the provider value is AZURE and the username is a user other than the ADMIN
user, the procedure revokes the privileges from the specified user. In this case, the
ADMIN user and other users can continue to use ADMIN.AZURE$PA and the application
that is created for the Autonomous Database instance remains on the instance.
When the provider value is AZURE and the username is ADMIN, the procedure disables
Azure service principal based authentication and deletes the Azure service principal
application on the Autonomous Database instance. In this case, if you want to enable
Azure service principal you must perform all the steps required to use Azure service
principal again, including the following:
• Enable the ADMIN schema or another schema to use Azure service principal
authentication. See Enable Azure Service Principal for more information.
• Provide consent for the application and perform the Azure role assignment grants. See Provide Azure Application Consent and Assign Roles for more information.
See DISABLE_PRINCIPAL_AUTH Procedure for more information.
Use Google Service Account to Access Google Cloud Platform Resources

You can use a Google service account to access Google Cloud Platform (GCP) resources from an Autonomous Database instance:

• Google service account access allows you to set a policy to limit access to resources by role. For example, setting a policy that is limited to read-only access, by role, to a Google Cloud Storage bucket.
• Google service account based credentials provide better security, as you do not
need to provide long-term user credentials in code when your application
accesses GCP resources. Autonomous Database manages the temporary
credentials for the Google service account and does not need to store GCP
resource user credentials in the database.
See Service accounts for information on Google service accounts.
Enable Google Service Account and Find the GCP Service Account Name
Prior to using a Google Cloud Platform (GCP) resource with a Google service account
you need to enable GCP access for your Autonomous Database instance.
1. Enable Google service account authentication with
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH.
For example, to enable Google service account authentication for the ADMIN user:
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider => 'GCP' );
END;
/
For example, to enable Google service account authentication for a non-ADMIN user:

BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
    provider => 'GCP',
    username => 'adb_user');
END;
/
If you want the specified user to have privileges to enable Google service account
authentication for other users, set the params parameter grant_option to TRUE.
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider => 'GCP',
username => 'adb_user',
params => JSON_OBJECT('grant_option' value TRUE));
END;
/
After you run this command, adb_user has privileges to enable Google service account authentication for another user. For example, if you connect as adb_user, you can run the following command to enable GCP service account access for adb_user2:
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider => 'GCP',
username => 'adb_user2');
END;
/
2. Query the CLOUD_INTEGRATIONS view to obtain the Google service account information for your database. For example:

SELECT param_name, param_value FROM cloud_integrations
   WHERE param_name = 'gcp_service_account';

PARAM_NAME           PARAM_VALUE
-------------------  ---------------------------------------------------
gcp_service_account  GCP-SA-22222-32222@gcp-example.iam.gserviceaccount.com
3. Note the gcp_service_account parameter value as you must supply this value when you
configure GCP resources.
See ENABLE_PRINCIPAL_AUTH Procedure for more information.
Assign Roles to the Google Service Account and Provide Access for GCP Resources
To use Google Cloud Platform (GCP) resources from an Autonomous Database instance,
you or a Google Cloud Administrator must assign roles and privileges to the Google service
account that your application accesses. In addition to assigning roles for the Google service
account, for any GCP resources you want to use a Google Cloud administrator must add
Google IAM principals.
As a prerequisite, first enable the Google service account on your Autonomous Database
instance. See Enable Google Service Account and Find the GCP Service Account Name for
more information.
1. Open the Google Cloud Console for your account.
2. Create roles with the specified permissions.
a. From the navigation menu, select IAM & Admin.
b. From the IAM & Admin navigator, select Roles.
c. On the Roles page, click More Actions and select + CREATE ROLE.
For example, you can create a role Object Store Read Write to control the use of an Object Store bucket.
5. Add roles and principals for the resource you want to access.
For example, if you want to access Google Cloud Storage using the role you just
created, Object Store Read Write:
a. From the navigator, select Cloud Storage and select Buckets.
b. Select the bucket you want to use, and click PERMISSIONS.
c. Click + ADD PRINCIPAL.
6. On the Grant access to "bucketname" dialog, add roles and principals for the selected resource.
a. Under Add principals add the value of the gcp_service_account parameter
from your Autonomous Database instance.
b. On the Grant access to "bucketname" dialog, enter roles under Assign Roles
and then click SAVE.
After you complete these steps the roles and principals are assigned. This allows your
application running on the Autonomous Database instance to access the GCP resource with
a Google service account.
• Perform the Google Cloud Platform role assignments for the resources you want
to access. See Assign Roles to the Google Service Account and Provide Access
for GCP Resources for more information.
To use a DBMS_CLOUD procedure or function with Google service account
authentication:
1. Use GCP$PA as the credential name.
2. Construct the URI to access the GCP resource using virtual hosted-style:
https://BUCKET_NAME.storage.googleapis.com/OBJECT_NAME
For example, you can access Google Cloud Storage using Google service account
credentials as follows:
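A minimal sketch (the bucket name is a placeholder):

SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_OBJECTS(
    'GCP$PA',
    'https://mybucket.storage.googleapis.com/');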
When the provider value is GCP and the username is a user other than the ADMIN user,
the procedure revokes the privileges from the specified user. In this case, the ADMIN
user and other users can continue to use GCP$PA.
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_PRINCIPAL_AUTH(
provider => 'GCP',
username => 'adb_user');
END;
/
When the provider value is GCP and the username is ADMIN, the procedure disables
Google service account access on the Autonomous Database instance. The default
value for username is ADMIN.
For example:
BEGIN
  DBMS_CLOUD_ADMIN.DISABLE_PRINCIPAL_AUTH(
    provider => 'GCP');
END;
/
View Data Pump Jobs and Import Data with Data Pump
Use the Data Pump page to view Data Pump jobs and use the wizard to quickly create
and run Data Pump import jobs.
See The Data Pump Page for more information.
For the fastest data loading experience, Oracle recommends uploading the source files to a cloud-based object store before loading the data into your database. Oracle provides support for loading files that are located locally in your data center, but when using this method of data loading you should factor in the transmission speeds across the Internet, which may be significantly slower.
For more information on Oracle Cloud Infrastructure Object Storage, see Putting Data into
Object Storage and Overview of Object Storage.
Note:
If you are not using ADMIN user, ensure the user has the necessary privileges for the
operations the user needs to perform. See Manage User Privileges on Autonomous
Database - Connecting with a Client Tool for more information.
See Connect with Built-In Oracle Database Actions for information on accessing
Oracle Database Actions.
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL. For example:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'adb_user@example.com',
    password => 'password');
END;
/
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name for all data loads.
For detailed information about the parameters, see CREATE_CREDENTIAL Procedure.
Note:
Some tools like SQL*Plus and SQL Developer use the ampersand
character (&) as a special character. If you have the ampersand
character in your password use the SET DEFINE OFF command in those
tools as shown in the example to disable the special character and get
the credential created properly.
2. Load data into an existing table using the procedure DBMS_CLOUD.COPY_DATA. For
example:
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
    format => json_object('delimiter' value ',')
  );
END;
/
Note:
Autonomous Database supports a variety of source file formats, including
compressed data formats. See DBMS_CLOUD Package Format Options and
the DBMS_CLOUD compression format option to see the supported compression
types.
For detailed information about the parameters, see COPY_DATA Procedure and
COPY_DATA Procedure for Avro, ORC, or Parquet Files.
Create Credentials and Load Data Pump Dump Files into an Existing Table
For data loading you can also use Oracle Data Pump dump files as source files.
The source files for this load type must be exported from the source system using the
ORACLE_DATAPUMP access driver in External Tables. See Unloading and Loading Data with the
ORACLE_DATAPUMP Access Driver for details on exporting using the ORACLE_DATAPUMP
access driver.
To load the data you first move the dump files that were exported using the ORACLE_DATAPUMP
access driver to your Object Store and then use DBMS_CLOUD.COPY_DATA to load the dump
files to an existing table in your database.
The source files in this example are the Oracle Data Pump dump files, exp01.dmp and
exp02.dmp.
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL. For example:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'adb_user@example.com',
    password => 'password');
END;
/

This operation stores the credentials in the database in an encrypted format. You can use any name for the credential name. Note that this step is required only once unless your object store credentials change. Once you store the credentials you can then use the same credential name for all data loads.
For detailed information about the parameters, see CREATE_CREDENTIAL Procedure.
Note:
Some tools like SQL*Plus and SQL Developer use the ampersand character (&)
as a special character. If you have the ampersand character in your password
use the SET DEFINE OFF command in those tools as shown in the example to
disable the special character and get the credential created properly.
2. Load data into an existing table using the procedure DBMS_CLOUD.COPY_DATA. For
example:
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name => 'CHANNELS',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp01.dmp,
                      https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp02.dmp',
    format => json_object('type' value 'datapump')
  );
END;
/
This example loads JSON values from a line-delimited file and uses the JSON file
myCollection.json. Each value, each line, is loaded into a collection on your database as
a single document.
Here's an example of such a file. It has three lines, with one object per line. Each of
those objects gets loaded as a separate JSON document.
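For instance, a file with contents along these lines (the values are illustrative):

{"name":"apple","count":20}
{"name":"orange","count":42}
{"name":"pear","count":10}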
Before loading the data from myCollection.json into your database, copy the file
to your object store:
• Create a bucket in the object store. For example, create an Oracle Cloud
Infrastructure Object Storage bucket from the Oracle Cloud Infrastructure Object
Storage link, and then in your selected compartment click Create Bucket, or use a
command such as the following OCI CLI command to create a bucket:
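A command of the following form creates the bucket (the compartment OCID is a placeholder):

oci os bucket create --name fruit_bucket --compartment-id <your_compartment_OCID>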
• Copy the JSON file to your object store bucket. For example use the following OCI
CLI command to copy the JSON file to the fruit_bucket on Oracle Cloud
Infrastructure Object Storage:
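One possible form, assuming the file is in the current directory:

oci os object put --bucket-name fruit_bucket --file myCollection.json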
Load the JSON file from object store into a collection named fruit on your database
as follows:
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL, as shown in the following example:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'adb_user@example.com',
    password => 'password');
END;
/
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name. Note that this step is required only
once unless your object store credentials change. Once you store the credentials,
you can use the same credential name for loading all documents.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not
required if you enable resource principal credentials. See Use Resource Principal
to Access Oracle Cloud Infrastructure Resources for more information.
See CREATE_CREDENTIAL Procedure for detailed information about the
parameters.
Note:
Some tools like SQL*Plus and SQL Developer use the ampersand character (&)
as a special character. If you have the ampersand character in your password,
then use the SET DEFINE OFF command in those tools as shown in the example
to disable the special character, and get the credential created properly.
2. Load the JSON file into the collection using the procedure DBMS_CLOUD.COPY_COLLECTION. For example:

BEGIN
  DBMS_CLOUD.COPY_COLLECTION(
    collection_name => 'fruit',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/fruit_bucket/o/myCollection.json',
    format => JSON_OBJECT('recorddelimiter' value '''\n''') );
END;
/
You can use DBMS_CLOUD.COPY_COLLECTION to load documents into a collection. This topic explains how to load documents to your database from a JSON array in a file.
Note:
You can also load documents from a JSON array in a file into a collection
using SODA for REST. See Load Purchase-Order Sample Data Using SODA
for REST.
This example uses the JSON file fruit_array.json. The following shows the
contents of the file fruit_array.json:
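For instance, contents along these lines (the values are illustrative):

[{"name":"apple","count":20},
 {"name":"orange","count":42},
 {"name":"pear","count":10}]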
Before loading data into Autonomous Database, copy the data to your object store as
follows:
• Create a bucket in the object store. For example, create an Oracle Cloud
Infrastructure Object Store bucket from the Oracle Cloud Infrastructure Object
Storage link, in your selected Compartment, by clicking Create Bucket, or use a
command line tool such as the following OCI CLI command:
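A command of the following form creates the bucket (the bucket name json matches the URL used later in this example; the compartment OCID is a placeholder):

oci os bucket create --name json --compartment-id <your_compartment_OCID>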
• Copy the JSON file to the object store. For example, the following OCI CLI
command copies the JSON file fruit_array.json to the object store:
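One possible form, assuming the file is in the current directory:

oci os object put --bucket-name json --file fruit_array.json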
Load the JSON file from object store into a SODA collection named fruit2 on your
database:
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL, as shown in the following example:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'adb_user@example.com',
    password => 'password');
END;
/
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name. Note that this step is required only
once unless your object store credentials change. Once you store the credentials, you
can use the same credential name for loading all documents.
See CREATE_CREDENTIAL Procedure for detailed information about the parameters.
Note:
Some tools like SQL*Plus and SQL Developer use the ampersand character (&)
as a special character. If you have the ampersand character in your password,
then use the SET DEFINE OFF command in those tools as shown in the example
to disable the special character, and get the credential created properly.
2. Load the JSON file into the collection using the procedure DBMS_CLOUD.COPY_COLLECTION. For example:

BEGIN
  DBMS_CLOUD.COPY_COLLECTION(
    collection_name => 'fruit2',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/json/o/fruit_array.json',
    format => '{"recorddelimiter" : "0x''01''", "unpackarrays" : "TRUE", "maxdocsize" : "10240000"}'
  );
END;
/
In this example you load a single JSON value that occupies the whole file, so there is no need to specify a record delimiter. To indicate that there is no record delimiter, you specify a value for recorddelimiter that does not occur in the input file. For this example, recorddelimiter is set to the control character 0x01 (SOH), expressed as "0x''01''", because this character does not occur directly in JSON text.
When the format parameter unpackarrays is set to TRUE, the array of documents is loaded as individual documents rather than as an entire array. The unpacking of array elements is, however, limited to a single level. If there are nested arrays in the documents, those arrays are not unpacked.
The parameters are:
• collection_name: is the name of the target collection.
• credential_name: is the name of the credential created in the previous step. The
credential_name parameter must conform to Oracle object naming conventions,
which do not allow spaces or hyphens.
• file_uri_list: is a comma delimited list of the source files that you want to load.
• format: defines the options that you can specify to describe the format of the source
file. The format options characterset, compression, encryption,
ignoreblanklines, jsonpath, maxdocsize, recorddelimiter, rejectlimit,
type, unpackarrays are supported for loading JSON data. Any other formats
specified will result in an error.
If the data in your source files is encrypted, decrypt the data by specifying the
format parameter with the encryption option. See Decrypt Data While
Importing from Object Storage for more information on decrypting data.
See DBMS_CLOUD Package Format Options for more information.
In this example, namespace-string is the Oracle Cloud Infrastructure object
storage namespace and bucketname is the bucket name. See Understanding
Object Storage Namespaces for more information.
For detailed information about the parameters, see COPY_COLLECTION
Procedure.
When you load fruit_array.json with DBMS_CLOUD.COPY_COLLECTION using the format option unpackarrays, the procedure recognizes array values in the source. Instead of loading the data as a single document, as it would by default, it loads the data into the collection fruit2 with each value in the array as a single document.
The LOGFILE_TABLE column shows the name of the table you can query to look at the
log of a load operation. For example, the following query shows the log of the load
operation with status FAILED and timestamp 2020-04-23 22:28:36:
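A query of the following form shows the log (the table name comes from the LOGFILE_TABLE column; COPY$2_LOG is illustrative):

SELECT * FROM COPY$2_LOG;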
The column BADFILE_TABLE shows the name of the table you can query to review
information for the rows reporting errors during loading. For example, the following
query shows the rejected records for the load operation:
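A query of the following form shows the rejected records (the table name comes from the BADFILE_TABLE column; COPY$2_BAD is illustrative):

SELECT * FROM COPY$2_BAD;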
Depending on the errors shown in the log and the rows shown in the BADFILE_TABLE table,
you might be able to correct errors by specifying different format options with
DBMS_CLOUD.COPY_COLLECTION.
Note:
The LOGFILE_TABLE and BADFILE_TABLE tables are stored for two days for each load
operation and then removed automatically.
Note:
If the database you use is an Oracle Autonomous Database then you can use
PL/SQL procedure DBMS_CLOUD.copy_collection to create a JSON document
collection from a file of JSON data such as that produced by common NoSQL
databases, including Oracle NoSQL Database.
If you use ejson as the value of the type parameter of the procedure, then
recognized extended JSON objects in the input file are replaced with corresponding
scalar values in the resulting native binary JSON collection. In the other direction,
you can use function json_serialize with keyword EXTENDED to replace scalar
values with extended JSON objects in the resulting textual JSON data.
These are the two main use cases for extended objects:
• Exchange (import/export):
– Ingest existing JSON data (from somewhere) that contains extended objects.
– Serialize native binary JSON data as textual JSON data with extended
objects, for some use outside the database.
• Inspection of native binary JSON data: see what you have by looking at
corresponding extended objects.
For exchange purposes, you can ingest JSON data from a file produced by common
NoSQL databases, including Oracle NoSQL Database, converting extended objects to
native binary JSON scalars. In the other direction, you can export native binary JSON
data as textual data, replacing Oracle-specific scalar JSON values with corresponding
textual extended JSON objects.
As an example of inspection, consider an object such as {"dob" :
"2000-01-02T00:00:00"} as the result of serializing native JSON data. Is
"2000-01-02T00:00:00" the result of serializing a native binary value of type date, or
is the native binary value just a string? Using json_serialize with keyword EXTENDED
lets you know.
The mapping of extended object fields to scalar JSON types is, in general, many-to-
one: more than one kind of extended JSON object can be mapped to a given scalar
value. For example, the extended JSON objects {"$numberDecimal":"31"} and {"$numberLong":"31"} are both translated as the value 31 of JSON-language scalar type number, and item method type() returns "number" for each of those JSON scalars.
Item method type() reports the JSON-language scalar type of its targeted value (as a
JSON string). Some scalar values are distinguishable internally, even when they have
the same scalar type. This generally allows function json_serialize (with keyword
EXTENDED) to reconstruct the original extended JSON object. Such scalar values are
distinguished internally either by using different SQL types to implement them or by
tagging them with the kind of extended JSON object from which they were derived.
When json_serialize reconstructs the original extended JSON object the result is not
always textually identical to the original, but it is always semantically equivalent. For
example, {"$numberDecimal":"31"} and {"$numberDecimal":31} are semantically
equivalent, even though the field values differ in type (string and number). They are
translated to the same internal value, and each is tagged as being derived from
a $numberDecimal extended object (same tag). But when serialized, the result for both
is {"$numberDecimal":31}. Oracle always uses the most directly relevant type for the
field value, which in this case is the JSON-language value 31, of scalar type number.
Note:
There are two cases where the type of the original extended object can be lost
when deriving the internal binary-JSON value.
• An extended object with field $numberInt is translated to an Oracle SQL NUMBER
internal value, with no tag. Serializing that value produces a standard JSON-
language value of type number. There is no loss in the numerical value; the
only loss is the information that the original textual data was a $numberInt
extended object.
• Use of field $numberDecimal with infinite, very small, very large, or not-a-
number values is unsupported, and results in undefined behavior. Do not use a
string value that represents positive infinity ("Infinity" or "Inf"), negative
infinity ("-Infinity" or "-Inf"), or an unknown value (not a number, "Nan")
with $numberDecimal — instead, use $numberDouble with such values.
Table 3-1 presents correspondences among the various types used. For each extended object type used as input, it lists the Oracle JSON scalar type reported by item method type(), the SQL scalar type used internally, the standard JSON-language type used as output by function json_serialize, and the type of extended object output by json_serialize when keyword EXTENDED is specified.

$numberDouble with value a JSON number, a string representing the number, or one of these strings: "Infinity", "-Infinity", "Inf", "-Inf", "Nan" (1)
  Oracle JSON scalar type (reported by type()): double
  SQL scalar type: BINARY_DOUBLE
  Standard JSON scalar type (output): number
  Extended object type (output): $numberDouble with value a JSON number or one of these strings: "Inf", "-Inf", "Nan" (2)

$numberFloat with value the same as for $numberDouble
  Oracle JSON scalar type: float
  SQL scalar type: BINARY_FLOAT
  Standard JSON scalar type (output): number
  Extended object type (output): $numberFloat with value the same as for $numberDouble

$numberDecimal with value the same as for $numberDouble
  Oracle JSON scalar type: number
  SQL scalar type: NUMBER
  Standard JSON scalar type (output): number
  Extended object type (output): $numberDecimal with value the same as for $numberDouble

$numberInt with value a signed 32-bit integer or a string representing the number
  Oracle JSON scalar type: number
  SQL scalar type: NUMBER
  Standard JSON scalar type (output): number
  Extended object type (output): $numberInt with value the same as for $numberDouble

$numberLong with value a JSON number or a string representing the number
  Oracle JSON scalar type: number
  SQL scalar type: NUMBER
  Standard JSON scalar type (output): number
  Extended object type (output): $numberLong with value the same as for $numberDouble

$binary with value one of these:
  • a string of base-64 characters
  • an object with fields base64 and subType, whose values are a string of base-64 characters and the number 0 (arbitrary binary) or 4 (UUID), respectively
  When the value is a string of base-64 characters, the extended object can also have field $subtype with value 0 or 4, expressed as a one-byte integer (0-255) or a 2-character hexadecimal string representing such an integer.
  Oracle JSON scalar type: binary
  SQL scalar type: BLOB or RAW (conversion is equivalent to the use of SQL function rawtohex)
  Standard JSON scalar type (output): string
  Extended object type (output): one of the following:
  • $binary with value a string of base-64 characters
  • $rawid with value a string of 32 hexadecimal characters, if the input had a subType value of 4 (UUID)

$oid with value a string of 24 hexadecimal characters
  Oracle JSON scalar type: binary
  SQL scalar type: RAW(12) (conversion is equivalent to the use of SQL function rawtohex)
  Standard JSON scalar type (output): string
  Extended object type (output): $rawid with value a string of 24 hexadecimal characters

$rawhex with value a string with an even number of hexadecimal characters
  Oracle JSON scalar type: binary
  SQL scalar type: RAW (conversion is equivalent to the use of SQL function rawtohex)
  Standard JSON scalar type (output): string
  Extended object type (output): $binary with value a string of base-64 characters, right-padded with = characters

$rawid with value a string of 24 or 32 hexadecimal characters
  Oracle JSON scalar type: binary
  SQL scalar type: RAW (conversion is equivalent to the use of SQL function rawtohex)
  Standard JSON scalar type (output): string
  Extended object type (output): $rawid

$oracleDate with value an ISO 8601 date string
  Oracle JSON scalar type: date
  SQL scalar type: DATE
  Standard JSON scalar type (output): string
  Extended object type (output): $oracleDate with value an ISO 8601 date string

$oracleTimestamp with value an ISO 8601 timestamp string
  Oracle JSON scalar type: timestamp
  SQL scalar type: TIMESTAMP
  Standard JSON scalar type (output): string
  Extended object type (output): $oracleTimestamp with value an ISO 8601 timestamp string

$oracleTimestampTZ with value an ISO 8601 timestamp string with a numeric time zone offset or with Z
  Oracle JSON scalar type: timestamp with time zone
  SQL scalar type: TIMESTAMP WITH TIME ZONE
  Standard JSON scalar type (output): string
  Extended object type (output): $oracleTimestampTZ with value an ISO 8601 timestamp string with a numeric time zone offset or with Z

$date with value one of the following:
  • an integer millisecond count since January 1, 1970
  • an ISO 8601 timestamp string
  • an object with field $numberLong with value an integer millisecond count since January 1, 1970
  Oracle JSON scalar type: timestamp with time zone
  SQL scalar type: TIMESTAMP WITH TIME ZONE
  Standard JSON scalar type (output): string
  Extended object type (output): $oracleTimestampTZ with value an ISO 8601 timestamp string with a numeric time zone offset or with Z

$intervalDaySecond with value an ISO 8601 interval string as specified for SQL function to_dsinterval
  Oracle JSON scalar type: daysecondInterval
  SQL scalar type: INTERVAL DAY TO SECOND
  Standard JSON scalar type (output): string
  Extended object type (output): $intervalDaySecond with value an ISO 8601 interval string as specified for SQL function to_dsinterval

$intervalYearMonth with value an ISO 8601 interval string as specified for SQL function to_yminterval
  Oracle JSON scalar type: yearmonthInterval
  SQL scalar type: INTERVAL YEAR TO MONTH
  Standard JSON scalar type (output): string
  Extended object type (output): $intervalYearMonth with value an ISO 8601 interval string as specified for SQL function to_yminterval

(1) The string values are interpreted case-insensitively. For example, "NAN", "nan", and "nAn" are accepted and equivalent, and similarly "INF", "inFinity", and "iNf". Infinitely large ("Infinity" or "Inf") and small ("-Infinity" or "-Inf") numbers are accepted with either the full word or the abbreviation.
(2) On output, only these string values are used: no full-word Infinity or letter-case variants.
See Also:
IEEE Standard for Floating-Point Arithmetic (IEEE 754)
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL. For example:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'adb_user@example.com',
    password => 'password');
END;
/
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name. Note that this step is required only
once unless your object store credentials change. Once you store the credentials
you can then use the same credential name for all data loads.
For detailed information about the parameters, see CREATE_CREDENTIAL
Procedure.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not
required if you enable resource principal credentials. See Use Resource Principal
to Access Oracle Cloud Infrastructure Resources for more information.
2. Load JSON data into an existing table using the procedure
DBMS_CLOUD.COPY_DATA.
For example:
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name => 'WEATHER2',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/jsonfiles*',
    format => JSON_OBJECT('type' value 'json',
      'columnpath' value '["$.WEATHER_STATION_ID", "$.WEATHER_STATION_NAME"]')
  );
END;
/
You can monitor the status of a load operation by querying the USER_LOAD_OPERATIONS view. For example:

SELECT table_name, status, logfile_table, badfile_table
   FROM user_load_operations WHERE type = 'COPY';
The LOGFILE_TABLE column shows the name of the table you can query to look at the log of a
load operation. For example, the following query shows the log for this load operation:
The column BADFILE_TABLE shows the name of the table you can query to look at the rows
with errors during loading. For example, the following query shows the rejected records for
the load operation. If there are not any rejected rows in the operation, the query does not
show any rejected rows.
Depending on the errors shown in the log and the rows shown in the BADFILE_TABLE table, you can correct the error by specifying the correct format options in DBMS_CLOUD.COPY_DATA.
When the format type is "datapump", any rows rejected up to the specified rejectlimit are logged in the log file, but a BADFILE_TABLE is not generated.
By default the LOGFILE_TABLE and BADFILE_TABLE tables are retained for two days and then automatically removed. You can change the number of retention days with the logretention option for the format parameter.
Note:
For ORC, Parquet, or Avro files, when the format parameter type is set to
the value orc, parquet or avro the BADFILE_TABLE table is always empty.
Prerequisites
A data file, for example, data.txt exists in the Amazon S3 bucket that you can import. The
sample file in this example has the following contents:
1,Direct Sales
2,Tele Sales
3,Catalog
4,Internet
5,Partners
On the AWS side, log in to your AWS account and do the following:
1. Grant access privileges to the AWS IAM user for the Amazon S3 bucket.
For more information, see Controlling access to a bucket with user policies.
2. Create an access key for the user.
For more information, see Managing access keys for IAM users.
3. Obtain an object URL for the data file stored in the Amazon S3 bucket.
For more information, see Accessing and listing an Amazon S3 bucket.
Note:
Here, the user name is your AWS Access key ID and the password is your user
access key.
1. Store the AWS access credentials in your Autonomous Database using DBMS_CLOUD.CREATE_CREDENTIAL. For example (the access key values shown are placeholders):

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'AWS_CRED_NAME',
    username => 'AWS_access_key_ID',
    password => 'AWS_secret_access_key');
END;
/
2. Create a table in your database where you want to load the data.
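For example, a table matching the sample data (the column names and sizes shown are illustrative):

CREATE TABLE mytable (id NUMBER, name VARCHAR2(64));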
3. Import data from the Amazon S3 bucket into the table using DBMS_CLOUD.COPY_DATA. For example:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name => 'mytable',
    credential_name => 'AWS_CRED_NAME',
    file_uri_list => 'https://aws-bucket-01.s3.amazonaws.com/data.txt',
    format => json_object('delimiter' value ',')
  );
END;
/
For detailed information about the parameters, see COPY_DATA Procedure and
COPY_DATA Procedure for Avro, ORC, or Parquet Files.
You have successfully imported data from Amazon S3 to your Autonomous Database.
You can run the following statement to verify the data in your table:

SELECT * FROM mytable;
ID NAME
-- –-------------
1 Direct Sales
2 Tele Sales
3 Catalog
4 Internet
5 Partners
For more information about loading data, see Load Data from Files in the Cloud.
1. From the console, select the compartment for your Autonomous Database, and then
select the link to your Autonomous Database to open the console.
Note:
These steps are shown using Database Actions to execute the PL/SQL code,
and query the data. These actions can be performed from any SQL connection,
connecting to the Autonomous Database as a user with the proper privileges.
2. On the Autonomous Database Details page select Database Actions and in the list click
SQL.
As an alternative, select Database Actions and click View all database actions to
access the Database Actions Launchpad. From the Development section of the
Database Actions Launchpad, select SQL.
3. Within the SQL Worksheet, enter and execute the following code:
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name => '<YOUR_TABLE_NAME>',
    credential_name => '<YOUR_CREDENTIAL_NAME>',
    file_uri_list => '<YOUR_ORACLE_OBJECT_STORE_URL>',
    format => json_object('trimspaces' value 'rtrim', 'skipheaders' value '1', 'dateformat' value 'YYYYMMDD'),
    field_list => 'object_id (1:3) char,
                   object_name (4:14) char,
                   object_type (15:39) char,
                   created_date1 (40:45) date mask "YYMMDD",
                   created_date2 (46:53) date,
                   last_ddl_time (54:61) date,
                   status (62:71)',
    column_list => 'object_id number,
                    object_name varchar2(30),
                    object_type varchar2(25),
                    status varchar2(10),
                    created_date1 date,
                    created_date2 date,
                    last_ddl_time date');
END;
/
If the source database has a higher time zone file (TSTZ) version than your Autonomous Database instance, the import fails with an error such as:

ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ version n+1 into a target database with TSTZ version n.
See Manage Time Zone File Updates on Autonomous Database for more information
on this timezone related error.
• Export Your Existing Oracle Database to Import into Autonomous Database
Use Oracle Data Pump Export to export your existing Oracle Database to migrate
to Autonomous Database using Oracle Data Pump Import.
• Import Data Using Oracle Data Pump Version 18.3 or Later
Oracle recommends using the latest Oracle Data Pump version for importing data
from Data Pump files into your Autonomous Database, as it contains
enhancements and fixes for a better experience.
• Import Data Using Oracle Data Pump (Versions 12.2.0.1 and Earlier)
You can import data from Data Pump files into your Autonomous Database using
Data Pump client versions 12.2.0.1 and earlier by setting the default_credential
parameter.
• Access Log Files for Data Pump Import
The log files for Data Pump Import operations are stored in the directory you
specify with the data pump impdp directory parameter.
• Oracle Data Pump Import and Table Compression
Provides notes for using Oracle Data Pump import on Autonomous Database.
Specify multiple dump files with the %u wildcard in the dumpfile parameter. Set the parallel parameter to at least the number of CPUs you have in your database.
your database.
Oracle recommends using the following Data Pump parameters for faster and easier
migration to Autonomous Database:
exclude=cluster,indextype,db_link
parallel=n
schemas=schema_name
dumpfile=export%u.dmp
The exclude parameters ensure that these object types are not exported.
Note:
If during the export with expdp you use the encryption_pwd_prompt=yes parameter
then also use encryption_pwd_prompt=yes with your import and input the same
password at the impdp prompt to decrypt the dump files (remember the password
you supply during export). The maximum length of the encryption password is 128
bytes.
You can use other Data Pump Export parameters, like compression, depending on your
requirements. For more information on Oracle Data Pump Export see Oracle Database
Utilities.
1. Store your Cloud Object Storage credential using DBMS_CLOUD.CREATE_CREDENTIAL. For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password'
);
END;
/
To use Oracle Cloud Infrastructure native URIs with signing-key based authentication, create the credential with the user_ocid, tenancy_ocid, private_key, and fingerprint parameters. For example:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'DEF_CRED_NAME',
    user_ocid => 'ocid1.user.oc1..aaaaaaaauq54mi7zdyfhw33ozkwuontjceel7fok5nq3bf2vwetkpqsoa',
    tenancy_ocid => 'ocid1.tenancy.oc1..aabbbbbbaafcue47pqmrf4vigneebgbcmmoy5r7xvoypicjqqge32ewnrcyx2a',
    private_key => 'MIIEogIBAAKCAQEAtUnxbmrekwgVac6FdWeRzoXvIpA9+0r1.....wtnNpESQQQ0QLGPD8NM//JEBg=',
    fingerprint => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27');
END;
/
• Data Pump supports using a resource principal credential with impdp. See Import
Data Using Oracle Data Pump with OCI Resource Principal Credential for more
information.
2. Run Data Pump Import with the dumpfile parameter set to the list of file URLs on your
Cloud Object Storage and the credential parameter set to the name of the credential you
created in the previous step. For example:
impdp admin/password@db2022adb_high \
  directory=data_pump_dir \
  credential=def_cred_name \
  dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%u.dmp \
  parallel=16 \
  encryption_pwd_prompt=yes \
  exclude=cluster,indextype,db_link
Note:
If during the export with expdp you used the encryption_pwd_prompt=yes
parameter then use encryption_pwd_prompt=yes and input the same password
at the impdp prompt that you specified during the export.
Import Data Using Oracle Data Pump with OCI Resource Principal Credential
Oracle Data Pump supports importing data pump files into your Autonomous Database
using an Oracle Cloud Infrastructure resource principal as a credential object.
If you use Oracle Data Pump expdp to export directly to Object Store, then you must use the same credential when you import with impdp; in this case, Oracle Data Pump import does not support Oracle Cloud Infrastructure resource principal credentials. Other upload methods do support importing with impdp using resource principal credentials. For example, if you upload Oracle Data Pump files to Object Store using DBMS_CLOUD.PUT_OBJECT, you can import the files with Oracle Data Pump impdp using resource principal credentials. Likewise, when you use the Oracle Cloud Infrastructure Console to upload data pump files to Object Store, you can use resource principal credentials to import into an Autonomous Database instance with Oracle Data Pump impdp.
In Oracle Data Pump, if your source files reside on Oracle Cloud Infrastructure Object
Storage you can use Oracle Cloud Infrastructure native URIs or Swift URIs. See
DBMS_CLOUD URI Formats for details on these file URI formats.
1. Configure the dynamic groups and policies and enable Oracle Cloud Infrastructure
resource principal to access the Object Store location where the data pump files
you want to import reside.
See Use Resource Principal to Access Oracle Cloud Infrastructure Resources for
more information.
2. Run Data Pump Import with the dumpfile parameter set to the list of file URLs on
your Cloud Object Storage and the credential parameter set to
OCI$RESOURCE_PRINCIPAL.
For example:
impdp admin/password@db2022adb_high \
directory=data_pump_dir \
credential='OCI$RESOURCE_PRINCIPAL' \
dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%u.dmp \
parallel=16 \
encryption_pwd_prompt=yes \
exclude=cluster,indextype,db_link
Note:
If during the export with expdp you used the encryption_pwd_prompt=yes
parameter then use encryption_pwd_prompt=yes and input the same password
at the impdp prompt that you specified during the export.
Import Data Using Oracle Data Pump (Versions 12.2.0.1 and Earlier)
You can import data from Data Pump files into your Autonomous Database using Data Pump
client versions 12.2.0.1 and earlier by setting the default_credential parameter.
Data Pump Import versions 12.2.0.1 and earlier do not have the credential parameter. If you
are using an older version of Data Pump Import you need to define a default credential
property for Autonomous Database and use the default_credential keyword in the
dumpfile parameter.
In Oracle Data Pump, if your source files reside on Oracle Cloud Infrastructure Object
Storage you can use the Oracle Cloud Infrastructure native URIs, or Swift URIs. See
DBMS_CLOUD URI Formats for details on these file URI formats.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password'
);
END;
/
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'DEF_CRED_NAME',
    user_ocid => 'ocid1.user.oc1..aaaaaaaauq54mi7zdyfhw33ozkwuontjceel7fok5nq3bf2vwetkpqsoa',
    tenancy_ocid => 'ocid1.tenancy.oc1..aabbbbbbaafcue47pqmrf4vigneebgbcmmoy5r7xvoypicjqqge32ewnrcyx2a',
    private_key => 'MIIEogIBAAKCAQEAtUnxbmrekwgVac6FdWeRzoXvIpA9+0r1.....wtnNpESQQQ0QLGPD8NM//JEBg=',
    fingerprint => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27');
END;
/
See Configure Policies and Roles to Access Resources for more information on resource
principal based authentication.
Note:
The DEFAULT_CREDENTIAL value cannot be an Azure service principal, Amazon
Resource Name (ARN), or a Google service account.
The DEFAULT_CREDENTIAL value can be set to NULL if you are using a pre-authenticated
URL.
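For example, a minimal sketch of defining the default credential as a database property (assuming the DEF_CRED_NAME credential created above, owned by ADMIN):

ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.DEF_CRED_NAME';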
3. Run Data Pump Import with the dumpfile parameter set to the list of file URLs on your
Cloud Object Storage, and set the default_credential keyword. For example:
impdp admin/password@db2022adb_high \
directory=data_pump_dir \
dumpfile=default_credential:https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%u.dmp \
parallel=16 \
encryption_pwd_prompt=yes \
exclude=cluster,indextype,db_link
Note:
If during the export with expdp you used the encryption_pwd_prompt=yes
parameter then use encryption_pwd_prompt=yes and input the same password
at the impdp prompt that you specified during the export.
For information on which database service name to connect to when you run Data Pump Import, see Manage Concurrency and Priorities on Autonomous Database.
For the dump file URL format for different Cloud Object Storage services, see
DBMS_CLOUD URI Formats.
In this example the following are excluded during the Data Pump Import:
• Clusters
• Indextypes
• Database links
Note:
To perform a full import or to import objects that are owned by other users,
you need the DATAPUMP_CLOUD_IMP role.
You can also use Data Pump Import to import SODA collections on Autonomous
Database. See Import SODA Collection Data Using Oracle Data Pump Version 19.6 or
Later for more information.
For information on disallowed objects in Autonomous Database, see SQL Commands.
See Oracle Data Pump Import and Table Compression for details on table
compression using Oracle Data Pump import on Autonomous Database.
For detailed information on Oracle Data Pump Import parameters see Oracle
Database Utilities.
Notes for importing with Oracle Data Pump:
• When you perform Oracle Data Pump export to Object Storage with a Swift URI
you must use Swift credentials to import with Oracle Data Pump import. See
Oracle Cloud Infrastructure Object Storage Swift URI Format for more information
on Swift URIs.
• When you perform Oracle Data Pump export to Object Storage with a native URI,
you can import using either Swift credentials or Signing Key based Credentials.
See Oracle Cloud Infrastructure Object Storage Native URI Format for more
information on Native URIs.
To access the log file you need to move the log file to your Cloud Object Storage using
the procedure DBMS_CLOUD.PUT_OBJECT. For example, the following PL/SQL block
moves the file import.log to your Cloud Object Storage:
BEGIN
DBMS_CLOUD.PUT_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/import.log',
directory_name => 'DATA_PUMP_DIR',
file_name => 'import.log');
END;
/
TRANSFORM=TABLE_COMPRESSION_CLAUSE:NONE
The TRANSFORM parameter with this option specifies that Oracle Data Pump Import should ignore the compression type of your source tables. With this option, Oracle Data Pump imports the tables into Autonomous Database using the default compression type, which depends on the Autonomous Database workload type:
• Data Warehouse: The default table compression is Hybrid Columnar Compression.
Oracle recommends using this default if your application primarily uses bulk load
operations on your tables, as the loads will compress the data. Query performance on
these tables will benefit from compression as queries need to do less IO.
If you have staging tables replicated from other systems using Oracle GoldenGate or
other replication tools, or your application primarily uses row-by-row DML operations on
tables, Oracle recommends keeping the tables uncompressed or using Advanced Row
Compression.
• Transaction Processing: The default table compression is no compression.
• JSON Database: The default table compression is no compression.
• APEX: The default table compression is no compression.
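For example, a sketch of an import that ignores the source compression clause (the connection string, credential, and dump file URL are illustrative):

impdp admin/password@db2022adb_high \
      directory=data_pump_dir \
      credential=def_cred_name \
      dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%u.dmp \
      transform=table_compression_clause:none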
See TRANSFORM for more information on the Oracle Data Pump Import TRANSFORM
parameter.
See About Table Compression for more information.
• Load Data into Existing Autonomous Database Table with Oracle Database
Actions
You can load data into an existing table in Autonomous Database with the
Database Actions import from file feature.
Load Data into Existing Autonomous Database Table with Oracle Database
Actions
You can load data into an existing table in Autonomous Database with the Database
Actions import from file feature.
Before you load data, create the table in Autonomous Database. The file formats that
you can upload with the Database Actions upload feature are CSV, XLS, XLSX, TSV
and TXT.
To upload data from local files to an existing table with Database Actions, do the
following:
1. Access Database Actions from the Oracle Cloud Infrastructure Console or with the
Database Actions link provided to you.
2. To import data, in Database Actions, under Development click SQL.
This shows an SQL worksheet.
3. In the Navigator, right-click the table where you want to load data.
4. In the menu select Data loading → Upload Data...
For example, select the SALES table, right-click, and select Data loading →
Upload Data...
5. In the Import data dialog you can either drag and drop files or click Select files to show a
browser to select the files to import.
6. Complete the mapping for the columns you are importing. There are a number of options for column mapping. Click the Show/Hide options icon to show the data import and format options, where you can change column names, skip rows, set the rows to load, and adjust various other options.
Click Apply to apply the options you select.
7. When you finish selecting format and mapping options, click Next to preview the column mapping.
If there is a problem at this stage, information shows with more details, such as: 2 pending actions. This means you need to correct the source file data before you import.
8. Click Next.
9. Click Next to review the column mapping.
This shows the Review page to review the source columns and target columns for
the import:
Depending on the size of the data file you are importing, the import may take some
time.
Database Actions provides history to show the status of the import and to allow you to
review the results or errors associated with the import operation.
For a detailed summary of the upload process, right-click the table in the Navigator tab,
select Data loading, and then select Loaded Data. A summary of the data loaded is
displayed in the Loaded data summary dialog.
If any data failed to load, you can view the number of rows in the Failed Rows column. Click
the column and a dialog is displayed showing the failed rows.
In the Loaded data summary dialog, you can also search for files loaded by schema name,
table name, or file name. To remove the loaded files, click the Delete icon.
See Uploading Data from Local Files for more information on using Database Actions to
upload data.
To load data from a directory, you do not need to include the credential_name parameter, but you need READ object privileges on the directory (a sample grant is shown after the following example).
For example, you can load data into an existing table using DBMS_CLOUD.COPY_DATA:
1. Use an existing directory, create a directory, or attach a network file system for the source
files.
See Create Directory in Autonomous Database for information on creating a local
directory on your Autonomous Database instance.
See Attach Network File System to Autonomous Database for information on attaching a
network file system that contains the data you want to load.
2. Load data with a DBMS_CLOUD procedure.
For example:
BEGIN
DBMS_CLOUD.COPY_DATA(
table_name => 'CHANNELS',
file_uri_list => 'MY_DIR:channels.txt',
format => json_object('delimiter' value ',')
);
END;
/
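For example, the ADMIN user can grant the required read privilege (the directory name MY_DIR and the user adb_user are illustrative):

GRANT READ ON DIRECTORY MY_DIR TO adb_user;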
For either pipeline type you perform the following steps to create and use a pipeline:
1. Create and configure the pipeline. See Create and Configure Pipelines for more
information.
2. Test a new pipeline. See Test Pipelines for more information.
3. Start a pipeline. See Start a Pipeline for more information.
In addition, you can monitor, stop, or drop pipelines:
• While a pipeline is running, either during testing or during regular use after you
start the pipeline, you can monitor the pipeline. See Monitor and Troubleshoot
Pipelines for more information.
• You can stop a pipeline and later start it again, or drop a pipeline when you are
finished using the pipeline. See Stop a Pipeline and Drop a Pipeline for more
information.
A load pipeline moves files such as JSON, CSV, and XML from cloud object stores, including Amazon S3, Google Cloud Storage, and Azure Blob Storage, into Autonomous Database.
Migration from non-Oracle databases is one possible use case for a load pipeline.
When you need to migrate your data from a non-Oracle database to Oracle
Autonomous Database, you can extract the data and load it into Autonomous
Database (Oracle Data Pump format cannot be used for migrations from non-Oracle
databases). By using a generic file format such as CSV to export data from a non-
Oracle database, you can save your data to files and upload the files to object store.
Next, create a pipeline to load the data to Autonomous Database. Using a load
pipeline to load a large set of CSV files provides important benefits such as fault
tolerance, and resume and retry operations. For a migration with a large data set you
can create multiple pipelines, one per table for the non-Oracle database files, to load
data into Autonomous Database.
• Export incremental data of a table to object store using a date or timestamp column as
key for tracking newer data.
• Export data of a table to object store using a query to select data without a reference to a
date or timestamp column (so that the pipeline exports all the data that the query selects
for each scheduler run).
Export pipelines have the following features (some of these are configurable using pipeline
attributes):
• Results are exported in parallel to object store.
• In case of any failures, a subsequent pipeline job repeats the export operation.
• Export pipelines support multiple export file formats, including CSV, JSON, Parquet, and XML.
– Export data of a table to object store using a query to select data without a
reference to a date or timestamp column (so that the pipeline exports all the
data that the query selects for each scheduler run). See Create and Configure
a Pipeline to Export Query Results (Without a Timestamp).
• Create and Configure a Pipeline for Loading Data
You can create a pipeline to load data from external files in object store to tables in
Autonomous Database.
• Create and Configure a Pipeline for Export with Timestamp Column
• Create and Configure a Pipeline to Export Query Results (Without a Timestamp)
You can create an export pipeline to automatically export data from your
Autonomous Database to object store. Using this export pipeline option you
specify a SQL query that the pipeline runs periodically to export data to object
store. You can use this export option to share the latest data from your
Autonomous Database to object store for other applications to consume the data.
On your Autonomous Database, either use an existing table or create the database table where you are loading data. Then create a load pipeline. For example:
BEGIN
DBMS_CLOUD_PIPELINE.CREATE_PIPELINE(
pipeline_name => 'MY_PIPE1',
pipeline_type => 'LOAD',
description => 'Load metrics from object store into a
table'
);
END;
/
You specify the credential for the pipeline source location with the attribute
credential_name. If you do not supply a credential_name in the next step, the
credential_name value is set to NULL. You can use the default NULL value when the
location attribute is a public or pre-authenticated URL.
See CREATE_CREDENTIAL Procedure for more information.
3. Set the pipeline attributes, including the required attributes: location, table_name, and
format.
BEGIN
DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
pipeline_name => 'MY_PIPE1',
attributes => JSON_OBJECT(
'credential_name' VALUE 'OBJECT_STORE_CRED',
'location' VALUE 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/',
'table_name' VALUE 'employee',
'format' VALUE '{"type":"json", "columnpath":["$.NAME",
"$.AGE", "$.SALARY"]}',
'priority' VALUE 'HIGH',
'interval' VALUE '20')
);
END;
/
BEGIN
  DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
    pipeline_name => 'MY_PIPE1',
    attribute_name => 'format',
    attribute_value => '{"type":"json", "columnpath":["$.NAME", "$.AGE", "$.SALARY"]}' -- value completed for illustration
  );
END;
/
Create and Configure a Pipeline for Export with Timestamp Column
You can create an export pipeline that uses a date or timestamp key column to export newer rows to object store. First create the pipeline. For example:
BEGIN
DBMS_CLOUD_PIPELINE.CREATE_PIPELINE(
pipeline_name=>'EXP_PIPE1',
pipeline_type=>'EXPORT',
description=>'Export time series metrics to object store');
END;
/
BEGIN
DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
pipeline_name => 'EXP_PIPE1',
attributes => JSON_OBJECT('credential_name' VALUE
'OBJECT_STORE_CRED',
'location' VALUE 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/',
'table_name' VALUE 'metric_table',
'key_column' VALUE 'metric_time',
'format' VALUE '{"type": "json"}',
'priority' VALUE 'MEDIUM',
'interval' VALUE '20')
);
END;
/
Alternatively, configure the pipeline with a query and a key column instead of a table name. For example:
BEGIN
DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
pipeline_name => 'EXP_PIPE1',
attributes => JSON_OBJECT('credential_name' VALUE
'OBJECT_STORE_CRED',
'location' VALUE 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/',
'query' VALUE 'SELECT * from metrics_table',
'key_column' VALUE 'metric_time',
'format' VALUE '{"type": "json"}',
'priority' VALUE 'MEDIUM',
'interval' VALUE '20')
);
END;
/
Where the credential_name is the credential you created in the previous step.
The following attributes must be set to run an export pipeline:
• location: Specifies the destination object store location. The location you specify is
for one table_name per pipeline.
• table_name: Specifies the table in your database that contains the data you are
exporting (either the table_name parameter or the query parameter is required).
• query: Specifies the query to run in your database that provides the data you are
exporting (either the table_name parameter or the query parameter is required).
• format: Describes the format of the data you are exporting.
See DBMS_CLOUD Package Format Options for EXPORT_DATA for more
information.
The priority value determines the degree of parallelism for fetching data from the
database.
The interval value specifies the time interval in minutes between consecutive runs of a
pipeline job. The default interval is 15 minutes.
See DBMS_CLOUD_PIPELINE Attributes for details on the pipeline attributes.
After you create a pipeline you can test the pipeline or start the pipeline:
• Test Pipelines
• Start a Pipeline
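Create and Configure a Pipeline to Export Query Results (Without a Timestamp)
First create the export pipeline. For example: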
BEGIN
DBMS_CLOUD_PIPELINE.CREATE_PIPELINE(
pipeline_name=>'EXP_PIPE2',
pipeline_type=>'EXPORT',
description=>'Export query results to object store.');
END;
/
BEGIN
DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
pipeline_name => 'EXP_PIPE2',
attributes => JSON_OBJECT(
'credential_name' VALUE 'OBJECT_STORE_CRED',
'location' VALUE 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/',
'query' VALUE 'SELECT * FROM table_name',
'format' VALUE '{"type": "json"}',
'priority' VALUE 'MEDIUM',
'interval' VALUE '20')
);
END;
/
Where the credential_name is the credential you created in the previous step.
The following attributes must be set to run an export pipeline:
Test Pipelines
Use RUN_PIPELINE_ONCE to run a pipeline once on-demand without creating a scheduled job.
RUN_PIPELINE_ONCE is useful for testing a pipeline before you start the pipeline. After you run
a pipeline once to test the pipeline and check that it is working as expected, use
RESET_PIPELINE to reset the pipeline's state (to the state before you ran RUN_PIPELINE_ONCE).
1. Create a pipeline.
See Create and Configure Pipelines for more information.
2. Run a pipeline once to test the pipeline.
BEGIN
DBMS_CLOUD_PIPELINE.RUN_PIPELINE_ONCE(
pipeline_name => 'MY_PIPE1'
);
END;
/
After testing, reset the pipeline state. For example:
BEGIN
DBMS_CLOUD_PIPELINE.RESET_PIPELINE(
pipeline_name => 'MY_PIPE1',
purge_data => TRUE
);
END;
/
Start a Pipeline
After you create a pipeline, you can start the pipeline.
When a pipeline is started the pipeline runs continuously in a scheduled job. The
pipeline's scheduled job repeats, either by default every 15 minutes or at the interval
you set with the interval attribute.
1. Start a pipeline.
BEGIN
DBMS_CLOUD_PIPELINE.START_PIPELINE(
pipeline_name => 'EMPLOYEE_PIPELINE'
);
END;
/
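You can verify that the pipeline started with a query against the USER_CLOUD_PIPELINES view (a sketch; the view is described later in this chapter):

SELECT pipeline_name, status
  FROM user_cloud_pipelines
 WHERE pipeline_name = 'EMPLOYEE_PIPELINE';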
PIPELINE_NAME STATUS
------------------------ -------
EMPLOYEE_PIPELINE STARTED
Stop a Pipeline
Use STOP_PIPELINE to stop a pipeline. When a pipeline is stopped, no future jobs are
scheduled for the pipeline.
By default currently running jobs complete when you stop a pipeline. Set the force
parameter to TRUE to terminate any running jobs and stop the pipeline immediately.
1. Stop a pipeline.
BEGIN
DBMS_CLOUD_PIPELINE.STOP_PIPELINE(
pipeline_name => 'EMPLOYEE_PIPELINE'
);
END;
/
PIPELINE_NAME STATUS
------------------------ -------
EMPLOYEE_PIPELINE STOPPED
Drop a Pipeline
The procedure DROP_PIPELINE drops an existing pipeline.
If a pipeline has been started, it must be stopped before the pipeline can be dropped. See
STOP_PIPELINE Procedure for more information.
To drop a pipeline that is started, set the force parameter to TRUE to terminate any running jobs and drop the pipeline immediately.
1. Drop a pipeline.
BEGIN
DBMS_CLOUD_PIPELINE.DROP_PIPELINE(
pipeline_name => 'EMPLOYEE_PIPELINE'
);
END;
/
No rows selected
Reset a Pipeline
Use the reset pipeline operation to clear the record of the pipeline to the initial state.
Note:
You can optionally use reset pipeline to purge data in the database table
associated with a load pipeline or to remove files in object store for an export
pipeline. Usually this option is used when you are testing a pipeline during
pipeline development.
BEGIN
DBMS_CLOUD_PIPELINE.RESET_PIPELINE(
pipeline_name => 'EMPLOYEE_PIPELINE',
purge_data => TRUE);
END;
/
Only use the purge_data parameter with value TRUE if you want to clear data in
your database table, for a load pipeline, or clear files in object store for an export
pipeline. Usually this option is used when you are testing a pipeline during pipeline
development.
See RESET_PIPELINE Procedure for more information.
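For example, a sketch of a query that lists a pipeline's attributes (assuming the USER_CLOUD_PIPELINE_ATTRIBUTES catalog view):

SELECT pipeline_name, attribute_name, attribute_value
  FROM user_cloud_pipeline_attributes
 WHERE pipeline_name = 'MY_TREE_DATA';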
PIPELINE_NAME  ATTRIBUTE_NAME   ATTRIBUTE_VALUE
-------------  ---------------  ------------------
MY_TREE_DATA   credential_name  DEF_CRED_OBJ_STORE
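Each pipeline records its progress in a status table. For example, a sketch of a query that shows the status table name (assuming the STATUS_TABLE column of USER_CLOUD_PIPELINES):

SELECT pipeline_name, status_table
  FROM user_cloud_pipelines
 WHERE pipeline_name = 'MY_TREE_DATA';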
PIPELINE_NAME STATUS_TABLE
------------- --------------------
MY_TREE_DATA PIPELINE$9$41_STATUS
View the status table to see information about the pipeline, including the following:
• The relevant error number and error message are recorded in the status table if an
operation on a specific file fails.
• For completed pipeline operations, the time needed for each operation can be
calculated using the reported START_TIME and END_TIME.
For example, the status table might show that the load operation failed for two files and completed for one.
Pipelines for loading data, where the pipeline_type is 'LOAD', reserve an ID that is shown in
USER_LOAD_OPERATIONS and in DBA_LOAD_OPERATIONS. The ID value in these views maps to
the pipeline's OPERATION_ID in USER_CLOUD_PIPELINES and DBA_CLOUD_PIPELINES.
To obtain more information for a load pipeline, query the pipeline's OPERATION_ID:
PIPELINE_NAME OPERATION_ID
------------- ------------
MY_TREE_DATA 41
For example:
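A sketch of such a query (using the OPERATION_ID value 41 shown above):

SELECT id, type, logfile_table, badfile_table, status_table
  FROM user_load_operations
 WHERE id = 41;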
This query shows ID, TYPE, LOGFILE_TABLE, BADFILE_TABLE if it exists, and the STATUS_TABLE.
You can view these tables for additional pipeline load information.
View the pipeline log file table to see a complete log of the pipeline's load operations.
For example:
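A minimal sketch, assuming the log file table is named PIPELINE$9$41_LOG (the actual name is reported in the LOGFILE_TABLE column):

SELECT * FROM PIPELINE$9$41_LOG;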
View the pipeline bad file table to see details on input format records with errors. The bad file table shows information for the rows that reported errors during loading. Depending on the errors shown in the log file table and the rows shown in the pipeline's bad file table, you might be able to correct the errors either by modifying the pipeline format attribute options or by modifying the data in the file you are loading.
For example:
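A minimal sketch, assuming the bad file table is named PIPELINE$9$41_BAD (the actual name is reported in the BADFILE_TABLE column):

SELECT * FROM PIPELINE$9$41_BAD;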
BEGIN
DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
pipeline_name => 'ORA$AUDIT_EXPORT',
attribute_name => 'credential_name',
attribute_value => 'DEF_CRED_OBJ_STORE'
);
DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
pipeline_name => 'ORA$AUDIT_EXPORT',
attribute_name => 'location',
attribute_value => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/'
);
END;
/
The log data from the database is exported to the object store location you specify.
See SET_ATTRIBUTE Procedure for more information.
Note:
If you do not see the Data Load card then your database user is missing the
required DWROLE role.
To reach the Data Load page, click Data Load in the Database Actions page, or click
the Selector icon and select Data Load from the Data Studio menu in the
navigation pane.
To load or create links to data or create live table feeds, on the Data Load page, select
a combination of an operation and a data source location. The following table lists the
operations and the source locations that support those operations.
You can also view the following operations from this page:
• Data Load Jobs: View and check your data load jobs.
• Connections: Manage your cloud storage links, Data Catalog links and Share Providers.
The following topics describe these actions.
• Manage Connections
• Loading Data
• Linking Data
• Feeding Data
• Checking Data Load Jobs
You can check an existing data load job and retrieve the data load job later when
required.
• Data Studio Preferences
This topic explains the Data Studio Preferences setting in the Load Data page.
• Managing Connections
Connections that are established from the Data Studio to the cloud, catalogs and shares
are listed on this page.
• Loading Data
You can load data from files on your local device, from remote databases, or from cloud storage buckets, directories, and share providers.
• Linking Data
You can link to data in remote databases or in cloud storage buckets.
• Feeding Data
You can run a live table feed on demand, on a schedule, or as the result of a notification.
The Data Load Job page contains the Search for Data Load Jobs field and a list of data load job cards. You can enter the data load job you are looking for in the field or click one of the data load jobs in the list.
Select one or more filter values to limit the data load jobs shown on the page. Only those
entities that match the filter values are shown. The filter criteria are based on the
schemas and how data is loaded or linked. That is, the items returned by a search are
filtered by these filter settings. Selecting all or none of the options shows all entities.
4. Display Area
The area beneath the Search for Data Load Jobs field displays the data load job cards returned by a search that match the filter criteria set in the Filters panel; each card represents a previously run data load job.
You can view details about a job, rerun it, rename it, or delete it.
View Details about the Data Load Job
To view details about the existing data load job, click Action and select View Details in the
card for the load job. Selecting View Details displays details like Lineage, Impact and Log
details of the data load job.
For details on Lineage, Impact and Log details, see Viewing Entity Details.
Rerun Data Load Job
After viewing details about a data load or data link job whose sources are from cloud storage, you can rerun that job. The files and folders processed in that job are loaded into the cart with all your settings unchanged from the last run. On the left side of the page is a navigator pane, where you choose the cloud store connection and the folders or files containing the data that were previously run.
The previously used cloud storage details are already present in the navigator pane. Select
additional files or folders from the navigator pane and drop them in the Data Load Cart area.
To change the cloud storage connection and the folders and files containing the data, see
Managing Connections
Once you have added the data sources to the data load cart, click the Start icon in the data
load cart menu bar. When the data load job completes, the Load Cloud Object page displays
the results of the job. At the top of the page, a status message shows the number of items for
which the load has completed over the number of items in the job and the total time elapsed
for the job. If any previously loaded or linked files and folders are no longer present in the
cloud location, an error message reports the problem. You can view the list of unavailable files and folders, which are not loaded into the cart for processing.
Click the Cancel icon to cancel re-running the selected job.
Note:
The Rerun Data Load Job option is visible for data jobs with Load and Link data
options for Object storage.
Select the Delete Data Load Job option to remove a previously run job from the Data Load Jobs page. This also removes the log files and the error logs generated when the data load job ran.
1. Open the Data Studio Preferences wizard from the icon in the top-right corner of the Data Load page to access this feature.
2. In the General tab of the Data Studio Preferences screen, select the OCI
Credential from the drop-down. See Create Oracle Cloud Infrastructure Native
Credentials to refer to the steps you need to follow to establish cloud storage
connection from Data Studio to the Oracle Cloud Infrastructure (OCI) Object Storage service. You will use this credential to access various OCI services such as OCI Language, Identity, and Object Storage buckets. The Data Load tool uses the OCI Credential to list compartments, call the OCI Language PII API, list buckets for data load from cloud store, and list buckets when creating a Cloud Storage Link.
3. Select your root compartment from the Compartment drop-down which lists the buckets
in the Object Storage compartment.
4. Select Use OCI to detect Personally Identifiable Information (PII) to warn you about
loading private and sensitive data such as your name, date of birth, social security
number, and email address.
5. Select AI Profile (Experimental) from the drop-down to use this AI profile for facilitating
and configuring the translation of natural language prompts to SQL statements. If you
have Select AI set up using either OpenAI or Cohere, you can enhance your data by
loading small reference data tables that are generated by large language models. Try out
the suggested prompts, or come up with your own to generate data to load into the
autonomous database.
6. Click Save to save the current preferences once all the settings are complete.
Managing Connections
Connections that are established from the Data Studio to the cloud, catalogs and shares are
listed on this page.
The Connections page contains a search field for entities such as Cloud Storage Links, Data Catalog Links, and Share Providers, and a list of entity cards. You can enter the entity you are looking for in the field or click one of the entity cards in the list. You can register the cloud store you want to use from this page. You can also register data catalogs and subscribe to a share provider.
• Oracle Cloud Infrastructure (OCI), see Create an OCI Cloud Store Location.
• Amazon S3, or you are calling an AWS API, see Create an Amazon S3 Cloud Store
Location.
• Microsoft Azure Blob Storage, or you are calling an Azure API, see Create a Microsoft Azure Cloud Store Location.
• Google Cloud Storage, see Create a Google Cloud Store Location.
• Other (Swift compatible) cloud storage, see Create an Other (Swift Compatible) Cloud
Store Location.
• OCI cloud storage by using native OCI credentials, see Create an OCI Cloud storage
location using Oracle Cloud Infrastructure Signing Keys.
Note:
This method is suggested if you need to use the OCI REST APIs, which require native OCI credentials. To create an OCI cloud storage location using Oracle Cloud Infrastructure Signing Keys, you must first Create Oracle Cloud Infrastructure Native Credentials.
My_Cloud_Store
3. (Optional) In the Description field, enter a description for the link. For example:
4. Click Select Credential and Create Credential to create a credential. This opens a
Create Credential wizard.
In the Credentials section, select Cloud Username and Password.
5. From the Cloud Store drop-down list, select Oracle.
6. Enter a name in the Credential Name field. The name must conform to Oracle object
naming conventions, which do not allow spaces or hyphens. For example:
my_credential
7. For an OCI cloud store, in the Oracle Cloud Infrastructure User Name field, enter your
OCI user name. For example:
myUsername
8. For an OCI cloud store, in the Auth Token field, enter your auth token. For example:
LPB>Ktk(1M1SD+a]+r
9. In the Bucket URI field, enter the URI and bucket for your OCI instance bucket.
a. To get the URI and bucket, go to the bucket in the Object Storage
compartment in your Oracle Cloud Instance.
b. In the Objects group, click the Actions (three vertical dots) icon for a file in the
bucket, then click View Object Details.
c. Copy all of the URL Path (URI) except for the file name. Be sure to include the
trailing slash. For example, for the file https://objectstorage.us-
phoenix-1.oraclecloud.com/n/myoci/b/my_bucket/o/MyFile.csv, select the
following:
https://objectstorage.us-phoenix-1.oraclecloud.com/n/myoci/b/
my_bucket/o/
Note:
The display area is blank when you create a new cloud storage location.
My_Cloud_Store
3. (Optional) In the Description field, enter a description for the link. For example:
4. Click Select Credential and Create Credential to create a credential. This opens
a Create Credential wizard.
In the Credentials section, select Cloud Username and Password.
5. From the Cloud Store drop-down list, select Amazon S3.
6. Enter a name in the Credential Name field. The name must conform to Oracle
object naming conventions, which do not allow spaces or hyphens. For example:
my_credential
8. Enter a name in the Credential Name field. The name must conform to Oracle object
naming conventions, which do not allow spaces or hyphens. For example:
my_credential
9. In the AWS Access Key ID field, enter your AWS access key ID. For
example:myAccessKeyID
10. In the AWS Secret Access Key field, enter your AWS secret access key. For information
on AWS access keys, see Managing access keys for IAM users.
11. In the Bucket URI field, enter the URI and bucket for your Amazon S3 bucket. For
example:
https://s3-us-west-2.amazonaws.com/adwc/my_bucket
Note:
The display area is blank when you create a new cloud storage location.
My_Cloud_Store
3. (Optional) In the Description field, enter a description for the link. For example:
4. Click Select Credential and Create Credential to create a credential. This opens a
Create Credential wizard.
In the Credentials section, select Cloud Username and Password.
5. From the Cloud Store drop-down list, select Microsoft Azure.
6. Enter a name in the Credential Name field. The name must conform to Oracle object
naming conventions, which do not allow spaces or hyphens. For example:
my_credential
7. In the Azure Storage Account Name field, enter the name of your Azure storage
account. For example:
AZURE_KEY123...
8. In the Azure Storage Account Access Key field, enter your Azure access key.
For information on Azure storage accounts, see Create a storage account.
9. In the Bucket URI field, enter the URI and bucket for your Microsoft Azure bucket.
For example:
https://objectstore.microsoft.com/my_bucket
Note:
The display area is blank when you create a new cloud storage location.
My_Cloud_Store
3. (Optional) In the Description field, enter a description for the link. For example:
4. Click Select Credential and Create Credential to create a credential. This opens
a Create Credential wizard.
In the Credentials section, select Cloud Username and Password.
5. Enter a name in the Credential Name field. The name must conform to Oracle
object naming conventions, which do not allow spaces or hyphens. For example:
my_credential
7. In the HMAC Access Key field, enter your HMAC access key. For example:
GOOGTS1C3LPB3KTKSDMB2BFD
8. In the HMAC Access Secret field, enter your HMAC secret. For information on
HMAC keys, see HMAC Keys.
9. In the Storage Settings tab of the Add Cloud Store dialog box, enter a name for
the cloud storage link. For example:
My_Cloud_Store
10. (Optional) In the Description field, enter a description for the link. For example:
11. In the Bucket URI field, enter the bucket and URI for your Google bucket. For example:
https://my_bucket.storage.googleapis.com
Note:
The display area is blank when you create a new cloud storage location.
My_Cloud_Store
3. (Optional) In the Description field, enter a description for the link. For example:
4. Click Select Credential and Create Credential to create a credential. This opens a
Create Credential wizard.
In the Credentials section, select Cloud Username and Password.
Note:
If you have the user OCID, tenancy OCID, private key and fingerprint, select
Oracle Cloud Infrastructure Signing Keys and refer to the Create an OCI
Cloud storage location using Oracle Cloud Infrastructure Signing Keys section
of this topic.
5. From the Cloud Store drop-down list, select Other (Swift Compatible).
6. Enter a name in the Credential Name field. The name must conform to Oracle object
naming conventions, which do not allow spaces or hyphens. For example:
my_credential
7. In the Access User Name field, enter your access user name. For example:
OTHER_KEY123...
9. In the Bucket URI field, enter the URI and bucket for your cloud store. For example:
https://someswiftcompatibleprovider.com/my_bucket
10. Click Next. The dialog box progresses to the Cloud Data tab. This tab lists the objects available on this cloud storage location in the display area. The display area is blank when you create a new cloud storage location.
11. Click Create to create the cloud storage location.
Create an OCI Cloud storage location using Oracle Cloud Infrastructure Signing
Keys
To create an OCI Cloud storage location using Oracle Cloud Infrastructure Signing
Keys:
1. On the Connections page, click Create and select Create New Cloud Store
Location. This opens the Add Cloud Store Location wizard.
2. In the Storage Settings tab of the Add Cloud Store Location box, enter a name for
the cloud storage link. For example:
My_Cloud_Store
3. (Optional) In the Description field, enter a description for the link. For example:
4. Click Select Credential and Create Credential to create a credential. This opens
a Create Credential wizard.
If you have the user OCID, tenancy OCID, private key and fingerprint, select
Oracle Cloud Infrastructure Signing Keys.
Note:
If you only have a username and password select Cloud Username and
Password in this step and refer to the Create an Other (Swift
Compatible) Cloud Store Location section of this topic.
8. Oracle Cloud Infrastructure Tenancy: The OCID of the tenant. See Where to Get the
Tenancy's OCID and User's OCID for details on obtaining the Tenancy's OCID.
9. Oracle Cloud Infrastructure User Name: The OCID of the user. See Where to Get the
Tenancy's OCID and User's OCID for details on obtaining the User's OCID.
10. Select Create Credential.
11. In the Bucket URI field, enter the URI and bucket for your cloud store bucket. For example:
https://objectstorage.<region>.oraclecloud.com/<namespace>/b/<bucket>/.
12. Click Next.
13. The dialog box progresses to the Cloud Data tab. This tab lists the objects available on this cloud storage location in the display area.
Note:
The display area is blank when you create a new cloud storage location.
You will receive a notification that the cloud storage location is created successfully.
• Create Oracle Cloud Infrastructure Native Credentials
To establish cloud storage connection from Data Studio to Oracle Cloud Infrastructure
(OCI) Object storage service, you must configure the cloud storage location with your
OCI authentication details. You can create Oracle Cloud Infrastructure (OCI) Native
Credentials by using the CREATE_CREDENTIAL procedure of DBMS_CLOUD package.
• Register Data Catalog
You can register data catalogs you want to use with registering the data catalog.
Parameters:
• credential_name: The credential_name parameter must conform to Oracle object naming conventions, which do not allow spaces or hyphens.
• user_ocid: Specifies the user's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the User's OCID.
• tenancy_ocid: Specifies the tenancy's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the Tenancy's OCID.
• private_key: Specifies the generated private key. Private keys generated with a passphrase are not supported. You need to generate the private key without a passphrase. See How to Generate an API Signing Key for details on generating a key pair in PEM format.
• fingerprint: Specifies a fingerprint. After a generated public key is uploaded to the user's account, the fingerprint is displayed in the console. Use the displayed fingerprint for this argument. See How to Get the Key's Fingerprint and How to Generate an API Signing Key for more details.
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name IN VARCHAR2,
user_ocid IN VARCHAR2,
tenancy_ocid IN VARCHAR2,
private_key IN VARCHAR2,
fingerprint IN VARCHAR2);
Once you obtain all the necessary inputs and generate your private key, here is a sample of your CREATE_CREDENTIAL procedure (the credential name OCI_NATIVE_CRED is illustrative):
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'OCI_NATIVE_CRED',
    user_ocid => 'ocid1.user.oc1..aaaaaaaatfn77fe3fxux3o5lego7glqjejrzjsqsrs64f4jsjrhbsk5qzndq',
    tenancy_ocid => 'ocid1.tenancy.oc1..aaaaaaaapwkfqz3upqklvmelbm3j77nn3y7uqmlsod75rea5zmtmbl574ve6a',
    private_key => 'MIIEogIBAAKCAQEAsbNPOYEkxM5h0DF+qXmie6ddo95BhlSMSIxRRSO1JEMPeSta0C7WEg7g8SOSzhIroCkgOqDzkcyXnk4BlOdn5Wm/BYpdAtTXk0sln2DH/GCH7l9P8xC9cvFtacXkQPMAXIBDv/zwG1kZQ7Hvl7Vet2UwwuhCsesFgZZrAHkv4cqqE3uF5p/qHfzZHoevdq4EAV6dZK4Iv9upACgQH5zf9IvGt2PgQnuEFrOm0ctzW0v9JVRjKnaAYgAbqa23j8tKapgPuREkfSZv2UMgF7Z7ojYMJEuzGseNULsXn6N8qcvr4fhuKtOD4t6vbIonMPIm7Z/a6tPaISUFv5ASYzYEUwIDAQABAoIBACaHnIv5ZoGNxkOgF7ijeQmatoELdeWse2ZXll+JaINeTwKU1fIB1cTAmSFv9yrbYb4ubKCJuYZJeC6I92rT6gEiNpr670Pn5n43cwblszcTryWOYQVxAcLkejbPA7jZd6CW5xm/vEgRv5qgADVCzDCzrij0t1Fghicc+EJ4BFvOetnzEuSidnFoO7K3tHGbPgA+DPN5qrO/8NmrBebqezGkOuOVkOA64mp467DQUhpAvsy23RjBQ9iTuRktDB4g9cOdOVFouTZTnevN6JmDxufu9Lov2yvVMkUC2YKd+RrTAE8cvRrn1A7XKkH+323hNC59726jT57JvZ+ricRixSECgYEA508e/alxHUIAU9J/uq98nJY/6+GpI9OCZDkEdBexNpKeDq2dfAo9pEjFKYjH8ERj9quA7vhHEwFL33wk2D24XdZl6vq0tZADNSzOtTrtSqHykvzcnc7nXv2fBWAPIN59s9/oEKIOdkMis9fps1mFPFiN8ro4ydUWuR7B2nM2FWkCgYEAxKs/zOIbrzVLhEVgSH2NJVjQs24S8W+99uLQK2Y06R59L0Sa90QHNCDjB1MaKLanAahP30l0am0SB450kEiUD6BtuNHH8EIxGL4vX/SYeE/AF6tw3DqcOYbLPpN4CxIITF0PLCRoHKxARMZLCJBTMGpxdmTNGyQAPWXNSrYEKFsCgYBp0sHr7TxJ1WtO7gvvvd91yCugYBJAyMBr18YY0soJnJRhRL67A/hlk8FYGjLW0oMlVBtduQrTQBGVQjedEsepbrAcC+zm7+b3yfMb6MStE2BmLPdF32XtCH1bOTJSqFe8FmEWUv3ozxguTUam/fq9vAndFaNre2i08sRfi7wfmQKBgBrzcNHN5odTIV8l9rTYZ8BHdIoyOmxVqM2tdWONJREROYyBtU7PRsFxBEubqskLhsVmYFO0CD0RZ1gbwIOJPqkJjh+2t9SH7Zx7a5iV7QZJS5WeFLMUEv+YbYAjnXK+dOnPQtkhOblQwCEY3Hsblj7Xz7o=',
    fingerprint => '4f:0c:d6:b7:f2:43:3c:08:df:62:e3:b2:27:2e:3c:7a');
END;
/
PL/SQL procedure successfully completed.
You can now retrieve the new credentials with the following query:
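A sketch of such a query (assuming the ALL_CREDENTIALS view):

SELECT credential_name, username, comments
  FROM all_credentials;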
See DBMS_DCAT Package for the procedures to add custom parameters, such as region and data catalog ID, to the Data Catalog. The Data Catalog ID is a unique Oracle Cloud Identifier for the data catalog instance, and region is the data catalog region.
Loading Data
You can load data from files on your local device, from remote databases, or from cloud storage buckets, directories, and share providers.
The following topics describe the interfaces for these actions.
• Loading Data From Local Files
• Loading Data from Other Databases
• Loading Data from Cloud Storage
• Loading from Directories
• Loading from Share Providers
• Loading Data From Local Files
To load data from local files into your Oracle Autonomous Database, on the Data
Load page, select LOAD DATA and LOCAL FILE. Drag one or more files from
your local file system navigator and drop them in the Data Load Cart. You can also
click Select Files or the Select Files icon, select one or more files from the file
system navigator, and then click Open.
• Loading Data from Other Databases
To load data from tables in another database into your Oracle Autonomous
Database, on the Data Load page, select LOAD DATA and DATABASE. Select a
database link from the drop-down list. Drag one or more tables from the list of
database tables and drop them in the Data Load Cart.
• Loading Data from Cloud Storage
You can load data from a cloud store to a table in your Autonomous Database.
Create a Directory
To add a directory, you must have the CREATE ANY DIRECTORY system privilege. The
ADMIN user is granted the CREATE ANY DIRECTORY system privilege. The ADMIN user
can grant CREATE ANY DIRECTORY system privileges to other users.
Note:
• CREATE DIRECTORY creates the database directory object in the database and
also creates the file system directory. For example the directory path could be:
/u03/dbfs/7C149E35BB1000A45FD/data/stage
• You can create a directory in the root file system to see all the files with the
following commands:
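A minimal sketch of the command (assuming an empty path designates the root of the file system):

CREATE OR REPLACE DIRECTORY ROOT_DIR AS '';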
After you create the ROOT_DIR directory, use the following command to list all
files:
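A sketch using DBMS_CLOUD.LIST_FILES:

SELECT * FROM DBMS_CLOUD.LIST_FILES('ROOT_DIR');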
Let us demonstrate how to create a directory and access it from Data Studio:
• Create a directory in Database Actions:
Log in to your Database Actions instance and select the SQL card under Development to open a SQL worksheet. Now create a directory and attach a file system name of your choice to the directory you create. In the following example, FSS_DIR is the name of the directory.
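A minimal sketch of the command (the path fss_dir is illustrative):

CREATE DIRECTORY FSS_DIR AS 'fss_dir';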
Run the command. It gives the following output:
PL/SQL procedure successfully completed.
• Attach the file system
Attach your file system with the name of your choice to the FSS_DIR directory via
the DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM procedure.
BEGIN
DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM(
file_system_name => '********',
file_system_location =>
'*******.sub1********1.********.oraclevcn.com:/********',
directory_name => 'FSS_DIR',
description => 'attach OCI file system'
);
END;
/
To load tables from a share, click Data Load on the Launchpad and select Load Data.
Click Share on the Load Data page. Click + Subscribe to Share Provider to
subscribe to a Share Provider.
2. Select Add New Provider JSON and click on the Delta Share Profile JSON to drag
and drop the JSON profile.
3. Click Next to progress to the Add Shares tab.
4. Select the level of network access you want to allow from your database to the
host with the Share REST endpoint, and click Run. In this example, Allow access
to Host Only is selected.
5. To register the shares available to you, move shares from Available Shares to Selected
Shares and click Subscribe.
The screenshot below shows the REVIEWS share moved from Available Shares to
Selected Shares before clicking Subscribe.
6. Create external tables derived from tables selected from the data share.
a. Drag and drop tables from the selected share. You can optionally click settings to
view the table details.
In this example, the only table selected is HOTEL_REVIEWS.
b. You can optionally change the name of your table and click Close.
In this example, the name is changed from HOTEL_REVIEWS to
HOTEL_REVIEWS_SHARE.
c. Create the external table by clicking Start, on the Select Share page, and then
clicking Run on the Run Data Load Job dialog.
d. When the external tables are created the Complete message is displayed.
Note:
The Data load tool does not support the creation of a live feed from a loaded
cart item that consists of a single file in CSV, XLS, XLSX, TSV, TXT, XML,
JSON, and AVRO format or of a folder that contains a file in XLSX format.
6. To add folders, drag a folder from the file navigator on the left and drop it into the cart on the right. When you add the folder to the cart, a prompt asks if you want to load all the objects from the multiple source files into a single target table. Click Yes to continue or No to cancel. You can add multiple folders to the cart; the data represented by each card is loaded into a separate table, but all the items in the cart are processed as part of the same data load job.
7. Select Settings (pencil icon) from the data load cart to enter the details about the data
load job.
8. In the Settings tab of the Load Data from the Cloud Store Location, you can select
Create Table or Drop Table and Create New Table from the Option drop-down.
Note:
The Live feed tool works with the Data load job only if you create a table
and insert the data into a new table or drop the existing table and insert
the data into a new table.
9. Enter the other details for the data load job. For more information on entering the
details, refer to the Enter Details for the Data Load Job topic.
10. Once you have added data sources to the data load cart and entered details about
the data load job, select Start to run the job.
11. After the data load job is run, the data load cart displays a green check mark.
You can view the Properties and Mapping details of the data load job on the Table
Settings tab.
Note:
You cannot select or edit any of the details of this tab.
16. Click Next to view the Live Feed Settings tab of the wizard. On the Live Feed Settings
tab, specify the following values in the field:
• Enable for Notification: Select this option so that new or changed data in the data source is loaded based on an Oracle Cloud Infrastructure notification. When you select this option, you can avoid delays that might occur when polling is initiated on a schedule (that is, if you selected the live table feed Scheduled option).
When you select the Enable for Notification option, you must also configure your object store bucket to emit notifications.
• Enable For Scheduling: Select this option to set up a schedule for the data feed. In the time interval fields, enter a number, and select a time type and the days on which to poll the bucket for new or changed files. For example, to poll every two hours on Monday, Wednesday, and Friday, enter 2, select Hours, and then select Monday, Wednesday, and Friday in the appropriate fields. You can select All Days, Monday to Friday, Sunday to Thursday, or Custom from the Week Days drop-down. The Custom field enables you to select Monday, Tuesday, Wednesday, Thursday, and Friday.
Select a start and end date. If you don't select a start date, the current time and
date will be used as the start date. The end date is optional. However, without an
end date, the feed will continue to poll.
The rest of the fields displayed in the wizard, such as the Live Table Feed Name, Target Table Name, and Consumer Group, are greyed out and disabled for selecting or editing.
Click Save to save and create a Live Table Feed from a data load cart.
for the data load. The Data Load tool supports loading tables from only the first worksheet of
a multi-worksheet XLSX file when the file is in an object store.
You can add more files to the cart by clicking the Select Files icon. You can add any number
of files to the cart and load data from all of them in a single data load job.
To remove a source file from the Data Load Cart, click the Remove (trash can) icon for the
source item. To remove all source files from the cart, click the Remove All (trash can) icon in
the Data Load Cart menu bar.
To return to the Data Load page, click Data Load above the Data Load Cart menu bar.
To change the delimiter character that separates columns in the source, expand the Field
delimiter drop-down list and select a character. For example, if the file has columns delimited
by semicolons, change the delimiter from the default comma delimiter to a semicolon.
To convert any invalid value in a numeric source column to a null value in the target column,
select the Numeric column Convert invalid data to null option.
Settings Template
The save settings feature saves the configuration set in the Cart settings as a JSON file. When opening the Settings template, you have the following options:
1. Load Settings Template: Loads a settings template from your local system.
2. Save Settings Template: Saves the current settings template.
Use Load Settings Template if you want to use an existing customized template saved on your local system.
1. From the Settings Template in the Settings tab of the Load Data page, select Load Settings Template.
2. In the Load Settings Template wizard, click the Settings Template JSON to load a JSON file from your system.
3. Clicking the Settings Template JSON opens your local file browser. Click OK to load the JSON file.
4. After you load the JSON file, the settings tab is automatically updated to match the settings template you loaded.
Use Save Settings Template to save the current settings as a template.
1. From the Settings Template in the Settings tab of the Load Data page, select Save Settings Template.
2. The Template file editor appears. Click the Template File name and name the new template.
3. Click OK to finish saving the name of the template.
4. You can test the configuration of the new template.
• Type VARCHAR in the search field and click the magnifier icon beside the search field.
• The search returns the rows having VARCHAR as the Data Type. Select the Bulk Edit icon beside the magnifier icon.
• Selecting the Bulk Edit icon opens the Bulk Edit Mapping dialog box.
• Click Data Type and select NUMBER from the Data Type drop-down. Click OK.
• The Data Type of the selected two rows changes from VARCHAR to NUMBER.
• Clear the content from the search field and click the magnifier icon to view the bulk
edit updates in the Mapping table.
Specify Mappings
If you select the Create Table or Drop Table and Create New Table option and you
are getting the source column names from the file header, then in the Mapping section
either accept the default values for the target columns and data types or specify
different values. To specify different values, in the target column, enter a name for the
column. In the Data Type column, select a data type from the drop-down list. If you are
not getting the source column names from the file header, then for each source
column specify a name for the target column and select a data type from the Data
Type drop-down list. For the Date data type, select a date format from the Format
drop-down list.
Note:
You will receive a tooltip error message with the exact reason for the error
message when you complete editing a cell. The mapping grid cell will be
highlighted with red to indicate an invalid value that must be fixed. The
highlight is removed after you fix the invalid value. For example, you can
view the following tooltip error message when the target column name is not
filled in.
For the Merge into Table option, for each source column, select a target column from
the drop-down list. You must specify at least one column as a key column. To specify a
column as a key column, select the Merge Key check box for the column. Merge keys
are one or more columns that uniquely identify each row in the table. Merge keys must
not contain any null values. For loading tables with primary keys, this option
automatically enables the selection of primary key columns as the merge keys.
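Conceptually, a merge load behaves like a SQL MERGE statement keyed on the merge
columns. The following is a minimal illustrative sketch, not the exact statement the tool
generates; all table and column names are hypothetical, and SALE_ID stands in as the
merge key:

MERGE INTO sales t
USING sales_staging s
  ON (t.sale_id = s.sale_id)            -- SALE_ID acts as the merge key
WHEN MATCHED THEN
  UPDATE SET t.amount = s.amount, t.sale_date = s.sale_date
WHEN NOT MATCHED THEN
  INSERT (sale_id, amount, sale_date)
  VALUES (s.sale_id, s.amount, s.sale_date);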
For the Insert into Table or Replace Data options, for each source column, select a
target column from the drop-down list of existing columns.
Any modifications you make in the source preview do not affect the loading of data from the
file.
To add a remote database to the list of database links, create a database link to the
remote database. For information on creating a database link, see Database Links in
Oracle® Database Database Administrator’s Guide 21c.
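For reference, a basic database link takes the following form. This is a sketch only: the
link name, user, password, and TNS alias are placeholders, and Autonomous Database
environments typically create links with the DBMS_CLOUD_ADMIN package rather than
plain DDL.

CREATE DATABASE LINK remote_sales_link
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'remote_tns_alias';

-- Tables in the remote database can then be queried through the link:
SELECT COUNT(*) FROM employees@remote_sales_link;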
The databases available to you appear in the drop-down list of the database
navigation pane of the Load Tables page.
You can filter the tables displayed in the navigation pane by entering a case-sensitive
value in the search field at the top of the navigation tree and pressing Enter. To display
all of the tables again, clear the search field and press Enter.
You can add any number of tables from the navigation pane to the Data Load Cart and
load data from all of them in a single data loading job. You can set filters on the data
for a table to load only the specified data.
Settings Template
The save settings feature saves the configuration set in the Cart settings as a JSON
file. When opening the Settings template, you have the following options:
1. Load Settings Template: Loads a settings template from your local system.
2. Save Settings Template: Saves the current settings template.
You can use Load Settings Template if you want to use an existing customized template
present on your local system.
1. From the Settings Template in the Settings tab of the Load Data page, select Load
Settings Template.
2. You will see a Load Settings Template wizard. Click Settings Template JSON to load a
JSON file from your system.
3. Clicking Settings Template JSON opens your local file browser. Click OK to load the
JSON file.
4. After you load the JSON file, you can view the updates applied automatically to the
Settings tab, matching the JSON settings template you loaded from your local system.
You can use Save Settings Template to save the current Settings template.
1. From the Settings Template in the Settings tab of the Load Data page, select Save
Settings Template.
2. The Template file editor appears. Click the Template File name and name the new
template.
3. Click OK to finish saving the new name of the existing template.
4. You can test the configuration of the new template.
The Bulk Edit setting enables you to update the values of the following columns for all
the searches returned by the search field:
• Data Type
• Target Column name
• Include Columns for loading
• Exclude columns for loading
Consider changing the Data Type of the first and third rows from VARCHAR to NUMBER in
the mapping table.
• Type VARCHAR in the search field and click the magnifier icon beside the search
field.
• The search will return the rows having VARCHAR as the Data Type. Select the Bulk
Edit icon beside the magnifier icon.
• Selecting the Bulk Edit icon opens the Bulk Edit Mapping dialog box.
• Click Data Type and select NUMBER from the Data Type drop-down. Click OK.
• The Data Type of the selected two rows changes from VARCHAR to NUMBER.
• Clear the content from the search field and click the magnifier icon to view the bulk
edit updates in the Mapping table.
Specify Mappings
If you select the Create Table or the Drop Table and Create New Table option, then in the
Mapping section either accept the default values for the target columns or specify different
values. For the target column, enter a name for the column.
Note:
You will receive a tooltip error message with the exact reason for the error message
when you complete editing a cell. The mapping grid cell will be highlighted with red
to indicate an invalid value that must be fixed. The highlight is removed after you fix
the invalid value. For example, you can view the following tooltip error message
when the target column name is not filled in.
For the Insert into Table or Replace Data options, select a target column from the drop-
down list of existing columns.
For the Merge into Table option, for each source column, select a target column from the
drop-down list. You must specify at least one column as a key column. To specify a column
as a key column, select the Merge Key check box for the column. Merge keys are one or
more columns that uniquely identify each row in the table. Merge keys must not contain any
null values. For loading tables with primary keys, this option automatically enables the
selection of primary key columns as the merge keys.
Preview
To view the data in the source table, in the settings pane select the Source Table tab. The
source preview displays the data in the table.
Table
For all options except Create Table, to view the existing data in the target table, in the
settings pane select the Target Table tab. The target preview displays the data in the table
before you run the data load job.
SQL
The SQL tab displays the SQL commands that will be run to complete this data load job.
Note:
You can see the SQL code even before the table is created.
Job Report
Displays a report of the total rows loaded and failed for a specific table. You can view the
name of the table, the time the table was loaded, and the time taken to process the load.
Data Definition
Data Definition displays the Oracle Autonomous Database DDL that created the entity.
On the left side of the page is a navigator pane, where you choose a cloud store connection
and the folders or files containing the data. On the right of the page is the data load cart,
where you stage the files and folders for the data load job. You can set options for the data
load job before running it. The Autonomous Database comes with predefined CPU/IO shares
assigned to different consumer groups. You can set the consumer group to either low,
medium or high while executing a data load job depending on your workload.
To load files from a cloud store into your database, do the following:
• Manage Cloud Storage Links for Data Load Jobs
• Prepare the Data Load Job
• Add Files or Folders for the Data Load Job
• Enter Details for the Data Load Job
• Run the Data Load Job
• View Details About the Data Load Job After It Is Run
• View the Table Resulting from the Data Load Job
• Manage Cloud Storage Links for Data Load Jobs
Before you can load data from a cloud store, you must establish a connection to the
cloud store you want to use. You can select a cloud store location from the cloud store
locations field.
• Prepare the Data Load Job
• Add Files or Folders for the Data Load Job
• Enter Details for the Data Load Job
Enter the details about the data load job in the Load Data from Cloud Storage pane.
• Run the Data Load Job
Once you've added data sources to the data load cart and entered details about the data
load job, you can run the job.
• View Details About the Data Load Job After It Is Run
• View the Table Resulting from the Data Load Job
Manage Cloud Storage Links for Data Load Jobs
Before you can load data from a cloud store, you must establish a connection to the cloud
store you want to use. You can select a cloud store location from the cloud store locations field.
On the Load Cloud Object page:
1. Click the Manage Cloud Store locations menu beside the cloud store locations text
field.
2. Select Add Cloud Store Location. This opens the Add Cloud Store Location dialog
box. See Managing Connections to add a cloud store location.
Alternatively,
• On the Data Load page, select CLOUD LOCATIONS, and then click Next to go to the
Manage Cloud Store page.
• On the Link Cloud Object page, click the Manage Cloud Store locations menu button
beside the text field where you enter the cloud store location.
See Managing Connections.
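Behind the scenes, a cloud store location pairs a URI with a stored credential. As a rough
sketch of the equivalent SQL, where the credential name and values are placeholders (for
OCI Object Storage, the username is typically your OCI user and the password an auth
token):

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',        -- placeholder name
    username        => 'oci_user@example.com',  -- OCI user name
    password        => 'auth_token_value'       -- OCI auth token
  );
END;
/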
To return to the Load Cloud Object page, click Data Load in the breadcrumbs at the
top of the page and then navigate back to the page.
Prepare the Data Load Job
As you'll see below in Enter Details for the Data Load Job, the first decision you'll
make when configuring a data load job is how to load the source data into a new or
existing table in the database. The choices are:
• Create a table and insert data loaded from the source into the new table.
• Insert data loaded from the source into an existing table.
• Delete all data in an existing table and then insert new data from the source into
the table.
• Drop a table, create a new table, and then insert data loaded from the source into
the new table.
• Merge data from the source into a table by updating existing rows in the table and
inserting new rows into the table.
You may have to adjust your source data or your target table so that the source data
loads correctly into the target table. The number, order, and data types of
columns in the source must match those in the target. Consider:
• If you're creating a new table or if the columns in your source exactly match the
columns in an existing target, you don't have to do any special preparation.
• If the columns in your source don't match the columns in an existing target, you
must edit your source files or target table so they do match (see the sketch after this
list).
• If you're loading multiple files, you must make sure that:
– All the source files are of the same type, for example, CSV, JSON, etc.
– The number, order, and data types of the columns in all the source files match
(and that they match the target, if you're loading into an existing table).
• If you want to partition by date:
– The source file must contain data where the data type is date or timestamp.
– You must load a folder containing two or more data sources.
– The names of the files in the folder must indicate a date or dates, for example,
MAR-1999.csv or 2017-04-21.xlsx.
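For example, if an existing target table is missing a column present in the source, or has
a column too narrow for the source data, you might adjust it with standard DDL before
running the job. A sketch; the table and column names are hypothetical:

ALTER TABLE sales ADD (discount NUMBER(5,2));    -- add a column present in the source
ALTER TABLE sales MODIFY (amount NUMBER(12,2));  -- widen a numeric column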
Add Files or Folders for the Data Load Job
Add files from the cloud store to the data load cart, where you can edit the details of
the data load job. To add the files:
1. From the list at the top of the navigator pane on the left, select the bucket with
your source data.
The list shows links that were established on the Manage Cloud Storage page. If
you haven't yet registered the cloud store you want to use, click the Manage
Cloud Store button at the top of the page and register a connection. See Manage
Cloud Storage Links for Data Load Jobs, above.
2. Drag one or more items from the file navigator on the left and drop them into the
cart on the right.
• You can add files, folders, or both. A card is added to the cart for each file or folder
you drag into it. The card lists the name of the source file or folder and a proposed
name for the target table.
• If you add a folder that contains multiple files, all the files must be of the same type,
that is, CSV, TXT, etc.
When you add the folder to the cart, a prompt is displayed that asks if you want to
load all the objects from the multiple source files into a single target table. Click OK to
continue or Escape to cancel.
• When you add multiple individual files or multiple folders to the cart, the data
represented by each card will be loaded into a separate table, but all the items in the
cart will be processed as part of the same data load job.
• You can add files or folders from a different bucket, but if you do that, you're
prompted to remove all files that are already in the cart before proceeding. To select
files from a different bucket, select the bucket from the drop-down list in the navigator
pane on the left and then add the file(s), as described above.
• You can drop files or folders into the data load cart and then navigate away from the
Data Load Object page. When you return to the page, those items remain on the
page, but you may receive a message, "Remove All Data Load Items. Changing to
another Cloud storage location requires all items to be removed from the data load
job. Do you wish to continue?" Click Yes to remove the items from the cart. Click No
to keep the items in the cart. Then you can continue to work.
You can remove items from the cart before running the data load job:
• To remove an item from the cart, click the Actions icon and select Remove on the card
for the item.
• To remove all items from the cart, click Remove All in the data load cart menu bar at the
top of the pane.
Enter Details for the Data Load Job
Enter the details about the data load job in the Load Data from Cloud Storage pane.
On the card in the data load cart, click the Actions icon and select Settings to open the
Load Data from Cloud Storage pane for that job. The pane contains:
• Settings Tab - Table Section
• Settings Tab - Properties Section
• Settings Tab - Mapping Section
• Preview Tab
• Table Tab
• Error Tab
• Close Button - Save and Close the Pane
– Create Table: Creates a table and inserts the data into the new table. When
you select this option, the Name field on the Settings tab is filled with a
default name, based on the name of the source file or folder. You can change
it if you want.
– Insert into Table: Inserts data loaded from the source into an existing table.
When you select this option, the Name field on the Settings tab presents a list
of the tables in the current schema. Select the table into which you want to
insert the data.
– Replace Data: Deletes all data in the existing table and then inserts new data
from the source into the table. When you select this option, the Name field on
the Settings tab presents a list of the tables in the current schema. Select the
table you want to use.
– Drop Table and Create New Table: Drops the table (if it already exists),
creates a new table, and then inserts the new data into the table. When you
select this option, the Name field on the Settings tab presents a list of the
tables in the current schema. Select the table you want to use.
– Merge into Table: Updates existing rows and inserts new rows in the table.
When you select this option, the Name field on the Settings tab presents a list
of the tables in the current schema. Select the table you want to use.
• Name: The name of the target table.
• Partition Column:
List Partitions and Date-based partitions are the different types of partitions
available in data loading.
List partitioning is required when you specifically want to map rows to partitions
based on discrete values.
To partition according to a specific column, click the Partition Column drop-down
list and select the column you want to use for the partitioning.
You will have N files per partition value, all partitioned by the partition column you
select.
Note:
– For linked files (from external tables) there is also a requirement that
for each file, the list partitioning column can contain only a single
distinct value across all of the rows.
– If a file is list partitioned, the partitioning key can only consist of a
single column of the table.
Date-based partitioning is available when you load a folder containing two or more
data sources that contain date or timestamp data.
To partition according to date, click the Partition Column drop-down list and
select the DATE or TIMESTAMP column you want to use for the partitioning.
• Text enclosure: Select the character for enclosing text: " (double-quote character),
' (single-quote character) or None. This option is visible only when the selected file is in
plain text format (CSV, TSV, or TXT).
• Field delimiter: Select the delimiter character used to separate columns in the source.
For example, if the source file uses semicolons to delimit the columns, select Semicolon
from this list. The default is Comma. This option is visible only when the selected file is in
plain text format (CSV, TSV, or TXT).
• Start processing data at row: Specifies the number of rows to skip when loading the
source data into the target:
– If you select the Column header row option under Source column name (see
below) and if you enter a number greater than 0 in the Start processing data at row
field, then that number of rows after the first row are not loaded into the target.
– If you deselect the Column header row option under Source column name, and if
you enter a number greater than 0 in the Start processing data at row field, then
that number of rows including the first row are not loaded into the target.
• Source column name: Select the Column header row checkbox to use the column
names from the source table in the target table.
– If you select this option, the first row in the file is processed as column names. The
rows in the Mapping section, below, are filled with those names (and with the
existing data types, unless you change them).
– If you deselect this option, the first row is processed as data. To specify column
names manually, enter a name for each target column in the Mapping section. (You
will also have to enter data types.)
• Numeric column: Select the Convert invalid data to null checkbox to convert an
invalid numeric column value into a null value.
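These properties correspond to format options of the DBMS_CLOUD.COPY_DATA
procedure that underlies cloud-store loads. A minimal sketch, assuming a hypothetical
SALES table, credential, and bucket URI: the Field delimiter, Text enclosure, Start
processing data at row, and Convert invalid data to null settings map roughly to the
delimiter, quote, skipheaders, and conversionerrors options.

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES',           -- hypothetical target table
    credential_name => 'OBJ_STORE_CRED',  -- hypothetical credential
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/mybucket/o/sales.csv',
    format          => json_object(
                         'delimiter'        value ';',
                         'quote'            value '"',
                         'skipheaders'      value '1',
                         'conversionerrors' value 'store_null')
  );
END;
/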
The Bulk Edit setting updates the columns returned by the search field. The search
box beside the Bulk Edit setting icon filters the list of columns you wish to update in
bulk. As soon as you start typing in the search field, the tool returns the field values
that contain the letters you type. You can remove the filter by deleting all the
content from the search box and clicking the magnifier icon that appears next to the
search box.
The Bulk Edit setting enables you to update the values of the following columns for all
the searches returned by the search field:
• Data Type
• Target Column name
• Include Columns for loading
• Exclude columns for loading
Consider changing the Data Type of the first and third rows from VARCHAR to NUMBER in
the mapping table.
• Type VARCHAR in the search field and click the magnifier icon beside the search
field.
• The search will return the rows having VARCHAR as the Data Type. Select the Bulk
Edit icon beside the magnifier icon.
• Selecting the Bulk Edit icon opens the Bulk Edit Mapping dialog box.
• Click Data Type and select NUMBER from the Data Type drop-down. Click OK.
• The Data Type of the selected two rows changes from VARCHAR to NUMBER.
• Clear the content from the search field and click the magnifier icon to view the bulk
edit updates in the Mapping table.
Note:
If you're loading multiple files from a folder in a single data load job, only the
first file will be shown in the Mapping section. However, as long as the column
names and data types match, the data from all source files will be loaded.
• Data Type: Lists the data type to use for data in that column. This column is displayed
only for Create Table or Drop Table and Create New Table. The contents change
depending on whether the Column header row option is selected.
– If the Column header row option is selected, Data type shows the data types of the
columns in the source file (for Create Table) or in the existing table (for Drop Table
and Create New Table). If you want to change the data type for the target,
click the name and select a different one from the list.
– If the Column header row option is not selected, Data type shows all
available data types. Select the data type to use for the target column from the
list.
• Length/Precision (Optional): For columns where the Data Type is NUMBER,
enter the length/precision for the numbers in the column. Precision is the number
of significant digits in a number. Precision can range from 1 to 38.
For columns where Data Type is VARCHAR2, the Auto value in the Length/Precision
field enables the Auto Size feature.
With the Auto-Size column Width feature, you can automatically size any column
to fit the largest value in the column. Select Auto from the Length/Precision drop-
down values or pick a value from the drop-down list.
• Scale (Optional): For columns where the Data Type is NUMBER, enter the scale
for the numbers in the column. Scale is the number of digits to the right (positive)
or left (negative) of the decimal point. Scale can range from -84 to 127. (See the
example following this list.)
• Format: If the data type in the Data type column is DATE or one of the
TIMESTAMP types, select a format for that type from the Format drop-down list.
• Merge Key: This option is used only for the processing option Merge into Table.
For the Merge into Table option, you must specify at least one column to use as a
key column. Merge keys are one or more columns that uniquely identify each row
in the table. To specify a key column, select the Merge Key checkbox for the
column. Merge keys must not contain any null values. For loading tables with
primary keys, this option automatically enables the selection of primary key
columns as the merge keys.
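As a quick illustration of precision and scale, NUMBER(10,2) accepts up to ten
significant digits, two of them to the right of the decimal point. The table and column
names below are invented for illustration:

CREATE TABLE mapping_example (
  amount NUMBER(10,2),   -- 12345678.91 fits; 123456789.12 exceeds the 10-digit precision
  label  VARCHAR2(100)   -- a fixed length; the tool's Auto option computes one from the data
);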
Preview Tab
The Preview tab displays the source data in tabular form. The display reflects the
settings you chose in the Properties section.
If you dragged a folder containing multiple files into the data load cart and then clicked
Settings for that card, the File pane includes a Preview Object (File) drop-down
list at the top of the pane that lists all the files in the folder. Select the source file you
want to preview from that list.
Table Tab
The Table tab displays what the target table is expected to look like after the data has
been loaded. If you chose the Create Table processing option, no table is shown.
Errors Tab
The Error tab lists any errors generated when attempting to run the data load job.
SQL Tab
The SQL tab displays the SQL commands that will be run to complete this data load
job.
Note:
You can see the SQL code even before the table is created.
2. Click Start in the data load cart menu bar. To stop the data load job, click Stop.
When the data load job completes, the Load Cloud Object page displays the results of
the job. At the top of the page, a Status message shows the number of items for which
the load has completed over the number of items in the job and the total time elapsed for
the job.
View Details About the Data Load Job After It Is Run
To view details about the data load job after it is run, click the Actions icon and select
Settings in the card for the item. The Load Data from Cloud Store Locations pane is
displayed again, with the settings used for the job plus some additional details about the job run.
Settings Tab
The Settings tab shows the details that were set in the Settings tab when preparing the job.
Preview Tab
The Preview tab shows the source file or files used for the data load job.
Table Tab
The Table tab shows the table created or modified from the data load job.
Error Tab
The Error tab shows rows rejected from the data load job, if any. You can click the icon
at the top of the page to download the rejected rows as a CSV file.
SQL Tab
The SQL tab shows the SQL that was created and run to load the data. You can click the
Copy to clipboard button at the top of the pane to copy it.
4. Click View Details on the right side of the card and review the table. See The
Catalog Page.
Linking Data
You can link to data in remote databases or in cloud storage buckets.
When you link to data in a remote database or in cloud storage, the target object
produced is an external table or a view. When you select that target table or view on
the Data Load Jobs page, the source preview for the object shows the current data in
the source object.
Linking to columns in a remote database or to files in cloud storage can be useful
when the source data is being continually updated. For example, if you have a
database table with columns that are updated for each new sales transaction, and you
have run a data load job that links columns of the source table to targets in your
Oracle Autonomous Database, then those targets have the new sales data as it is
updated in the source.
See:
• Linking to Other Databases
• Linking to Objects in Cloud Storage
• Linking to Directory
• Linking to Share
• Linking to OCI Data Catalog
• Linking to Other Databases
To link to data in tables in another database from your Oracle Autonomous
Database, on the Data Load page, click LINK DATA under Data Load, then click
Database. Select a database from the drop-down list. Drag one or more tables
from the list of database tables and drop them in the Data Load Cart.
• Linking to Objects in Cloud Storage
When you create a link to files in a cloud store bucket from your Oracle
Autonomous Database, you create an external table that links to the files in the
cloud store (see the sketch following this list).
• Linking to Directory
You can link to directories from your Autonomous Database.
• Linking to Share
You can subscribe to a Share Provider for linking data.
• Linking to Data Catalog
You can link to data assets that are harvested and managed in OCI Data Catalog.
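To give a flavor of what linking to cloud storage creates, here is a minimal sketch using
the DBMS_CLOUD.CREATE_EXTERNAL_TABLE procedure. The table name, credential,
URI, and columns are all placeholders; the actual statement generated by the tool can be
inspected on its SQL tab.

BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'SALES_EXT',       -- placeholder external table name
    credential_name => 'OBJ_STORE_CRED',  -- placeholder credential
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/mybucket/o/sales*.csv',
    column_list     => 'SALE_ID NUMBER, AMOUNT NUMBER, SALE_DATE DATE',
    format          => json_object('delimiter' value ',', 'skipheaders' value '1')
  );
END;
/

-- The external table reads the files at query time, so it always reflects
-- the current contents of the bucket:
SELECT COUNT(*) FROM sales_ext;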
Linking to Directory
You can link to directories from your Autonomous Database.
On the Data Load page, click Link Data and select Directory. On the left side of the page is a
navigator pane, where you choose a directory with the folders or files containing the data. On
the right of the page is the data link cart, where you stage the files and folders for the data
link job. You can set options for the data link job before running it. The Autonomous Database
comes with predefined CPU/IO shares assigned to different consumer groups. You can set
the consumer group to either low, medium or high while executing a data load job depending
on your workload.
Drag and drop the files or folders from the directory you select to add them to the linking job.
Enter Details for the Data Load Job
Enter the details about the data link job in the Link Data from Directory dialog box. For more
details, refer to the Linking to Objects in Cloud Storage chapter.
Linking to Share
You can subscribe to a Share Provider for linking data.
On the Data Load page, click Link Data and select Share. For more details, refer to the
Loading data from a Share chapter.
The databases available to you appear in the drop-down list of the database
navigation pane of the Link Tables page.
View Statistics
To view statistics about the source table, in the settings pane select the Statistics tab.
It may take a moment for the statistics to appear. The statistics include the size of the
table, the number of rows and columns, the column names, data types, number of
distinct values, and other information. Below the details about the columns is a bar
graph that displays the top unique values for the selected column.
1. Click the Manage cloud store icon beside the field where you enter the cloud
store location. Select Add Cloud Store Location.
2. Enter your information in the Add Cloud Store Location pane. See Managing
Connections to add cloud storage location.
Alternatively,
• On the Data Load page, select the CLOUD LOCATIONS card, and then click Add
Cloud Store Location.
See Managing Connections.
To return to the Link Cloud Object page, click Data Load in the breadcrumbs at the top
of the page and then navigate back to the page.
Prepare the Data Link Job
You may have to adjust your source data or your target table so that the source data
links correctly to the external target table. Consider:
• If you're linking to multiple files, you must make sure that:
– All the source files are of the same type, for example, CSV, JSON, etc.
– The number, order, and data types of the columns in all the source files match.
• If you want to partition by date:
– The source file must contain data where the data type is date or timestamp.
– You must load a folder containing two or more data sources.
– The names of the files in the folder must indicate a date or dates, for example,
MAR-1999.csv or 2017-04-21.xlsx.
Add Files or Folders for the Data Link Job
Add files from the cloud store to the data link cart, where you can edit the details of the
data link job. To add the files:
1. From the list at the top of the navigator pane on the left, select the bucket with
your source data.
The list shows links that were established on the Manage Cloud Storage page. If
you haven't yet registered the cloud store you want to use, click the Connections
button under the Data Load menu in Data Studio suite of tools and register a
connection.
2. Drag one or more items from the file navigator on the left and drop them into the
cart on the right.
• You can add files, folders, or both. A card is added to the cart for each file or
folder you drag into it. The card lists the name of the source file or folder and a
proposed name for the target table.
• If you add a folder that contains multiple files, all the files must be of the same
type, that is, CSV, TXT, etc.
When you add the folder to the cart, a prompt is displayed that asks if you
want to load all the objects from the multiple source files into a single target
table. Click OK to continue or Escape to cancel.
• When you add multiple individual files or multiple folders to the cart, the data
represented by each card will be loaded into a separate table, but all the items in the
cart will be processed as part of the same data load job.
• You can add files or folders from a different bucket, but if you do that, you're
prompted to remove all files that are already in the cart before proceeding. To select
files from a different bucket, select the bucket from the drop-down list in the navigator
pane on the left and then add the file(s), as described above.
• You can drop files or folders into the data load cart and then navigate away from the
Data Link Object page. When you return to the page, those items remain on the
page, but you may receive a message, "Remove All Data Link Items. Changing to
another Cloud storage location requires all items to be removed from the data load
job. Do you wish to continue?" Click Yes to remove the items from the cart. Click No
to keep the items in the cart. Then you can continue to work.
You can remove items from the cart before running the data link job:
• To remove an item from the cart, click the Actions menu and select Remove on the card
for the item.
• To remove all items from the cart, click Remove All in the data link cart menu bar at the
top of the pane.
Enter Details for the Data Link Job
Enter the details about the data link job in the Link Data from Cloud Storage pane.
On the card in the data link cart, click Settings to open the Link Data from Cloud Storage
pane for that job. The pane contains:
• Settings Tab - Table Section
• Settings Tab - Properties Section
• Settings Tab - Mapping Section
• File Tab
• Table Tab
• SQL Tab
• Close Button - Save and Close the Pane
Note:
– For linked files (from external tables) there is also a requirement that
for each file, the list partitioning column can contain only a single
distinct value across all of the rows.
– If a file is list partitioned, the partitioning key can only consist of a
single column of the table.
Date-based partitioning is available when you link a folder containing two or more
data sources that have columns that contain date or timestamp data.
To partition according to date, click the Partition Column drop-down list and
select the DATE or TIMESTAMP column you want to use for the partitioning.
• Validation Type: Validation examines the source files and optional partitioning
information, and reports rows that do not match the format options specified. Select
None for no validation; select Sample to perform validation based on a sample of
the data; or select Full to perform validation based on all the data.
• Use Wildcard: This check box enables the use of wildcard characters in the search
condition to retrieve a specific group of files that matches the filter criteria.
You can use a wildcard character, such as an asterisk (*), to search for, filter, and
specify groups of files, so that new files matching the pattern are detected and added
to the external table.
For example, if you enter file*, then file01, file02, file03, and so on are considered
to match the keyword. The asterisk (*) matches zero or more characters.
Note:
The wildcard support is incompatible with partitioning. Validation of the
source file fails if you use wildcards with partitioned data.
• Text enclosure: Select the character for enclosing text: " (double-quote
character), ' (single-quote character) or None. This option is visible only when the
selected file is in plain text format (CSV, TSV, or TXT).
• Field delimiter: Select the delimiter character used to separate columns in the
source. For example, if the source file uses semicolons to delimit the columns,
select Semicolon from this list. The default is Comma. This option is visible only
when the selected file is in plain text format (CSV, TSV, or TXT).
• Start processing data at row: Specifies the number of rows to skip when linking
the source data to the target external table:
– If you select the Column header row option under Source column name (see
below) and if you enter a number greater than 0 in the Start processing data at row
field, then that number of rows after the first row are not linked to the target.
– If you deselect the Column header row option under Source column name, and if
you enter a number greater than 0 in the Start processing data at row field, then
that number of rows including the first row are not linked to the target.
• Source column name: Select the Column header row checkbox to use the column
names from the source table in the target table.
– If you select this option, the first row in the file is processed as column names. The
rows in the Mapping section, below, are filled with those names (and with the
existing data types, unless you change them).
– If you deselect this option, the first row is processed as data. To specify column
names manually, enter a name for each target column in the Mapping section. (You
will also have to enter data types.)
• Numeric column: Select the Convert invalid data to null checkbox to convert an
invalid numeric column value into a null value.
• Newlines included in data values: Select this option if the data fields contain newline
or carriage-return characters. Selecting this option will increase the time taken to process
the load. If you do not select this option when loading the data, the rows with newlines in
the fields will be rejected. You can view the rejected rows in the Job Report panel.
Note:
If you're linking multiple files from a folder in a single data link job, only
the first file will be shown in the Mapping section. However, as long as
the column names and data types match, the data from all source files
will be linked.
• Data Type: Lists the data type to use for data in that column. The contents change
depending on whether the Column header row option is selected.
– If the Column header row option is selected, Data type shows the data types
of the columns in the source file. If you want to change the data type for the
target, click the name and select a different one from the list.
– If the Column header row option is not selected, Data type shows all
available data types. Select the data type to use for the target column from the
list.
• Length/Precision (Optional): For columns where the Data Type is NUMBER,
enter the length/precision for the numbers in the column. Precision is the number
of significant digits in a number. Precision can range from 1 to 38.
For columns where Data Type is VARCHAR2, the Auto value in Length/
Precision field enables the Auto Size feature.
With the Auto-Size column Width feature, you can automatically size any column
to fit the largest value in the column. Select Auto from the Length/Precision drop-
down values or pick a value from the drop-down list.
• Scale (Optional): For columns where the Data Type is NUMBER, enter the scale
for the numbers in the column. Scale is the number of digits to the right (positive)
or left (negative) of the decimal point. Scale can range from -84 to 127.
• Format: If the data type in the Data type column is DATE or one of the TIMESTAMP
types, select a format for that type from the Format drop-down list.
Preview Tab
The Preview tab displays the source data in tabular form. The display reflects the settings
you chose in the Properties section.
If you dragged a folder containing multiple files into the data link cart and then clicked
Settings for that card, the Preview pane includes a Preview Object (File) drop-down list at
the top of the pane that lists all the files in the folder. Select the source file you want to
preview from that list.
Table Tab
The Table tab displays what the target table is expected to look like after the data has been
linked.
Job Report
Displays a report of the total rows loaded and failed for a specific table. You can view the
name of the table, the time the table was loaded, and the time taken to process the load.
SQL Tab
The SQL tab displays the SQL commands that will be run to complete this data link job.
Note:
You can see the SQL code even before the table is created.
2. Click Start in the data link cart menu bar. To stop the data link job, click Stop.
When the data link job completes, the Link Cloud Object page displays the results of the
job. At the top of the page, a Status message shows the number of items for which the
link has completed over the number of items in the job and the total time elapsed
for the job.
View Details About the Data Link Job After It Is Run
To view details about the data link job after it is run, click the Actions menu and select
Settings in the card for the item. The Link Data from Cloud Storage pane is
displayed again, with the settings used for the job plus some additional details about
job run.
Settings Tab
The Settings tab shows the details that were set in the Settings tab when preparing
for job.
File Tab
The File tab shows the source file or files used for the data link job.
Table Tab
The Table tab shows the external table created for the data link.
SQL Tab
The SQL tab shows the SQL that was created and run to link the data. You can click
the Copy to clipboard button at the top of the pane to copy it.
View the Table Resulting from the Data Link Job
After running a data link job, to view the table created by the data link job:
1. On the Database links page, click the Reload Cart button. Clicking the button
enables you to add the data sources to the data load cart again.
2. Click the Data Load Jobs menu from the Data Studio navigation pane.
3. On the Data Load Jobs page, find the new or updated table.
4. Click View Details on the right side of the card and review the table. See The
Catalog Page.
Feeding Data
You can run a live table feed on demand, on a schedule, or as the result of a
notification.
A Live Table Feed automates loading of data into a table in your database. Files
automatically load as they appear in object storage and the Live Table Feed system
ensures that files are only loaded once. The loading can happen manually, by a
schedule, or even by notifications delivered directly from Object Storage.
The bucket can contain files in these formats: AVRO, CSV, JSON, Parquet, ORC,
Delimited TXT. All of the files must have the same column signature.
If you deselect the Column header row option under Source column name,
and if you enter a number greater than 0 in the Start processing data at row
field, then that number of rows including the first row are not linked to the
target.
Column header row: Select the Column header row checkbox to use the
column names from the source table in the target table.
If you select this option, the first row in the file is processed as column names.
The rows in the Mapping section, below, are filled with those names (and with
the existing data types, unless you change them).
If you deselect this option, the first row is processed as data. To specify
column names manually, enter a name for each target column in the Mapping
section. (You will also have to enter data types).
Select the Convert invalid data to null checkbox to convert an invalid
numeric column value into a null value.
• Edit or update the table settings in the Table Settings section: In this pane, the
mapping of the source to target columns is displayed.
– Select the Include check box at the beginning of a row to add the column
to the target table.
– Select or enter values for column attributes such as Target Column Name,
Column Type, Precision, Scale, Default, Primary Key and Nullable.
– You need to review the suggested data type and, if needed, modify it by
entering the data type directly into the target cell.
Review the generated mapping table code based on the selections made
in the previous screens.
Click Next to progress to the Preview tab.
• The Preview Pane displays the changes you make to the table.
• Click Next to progress to the Live Feed Settings tab.
On the Live Feed Settings tab, specify the following field values:
• Live Table Feed Name: Accept the default name or enter a different name to
identify this live table feed.
• Target Table Name: Accept the default name or enter a different name. This
is the name of the target table that data from the live feed will be loaded into in
your Autonomous Database instance. If the table does not exist, Live Feed will
attempt to guess the correct columns. You can pre-create the table in which
you wish the Live Feed to load into. This is for higher accuracy.
• Enable for Notification: Select this option so that new or changed data in the
data source will be loaded based on an Oracle Cloud Infrastructure
notification. When you select this option, you can avoid delays that might
occur when polling is initiated on a schedule (that is, if you selected the live
table feed Scheduled option).
When you select the Enable for Notification option, you must also:
– Configure your object store bucket to emit notifications
– Create a Notifications service subscription topic
– Create an Events service rule
– Copy the notification URL
Tip:
To complete those steps, you will alternate between Oracle Cloud
Infrastructure Console pages and Oracle Database Actions pages. You may
find it convenient to open the Cloud Console in one browser page or tab and
Database Actions in another, so it's easy to move back and forth.
b. On the Bucket Details page, click the Edit link next to Emit Object Events.
c. Select the Emit Objects Events check box, and then click Save Changes.
Step 4: Create and configure a live table feed to use notifications, and copy the
notification URL
Where: Database Actions: Live Feeds page
You can configure a new or an existing live table feed to use notifications:
1. Go to the Database Actions Live Feeds page, as described in Feeding Data.
2. Create or edit a live table feed object, as described in Create a Live Table Feed Object or
Edit a Live Table Feed Object. Select the Enable for Notification option.
3. Click Create or Save.
4. Click the Actions (three vertical dots) icon on the card for your live feed, and select Show
Confirmation URL.
5. In the Notification URL dialog box, click the Copy icon to copy the URL to the clipboard.
You may want to copy it to a temporary file, so you can retrieve it later. You'll use this
URL in the next step, Step 5: Create a Notifications service subscription.
1. Return to the Oracle Cloud Infrastructure Console. Open the navigation menu and
click Developer Services. Under Application Integration, click Notifications.
2. On the Notifications page, click the Subscriptions tab (on the left side of the
page); after the subscription is confirmed, its status will be Active.
3. Click Create Subscription and fill in the Create Subscription page:
• Subscription topic: Select the subscription topic you created in Step 2:
Create a Notifications service subscription topic.
• Protocol: HTTPS (Custom URL)
• URL: Paste in the URL you copied in Step 4: Create and configure a live table
feed to use notifications, and copy the notification URL.
• Click Create. The subscription will be listed in the Subscriptions table in a
state of "Pending."
Creating a Notification-Based Live Table Feed using Amazon Simple Storage Service
(S3)
You can integrate Amazon Simple Storage Service (S3) and Oracle Cloud
Infrastructure (OCI) to automate live feed notifications when the storage
objects being observed are updated. The following section provides instructions for
creating event notifications in your Amazon S3 bucket where your data files are stored.
Tip:
To complete these steps, you will need to alternate between Amazon Web
Services (AWS) Management console and Oracle Database Actions pages.
You may find it convenient to open the Amazon Web Services in one browser
page or tab and Database Actions in another, so it is easy to move back and
forth.
To create a notification-based live feed with Amazon S3 as cloud storage you must:
• Step 1: Create your object store bucket in Amazon S3
Note:
Paste the Access key ID and Secret access key generated in the
previous step (Step 2: Create Access Keys) to their respective text fields
in the Add Cloud Storage page.
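If you prefer to register the keys in SQL rather than through the Add Cloud Storage page,
the equivalent is roughly as follows. A sketch: the credential name is a placeholder, and
the key values shown are the standard AWS documentation examples.

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'AWS_CRED',                                 -- placeholder name
    username        => 'AKIAIOSFODNN7EXAMPLE',                     -- AWS access key ID
    password        => 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'  -- AWS secret access key
  );
END;
/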
Step 4: Create and configure a live table feed to use notifications, and copy the
notification URL
Where: Database Actions: Live Feeds page
Creating a live table feed enables you to load data in real time from external storage
sources to your table in ADB. External storage sources include Oracle Object Store,
AWS S3, and Microsoft Azure containers.
You can configure a new or an existing live table feed to use notifications:
1. Go to the Database Actions Live Feeds page, as described in Feeding Data.
2. Create or edit a live table feed object, as described in Create a Live Table Feed
Object or Edit a Live Table Feed Object. Select the Enable for Notification option.
3. Click Create or Save.
4. Click the Actions (three vertical dots) icon on the card for your live feed, and
select Show Notification URL.
5. In the Notification URL dialog box, click the Copy icon to copy the URL to the
clipboard. You may want to copy it to a temporary file, so you can retrieve it later.
You will use this URL in the subsequent step (Step 7: Create a notifications service
subscription).
Navigate to the AWS Management console, and then select Create an AWS Account.
Follow the instructions as provided in the Amazon SNS link to create your first IAM
administrator user and group. Now you can log in to any of the AWS services as an IAM user.
1. Log in to Amazon SNS console as an IAM user.
2. On the Topics page, select Create topic.
3. Specify the following fields on the Create topic page, in the Details section.
• Type: Standard (Standard or FIFO)
• Name: notify-topic. For a FIFO topic, add .fifo to the end of the name.
• Display Name: This field is optional.
4. Expand the Encryption section and select Disable encryption.
5. Expand the Access policy section and configure additional access permissions, if
required. By default, only the topic owner can publish or subscribe to the topic. This step
is optional. Edit the JSON format of the policy based on the topic details you enter. Here
is a sample of Access policy in JSON format.
{ "Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement":[
{"Sid": "__default_statement_ID",
"Effect": "Allow",
"Principal": {"AWS": "*"
},"Action": [
"SNS:Publish",
"SNS:RemovePermission",
"SNS:SetTopicAttributes",
"SNS:DeleteTopic",
"SNS:ListSubscriptionsByTopic",
"SNS:GetTopicAttributes",
"SNS:AddPermission",
"SNS:Subscribe"
],
"Resource": "arn:aws:sns:us-west-2:555555555555:notify-topic", //us-
west-2 is the region
"Condition": {
"StringEquals": {
"AWS:SourceOwner": "555555555555"
}
}
},
{
"Sid": "s3_policy", //This field accepts string values
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": [
"SNS:Publish"
],
"Resource": "arn:aws:sns:us-west-2:555555555555:notify-topic", //
notify-topic is the topic name
3-317
Chapter 3
Load Data into Autonomous Database
"Condition": {
"StringEquals": {
"aws:SourceAccount": "555555555555" //This is the Account
ID
},
"ArnLike": {
"aws:SourceArn": "arn:aws:s3:*:*:testbucket /*testbucket
is the s3 bucket name. You will get notifications only when file is
uploaded to this
bucket.*/
"
}
}
}
]
}
6. Expand the Delivery retry policy (HTTP/S) section to configure how Amazon
SNS retries failed message delivery attempts. This step is optional.
7. Expand the Delivery status logging section to configure how Amazon SNS logs
the delivery of messages to CloudWatch. This step is optional.
8. Expand the Tags section to add metadata tags to the topic. This step is optional.
9. Select Create topic.
10. The topic's Name, ARN (Amazon Resource Name), and Topic owner's AWS
account ID are displayed in the Details section.
11. Copy the topic ARN to the clipboard.
Step 6: Enable and configure event notifications using the Amazon S3 console
Where: Amazon S3 Management console
You can enable Amazon S3 bucket events to send a notification message to a
destination whenever those events occur. You configure event notifications for your S3
bucket to notify OCI when there is an update or new data available to load. The
following steps explain the procedure to be followed in Amazon S3 console to enable
event notifications.
1. Log in to the Amazon S3 Management console as an IAM (AWS Identity and
Access Management) user.
2. In the Buckets list, select the name of the bucket, that is, testbucket. This is the bucket
that you had created in Step 1: Create your object store bucket in Amazon S3.
3. Select Properties.
4. Navigate to the Event Notifications section and select Create event notification.
5. In the General configuration section, specify the following values for event
notification.
• Event name: bucket-notification
• Prefix: This optional value filters event notifications by prefix. It is added to filter
event activity.
• Suffix: This optional value filters event notifications by suffix. It is added to filter
event activity.
6. In the Event types section, select one or more event types that you want to receive
notifications for. If you are unsure of what event types to pick, then select the All object
create events option.
7. In the Destination section, select SNS Topic as the event notification destination.
Note:
Before you can publish event notifications, you must grant Amazon S3 the
necessary permissions to call the relevant API, so that it can publish
notifications to a Lambda function or an SNS topic.
8. After you select SNS topic as the event notification destination, select the SNS topic
(notify-topic) from the drop-down list. This is the topic you created in Step 5: Create a
notifications service subscription topic.
9. Select Save changes.
Note:
HTTP(S) endpoints, email addresses, and AWS resources in other AWS
accounts require confirmation of the subscription before they can receive
messages.
Tip:
To complete the steps above, you will need to alternate between Microsoft Azure
portal and Oracle Database Actions pages. You may find it convenient to open the
Microsoft Azure portal in one browser page or tab and Database Actions in another,
so it is easy to move back and forth.
Note:
Paste the connection string value under key1 from the previous step (Step 3: Create
Access Keys) into the Azure storage account access key text field of the Add Cloud
Storage page. Also paste the storage account name generated in that step into the
Azure storage account name text field of the Add Cloud Storage page.
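The SQL equivalent of this pairing is roughly the following sketch; the credential name is
a placeholder, and for Azure the username is the storage account name while the
password is the access key:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'AZURE_CRED',          -- placeholder name
    username        => 'mystorageaccount',    -- Azure storage account name
    password        => 'storage_account_key'  -- access key value from key1
  );
END;
/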
Step 6: Create and configure a live table feed to use notifications and copy the
notification URL
Where: Database Actions: Live Feeds page
The Live table feed object enables data to be loaded from Microsoft Azure cloud storage with
no polling delay. This object creates an integration between Oracle Cloud Infrastructure and
Microsoft Azure.
You can configure a new or an existing live table feed to use notifications:
1. Go to the Database Actions Live Feeds page, as described in Feeding Data.
2. Create or edit a live table feed object, as described in Create a Live Table Feed Object or
Edit a Live Table Feed Object. Select the Enable for Notification option.
3. Click Create or Save.
4. Click the Actions (three vertical dots) icon on the card for your live feed, and select
Show Confirmation URL.
5. In the Notification URL dialog box, click the Copy icon to copy the URL to the clipboard.
You may want to copy it to a temporary file, so you can retrieve it later. You will use this
URL in the subsequent step, (Step 8: Create Event subscription).
1. Select the storage account you created in Step 2: Create a storage account in
Microsoft Azure.
2. Select the Events icon in the left navigation pane.
3. Click on +Event Subscription.
The Create Event Subscription window appears.
1. Specify the following fields in Event Subscription details section:
• Name: Eventssub. This is the name of the Event Subscription you create.
• Event Schema: Event Grid Schema
2. Specify the following fields in Topic Details section:
• Topic Type: Storage account
• System Topic Name: eventtopic.
3. Specify the following fields in Event Types section:
• Event Type: MicrosoftStorage.BlobCreated
4. Specify the following fields in Endpoint Details section:
• Endpoint Type: Web Hook
• Endpoint: Paste the Notification URL copied in Step 6: Create and configure
a live table feed to use notifications and copy the notification URL.
5. Select Create.
In this way, Microsoft Azure first creates a system topic and then creates the Event Subscription for that topic.
Note:
The Confirmation URL link expires after a few minutes. You must make
sure that you click the link before it expires.
4. Return to the Confirmation URL dialog box and select the Check only when the
cloud store confirmation process is complete check box, and click OK.
Once you finish the above steps, upload a new file to the Microsoft Azure container
you created in Step 4: Create a container.
Note:
If you do not see the Data Transforms card then your database user is missing the
required DATA_TRANSFORM_USER role.
After your data flows and workflows are ready, you can execute the mappings immediately or schedule them to execute at a later time. The Oracle Data Transforms run-time agent orchestrates the execution of jobs. On execution, Oracle Data Transforms generates the code for you.
You can launch Data Transforms in any of the following ways:
• Oracle Cloud Marketplace: Create a Data Transforms instance from Oracle Cloud
Marketplace. Data Transforms is available as a separate listing on Marketplace called
Data Integrator: Web Edition.
• Database Actions Data Studio: Navigate to Database Actions Data Studio Page,
and click Data Transforms in the Database Actions page.
If you have already registered a Data Transforms instance from OCI Marketplace
with the Autonomous Database, the Data Transforms card on the Database
Actions page will continue to take you to your Marketplace instance.
Access to the standard set of Data Transforms features may depend on where you
launch Data Transforms from. In this documentation, certain topics could include any
of the following badges to indicate features that may or may not be available for use:
• APPLIES TO: Data Transforms that is available as a separate listing on
Marketplace called Data Integrator: Web Edition.
• APPLIES TO: Data Transforms instance that is registered with Autonomous
Database.
• APPLIES TO: Data Transforms that is part of the suite of data tools built into
Oracle Autonomous Database.
• Access Oracle Data Transforms From Data Studio
• Data Transforms Notes
• Work with Connections
Connections help you to connect Data Transforms to various technologies
reachable from your OCI network.
• Create a Data Load
A data load allows you to load multiple data entities from a source connection to a
target connection.
• Run a Data Load
After you create the data load, you are taken to the Data Load Detail page that
displays the details that you need to run a data load.
• Work with Data Entities
A Data Entity is a tabular representation of a data structure.
• Work with Projects
A project is the top-level container, which can include multiple folders to organize
your data flows or work flows into logical groups.
• Create a Data Flow
A data flow defines how the data is moved and transformed between different
systems.
• Schedule Data Flows or Workflows
You can schedule workflows and data flows to run at specified time intervals.
• Monitor Status of Data Loads, Data Flows, and Workflows
When you run a data load, data flow, or workflow, Oracle Data Transforms runs jobs in the background to complete the request. You can view the status of the job
in the panel on the bottom right of the Data Load Details, the Data Flow Editor, and
the Workflow Editor page.
• Introduction to Workflows
A workflow is made up of multiple flows organized in a sequence in which they
must be executed.
• Create and Manage Jobs
When you execute a data load, data flow, or workflow, Oracle Data Transforms creates a job to complete the process in the background.
Note:
Data Transforms is also available as a separate listing on OCI Marketplace. If
you have already registered a Data Transforms instance from OCI Marketplace
with the Autonomous Database, the Data Transforms card on the Database
Actions page will continue to take you to your Marketplace instance. If you wish
to use the embedded Data Transforms, then you must unregister the
Marketplace instance. See Unregister the ODI Instance from Autonomous
Database.
3. When you log in to the Data Transforms tool for the first time, you need to provide the database user credentials to sign in.
It takes approximately 1-2 minutes for the Data Transforms service to start. After the
service starts, the Data Transforms home page opens.
4. From the left pane of the Home page, click the Connections tab to view the newly created Autonomous Database connection.
5. Click the Actions icon next to the connection and select Edit.
6. In the Update Connection page, enter the database username and password to use the connection. Click Update to save the changes.
unwanted usage of resources for your tenancy. If the issue persists, file a service
request at Oracle Cloud Support or contact your support representative.
– Array Fetch Size - When reading large volumes of data from a data server,
Oracle Data Transforms fetches successive batches of records. This value is the
number of rows (records read) requested by Oracle Data Transforms on each
communication with the data server.
– Batch Update Size - When writing large volumes of data into a data server,
Oracle Data Transforms pushes successive batches of records. This value is the
number of rows (records written) in a single Oracle Data Transforms INSERT
command.
– Degree of Parallelism for Target - This value indicates the number of threads
allowed for a loading task. The default value is 1. The maximum number of
threads allowed is 99.
Note:
Connection details are specific to each connection type, and the options above vary based on the selected connection type.
7. After providing all the required connection details, click Test Connection to test the
connection.
8. Click Create.
The new connection is created.
The newly created connections are displayed in the Connections page.
Click the Actions icon ( ) next to the selected connection to perform the following
operations:
• Select Edit to edit the provided connection details.
• Select Test Connection to test the created connection.
• Select Delete Schema to delete schemas.
• Select Delete Connection to delete the created connection.
You can also search for the required connection to view its details, based on the following filters:
• Name of the connection.
• Technology associated with the created connection.
• Supported Connection Types
This topic lists the connection types that are supported for connecting to Data
Transforms.
• Create Custom Connectors
• Create a Data Transforms Connection for Remote Data Load
You can connect to an existing Data Transforms instance and run a data load remotely.
• Create a Delta Share Connection
Databricks Delta Share is an open protocol for secure data sharing. Oracle Data
Transforms integrates with Delta Share to load data to Oracle Autonomous Database.
You can use the Delta Share connection to load data from Databricks or Oracle Data
Share.
Note:
APPLIES TO: Data Transforms that is available as a separate listing on
Marketplace called Data Integrator: Web Edition.
• For the connectors that require driver installation, you need to copy the
jar files to the /u01/oracle/transforms_home/userlibs directory
before you add the connection.
• Apart from the connection types listed here you can create custom
connectors, which you can use to connect Data Transforms to any JDBC
supported data sources. See Create Custom Connectors.
Click the Actions icon ( ) next to the selected connection to perform the following operations:
• Select Edit to edit the provided connection details.
• Select Delete to delete the created connection.
Note:
You cannot delete custom connectors that have existing connections.
Click the Actions icon ( ) next to the selected connection to perform the following
operations:
• Select Edit to edit the provided connection details.
• Select Test Connection to test the created connection.
• Select Delete Schema to delete schemas.
• Select Delete Connection to delete the created connection.
You can also search for the required connection to view its details, based on the following filters:
• Name of the connection.
• Technology associated with the created connection.
• In the Proxy Host textbox, enter the host name of the proxy server to be used for the
connection.
• In the Proxy Port textbox, enter the port number of the proxy server.
• Select the following checkboxes depending on where the proxy is required:
– Use Proxy to access Delta Share Server
– Use Proxy to access Delta Share Storage
10. Click Test Connection to test the established connection.
11. After providing all the required connection details, click Create.
The new connection is created.
The newly created connections are displayed in the Connections page.
Click the Actions icon ( ) next to the selected connection to perform the following operations:
• Select Edit to edit the provided connection details.
• Select Test Connection to test the created connection.
• Select Delete Schema to delete schemas.
• Select Delete Connection to delete the created connection.
You can also search for the required connection to view its details, based on the following filters:
• Name of the connection.
• Technology associated with the created connection.
In the Create Data Load tab, enter a name if you want to replace the default
value and add a description.
2. Click Next.
3. In the Source Connection tab,
a. From the Connection Type drop-down, select Delta Share.
b. From the Connection drop-down, select the required connection from which you wish to add the data entities.
c. From the Share drop-down, select the share that you want to load tables from. The drop-down lists all the shares for the selected connection.
d. Click Next.
4. In the Target Connection tab,
a. From the Connection Type drop-down, select Oracle as the connection type.
Note:
This drop-down lists only JDBC type connections.
b. From the Connection drop-down, select the required connection to which you wish to load the data entities.
c. Enter a unique name in the Schema textbox.
d. Click Save.
The Data Load Detail page appears listing all the tables in the selected share with
their schema names.
Note:
For Delta Share data loads, the Data Load Detail page offers only a single load option. You cannot apply different actions - incremental merge, incremental append, recreate, truncate, append - on the data entities before loading them to the target schema. This makes sure that the data is consistent between the Delta Sharing server and the target schema.
To view the statistics of the data entities, click the Actions icon ( ) next to the data
entity, click Preview, and then select the Statistics tab. See View Statistics of Data
Entities for information.
Click the Actions icon ( ) next to the selected connection to perform the following operations:
• Select Edit to edit the provided connection details.
• Select Test Connection to test the created connection.
• Select Delete Schema to delete schemas.
• Select Delete Connection to delete the created connection.
You can also search for the required connection to view its details, based on the following filters:
• Name of the connection.
• Technology associated with the created connection.
9. In the Proxy Port textbox, enter the port number of the proxy server.
10. In the User text box, enter the user name for connecting to the REST endpoint.
11. In the Password text box, enter the password for connecting to the REST endpoint.
12. Choose a connection from the Staging Connection drop-down list. The list displays only
existing Autonomous Database connections. To use a different connection, create the
connection before you reach this page.
13. After providing all the required connection details, click Create.
Click the Actions icon ( ) next to the selected connection to perform the following operations:
• Select Edit to edit the provided connection details.
• Select Test Connection to test the created connection.
• Select Delete Schema to delete schemas.
• Select Delete Connection to delete the created connection.
You can also search for the required connection to view its details, based on the following filters:
• Name of the connection.
• Technology associated with the created connection.
11. In the Role ID textbox, enter the role ID for connecting to the data server.
12. If the source that you want to use for mapping is a saved search, you need to also
specify the following details in Saved Search Extraction:
• Application ID: Enter the NetSuite Application ID for Data Transforms.
• Version: Enter the NetSuite version number.
13. Select the checkbox in Build Data Model to install pre-built dataflows and
workflows that you can run to extract data from NetSuite and move it to your
Oracle target schema using the Build Data Warehouse wizard.
14. Click Test Connection to test the established connection.
15. After providing all the required connection details, click Create.
The new connection is created.
The newly created connections are displayed in the Connections page.
Click the Actions icon ( ) next to the selected connection to perform the following operations:
• Select Edit to edit the provided connection details.
• Select Test Connection to test the created connection.
• Select Build Data Warehouse to select the functional areas and create the NetSuite Data Warehouse in the target schema. See Using the Build Data Warehouse Wizard for more information.
• Select Delete Schema to delete schemas.
• Select Delete Connection to delete the created connection.
You can also search for the required connection to view its details, based on the following filters:
• Name of the connection.
• Technology associated with the created connection.
1. On the Home page, click the Connections tab. The Connections page appears.
2. Click the Actions icon ( ) next to the Oracle NetSuite connection that you want to use
to build the data warehouse and click Build Data Warehouse.
The Build Data Warehouse wizard opens.
3. From the Connection drop-down list, choose the Autonomous Database connection
where your target schema resides.
4. From the Staging Schema drop-down, all schemas corresponding to the selected connection are listed in two groups:
• Existing Schema (ones that you've imported into Oracle Data Transforms) and
• New Database Schema (ones that you've not yet imported).
Select the schema that you want to use from the drop-down.
5. Similarly, select the Target Schema.
6. Click Next.
7. Select the NetSuite Business Areas that you want to use to transfer data from the
NetSuite Data Warehouse to the target schema.
8. Click Save.
Data Transforms starts the process to build the data warehouse. Click Jobs on the left
pane of the Home page to monitor the progress of the process. When the job completes
successfully, Data Transforms creates a Project folder that includes all the pre-built
workflows and dataflows, which you can run to transfer data from the NetSuite
connection to your target schema. See Running the Pre-Built Workflows to Load Data
into the Target Schema for more information.
Running the Pre-Built Workflows to Load Data into the Target Schema
When the Build Data Warehouse wizard completes successfully, Data Transforms creates a
project that includes all the pre-built data flows and workflows that you can run to extract data
from a NetSuite connection and load it into your target schema.
To view and run the pre-built workflows:
1. Click Projects on the left pane of the Home page and select the newly created NetSuite
project.
2. Click Workflows in the left pane. The following pre-built workflows are listed in the
Project Details page:
• Stage NetSuite Source to SDS
• Extract Transaction Primary Keys
• Load SDS to Warehouse
• Apply Deletes
• All Workflows
3. Click the Actions icon ( ) next to the workflow you want to run and click Start.
Oracle recommends that you run All Workflows to execute all the pre-built workflows.
To see the status of the workflow, click Jobs from the left pane in the current project.
When the job completes successfully, all the data from the NetSuite connection is loaded
into the target schema.
https://swiftobjectstorage.<your-region>.oraclecloud.com/v1/<your-namespace>/<your-bucket>/<your-file>
https://objectstorage.<your-region>.oraclecloud.com/n/<your-namespace>/b/<your-bucket>/o/<your-file>
• If you want to use the URL provided by the OCI Console, specify the URL only up to the name of the bucket.
For example,
https://swiftobjectstorage.<your-region>.oraclecloud.com/v1/<your-namespace>/<your-bucket>
https://objectstorage.<your-region>.oraclecloud.com/n/<your-namespace>/b/<your-bucket>/o
• If you choose Credential as the Connection Mode (see step 6), specify the URL up to <your-bucket>/o.
For example,
https://objectstorage.<your-region>.oraclecloud.com/n/<your-namespace>/b/<your-bucket>/o/
The values for Region, Namespace and Bucket are auto-populated based on the URL
provided.
8. To select the Connection Mode do one of the following:
• Select Swift Connectivity, and provide the following details:
– In the User Name text box, enter your Oracle Cloud Infrastructure username.
– In the Token text box, enter the auth token.
• Select Credential and provide the ODI credential in the Enter Credential text box.
You must create the credential in the repository and in the Autonomous Database that you created during instance creation. When you create a data flow to map data from Object Storage to Autonomous Database, you need to create the ODI credential in the target schema as well.
To create the credential, execute a script along the following lines (the credential name, username, and auth token shown here are illustrative placeholders):
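BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'MY_ODI_CRED',        -- illustrative; use the same name as the ODI credential
    username        => 'oci_user@example.com',
    password        => 'auth_token_value');  -- for Object Storage, the password is the auth token
END;
/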
9. Click Create.
The new connection is created.
10. Click Test Connection to test the established connection.
Click the Actions icon ( ) next to the selected connection to perform the following operations:
• Select Edit to edit the provided connection details.
• Select Test Connection to test the created connection.
• Select Delete Schema to delete schemas.
• Select Delete Connection to delete the created connection.
You can also search for the required connection to view its details, based on the following filters:
• Name of the connection.
• Technology associated with the created connection.
9. In the Connection String text box the JDBC URL is auto-populated based on the values
you specify. Copy and save the JDBC URL. You will need to enter this JDBC URL when
you create the connection.
10. Click Test Connect to validate your entries.
11. Select the Configure Endpoints tab on the side menu and configure the following:
• In the Endpoint field, select the type of request, and enter the portion of your endpoint after the base URL. Note that the value here must use valid URL-encoded syntax.
• In the Table Name field, specify the name of the table that you want to map to.
Similarly, you can add multiple endpoints and tables.
12. Click Send to generate the response.
Click the Actions icon ( ) next to the selected connection to perform the following
operations:
• Select Edit to edit the provided connection details.
• Select Test Connection to test the created connection.
• Select Delete Schema to delete schemas.
• Select Delete Connection to delete the created connection.
You can also search for the required connection to view its details, based on the following filters:
• Name of the connection.
• Technology associated with the created connection.
Note:
Data load is not supported for Oracle Object Storage connections.
click the + icon to create a new connection. See Work with Connections for more details
about connections.
7. In the Schema drop-down, all schemas corresponding to the selected connection are listed in two groups:
• Existing Schema (ones that you've imported into Oracle Data Transforms) and
• New Database Schema (ones that you've not yet imported).
Select the schema that you want to use from the drop-down.
Note:
If information is missing, such as an unspecified user name or password, a missing wallet, and so on, the list may fail to populate with a “This connection has missing information.” error. Click the Edit icon ( ) to open the Update Connection page, where you can fill in the missing details.
8. Click Next.
9. Similarly, define the target connection.
10. Click Save.
The Data Load Detail page appears listing all the loaded data entities.
Note:
Make sure that you have created connections before you plan to create a data
load using the Projects page. See Work with Connections for more details
about connections.
Note:
Data load is not supported for Oracle Object Storage connections.
Note:
APPLIES TO: Data Transforms that is available as a separate listing on
Marketplace called Data Integrator: Web Edition.
If the data load is huge, you might want to increase the memory of the ODI
Agent to avoid any issues. Follow the instructions in Increase the Memory of
ODI Agent before you start to run the data load.
Note:
These options display the list of data entities based on the search criteria. To
view the list of all data entities, clear any applied filters.
Note:
This option is not available for data entities that are loaded using OCI
GoldenGate.
• Truncate – If the table is already present in the target schema, deletes all the data
from the selected table. Nothing is dropped.
Note:
For Delta Share data loads, the Data Load Detail page offers only a single load option. You cannot apply different actions - incremental merge, incremental append, recreate, truncate, append - on the data entities before loading them to the target schema. This makes sure that the data is consistent between the Delta Sharing server and the target schema.
• Append – If the table is already present in the target schema, adds rows to the
table.
• Do Not Load – Excludes the selected data entity from the data load job. After you click Save, these data entities are no longer available for future data load jobs.
You can select multiple data entities and apply different actions. The unsaved rows
are highlighted in bold.
Note:
These options are not available for Delta Share connections.
3. To specify how you want to store the source column names in the target tables,
click Advanced Options, which is on the right side of the Data Load Detail page.
Choose one of the following:
• Retain original names by enclosing all names with delimiters - Creates column names in the target table with the same names as in the source tables.
• Use no delimiters - This is the default selection. Converts all the column
names to upper case and replaces spaces and special characters with
underscores.
The following options are applicable to reserved words such as Date,
Timestamp, Start, and so on.
– Enclose with delimiters - This is the default selection. Encloses column
names that are reserved words with delimiters (not all column names).
– Use a prefix - Adds the specified prefix to column names that are
reserved words (not all column names).
For column names that have the same name after conversion, the names are suffixed with a numeric value to maintain uniqueness. For example, the column names Date, date, DATE, Item_@Code, Item$$Code, Item%%Code are created in the target table as DATE, DATE_0, DATE_1, ITEM__CODE, ITEM__CODE_0, ITEM__CODE_1.
Note:
Once the data load is run, the selected options are applied and retained for all
subsequent runs. You cannot change the configuration.
4. Click the Save icon to save the changes. A green checkmark ( ) in the row indicates that the changes are saved.
5. To start the data load, click the Start icon ( ).
After you have added the data loads to the workflow, click the Execute icon ( ) to execute them.
• Increase the Memory of ODI Agent
• Click the Actions icon ( ) next to the selected Data Entity to perform the following operations:
– Select Edit to edit the existing details.
– Select Preview to preview the selected Data Entity. If the data entity belongs to an Oracle database, you can also view statistics of the table. See View Statistics of Data Entities for more details.
– Select Delete to delete the selected Data Entity.
• To delete the Data Entities in bulk, in the Data Entities page, select the check
boxes of the respective Data Entities and click Delete.
• You can also search for the required Data Entity to view its details, based on the following filters:
– Name of the Data Entity
– Connection for which the Data Entity was created
– Schema to which the Data Entity is associated
– Tag that is associated with the Data Entity
Note:
The import of BICC PVOs can take a long time depending on the number of
selected objects. To improve performance, Oracle recommends that you use a
mask to filter and limit the number of PVOs that you want to import.
5. Choose a Mask/filter if you don't want to import every object in the schema.
Depending on the Connection Type, you will be presented with further options for
importing.
Note:
For Oracle Object Storage connections, this value is case-sensitive. If Batch
similar files is set to True, all the files that match the mask and have the same
structure are grouped together into a single data entity.
6. [For Oracle Financials Cloud connections only] From the list in the Resources section,
select the items that you want to import. When the import process completes, a table is
created for each selected resource.
7. [For REST server connections only] In the Resources section, do the following:
• In the Resource URI field, enter the URL of the REST service you want to
import resources from.
• Click the + icon.
• In the Name column enter an identifier for the resource.
• In the Operation URI column enter the URI of the resource.
• Click Test Resource to check whether the entries are valid.
8. Click Start.
A Job is created and the corresponding Job ID is displayed so that you can track the session. Click the Job ID to view the details of the job.
Upon successful execution of the job, all the selected Data Entities are imported. Click the Refresh icon at the right corner of the Data Entities page to see the newly imported Data Entities.
Note:
Oracle Financials Cloud connections are not listed here because you
cannot manually create data entities for such connections. You can only
import data entities from Oracle Financials Cloud REST endpoints using
the Import Data Entities page. See Import Data Entities.
• Existing Database Schema (ones that you've imported before, potentially replacing existing data entities).
From the Schema drop-down, select the required schema.
Note:
For Oracle Object Storage connections, the Schema drop-down lists the name
of the bucket that you specified in the URL when you created the connection.
2. Click the Add Data Entity icon present on the top right corner of the target component.
3. The Add Data Entity page appears, allowing you to configure the following details of the target component:
General tab
• In the Name text box, enter the name of the newly created Data Entity.
• The Alias text box is auto-populated with the name of the newly created Data
Entity.
• From the Connection Type drop-down, select the required connection from
which you wish to add the newly created Data Entity.
• The Server drop-down is populated with the server name specified at the time of connection creation. From the Server drop-down, select the required server name from which you wish to add the newly created Data Entity.
• From the Schema drop-down, select the required schema.
• Click Next.
Columns tab
It allows you to create, remove, or edit the column definitions.
• Click the Add Columns icon to add new columns to the newly created Data Entity.
A new column is added to the displayed table.
• The table displays the following columns:
– Name
– Data Type - Click the cell to configure the required Data Type.
– Scale
– Length
– Actions - Click the cross icon to delete the created column.
• To delete the columns in bulk, select the columns and click the Delete
icon.
• To search for the required column details, enter the required column name in the Search text box and press Enter. The details of the required column are displayed.
• Click Next.
Preview Data Entity tab
It displays a preview of all the created columns and their configured details. If the
data entity belongs to an Oracle database, you can also view statistics of the table.
See View Statistics of Data Entities for more information.
4. Click Save.
The new target Data Entity is created.
5. Expand the Properties Panel in the right pane to view the following settings of the
created components:
• General - Displays the Name of the component along with its Connection and
Schema details.
• Attributes - Displays the details of all the attributes associated with the component.
• Column Mapping - Click Auto Map to map all the columns automatically.
• Preview - Click to have a preview of the component.
• Options - Change the options as appropriate.
Note:
This feature is available for Oracle database tables only.
You can view the statistics of a selected data entity in one of the following ways:
• In the Data Entities list, click the Actions icon ( ) next to the Data Entity and click Preview. Select the Statistics tab to view the statistics of the selected data entity.
• On any data flow, click any source or target data entity, and expand the properties panel in the right pane. Click Preview.
The statistical data is presented as follows:
• The total number of rows and columns in the data entity is displayed at the top.
• The statistics panel displays the thumbnail graphs for each column with information about
the Min, Max, Distinct, and Null values.
• Two types of thumbnail representations are displayed based on the histogram:
– A bar chart represents data for frequency and top-frequency histograms. The bar chart shows the top 10 values by number of rows in the table.
– A table lists data for Hybrid and Height-Balanced histograms. The table displays the
entire data and is scrollable. The table displays the range for the values and the
percentage of rows in each range.
• You can click each thumbnail to view the statistics of the column in a new browser tab.
• The detailed view of each chart also shows the type of histogram.
Creating a Project
To create a new project:
1. Click Create Project on the Projects page.
2. On the Create Project page, provide a project name.
3. Click Create.
After creating the project, you're redirected to the Projects page.
Click the Actions icon ( ) of the respective data flow to edit, rename, copy, change folder, start, or delete the required data flow of the selected project.
Deleting a Project
When you delete a project, you also delete the resources it contains. Once you delete
a project, it cannot be restored.
Note:
Be sure to review all contained resources before you delete the project.
To delete a project, on the Projects page, select Delete from the Actions icon ( ) for
the project you want to delete or select a project to open its details page, and then
click Delete.
In the Delete Project dialog, click Delete to confirm the delete operation.
• Data Entities Panel: The data entity panel displays the Data Entities that are available to
use in your Data flows. The displayed list can be filtered using the Name and Tags fields.
The panel includes options that let you add schemas, import data entities, remove any of
the schemas that are associated with the data flow, and refresh data entities. See Add
Components for information about how to use these options.
• Database Functions Toolbar: The Database Functions toolbar displays the database functions that can be used in your data flows. Just like Data Entities, you can drag and drop the Database tools you want to use onto the design canvas. See Supported Database Functions for more information.
• Design Canvas: The design canvas is where you build your transformation logic. After
adding the Data Entities and Database Functions to the design canvas, you can connect
them in a logical order to complete your data flows.
• Properties Panel: The properties panel displays the properties of the selected object on the design canvas. The Properties Panel is grouped into the following tabs: General, Attributes, Preview Data, Column Mapping, and Options. Not all tabs are available; they vary based on the selected object. See Component Properties to know more about these options.
• Status Panel: When you run a data flow, the Status Panel shows the status of the job
that is running in the background to complete the request. You can see the status of the
job that is currently running or the status of the last job. For more information about the
Status panel, see Monitor Status of Data Loads, Data Flows, and Workflows.
After designing the required data flow:
• Click the Maximize icon ( ) to maximize or minimize the created data flow diagram in the design canvas.
• Contains
5. Oracle Spatial and Graph
It contains the following components:
• Buffer Dim
• Buffer Tol
• Distance Dim
• Distance Tol
• Nearest
• Simplify
• Point
• Geocode Tools:
Note:
The following Geocode Tools work only in a non-Autonomous Database environment.
– Geocode As Geometry
– Geocode
– Geocode Addr
– Geocode All
– Geocode Addr All
– Reverse Geocode
Note:
The following Geocode Tool works only in an Autonomous Database
environment.
– Geocode Cloud
• Spatial Join
Add Components
Add the data entities and database functions to the Design Canvas, and connect them in a
logical order to complete your data flows.
To add components to your data flow:
1. In the Data Entities panel, click Add a Schema to add schemas that contain the data
entities that you want to use in the data flow.
2. In the Add a Schema page, select the connection and schema name.
3. Click Import.
4. In the Import Data Entities page, select the Type of Objects you want to import.
Choose a Mask/filter if you don't want to import every object in the schema and
click Start.
5. The Data Entities panel lists the imported data entities. The panel includes various
options that let you do the following:
• Refresh Data Entities – Click the Refresh icon to refresh the displayed list.
• Name - Search for data entities by name.
• Tags - Filter the data entities by the name of the tag used.
• Import Data Entities - Right-click the schema to see this option. Use this
option to import the data entities.
• Remove Schema - Right-click the data entity to see this option. Use this
option to remove the schema from the list. Note that this option does not
delete the schema, it only removes the association of the schema with this
data flow.
6. Similarly add more schemas to the Data Flow, if required.
7. Drag the required Data Entities that you want to use in the data flow and drop
them on the design canvas.
8. From the Transformations toolbar, drag the transformation components that you want to use in the data flow and drop them on the design canvas.
9. Select an object on the design canvas, and drag the Connector icon ( ) next
to it to connect the components.
10. After saving the data flow, a Transfer icon may be overlaid on one or more of the component connections. This indicates that ODI has detected that an additional step is required to move the data between data servers. You can click this icon to view the properties associated with this step.
• Component Properties
The Properties Panel displays various settings for components selected in the
Design Canvas.
Component Properties
The Properties Panel displays various settings for components selected in the Design
Canvas.
Depending on the component selected, you may see any of the following icons:
• General ( ) - Displays the name of the component along with its connection and
schema details. You can edit some of these properties.
• Attributes ( ) - Displays the details of all the attributes associated with the component.
• Column Mapping ( ) - Allows you to map all the columns automatically. See Map
Data Columns for more information.
• Preview ( ) - Displays a preview of the component. For Oracle tables, you can also
view the statistics of the selected data entity. See View Statistics of Data Entities for
details about the statistical information available.
Note:
The Auto compression option is available to the ADMIN user or to a user with the DWROLE role. For data flows that have schema users other than ADMIN, you need to either assign the DWROLE to the user or disable Auto compression to avoid execution errors.
4. To map the columns by Position or by Name, from the Auto Map drop-down
menu, select By Position or By Name.
To map the columns manually:
1. From the Auto Map drop-down menu, select Clear to clear the existing mappings.
2. Drag and drop the attributes from the tree on the left to map with the Expression
column.
3. To edit an expression, click the Edit icon of the respective column. The Expression Editor appears, allowing you to make the required changes (for example, you can simply add an expression such as UPPER, or open the Expression Editor to edit the expression).
Note:
Use the expression editor only if you have complex expressions for a
particular column.
4. Click OK.
2. Click the Simulate Code icon ( ) if you want to check the code that will run to
complete the tasks that are performed when you execute the data flow job. The
source and target details are displayed in different colors for ease of reference.
This is handy if you want to check if the mapping is correct before you run the job
or if the job fails. Note that the code cannot be used for debugging. For detailed
information about the job, see the Job Details page.
3. Click the Validate icon ( ) in the toolbar above the design canvas to validate the
data flow.
4. After a successful validation, click the Execute icon ( ) next to the Validate icon
to execute the data flow.
A message appears that displays the execution Job ID and name. To check the
status of the data flow, see the Status panel on the right below the Properties
Panel. For details about the Status panel, see Monitor Status of Data Loads, Data
Flows, and Workflows. This panel also shows the link to the Job ID that you can
click to monitor the progress on the Jobs page. For more information, see Create
and Manage Jobs.
For data flows created using Oracle Object Storage connections, the data from the
source CSV file is loaded into the target Oracle Autonomous Database. You can
also export data from an Oracle Autonomous Database table to a CSV file in
Oracle Object Storage.
6. In the Number of Attempts on Failure text box, enter the required value or click the up or down arrows next to the text box to select the required value. This value denotes the number of retry attempts that should happen after a schedule failure.
7. The Stop Execution After field denotes how long the schedule is allowed to run before it is stopped. It can be in Hours, Minutes, or Seconds. Select the value and its unit.
8. Select the status of the scheduled Data Flow or Workflow from the Status options. It can be Active, Inactive, or Active for Period. Through the Active for Period option you can configure the exact time frame of the created schedule: its starting and ending date and time, the time interval in which you wish to run the schedule, and exceptions, if any.
9. After configuring the above details, click Save.
The created schedules are listed in the Schedules page along with the following details:
• Resource
• Status
• Frequency
• Valid or Invalid
Click the Actions menu ( ) of the respective schedule to perform the following
operations:
• Click Edit to edit the details of the created schedule.
• Click Disable to disable the created schedule. The status of a schedule becomes Inactive when you disable it, and it is displayed in the Schedules list. Select the schedule and click Enable to enable it whenever required. You can enable it again for a specific period through the Enable for Period option.
• Click Validate to validate the created schedule.
• Click Delete to delete the created schedule. Upon confirmation, the created schedule is deleted.
To schedule a data flow or a workflow specific to a project:
• Click the Project title displayed on the Projects page.
The Project Details page appears.
• In the left pane, click Schedules.
The Schedules page appears. It displays all the schedules pertaining to the selected project.
Note:
This page is project specific and displays only the Data Flows and
Workflows pertaining to the selected project.
The new schedule is created and is added to the existing list of schedules for the project.
Note:
For data loads created using OCI GoldenGate, the status panel shows the
link to the GoldenGate Deployment Console and the status of the Extract
and Replicat processes.
• The name of the job and a link that takes you to the Jobs Details page where you can
see detailed information about the job and monitor the execution status. For more
information, see Create and Manage Jobs.
When you run a data load, multiple jobs run in the background to complete the request.
This panel shows a link for each job that executes when running a data load.
Introduction to Workflows
A workflow is made up of multiple flows organized in a sequence in which they must be
executed.
Each data flow is executed as a step. You can also add workflows as well as SQL queries as
steps within a workflow. When you execute a workflow, a data flow either succeeds or fails.
Depending on whether the first data flow succeeds or fails, you can choose the next data flow
that must be executed.
Note:
The drop-down lists only Oracle database connections.
• In the SQL textbox, add the query that you want to run.
9. Select the step and click the Connector icon ( ) next to it to connect it with the next
step.
10. After defining all the required Workflow details:
• Select a single step in the canvas and click the Execute Step icon ( ) to execute only the selected data flow or workflow.
To check the status of the workflow, see the Status panel on the right below the
Properties Panel. For details about the Status panel, see Monitor Status of Data
Loads, Data Flows, and Workflows. This panel shows the link to the Job ID that you
can click to monitor the execution status on the Jobs page. For more information
about jobs see Create and Manage Jobs.
To view the details of a workflow, click the name of the workflow and you are
navigated to the Workflow Details page.
In the left pane, you can search for the required Data Flow or Workflow using the
Name filter. In the Name text box, enter the name of the required Data Flow or
Workflow.
Select a step in the canvas and check the Properties Panel available at the right
side of the design canvas to know the following properties of the selected step in a
created Data Flow or Workflow:
• Name
• Linked Object
Note:
You cannot edit this field.
• Step
– First Step - Select the First Step check box to execute the selected step first in a Data Flow or Workflow.
Note:
You can select only a single step as the first step for execution in a Data Flow or Workflow.
Note:
This page is project specific and displays only the jobs pertaining to the
selected project.
Click the Actions menu ( ) to view details of a job, or to delete or rerun it, if required. If any of the steps failed, you can click them to see the error details.
To delete a job, select the check box of the respective job session and click Delete. Upon confirmation, the selected job is deleted.
Link Data
You can link to data in remote databases, in cloud storage, or in connected file
systems or the local file system. You can also use Database Links to access objects
on another database.
When you link to data in a remote database, in cloud storage, or in connected file systems or the local file system, the target object produced is an external table or a view. When you select that target table or view on the Data Load, Explore page in Database Actions, the source preview for the object shows the current data in the source object.
Linking to columns in a remote database, to files in cloud storage, or in connected file
systems or the local file system can be useful when the source data is being
continually updated. For example, if you have a database table with columns that are
updated for each new sales transaction, and you have run a data load job that links
columns of the source table to targets in your Oracle Autonomous Database, then
those targets have the new sales data as it is updated in the source.
In addition, you can use Database Links to access objects on another database. When you use Database Links, the other database need not be an Oracle Database system.
• Query External Data with Autonomous Database
Describes packages and tools to query and validate data with Autonomous
Database.
• Use Database Links with Autonomous Database
You can create database links from an Autonomous Database to another
Autonomous Database, to other Oracle databases, or to non-Oracle databases. In
addition, you can create database links from other databases to an Autonomous
Database.
Note:
If you are not using ADMIN user, ensure the user has the necessary privileges
for the operations the user needs to perform. See Manage User Privileges on
Autonomous Database - Connecting with a Client Tool for more information.
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'adb_user@example.com',
    password        => 'password');
END;
/
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'CHANNELS_EXT',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/channels.txt',
    format          => json_object('delimiter' value ','),
    column_list     => 'CHANNEL_ID VARCHAR2(2), CHANNEL_DESC VARCHAR2(20), CHANNEL_CLASS VARCHAR2(20)');
END;
/
Note:
Autonomous Database supports a variety of source file formats, including
compressed data formats. See DBMS_CLOUD Package Format Options and
the DBMS_CLOUD compression format option to see the supported compression
types.
You can now run queries on the external table you created in the previous step.
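For example, the following illustrative query selects a few rows through the external table (the column names come from the column_list above):
SELECT channel_id, channel_desc, channel_class
  FROM channels_ext
 WHERE ROWNUM <= 5;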
By default, the database expects all rows in the external data file to be valid and to match both the target data type definitions and the format definition of the files. If there are any rows in the source files that do not match the format options you specified, the query
reports an error. You can use DBMS_CLOUD parameters, like rejectlimit, to suppress these errors. As an alternative, you can also validate the external table you created to see the error messages and the rejected rows, so that you can change your format options accordingly (see the sketch below). See Validate External Data for more information.
For detailed information about the parameters, see CREATE_EXTERNAL_TABLE Procedure.
For detailed information about the parameters, see CREATE_EXTERNAL_TABLE
Procedure.
See DBMS_CLOUD URI Formats for more information on the supported cloud
object storage services.
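For instance, a minimal sketch of validating the external table from the earlier example (the call records rejected rows and error messages in log and bad tables):
BEGIN
  DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(
    table_name => 'CHANNELS_EXT');
END;
/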
• External Table Metadata Columns
The external table metadata helps you determine where data is coming from when
you perform a query.
• file$path: Specifies the file path text up to the beginning of the object name.
• file$name: Specifies the object name, including all the text that follows the final
"/".
For example, an illustrative query that selects the metadata columns along with table data, using the CHANNELS_EXT table from the earlier example:
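SELECT channel_id, file$path, file$name
  FROM channels_ext
 WHERE ROWNUM <= 5;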
Note:
The steps to use external tables are very similar for ORC, Parquet, and Avro. These
steps show working with a Parquet format source file.
The source file in this example, sales_extended.parquet, contains Parquet format data.
To query this file in Autonomous Database, do the following:
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'adb_user@example.com',
    password        => 'password');
END;
/
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name for creating external tables.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
2. Create an external table for ORC, Parquet, or Avro on top of your source files using the
procedure DBMS_CLOUD.CREATE_EXTERNAL_TABLE.
The procedure DBMS_CLOUD.CREATE_EXTERNAL_TABLE supports external files in the supported cloud object storage services, including Oracle Cloud Infrastructure Object Storage, Azure Blob Storage, Amazon S3, and Amazon S3-compatible services (including Oracle Cloud Infrastructure Object Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage). The credential is a table-level property; therefore, the external files must be on the same object store.
By default, the columns created in the external table automatically map their data types to
Oracle data types for the fields found in the source files and the external table column
names match the source field names.
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'sales_extended_ext',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales_extended.parquet',
    format          => '{"type":"parquet", "schema": "first"}');
END;
/
Note:
If the column_list parameter is specified, then you provide the column names and data types for the external table, and the schema value, if specified, is ignored. Using column_list you can limit the columns in the external table. If column_list is not specified, then the schema default value is first.
3. You can now run queries on the external table you created in the previous step:
DESC sales_extended_ext;
Name Null? Type
-------------- ----- --------------
PROD_ID NUMBER(10)
CUST_ID NUMBER(10)
TIME_ID VARCHAR2(4000)
CHANNEL_ID NUMBER(10)
PROMO_ID NUMBER(10)
QUANTITY_SOLD NUMBER(10)
AMOUNT_SOLD NUMBER(10,2)
GENDER VARCHAR2(4000)
CITY VARCHAR2(4000)
STATE_PROVINCE VARCHAR2(4000)
INCOME_LEVEL VARCHAR2(4000)
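You can then select rows through the external table; the following query is illustrative:
SELECT prod_id, cust_id, quantity_sold, amount_sold
  FROM sales_extended_ext
 WHERE ROWNUM <= 10;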
This query shows values for rows in the external table. If you want to query this data
frequently, after examining the data you can load it into a table with
DBMS_CLOUD.COPY_DATA.
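A minimal sketch of such a load, assuming a target table sales_extended with a matching column structure has already been created:
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'sales_extended',   -- assumed pre-created target table
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales_extended.parquet',
    format          => '{"type":"parquet"}');
END;
/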
See CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files and
COPY_DATA Procedure for Avro, ORC, or Parquet Files for more information.
See DBMS_CLOUD URI Formats for information on supported cloud object storage services.
• Using the file_uri_list value in combination with the format parameter: Autonomous Database analyzes Cloud Object Store file path information supplied with this parameter to determine the partition columns and data types (or you can manually specify the partition columns and data types).
This type of partitioning provides a synchronization routine to handle changes when external partition files are added or removed, as sketched below.
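For example, a minimal sketch of the synchronization call (the table name mysales matches the API example later in this section):
BEGIN
  DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE(
    table_name => 'MYSALES');  -- refreshes the partitions after files are added or removed
END;
/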
folder/subfolder format. For example, on Cloud Object Store a Hive format data file is
stored as follows:
table/partition1=partition1_value/partition2=partition2_value/data_file.csv
Files saved in Hive partitioned format provide partition information in the data file path name.
The data file path name includes information about the object contents, including partition
column names and partition column values (the data file does not include the partition
columns and their associated values).
For example, consider an external partitioned SALES table created from Hive format data on
Cloud Object Store:
.../sales/country=USA/year=2020/month=01/file1.csv
.../sales/country=USA/year=2020/month=01/file2.csv
.../sales/country=USA/year=2020/month=02/file3.csv
.../sales/country=USA/year=2020/month=03/file1.csv
.../sales/country=FRA/year=2020/month=03/file1.csv
The Hive format partition information shows that the data files in Cloud Object Store are partitioned by country, year, and month, and that the values for these partition columns are specified within the Hive format path name for each data file (the path name includes values for the partitioned columns: country, year, and month). The column names in the path are used by the API to simplify the table definition.
In contrast, folder partitioned format stores data using only the partition column values in the path:
table/partition1_value/partition2_value/*.parquet
The path includes both partition column values, in partition column order, and the data files.
Autonomous Database allows you to create an external partitioned table from folder format
data and you can perform a query using the specified partitions.
Files saved in folder partitioned format provide the data partition column values in the file
name. Unlike Hive, the paths do not include the column name, therefore the column names
must be provided. The order of partition columns is important and the order in the file name
for column partition names must match the order in the partition_columns parameter.
For example, a folder partitioned data file contains only the non-partition columns, such as:
tents, 291
canoes, 22
backpacks, 378
The country values in the path name, and the time period values for month and year, are not specified as columns within the data file. The partition column values are specified only in the path name, with the values shown: USA, 2020, and 02. After you create an external partitioned table with this data file, you can use the partition columns and their values when you run a query on the external partitioned table.
For example, the following illustrative query filters on the partition columns (using the mysales table defined by the API call later in this section):
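SELECT product, units
  FROM mysales
 WHERE country = 'USA'
   AND year    = 2020
   AND month   = '02';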
The benefit of creating an external partitioned table with data generated as Hive format partitioned data is that the query engine is optimized to prune partitions: the query selects data from the correct partition only and needs to scan only a single data file. Thus, the query above would only require a scan of the file3.csv file (/sales/country=USA/year=2020/month=02/file3.csv). For large amounts of data, such partition pruning can provide significant performance improvements.
Using standard Oracle Database external tables, the partition column must be
available as a column within the data file to use it for queries or partition definitions.
Without the special handling that is available with external partitioned tables on
Autonomous Database, this would be a problem if you want to use data stored in Hive
format on Cloud Object Store, as you would need to regenerate the data files to
include the partition as a column in the data file.
When you use structured data, such as Parquet, Avro, or ORC files stored in folder format on
Cloud Object Store, the columns and their data types are known, and you do not need to
specify the column list as is required with unstructured data. To create the partitioned external
tables, use the procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE to specify the partition
columns and their types as follows:
• The root for the file list is specified in the path name with the file_uri_list parameter.
For example, http://.../sales/*
• The column_list parameter is not required for structured files. If you do not specify the
column list, you must define the partition columns and their data types when you create
the external partitioned table. Use the option partition_columns in the format
parameter to specify the partition columns and their data types.
• The generated DDL includes the columns specified in the path name.
See Query External Partitioned Data with Hive Format Source File Organization and Query
External Partitioned Data with Folder Format Source File Organization for complete
examples.
• External Partitioning: CSV Source Files with Hive-style Folders
Shows how to create external partitioned tables with CSV source files stored on Cloud
Object Store in Hive-style folders.
• External Partitioning: CSV Source Files with Simple Folders
Shows how to create external partitioned tables with CSV source files stored on Cloud
Object Store in simple folder format.
• External Partitioning: Parquet Source Files with Hive-style Folders
Shows how to create external partitioned tables with Parquet source files stored on Cloud
Object Store in Hive-style folders.
• External Partitioning: Parquet with Simple Folders
Shows how to create external partitioned tables with Parquet source files stored on Cloud
Object Store in simple folder format.
CSV source files with Hive-style folders. Sample Object Store files:
.../sales/country=USA/year=2020/month=01/file1.csv
.../sales/country=USA/year=2020/month=01/file2.csv
.../sales/country=USA/year=2020/month=02/file3.csv
.../sales/country=USA/year=2020/month=03/file1.csv
.../sales/country=FRA/year=2020/month=03/file1.csv
API:
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE (
    table_name      => 'mysales',
    credential_name => 'mycredential',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/.../sales/*.csv',
    column_list     => 'product varchar2(100), units number, country varchar2(100), year number, month varchar2(2)',
    field_list      => 'product, units', -- country, year, and month are taken from the file path
    format          => '{"type":"csv", "partition_columns":["country","year","month"]}');
Note:
The partition_columns in the format parameter must match the column
names found in the path (for example, the country column matches
“country=…”)
CSV source files with simple folders. Sample Object Store files:
.../sales/USA/2020/01/file1.csv
.../sales/USA/2020/01/file2.csv
.../sales/USA/2020/02/file3.csv
.../sales/USA/2020/03/file1.csv
.../sales/FRA/2020/03/file1.csv
API:
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
    table_name      => 'mysales',
    credential_name => 'mycredential',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/.../sales/*.csv',
    column_list     => 'product varchar2(100), units number, country varchar2(100), year number, month varchar2(2)',
    field_list      => 'product, units', -- country, year, and month are not in the file, so they are not listed in the field list
    format          => '{"type":"csv","partition_columns":["country", "year", "month"]}');
Note:
The API call is the same as in the previous example, but the order of the
partition_columns in the format parameter is significant because the
column name is not in the file path.
Parquet source files with Hive-style folders. Sample Object Store files:
.../sales/country=USA/year=2020/month=01/file1.parquet
.../sales/country=USA/year=2020/month=01/file2.parquet
.../sales/country=USA/year=2020/month=02/file3.parquet
.../sales/country=USA/year=2020/month=03/file1.parquet
.../sales/country=FRA/year=2020/month=03/file1.parquet
API:
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE (
    table_name      => 'mysales',
    credential_name => 'mycredential',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/.../sales/*.parquet',
    format          =>
        json_object('type' value 'parquet',
                    'schema' value 'first',
                    'partition_columns' value
                        json_array(
                            json_object('name' value 'country', 'type' value 'varchar2(100)'),
                            json_object('name' value 'year',    'type' value 'number'),
                            json_object('name' value 'month',   'type' value 'varchar2(2)')
                        )
        )
);
Note:
The column_list parameter is not specified. As shown, for each partition column
specify both the name and data type in the format parameter partition_columns.
Parquet source files with simple folders. Sample Object Store files:
.../sales/USA/2020/01/file1.parquet
.../sales/USA/2020/01/file2.parquet
.../sales/USA/2020/02/file3.parquet
.../sales/USA/2020/03/file1.parquet
.../sales/FRA/2020/03/file1.parquet
API:
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE (
    table_name      => 'mysales',
    credential_name => 'mycredential',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/.../sales/*.parquet',
    format          =>
        json_object('type' value 'parquet',
                    'schema' value 'first',
                    'partition_columns' value
                        json_array(
                            json_object('name' value 'country', 'type' value 'varchar2(100)'),
                            json_object('name' value 'year',    'type' value 'number'),
                            json_object('name' value 'month',   'type' value 'varchar2(2)')
                        )
        )
);
Note:
The column_list parameter is not specified. You must include both the
name and data type for the partition columns. In addition, the order of the
partition_columns in the format clause matters because the column name
is not in the file path.
Query External Partitioned Data with Hive Format Source File Organization
Use DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE to create an external partitioned table
and generate the partitioning information from the Cloud Object Store file path.
Consider the following sample source files in Object Store:
custsales/month=2019-01/custsales-2019-01.csv
custsales/month=2019-02/custsales-2019-02.csv
custsales/month=2019-03/custsales-2019-03.csv
With this naming, the values for month are captured within the object name.
To create a partitioned external table with data stored in this sample Hive format, do
the following:
1. Store Object Store credentials using the
procedure DBMS_CLOUD.CREATE_CREDENTIAL.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'DEF_CRED_NAME',
    username        => 'adb_user@example.com',
    password        => 'password' );
END;
/
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name for creating external tables.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
2. Create an external partitioned table on top of your source files using the procedure
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE.
The procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE supports external partitioned
files in the supported cloud object storage services. The credential is a table level
property; therefore, the external files must all be on the same cloud object store.
For example:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
TABLE_NAME => 'sales_sample',
CREDENTIAL_NAME => 'DEF_CRED_NAME',
FILE_URI_LIST => 'https://objectstorage.us-
ashburn-1.oraclecloud.com/n/namespace-string/b/moviestream_landing/o/
sales_sample/*.parquet',
FORMAT => '{"type":"parquet", "schema":
"first","partition_columns":[{"name":"month","type":"varchar2(100)"}]}');
END;
/
• format: defines the options you can specify to describe the format of the
source file. The partition_columns format parameter specifies the names of
the partition columns.
If the data in your source file is encrypted, decrypt the data by specifying the
encryption format option. See Decrypt Data While Importing from Object
Storage for more information on decrypting data.
See DBMS_CLOUD Package Format Options for more information.
In this example, namespace-string is the Oracle Cloud Infrastructure object
storage namespace and moviestream_landing is the bucket name. See Understanding
Object Storage Namespaces for more information.
The DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE call would result in the following
table definition:
...
   ( 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/moviestream_landing/o/sales_sample/month=2019-02/*.parquet'
   ))
PARALLEL;
Query External Partitioned Data with Folder Format Source File Organization
Use DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE to create an external partitioned table and
generate the partitioning information from the Cloud Object Store file path.
When you create an external table with folder format data files, you have two options for
specifying the types of the partition columns:
• You can manually specify the columns and their data types with column_list parameter.
See Query External Partitioned Data with Hive Format Source File Organization for an
example using the column_list parameter.
• You can let DBMS_CLOUD derive the data file columns and their types from information in
structured data files such as Avro, ORC, and Parquet data files. In this case, you use the
partition_columns option with the format parameter to supply the column names and
their data types for the partition columns and you do not need to supply the column_list
or the field_list parameters.
Consider the following sample source files in Object Store:
.../sales/USA/2020/01/sales1.parquet
.../sales/USA/2020/02/sales2.parquet
To create a partitioned external table with the Cloud Object Store file path defining the
partitions from files with this sample folder format, do the following:
1. Store your object store credentials using the
procedure DBMS_CLOUD.CREATE_CREDENTIAL.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password' );
END;
/
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
    table_name      => 'MYSALES',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales/*.parquet',
    format          =>
        json_object('type' value 'parquet', 'schema' value 'first',
                    'partition_columns' value
                        json_array(
                            json_object('name' value 'country', 'type' value 'varchar2(100)'),
                            json_object('name' value 'year',    'type' value 'number'),
                            json_object('name' value 'month',   'type' value 'varchar2(2)')
                        )
        )
  );
END;
/
If there are any rows in the source files that do not match the format options you specified,
the query reports an error. As an alternative, you can validate the external partitioned table
you created to see the error messages and the rejected rows, so that you can change your
format options accordingly. See Validate External Data and Validate External Partitioned
Data for more information.
3. You can now run queries on the external partitioned table you created in the
previous step.
Your Autonomous Database takes advantage of the partitioning information of your
external partitioned table, ensuring that the query only accesses the relevant data
files in the Object Store. For example, the following query reads data files from only one
partition:
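A minimal sketch, assuming the sales_sample table created in the previous step (month is
the partition column):

SELECT COUNT(*)
FROM   sales_sample
WHERE  month = '2019-01';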
Note:
Only use DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE when the partitioned
external table is created with DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE and
the file_uri_list parameter.
BEGIN
DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE(table_name => 'MYSALES');
END;
/
2. The partition information is now refreshed and you can run queries on the updated
external partitioned table, with new partitions available and with removed partitions no
longer available in the external partitioned table.
There are two ways to create an external partitioned table on Autonomous Database:
• The first version is for data files whose Cloud Object Store path encodes the partitioning,
in either Hive or folder format; DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE derives the
partitions from the file paths. See Query External Partitioned Data with Hive Format Source
File Organization for information on this type of external partitioned table usage.
• The second version is for data files that do not follow such a path naming convention.
When you create a partitioned external table in this way you include a partitioning clause in
the partitioning_clause parameter. The partitioning clause that you include depends upon
the data files in the Cloud and the type of partitioning you use. This section describes this
type of external partitioned table usage.
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password' );
END;
/
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name for creating external tables.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
2. Create an external partitioned table on top of your source files using the procedure
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE.
The procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE supports external partitioned
files in the supported cloud object storage services. The credential is a table level
property; therefore, the external files must be on the same object store.
For example:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
table_name =>'PET1',
   credential_name => 'DEF_CRED_NAME',
   format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline', 'characterset' value 'us7ascii'),
   column_list => 'col1 number, col2 number, col3 number',
   partitioning_clause => 'partition by range (col1)
      (partition p1 values less than (1000) location
         ( ''https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file_11.txt''),
       partition p2 values less than (2000) location
         ( ''https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file_21.txt''),
       partition p3 values less than (3000) location
         ( ''https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file_31.txt'') )'
);
END;
/
If there are any rows in the source files that do not match the format options you specified,
the query reports an error. You can use the DBMS_CLOUD format option rejectlimit to
suppress these errors. As an alternative, you can validate the external partitioned table you
created to see the error messages and the rejected rows, so that you can change your
format options accordingly. See Validate External Data and Validate External Partitioned
Data for more information.
See CREATE_EXTERNAL_PART_TABLE Procedure for detailed information about the
parameters.
See DBMS_CLOUD URI Formats for more information on the supported cloud object
storage services.
If your data, internal or external, can be represented at a finer granularity as multiple logical
partitions, it is highly recommended to create a hybrid partitioned table with multiple internal
and external partitions, preserving the logical partitioning of your data for query access.
When you create a hybrid partitioned table, you include a partitioning clause in the
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE statement. The partitioning clause that you include
depends upon your data files and the type of partitioning you use. See Creating Hybrid
Partitioned Tables for more information.
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password' );
END;
/
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name for creating hybrid partitioned tables.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
2. Create a hybrid partitioned table on top of your source files using the procedure
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE.
The procedure DBMS_CLOUD.CREATE_HYBRID_PART_TABLE supports external partitioned
files in the supported cloud object storage services. The credential is a table level
property; therefore, the external files must be on the same object store.
For example:
BEGIN
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE(
table_name =>'HPT1',
credential_name =>'DEF_CRED_NAME',
     format => json_object('delimiter' value ',', 'recorddelimiter' value 'newline', 'characterset' value 'us7ascii'),
     column_list => 'col1 number, col2 number, col3 number',
     partitioning_clause => 'partition by range (col1)
        (partition p1 values less than (1000) external location
           ( ''https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file_11.txt'') ,
         partition p2 values less than (2000) external location
           ( ''https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file_21.txt'') ,
         partition p3 values less than (3000) )'
);
END;
/
If there are any rows in the source files that do not match the format options you
specified, the query reports an error. You can use the DBMS_CLOUD format option
rejectlimit to suppress these errors. As an alternative, you can also validate the hybrid partitioned
table you created to see the error messages and the rejected rows so that you can
change your format options accordingly. See Validate External Data and Validate Hybrid
Partitioned Data for more information.
See CREATE_HYBRID_PART_TABLE Procedure for detailed information about the
parameters.
See DBMS_CLOUD URI Formats for more information on the supported cloud object
storage services.
The source files to create this type of external table must be exported from the source system
using the ORACLE_DATAPUMP access driver in External Tables. See Unloading and Loading
Data with the ORACLE_DATAPUMP Access Driver for details on exporting using the
ORACLE_DATAPUMP access driver.
To create an external table you first move the Oracle Data Pump dump files that were
exported using the ORACLE_DATAPUMP access driver to your Object Store and then use
DBMS_CLOUD.CREATE_EXTERNAL_TABLE to create the external table.
The source files in this example are the Oracle Data Pump dump files, exp01.dmp and
exp02.dmp.
1. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password' );
END;
/
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name for creating external tables.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
2. Create an external table on top of your source files using the procedure
DBMS_CLOUD.CREATE_EXTERNAL_TABLE.
For example:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
table_name =>'CHANNELS_EXT',
   credential_name => 'DEF_CRED_NAME',
   file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp01.dmp,
                     https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp02.dmp',
   format => json_object('type' value 'datapump', 'rejectlimit' value '1'),
   column_list => 'CHANNEL_ID NUMBER, CHANNEL_DESC VARCHAR2(20), CHANNEL_CLASS VARCHAR2(20)' );
END;
/
By default the database expects all rows in the external data file to be valid and
match both the target data type definitions as well as the format definition of the
files. As part of validation, DBMS_CLOUD makes sure all the necessary dump file
parts are there and also checks that the dump files are valid and not corrupt (for
example exp01.dmp, exp02.dmp, and so on). You can use the DBMS_CLOUD
format option rejectlimit to suppress these errors. As an alternative, you can
also validate the external table you created to see the error messages and the
rejected rows. See Validate External Data for more information.
For detailed information about the parameters, see CREATE_EXTERNAL_TABLE
Procedure.
See DBMS_CLOUD URI Formats for more information on the supported cloud
object storage services.
Query Big Data Service Hadoop (HDFS) Data from Autonomous Database
You can create database links to Oracle Big Data Service from Autonomous Database.
Big Data Service provides enterprise-grade Hadoop as a service, with end-to-end security,
high performance, and ease of management and upgradeability. After deploying the Oracle
Cloud SQL Query Server to Big Data Service, you can easily query data available on Hadoop
clusters at scale from Autonomous Database using SQL. This allows you to blend data
coming from your data lake with data in your Autonomous Database.
Oracle Cloud SQL Query Server is an Oracle SQL engine that can be accessed using
database links in Autonomous Database. You create links to Big Data Service using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK. See Integrating Cloud SQL with Autonomous
Database for more information.
Note:
The DBMS_DCAT Package is available for performing the tasks required to
query Data Catalog object store data assets. See DBMS_DCAT Package.
Data Catalog
Data Catalog harvests data assets that point to the object store data sources you
want to query with Autonomous Database. From Data Catalog you can specify how
the data is organized during harvesting, supporting different file organization patterns.
As part of the Data Catalog harvesting process, you can select the buckets and files
you want to manage within the asset. For further information, see Data Catalog
Overview.
Object Stores
Object Stores have buckets containing a variety of objects. Some common types of objects
found in these buckets include: CSV, parquet, avro, json, and ORC files. Buckets generally
have a structure or a design pattern to the objects they contain. There are many different
ways to structure data and many different ways of interpreting these patterns.
For example, a typical design pattern uses top-level folders that represent tables. Files within
a given folder share the same schema and contain data for that table. Subfolders are often
used to represent table partitions (for example, a subfolder for each day). Data Catalog
refers to each top-level folder as a logical entity, and this logical entity maps to an
Autonomous Database external table.
Connection
A connection is an Autonomous Database connection to a Data Catalog instance. For each
Autonomous Database instance there can be connections to multiple Data Catalog
instances. The Autonomous Database credential must have rights to access Data Catalog
assets that have been harvested from object storage.
Harvest
A Data Catalog process that scans object storage and generates the logical entities from
your data sets.
Data Asset
A data asset in Data Catalog represents a data source, which includes databases, Oracle
Object Storage, Kafka, and more. Autonomous Database leverages Oracle Object Storage
assets for metadata synchronization.
Data Entity
A data entity in Data Catalog is a collection of data such as a database table or view, or a
single file and normally has many attributes that describe its data.
Logical Entity
In Data Lakes, numerous files typically comprise a single logical entity. For example, you
may have daily clickstream files, and these files share the same schema and file type.
A Data Catalog logical entity is a group of Object Storage files that are derived during
harvesting by applying filename patterns that have been created and assigned to a data
asset.
Data Object
A data object in Data Catalog refers to data assets and data entities.
Filename Pattern
In a data lake, data may be organized in different ways. Typically, folders capture files of the
same schema and type. You must register to Data Catalog how your data is organized.
Filename patterns are used to identify how your data is organized. In Data Catalog, you can
define filename patterns using regular expressions. When Data Catalog harvests a data
asset with an assigned filename pattern, logical entities are created based on the filename
pattern. By defining and assigning these patterns to data assets, multiple files can be
grouped as logical entities based on the filename pattern.
Synchronize (Sync)
Autonomous Database performs synchronizations with Data Catalog to automatically keep
its database up-to-date with respect to changes to the underlying data. Synchronization can
be performed manually, or on a schedule.
The sync process creates schemas and external tables based on the Data Catalog data
assets and logical entities. These schemas are protected, which means their metadata is
managed by Data Catalog. If you want to alter the metadata, you must make the
changes in Data Catalog. The Autonomous Database schemas will reflect any
changes after the next sync is run. For further details, see Synchronization Mapping.
Synchronization Mapping
The synchronization process creates and updates Autonomous Database schemas
and external tables based on Data Catalog data assets, folders, logical entities,
attributes and relevant custom overrides.
Note:
The DBMS_DCAT Package is available for performing the tasks required to query Data
Catalog object store data assets. See DBMS_DCAT Package.
Hive-style pattern:
{bucketName:.*}/{logicalEntity:[^/]+}.db/{logicalEntity:[^/]+}/.*
• Creates logical entities for sources that contain ".db" as the first part of the object name.
• To ensure uniqueness within the bucket, the resulting name is (db-name).(folder name).
Folder-style pattern:
{bucketName:[\w]+}/{logicalEntity:[^/]+}(?<!.db)/.*$
• Creates a logical entity based on the folder name off of the root.
• To prevent duplication with Hive, object names that have ".db" in them are skipped.
b. To create filename patterns, go to the Filename Patterns tab for your Data
Catalog and click Create Filename Pattern. For example, the following is the
Create Filename Pattern tab for the moviestream Data Catalog:
c. Now, associate your filename patterns with your data asset. Select Assign Filename
Patterns, check the patterns you want and click Assign.
For example, here are the patterns assigned to the phoenixObjStore data
asset:
b. After running the job, you see the logical entities. Use the Browse Data Assets to
review them.
In this example, you are looking at the customer-extension logical entity and its
attributes.
If you have a glossary, Data Catalog recommends categories and terms to associate
with the entity and its attributes. This provides a business context for the items.
Schemas, tables and columns are oftentimes not self-explanatory.
In our example, we want to differentiate between the different types of buckets and
the meaning of their content:
-- Run as admin
-------
-- Enable resource principal support
-------
exec dbms_cloud_admin.enable_resource_principal();
--------
-- Set the credentials to use for object store and data catalog
-- Connect to Data Catalog
-- Review connection
---------
-- Set credentials
exec dbms_dcat.set_data_catalog_credential(credential_name =>
'&oci_credential');
exec dbms_dcat.set_object_store_credential(credential_name =>
'&oci_credential');
b. Sync Data Catalog with Autonomous Database. Here, we'll sync all the object
storage assets:
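A minimal sketch of the sync call; the asset_list wildcard form of DBMS_DCAT.RUN_SYNC is
assumed here to select every Object Storage asset:

BEGIN
   DBMS_DCAT.RUN_SYNC(
      synced_objects => '{"asset_list": ["*"]}');
END;
/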
-- View log
select type, start_time, status, logfile_table from
user_load_operations; -- Logfile_Table will have the name of the
table containing the full log.
select * from dbms_dcat$1_log;
For example, below are the landing (moviestream_landing) and gold zone
(moviestream_gold) buckets in object storage:
b. To create filename patterns, go to the Filename Patterns tab for your Data
Catalog and click Create Filename Pattern. For example, the following is the
Create Filename Pattern tab for the moviestream Data Catalog:
In this example, the data asset connects to the compartment for the moviestream
object storage resource.
c. Now, associate your filename patterns with your data asset. Select Assign Filename
Patterns, check the patterns you want and click Assign.
For example, here are the patterns assigned to the amsterdamObjStore data asset:
b. After running the job, you see the logical entities. Use the Browse Data
Assets to review them.
In this example, you are looking at the sales_sample_parquet logical entity
and its attributes. Note that Data Catalog has identified the month attribute as
partitioned.
-- Run as admin
-------
-- Enable resource principal support
-------
exec dbms_cloud_admin.enable_resource_principal();
--------
-- Set the credentials to use for object store and data catalog
-- Connect to Data Catalog
-- Review connection
---------
-- Set credentials
exec dbms_dcat.set_data_catalog_credential(credential_name =>
'&oci_credential');
exec dbms_dcat.set_object_store_credential(credential_name =>
'&oci_credential');
b. Sync Data Catalog with Autonomous Database. Here, we'll sync all the object
storage assets:
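As in the previous walkthrough, a sketch of the sync call (assuming the asset_list wildcard
form of DBMS_DCAT.RUN_SYNC):

BEGIN
   DBMS_DCAT.RUN_SYNC(
      synced_objects => '{"asset_list": ["*"]}');
END;
/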
-- View log
select type, start_time, status, logfile_table from
user_load_operations; -- Logfile_Table will have the name of the
table containing the full log.
select * from dbms_dcat$1_log;
CREATE TABLE "DCAT$AMSTERDAMOBJSTORE_MOVIESTREAM_LANDING"."SALES_SAMPLE_PARQUET"
   (  "MONTH" VARCHAR2(4000) COLLATE "USING_NLS_COMP",
      "DAY_ID" TIMESTAMP (6),
      "GENRE_ID" NUMBER(20,0),
      "MOVIE_ID" NUMBER(20,0),
      "CUST_ID" NUMBER(20,0),
      ...
   ) DEFAULT COLLATION "USING_NLS_COMP"
   ORGANIZATION EXTERNAL
   ( TYPE ORACLE_BIGDATA
     ACCESS PARAMETERS
     ( com.oracle.bigdata.fileformat=parquet
       com.oracle.bigdata.filename.columns=["MONTH"]
       com.oracle.bigdata.file_uri_list="https://swiftobjectstorage.eu-amsterdam-1.oraclecloud.com/v1/tenancy/moviestream_landing/workshop.db/sales_sample_parquet/*"
       ...
     )
   )
   REJECT LIMIT 0
   PARTITION BY LIST ("MONTH")
   (PARTITION "P1" VALUES (('2019-01'))
      LOCATION ( 'https://swiftobjectstorage.eu-amsterdam-1.oraclecloud.com/v1/tenancy/moviestream_landing/workshop.db/sales_sample_parquet/month=2019-01/*'),
    PARTITION "P2" VALUES (('2019-02'))
      LOCATION ( 'https://swiftobjectstorage.eu-amsterdam-1.oraclecloud.com/v1/tenancy/moviestream_landing/workshop.db/sales_sample_parquet/month=2019-02/*'),
    ...
    PARTITION "P24" VALUES (('2020-12'))
      LOCATION ( 'https://swiftobjectstorage.eu-amsterdam-1.oraclecloud.com/v1/tenancy/moviestream_landing/workshop.db/sales_sample_parquet/month=2020-12/*'))
   PARALLEL
The default schema names are rather complicated. Let's simplify them by specifying both
the asset and the folder's Oracle-Db-Schema custom attribute in Data Catalog. Change
the data asset to PHX and the folders to landing and gold respectively. The schema name is a
concatenation of the two.
a. From Data Catalog, navigate to the moviestream_landing and moviestream_gold buckets
and change the folders' Oracle-Db-Schema custom attribute to landing and gold respectively.
Before change:
After change:
You can query data in Amazon S3 from Autonomous Database without having to manually
derive the schema for the external data sources and create external tables.
• Concepts Related to Querying with AWS Glue Data Catalog
An understanding of the following concepts is necessary for querying with Amazon
Web Services (AWS) Glue data catalogs.
• Mapping between Autonomous Database and AWS Glue
During the synchronization process, external tables are created in Autonomous
Database derived from the AWS Glue Data Catalog databases and tables over
Amazon S3.
• User Workflow for Querying with AWS Glue Data Catalog
The basic user workflow for querying AWS S3 data with AWS Glue Data Catalog
involves connecting to AWS Glue Data Catalog, synchronizing with Autonomous
Database to automatically create external tables, and then querying the S3 data.
• Example: Query with AWS Glue Data Catalog
This example steps you through the process of running queries over datasets
stored in Amazon Simple Storage Service (Amazon S3) using an AWS Glue Data
Catalog.
A table in an AWS Glue Data Catalog contains metadata about the underlying data, such as
its schema, statistics, user-defined metadata and other metadata. Tables in AWS Glue Data
Catalog can be created manually, or automatically using an AWS Glue crawler.
Table 3-2 OCI Data Catalog to AWS Glue Data Catalog Mapping
Users
The list below describes the different types of users that perform the user workflow actions.
• Database Data Catalog Administrator: Database user with DCAT_SYNC role.
• Database Data Catalog Query Administrator: Database user able to grant access on
automatically created external tables to other users.
• Data Analyst: Database user on Autonomous Database querying data in AWS S3 either by
querying automatically created external tables or by interacting directly with AWS Glue Data
Catalog.
• AWS Glue Data Catalog User: AWS user with access to an AWS Glue Data Catalog.
• AWS S3 Object Storage User: AWS user with access to data stored in AWS S3.
User Workflow
The table below describes each action included in the workflow and what type of user
can perform the action.
Note:
The DBMS_DCAT package is available for performing the tasks required to
query AWS S3 object storage using AWS Glue Data Catalog. See
DBMS_DCAT Package.
Note:
Only Amazon Web Services (AWS) credentials are supported. AWS Amazon Resource
Names (ARN) credentials are not supported.
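For example, a minimal sketch of creating the AWS credential used in the steps below (for
AWS, the username is the access key ID and the password is the secret access key; the key
values here are sample placeholders):

BEGIN
   DBMS_CLOUD.CREATE_CREDENTIAL(
      credential_name => 'CRED_AWS',
      username        => 'AKIAIOSFODNN7EXAMPLE',                     -- AWS access key ID (placeholder)
      password        => 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'  -- AWS secret access key (placeholder)
   );
END;
/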
b. Navigate to the data catalog, databases and tables to find existing objects.
In this example, some objects exist in Amazon S3 that AWS Glue has
previously crawled and created tables for as shown below:
b. Associate the credentials with the AWS Glue Data Catalog and Amazon S3 object
storage.
These procedure calls associate the data catalog and object storage, respectively,
with the credentials.
exec dbms_dcat.set_data_catalog_credential('CRED_AWS');
exec dbms_dcat.set_object_store_credential('CRED_AWS');
b. Use the all_glue_tables view to get a list of tables available for sync.
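For example, a sketch (assuming the ALL_GLUE_TABLES view exposes the catalog,
database, and table name columns shown here):

SELECT catalog_id, database_name, name
FROM   all_glue_tables
ORDER BY database_name, name;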
c. Synchronize Autonomous Database with two tables, store and item, found in
the parq database.
begin
dbms_dcat.run_sync(
synced_objects => '
{
"database_list": [
{
"database": "parq",
"table_list": ["store","item"]
}
]
}',
error_semantics => 'STOP_ON_ERROR');
end;
/
4. Inspect new objects in Autonomous Database and run a query on top of S3.
a. Use SQL Developer to view new objects created by the previous sync
operation.
The GLUE$PARQ_TPCDS_ORACLE_PARQ schema was generated and named
automatically by the dbms_dcat.run_sync procedure call.
To validate a partitioned external table, see Validate External Partitioned Data. This
procedure includes a parameter that lets you specify a specific partition to validate.
To validate a hybrid partitioned table, see Validate Hybrid Partitioned Data. This procedure
includes a parameter that lets you specify a specific partition to validate.
Before validating an external table you need to create the external table. To create an
external table, use the procedure for your table type, for example
DBMS_CLOUD.CREATE_EXTERNAL_TABLE. To validate the table, use
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE. For example:
BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
table_name => 'CHANNELS_EXT' );
END;
/
This procedure scans your source files and validates them using the format options specified
when you create the external table.
The validate operation, by default, scans all the rows in your source files and stops when a
row is rejected. If you want to validate only a subset of the rows, use the rowcount parameter.
When the rowcount parameter is set the validate operation scans rows and stops either when
a row is rejected or when the specified number of rows are validated without errors.
For example, the following validate operation scans 100 rows and stops when a row is
rejected or when 100 rows are validated without errors:
BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
table_name => 'CHANNELS_EXT',
rowcount => 100 );
END;
/
If you do not want the validate to stop when a row is rejected and you want to see all rejected
rows, set the stop_on_error parameter to FALSE. In this case VALIDATE_EXTERNAL_TABLE
scans all rows and reports all rejected rows.
If you want to validate only a subset of rows use the rowcount parameter. When rowcount is
set and stop_on_error is set to FALSE, the validate operation scans rows and stops either
when the specified number of rows are rejected or when the specified number of rows are
validated without errors. For example, the following example scans 100 rows and stops when
100 rows are rejected or when 100 rows are validated without errors:
BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
table_name => 'CHANNELS_EXT',
rowcount => 100,
stop_on_error => FALSE );
END;
/
See View Logs for Data Validation to see the results of validate operations in the
tables dba_load_operations and user_load_operations.
BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (
table_name => 'PET1',
partition_name => 'P1');
END;
/
This procedure scans your source files for partition P1 and validates them using the
format options specified when you create the external partitioned table.
The validation of a partitioned table by default validates all the partitions sequentially
until rowcount is reached. If you specify a partition_name then only a specific
partition is validated.
The validate operation, by default, scans all the rows in your source files and stops
when a row is rejected. If you want to validate only a subset of the rows, use the
rowcount parameter. When the rowcount parameter is set the validate operation scans
rows and stops either when a row is rejected or when the specified rowcount number
of rows are validated without errors.
For example, the following validate operation scans 100 rows and stops when a row is
rejected or when 100 rows are validated without errors:
BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (
table_name => 'PET1',
rowcount => 100 );
END;
/
If you do not want the validate to stop when a row is rejected and you want to see all
rejected rows, set the stop_on_error parameter to FALSE. In this case
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE scans all rows and reports all rejected
rows.
If you want to validate only a subset of rows use the rowcount parameter. When rowcount is
set and stop_on_error is set to FALSE, the validate operation scans rows and stops either
when the specified number of rows are rejected or when the specified number of rows are
validated without errors. For example, the following example scans 100 rows and stops when
100 rows are rejected or when 100 rows are validated without errors:
BEGIN
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (
table_name => 'PET1',
rowcount => 100,
stop_on_error => FALSE );
END;
/
See View Logs for Data Validation to see the results of validate operations in the tables
dba_load_operations and user_load_operations.
BEGIN
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE(
table_name => 'HPT1',
partition_name => 'P1');
END;
/
This procedure scans your source files for partition P1 and validates them using the format
options specified when you create the hybrid partitioned table.
The validation of a hybrid partitioned table by default validates all the external partitions
sequentially until rowcount is reached. If you specify a partition_name then only that specific
partition is validated.
The validate operation, by default, scans all the rows in your source files and stops when a
row is rejected. If you want to validate only a subset of the rows, use the rowcount parameter.
When the rowcount parameter is set the validate operation scans rows and stops either when
a row is rejected or when the specified rowcount number of rows are validated without errors.
For example, the following validate operation scans 100 rows and stops when a row is
rejected or when 100 rows are validated without errors:
BEGIN
 DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE (
    table_name => 'HPT1',
    rowcount => 100 );
END;
/
If you do not want the validate to stop when a row is rejected and you want to see all
rejected rows, set the stop_on_error parameter to FALSE. In this case
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE scans all rows and reports all rejected
rows.
If you want to validate only a subset of rows use the rowcount parameter. When
rowcount is set and stop_on_error is set to FALSE, the validate operation scans rows
and stops either when the specified number of rows are rejected or when the specified
number of rows are validated without errors. For example, the following example
scans 100 rows and stops when 100 rows are rejected or when 100 rows are validated
without errors:
BEGIN
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE (
table_name => 'HPT1',
rowcount => 100,
stop_on_error => FALSE );
END;
/
After you validate your source files you can see the result of the validate operation by
querying a load operations table:
• dba_load_operations: shows all validate operations.
• user_load_operations: shows the validate operations in your schema.
You can use these tables to view load validation information. For example, use the following
select operation to query user_load_operations:
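A minimal sketch of such a query, selecting the columns discussed below:

SELECT table_name, type, status, start_time, update_time, logfile_table, badfile_table
FROM   user_load_operations
WHERE  type = 'VALIDATE';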
Using this SQL statement with the WHERE clause on the TYPE column displays all of the load
operations with type VALIDATE.
The LOGFILE_TABLE column shows the name of the table you can query to look at the log of a
validate operation. For example, the following query shows the log for this validate operation:
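A sketch, assuming the LOGFILE_TABLE value returned by the query above is
VALIDATE$1_LOG (the actual table name varies per operation):

SELECT * FROM VALIDATE$1_LOG;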
The column BADFILE_TABLE shows the name of the table you can query to look at the rows
where there were errors during validation. For example, the following query shows the
rejected records for the above validate operation:
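Similarly, a sketch assuming the BADFILE_TABLE value is VALIDATE$1_BAD (again, the
actual table name varies per operation):

SELECT * FROM VALIDATE$1_BAD;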
Depending on the errors shown in the log and the rows shown in the BADFILE_TABLE, you can
correct the error by dropping the external table using the DROP TABLE command and
recreating it by specifying the correct format options in DBMS_CLOUD.CREATE_EXTERNAL_TABLE,
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE or DBMS_CLOUD.CREATE_HYBRID_PART_TABLE.
Note:
The LOGFILE_TABLE and BADFILE_TABLE tables are stored for two days for each
validate operation and then removed automatically.
Note:
See Create Database Links to an Oracle Database that is not an
Autonomous Database if the target for your database link is an Oracle
Database that is not an Autonomous Database.
The target database may restrict inbound connections (for example, by limiting access
your source database for the database link to work. If you limit access with Access Control
Lists (ACLs), you can find the outbound IP address of your source Autonomous Database
and allow that IP address to connect to your target database. When the target database is
another Autonomous Database, you can add the outbound IP address of the source
database to the ACL of the target database.
See Obtain Tenancy Details for information on finding the outbound IP address.
To create a database link to a target Autonomous Database without a wallet (TLS):
1. On the Autonomous Database instance where you are creating the database link, create
credentials to access the target Autonomous Database. The username and password you
specify with DBMS_CLOUD.CREATE_CREDENTIAL are the credentials for the target database
(you use these credentials to create the database link).
Note:
Supplying the credential_name parameter is required.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password' );
END;
/
Note:
You can use a vault secret credential for the target database credential in a
database link. See Use Vault Secret Credentials for more information.
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name.
2. Create a database link to the target Autonomous Database instance using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
For example:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'SALESLINK',
hostname => 'adb.eu-frankfurt-1.oraclecloud.com',
port => '1521',
service_name => 'example_medium.adb.example.oraclecloud.com',
credential_name => 'DB_LINK_CRED',
          directory_name => NULL);
END;
/
For the credentials you create in Step 1, the target database credentials, if the
password of the target user changes you can update the credential that contains the
target user's credentials as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password' );
END;
/
Note:
You can create links to Big Data Service using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK. See Query Big Data Service
Hadoop (HDFS) Data from Autonomous Database for more information.
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/cwallet.sso',
directory_name => 'DBLINK_WALLET_DIR');
END;
/
Note:
The credential_name you use in this step is the credentials for the
Object Store. In the next step you create the credentials to access the
target database.
Note:
Supplying the credential_name parameter is required.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password');
END;
/
Note:
You can use a vault secret credential for the target database credential in
a database link. See Use Vault Secret Credentials for more information.
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name.
6. Create the database link to the target database using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
For example:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'SALESLINK',
hostname => 'adb.eu-frankfurt-1.oraclecloud.com',
port => '1522',
service_name => 'example_medium.adb.example.oraclecloud.com',
ssl_server_cert_dn => 'CN=adb.example.oraclecloud.com,OU=Oracle
BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US',
credential_name => 'DB_LINK_CRED',
directory_name => 'DBLINK_WALLET_DIR');
END;
/
For the credentials you create in Step 5, the target database credentials, if the password of
the target user changes you can update the credential that contains the target user's
credentials as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password' );
END;
/
Note:
You can create links to Big Data Service using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK. See Query Big Data Service Hadoop
(HDFS) Data from Autonomous Database for more information.
• CREATE_DATABASE_LINK Procedure
• GET_OBJECT Procedure and Function
• UPDATE_CREDENTIAL Procedure
Note:
Database links from an Autonomous Database to a target Autonomous
Database that is on a private endpoint are only supported in commercial
regions and US Government regions.
This feature is enabled by default in all commercial regions.
This feature is enabled by default in US Government regions for newly
provisioned databases.
For existing US Government databases on a private endpoint, if you want to
create database links from an Autonomous Database to a target in a US
Government region, you can file a Service Request at Oracle Cloud Support
and request to enable the private endpoint in government regions database
linking feature.
US Government regions include the following:
• Oracle Cloud Infrastructure US Government Cloud with FedRAMP
Authorization
• Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5
Authorization
Topics
Note:
When your Autonomous Database instance is configured with a private
endpoint, set the ROUTE_OUTBOUND_CONNECTIONS database property to
'PRIVATE_ENDPOINT' to specify that all outgoing database links are subject to
the Autonomous Database instance private endpoint VCN's egress rules.
See Enhanced Security for Outbound Connections with Private Endpoints for
more information.
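For example, the property can be set as follows (run as the ADMIN user):

ALTER DATABASE PROPERTY SET ROUTE_OUTBOUND_CONNECTIONS = 'PRIVATE_ENDPOINT';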
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'PRIVATE_ENDPOINT_CRED',
username => 'NICK',
password => 'password'
);
END;
/
Note:
You can use a vault secret credential for the target database credential in
a database link. See Use Vault Secret Credentials for more information.
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name.
2. Create the database link to the target database using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
For example:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'PRIVATE_ENDPOINT_LINK',
hostname => 'exampleHostname',
port => '1521',
service_name => 'example_high.adb.oraclecloud.com',
credential_name => 'PRIVATE_ENDPOINT_CRED',
directory_name => NULL,
private_target => TRUE);
END;
/
Note:
If you set ROUTE_OUTBOUND_CONNECTIONS to PRIVATE_ENDPOINT, setting the
private_target parameter to TRUE is not required in this API. See
Enhanced Security for Outbound Connections with Private Endpoints for
more information.
3. Use the database link you created to access data in the target database.
For example:
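A minimal sketch, assuming a SALES table exists in the target schema:

SELECT COUNT(*) FROM sales@PRIVATE_ENDPOINT_LINK;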
Note:
For the credentials you create in Step 1, the Oracle Database credentials, if
the password of the target user changes you can update the credential that
contains the target user's credentials as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
Note:
The wallet file, along with the Database user ID and password provide
access to data in the target Oracle database. Store wallet files in a
secure location. Share wallet files only with authorized users.
2. Create credentials to access your Object Store where you store the cwallet.sso.
See CREATE_CREDENTIAL Procedure for information about the username and
password parameters for different object storage services.
3. Create a directory on Autonomous Database for the wallet file cwallet.sso.
For example:
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/cwallet.sso',
directory_name => 'WALLET_DIR');
END;
/
Note:
The credential_name you use in this step is the credentials for the Object
Store. In the next step you create the credentials to access the target database.
Note:
Supplying the credential_name parameter is required.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password');
END;
/
Note:
You can use a vault secret credential for the target database credential in
a database link. See Use Vault Secret Credentials for more information.
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name.
6. Create the database link to the target database using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
For example:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'PEDBLINK1',
hostname => 'example1.adb.ap-osaka-1.oraclecloud.com',
port => '1522',
service_name => 'example_high.adb.oraclecloud.com',
ssl_server_cert_dn => 'ssl_server_cert_dn',
credential_name => 'DB_LINK_CRED',
directory_name => 'WALLET_DIR',
private_target => TRUE);
END;
/
Note:
If you set ROUTE_OUTBOUND_CONNECTIONS to PRIVATE_ENDPOINT, setting
the private_target parameter to TRUE is not required in this API. See
Enhanced Security for Outbound Connections with Private Endpoints for
more information.
Note:
For the credentials you create in Step 5, the Oracle Database credentials, if the
password of the target user changes you can update the credential that contains
the target user's credentials as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
Note:
See Create Database Links from Autonomous Database to Another
Autonomous Database if the target for your database link is another
Autonomous Database.
2. Create credentials to access your Object Store where you store the wallet file
cwallet.sso. See CREATE_CREDENTIAL Procedure for information about the
username and password parameters for different object storage services.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
3. Create a directory on Autonomous Database for the wallet file cwallet.sso.
For example:
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/cwallet.sso',
directory_name => 'DBLINK_WALLET_DIR');
END;
/
Note:
The credential_name you use in this step is the credentials for the Object
Store. In the next step you create the credentials to access the target database.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
5. On the Autonomous Database instance, create credentials to access the target database.
The username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the
credentials for the target database that you use to create the database link.
Note:
Supplying the credential_name parameter is required.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password');
END;
/
Note:
You can use a vault secret credential for the target database credential in
a database link. See Use Vault Secret Credentials for more information.
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name.
6. Create the database link to the target database using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
For example:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'SALESLINK',
hostname => 'adb.eu-frankfurt-1.oraclecloud.com',
port => '1522',
service_name =>
'example_medium.adb.example.oraclecloud.com',
ssl_server_cert_dn =>
'CN=adb.example.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle
Corporation,L=Redwood City,ST=California,C=US',
credential_name => 'DB_LINK_CRED',
directory_name => 'DBLINK_WALLET_DIR');
END;
/
For the credentials you create in Step 5, the target database credentials, if the password of
the target user changes you can update the credential that contains the target user's
credentials as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password' );
END;
/
Note:
You can create links to Big Data Service using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK. See Query Big Data Service Hadoop
(HDFS) Data from Autonomous Database for more information.
Note:
Database links from an Autonomous Database to a target Oracle database
that is on a private endpoint are only supported in commercial regions and
US Government regions.
This feature is enabled by default in all commercial regions.
This feature is enabled by default in US Government regions for newly
provisioned databases.
For existing US Government databases on a private endpoint, if you want to
create database links from an Autonomous Database to a target in a US
Government region, file a Service Request at Oracle Cloud Support and request
that the private endpoint database linking feature be enabled for Government
regions.
US Government regions include the following:
• Oracle Cloud Infrastructure US Government Cloud with FedRAMP
Authorization
• Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5
Authorization
Depending on the type and the configuration of the target Oracle database:
• Another Oracle Database, such as on-premises or a Database Cloud Service
database, on a private endpoint that is configured for SSL (TCPS): In this
case you can create the database link with a wallet, and the database link
communicates with TCPS. See Create Database Links from Autonomous
Database to Oracle Databases on a Private Endpoint with a Wallet (mTLS) for
details.
• Oracle Database, such as on-premises or a Database Cloud Service
database, on a private endpoint that is configured for TCP: In this case you
create the database link without a wallet and the database link communicates with
TCP. See Create Database Links to Oracle Databases on a Private Endpoint
without a Wallet for details.
See How to Create a Database Link from Your Autonomous Database to a Database
Cloud Service Instance for more information.
Topics
– Define an egress rule in the source database's subnet security list or network
security group such that the traffic over TCP is allowed to the target database's
IP address and port number.
– Define an ingress rule in the target database's subnet security list or network
security group such that the traffic over TCP is allowed from the source
database IP address to the destination port.
See Configuring Network Access with Private Endpoints for information on
configuring private endpoints with ingress and egress rules.
Note:
When your Autonomous Database instance is configured with a private
endpoint, set the ROUTE_OUTBOUND_CONNECTIONS database parameter to
'PRIVATE_ENDPOINT' to specify that all outgoing database links are subject to
the Autonomous Database instance private endpoint VCN's egress rules.
See Enhanced Security for Outbound Connections with Private Endpoints for
more information.
Note:
This option is for target Oracle databases that are on a private endpoint and
do not have SSL/TCPS configured.
Perform the prerequisite steps, as required. See Prerequisites for Database Links from
Autonomous Database to a Target Autonomous Database on a Private Endpoint for
details.
To create a database link to a target database on a private endpoint using a secure
TCP connection without a wallet:
1. On Autonomous Database create credentials to access the target database. The
username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the
credentials for the target database used within the database link (where the target
database is accessed through the VCN).
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'PRIVATE_ENDPOINT_CRED',
username => 'NICK',
password => 'password'
);
END;
/
Note:
You can use a vault secret credential for the target database credential in a
database link. See Use Vault Secret Credentials for more information.
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name.
2. Create the database link to the target database using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
For example:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'PRIVATE_ENDPOINT_LINK',
hostname => 'exampleHostname',
port => '1522',
service_name => 'exampleServiceName',
ssl_server_cert_dn => NULL,
credential_name => 'PRIVATE_ENDPOINT_CRED',
directory_name => NULL,
private_target => TRUE);
END;
/
The following example specifies multiple hostnames with the rac_hostnames parameter:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'PRIVATE_ENDPOINT_LINK',
rac_hostnames => '["sales1-svr1.example.adb.us-ashburn-1.oraclecloud.com",
"sales1-svr2.example.adb.us-ashburn-1.oraclecloud.com",
"sales1-svr3.example.adb.us-ashburn-1.oraclecloud.com"]',
port => '1522',
service_name => 'exampleServiceName',
ssl_server_cert_dn => NULL,
credential_name => 'PRIVATE_ENDPOINT_CRED',
directory_name => NULL,
private_target => TRUE);
END;
/
Note:
If you set ROUTE_OUTBOUND_CONNECTIONS to PRIVATE_ENDPOINT,
setting the private_target parameter to TRUE is not required in this
API. See Enhanced Security for Outbound Connections with Private
Endpoints for more information.
3. Use the database link you created to access data in the target database.
For example:
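-- a sketch; EMPLOYEES is an assumed table in the target schema
SELECT * FROM employees@PRIVATE_ENDPOINT_LINK;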
Note:
If the password of the target user changes, you can update the credential you
created in Step 1 (the Oracle Database credentials), which contains the target
user's credentials, as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'PRIVATE_ENDPOINT_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
Note:
This option is for target Oracle databases that have SSL/TCPS configured and that
are on a private endpoint.
If the target Oracle database does not have SSL/TCPS configured, you have two options:
• You can configure the target Oracle database to use TCP/IP with SSL (TCPS)
authentication. See Configuring Transport Layer Security Authentication for information
on configuring SSL/TCPS.
• You can connect to the target Oracle database with TCP. See Create Database Links to
Oracle Databases on a Private Endpoint without a Wallet for details.
Perform the prerequisite steps, as required. See Prerequisites for Database Links from
Autonomous Database to a Target Autonomous Database on a Private Endpoint for details.
To create a database link to a target Oracle database on a private endpoint using TCP/IP with
SSL (TCPS) authentication:
1. Copy your target database wallet, cwallet.sso, containing the certificates for the target
database to Object Store.
Note:
The wallet file, along with the database user ID and password, provides
access to data in the target Oracle database. Store wallet files in a
secure location. Share wallet files only with authorized users.
2. Create credentials to access your Object Store where you store the cwallet.sso.
See CREATE_CREDENTIAL Procedure for information about the username and
password parameters for different object storage services.
3. Create a directory on Autonomous Database for the wallet file cwallet.sso.
For example:
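CREATE OR REPLACE DIRECTORY WALLET_DIR AS 'wallet_dir'; -- the file system path is illustrative
4. Upload the target database wallet cwallet.sso from Object Store to the directory you
created, using DBMS_CLOUD.GET_OBJECT. For example: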
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
cwallet.sso',
directory_name => 'WALLET_DIR');
END;
/
Note:
The credential_name you use in this step is the credential for the
Object Store. In the next step you create the credentials to access the
target database.
5. On the Autonomous Database instance, create credentials to access the target database.
The username and password you specify with DBMS_CLOUD.CREATE_CREDENTIAL are the
credentials for the target database that you use to create the database link.
Note:
Supplying the credential_name parameter is required.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password');
END;
/
Note:
You can use a vault secret credential for the target database credential in a
database link. See Use Vault Secret Credentials for more information.
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name.
6. Create the database link to the target database using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
For example:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'PEDBLINK1',
hostname => 'example1.adb.ap-osaka-1.oraclecloud.com',
port => '1522',
service_name => 'example_high.adb.oraclecloud.com',
ssl_server_cert_dn => 'ssl_server_cert_dn',
credential_name => 'DB_LINK_CRED',
directory_name => 'WALLET_DIR',
private_target => TRUE);
END;
/
Note:
If you set ROUTE_OUTBOUND_CONNECTIONS to PRIVATE_ENDPOINT, setting the
private_target parameter to TRUE is not required in this API. See Enhanced
Security for Outbound Connections with Private Endpoints for more information.
The following example specifies multiple hostnames with the rac_hostnames parameter:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'PEDBLINK1',
rac_hostnames => '["sales1-svr1.example.adb.us-ashburn-1.oraclecloud.com",
"sales1-svr2.example.adb.us-ashburn-1.oraclecloud.com",
"sales1-svr3.example.adb.us-ashburn-1.oraclecloud.com"]',
port => '1522',
service_name => 'example_high.adb.oraclecloud.com',
ssl_server_cert_dn => 'ssl_server_cert_dn',
credential_name => 'DB_LINK_CRED',
directory_name => 'WALLET_DIR',
private_target => TRUE);
END;
/
Note:
If the password of the target user changes, you can update the credential you
created in Step 5 (the Oracle Database credentials), which contains the target
user's credentials, as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
Note:
For complete information on supported versions, see Client Server
Interoperability Support Matrix for Different Oracle Versions (Doc ID
207303.1).
Note:
Supplying the credential_name parameter is required.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'AWS_REDSHIFT_LINK_CRED',
username => 'nick',
password => 'password'
);
END;
/
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'SERVICENOW_OAUTH',
params => JSON_OBJECT(
'gcp_oauth2' value JSON_OBJECT(
'client_id' value 'CLIENT_ID',
'client_secret' value 'CLIENT_SECRET',
'refresh_token' value 'Refresh_Token')));
END;
/
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'AWSREDSHIFT_LINK',
hostname => 'example.com',
port => '5439',
service_name => 'example_service_name',
credential_name => 'AWS_REDSHIFT_LINK_CRED',
gateway_params => JSON_OBJECT('db_type' value
'awsredshift'),
ssl_server_cert_dn => NULL);
END;
/
Obtain and download the ServiceNow REST config file to the specified
directory. For example:
exec DBMS_CLOUD.get_object('servicenow_dir_cred',
'https://objectstorage.<...>/
servicenow.rest','SERVICENOW_DIR');
Set the file_name value to the name of the REST config file you
downloaded, "servicenow.rest".
Then you can use the ServiceNow REST config file with either basic
authentication or OAuth2.0. See
HETEROGENEOUS_CONNECTIVITY_INFO View for samples.
snowflake: When you use the gateway_params parameter with db_type snowflake,
use the Snowflake account identifier as the hostname parameter. In this
case, the driver adds snowflakecomputing.com, so you do not pass
this part of the hostname explicitly. To find your Snowflake account
identifier, see Account Identifier Formats by Cloud Platform and Region.
For example: for the Snowflake account:
https://example-
marketing_test_account.snowflakecomputing.com
Set the hostname value to "example-marketing_test_account".
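For example, a sketch of creating a Snowflake database link with this hostname (the
port, service name, and credential are illustrative; create the credential with
DBMS_CLOUD.CREATE_CREDENTIAL as in the earlier examples):
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'SNOWFLAKE_LINK',
hostname => 'example-marketing_test_account',
port => '443', -- illustrative
service_name => 'example_database_name', -- illustrative
ssl_server_cert_dn => NULL,
credential_name => 'SNOWFLAKE_LINK_CRED',
gateway_params => JSON_OBJECT('db_type' value 'snowflake'));
END;
/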
The table name you specify when you use SELECT with Google BigQuery must be
in quotes. For example:
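-- a sketch; the link and table names are illustrative
SELECT COUNT(*) FROM "sales"@GOOGLE_BIGQUERY_LINK;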
Note:
If the password of the target user changes, you can update the credential you
created in Step 1 (the target database credentials), which contains the target
user's credentials, as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'AWS_REDSHIFT_LINK_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
Note:
Database links from an Autonomous Database to an Oracle MySQL Database
Service that is on a private endpoint are only supported in commercial regions and
US Government regions.
This feature is enabled by default in all commercial regions.
This feature is enabled by default in US Government regions for newly provisioned
databases.
For existing US Government databases on a private endpoint, if you want to create
database links from an Autonomous Database to a target in a US Government
region, file a Service Request at Oracle Cloud Support and request that the private
endpoint database linking feature be enabled for Government regions.
US Government regions include the following:
• Oracle Cloud Infrastructure US Government Cloud with FedRAMP
Authorization
• Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5
Authorization
Note:
Supplying the credential_name parameter is required.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'MYSQL_LINK_CRED',
username => 'NICK',
password => 'password'
);
END;
/
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name.
2. Create the database link to the Oracle MySQL Database Service using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
For example, to create a database link:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'MYSQL_LINK',
hostname => 'mysql.example.com',
port => '3306',
service_name => 'mysql.example_service_name',
ssl_server_cert_dn => NULL,
credential_name => 'MYSQL_LINK_CRED',
private_target => TRUE,
gateway_params => JSON_OBJECT('db_type' value 'mysql'));
END;
/
Note:
If the password of the target user changes, you can update the credential you created in
Step 1 (the target database credentials), which contains the target user's credentials, as
follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'MYSQL_LINK_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
Note:
Oracle uses Progress DataDirect connectors. The Database Support column
provides links to the Progress website where you can find the supported database
versions for each database type.
• HETEROGENEOUS_CONNECTIVITY_INFO View
The HETEROGENEOUS_CONNECTIVITY_INFO view lists connectivity information and
examples for connecting with PL/SQL using database links and Oracle Managed
Heterogeneous Connectivity.
HETEROGENEOUS_CONNECTIVITY_INFO View
The HETEROGENEOUS_CONNECTIVITY_INFO view lists connectivity information and
examples for connecting with PL/SQL using database links and Oracle Managed
Heterogeneous Connectivity.
Topics
Note:
Before you create database links to a target gateway, do the following:
1. Configure the Oracle Database Gateway to access a non-Oracle
database.
2. Configure Oracle Net Listener to handle incoming requests on the Oracle
Database Gateway.
3. Create a self-signed wallet on the Oracle Database Gateway.
For more information on Oracle Database Gateway, see Oracle Database Gateways in
Oracle Database Heterogeneous Connectivity User's Guide. Also see the Installation
and Configuration Guide and the Gateway User's Guide for the database you want to
connect to. For example, for Oracle Database Gateway for SQL Server see:
• Installing and Configuring Oracle Database Gateway for SQL Server
• Introduction to the Oracle Database Gateway for SQL Server in Oracle Database
Gateway for SQL Server User's Guide.
• Configure Oracle Net for the Gateway
Note:
The wallet file, along with the Database user ID and password provide
access to data available through the target gateway. Store wallet files in
a secure location. Share wallet files only with authorized users.
2. Create credentials to access the Object Store where you store the cwallet.sso.
See CREATE_CREDENTIAL Procedure for information about the username and
password parameters for different object storage services.
3. Create a directory on Autonomous Database for the wallet file cwallet.sso.
For example:
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/cwallet.sso',
directory_name => 'DBLINK_WALLET_DIR');
END;
/
5. On the Autonomous Database instance, create credentials to access the target database.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password'
);
END;
/
6. Create the database link to the target gateway using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK. For example (a sketch; the parameter
values are illustrative):
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'SALESLINK',
hostname => 'example.com',
port => '1522',
service_name => 'example_service_name',
ssl_server_cert_dn => 'ssl_server_cert_dn',
credential_name => 'DB_LINK_CRED',
directory_name => 'DBLINK_WALLET_DIR',
gateway_link => TRUE);
END;
/
If the password of the target user changes, you can update the credential you created
in Step 5 (the Oracle Database Gateway credentials), which contains the target user's
credentials, as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
This section covers the steps for using database links to connect from Autonomous Database
to a non-Oracle database through an Oracle Database Gateway, where the non-Oracle
database is on a private endpoint.
• Prerequisites to Create Database Links with Customer-Managed Heterogeneous
Connectivity to Non-Oracle Databases on a Private Endpoint
Lists the prerequisites to create database links from an Autonomous Database with
Customer-Managed Heterogeneous Connectivity to Non-Oracle Databases that are on a
Private Endpoint.
• Create Database Links with Customer-Managed Heterogeneous Connectivity to Non-
Oracle Databases on a Private Endpoint (without a wallet)
Create database links from an Autonomous Database to an Oracle Database Gateway to
access Non-Oracle databases that are on a private endpoint without a wallet (TCP).
• Create Database Links with Customer-Managed Heterogeneous Connectivity to Non-
Oracle Databases on a Private Endpoint (with a Wallet)
Create database links from an Autonomous Database to an Oracle Database Gateway to
access Non-Oracle databases that are on a private endpoint (connecting with a wallet
TCPS).
Prerequisites to Create Database Links with Customer-Managed Heterogeneous Connectivity to Non-Oracle
Databases on a Private Endpoint
Lists the prerequisites to create database links from an Autonomous Database with
Customer-Managed Heterogeneous Connectivity to Non-Oracle Databases that are on a
Private Endpoint.
To create a database link with Customer-Managed Heterogeneous Connectivity to Non-
Oracle Databases that are on a Private Endpoint:
• The target database must be accessible from the source database's Oracle Cloud
Infrastructure VCN. For example, you can connect to the target database when:
– The target database is on a private endpoint.
– Both the source database and the target database are in the same Oracle Cloud
Infrastructure VCN.
– The source database and the target database are in different Oracle Cloud
Infrastructure VCNs that are paired.
– For a target on a private endpoint, DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK
supports specifying a single hostname with the hostname parameter. On a private
endpoint, using an IP address, SCAN IP, or a SCAN hostname is not supported
(when the target is on a public endpoint, CREATE_DATABASE_LINK supports using an IP
address, a SCAN IP, or a SCAN hostname).
• The following ingress and egress rules must be defined for the private endpoint:
– Define an egress rule in the source database's subnet security list or network security
group such that the traffic over TCP is allowed to the target database's IP address
and port number.
– Define an ingress rule in the target database's subnet security list or network security
group such that the traffic over TCP is allowed from the source database IP address
to the destination port.
See Configuring Network Access with Private Endpoints for information on configuring
private endpoints with ingress and egress rules.
Note:
When your Autonomous Database instance is configured with a private
endpoint, set the ROUTE_OUTBOUND_CONNECTIONS database parameter to
'PRIVATE_ENDPOINT' to specify that all outgoing database links are subject to
the Autonomous Database instance private endpoint VCN's egress rules.
See Enhanced Security for Outbound Connections with Private Endpoints for
more information.
1. On the Autonomous Database instance, create credentials to access the target database.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password'
);
END;
/
2. Create the database link to the target gateway using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK. For example (a sketch; the parameter
values are illustrative):
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'SALESLINK',
hostname => 'example.com',
port => '1522',
service_name => 'example_service_name',
ssl_server_cert_dn => NULL,
credential_name => 'DB_LINK_CRED',
gateway_link => TRUE,
private_target => TRUE,
gateway_params => NULL);
END;
/
If the password of the target user changes, you can update the credential you created in
Step 1 (the Oracle Database Gateway credentials), which contains the target user's
credentials, as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
cwallet.sso',
directory_name => 'DBLINK_WALLET_DIR');
END;
/
Note:
The credential_name you use in this step is the credential for the Object
Store. In the next step you create the credentials to access the target gateway.
5. On the Autonomous Database instance, create credentials to access the target database.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'NICK',
password => 'password'
);
END;
/
6. Create the database link to the target gateway using
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK. For example:
BEGIN
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'SALESLINK',
hostname => 'example.com',
port => '1522',
service_name => 'example_service_name',
ssl_server_cert_dn => 'ssl_server_cert_dn',
credential_name => 'DB_LINK_CRED',
directory_name => 'DBLINK_WALLET_DIR',
gateway_link => TRUE,
private_target => TRUE,
gateway_params => NULL);
END;
/
When gateway_link is TRUE and gateway_params is NULL, this specifies that the
database link is to a customer-managed Oracle gateway.
Users other than ADMIN require privileges to run
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
7. Use the database link you created to access data on the target gateway.
For example:
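-- a sketch; CUSTOMERS is an assumed table in the target database
SELECT * FROM customers@SALESLINK;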
If the password of the target user changes, you can update the credential you created
in Step 5 (the Oracle Database Gateway credentials), which contains the target user's
credentials, as follows:
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name => 'DB_LINK_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
1. Download your Autonomous Database wallet. See Download Client Credentials (Wallets)
for more information.
2. Upload the wallet to the database instance where you want to create the link to the
Autonomous Database.
3. Unzip the Autonomous Database wallet:
Note:
The wallet file, along with the Database user ID and password provide access
to data in your Autonomous Database. Store wallet files in a secure location.
Share wallet files only with authorized users.
4. Set GLOBAL_NAMES to FALSE:

ALTER SYSTEM SET GLOBAL_NAMES = FALSE;

System altered.

Set GLOBAL_NAMES to FALSE to use a database link name without checking that the name
is different from the remote database name. When GLOBAL_NAMES is set to TRUE, the
database requires the database link to have the same name as the database to which it
connects. See GLOBAL_NAMES for more information.
5. Create the database link to the target Autonomous Database. Note that the security
section of the connection string includes my_wallet_directory, the path where you
unzipped the Autonomous Database wallet.
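For example, the following sketch shows the general form of the statement (the link
name, user, password, host, service name, and wallet path are illustrative):

CREATE DATABASE LINK ADBLINK
CONNECT TO NAME1 IDENTIFIED BY "password"
USING '(description=(address=(protocol=tcps)(port=1522)(host=example.adb.us-ashburn-1.oraclecloud.com))(connect_data=(service_name=example_high.adb.oraclecloud.com))(security=(my_wallet_directory=/u01/targetwallet)(ssl_server_dn_match=true)))';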
6. Use the database link you created to access data on the target database (your
Autonomous Database instance in this case):
For example:
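-- a sketch; the link and table names are illustrative
SELECT * FROM sales@SALESLINK;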
To list the database links, use the ALL_DB_LINKS view. See ALL_DB_LINKS for more
information.
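For example:

SELECT owner, db_link, host FROM all_db_links;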
For additional information, see:
• See CREATE DATABASE LINK for details on the statement.
• See Create Database Links from Autonomous Database to a Publicly Accessible
Autonomous Database with a Wallet (mTLS)
When you no longer need a database link, drop it with
DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK. For example:
BEGIN
DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK(
db_link_name => 'SALESLINK' );
END;
/
Topics
For example, to submit an HTTP request for a public host www.example.com, create an
Access Control List for the host:
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'www.example.com',
ace => xs$ace_type( privilege_list => xs$name_list('http'),
principal_name => 'ADMIN',
principal_type => xs_acl.ptype_db));
END;
/
Note:
If your Autonomous Database instance is on a private endpoint and you want
your UTL_HTTP calls to public hosts to be subject to your private endpoint
VCN's egress rules, set the ROUTE_OUTBOUND_CONNECTIONS database property
to PRIVATE_ENDPOINT.
See Enhanced Security for Outbound Connections with Private Endpoints for
more information.
See PL/SQL Packages Notes for Autonomous Database for information on restrictions
for UTL_HTTP on Autonomous Database.
To submit a request to a target host on a private endpoint, the target host must be
accessible from the source database's Oracle Cloud Infrastructure VCN. For example,
you can connect to the target host when:
• Both the source database and the target host are in the same Oracle Cloud
Infrastructure VCN.
• The source database and the target host are in different Oracle Cloud
Infrastructure VCNs that are paired.
• The target host is an on-premises network that is connected to the source
database's Oracle Cloud Infrastructure VCN using FastConnect or VPN.
Note:
Making UTL_HTTP calls to a private host is only supported in commercial
regions and US Government regions.
This feature is enabled by default in all commercial regions.
This feature is enabled by default in US Government regions for newly
provisioned databases.
For existing US Government databases on a private endpoint, if you want to
make UTL_HTTP calls from an Autonomous Database to a target in a US
Government region, file a Service Request at Oracle Cloud Support and request
that this feature be enabled for Government regions.
US Government regions include the following:
• Oracle Cloud Infrastructure US Government Cloud with FedRAMP
Authorization
• Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5
Authorization
To make a UTL_HTTP request to a target on a private endpoint, create an Access Control List
for the host.
For example:
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'www.example.com',
ace => xs$ace_type( privilege_list => xs$name_list('http'),
principal_name => 'ADMIN',
principal_type => xs_acl.ptype_db),
private_target => TRUE);
END;
/
As shown in this example, when you create an Access Control List for the host, specify the
private_target parameter with the value TRUE.
Note:
If you set ROUTE_OUTBOUND_CONNECTIONS to PRIVATE_ENDPOINT, setting the
private_target parameter to TRUE is not required in this API. See Enhanced
Security for Outbound Connections with Private Endpoints for more information.
SELECT UTL_HTTP.REQUEST(
url => 'https://www.example.com/',
https_host => 'www.example.com')
FROM dual;
See PL/SQL Packages Notes for Autonomous Database for information on restrictions for
UTL_HTTP on Autonomous Database.
When your Autonomous Database instance is on a private endpoint, to use UTL_HTTP with a
target proxy, the target proxy must be accessible from the source database's Oracle Cloud
Infrastructure VCN.
For example, you can connect using a proxy when:
• Both the source database and the proxy server are in the same Oracle Cloud
Infrastructure VCN.
• The source database and the proxy server are in different Oracle Cloud Infrastructure
VCNs that are paired.
• The proxy server is an on-premises network that is connected to the source database's
Oracle Cloud Infrastructure VCN using FastConnect or VPN.
Note:
Making UTL_HTTP calls to a proxy server on a private endpoint is only
supported in commercial regions and US Government regions.
This feature is enabled by default in all commercial regions.
This feature is enabled by default in US Government regions for newly
provisioned databases.
For existing US Government databases on a private endpoint, if you want to
make UTL_HTTP calls from an Autonomous Database to a target in a US
Government region, file a Service Request at Oracle Cloud Support and request
that this feature be enabled for Government regions.
US Government regions include the following:
• Oracle Cloud Infrastructure US Government Cloud with FedRAMP
Authorization
• Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5
Authorization
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host =>'www-proxy-example.com',
ace => xs$ace_type(privilege_list =>
xs$name_list('HTTP_PROXY'),
principal_name => 'APPUSER1',
principal_type => xs_acl.ptype_db),
private_target => TRUE);
END;
/
As shown in this example, when you create an Access Control List for the proxy
server, specify the private_target parameter with the value TRUE.
Note:
If you set ROUTE_OUTBOUND_CONNECTIONS to PRIVATE_ENDPOINT, setting
the private_target parameter to TRUE is not required in this API. See
Enhanced Security for Outbound Connections with Private Endpoints for
more information.
For example:
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host =>'example.com',
ace => xs$ace_type( privilege_list => xs$name_list('HTTP'),
principal_name => 'APPUSER1',
principal_type => xs_acl.ptype_db));
END;
/
BEGIN
UTL_HTTP.SET_WALLET('');
UTL_HTTP.SET_PROXY('www-proxy-example.com:80');
END;
/
SELECT UTL_HTTP.REQUEST(
url => 'https://www.example.com/',
https_host => 'www.example.com')
FROM dual;
See PL/SQL Packages Notes for Autonomous Database for information on restrictions for
UTL_HTTP on Autonomous Database.
Example
...
UTL_HTTP.SET_AUTHENTICATION (l_http_request, 'web_app_user', 'xxxxxxxxxxxx');
...
UTL_HTTP.SET_CREDENTIAL (
r IN OUT NOCOPY req,
credential IN VARCHAR2,
scheme IN VARCHAR2 DEFAULT 'Basic',
for_proxy IN BOOLEAN DEFAULT FALSE);
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'HTTP_CRED',
username => 'web_app_user',
password => '<password>' );
END;
/
DECLARE
l_http_request UTL_HTTP.REQ;
BEGIN
l_http_request := UTL_HTTP.BEGIN_REQUEST('https://www.example.com/v1/dwcsdev/NAME/dwcs_small_xt1.csv');
UTL_HTTP.SET_CREDENTIAL (l_http_request, 'HTTP_CRED','BASIC');
...
END;
This example first creates a request by invoking the BEGIN_REQUEST function and
sets HTTP authentication information in the HTTP request header by invoking the
SET_CREDENTIAL procedure. The web server needs this information to authorize
the request. The value l_http_request is the HTTP request, HTTP_CRED is the
credential name, and BASIC is the HTTP authentication scheme.
See UTL_HTTP for more information.
See PL/SQL Packages Notes for Autonomous Database for information on restrictions
for UTL_HTTP on Autonomous Database.
Move Files
Describes how to create directories and copy files to Autonomous Database.
• Creating and Managing Directories on Autonomous Database
• Copy Files Between Object Store and a Directory in Autonomous Database
Use the procedure DBMS_CLOUD.PUT_OBJECT to copy a file from a directory to Object
Store. Use the procedure DBMS_CLOUD.GET_OBJECT to copy a file from Object Store to a
directory.
• Bulk Operations for Files in the Cloud
The PL/SQL package DBMS_CLOUD offers parallel execution support for bulk file upload,
download, copy, and transfer activities, which streamlines the user experience
and delivers optimal performance for bulk file operations.
CREATE DIRECTORY creates the database directory object and also creates the file system
directory if it does not already exist. If the file system directory exists then CREATE DIRECTORY
only creates the database directory object. For example, the following command
creates the database directory named staging and creates the file system directory
stage:
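CREATE OR REPLACE DIRECTORY staging AS 'stage';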
You can also create subdirectories. For example, the following command creates the
database directory object sales_staging and the file system directory stage/sales:
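CREATE OR REPLACE DIRECTORY sales_staging AS 'stage/sales';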
When you create subdirectories, you do not have to create the initial file system
directory. For example, in the previous example, if the directory stage does not exist,
then the CREATE DIRECTORY command creates both directories stage and
stage/sales.
To add a directory, you must have the CREATE ANY DIRECTORY system privilege. The
ADMIN user is granted the CREATE ANY DIRECTORY system privilege. The ADMIN user
can grant CREATE ANY DIRECTORY system privilege to other users.
Notes:
• The file system directory is created under a database-managed path. For
example, the stage directory created above resides at a path such as:
/u03/dbfs/7C149E35BB1000A45FD/data/stage
• You can create a directory in the root file system to see all the files. Create the
ROOT_DIR directory, then use the following command to list all files:
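CREATE OR REPLACE DIRECTORY ROOT_DIR AS ''; -- a sketch; the empty path maps ROOT_DIR to the root of the file system
SELECT * FROM DBMS_CLOUD.LIST_FILES('ROOT_DIR');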
For example, the following command drops the database directory object staging:
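DROP DIRECTORY staging;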
The DROP DIRECTORY command does not delete files in the directory. If you want to delete the
directory and the files in the directory, first use the procedure DBMS_CLOUD.DELETE_FILE to
delete the files. See DELETE_FILE Procedure for more information.
To drop a directory, you must have the DROP ANY DIRECTORY system privilege. The ADMIN
user is granted the DROP ANY DIRECTORY system privilege. The ADMIN user can grant DROP
ANY DIRECTORY system privilege to other users.
Notes:
For example, to list the contents of the staging directory, run the following query:
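SELECT * FROM DBMS_CLOUD.LIST_FILES('STAGING');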
To run DBMS_CLOUD.LIST_FILES with a user other than ADMIN you need to grant read
privileges on the directory to that user. See LIST_FILES Function for more information.
Topics
• Attach Network File System to Autonomous Database
Use DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM to attach a file system to a directory
in your Autonomous Database.
• Detach Network File System from Autonomous Database
Use the DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM procedure to detach a file
system from a directory in your Autonomous Database.
• Example: Set Up an NFSv4 Server on Oracle Cloud Compute
Provides an example for setting up an NFSv4 server for use with Autonomous
Database.
• DBA_CLOUD_FILE_SYSTEMS View
The DBA_CLOUD_FILE_SYSTEMS view lists information about the network file system
attached to a directory location in the database.
Note:
The DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM procedure can only attach a private
File Storage Service when the Autonomous Database instance is on a private
endpoint.
To attach file systems in an on-premises data center to your Autonomous Database, you
must set up FastConnect or a Site-to-Site VPN to connect to the on-premises data
center. See FastConnect and Site-to-Site VPN for more information.
1. Create a directory or use an existing directory to attach a Network File System in your
Autonomous Database. You must have WRITE privilege on the directory object on your
Autonomous Database instance to attach a file system to a directory location in the
database.
For example, the following command creates the database directory named FSS_DIR
and creates the file system directory fss:
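CREATE OR REPLACE DIRECTORY FSS_DIR AS 'fss';
2. Run DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM to attach the file system to the
directory you created. For example: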
BEGIN
DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM(
file_system_name => 'FSS',
file_system_location => 'myhost.sub000445.myvcn.oraclevcn.com:/
results',
directory_name => 'FSS_DIR',
description => 'Source NFS for sales data'
);
END;
/
Optionally you can use the params parameter and specify the nfs_version with value
3 to specify NFSv3.
• To use NFSv4, include the params parameter with
DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM and specify the nfs_version with value 4 to
specify NFSv4:
BEGIN
DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM(
file_system_name => 'FSS',
file_system_location => 'myhost.sub000445.myvcn.oraclevcn.com:/
results',
directory_name => 'FSS_DIR',
description => 'Source NFS for sales data',
params => JSON_OBJECT('nfs_version' value 4));
END;
/
These examples attach the network file system specified in the file_system_name
parameter to the Autonomous Database.
The file_system_location parameter specifies the location of the file system.
The value you supply with file_system_location consists of a Fully Qualified
Domain Name (FQDN) and a file path in the form: FQDN:file_path.
For example:
• FQDN: myhost.sub000445.myvcn.oraclevcn.com
For Oracle Cloud Infrastructure File Storage set the FQDN in Show
Advanced Options when you create a file system. See Creating File Systems
for more information.
• File Path: /results
The directory_name parameter specifies the directory name in the Autonomous
Database where you want to attach the file system. This is the directory you
created in Step 1, or another directory you previously created.
The description parameter specifies the description for the task.
The params parameter is a JSON value that specifies an additional attribute
nfs_version, whose value can be either 3 or 4 (NFSv3 or NFSv4).
After you attach a file system you can query the DBA_CLOUD_FILE_SYSTEMS view to
retrieve information about the attached file system.
For example:
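-- a sketch; the FSS name matches the file system attached above
SELECT * FROM dba_cloud_file_systems WHERE file_system_name = 'FSS';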
This query returns details for the FSS file system name.
For example, you can use UTL_FILE to read a file from the attached file system:
DECLARE
l_file UTL_FILE.FILE_TYPE;
l_location VARCHAR2(100) := 'FSS_DIR';
l_filename VARCHAR2(100) := 'test.csv';
l_text VARCHAR2(32767);
BEGIN
-- Open the file.
l_file := UTL_FILE.FOPEN(l_location, l_filename, 'r');
-- Read one line, print it, and close the file.
UTL_FILE.GET_LINE(l_file, l_text);
DBMS_OUTPUT.PUT_LINE(l_text);
UTL_FILE.FCLOSE(l_file);
END;
/
• Oracle Cloud Infrastructure File Storage uses NFSv3 to share. See Overview of File
Storage for more information.
• If you attach to non-Oracle Cloud Infrastructure File Storage systems, the procedure
supports NFSv3 and NFSv4.
• If you have an attached NFS server that uses NFSv3 and the NFS version is updated to
NFSv4 in the NFS server, you must run DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM and
then DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM (using the params parameter with
nfs_version set to 4). This attaches NFS with the matching protocol so that Autonomous
Database can access the NFSv4 Server. Without detaching and then reattaching, the
NFS server will be inaccessible and you may see an error such as: "Protocol not
supported".
See the following for more information:
• UTL_FILE
• LIST_FILES Function
• OCI File Storage Service
• ATTACH_FILE_SYSTEM Procedure
Note:
The DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM procedure can only detach a
private File Storage Service from databases that are on private endpoints.
You must have WRITE privilege on the directory object to detach a file system from a
directory location.
Run DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM procedure to detach a file system from a
directory location in your Autonomous Database. To run this procedure, you must be
logged in as the ADMIN user or have the EXECUTE privilege on DBMS_CLOUD_ADMIN.
For example:
BEGIN
DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM (
file_system_name => 'FSS'
);
END;
/
This example detaches the network file system specified in the file_system_name
parameter from the Autonomous Database. You must provide a value for this
parameter.
The information about this file system is removed from the DBA_CLOUD_FILE_SYSTEMS
view.
See the following for more information:
• OCI File Storage Service
• Creating and Managing Directories on Autonomous Database
• DETACH_FILE_SYSTEM Procedure
1. Set the following ingress and egress rules in your VCN's security list so that
Autonomous Database and the NFSv4 server can communicate with each other:
• Stateful ingress from ALL ports in source CIDR block to TCP port 2049.
• Stateful egress from TCP ALL ports to port 2049 in destination CIDR block.
2. Set up the NFS server on an Oracle Cloud VM with Oracle Linux 8 in the private subnet,
which can connect to the Autonomous Database instance.
$ mkdir /exports/data
$ chown nobody /exports/data
$ chgrp nobody /exports/data
$ firewall-cmd --reload
# Displays all of the current clients and all of the file systems that
the clients have mounted.
$ showmount -a
DBA_CLOUD_FILE_SYSTEMS View
The DBA_CLOUD_FILE_SYSTEMS view lists information about the network file system
attached to a directory location in the database.
• DBMS_CLOUD.CREATE_EXTERNAL_TABLE
• DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
You can specify one directory and one or more file names or use a comma separated
list of directories and file names. The format to specify a directory
is: 'MY_DIR:filename.ext'. By default the directory name MY_DIR is a database object
and is case-insensitive. The file name is case-sensitive.
When you use the file_uri_list parameter to specify a directory you do not need to
include the credential_name parameter, but you need READ object privileges on the
directory.
BEGIN
DBMS_CLOUD.COPY_DATA(
table_name => 'HRDATA1',
file_uri_list => 'HR_DIR:test.csv',
format => JSON_OBJECT('type' value 'csv') );
END;
/
This example copies the data from test.csv in the local directory HR_DIR to the table
HRDATA1.
Regular expressions are not supported when specifying the file names in a directory. You can
only use wildcards to specify file names in a directory. The character "*" can be used as the
wildcard for multiple characters, and the character "?" can be used as the wildcard for a
single character. For example: 'MY_DIR:*' or 'MY_DIR:test?'.
See Attach Network File System to Autonomous Database for information on attaching
network file systems.
• Compression options for files such as GZIP are not supported for directory files. See the
compression format option in DBMS_CLOUD Package Format Options for more
information.
• Special characters such as colon (:), single quote ('), and comma (,) are not supported in
the directory name.
For example, to copy a file from Object Store to the stage directory, run the following
command:
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/
namespace-string/b/bucketname/o/cwallet.sso',
directory_name => 'STAGING');
END;
/
Run the DBMS_CLOUD.BULK_COPY procedure to bulk copy files from one Cloud Object
Storage location to another, including across folders and buckets. To run the procedure,
you must be logged in as the ADMIN user or have the EXECUTE privilege on DBMS_CLOUD.
BEGIN
DBMS_CLOUD.BULK_COPY (
source_credential_name => 'OCI_CRED',
source_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/o',
target_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/o',
format => JSON_OBJECT ('logretention' value
7, 'logprefix' value 'BULKOP')
);
END;
/
This example bulk copies files from one Oracle Cloud Infrastructure Object
Storage bucket to another.
See BULK_COPY Procedure for more information.
See DBMS_CLOUD URI Formats for more information.
For example, use DBMS_CLOUD.BULK_COPY to copy files from Amazon S3 to Oracle
Cloud Infrastructure Object Storage.
BEGIN
DBMS_CLOUD.BULK_COPY(
source_credential_name => 'AWS_CRED',
source_location_uri => 'https://bucketname.s3-us-
west-2.amazonaws.com/',
target_credential_name => 'OCI_CRED',
target_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/o',
format => JSON_OBJECT ('logretention' value 7,
'logprefix' value 'BULKOP')
);
END;
/
2. When the source and target are in distinct Object Stores or have different accounts with
the same cloud provider, create a credential to access the target location and include the
target_credential_name parameter.
3. Run DBMS_CLOUD.BULK_MOVE procedure to bulk move files from one Cloud Object Storage
location to another. To run the procedure, you must be logged in as the ADMIN user or
have the EXECUTE privilege on DBMS_CLOUD.
BEGIN
DBMS_CLOUD.BULK_MOVE (
source_credential_name => 'OCI_CRED',
source_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/o',
target_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/o',
format => JSON_OBJECT ('logretention' value 7,
'logprefix' value 'BULKMOVE')
);
END;
/
This example bulk moves files from one Oracle Cloud Infrastructure Object Storage
location to another.
See BULK_MOVE Procedure for more information.
See DBMS_CLOUD URI Formats for more information.
For example, use DBMS_CLOUD.BULK_MOVE to move files from Amazon S3 to Oracle Cloud
Infrastructure Object Storage.
BEGIN
DBMS_CLOUD.BULK_MOVE(
source_credential_name => 'AWS_CRED',
source_location_uri => 'https://bucketname.s3-us-
west-2.amazonaws.com/',
target_credential_name => 'OCI_CRED',
target_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/o',
format => JSON_OBJECT ('logretention' value 7,
'logprefix' value 'BULKOP')
);
END;
/
Run the DBMS_CLOUD.BULK_DOWNLOAD procedure to bulk download files from Cloud Object
Storage to a directory on your Autonomous Database. To run the procedure, you must be
logged in as the ADMIN user or have the EXECUTE privilege on DBMS_CLOUD.
BEGIN
DBMS_CLOUD.BULK_DOWNLOAD (
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
directory_name => 'BULK_TEST',
format => JSON_OBJECT ('logretention' value 7,
'logprefix' value 'BULKOP')
);
END;
/
This example bulk downloads files from the Oracle Cloud Infrastructure Object
Store location URI to the directory object in an Autonomous Database.
Note:
To write the files in the target directory object, you must have the WRITE
privilege on the directory object.
Run the DBMS_CLOUD.BULK_UPLOAD procedure to bulk upload files from a directory on your
Autonomous Database to Cloud Object Storage. For example:
BEGIN
DBMS_CLOUD.BULK_UPLOAD (
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
directory_name => 'BULK_TEST',
format => JSON_OBJECT ('logretention' value 5,
'logprefix' value 'BULKUPLOAD')
);
END;
/
This example bulk uploads files from a directory object, as specified with the
directory_name parameter, to the Oracle Cloud Infrastructure Object Store location URI.
Note:
To read the source files in the directory object, you must have the READ privilege
on the directory object.
Run the DBMS_CLOUD.BULK_DELETE procedure to bulk delete files from Cloud Object
Storage. For example:
BEGIN
DBMS_CLOUD.BULK_DELETE (
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
format => JSON_OBJECT ('logretention' value 5, 'logprefix'
value 'BULKDEL')
);
END;
/
This example bulk deletes files from the Oracle Cloud Infrastructure Object Store.
See BULK_DELETE Procedure for more information.
See DBMS_CLOUD URI Formats for more information.
You can use the following views to monitor and troubleshoot bulk file loads:
• dba_load_operations: shows all load operations.
• user_load_operations: shows the load operations in your schema.
The STATUS_TABLE column shows the name of the table you can query to look at
detailed logging information for the bulk download operation. For example:
DESCRIBE DWN$2_STATUS
Name Null? Type
------------- -------- ---------------------------
ID NOT NULL NUMBER
NAME VARCHAR2(4000)
BYTES NUMBER
CHECKSUM VARCHAR2(128)
LAST_MODIFIED TIMESTAMP(6) WITH TIME ZONE
STATUS VARCHAR2(30)
ERROR_CODE NUMBER
ERROR_MESSAGE VARCHAR2(4000)
START_TIME TIMESTAMP(6) WITH TIME ZONE
END_TIME TIMESTAMP(6) WITH TIME ZONE
SID NUMBER
SERIAL# NUMBER
ROWS_LOADED NUMBER
The status table shows each file name and its status for the bulk operation.
The relevant error number and message are recorded in the status table if an operation on a
specific file fails.
For completed operations, the time needed for each operation can be calculated using the
reported START_TIME and END_TIME values.
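For example, a sketch using the DWN$2_STATUS table shown above:

SELECT name, status, end_time - start_time AS duration FROM DWN$2_STATUS;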
The file operation STATUS column can have one of the following values:
If any file operation fails after two retry attempts, then the bulk operation is marked as failed
and an error is raised. For example:
ORA-20003: Operation failed, please query table DOWNLOAD$2_STATUS for error details
When you use a DBMS_CLOUD bulk file operation there are format parameter options that
control status tables:
• logretention: Specifies an integer value that determines the duration in days that the
status table is retained. The default value is 2 days.
• logprefix: Specifies a string value that determines the bulk operation status table's
name prefix.
Each bulk operation has its own default value for the logprefix option:
Replicate Data
Describes using Oracle GoldenGate to replicate data between an Autonomous
Database instance and to any target database or platform that Oracle GoldenGate
supports.
• Use Oracle GoldenGate to Replicate Data to Autonomous Database
You can replicate data to Autonomous Database using Oracle GoldenGate.
– Use Oracle Data Pump to export data to a directory on your database, and then
move the data from the directory to Cloud Object Storage.
– Use Oracle Data Pump to export the data to Cloud Object Storage directly. This
method is only supported with Oracle Cloud Infrastructure Object Storage and Oracle
Cloud Infrastructure Object Storage Classic.
– Use the procedure DBMS_CLOUD.EXPORT_DATA to export data to your Cloud Object
Store as Oracle Data Pump dump files, by specifying a query. This method is only
supported with Oracle Cloud Infrastructure Object Storage and Oracle Cloud
Infrastructure Object Storage Classic.
• Export to a directory:
– Use the procedure DBMS_CLOUD.EXPORT_DATA to export data to a directory as text by
specifying a query. This export method supports exporting data as CSV, JSON,
Parquet, or XML files.
– Use Oracle Data Pump to export data to a directory.
• Use Oracle GoldenGate Capture for Oracle Autonomous Database.
Topics
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object Storage
service you are using.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the value
json to export the results as JSON files on Cloud Object Storage.
To generate the JSON output files there are two options for the file_uri_list
parameter:
• Set the file_uri_list value to the URL for an existing bucket on your Cloud Object
Storage.
• Set the file_uri_list value to the URL for an existing bucket on your Cloud Object
Storage and include a file name prefix to use when generating the file names for the
exported JSON.
If you do not include the file name prefix in the file_uri_list, DBMS_CLOUD.EXPORT_DATA
supplies a file name prefix. See File Naming for Text Output (CSV, JSON, Parquet, or
XML) for details.
For example, the following shows DBMS_CLOUD.EXPORT_DATA with a file name prefix
specified in file_uri_list:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'json'));
END;
/
The following example additionally sets the recorddelimiter format option to specify the
delimiter between JSON records:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'json', 'recorddelimiter' value
'"\r\n"' format json));
END;
/
• The query parameter that you supply can be an advanced query, if required, such
as a query that includes joins or subqueries.
• Specify the format parameter with the compression option to compress the output
files.
• Specify the format parameter with the encryption option to encrypt data while
exporting. See Encrypt Data While Exporting to Object Storage for more
information.
• When you no longer need the files that you export, use the procedure
DBMS_CLOUD.DELETE_OBJECT or use native Cloud Object Storage commands to
delete the files.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object
Storage service you are using.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the value csv
to export the results as CSV files on Cloud Object Storage.
To generate the CSV output files there are two options for the file_uri_list parameter:
• Set the file_uri_list value to the URL for an existing bucket on your Cloud Object
Storage.
• Set the file_uri_list value to the URL for an existing bucket on your Cloud Object
Storage and include a file name prefix to use when generating the file names for the
exported CSV files.
If you do not include the file name prefix in the file_uri_list, DBMS_CLOUD.EXPORT_DATA
supplies a file name prefix. See File Naming for Text Output (CSV, JSON, Parquet, or
XML) for details.
For example, the following shows DBMS_CLOUD.EXPORT_DATA with a file name prefix
specified in file_uri_list:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'csv', 'delimiter' value
'|', 'compression' value 'gzip'));
END;
/
• The query parameter that you supply can be an advanced query, if required, such as a
query that includes joins or subqueries.
• Specify the format parameter with the compression option to compress the output files.
• Specify the format parameter with the encryption option to encrypt data while exporting.
See Encrypt Data While Exporting to Object Storage for more information.
• When you no longer need the files that you export, use the procedure
DBMS_CLOUD.DELETE_OBJECT or use native Cloud Object Storage commands to delete the
files.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object
Storage service you are using.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not
required if you enable resource principal credentials. See Use Resource Principal
to Access Oracle Cloud Infrastructure Resources for more information.
3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the
value parquet to export the results as parquet files on Cloud Object Storage.
To generate the parquet output files there are two options for the file_uri_list
parameter:
• Set the file_uri_list value to the URL for an existing bucket on your Cloud
Object Storage.
• Set the file_uri_list value to the URL for an existing bucket on your Cloud
Object Storage and include a file name prefix to use when generating the file
names for the exported parquet files.
If you do not include the file name prefix in the file_uri_list,
DBMS_CLOUD.EXPORT_DATA supplies a file name prefix. See File Naming for Text
Output (CSV, JSON, Parquet, or XML) for details.
For example, the following shows DBMS_CLOUD.EXPORT_DATA with a file name prefix
specified in file_uri_list:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'parquet', 'compression'
value 'snappy'));
END;
/
• The query parameter that you supply can be an advanced query, if required, such as a
query that includes joins or subqueries.
• Specify the format parameter with the compression option to compress the output files.
The default compression for type parquet is snappy.
• When you no longer need the files that you export, use the procedure
DBMS_CLOUD.DELETE_OBJECT or use native Cloud Object Storage commands to delete the
files.
• See DBMS_CLOUD Package Oracle Data Type to Parquet Mapping for details on Oracle
Type to Parquet Type mapping.
The following types are not supported or have limitations on their support for exporting
Parquet with DBMS_CLOUD.EXPORT_DATA:
2. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL. For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object
Storage service you are using.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the value xml
to export the results as XML files on Cloud Object Storage.
To generate the XML output files there are two options for the file_uri_list parameter:
• Set the file_uri_list value to the URL for an existing bucket on your Cloud Object
Storage.
• Set the file_uri_list value to the URL for an existing bucket on your Cloud Object
Storage and include a file name prefix to use when generating the file names for the
exported XML files.
If you do not include the file name prefix in the file_uri_list, DBMS_CLOUD.EXPORT_DATA
supplies a file name prefix. See File Naming for Text Output (CSV, JSON, Parquet, or
XML) for details.
For example, the following shows DBMS_CLOUD.EXPORT_DATA with a file name prefix
specified in file_uri_list:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/dept_export',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'xml', 'compression'
value 'gzip'));
END;
/
• The query parameter that you supply can be an advanced query, if required, such as a
query that includes joins or subqueries.
• Specify the format parameter with the compression option to compress the output files.
• Specify the format parameter with the encryption option to encrypt data while exporting.
See Encrypt Data While Exporting to Object Storage for more information.
• When you no longer need the files that you export, use the procedure
DBMS_CLOUD.DELETE_OBJECT or use native Cloud Object Storage commands to delete the
files.
For more information, see the description for format option fileextension in
DBMS_CLOUD Package Format Options for EXPORT_DATA.
• compression_extension: When you include the format parameter with the compression
option with the value gzip, this is "gz".
When the format type is parquet, the compression value snappy is also supported and
is the default.
For example, the file name prefix in the following DBMS_CLOUD.EXPORT_DATA procedure is
specified in the file_uri_list parameter, as dept_export. The example generates the
output to the provided Object Store in the specified format.
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/
namespace-string/b/bucketname/o/dept_export',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'json'));
END;
/
When you specify a file name prefix, the generated output files include the file name prefix,
similar to the following:
dept_export_1_20210809T173033Z.json
dept_export_2_20210809T173034Z.json
dept_export_3_20210809T173041Z.json
dept_export_4_20210809T173035Z.json
The number of generated output files depends on the size of the results, the database
service, and the number of ECPUs (OCPUs if your database uses OCPUs) in the
Autonomous Database instance.
In the following example the file_uri_list parameter does not include a file name prefix
and the compression parameter is supplied, with value gzip:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/
namespace-string/b/bucketname/o/',
query => 'SELECT * FROM DEPT',
format => json_object('type' value 'json', 'compression' value
'gzip'));
END;
/
The generated output files include the file name prefix that DBMS_CLOUD.EXPORT_DATA
supplies, and the files are compressed with gzip with the file extension .gz added, as
follows:
Client1_Module1_Action1_1_20210809T173033Z.json.gz
Client1_Module1_Action1_2_20210809T173034Z.json.gz
Client1_Module1_Action1_3_20210809T173041Z.json.gz
Client1_Module1_Action1_4_20210809T173035Z.json.gz
data_1_20210809T173033Z.json.gz
data_2_20210809T173034Z.json.gz
data_3_20210809T173041Z.json.gz
data_4_20210809T173035Z.json.gz
For example, the file name in the following DBMS_CLOUD.EXPORT_DATA procedure is
specified in the file_uri_list parameter as sales.json, in the directory DATA_PUMP_DIR.
The example generates the output to the provided directory in the specified format.
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'DATA_PUMP_DIR:sales.json',
query => 'SELECT * FROM SALES',
format => JSON_OBJECT('type' value 'json'));
END;
/
When you specify a file name, the generated output file includes the file name as its
prefix, similar to the following:
sales_1_20230705T124523275915Z.json
The number of generated output files depends on the size of the result data, the
database service, and the number of ECPUs (OCPUs if your database uses OCPUs) for the
Autonomous Database instance.
The default output file chunk size is 10MB for CSV, JSON, or XML. You can change this
value with the format parameter maxfilesize option (see the sketch following these
notes). See DBMS_CLOUD Package Format Options for EXPORT_DATA for more information.
• For Parquet output, each generated file is less than 128MB, and multiple output files may
be generated. However, even if your result data is less than 128MB, you may still have
multiple output files depending on the database service and the number of ECPUs
(OCPUs if your database uses OCPUs) for the Autonomous Database instance.
The format parameter maxfilesize option does not apply for Parquet files.
The directory you export files to can be in the Autonomous Database file system or an
attached Network File System. See Creating and Managing Directories on Autonomous
Database for more information.
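For example, a minimal sketch that raises the output file chunk size using the maxfilesize format option (the value is in bytes; the directory and table names are hypothetical):
BEGIN
   DBMS_CLOUD.EXPORT_DATA(
       file_uri_list => 'export_dir:sales.csv',
       query => 'SELECT * FROM sales',
       format => JSON_OBJECT('type' value 'csv', 'maxfilesize' value 33554432));
END;
/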
• Export Data as CSV to a Directory
Shows the steps to export table data from your Autonomous Database to a directory as
CSV data by specifying a query.
• Export Data as JSON to a Directory
Shows the steps to export table data from your Autonomous Database to a directory as
JSON data by specifying a query.
• Export Data as Parquet to a Directory
Shows the steps to export table data from your Autonomous Database to a directory as
Parquet data by specifying a query.
• Export Data as XML to a Directory
Shows the steps to export table data from your Autonomous Database to a directory as
XML data by specifying a query.
• Export Data to a Directory as Oracle Data Pump Files
You can export data to a directory as Oracle Data Pump dump files by specifying a query.
3. Run DBMS_CLOUD.EXPORT_DATA and specify the format parameter type with the
value csv to export the results as CSV files to a directory. Do not include the
credential parameter when sending output to a directory.
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.csv',
format => JSON_OBJECT('type' value 'csv'),
query => 'SELECT * FROM sales'
);
END;
/
When record delimiters include escape characters, such as \r\n or \t, enclose the
record delimiters in double quotes. For example, to use the record delimiter \r\n,
enclose the value in double quotes: "\r\n".
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.csv',
query => 'SELECT * FROM sales',
format => JSON_OBJECT('type' value 'csv', 'recorddelimiter'
value '"\r\n"' format json));
END;
/
The directory name is case-sensitive when the directory name is enclosed in double
quotes. For example:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => '"export_dir":sales.csv',
format => JSON_OBJECT('type' value 'csv'),
query => 'SELECT * FROM sales'
);
END;
/
Note:
The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) that you
specify in the file_uri_list. The procedure does not overwrite files. If a dump
file in the file_uri_list exists, DBMS_CLOUD.EXPORT_DATA generates another
file with a unique name. DBMS_CLOUD.EXPORT_DATA does not create directories.
• The query parameter that you supply can be an advanced query, if required, such as a
query that includes joins or subqueries.
• Specify the format parameter with the compression option to compress the output files.
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.dmp',
format => json_object('type' value 'json'),
query => 'SELECT * FROM sales'
);
END;
/
When record delimiters include escape characters, such as \r\n or \t, enclose the record
delimiters in double quotes. For example, to use the record delimiter \r\n, enclose the
value in double quotes:"\r\n".
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.dmp',
query => 'SELECT * FROM sales',
format => JSON_OBJECT('type' value 'json', 'recorddelimiter'
value '"\r\n"' format json));
END;
/
The directory name is case-sensitive when the directory name is enclosed in double
quotes. For example:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => '"export_dir":sales.dmp',
format => json_object('type' value 'json'),
query => 'SELECT * FROM sales'
);
END;
/
Note:
The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) that
you specify in the file_uri_list. The procedure does not overwrite
files. If a dump file in the file_uri_list exists,
DBMS_CLOUD.EXPORT_DATA generates another file with a unique name.
DBMS_CLOUD.EXPORT_DATA does not create directories.
• The query parameter that you supply can be an advanced query, if required, such
as a query that includes joins or subqueries.
• Specify the format parameter with the compression option to compress the output
files.
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.parquet',
format => JSON_OBJECT('type' value 'parquet'),
query => 'SELECT * FROM sales'
);
END;
/
The directory name is case-sensitive when the directory name is enclosed in double
quotes. For example:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => '"export_dir":sales.parquet',
format => JSON_OBJECT('type' value 'parquet'),
query => 'SELECT * FROM sales'
);
END;
/
• query: specifies a SELECT statement so that only the required data is exported.
The query determines the contents of the dump file(s).
For detailed information about the parameters, see EXPORT_DATA Procedure.
Notes for exporting with DBMS_CLOUD.EXPORT_DATA:
• The query parameter that you supply can be an advanced query, if required, such
as a query that includes joins or subqueries.
• Specify the format parameter with the compression option to compress the output
files.
• See DBMS_CLOUD Package Oracle Data Type to Parquet Mapping for details on
Oracle Type to Parquet Type mapping.
The following types are not supported or have limitations on their support for
exporting Parquet with DBMS_CLOUD.EXPORT_DATA:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.xml',
format => JSON_OBJECT('type' value 'xml'),
query => 'SELECT * FROM sales'
);
END;
/
The directory name is case-sensitive when the directory name is enclosed in double
quotes. For example:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => '"export_dir":sales.xml',
format => JSON_OBJECT('type' value 'xml'),
query => 'SELECT * FROM sales'
);
END;
/
• query: specifies a SELECT statement so that only the required data is exported.
The query determines the contents of the dump file(s).
Note:
The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) that
you specify in the file_uri_list. The procedure does not overwrite
files. If a dump file in the file_uri_list exists,
DBMS_CLOUD.EXPORT_DATA generates another file with a unique name.
DBMS_CLOUD.EXPORT_DATA does not create directories.
• The query parameter that you supply can be an advanced query, if required, such
as a query that includes joins or subqueries.
• Specify the format parameter with the compression option to compress the output
files.
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.dmp',
format => json_object('type' value 'datapump'),
query => 'SELECT * FROM sales'
);
END;
/
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales1.dmp, export_dir:sales2.dmp',
format => json_object('type' value 'datapump'),
query => 'SELECT * FROM sales'
);
END;
/
Note:
The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) that you
specify in the file_uri_list.
• The provided directory must exist and you must be logged in as the ADMIN user or have
WRITE access to the directory.
• The procedure does not overwrite files. If a dump file in the file_uri_list exists,
DBMS_CLOUD.EXPORT_DATA reports an error.
• The query parameter value that you supply can be an advanced query, if required,
such as a query that includes joins or subqueries.
• The dump files you create with DBMS_CLOUD.EXPORT_DATA cannot be imported
using Oracle Data Pump impdp. Depending on the database, you can use these
files as follows:
– On an Autonomous Database, you can use the dump files with the DBMS_CLOUD
procedures that support the format parameter type with the value 'datapump'.
You can import the dump files using DBMS_CLOUD.COPY_DATA, or you can call
DBMS_CLOUD.CREATE_EXTERNAL_TABLE to create an external table (see the sketch
after this list).
– On any other Oracle Database, such as Oracle Database 19c on-premise, you
can import the dump files created with the procedure
DBMS_CLOUD.EXPORT_DATA using the ORACLE_DATAPUMP access driver. See
Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver for
more information.
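For example, a minimal sketch that creates an external table over an exported dump file on an Autonomous Database; the table name, column list, and object URL are hypothetical:
BEGIN
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
       table_name => 'SALES_EXT',
       credential_name => 'DEF_CRED_NAME',
       file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales1.dmp',
       format => json_object('type' value 'datapump'),
       column_list => 'prod_id NUMBER, quantity NUMBER');
END;
/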
Note:
If you are using Oracle Cloud Infrastructure Object Storage or Oracle Cloud
Infrastructure Object Storage Classic, then you can use the alternative
methods to export directly to object store. See Move Data with Data Pump
Export to Object Store and Move Data to Object Store as Oracle Data Pump
Files Using EXPORT_DATA for more information.
3. Download the dump file set from the cloud object store, run Data Pump Import, and then
perform any required clean up such as removing the dump file set from your cloud object
store.
See Download Dump Files, Run Data Pump Import, and Clean Up Object Store for
details.
• Use Data Pump to Create a Dump File Set on Autonomous Database
Shows the steps to export data from your Autonomous Database to a directory with
Oracle Data Pump.
• Move Dump File Set from Autonomous Database to Your Cloud Object Store
To move the dump file set to your Cloud Object Store, upload the files from the database
directory to your Cloud Object Store.
Note:
Database Actions provides a link for Oracle Instant Client. To access this link from
Database Actions, under Downloads, click Download Oracle Instant Client.
1. Run Data Pump Export with the dumpfile parameter set, the filesize parameter set to less
than 5G, and the directory parameter set. For example, the following shows how to
export a schema named SALES in an Autonomous Database named DB2022ADB with 16
ECPUs:
expdp sales/password@db2022adb_high
directory=data_pump_dir
dumpfile=exp%U.dmp
parallel=4
encryption_pwd_prompt=yes
filesize=1G
logfile=export.log
Note:
If during the export with expdp you use the encryption_pwd_prompt=yes
parameter then use encryption_pwd_prompt=yes with your import and
input the same password at the impdp prompt to decrypt the dump files
(remember the password you supply with export). The maximum length
of the encryption password is 128 bytes.
For the best export performance use the HIGH database service for your export
connection and set the parallel parameter to one quarter the number of ECPUs
(.25 x ECPU count). If you are using the OCPU compute model, set the parallel
parameter to the number of OCPUs (1 x OCPU count). For information on which
database service name to connect to run Data Pump Export, see Manage
Concurrency and Priorities on Autonomous Database.
After the export is finished, you can see the generated dump files and the export log
file by listing the contents of the export directory.
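A minimal query, assuming the default DATA_PUMP_DIR directory, uses DBMS_CLOUD.LIST_FILES:
SELECT object_name, bytes
  FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');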
2. Move the dump file set to your cloud object store. See Move Dump File Set from
Autonomous Database to Your Cloud Object Store for details.
Notes:
• To perform a full export or to export objects that are owned by other users, you
need the DATAPUMP_CLOUD_EXP role.
• The DATA_PUMP_DIR is the only predefined directory. You can specify a different
directory as the directory argument if you previously created the directory and
you have write privileges on the directory. See Create Directory in Autonomous
Database for information on creating directories.
• The API you use to move the dump files to Cloud Object Storage has a
maximum file transfer size, so make sure you use a filesize argument that is
less than or equal to the maximum supported size for your Cloud Object
Storage service. See PUT_OBJECT Procedure for the Cloud Object Storage
Service file transfer size limits.
• For more information on Oracle Data Pump Export see Oracle Database
Utilities.
Move Dump File Set from Autonomous Database to Your Cloud Object Store
To move the dump file set to your Cloud Object Store, upload the files from the database
directory to your Cloud Object Store.
1. Connect to your database.
2. Store your object store credentials using the procedure DBMS_CLOUD.CREATE_CREDENTIAL.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password'
);
END;
/
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
3. Move the dump files from the database to your Cloud Object Store by calling
DBMS_CLOUD.PUT_OBJECT.
For example:
BEGIN
DBMS_CLOUD.PUT_OBJECT(credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/
namespace-string/b/bucketname/o/exp01.dmp',
directory_name => 'DATA_PUMP_DIR',
file_name => 'exp01.dmp');
DBMS_CLOUD.PUT_OBJECT(credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
exp02.dmp',
directory_name => 'DATA_PUMP_DIR',
file_name => 'exp02.dmp');
DBMS_CLOUD.PUT_OBJECT(credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
exp03.dmp',
directory_name => 'DATA_PUMP_DIR',
file_name => 'exp03.dmp');
DBMS_CLOUD.PUT_OBJECT(credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
exp04.dmp',
directory_name => 'DATA_PUMP_DIR',
file_name => 'exp04.dmp');
END;
/
• For Oracle Data Pump versions before 19.9 the CREDENTIAL parameter is not
supported. Use the DEFAULT_CREDENTIAL database property to set the credential to
use to access your Object Store. See Use Oracle Data Pump to Export Data to
Object Store Setting DEFAULT_CREDENTIAL Property for details.
2. Download the dump files from the Object Storage service, run Data Pump Import with the
dump files, and perform any required clean up such as removing the dump file set from
Object Storage.
See Download Dump Files, Run Data Pump Import, and Clean Up Object Store for
details.
• Use Oracle Data Pump to Export Data to Object Store Using CREDENTIAL Parameter
(Version 19.9 or Later)
Shows the steps to export data from your database to Object Storage with Oracle Data
Pump.
• Use Oracle Data Pump to Export Data to Object Store Setting DEFAULT_CREDENTIAL
Property
Shows the steps to export data from your database to Object Storage with Oracle Data
Pump.
Use Oracle Data Pump to Export Data to Object Store Using CREDENTIAL
Parameter (Version 19.9 or Later)
Shows the steps to export data from your database to Object Storage with Oracle Data
Pump.
Oracle recommends using the latest Oracle Data Pump version for exporting data from
Autonomous Database to other Oracle databases, as it contains enhancements and fixes for
a better experience. Download the latest version of Oracle Instant Client and download the
Tools Package, which includes Oracle Data Pump, for your platform from Oracle Instant
Client Downloads. See the installation instructions on the platform install download page for
the installation steps required after you download Oracle Instant Client and the Tools
Package.
Note:
Database Actions provides a link for Oracle Instant Client. To access this link from
Database Actions, under Downloads, click Download Oracle Instant Client.
If you are using Oracle Data Pump Version 19.9 or later, then you can use the credential
parameter as shown in these steps. For instructions for using Oracle Data Pump Versions
19.8 and earlier, see Use Oracle Data Pump to Export Data to Object Store Setting
DEFAULT_CREDENTIAL Property.
1. Connect to your database.
2. Store your Cloud Object Storage credential using DBMS_CLOUD.CREATE_CREDENTIAL. For
example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object
Storage service you are using.
If you are exporting to Oracle Cloud Infrastructure Object Storage, you can use the
Oracle Cloud Infrastructure native URIs or Swift URIs, but the credentials must be
auth tokens. See CREATE_CREDENTIAL Procedure for more information.
3. Run Data Pump Export with the dumpfile parameter set to the URL for an existing
bucket on your Cloud Object Storage, ending with a file name or a file name with a
substitution variable, such as exp%U.dmp, and with the credential parameter set
to the name of the credential you created in the previous step. For example:
expdp admin/password@db2022adb_high \
filesize=5GB \
credential=def_cred_name \
dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-
string/b/bucketname/o/exp%U.dmp \
parallel=16 \
encryption_pwd_prompt=yes \
logfile=export.log \
directory=data_pump_dir
Note:
If during the export with expdp you use the encryption_pwd_prompt=yes
parameter then use encryption_pwd_prompt=yes with your import and
input the same password at the impdp prompt to decrypt the dump files
(remember the password you supply with export). The maximum length
of the encryption password is 128 bytes.
• The maximum filesize parameter value is 10000MB for Oracle Cloud Infrastructure
Object Storage exports.
• The maximum filesize parameter value is 20GB for Oracle Cloud Infrastructure
Object Storage Classic exports.
• If the specified filesize is too large, the export fails with an error message.
• The directory parameter specifies the directory data_pump_dir for the specified
log file, export.log. See Access Log Files for Data Pump Export for more
information.
Note:
Oracle Data Pump divides each dump file part into smaller chunks for faster
uploads. The Oracle Cloud Infrastructure Object Storage console shows
multiple files for each dump file part that you export. The size of the actual
dump files is displayed as zero (0) and their related file chunks as 10MB or
less. For example:
exp01.dmp
exp01.dmp_aaaaaa
exp02.dmp
exp02.dmp_aaaaaa
Downloading the zero byte dump file from the Oracle Cloud Infrastructure
console or using the Oracle Cloud Infrastructure CLI will not give you the full
dump files. To download the full dump files from the Object Store, use a tool
that supports Swift such as curl, and provide your user login and Swift auth
token.
If you import a file with the DBMS_CLOUD procedures that support the format
parameter type with the value 'datapump', you only need to provide the primary
file name. The procedures that support the 'datapump' format type automatically
discover and download the chunks.
When you use DBMS_CLOUD.DELETE_OBJECT, the procedure automatically
discovers and deletes the chunks when the procedure deletes the primary file.
4. Perform the required steps to use Oracle Data Pump import and clean up.
See Download Dump Files, Run Data Pump Import, and Clean Up Object Store for
details.
Note:
To perform a full export or to export objects that are owned by other users,
you need the DATAPUMP_CLOUD_EXP role.
For detailed information on Oracle Data Pump Export parameters see Oracle
Database Utilities.
Note:
Database Actions provides a link for Oracle Instant Client. To access this link
from Database Actions, under Downloads, click Download Oracle Instant
Client.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object
Storage service you are using.
If you are exporting to Oracle Cloud Infrastructure Object Storage, you can use the
Oracle Cloud Infrastructure native URIs or Swift URIs, but the credentials must be
auth tokens. See CREATE_CREDENTIAL Procedure for more information.
3. As the ADMIN user, set the credential you defined in step 2 as the default credential for
your database. For example:
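-- A sketch, assuming the credential from step 2 was created in the ADMIN schema:
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.DEF_CRED_NAME';
4. Run Data Pump Export with the dumpfile parameter set to the default_credential
keyword followed by a colon and the URL for your Cloud Object Storage bucket. A sketch,
reusing the hypothetical bucket from the earlier example:
expdp admin/password@db2022adb_high \
     filesize=5GB \
     dumpfile=default_credential:https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp%U.dmp \
     parallel=16 \
     encryption_pwd_prompt=yes \
     logfile=export.log \
     directory=data_pump_dir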
Note:
If during the export with expdp you use the encryption_pwd_prompt=yes
parameter then use encryption_pwd_prompt=yes with your import and input the
same password at the impdp prompt to decrypt the dump files (remember the
password you supply with export). The maximum length of the encryption
password is 128 bytes.
• If the specified filesize is too large, the export fails with an error message.
Note:
Oracle Data Pump divides each dump file part into smaller chunks for
faster uploads. The Oracle Cloud Infrastructure Object Storage console
shows multiple files for each dump file part that you export. The size of
the actual dump files is displayed as zero (0) and their related file
chunks as 10MB or less. For example:
exp01.dmp
exp01.dmp_aaaaaa
exp02.dmp
exp02.dmp_aaaaaa
Downloading the zero byte dump file from the Oracle Cloud
Infrastructure console or using the Oracle Cloud Infrastructure CLI will
not give you the full dump files. To download the full dump files from the
Object Store, use a tool that supports Swift such as curl, and provide
your user login and Swift auth token.
If you import a file with the DBMS_CLOUD procedures that support the
format parameter type with the value 'datapump', you only need to
provide the primary file name. The procedures that support the
'datapump' format type automatically discover and download the chunks.
When you use DBMS_CLOUD.DELETE_OBJECT, the procedure automatically
discovers and deletes the chunks when the procedure deletes the
primary file.
5. Perform the required steps to use Oracle Data Pump import and clean up.
See Download Dump Files, Run Data Pump Import, and Clean Up Object Store for
details.
Note:
To perform a full export or to export objects that are owned by other users,
you need the DATAPUMP_CLOUD_EXP role.
For detailed information on Oracle Data Pump Export parameters see Oracle Database
Utilities.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password'
);
END;
/
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name.
See CREATE_CREDENTIAL Procedure for information about
the username and password parameters for different object storage services.
3. Export data from Autonomous Database to your Cloud Object Store as Oracle Data
Pump dump file(s) by calling DBMS_CLOUD.EXPORT_DATA with the format parameter type
set to value datapump. For example:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name =>'DEF_CRED_NAME',
file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/n/
namespace-string/b/bucketname/o/exp01.dmp',
format => json_object('type' value 'datapump'),
query => 'SELECT warehouse_id, quantity FROM inventories'
);
END;
/
Note:
The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) that
you specify in the file_uri_list. The procedure does not overwrite
files. If a dump file in the file_uri_list exists,
DBMS_CLOUD.EXPORT_DATA reports an error. DBMS_CLOUD.EXPORT_DATA
does not create buckets.
• The query parameter value that you supply can be an advanced query, if required, such
as a query that includes joins or subqueries.
Download Dump Files, Run Data Pump Import, and Clean Up Object Store
If required, download the dump files from Cloud Object Store and use Oracle Data Pump
Import to import the dump file set to the target database. Then perform any required clean up.
1. Download the dump files from Cloud Object Store.
Note:
This step is not needed if you are importing the data to an Autonomous Data
Warehouse database, to an Autonomous Transaction Processing database, or
to Autonomous JSON Database.
If you export directly to Object Store using Oracle Data Pump, as shown in Move Data
with Data Pump Export to Object Store, then the dump files on Object Store show size 0.
Oracle Data Pump divides each dump file part into smaller chunks for faster uploads. The
Oracle Cloud Infrastructure Object Storage console shows multiple files for each dump
file part that you export. The size of the actual dump files is displayed as zero (0)
and their related file chunks as 10MB or less. For example:
exp01.dmp
exp01.dmp_aaaaaa
exp02.dmp
exp02.dmp_aaaaaa
Downloading the zero byte dump file from the Oracle Cloud Infrastructure console or
using the Oracle Cloud Infrastructure CLI will not give you the full dump files. To
download the full dump files from the Object Store, use a tool that supports Swift such as
curl, and provide your user login and Swift auth token.
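For example, a sketch of a single-file download using the Swift endpoint; the region, namespace, bucket, file name, and auth token are hypothetical:
curl -u 'user1@example.com:auth_token' \
     -o exp01.dmp \
     'https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/namespace-string/bucketname/exp01.dmp'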
The cURL command does not support wildcards or substitution characters in its URL.
You need to use multiple cURL commands to download the dump file set from your
Object Store. Alternatively, you can use a script that supports substitution characters to
download all the dump files from your Object Store in a single command. See How To:
Download all files from an export to object store job in Autonomous Database using
cURL for an example.
2. Run Data Pump Import to import the dump file set to the target database.
If you are importing the data to another Autonomous Database, see Import Data Using
Oracle Data Pump on Autonomous Database.
3. Perform post import clean up tasks. If you are done importing the dump files to your
target database then drop the bucket containing the data or remove the dump files from
the Cloud Object Store bucket, and remove the dump files from the location where you
downloaded the dump files to run Data Pump Import.
For detailed information on Oracle Data Pump Import parameters see Oracle
Database Utilities.
To access the log file you need to move the log file to your Cloud Object Storage using
the procedure DBMS_CLOUD.PUT_OBJECT. For example, the following PL/SQL block
moves the file export.log to your Cloud Object Storage:
BEGIN
DBMS_CLOUD.PUT_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
export.log',
directory_name => 'DATA_PUMP_DIR',
file_name => 'export.log');
END;
/
Data Sharing
Share data on Autonomous Database
• Use Cloud Links for Read Only Data Access on Autonomous Database
Cloud Links provide a cloud-based method to remotely access read only data on
an Autonomous Database instance.
• Use Pre-Authenticated Request URLs for Read Only Data Access on Autonomous
Database
• Share Data with Data Studio
The Data Share tool enables you to share data and metadata with other users. The Data
Share page displays information about the different types of shares available in the
Oracle Autonomous Database.
Use Cloud Links for Read Only Data Access on Autonomous Database
Cloud Links provide a cloud-based method to remotely access read only data on an
Autonomous Database instance.
• About Cloud Links on Autonomous Database
• Grant Cloud Links Access for Database Users
The ADMIN user grants privileges to database users to register data sets. The ADMIN
user also grants privileges to database users to access registered data sets.
• Register or Unregister a Data Set
You can register a table or view you own as a registered data set. When you want to
remove or replace the data set, you need to unregister it.
• Register a Data Set with Authorization Required
Optionally, when you register a data set, in addition to the scope you can specify that
database level authorization is required to access a data set.
• Register a Data Set with Offload Targets for Data Set Access
Optionally, when you register a data set you can offload access to the data set to one or
more Autonomous Database instances that are refreshable clones.
• Use Cloud Links from a Read Only Autonomous Database Instance
You can share Cloud Links when a data set resides on a Read-Only Autonomous
Database instance.
• Find Data Sets and Use Cloud Links
A user who is granted access to read Cloud Links can search for data sets available to
an Autonomous Database instance and can access and use registered data sets with
their queries.
• Revoke Cloud Links Access for Database Users
The ADMIN user can revoke registration to disallow a user from registering data sets for
remote access. The ADMIN user can also revoke privileges from database users to access
registered data sets.
• Monitor and View Cloud Links Information
Autonomous Database provides views that allow you to monitor and audit Cloud Links.
• Define a Virtual Private Database Policy to Secure a Registered Data Set
• Notes for Cloud Links
Provides notes and restrictions for using Cloud Links.
The data is then made accessible to those with access granted at registration time. No
further actions are required to set up Cloud Links, and whoever is supposed to see
and access your data is able to discover and work with the data.
Cloud Links leverage Oracle Cloud Infrastructure access mechanisms to make data
accessible within a specific scope. Scope indicates who can remotely access the data.
Scope can be set to various levels, including to the region where the database resides,
to individual tenancies, or to compartments. In addition, you can specify that
authorization to access a data set is limited to one or more Autonomous Database
instances.
Cloud Links greatly simplify the sharing of tables or views across Autonomous
Database instances, as compared to traditional database linking mechanisms. With
Cloud Links you can discover data without requiring a complex database link setup.
Autonomous Database provides transparent access using SQL, and implements
privilege enforcement with the Cloud Links scope and by granting authorization to
individual Autonomous Database instances.
Cloud Links introduce the concepts of regional namespaces and names for data that is
made remotely accessible. This is similar to existing Oracle tables where there is a
table, for example "EMP" that resides in a namespace (schema), for example "LWARD".
There can only be one LWARD.EMP in your database. Cloud Links provide a similar
namespace and name on a regional level, that is not tied to a single database but
applies across many Autonomous Database instances as specified with the scope and
optionally with database authorization.
For example, you can register a data set under the namespace FOREST and for
security purposes or for naming convenience, you can provide a namespace and a
name other than the original schema and object names. In this example, TREE_DATA
is the visible name of the registered data set and this name is not required to be the
name of the source table. In addition to the namespace and the name, the cloud$link
keyword indicates to the database that it must resolve the source as a Cloud Link.
To access a registered data set, include the namespace, the name, and the
cloud$link keyword in the FROM clause of a SELECT statement:
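For example, using the TREE_DATA data set registered under the FOREST namespace:
SELECT * FROM FOREST.TREE_DATA@cloud$link;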
Optionally, you can specify that access to a data set from one or more databases is
offloaded to a refreshable clone. When a consumer Autonomous Database is listed in
a data set's offload list, access to the data set is directed to the refreshable clone.
There are several concepts and terms to use when you work with Cloud Links:
• Registered Data Set (Data Set): Identifies a table or view that has been enabled
for remote access in an Autonomous Database. A registered data set also
indicates who is allowed to access the data set (its scope). Data set registration
defines a namespace and a name for use with Cloud Links. After data set
registration, these values together specify the Fully Qualified Name (FQN) for
remote access, and allow Cloud Links to manage accessibility for the data set.
• Data Set Owner: Specify a data set owner to provide a point of contact for
questions about a data set.
• Scope: Specifies who and from where a user is allowed to access a registered
data set. See Data Set Scope, Access Control, and Authorization for more details
on scope.
Note:
Cloud Links provide read only access to remote objects on an Autonomous
Database instance. If you want to use database links with other Oracle databases
or with non-Oracle databases, or if you want to use your remote data with DML
operations, you need to use database links. See Use Database Links with
Autonomous Database for more information.
• A user who has been granted privileges to register data sets specifies a scope
when they register a data set.
• Optionally, when you register a data set, a user who has been granted
authorization privileges can specify that an authorization step is required to access
the data set. This authorization step is in addition to scope level access authorization.
The scope values, MY$REGION, MY$TENANCY, and MY$COMPARTMENT are variables that act as
convenience macros and resolve to OCIDs.
Note:
The scope you set when you register a data set is only honored when it matches or
is more restrictive than the value set with
DBMS_CLOUD_LINK_ADMIN.GRANT_REGISTER. For example, assume the ADMIN
granted the scope 'MY$TENANCY' with GRANT_REGISTER, and the user specifies
'MY$REGION' when they register a data set with DBMS_CLOUD_LINK.REGISTER. In this
case, the data set registration fails with an error.
You can also use an additional access control mechanism based on a SYS_CONTEXT value.
This mechanism uses the function DBMS_CLOUD_LINK.GET_DATABASE_ID that returns an
identifier that is available as a SYS_CONTEXT value.
When you register a data set with DBMS_CLOUD_LINK.REGISTER you can use the SYS_CONTEXT
value in Oracle Virtual Private Database (VPD) security policies to control database access to
further restrict and control what specific data can be accessed by individual Autonomous
Database instances.
See Define a Virtual Private Database Policy to Secure a Registered Data Set for more
information on using VPD policies.
The database ID value is also available in the V$CLOUD_LINK_ACCESS_STATS and
GV$CLOUD_LINK_ACCESS_STATS views, which track access statistics and audit
information.
See Using Oracle Virtual Private Database to Control Data Access for more information.
Data set registration with authorization required specifies database level access for a data
set, in addition to the scope access control specified for the data set, as follows:
• Databases that are within the SCOPE specified and have been authorized with
DBMS_CLOUD_LINK.GRANT_AUTHORIZATION can view rows from the data set.
• Any databases that are within the specified SCOPE but have not been authorized with
DBMS_CLOUD_LINK.GRANT_AUTHORIZATION cannot view data set rows. In this case,
consumers without authorization see the data set as empty.
• Databases that are not within the SCOPE specified see an error when attempting to access
the data set.
• 'MY$REGION'
• 'MY$TENANCY'
• 'MY$COMPARTMENT'
See Data Set Scope, Access Control, and Authorization for more information.
1. As the ADMIN user, allow a user to register data sets within a specified scope.
BEGIN
DBMS_CLOUD_LINK_ADMIN.GRANT_REGISTER(
username => 'DB_USER1',
scope => 'MY$REGION');
END;
/
This specifies that the user DB_USER1 has privileges to register data sets within the
specified scope, 'MY$REGION'.
A user can query SYS_CONTEXT('USERENV', 'CLOUD_LINK_REGISTER_ENABLED') to
check if they are enabled for registering data sets.
For example, the following query:
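SELECT SYS_CONTEXT('USERENV', 'CLOUD_LINK_REGISTER_ENABLED') FROM DUAL;
2. As the ADMIN user, allow a user to read registered data sets. For example: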
EXEC DBMS_CLOUD_LINK_ADMIN.GRANT_READ('LWARD');
This allows LWARD to access registered data sets that are available to the
Autonomous Database instance.
A user can query SYS_CONTEXT('USERENV', 'CLOUD_LINK_READ_ENABLED') to
check if they are enabled for READ access to a data set.
For example, the following query:
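SELECT SYS_CONTEXT('USERENV', 'CLOUD_LINK_READ_ENABLED') FROM DUAL;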
If Oracle Database Vault is enabled on your Autonomous Database instance, authorize the
common schema C##DATA$SHARE in your realm. For example:
BEGIN
DBMS_MACADM.ADD_AUTH_TO_REALM(
realm_name => 'myRealm',
grantee => 'C##DATA$SHARE');
END;
/
The common schema C##DATA$SHARE is used internally as the connecting schema for
authorized remote Cloud Links access.
See Use Oracle Database Vault with Autonomous Database for more information.
Notes for granting privileges to database users to register data sets:
• To register data sets or to see and access remote data sets, you must have been granted
the appropriate privilege: either the register privilege with
DBMS_CLOUD_LINK_ADMIN.GRANT_REGISTER or the read privilege with
DBMS_CLOUD_LINK_ADMIN.GRANT_READ.
This is true for the ADMIN user as well; however, the ADMIN user can grant these
privileges to themselves.
• The views DBA_CLOUD_LINK_PRIVS and USER_CLOUD_LINK_PRIVS provide information on
user privileges. See Monitor and View Cloud Links Information for more information.
• A user can run a query to check if they are enabled for authorizing access to
registered, protected data sets.
BEGIN
DBMS_CLOUD_LINK.REGISTER(
schema_name => 'CLOUDLINK',
schema_object => 'SALES_VIEW_AGG',
namespace => 'REGIONAL_SALES',
name => 'SALES_AGG',
description => 'Aggregated regional sales information.',
scope => 'MY$TENANCY',
auth_required => FALSE,
data_set_owner => 'amit@example.com' );
END;
/
BEGIN
DBMS_CLOUD_LINK.REGISTER(
schema_name => 'CLOUDLINK',
schema_object => 'SALES_ALL',
namespace => 'TRUSTED_COMPARTMENT',
name => 'SALES',
description => 'Trusted Compartment, only accessible within
my compartment. Early sales data.',
scope => 'MY$COMPARTMENT',
auth_required => FALSE,
data_set_owner => 'amit@example.com' );
END;
/
Note:
Other objects such as analytic views or synonyms are not supported.
• namespace: is the namespace you provide as a name for access with Cloud
Links (one part of the Cloud Link FQN).
A NULL value specifies a system-generated namespace value, unique to the
Autonomous Database instance.
• name: is the name you provide for access with Cloud Links (one part of the Cloud Link
FQN).
• scope: Specifies the scope. You can use one of the variables: MY$REGION,
MY$TENANCY, or MY$COMPARTMENT or specify one or more OCIDs.
See Data Set Scope, Access Control, and Authorization for more information.
• auth_required: A Boolean value that specifies if database level authorization is
required for the data set, in addition to the scope access control. When this is set to
TRUE, the data set enforces an additional authorization step. See Register a Data Set
with Authorization Required for more information.
• data_set_owner: Text value specifies information about the individual responsible for
the data set or a contact for questions about the data set. For example, you could
supply an email address for the data set owner.
See REGISTER Procedure for more information.
After the registration completes, the scope for these two registered objects are different,
based on the scope parameter:
• MY$TENANCY: Specifies tenancy level scope for REGIONAL_SALES.SALES_AGG.
• MY$COMPARTMENT: Specifies the more restrictive compartment level scope for my
compartment within my tenancy for TRUSTED_COMPARTMENT.SALES.
If you want to revoke remote access to a registered data set, you can unregister the data set.
BEGIN
DBMS_CLOUD_LINK.UNREGISTER(
namespace => 'TRUSTED_COMPARTMENT',
name => 'SALES');
END;
/
• You can register a table or view that resides in another user's schema when you
have READ WITH GRANT OPTION privileges for the table or view.
• Autonomous Database does not perform hierarchical validity checks at registration
time, and registrations that are outside the scope will never be visible or
accessible.
For example, consider the following sequence:
1. A user with scope MY$COMPARTMENT registers an object with a scope that
specifies an individual database OCID.
2. When a user requests access to the registered data set, Autonomous
Database performs the check to see that the database OCID of the database
where the request originates is in the OCID list specified with the scope when
the data set was registered.
3. After this, the namespace.name object will be discoverable, visible, and
usable in the database where the request originated.
• DBMS_CLOUD_LINK.UNREGISTER may take up to ten (10) minutes to fully propagate,
after which the data can no longer be accessed remotely.
Note:
You can only grant authorization, as shown in these steps, if you are
authorized. The ADMIN grants authorization privileges with
DBMS_CLOUD_LINK_ADMIN.GRANT_AUTHORIZE.
BEGIN
DBMS_CLOUD_LINK.REGISTER(
schema_name => 'CLOUDLINK',
schema_object => 'SALES_VIEW_AGG',
namespace => 'REGIONAL_SALES',
name => 'SALES_AGG',
description => 'Aggregated regional sales information.',
scope => 'MY$TENANCY',
auth_required => TRUE,
data_set_owner => 'amit@example.com');
END;
/
Note:
Other objects such as analytic views or synonyms are not supported.
• namespace: is the namespace you provide as a name for access with Cloud Links
(one part of the Cloud Link FQN).
A NULL value specifies a system-generated namespace value, unique to the
Autonomous Database instance.
• name: is the name you provide for access with Cloud Links (one part of the Cloud Link
FQN).
• scope: Specifies the scope. You can use one of the variables: MY$REGION,
MY$TENANCY, or MY$COMPARTMENT or specify one or more OCIDs.
See Data Set Scope, Access Control, and Authorization for more information.
• auth_required: A Boolean value that specifies if database level authorization is
required for the data set, in addition to the scope access control. When this is set to
TRUE, the data set enforces an additional authorization step.
• data_set_owner: Text value specifies information about the individual responsible for
the data set or a contact for questions about the data set. For example, you could
supply an email address for the data set owner.
2. Obtain the database ID for the database you want to grant authorization to (to allow the
database to access to the data set).
On the system where you want to grant access to the data set, run the following:
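DECLARE
   DB_ID NUMBER;
BEGIN
   DB_ID := DBMS_CLOUD_LINK.GET_DATABASE_ID;
   DBMS_OUTPUT.PUT_LINE('Database ID:' || DB_ID);
END;
/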
3. Use the database ID you obtained to grant authorization to a specified data set.
You can only grant authorization, and run DBMS_CLOUD_LINK.GRANT_AUTHORIZATION, as
shown in this step, if you are authorized. The ADMIN grants authorization with
DBMS_CLOUD_LINK_ADMIN.GRANT_AUTHORIZE.
BEGIN
DBMS_CLOUD_LINK.GRANT_AUTHORIZATION(
database_id => '120xxxxxxx8506029999',
namespace => 'TRUSTED_COMPARTMENT',
name => 'SALES');
END;
/
Perform these steps multiple times if you want to authorize additional databases.
If you want to revoke authorization for a database:
BEGIN
DBMS_CLOUD_LINK.REVOKE_AUTHORIZATION(
database_id => '120xxxxxxx8506029999',
namespace => 'TRUSTED_COMPARTMENT',
name => 'SALES');
END;
/
Register a Data Set with Offload Targets for Data Set Access
Optionally, when you register a data set you can offload access to the data set to one
or more Autonomous Database instances that are refreshable clones.
When you register a data set you can specify one or more Autonomous Database
OCIDs that are refreshable clones. Use the optional offload_targets parameter with
DBMS_CLOUD_LINK.REGISTER to specify that access is offloaded to one or more
refreshable clones. The source database for each refreshable clone is the
Autonomous Database instance where you register the data set (data publisher).
The offload_targets value is JSON that defines one or more pairs of values with
keywords:
• CLOUD_LINK_DATABASE_ID: is a database ID that you obtain with
DBMS_CLOUD_LINK.GET_DATABASE_ID. This is the database ID for the data set
consumer whose request is offloaded to an offload target.
• OFFLOAD_TARGET: is the OCID for an Autonomous Database instance that is a
refreshable clone.
The following figure illustrates using offload targets.
[Figure: Offload targets for data set access. Three consumers each run SELECT species
FROM forest.tree_data@cloud$link. The data publisher's OFFLOAD_TARGETS JSON maps each
consumer's Cloud Link database ID (CL1, CL2, CL3) to a refreshable clone (Refreshable
Clone 1, 2, and 3). Each refreshable clone is refreshed from the data publisher and
serves the queries of its mapped consumer.]
When a data set consumer requests access to a data set that you register with
offload_targets and the Autonomous Database instance's database ID matches the value
specified in CLOUD_LINK_DATABASE_ID, access is offloaded to the refreshable clone identified
with OFFLOAD_TARGET in the supplied JSON.
For example, the following shows a JSON sample with three OFFLOAD_TARGET/
CLOUD_LINK_DATABASE_ID value pairs:
{
"OFFLOAD_TARGETS": [
{
"CLOUD_LINK_DATABASE_ID": "34xxxxx69708978",
"OFFLOAD_TARGET":
"ocid1.autonomousdatabase.oc1..xxxxx3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vq
stifsfabc"
},
{
"CLOUD_LINK_DATABASE_ID": "34xxxxx89898978",
"OFFLOAD_TARGET":
"ocid1.autonomousdatabase.oc1..xxxxx3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vq
stifsfdef"
},
{
"CLOUD_LINK_DATABASE_ID": "34xxxxx4755680",
"OFFLOAD_TARGET":
"ocid1.autonomousdatabase.oc1..xxxxx3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr
4h25vqstifsfghi"
}
]
}
Note:
It can take up to 10 minutes after you create a refreshable clone for the
refreshable clone to become visible as an offload target and available for
Cloud Links offload registration.
2. Obtain the database ID for one or more Autonomous Database instances that you
want to access the data set using data provided by the refreshable clone.
On the system from which you want to access the data set through a refreshable clone,
run the following command:
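DECLARE
   DB_ID NUMBER;
BEGIN
   DB_ID := DBMS_CLOUD_LINK.GET_DATABASE_ID;
   DBMS_OUTPUT.PUT_LINE('Database ID:' || DB_ID);
END;
/
3. On the Autonomous Database instance where you register the data set (the data
publisher), supply the database ID you obtained in the offload_targets JSON. For
example: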
BEGIN
DBMS_CLOUD_LINK.REGISTER(
schema_name => 'CLOUDLINK',
schema_object => 'SALES_VIEW_AGG',
namespace => 'REGIONAL_SALES',
name => 'SALES_AGG',
description => 'Aggregated regional sales information.',
scope => 'MY$TENANCY',
auth_required => FALSE,
data_set_owner => 'amit@example.com',
offload_targets => '{
"OFFLOAD_TARGETS": [
{
"CLOUD_LINK_DATABASE_ID": "34xxxx754755680",
"OFFLOAD_TARGET":
"ocid1.autonomousdatabase.oc1..xxxxxaaba3pv6wkcr4jqae5f44n2b2m2yt2j6
rx32uzr4h25vqstifsfghi"
}
]
}');
END;
/
Note:
Other objects such as analytic views or synonyms are not supported.
• namespace: is the namespace you provide as a name for access with Cloud Links
(one part of the Cloud Link FQN).
A NULL value specifies a system-generated namespace value, unique to the
Autonomous Database instance.
• name: is the name you provide for access with Cloud Links (one part of the Cloud Link
FQN).
• scope: Specifies the scope. You can use one of the variables: MY$REGION,
MY$TENANCY, or MY$COMPARTMENT or specify one or more OCIDs.
See Data Set Scope, Access Control, and Authorization for more information.
• auth_required: A Boolean value that specifies if database level authorization is
required for the data set, in addition to the scope access control. When this is set to
TRUE, the data set enforces an additional authorization step. See Register a Data Set
with Authorization Required for more information.
• data_set_owner: Text value specifies information about the individual responsible for
the data set or a contact for questions about the data set. For example, you could
supply an email address for the data set owner.
• offload_targets: Specifies one or more Autonomous Database OCIDs of
refreshable clones where access to data sets is offloaded, from the Autonomous
Database where the data set is registered.
For each data set consumer whose Autonomous Database instance matches the value of
the specified cloud_link_database_id, access is offloaded to the refreshable clone
identified by the OCID specified in offload_target.
See REGISTER Procedure for more information.
See Use Refreshable Clones with Autonomous Database for more information.
You can search for available data sets with DBMS_CLOUD_LINK.FIND. For example:
DECLARE
result CLOB DEFAULT NULL;
BEGIN
DBMS_CLOUD_LINK.FIND('TREE', result);
DBMS_OUTPUT.PUT_LINE(result);
END;
/
[{"name":"TREE_DATA","namespace":"FOREST","description":"Urban tree
data including county, species and height"}]
NAMESPACE NAME
--------- --------------
FOREST TREE_DATA
REGIONAL_SALES SALES_AGG
TRUSTED_COMPARTMENT SALES
DBMS_CLOUD_LINK.DESCRIBE('FOREST','TREE_DATA')
---------------------------------------------------
Urban tree data including county, species and height
Note:
It can take up to 10 minutes after you register a data set with
DBMS_CLOUD_LINK.REGISTER for the data set to be visible and accessible.
1. As the ADMIN user, revoke the privilege to register data sets. For example:
EXEC DBMS_CLOUD_LINK_ADMIN.REVOKE_REGISTER('DB_USER1');
This revokes the privilege to register data sets for the user DB_USER1.
Running DBMS_CLOUD_LINK_ADMIN.REVOKE_REGISTER does not affect data sets that are
already registered. Use DBMS_CLOUD_LINK.UNREGISTER to remove access for a registered
data set.
See REVOKE_REGISTER Procedure and UNREGISTER Procedure for more
information.
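For example, a minimal sketch of unregistering the data set registered earlier (using
the REGIONAL_SALES namespace and SALES_AGG name from the previous example):
BEGIN
   DBMS_CLOUD_LINK.UNREGISTER(
       namespace => 'REGIONAL_SALES',
       name      => 'SALES_AGG');
END;
/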
2. As the ADMIN user, revoke access to registered data sets.
For example:
EXEC DBMS_CLOUD_LINK_ADMIN.REVOKE_READ('LWARD');
This revokes access to Cloud Links data sets for the user LWARD.
See REVOKE_READ Procedure for more information.
V$CLOUD_LINK_ACCESS_STATS and GV$CLOUD_LINK_ACCESS_STATS Views: Use to track access
to each registered data set on the Autonomous Database instance. These views track
elapsed time, CPU time, the number of rows retrieved, and additional information
about registered data sets. You can use the information in these views to audit
Cloud Links data set access and usage.

DBA_CLOUD_LINK_REGISTRATIONS and ALL_CLOUD_LINK_REGISTRATIONS Views: Use to list
details of the registered data sets on an Autonomous Database instance.

DBA_CLOUD_LINK_ACCESS and ALL_CLOUD_LINK_ACCESS Views: Use to retrieve details of
registered data sets a database is allowed to access.

DBA_CLOUD_LINK_AUTHORIZATIONS View: Provides information about which databases are
authorized to access which data sets. This applies to data sets where the
auth_required parameter is TRUE.

DBA_CLOUD_LINK_PRIVS and USER_CLOUD_LINK_PRIVS Views: Provides information on the
Cloud Link specific privileges, REGISTER, READ, or AUTHORIZE, granted to all users
or to the current user.
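For example, to review the data sets registered on the current instance:
-- List details for all registered data sets
SELECT * FROM DBA_CLOUD_LINK_REGISTRATIONS;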
A Virtual Private Database (VPD) policy lets you control who is accessing your
registered data set and limit access beyond what is possible by specifying a
Cloud Link scope.
Consider an example where REGIONAL_SALES.SALES_AGG is made available at the tenancy
level. If you want to allow full access for one specific database and restrict what
all other databases can see, you can add a VPD policy on the registered data set.
For example:
1. Obtain the unique Cloud Link database ID for the Autonomous Database instance where
you want to provide full access.
DECLARE
DB_ID NUMBER;
BEGIN
DB_ID := DBMS_CLOUD_LINK.GET_DATABASE_ID;
DBMS_OUTPUT.PUT_LINE('Database ID:' || DB_ID);
END;
/
2. On the database that registered the data set, create a VPD policy that allows
full access only to the specific database whose identifier you obtained in Step 1.
BEGIN
DBMS_RLS.ADD_POLICY(object_schema => 'CLOUDLINK',
object_name => 'SALES_VIEW_AGG',
policy_name => 'THIS_YEAR',
function_schema => 'ADMIN',
policy_function => 'VPD_POLICY_SALES',
statement_types => 'SELECT',
policy_type => DBMS_RLS.SHARED_CONTEXT_SENSITIVE);
END;
/
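The VPD_POLICY_SALES function itself is not shown in this excerpt. The following is
a minimal sketch only: the two-argument signature returning a predicate string is
what DBMS_RLS requires, but the context attribute used to identify the querying
database and the SALE_YEAR column are illustrative assumptions.
CREATE OR REPLACE FUNCTION admin.vpd_policy_sales(
    schema_name IN VARCHAR2,
    object_name IN VARCHAR2)
RETURN VARCHAR2
AS
BEGIN
   -- Assumption: the consumer's Cloud Link database ID is exposed through an
   -- application context; substitute the attribute documented for your release.
   IF SYS_CONTEXT('USERENV', 'CLOUD_LINK_DATABASE_ID') = '34xxxx754755680' THEN
      RETURN '1=1';                                  -- full access for the trusted database
   END IF;
   RETURN 'sale_year = EXTRACT(YEAR FROM SYSDATE)';  -- restricted rows for all others
END;
/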
• Cloud Links remote connectivity uses the LOW database service and this cannot be
changed. You can see the remote connections as user C##DATA$SHARE in V$SESSION, and
the V$CLOUD_LINK_ACCESS_STATS and GV$CLOUD_LINK_ACCESS_STATS views provide more
details about remote connections (see the query sketch after this list).
• Autonomous Database only supports Cloud Links when your database version is Oracle
Database 19c. Cloud Links are not supported with database version Oracle Database
21c. See Always Free Autonomous Database for information on using Autonomous
Database with Oracle Database 21c.
• All interfaces are case-sensitive unless otherwise noted, as follows:
– For anything you enter that exists in the database, for example user names and
table names, case is significant and the value must be entered in uppercase letters.
– Predefined variables, for example the predefined scope values, must be entered in
uppercase letters.
– Anything that you specify for the setup of Cloud Links, for example namespaces or
names of tables in a namespace, must be specified as entered. For example, if you
define a namespace as trees, you must enclose the namespace in double quotes,
as "trees", when accessing it with SQL.
• You can share Cloud Links when a data set resides on an Autonomous Database
instance in Read-Only mode. See Use Cloud Links from a Read Only Autonomous
Database Instance for more information.
• It can take up to 10 minutes after you create a refreshable clone for the refreshable clone
to be visible as an offload target. This means you may need to wait up to 10 minutes after
you create a refreshable clone for the refreshable clone to be available for Cloud Links
offload registration.
See Register a Data Set with Offload Targets for Data Set Access and Create a
Refreshable Clone for an Autonomous Database Instance for more information.
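As referenced in the first item in this list, a minimal query to observe Cloud Links
remote connections:
-- Remote Cloud Links connections run as the C##DATA$SHARE user
SELECT sid, serial#, status, machine
FROM   v$session
WHERE  username = 'C##DATA$SHARE';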
To use a PAR URL to provide access to data as a schema object (table or view):
1. Identify the table or view that you want to share.
If there are restrictions on the data you want to make available, use the
application_user_id parameter when you generate the PAR URL and create a VPD
policy to restrict the data that you expose. See Define a Virtual Private Database Policy
to Secure PAR URL Data for more information.
2. Run DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL to generate the PAR URL.
DECLARE
status CLOB;
BEGIN
DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL(
schema_name => 'ADMIN',
schema_object_name => 'TREE_DATA', -- illustrative object name
expiration_minutes => 120,         -- illustrative expiration
result => status);
dbms_output.put_line(status);
END;
/
3. Check the result.
In this example status contains the result that includes the PAR URL.
{
"status":"SUCCESS",
"id":"Vd1Px7QWASdqDbnndiuwTA_example",
"preauth_url":"https://dataaccess.adb.us-
ashburn-1.oraclecloudapps.com/adb/p/Vdtv..._example_wxd0PN/data",
"expiration_ts":"2023-12-04T23:51:35.334Z"
}
To use a PAR URL to provide access to data as an arbitrary SQL query statement:
1. Identify the table or view that contains the information you want to share, as well
as the SELECT statement on the table or view that you want to use.
If there are restrictions on the data you want to make available, use the
application_user_id parameter when you generate the PAR URL and create a
VPD policy to restrict the data that you expose. See Define a Virtual Private
Database Policy to Secure PAR URL Data for more information.
2. Run DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL to generate the PAR URL.
DECLARE
status CLOB;
BEGIN
DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL(
sql_statement => 'SELECT species, height FROM TREE_DATA',
expiration_count => 10,
result => status);
dbms_output.put_line(status);
END;
/
Note:
The sql_statement value must be a SELECT statement.
The expiration_count parameter specifies that the PAR URL expires and is invalidated
after 10 uses. Without an expiration_time specified, the expiration time is set to
the default of 90 days.
See GET_PREAUTHENTICATED_URL Procedure for more information.
3. Check the result.
In this example status contains the result that includes the PAR URL.
{
"status":"SUCCESS",
"id":"xA6iRGexmapleARc_zz",
"preauth_url":"https://dataaccess.adb.us-
ashburn-1.oraclecloudapps.com/adb/p/XA2_example_m4s/data",
"expiration_ts":"2024-05-16T17:01:08.226Z",
"expiration_count":10
}
You can use DBMS_DATA_ACCESS.LIST_ACTIVE_URLS to show the PAR URLs. See List
PAR URLs for more information.
By default, all the data selected by the table, view, or query is made available
with a PAR URL. If you want to restrict access to some of the data, you can add a
VPD policy.
For example:
1. Obtain the application_user_id value that you specified when you generated the
PAR URL.
2. Create a VPD policy on the database where you generated the PAR URL.
BEGIN
DBMS_RLS.ADD_POLICY(
object_schema => 'HR',
object_name => 'EMPLOYEE',
policy_name => 'POL',
policy_function => 'LIMIT_SAL');
END;
/
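The LIMIT_SAL policy function referenced above is not shown in this excerpt. The
following is a minimal sketch, assuming the HR.EMPLOYEE table has a DEPARTMENT_ID
column and that the application_user_id value supplied when generating the PAR URL
is exposed to the policy as SYS_CONTEXT('DATA_ACCESS_CONTEXT$', 'USER_IDENTITY'):
CREATE OR REPLACE FUNCTION limit_sal(
    schema_name IN VARCHAR2,
    object_name IN VARCHAR2)
RETURN VARCHAR2
AS
BEGIN
   -- Restrict rows to the application user identified when the PAR URL
   -- was generated; column name and context attribute are assumptions.
   RETURN 'department_id = SYS_CONTEXT(''DATA_ACCESS_CONTEXT$'', ''USER_IDENTITY'')';
END;
/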
Use DBMS_DATA_ACCESS.LIST_ACTIVE_URLS to list the active PAR URLs. For example:
DECLARE
result CLOB;
BEGIN
result := DBMS_DATA_ACCESS.LIST_ACTIVE_URLS;
dbms_output.put_line(result);
END;
/
Note:
The behavior of DBMS_DATA_ACCESS.LIST_ACTIVE_URLS is dependent on the invoker.
If the invoker is ADMIN or any user with PDB_DBA role, the function lists all active
PAR URLs, regardless of the user who generated the PAR URL. If the invoker is not
the ADMIN user and not a user with PDB_DBA role, the list includes only the active
PAR URLs generated by the invoker.
Anyone with a PAR URL can use it to fetch the data it exposes, for example with curl:
curl https://dataaccess.adb.us-chicago-1.oraclecloudapps.com/adb/p/K6XExample/data
The PAR URL response includes links for any previous or next pages, when the data includes
more than one page. This allows you to navigate in either direction while fetching data. The
JSON also includes a self link that points to the current page, as well as a hasMore attribute
that indicates if there is more data available to fetch.
The following is the response format:
{
  "items": [],            <-- Array of records from database
  "hasMore": true|false,  <-- Indicates if there are more records to fetch
  "limit": Number,        <-- Number of records in the page (maximum 100)
  "offset": Number,       <-- Offset indicating the start of the current page
  "count": Number,        <-- Count of records in the current page
  "links": [
    { "rel": "self",     "href": "{Link to preauth url for the current page}" },
    { "rel": "previous", "href": "{Link to preauth url for the previous page}" },
    { "rel": "next",     "href": "{Link to preauth url for the next page}" }
  ]
}
For example, the following is a sample response from a PAR URL (with newlines
added for clarity):
{"items":[
{"COUNTY":"Main","SPECIES":"Alder","HEIGHT":45},
{"COUNTY":"First","SPECIES":"Chestnut","HEIGHT":51},
{"COUNTY":"Main","SPECIES":"Hemlock","HEIGHT":17},
{"COUNTY":"Main","SPECIES":"Douglas-fir","HEIGHT":34},
{"COUNTY":"First","SPECIES":"Larch","HEIGHT":12},
{"COUNTY":"Main","SPECIES":"Cedar","HEIGHT":21},
{"COUNTY":"First","SPECIES":"Douglas-fir","HEIGHT":10},
{"COUNTY":"Main","SPECIES":"Yew","HEIGHT":11},
{"COUNTY":"First","SPECIES":"Willow","HEIGHT":17},
{"COUNTY":"Main","SPECIES":"Pine","HEIGHT":29},
{"COUNTY":"First","SPECIES":"Pine","HEIGHT":16},
{"COUNTY":"First","SPECIES":"Spruce","HEIGHT":6},
{"COUNTY":"Main","SPECIES":"Spruce","HEIGHT":8},
{"COUNTY":"First","SPECIES":"Hawthorn","HEIGHT":19},
{"COUNTY":"First","SPECIES":"Maple","HEIGHT":16},
{"COUNTY":"Main","SPECIES":"Aspen","HEIGHT":35},
{"COUNTY":"First","SPECIES":"Larch","HEIGHT":27},
{"COUNTY":"First","SPECIES":"Cherry","HEIGHT":20},
{"COUNTY":"Main","SPECIES":"Pine","HEIGHT":37},
{"COUNTY":"Main","SPECIES":"Redwood","HEIGHT":78},
{"COUNTY":"Main","SPECIES":"Alder","HEIGHT":45},
{"COUNTY":"First","SPECIES":"Chestnut","HEIGHT":51},
{"COUNTY":"Main","SPECIES":"Hemlock","HEIGHT":17},
{"COUNTY":"Main","SPECIES":"Douglas-fir","HEIGHT":34},
{"COUNTY":"First","SPECIES":"Larch","HEIGHT":12},
{"COUNTY":"Main","SPECIES":"Cedar","HEIGHT":21},
{"COUNTY":"First","SPECIES":"Douglas-fir","HEIGHT":10},
{"COUNTY":"Main","SPECIES":"Redwood","HEIGHT":78}],
"hasMore":false,
"limit":100,
"offset":0,
"count":30,
"links":
[
{"rel":"self",
"href":"https://dataaccess.adb.us-ashburn-1.oraclecloudapps.com/adb/p/
F5Sn..._example/data"}
]}
To invalidate a PAR URL, and with it access to the data, run
DBMS_DATA_ACCESS.INVALIDATE_URL with the PAR URL ID. For example:
DECLARE
status CLOB;
BEGIN
DBMS_DATA_ACCESS.INVALIDATE_URL(
id => 'Vd1Px7QWASdqDbnndiuwTAyyEstv82PCHqS_example',
result => status);
dbms_output.put_line(status);
END;
/
V$DATA_ACCESS_URL_STATS and GV$DATA_ACCESS_URL_STATS Views: These views track PAR URL
usage, including elapsed time, CPU time, and additional information.
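For example, a minimal query over the PAR URL usage statistics (selecting all
columns, since the exact column list is not reproduced in this excerpt):
SELECT * FROM V$DATA_ACCESS_URL_STATS;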
• Overview of Share
• Access and Enable the Data Share Tool
• Provide Share
• Consume Share
• Data Share Limitations
• FAQs on the Data Share Tool
• Overview of the Data Share Tool
Oracle Autonomous Database enables you to create shares using the share tool.
• Access and Enable the Data Share Tool
To share or consume data using the Data Share tool, navigate to the Data Studio,
launch the Data Share tool, and enable sharing.
• Provide Share
Use the Provide Share page to view the number of shares of the database, and
the number of share recipients, search for a share entity, search for a recipient,
create a new share, and create a new recipient.
• Consume Share
Once the providers share the objects, there are a few steps the recipients need to
follow to consume the share.
• Data Share Limitations on Autonomous Database
This section summarizes the limitations of Data Share (both Versioned and Live
Share) included in the Oracle Autonomous Database.
• FAQs on the Data Share Tool
The following are frequently asked questions on the Data Share tool.
Note:
You must have the correct privileges to create or consume a data share. If the Data
Share card is disabled, click the tooltip and follow the steps for the administrator
to grant you the required privilege.
Select the Data Share menu from the Data Studio suite in the Database Actions
homepage to access this tool. This opens the Data Share homepage. It consists of
widgets that enable you to provide and consume share objects.
Note:
This is the homepage you view after you have enabled sharing and set Provider
identification details.
Note:
If you do not see the Data Share tool card then your database user is missing the
required DWROLE role.
Click Quick Start Guide to familiarize yourself with the Data Share tool.
Click PLSQL or Data Studio (Web UI) to try Data Sharing with PL/SQL or Data Studio without
creating an account on Oracle Cloud tenancy.
Click Enable Sharing to grant sharing permission to you as a provider. See Access and
Enable the Data Share Tool for more details.
The widgets are defined in the following sections:
• Provide Share
• Consume Share
Share Terminology
Provider: Autonomous Database Serverless enables the provider to share existing
objects. The share can contain a single table, a set of related tables, or a set of tables with
some logical grouping. It could be a person, an institution, or a software system that shares
the objects.
Example: An institution, such as NASA, that makes a data set available via data.gov.
• For a Live Share, the intended recipient copies the sharing ID from the Consume
Share page and provides it to the provider, who uses it to publish a share to that
recipient. This is the case when the provider shares to only one PDB.
• A provider can also share to ALL_REGIONS, ALL_TENANCY, or ALL_COMPARTMENTS.
Share Architecture
The following diagram is a generalized flow diagram of the architecture of the Data Share.
3. Clicking the Data Share pane opens a Provider and Consumer page where you can
view the Provide Share and Consume Share widgets.
4. Clicking the Enable Share icon takes you to the Enable Sharing screen where you
must select the user schema you want to enable and move it from the Available
Schemas column to the Selected Schemas column.
5. Click Save.
You have now enabled the share. Click Provide Share. On the Provide Share page, click
Provider Identification to create a Provider ID. This provides information to the recipient
on how to identify you.
This field provides the details of the provider before you share the data. The share
provider identification will be available to recipients to whom you grant the share.
Selecting this button opens a Provider Identification wizard. Specify the following fields on
the General tab of the Provider Identification wizard:
Note:
You must configure SMTP only once and then the system uses the
saved configuration from that point forward.
Specify the following field values in the SMTP Configuration tab of the wizard:
• Server host: Enter the endpoint used to send the email. For example,
internal-mailrouter.oracle.com.
• Server port: Enter the SMTP port used to accept email. Email Delivery supports
TLS on port 25.
• Sender: Enter the email address of the sender. For example,
oaapgens_us@oracle.com.
• Server Encryption: This field indicates whether TLS, the standard means of
performing encryption in transit for email, is being used. Providers must encrypt
email while it is in transit to the Oracle Cloud Infrastructure Email Delivery
service. Encrypted emails are protected from being read during transit. If there is
no encryption, enter None.
Select the credential to use for the SMTP connection from the drop-down list. If no
credential is available in the drop-down list, you can create one by clicking the
Create Credential icon. Refer to Create Oracle Cloud Infrastructure Native
Credentials for more details.
Click Test to test the SMTP configuration. A screen appears that asks you to run the
ACL script; you can run the script if you have ADMIN rights. This is a one-time
setup step. After you receive a message that the SMTP test was successful, you can
save the SMTP configuration.
You can now start sharing your data.
Provide Share
Use the Provide Share page to view the number of shares of the database, and the
number of share recipients, search for a share entity, search for a recipient, create a
new share, and create a new recipient.
The Provide Share page consists of widgets Shares and Recipients and icons to
create Shares and Recipients. The page is described in the following sections:
• Provide Share Overview
Use the Provide Share to create and publish shares.
• Provide Versioned Share
In this type of share, data is shared and published as well-defined, known, as-of-
snapshots. At the time of publication, the tool generates and stores the data share
as parquet files in the specified bucket. The recipient can directly access the share in the
object store.
• Provide Live Share
In this type of share the recipient accesses data directly from the Oracle table or view.
This share gives the recipient the latest data. Live Share works by querying data over a
cloud link, which is implemented on a database link. The database link allows you to query
objects over the network on a foreign machine and schema. The advantage of this mode
is that the data is up to date as of the time of query.
• View Share Entity Details
Use the Actions icon at the right of the share entity entry to view details about the live
share or delta share entity you create.
• Create Share Recipient
Use the + Create Share Recipient icon on the Provide Share page to create a consumer
of the data share. When there is already a recipient created, use the + Create Recipient
icon on the Recipients widget of the Provide Share page to create a consumer of the data
share.
• View Share Recipient Entity Details
Use the Actions icon at the right of the share recipient entity entry to view details about
the live share or delta share recipient entity you create.
• Provide Share Overview
The Provide Share page enables you to view, create, edit, and access information on the
shares and recipients you create from this page.
• Provide Versioned Share
You can share data as a set of distinct versions, such as at the end of each day. The
recipient will only see changes to the data when you publish a new version. As a
versioned share Provider, you must create an OCI native credential and associate the
bucket's URL with the credential.
• Provide Live Share
You can share data as of the latest database commit with Autonomous Databases in the
same region. The recipient will always see the latest data.
• View Share Entity Details
To view details about an entity, click the Actions icon at the right of the Share entity entry,
then click View Details.
• Create Share Recipient
On the Provide Shares page, click the Recipients widget and select + Create Recipient
to create a new Share Recipient.
• View Share Recipient Entity Details
Click the Actions icon at the right of the Share Recipient entity entry, then click View
Details.
The search field and the display area of the Provide Share page vary when you switch
between the SHARES and RECIPIENTS widgets.
Shares Page
The following image displays the Provide Share page when you select the SHARES
widget.
The Provide Share page when you select the Shares widget consists of the following:
1. The top section of the Provide Share page displays Shares and Recipients section
with two widgets. The widgets are defined in the following sections:
• SHARES- Select this widget to view, search and perform actions on the
Shares you create.
• RECIPIENTS- Select this widget to view, search and perform actions on the
Share Recipients you create.
2. Search Shares field
Selecting the Shares widget enables you to search for the shares you create. Click
the field and type or paste the name of the share or recipient you are looking for.
Click the magnifier icon to run the search. Click X to clear your searches in the
Search Shares field.
3. + Create Share button
You can use the Create Share wizard to define data shares based on object
storage data and associate them with recipients. A single data share can consist
of multiple shared objects that are shared with one or multiple recipients.
You can create a new share and a new recipient from this button. Refer to the
Provide Live Share, Provide Versioned Share and Create Share Recipients
section to explore more on what each button does.
4. Provider Identification button: See Access and Enable the Data Share Tool for
more about this button for first-time users. If you have already configured SMTP,
you can edit the Provider ID or set SMTP using this button.
5. Toolbar
The toolbar consists of the following buttons:
• Sort By
To select sorting values, click the Sort By button to open the list of options.
Then click the Ascending or Descending icon next to one or more of the
sorting values.
For example, if you select the Ascending icon next to Entity name and the
Descending icon next to Entity type, the entities will be sorted in alphabetical order
by entity name and then in reverse alphabetical order by entity type.
Click Reset in the list to clear the choices in the list.
The sorting values you choose are listed next to the Sort by label beneath the
toolbar. Click the X icon on a sorting value to remove it.
• Page size
By default, up to 25 entities are displayed on the page. If you want more entities on a
page, select a number from this list.
• Previous and Next
If the search results are displayed on multiple pages, click these buttons to navigate
through the pages.
• Refresh
Click to refresh the entities shown on the page, based on the current search string.
• Entity view options
Choose one of these three options to set how entities are displayed on the page.
Click Open Card View to display entities as cards arranged into one or two columns;
click Open Grid View to display entities as rows in a table; or click Open List View
to display entities in a single column of panels.
6. Sort by settings
When you set sorting values by using the Sort By control in the toolbar, the settings are
displayed in small boxes beneath the toolbar. You can delete a setting by clicking the X
icon in the box. Or you can change the settings by returning to the Sort By control in the
toolbar.
7. Display area
The area below the Search field displays the entities returned by a search. If you select
the Shares widget, you will view the Share entities in the display area. If you select the
Recipient widget, you will view the Share Recipient entities in the display area. This area
shows all providers you are subscribed to.
Recipients Page
The following image displays the Provide Share page when you select the RECIPIENTS
widget and no RECIPIENT has been created.
Click + Create Share Recipient to create a share recipient. See Create Share Recipient for
more details.
Click Provider Identification to specify the name and description to identify how the
recipients will view you. You can also configure SMTP settings.
The following image displays the Provide Share page when you select the RECIPIENTS
widget and at least one RECIPIENT has already been created.
• Page size
By default, up to 25 entities are displayed on the page. If you want more entities
on a page, select a number from this list.
• Previous and Next
If the search results are displayed on multiple pages, click these buttons to navigate
through the pages.
• Refresh
Click to refresh the entities shown on the page, based on the current search string.
• Entity view options
Choose one of these three options to set how entities are displayed on the page.
Click Open Card View to display entities as cards arranged into one or two columns;
click Open Grid View to display entities as rows in a table; or click Open List View
to display entities in a single column of panels.
6. Sort by settings
When you set sorting values by using the Sort By control in the toolbar, the settings are
displayed in small boxes beneath the toolbar. You can delete a setting by clicking the X
icon in the box. Or you can change the settings by returning to the Sort By control in the
toolbar.
7. Display area
The area below the Search field displays the entities returned by a search. If you select
the Shares widget, you will view the Share entities in the display area. If you select the
Recipient widget, you will view the Share Recipient entities in the display area.
2. In the Publish Details tab of the wizard, select SHARE VERSIONS USING OBJECT
STORAGE.
Select a cloud location where you want the share to be published. If you already
have an OCI cloud storage location available, select any available cloud location
from the Select Cloud location drop-down list. You will see a message confirming
that you have access to read, write, and delete in that cloud location.
Note:
Select Create Cloud location and follow the steps described in the
Create Oracle Cloud Infrastructure Native Credentials if you do not have
a cloud location available.
For sharing versioned data, you can enable the schedule for share. Select Enable
For Scheduling option to set up a schedule for sharing. In the time interval fields,
enter a number, and select a time type and the days on which to poll the bucket for
new or changed files. For example, to poll every two hours on Monday, Wednesday, and
Friday, enter 2, select Hours, and then select Monday, Wednesday, Friday in the
appropriate fields. You can select All Days, Monday to Friday, Sunday to Thursday, or
Custom from the Week Days drop-down. The Custom field enables you to select
Monday, Tuesday, Wednesday, Thursday and Friday in the appropriate fields.
Select a start and end date. If you don't select a start date, the current time and date are
used as the start date. The end date is optional. However, without an end date, the share
will continue to poll. This is an optional step.
Click Next to progress to the Select Tables tab of the Create Share wizard.
3. On the Select Tables tab of the wizard, select the schema from the drop-down menu and
click the table you wish to share from the Available Tables. Choose any of the available
options:
• >: This option enables you to move the table to Shared Tables.
• <: To remove the selected table from Shared Tables select this option.
• >>: This option allows you to move all the tables to the Shared Tables screen.
• <<: To remove all the selected tables from Shared Tables select this option.
Click Next to proceed to the Recipients tab of the Create Share wizard.
Note:
You may find it convenient to click on the specific tab to move back and forth.
You can update changes on any screen this way.
4. On the Recipients tab of the Create Share wizard, select available Recipient from
the drop-down.
Select Notify Recipients When Available to notify the recipients when the share is
available. This option can be enabled when SMTP is configured to send an email
notification to the recipient.
5. In case you want to create a new recipient, select New Recipient. Selecting New
Recipient opens a Create Share Recipient wizard. See Create Share Recipient.
Note:
Ensure there are no spaces in the name of the recipient. This field does
not support duplicate recipient names. If you try creating a recipient with
a duplicate name, the creation of the share fails with the "Name is
already used by an existing object" error.
Click Create to finish creating the share recipient. The newly added recipient is
displayed in the list of recipients. Click Cancel to cancel the ongoing process of
recipient creation.
6. After you create a new share recipient, in the granted recipient's list you can:
• Select copy profile activation link to clipboard icon to copy the activation
link.
• Select Email recipient profile download link icon to mail the profile link to
the recipient you select in this step. Once you select it, the Share tool triggers
an activation mail with the profile link to the recipient’s email address. Click
Send to share the profile link with the recipient. See About the Consume
Share Page to know more about the activation mail and how to proceed as a
recipient once you receive the activation mail.
Select Create to create the share. The time for the status of the share to change
from unpublished to published depends on the size of the tables in the share. You
will receive a notification message at the top right of the display that says the
share was created successfully. The Provide Share page displays the share along with
details such as the entity type, owner, shared objects, and recipients.
Note:
A versioned share will show different colors and states depending upon what state
the share is in.
1. On the Create Share wizard, in the Name field of the General tab, enter a name
for the Share. For example: My_Share.
In the Description field, enter a description for the data you are sharing. For
example: Weekly Sales report.
Select Next to progress to the Publish Details tab of the Create Share wizard. You
can alternatively click on the Publish Details tab to progress to the Publish Details
screen of the wizard.
2. In the Publish Details tab of the wizard, select SHARE LIVE DATA USING
DIRECT CONNECTION.
Click Next to progress to the Select Tables tab of the Create Share wizard.
3. On the Select Tables tab of the wizard, select the schema from the drop-down
menu and click the table you wish to share from the Available Tables. Choose any
of the available options:
• >: This option enables you to move the table to Shared Tables.
• <: To remove the selected table from Shared Tables select this option.
• >>: This option allows you to move all the tables to the Shared Tables screen.
• <<: To remove all the selected tables from Shared Tables select this option.
Note:
If a live share data provider is sharing multiple tables that would normally be
joined together, it is recommended that the producer create a view that performs the
joins and then share only the view.
Click Next to proceed to the Recipients tab of the Create Share wizard.
Note:
You may find it convenient to click on the specific tab to move back and forth.
You can update changes on any screen this way.
4. On the Recipients tab of the Create Share wizard, select available Recipients from the
drop-down. The list of recipients available in the drop-down depends on the scope of the
share. You can select any or all the following values depending on with whom you intend
to share the data:
• MY_REGION: Access to the data share is allowed for databases in the same region
as the data share provider.
• MY_TENANCY: Access to the data share is allowed for databases in the same
tenancy as the data share provider.
• MY_COMPARTMENT: Access to the data share is allowed for databases in the
same compartment as the data share provider.
5. In case you want to create a new recipient, select New Recipient. Selecting New
Recipient opens a Create Share Recipient wizard. Enter the recipient's Name,
Description (optional), Sharing ID, Email in their respective fields.
Note:
• Ensure there are no spaces in the name of the recipient. This field
does not support duplicate recipient names. If you try creating a
recipient with a duplicate name, the creation of the share fails with
the "Name is already used by an existing object" error.
• A Sharing ID is a unique identifier for your Autonomous Database. The Data Share
tool uses it to share data with you. Copy the Sharing ID from the Consume Share page
to the clipboard and paste it in the Sharing ID field of the Create Share Recipient
wizard.
Click Create to finish creating the share recipient. The newly added recipient is
displayed in the list of recipients. Click Cancel to cancel the ongoing process of
recipient creation.
6. Select Create to create the share. You will receive a successful share notification
message at the top right of the display that says the share is successful. The
Provide Share page displays the share along with its details such as the entity
type, owner, shared objects and the recipients.
Click the name of the entity or click Action to view details about the share entity. See View
Share Entity Details.
After the share is created, you can view the new Share entity on the Provide Share page.
To view more details about an item, click the Actions icon for the item, then click
Expand. For a cloud location, the files in the location are displayed. Pointing to the
name of the file displays the name, application, type, path, and schema of the file. To
collapse the display, click the Actions icon, then click Collapse.
Impact
Impact shows all known information about the downstream use of an entity, and
therefore how a change in the definition of an entity may affect other entities that
depend on it.
For a specific share entity, you can perform the following actions using the Actions
context menu:
• View Details: See View Share Entity Details.
• Objects: Opens Select Tables dialog of the specific share where you can remove
tables from the list of Shared Tables list or add tables from the list of Available
Tables list to the list of Shared Tables.
• Recipients: Opens the Recipients dialog where you can update the list of
recipients or delete, copy profile activation link, send an activation mail to the
selected recipient.
• Manage Versions: Opens the Versions dialog which displays the version,
publication date, number of objects, number of new objects, status of the publish
and Actions related to the publish. You can delete the version or make the
selected version as the current version from the Action column. Select Refresh to
refresh any updates or select Delete unused versions to delete the selected
version. When you delete the unused version, you also receive the option to delete
objects in the cloud store location.
• Publish: Select Publish to publish the updated share.
• Unpublish: Select Unpublish to unpublish the share.
• Delete: Removes the Share Provider Entity.
1. On clicking the + Create Recipient icon, a Create Share Recipient dialog box appears.
Note:
Ensure there are no spaces in the name of the recipient. This field does not
support duplicate recipient names. If you try creating a recipient with a
duplicate name, the creation of the share fails with the "Name is already
used by an existing object" error.
Impact
Impact shows all known information about the downstream use of an entity, and therefore
how a change in the definition of an entity may affect other entities that depend on it.
Click Actions in the Share Recipient entity card to open the context menu. The actions
available are:
• View Details: See View Share Recipient Entity Details.
• Granted Share: Opens the Manage Granted Shares dialog where you can grant access
to a share from the list of available shares or revoke access from the current
selection of shares.
• Send Activation Mail: Opens the mail with the profile link. See About the JSON profile.
• Copy Profile Activation Link to Clipboard: Copies the JSON profile activation link to
clipboard.
• Rename: Select this option to rename the Recipient.
• Delete: Select this option to delete the Recipient.
Consume Share
Once the providers share the objects, there are a few steps the recipients need to follow to
consume the share.
Use the Consume Share page to perform the following operations:
• Consume Share Overview:
To consume data shares, you need to subscribe to them and create views of tables
included in the live share.
• Consume Versioned Share:
As a recipient, you will need to download your share profile, subscribe to the data share
provider, register the shares and create external tables on top of your shares. The Data
Share tool authorizes the access using the JSON profile sent to the recipient using the
activation mail. After the access is granted, the Data Share tool links the shared objects
with the Data link tool where the consumer can run the data link job and access the
objects shared by the provider.
• Consume Live Share:
This lets you, as a recipient, consume live data from the database.
• View Share Provider Entity Details:
Use the Actions icon at the right of the live share or delta share provider entity entry to
view details about the live share or delta share provider entity you create.
• Consume Share Overview
The Consume Share provides an overview on the list of share providers, search for share
providers and add a share provider.
• Consume Versioned Share
You must follow these steps to make shared versioned data available to you within
Oracle Autonomous Database. Data shared with you through Delta Sharing is not
automatically available and discoverable in your Autonomous Database.
• Consume Live Share
Live data shared with you through data sharing is not automatically available for
consumption.
The area beneath the Search Consumer Share Providers field displays the entities
returned by a search and that match the filter criteria set in the Filters panel. You can sort
the entities by clicking the Sort By button and then setting sort values.
To access the share, you need to register the shared objects using your personal
authorization JSON profile.
You can click the profile link to download the JSON profile. Clicking the profile
link takes you to a new screen in the browser with a Get Profile Information button
as shown below:
Select Get Profile Information to download the JSON profile to connect to the share
provider.
Note:
You can click the Get Profile Information button only once; the Share tool does not
allow you to select it twice. Clicking it a second time brings up a screen that
displays the list of causes of the failure to download the profile.
{ "
shareCredentialsVersion ": 1,
"endpoint": "https://myhost.us.example.com/ords/prov/
_delta_sharing/",
"tokenEndpoint": "http://myhost.us.example.com:1234/ords/
pdbdba/oauth/token",
"bearerToken": "-xxxxxxxxxxxxxxxxxxxxx",
"expirationTime": "2023-01-13T07:53:11.073Z",
"clientID": "xxxxxxxxxxxxxxxxxxxxxx..",
"clientSecret": "xxxxxxxxxxxxxxxxxxxx.."}
The profile stores the credentials in an encrypted format. The parameters, with
their descriptions, are:
• shareCredentialsVersion: The version of the share you publish.
• endpoint: Specifies the share endpoint.
• tokenEndpoint: Specifies the token endpoint. The Share tool client uses the token
endpoint to refresh the timeout on your bearer token if you are consuming the share
using Oracle.
• bearerToken: A cryptic string that the authentication server generates in response
to a login request.
• expirationTime: The time at which the bearer token expires.
• clientID: Specifies the public identifier that the authentication server generates
when you register the instance for authentication.
• clientSecret: Specifies a secret identifier that the authentication server
generates for authorization.
Copy the JSON content of the profile into a notepad. You will need this JSON later
to subscribe to your share provider.
Note:
Ensure you copy the full JSON content profile, including the left brace and the right
brace.
Security enhancements
As a share recipient, you must set up an access control list (ACL) to the share provider’s
machine by using the APPEND_HOST_ACE procedure as an ADMIN user, or another privileged
user. This allows you to access the share via the Internet.
Note:
This must be done before using the Add Share Provider Wizard to add an access
control entry (ACE) to the access control list (ACL) of the host (i.e. Share provider).
You can find the host name from the JSON profile you downloaded in the previous
step.
For example, if you wish to allow a database user, A_SHARE_USER, to access the
endpoints on a host (share provider) named myhost.us.example.com, here is a sample
PL/SQL procedure you need to run in the SQL worksheet editor as an ADMIN user. As a
prerequisite, extract the host name from the endpoint property in the delta sharing
JSON profile, as provided in the example above; the host name from the example is
myhost.us.example.com.
BEGIN
dbms_network_acl_admin.append_host_ace(
host =>'myhost.us.example.com',
lower_port=>443,
upper_port=>443,
ace => xs$ace_type(
privilege_list => xs$name_list('http', 'http_proxy'),
principal_name =>'A_SHARE_USER',
principal_type => xs_acl.ptype_db));
COMMIT;
END;
/
The parameters are:
• host: Specifies the name or the IP address of the host.
• lower_port: Specifies the lower port of an optional TCP port range.
• upper_port: Specifies the upper port of an optional TCP port range.
• ace: The access control entry (ACE).
• privilege_list: Specifies the list of network privileges to be granted or denied.
• principal_name: The principal (database user or role) to whom the privilege is
granted or denied. It is case-sensitive.
• principal_type: Specifies the type of principal you use.
Refer to the PL/SQL Packages and Types Reference document for more details on the
DBMS_NETWORK_ACL_ADMIN package subprograms.
Grant an ACL to the user on the local ORDS endpoint. You will need this to generate
bearer tokens on locally created shares.
In this process you will load the provider’s JSON profile for configuration and
credentials to enable access to the recipients.
1. Open the Consume Share page and click + Subscribe to Share Provider to
select Subscribe to Delta Share Provider from the drop-down. This opens the
Subscribe to Share Provider dialog box.
2. On the Provider Settings pane of the Register Share Provider dialog box, specify
the following details:
• JSON: Select this option and paste the JSON content of the profile that you copied
into the notepad, or upload the JSON profile file, to create a share provider
subscription.
Click Next to progress to the Add Shares tab.
3. On the Add Shares tab of the dialog, you will view the list of available shares. Click the
share you wish to consume from the Available Shares and select any of the available
options:
• >: This option enables you to move the Available Share to Selected Shares.
• <: Select this option to remove the selected share from Selected Shares.
• >>: This option allows you to move all the shares to the Selected Shares screen.
• <<: Select this option to remove all the selected shares from Selected Shares.
4. Click Subscribe to add the share. A confirmation prompt appears when the provider is
created successfully. After successful creation of the provider, you will now view the Link
Cloud Object screen of the Data Load page.
5. You can view the name of the Share provider in the cloud storage location field. The
share appears in the source file location with the files you add to the share.
Expand the Share folder cart, drag and drop the file you share from the source to the
Data Link cart.
Select Start in the Data link cart to run the data link job.
Under the Share Source section, choose Select from Live Share Providers and select
the provider from the drop-down list.
Under the Share Provider Details field, enter the following:
• >: This option enables you to move the Available Share to Selected Shares.
• <: Select this option to remove the selected share from Selected Shares.
• >>: This option allows you to move all the shares to the Selected Shares screen.
• <<: Select this option to remove all the selected shares from Selected Shares.
Click Subscribe to add the share. A confirmation prompt appears when the provider is
created successfully. After successful creation of the provider, you will now view the Link
Cloud Object screen of the Data Load page.
4. You can view the name of the Share provider in the cloud storage location field. The
share appears in the source file location with the files you add to the share.
Expand the Share folder cart, drag and drop the file you share from the source to the
Data Link cart.
Select Start in the Data link cart to run the data link job.
Data Share Limitations
1. A live share data provider can create shares with a maximum of four objects.
2. If the live share data provider is sharing multiple tables that would normally be
joined together, it is recommended that the producer create a view that performs the
joins and then share only the view, as shown in the sketch below.
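For example, a provider could pre-join related tables and share only the resulting
view; the table and column names below are illustrative only:
-- ORDERS and CUSTOMERS are hypothetical provider tables
CREATE OR REPLACE VIEW sales_orders_v AS
SELECT o.order_id, o.order_date, o.amount,
       c.customer_name, c.region
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id;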
Develop
You can develop and deploy RESTful Services with native Oracle REST Data Services
(ORDS) support on Autonomous Databases. Simple Oracle Document Access (SODA) for
REST lets you use a database as a simple JSON document store.
• Create Applications with Oracle APEX in Autonomous Database
You can create applications with Oracle APEX on Autonomous Database.
• Using JSON Documents with Autonomous Database
Autonomous Database has full support for data represented as JSON documents. In
Autonomous Databases, JSON documents can coexist with relational data.
• Developing RESTful Services in Autonomous Database
You can develop and deploy RESTful Services with native Oracle REST Data Services
(ORDS) support on Autonomous Databases. Simple Oracle Document Access (SODA)
for REST lets you use a database as a simple JSON document store.
• Using Oracle Database API for MongoDB
Oracle Database API for MongoDB makes it possible to connect to Oracle Autonomous
Database using MongoDB language drivers and tools.
• Send Email and Notifications on Autonomous Database
There are a number of options for sending email on Autonomous Database. You can also
send notifications to a Slack or MSTeams channel.
• User Defined Notification Handler for Scheduler Jobs
Note:
If your Autonomous Database is configured to use a Private Endpoint, then
you can only access Oracle APEX from clients in the same Virtual Cloud
Network (VCN).
See Configuring Network Access with Private Endpoints for more
information.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon
next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
To access Oracle APEX Administration Services you can use the Oracle Cloud
Infrastructure Console or Database Actions.
To access Administration Services from Database Actions:
1. On the Autonomous Database details page select Database Actions and in the
list click View all database actions.
2. From the Database Actions Launchpad, under Development, click APEX.
The Oracle APEX Administration Services sign-in page appears.
If you already created a workspace, the Application Express workspace sign-in
page appears instead. In this case, to open Administration Services, click
Administration Services link.
3. In the Password field, enter the password for the ADMIN user.
Note:
By default, Administration Services and the Oracle APEX development
environment on Autonomous Database use Database Accounts
authentication. This authentication method uses the database account
user name and password to authenticate users. You can switch to a
different authentication method on the Administration Services, Manage
Instance, Security page.
See Set the ADMIN Password in Autonomous Database to change the password.
3. On the Create Workspace page, in the Database User field, enter a new database
username or choose an existing user from the list.
The ADMIN database user cannot be associated with a workspace.
4. In the Password field, provide a strong password if the database user is a new
user. If the user is an existing database user, you do not enter a password.
See About User Passwords on Autonomous Database to learn more about the
default password complexity rules.
5. (optional) In the Workspace Name field, change the name of the workspace that
was automatically populated.
6. Click Create Workspace.
See Access Oracle APEX App Builder and Create Oracle APEX Developer Accounts
to create additional developer accounts.
See Configuring Network Access with Private Endpoints for more information.
To access Oracle APEX App Builder:
1. Sign in to Oracle APEX using the workspace name, username, and password you
specified when you created the workspace.
Note:
Oracle APEX Administration Services and the Oracle APEX development
environment on Autonomous Database use Database Accounts authentication.
This authentication method uses the database account user name and
password to authenticate users.
See Create Users on Autonomous Database - Connecting with a Client Tool to learn
more about the default password complexity rules.
11. In the Confirm Password field, confirm the password.
Alternatively, click Create and Create Another if you want to create the user and create
another user.
To share sign-in details with developers:
1. On the Autonomous Databases page, under the Display Name column, select the
Autonomous Database instance that contains the user and the user's workspace.
2. On the Autonomous Database Details page select Database Actions and click
View all database actions.
3. In Database Actions Launchpad, under Development, right-click APEX and
choose Copy URL.
4. Provide the copied URL, along with the Workspace Name, the Username, and the
Password for the developer account you created.
Using this URL developers can access the Oracle APEX environment without having
to navigate to the Autonomous Database Oracle Cloud Infrastructure Console.
Note:
Changing the password of Workspace Administrators and Developers
through Manage Developers and Users page or Edit Profile page only
affects applications configured with "Application Express Accounts"
authentication scheme. To change the password used to access App Builder,
use Database Actions or another client to change the password of the
corresponding database user.
See Creating User Accounts in Oracle APEX Administration Guide for more
information.
Use the following command to check if the view has been created:
describe myview;
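For context, the following is a minimal sketch of creating such a view with a JSON
data guide; the MYCOLLECTION table and its JSON_DOCUMENT column are illustrative
assumptions:
DECLARE
   dg CLOB;
BEGIN
   -- Compute a hierarchical data guide over the JSON column, then create
   -- a relational view (MYVIEW) that projects the discovered fields.
   SELECT JSON_DATAGUIDE(json_document, DBMS_JSON.FORMAT_HIERARCHICAL)
   INTO   dg
   FROM   mycollection;
   DBMS_JSON.CREATE_VIEW('MYVIEW', 'MYCOLLECTION', 'JSON_DOCUMENT', dg);
END;
/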
See Create View using JSON Data Guide for more information on creating a view
using JSON Data Guide.
To access a private host with UTL_HTTP, add the following access control list for
the desired host:
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'www.example.com',
ace => XS$ACE_TYPE(
privilege_list => XS$NAME_LIST('http'),
principal_name => APEX_APPLICATION.g_flow_schema_owner,
principal_type => XS_ACL.ptype_db),
private_target => true);
END;
/
Note:
If you set the ROUTE_OUTBOUND_CONNECTIONS database property to PRIVATE_ENDPOINT, you
do not need to define access control lists for individual hosts in order to access
them from APEX. See Enhanced Security for Outbound Connections with Private
Endpoints for more information.
• Your APEX instance does not require an outbound web proxy. Attempting to use a web
proxy at the API call level or application level shows the error: ORA-01031: insufficient
privileges.
• There is a default limit of 50,000 outbound web service requests per APEX workspace in
a rolling 24-hour period. If the limit of outbound web service calls is reached, the following
SQL exception is raised on the subsequent request and the request is blocked:
ORA-20001: You have exceeded the maximum number of web service requests per
workspace. Please contact your administrator.
You can raise or remove the default limit of outbound web service requests by setting a
value for the MAX_WEBSERVICE_REQUESTS instance parameter or by updating the Maximum
Web Service Requests attribute in APEX Administration Services. For example, to
change the limit to 250,000, connect to your database as ADMIN using a SQL client and
execute the following:
BEGIN
APEX_INSTANCE_ADMIN.SET_PARAMETER('MAX_WEBSERVICE_REQUESTS',
'250000');
COMMIT;
END;
/
Note:
Third-party email providers are not supported.
1. Identify the SMTP connection endpoint for Email Delivery. You configure the
endpoint as the SMTP Host in your APEX instance in Step 4. You may need to
subscribe to additional Oracle Cloud Infrastructure regions if Email Delivery is not
available in your current region. See Configure SMTP Connection for more
information.
2. Generate SMTP credentials for Email Delivery. Your APEX instance uses
credentials to authenticate with Email Delivery servers when you send email. See
Generate SMTP Credentials for a User for more information.
3. Create an approved sender for Email Delivery. You need to complete this step for
all email addresses you use as the "From" with APEX_MAIL.SEND calls, as the
Application Email From Address in your apps, or in the SMTP_FROM instance
parameter. See Managing Approved Senders for more information.
4. Connect to your Autonomous Database as ADMIN user using a SQL client and
configure the following SMTP parameters using
APEX_INSTANCE_ADMIN.SET_PARAMETER:
• SMTP_HOST_ADDRESS: Specifies the SMTP connection endpoint from Step 1.
• SMTP_USERNAME: Specifies the SMTP credential user name from Step 2.
• SMTP_PASSWORD: Specifies the SMTP credential password from Step 2.
• Keep default values for SMTP_HOST_PORT parameter (587) and SMTP_TLS_MODE
parameter (STARTTLS).
For example:
BEGIN
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_HOST_ADDRESS', 'smtp.us-
phoenix-1.oraclecloud.com');
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_USERNAME',
'ocid1.user.oc1.username');
APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_PASSWORD', 'password');
COMMIT;
END;
/
5. Validate the email configuration settings by running
APEX_INSTANCE_ADMIN.VALIDATE_EMAIL_CONFIG:
BEGIN
APEX_INSTANCE_ADMIN.VALIDATE_EMAIL_CONFIG;
END;
/
If any errors are reported (for example, "ORA-29279: SMTP permanent error: 535
Authentication credentials invalid"), adjust the SMTP parameters and repeat the
validation step.
6. Send a test email using APEX SQL Workshop, SQL Commands specifying one of the
approved senders from Step 3 as "From". For example:
BEGIN
APEX_MAIL.SEND(p_from => 'alice@example.com',
p_to => 'bob@example.com',
p_subj => 'Email from Oracle Autonomous Database',
p_body => 'Sent using APEX_MAIL');
END;
/
Note:
There is a default limit of 5,000 emails per workspace in a 24-hour period. You can
update or remove this limit in Oracle APEX Administration Services or by setting the
WORKSPACE_EMAIL_MAXIMUM instance parameter. Oracle Cloud Infrastructure Email
Delivery may impose additional limitations.
An approved sender must be set up for all “From:” addresses sending mail through Oracle
Cloud Infrastructure, or mail will be rejected. There are limitations on approved senders with
Oracle Cloud Infrastructure Email Delivery. See Managing Approved Senders for more
information.
For more information, see:
• Overview of the Email Delivery Service
• APEX_MAIL in Oracle APEX API Reference
• APEX_INSTANCE_ADMIN in Oracle APEX API Reference
Note:
The latest Oracle APEX Patch Set Bundles are automatically applied on
Autonomous Database and cannot be deferred.
3. Open the Available Updates dialog by clicking the icon next to Available Updates.
4. In the Available Updates dialog select an Upgrade Window.
When the Defer Upgrade setting specifies an Upgrade Window, if an Oracle APEX
upgrade is available and you take no action, the upgrade is applied automatically on
the date specified in the Available Updates area.
3. Open the Available Updates dialog by clicking the icon next to Available Updates.
4. On the Available Updates dialog click Upgrade Now.
This starts the Oracle APEX upgrade process in the background. You can continue
using Administration Services and App Builder as well as running apps during most of
the upgrade process.
Access Oracle APEX, Oracle REST Data Services, and Developer Tools Using a
Vanity URL
By default you access Oracle APEX apps, REST endpoints, and developer tools on
Autonomous Database using the oraclecloudapps.com domain name. You can optionally
configure a vanity URL or custom domain name that is easy to remember to help promote
your brand identity.
After you acquire a desired domain name and matching SSL certificate from a vendor of your
choice, deploy an Oracle Cloud Infrastructure Load Balancer in your Virtual Cloud Network
(VCN) using your Autonomous Database as the backend. Your Autonomous Database
instance must be configured with a private endpoint in the same VCN. See Configuring
Network Access with Private Endpoints for more information.
To learn more, see the following:
• Introducing Vanity URLs for APEX and ORDS on Oracle Autonomous Database
• Automate Vanity URL Configuration Using Terraform
Tip:
This section does not apply to co-managed databases (for example, OCI Base
Database Service) and Oracle Autonomous Database on Dedicated Exadata
Infrastructure.
– VALIDATE_EMAIL_CONFIG
See APEX_INSTANCE_ADMIN in Oracle APEX API Reference.
• The following application authentication schemes are supported with limitations:
– HTTP Header Variable: Only after configuring a vanity URL.
See Access Oracle APEX, Oracle REST Data Services, and Developer Tools Using a
Vanity URL for more information.
– LDAP Directory, including APEX_LDAP API: With the same restrictions that apply to the
DBMS_LDAP package.
See PL/SQL Packages Notes for Autonomous Database for more information.
– SAML Sign-In: Only when using a customer managed ORDS.
See About Customer Managed Oracle REST Data Services on Autonomous
Database for more information.
• Disabled options in SQL Workshop:
The ability to create and manage database links in Object Browser is disabled. To create
a database link, use the DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK PL/SQL API.
See Use Database Links with Autonomous Database for more information.
• Runtime environment:
Oracle APEX is only available as a Full Development environment. Converting into a
Runtime environment, which minimizes the installed software footprint and removes UI
components such as App Builder and Administration Services, is not supported.
See About the Differences Between Runtime and Full Development Environments in
Oracle APEX App Builder User’s Guide to learn more.
• Import SODA Collection Data Using Oracle Data Pump Version 19.6 or Later
Shows the steps to import SODA collections into Autonomous Database with
Oracle Data Pump.
Note:
If you are using Always Free Autonomous Database with Oracle Database
21c, then to avoid compatibility problems of SODA drivers, Oracle
recommends the following:
• Use the driver versions that are needed for working with JSON type as
specified in SODA Drivers. See SODA Drivers for more information.
• For projects that were started using a database release prior to Oracle
Database 21c, explicitly specify the metadata for the default collection as
specified in the example in SODA Drivers. For projects started using
release Oracle Database 21c or later, just use the default metadata. See
SODA Drivers for more information.
• SODA Notes
When you use SODA with Autonomous Database the following restrictions apply:
SODA Notes
When you use SODA with Autonomous Database the following restrictions apply:
• Automatic indexing is not supported for SQL and PL/SQL code that uses the SQL/
JSON function json_exists. See SQL/JSON Condition JSON_EXISTS for more
information.
• Automatic indexing is not supported for SODA query-by-example (QBE).
Work with JSON Documents Using SQL and PL/SQL APIs on Autonomous
Database
Autonomous Database supports JavaScript Object Notation (JSON) data natively in
the database.
When you use Autonomous Database to store JSON data you can take advantage of
all the features available in your database. You can combine your JSON data with non-
JSON data; then, using Autonomous Database features such as Oracle Machine
Learning Notebooks, you can analyze your data and create reports. You can access
JSON data stored in the database the same way you access other database data,
including using Oracle Call Interface (OCI), Microsoft .NET Framework, and Java
Database Connectivity (JDBC).
See JSON in Oracle Database to get started.
Topics
• About Loading JSON Documents
• Load a JSON File of Line-Delimited Documents into a Collection
Import SODA Collection Data Using Oracle Data Pump Version 19.6 or Later
Shows the steps to import SODA collections into Autonomous Database with Oracle Data
Pump.
You can export and import SODA collections using Oracle Data Pump Utilities starting with
version 19.6. Oracle recommends using the latest Oracle Data Pump version for importing
data from Data Pump files into your database.
Download the latest version of Oracle Instant Client, which includes Oracle Data Pump, for
your platform from Oracle Instant Client Downloads. See the installation instructions on the
platform install download page for the installation steps required after you download Oracle
Instant Client.
In Oracle Data Pump, if your source files reside on Oracle Cloud Infrastructure Object
Storage you can use Oracle Cloud Infrastructure native URIs, Swift URIs, or pre-
authenticated URIs. See DBMS_CLOUD Package File URI Formats for details on these file
URI formats.
If you are using an Oracle Cloud Infrastructure pre-authenticated URI, you still need to supply
a credential parameter. However, credentials for a pre-authenticated URL are ignored (and
the supplied credentials do not need to be valid). See DBMS_CLOUD Package File URI
Formats for information on Oracle Cloud Infrastructure pre-authenticated URIs.
This example shows how to create the SODA collection metadata and import a SODA
collection with Data Pump.
1. On the source database, export the SODA collection using the Oracle Data Pump expdp
command.
See Export Your Existing Oracle Database to Import into Autonomous Database for more
information.
2. Upload the dump file set from Step 1 to Cloud Object Storage.
3. Create a SODA collection with the required SODA collection metadata on your
Autonomous Database.
For example, if you export a collection named MyCollectionName from the source
database with the following metadata:
• The content column is a BLOB type.
• The version column uses the SHA256 method.
Then on the Autonomous Database where you import the collection, create a new
collection:
• By default on Autonomous Database for a new collection the content column is set to
BLOB with the jsonFormat specified as OSON.
• By default on Autonomous Database for a new collection the versionColumn.method
is set to UUID.
See SODA Default Collection Metadata on Autonomous Database for details.
For example:
DECLARE
collection_create SODA_COLLECTION_T;
BEGIN
collection_create := DBMS_SODA.CREATE_COLLECTION('MyCollectionName');
END;
/
COMMIT;
4. Store a credential for Cloud Object Storage using DBMS_CLOUD.CREATE_CREDENTIAL,
using either an Auth Token or a Signing Key. For example, with an Auth Token:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password'
);
END;
/
Alternatively, create an Oracle Cloud Infrastructure Signing Key based credential:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'DEF_CRED_NAME',
user_ocid    => 'ocid1.user.oc1..aaaaaaaauq54mi7zdyfhw33ozkwuontjceel7fok5nq3bf2vwetkpqsoa',
tenancy_ocid => 'ocid1.tenancy.oc1..aabbbbbbaafcue47pqmrf4vigneebgbcmmoy5r7xvoypicjqqge32ewnrcyx2a',
private_key  => 'MIIEogIBAAKCAQEAtUnxbmrekwgVac6FdWeRzoXvIpA9+0r1.....wtnNpESQQQ0QLGPD8NM//JEBg=',
fingerprint  => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27');
END;
/
For more information on Oracle Cloud Infrastructure Signing Key based credentials see
CREATE_CREDENTIAL Procedure.
Supported credential types:
• Data Pump Import supports Oracle Cloud Infrastructure Auth Token based
credentials and Oracle Cloud Infrastructure Signing Key based credentials.
For more information on Oracle Cloud Infrastructure Signing Key based credentials
see CREATE_CREDENTIAL Procedure.
• Data Pump supports using an Oracle Cloud Infrastructure Object Storage pre-
authenticated URL for the dumpfile parameter. When you use a pre-authenticated
URL, providing the credential parameter is still required, but impdp ignores its
value, so you can use a NULL value for the credential in the next step. See Using
Pre-Authenticated Requests for more information.
5. Run Data Pump Import with the dumpfile parameter set to the list of file URLs on your
Cloud Object Storage and the credential parameter set to the name of the credential you
created in the previous step.
Note:
Import the collection data using the option CONTENT=DATA_ONLY.
Specify the collection you want to import using the INCLUDE parameter. This is useful if a
data file set contains the entire schema and the SODA collection you need to import is
included as part of the dump file set.
Use REMAP_DATA to change any of the columns during import. This example shows using
REMAP_DATA to change the version column method from SHA256 to UUID.
impdp admin/password@db2022adb_high \
directory=data_pump_dir \
credential=def_cred_name \
dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%u.dmp \
encryption_pwd_prompt=yes \
SCHEMAS=my_schema \
INCLUDE=TABLE:\"= \'MyCollectionName\'\" \
CONTENT=DATA_ONLY \
REMAP_DATA=my_schema.'\"MyCollectionName\"'.VERSION:SYS.DBMS_SODA.TO_UUID
Note:
If during the export with expdp you used the encryption_pwd_prompt=yes
parameter then use encryption_pwd_prompt=yes and input the same password
at the impdp prompt that you specified during the export.
In Oracle Data Pump version 19.6 and later, the credential argument authenticates
Oracle Data Pump to the Cloud Object Storage service you are using for your
source files. The credential parameter cannot be an Azure service principal,
Amazon Resource Name (ARN), or a Google service account. See Accessing
Cloud Resources by Configuring Policies and Roles for more information on
resource principal based authentication.
The dumpfile argument is a comma delimited list of URLs for your Data Pump
files.
For the best import performance use the HIGH database service for your import
connection and set the parallel parameter to one quarter the number of ECPUs
(.25 x ECPU count). If you are using OCPU compute model, set the parallel
parameter to the number of OCPUs (1 x OCPU count).
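For example, on a database with 32 ECPUs, set parallel=8 (0.25 x 32); on a database
with 8 OCPUs, set parallel=8 (1 x 8).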
For information on which database service name to connect to run Data Pump
Import, see Manage Concurrency and Priorities on Autonomous Database.
For the dump file URL format for different Cloud Object Storage services, see
DBMS_CLOUD Package File URI Formats.
Note:
To perform a full import or to import objects that are owned by other
users, you need the DATAPUMP_CLOUD_IMP role.
Note:
The Oracle REST Data Services (ORDS) application in Autonomous Database is
preconfigured and fully managed. ORDS connects to the database using the low
predefined database service with a fixed maximum number of connections (the
number of connections for ORDS does not change based on the number of ECPUs,
or OCPUs if your database uses OCPUs). It is not possible to change the default
ORDS configuration.
See About Customer Managed Oracle REST Data Services on Autonomous
Database for information on using an additional alternative ORDS deployment that
enables flexible configuration options.
See Oracle REST Data Services for information on using Oracle REST Data Services.
See Predefined Database Service Names for Autonomous Database for information on the
low database service.
• Open the Oracle Cloud Infrastructure Console by clicking the menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
Note:
If you are using Always Free Autonomous Database with Oracle Database
21c, Oracle recommends the following:
For projects that were started using a database release prior to Oracle
Database 21c, explicitly specify the metadata for the default collection as
specified in the example in the section SODA Drivers. For projects started
using release Oracle Database 21c or later, just use the default metadata.
See SODA Drivers for more information.
GET /ords/hr/module/service/
However, when authenticated as the REST enabled schema SCOTT, the same request:
GET /ords/hr/module/service/
results in an error:
Any database user whose credentials are correct and who meets these rules is authenticated
and granted the ORDS mid-tier role SQL Developer. The SQL Developer role enables the user
to access any endpoint that requires the SQL Developer role.
See REST-Enable a Database Table in Quick Start Guide for information on how to enable a
table for REST access.
/ords/schema/soda/latest/*
The following examples use the cURL command line tool (http://curl.haxx.se/) to submit
REST requests to the database. However, other 3rd party REST clients and libraries should
work as well. The examples use database schema ADMIN, which is REST-enabled. You can
use SODA for REST with cURL commands from the Oracle Cloud Shell.
This command creates a new collection named "fruit" in the ADMIN schema:
These commands insert three JSON documents into the fruit collection:
{"items":[{"id":"6F7E5C60197E4C8A83AC7D7654F2E375"...
{"items":[{"id":"83714B1E2BBA41F7BA4FA93B109E1E85"...
{"items":[{"id":"BAD7EFA9A2AB49359B8F5251F0B28549"...
{
"items": [
{
"id":"6F7E5C60197E4C8A83AC7D7654F2E375",
"etag":"57215643953D7C858A7CB28E14BB48549178BE307D1247860AFAB2A958400E16",
"lastModified":"2019-07-12T19:00:28.199666Z",
"created":"2019-07-12T19:00:28.199666Z",
"value":{"name":"orange", "count":42}
}
],
"hasMore":false,
"count":1
}
SELECT
f.json_document.name,
f.json_document.count,
f.json_document.color
FROM fruit f;
Note:
If you are using Always Free Autonomous Database with Oracle Database 21c,
Oracle recommends the following:
For projects that were started using a database release prior to Oracle Database
21c, explicitly specify the metadata for the default collection as specified in the
example in the section SODA Drivers. For projects started using release Oracle
Database 21c or later, just use the default metadata. See SODA Drivers for more
information.
These examples show a subset of the SODA and SQL/JSON features. See the following for
more information:
• SODA for REST for complete information on Simple Oracle Document Access (SODA)
• SODA for REST HTTP Operations for information on the SODA for REST HTTP
operations
The following examples use the cURL command line tool (http://curl.haxx.se/) to submit
REST requests to the database. However, other 3rd party REST clients and libraries should
work as well. The examples use database schema ADMIN, which is REST-enabled. You can
use SODA for REST with cURL commands from the Oracle Cloud Shell.
You can load this sample purchase-order data set into a collection purchaseorder on your
Autonomous Database using SODA for REST curl commands.
You can then use this purchase-order data to try out examples in Oracle Database
JSON Developer’s Guide.
For example, the following query selects both the id of a JSON document and values
from the JSON purchase-order collection stored in column json_document of table
purchaseorder. The values selected are from fields PONumber, Reference, and
Requestor of JSON column json_document, which are projected from the document as
virtual columns (see SQL NESTED Clause Instead of JSON_TABLE for more
information).
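The query itself is not reproduced above; a minimal sketch using the SQL NESTED clause
(the projected field names follow the text; treat the exact shape as an assumption):
SELECT id, "PONumber", "Reference", "Requestor"
  FROM purchaseorder
  NESTED json_document COLUMNS ("PONumber", "Reference", "Requestor");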
See Manage User Privileges on Autonomous Database - Connecting with a Client Tool
for more information.
3. Sign out as the ADMIN user.
4. Sign in to Database Actions as the user that is setting up to use OAuth authentication.
5. In Database Actions, use a SQL worksheet to register the OAuth client.
a. Register the OAuth client.
For example, enter the following commands into the SQL worksheet, where you
supply the appropriate values for your user and your client application.
BEGIN
OAUTH.create_client(
p_name => 'my_client',
p_grant_type => 'client_credentials',
p_owner => 'Example Company',
p_description => 'A client for my SODA REST resources',
p_support_email => 'user_name@example.com',
p_privilege_names => 'my_priv'
);
OAUTH.grant_client_role(
p_client_name => 'my_client',
p_role_name => 'SQL Developer'
);
OAUTH.grant_client_role(
p_client_name => 'my_client',
p_role_name => 'SODA Developer'
);
COMMIT;
END;
/
3-651
Chapter 3
Develop
This registers a client named my_client to access the my_priv privilege using
OAuth client credentials.
6. Obtain the client_id and client_secret required to generate the access token.
For example, in the SQL worksheet run the following command:
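The command is not reproduced above; a sketch, assuming the standard ORDS metadata view
USER_ORDS_CLIENTS:
SELECT id, name, client_id, client_secret
  FROM user_ords_clients;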
7. Obtain the access token. To get an access token you send a REST GET request to
database_ORDS_url/user_name/oauth/token.
The database_ORDS_url is available from Database Actions, under Related
Services, on the RESTful Services and Soda card. See Access RESTful
Services and SODA for REST for more information.
In the following command, use the client_id and the client_secret you
obtained in Step 6.
The following example uses the cURL command line tool (http://curl.haxx.se/) to
submit REST requests to Autonomous Database. However, other 3rd party REST
clients and libraries should work as well.
You can use the cURL command line tool to submit the REST GET request. For
example:
{"access_token":"JbOKtAuDgEh2DXx0QhvPGg","token_type":"bearer","expi
res_in":3600}
To specify both the client_id and the client_secret with the curl --user
argument, enter a colon to separate the client_id and the client_secret. If you
only specify the user name, client_id, curl prompts for a password and you can
enter the client_secret at the prompt.
8. Use the access token to access the protected resource.
The token obtained in the previous step is passed in the Authorization header. For
example:
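The request itself is not reproduced above; a minimal cURL sketch, assuming your SODA
base URL and the token obtained in Step 7:
curl -i -H "Authorization: Bearer JbOKtAuDgEh2DXx0QhvPGg" \
  "https://<database_ORDS_url>/admin/soda/latest/"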
Cache-Control: private,must-revalidate,max-age=0
{"items":[],"hasMore":false}
See Configuring Secure Access to RESTful Services for complete information on secure
access to RESTful Services.
Access Oracle REST Data Services, Oracle APEX, and Developer Tools Using a
Vanity URL
By default you access Oracle REST Data Services (ORDS) endpoints, Oracle APEX apps,
and developer tools on Autonomous Database using the oraclecloudapps.com domain
name.
To access Oracle REST Data Services (ORDS) endpoints, you can optionally configure a
vanity URL or custom domain name that is easy to remember to help promote your brand
identity.
After you acquire a desired domain name and matching SSL certificate from a vendor of your
choice, deploy an Oracle Cloud Infrastructure Load Balancer in your Virtual Cloud Network
(VCN) using your Autonomous Database as the backend. Your Autonomous Database
instance must be configured with a private endpoint in the same VCN. See Configuring
Network Access with Private Endpoints for more information.
To learn more, see the following:
• Introducing Vanity URLs for APEX and ORDS on Oracle Autonomous Database
• Automate Vanity URL Configuration Using Terraform
Installing and configuring a customer managed environment for ORDS allows you to
run ORDS with configuration options that are not possible using the default Oracle
managed ORDS available with Autonomous Database.
See Installing and Configuring Customer Managed ORDS on Autonomous Database
for more information.
Installing and configuring a customer managed environment for ORDS is only
supported with Autonomous Database Serverless. Your Autonomous Database must
be configured for one of the following workload types: Data Warehouse, Transaction
Processing, or JSON Database.
Note:
Oracle REST Data Services 19.4.6 or higher is required to use a customer
managed environment for ORDS with Autonomous Database. Customer
managed environment for ORDS is not supported if your database is
configured for APEX workload type.
for the Autonomous Database instance. In addition to configuring the network access, you
must enable MongoDB API on the Autonomous Database instance.
• Configure Access for MongoDB
To use the MongoDB API, you can create and configure a new Autonomous Database or
modify the configuration of an existing Autonomous Database by configuring ACLs or by
defining a private endpoint.
• Enable MongoDB API on Autonomous Database
After you configure the network access for the Autonomous Database instance, enable
the MongoDB API.
At this point, to use Oracle Database API for MongoDB configure secure access by selecting
and configuring one of these network access types:
• Secure access from allowed IPs and VCNs only
• Private endpoint access only
See Configuring Network Access with Private Endpoints for information on configuring an
Autonomous Database instance with a private endpoint.
Note:
To use Oracle Database API for MongoDB the Network must be configured
and the Access type must be either: Allow secure access from specified
IPs and VCNs or Virtual Cloud Network.
Note:
Public IP addresses may change. Any change to your public IP address requires a
corresponding change in the ACL. If you are unable to access your database, check
your ACL first.
For example, you could create a collection using the Oracle Database API for
MongoDB as follows:
use scott;
db.createCollection('fruit');
Note:
The role DWROLE in the Autonomous Database contains these roles, among
others.
Access to schemas not granted to the user is prohibited. For example, the user
SCOTT can only access collections in the schema SCOTT. There is one exception. If
the authenticated user has the Autonomous Database privileges CREATE USER, ALTER
USER and DROP USER, that user can access any ORDS-enabled schema.
Additionally, a user with these privileges can implicitly create schemas. That is, when
the user creates a collection in a database that does not exist, the schema will be
created automatically. See Oracle Database API for MongoDB for more information.
Topics
• Retrieve the Autonomous Database MongoDB Connection String
• Test Connection Using the Command Line
• Test Connection Using a Node.js Application
Use MongoDB Shell (mongosh), a command-line utility to connect to and query your data.
1. You can retrieve the connection string for your Autonomous Database instance with
Database Actions.
See Access Database Actions as ADMIN for more information.
2. On the Database Actions Launchpad, under Related Services, click Oracle Database
API for MongoDB.
3. On the Oracle Database API for MongoDB page click Copy.
1. Log in as the test user. See Create a Test Autonomous Database User for MongoDB for
more information.
authMechanism=PLAIN&authSource=$external&tls=true&loadBalanced=false
Using MongoDB: 3.6.2
Using Mongosh: 1.0.7
For mongosh info see: https://docs.mongodb.com/mongodb-shell/admin
> show dbs
testuser 0 B
>
Note:
Use URI percent-encoding to replace any reserved characters in your
connection-string URI — in particular, characters in your username and
password. These are the reserved characters and their percent
encodings:
! # $ % & ' ( ) * +
%21 %23 %24 %25 %26 %27 %28 %29 %2A %2B
, / : ; = ? @ [ ]
%2C %2F %3A %3B %3D %3F %40 %5B %5D
'mongodb://RUTH:%40least1%2F2%23%3F@<server>:27017/ruth/ ...'
Here the password @least1/2#? is percent-encoded as %40least1%2F2%23%3F.
Depending on the tools or drivers you use, you might be able to provide
a username and password as separate parameters, instead of as part of
a URI connection string. In that case you likely won't need to encode any
reserved characters they contain.
See also:
• Percent Encoding - Reserved Characters
• Uniform Resource Identifier (URI): Generic Syntax
testuser> db.fruit.insertOne( {name:"apple", count:12, color: "red"} )
{
acknowledged: true,
insertedId: ObjectId("614ca340dab254f63e4c6b48")
}
testuser> db.fruit.insertOne( {name:"pear", count:5} )
{
acknowledged: true,
insertedId: ObjectId("614ca351dab254f63e4c6b49")
}
testuser>
SELECT
f.json_document.name,
f.json_document.count,
f.json_document.color
FROM fruit f;
1. Download Node.js. If you have already downloaded or installed Node.js for your
environment, you can skip this step.
$ wget https://nodejs.org/dist/latest-v14.x/node-v14.17.5-linux-x64.tar.xz
2. Extract the contents of the Node.js archive. If you have already downloaded or installed
Node.js for your environment, you can skip this step.
$ export PATH="`pwd`"/node-v14.17.5-linux-x64/bin:$PATH
$ mkdir autonomous_mongodb
$ cd autonomous_mongodb
$ npm init -y
const { MongoClient } = require('mongodb');

// Placeholder URI -- use the MongoDB API connection string retrieved
// from Database Actions.
const uri = 'mongodb://testuser:<password>@<server>:27017/testuser?authMechanism=PLAIN&authSource=$external&tls=true';
const client = new MongoClient(uri);

async function run() {
  try {
    await client.connect();
    const movies = client.db('testuser').collection('movies');
    // Insert a movie
    const doc = { title: 'Back to the Future',
                  year: 1985, genres: ['Adventure', 'Comedy', 'Sci-Fi'] };
    const result = await movies.insertOne(doc);
    // Find and print the inserted movie
    const movie = await movies.findOne({ title: 'Back to the Future' });
    console.log(movie);
  } finally {
    // Ensures that the client will close when you finish/error
    await client.close();
  }
}
run().catch(console.dir);
Note:
Use URI percent-encoding to replace any reserved characters in your
connection-string URI — in particular, characters in your username and
password. These are the reserved characters and their percent encodings:
! # $ % & ' ( ) * +
%21 %23 %24 %25 %26 %27 %28 %29 %2A %2B
, / : ; = ? @ [ ]
%2C %2F %3A %3B %3D %3F %40 %5B %5D
'mongodb://RUTH:%40least1%2F2%23%3F@<server>:27017/ruth/ ...'
Depending on the tools or drivers you use, you might be able to provide a
username and password as separate parameters, instead of as part of a
URI connection string. In that case you likely won't need to encode any
reserved characters they contain.
See also:
• Percent Encoding - Reserved Characters
• Uniform Resource Identifier (URI): Generic Syntax
d. Run the example. The output should look similar to the following.
$ node connect
{
_id: new ObjectId("611e3266005202371acf27c1"),
title: 'Back to the Future',
year: 1985,
genres: [ 'Adventure', 'Comedy', 'Sci-Fi' ]
}
Note:
Oracle Cloud Infrastructure Email Delivery Service is the only supported
email provider for public SMTP endpoints.
3. Create an approved sender for Email Delivery. Complete this step for all email addresses
you use as the "From" with UTL_SMTP.MAIL.
See Managing Approved Senders for more information.
4. Allow SMTP access for the ADMIN user by appending an Access Control Entry (ACE).
For example:
BEGIN
-- Allow SMTP access for user ADMIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => ’www.us.example.com’,
lower_port => 587,
upper_port => 587,
ace => xs$ace_type(privilege_list => xs$name_list('SMTP'),
principal_name => 'ADMIN',
principal_type => xs_acl.ptype_db));
END;
/
See PL/SQL Packages Notes for Autonomous Database for UTL_SMTP restrictions with
Autonomous Database.
• SMTP Send Email Sample Code
Shows sample code for sending email with UTL_SMTP on Autonomous Database.
The following sample is minimal; the recipient address and message text are placeholders:
DECLARE
    mail_conn   utl_smtp.connection;
    username    varchar2(1000):= 'ocid1.user.oc1.username';
    passwd      varchar2(50) := 'password';
    msg_from    varchar2(50) := 'adam@example.com';
    msg_to      varchar2(50) := 'alan@example.com';
    mailhost    VARCHAR2(50) := 'smtp.us-ashburn-1.oraclecloud.com';
BEGIN
    mail_conn := UTL_smtp.open_connection(mailhost, 587);
    UTL_smtp.starttls(mail_conn);
    UTL_smtp.auth(mail_conn, username, passwd, schemes => 'PLAIN');
    UTL_smtp.mail(mail_conn, msg_from);
    UTL_smtp.rcpt(mail_conn, msg_to);
    UTL_smtp.open_data(mail_conn);
    UTL_smtp.write_data(mail_conn, 'Subject: Test mail' || UTL_TCP.CRLF);
    UTL_smtp.write_data(mail_conn, 'Sent using UTL_SMTP' || UTL_TCP.CRLF);
    UTL_smtp.close_data(mail_conn);
    UTL_smtp.quit(mail_conn);
EXCEPTION
    WHEN UTL_smtp.transient_error OR UTL_smtp.permanent_error THEN
        UTL_smtp.quit(mail_conn);
        dbms_output.put_line(sqlerrm);
    WHEN OTHERS THEN
        UTL_smtp.quit(mail_conn);
        dbms_output.put_line(sqlerrm);
END;
/
Where:
• mailhost: specifies the SMTP Connection Endpoint from Step 1 in Send Email with
Email Delivery Service on Autonomous Database.
• username: specifies the SMTP credential username from Step 2 in Send Email
with Email Delivery Service on Autonomous Database.
• passwd: specifies the SMTP credential password from Step 2 in Send Email with
Email Delivery Service on Autonomous Database.
• msg_from: specifies one of the approved senders from Step 3 in Send Email with
Email Delivery Service on Autonomous Database.
• Both the source Autonomous Database instance and the email provider are in the same
Oracle Cloud Infrastructure VCN.
• The source Autonomous Database instance and the email provider are in different Oracle
Cloud Infrastructure VCNs that are paired.
• The email provider is on an on-premises network that is connected to the source
Autonomous Database instance's Oracle Cloud Infrastructure VCN using FastConnect or
VPN.
As a prerequisite, to send email using an email provider, define the following ingress and
egress rules:
• Define an egress rule in the source database's subnet security list or network security
group such that the traffic to the target host is allowed on port 587 or port 25 (depending
on which port you are using).
• Define an ingress rule in the target host's subnet security list or network security group
such that the traffic from the source Autonomous Database instance's IP address to port
587 or port 25 is allowed (depending on which port you are using).
Note:
Making calls to a private email provider is only supported in commercial regions and
US Government regions.
This feature is enabled by default in all commercial regions.
This feature is enabled by default in US Government regions for newly provisioned
databases.
For existing US Government databases on a private endpoint, if you want to call an
email provider from an Autonomous Database to a target in a US Government
region, please file a Service Request at Oracle Cloud Support and request to
enable the private endpoint in government regions database linking feature.
US Government regions include the following:
• Oracle Cloud Infrastructure US Government Cloud with FedRAMP
Authorization
• Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5
Authorization
Note:
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE only supports a single
hostname for the host parameter (on a private endpoint, using an IP
address, a SCAN IP, or a SCAN hostname is not supported).
Example
...
UTL_SMTP.AUTH(l_mail_conn, 'ocid1.user.oc1.username', 'xxxxxxxxxxxx',
              schemes => 'PLAIN');
...
As shown in the example above, when you invoke the AUTH subprogram, you must pass
the username and password in clear text as PL/SQL actual parameters. You might
need to embed the username and password into various PL/SQL automation or cron
scripts. Passing clear-text passwords is a compliance issue that is addressed by the
UTL_SMTP.SET_CREDENTIAL subprogram.
PROCEDURE UTL_SMTP.SET_CREDENTIAL (
c IN OUT NOCOPY connection,
credential IN VARCHAR2,
schemes IN VARCHAR2 DEFAULT NON_CLEARTEXT_PASSWORD_SCHEMES
);
FUNCTION UTL_SMTP.SET_CREDENTIAL (
c IN OUT NOCOPY connection,
credential IN VARCHAR2,
schemes IN VARCHAR2 DEFAULT NON_CLEARTEXT_PASSWORD_SCHEMES)
RETURN reply;
Example
• Create a credential object:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'SMTP_CRED',
    username        => 'web_app_user',
    password        => '<password>' );
END;
/
DECLARE
l_mail_conn UTL_SMTP.CONNECTION;
BEGIN
l_mail_conn := UTL_SMTP.OPEN_CONNECTION('smtp.example.com', 587);
UTL_SMTP.SET_CREDENTIAL(l_mail_conn, 'SMTP_CRED', SCHEMES => 'PLAIN');
...
END;
This example sends the command to authenticate to the SMTP server. The SMTP server
needs this information to authorize the request. The value l_mail_conn is the SMTP
connection, SMTP_CRED is the credential name, and PLAIN is the SMTP authentication
scheme.
See UTL_SMTP for more information.
See PL/SQL Packages Notes for Autonomous Database for information on restrictions for
UTL_SMTP on Autonomous Database.
Note:
DBMS_CLOUD_NOTIFICATION supports sending email only to public SMTP
endpoints. Oracle Cloud Infrastructure Email Delivery Service is the only
supported email provider.
Note:
You can only use DBMS_CLOUD_NOTIFICATION for Email notifications with
Autonomous Database version 19.21 and above.
1. Identify your SMTP connection endpoint for Email Delivery. You may need to
subscribe to additional Oracle Cloud Infrastructure regions if Email Delivery is not
available in your current region.
For example, select one of the following for the SMTP connection endpoint:
• smtp.us-phoenix-1.oraclecloud.com
• smtp.us-ashburn-1.oraclecloud.com
• smtp.email.uk-london-1.oci.oraclecloud.com
• smtp.email.eu-frankfurt-1.oci.oraclecloud.com
See Configure SMTP Connection for more information.
2. Generate SMTP credentials for Email Delivery. UTL_SMTP uses credentials to
authenticate with Email Delivery servers when you send email.
See Generate SMTP Credentials for a User for more information.
3. Create an approved sender for Email Delivery. Complete this step for all email
addresses you use as the "From" with UTL_SMTP.MAIL.
See Managing Approved Senders for more information.
4. Create a credential object and use DBMS_CLOUD_NOTIFICATION.SEND_MESSAGE to
send a message as an email.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'EMAIL_CRED',
username => 'username',
password => 'password'
);
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_MESSAGE(
provider => 'email',
credential_name => 'EMAIL_CRED',
message => 'Subject content',
params => json_object(
    'recipient' value 'mark@example.com, suresh@example.com',
    'to_cc'     value 'nicole@example.com, jordan@example.com',
    'to_bcc'    value 'manisha@example.com',
    'subject'   value 'Test subject',
    'smtp_host' value 'smtp.email.example.com',
    'sender'    value 'approver_sender@example.com')
);
END;
/
Use the params parameter to specify the sender, smtp_host, subject, recipient, and the
CC and BCC recipients as string values.
• sender: specifies the Email ID of the approved sender from Step 3.
• smtp_host: specifies the SMTP host name from step 2.
• subject: specifies the subject of the email.
• recipient: specifies the email IDs of recipients. Use a comma between email IDs
when there are multiple recipients.
• to_cc: specifies the email IDs that are receiving a CC of the email. Use a comma
between email IDs when there are multiple CC recipients.
• to_bcc: specifies the email IDs that are receiving a BCC of the email. Use a comma
between email IDs when there are multiple BCC recipients.
See SEND_MESSAGE Procedure for more information.
1. Identify your SMTP connection endpoint for Email Delivery. You may need to subscribe to
additional Oracle Cloud Infrastructure regions if Email Delivery is not available in your
current region.
For example, select one of the following for the SMTP connection endpoint:
• smtp.us-phoenix-1.oraclecloud.com
• smtp.us-ashburn-1.oraclecloud.com
• smtp.email.uk-london-1.oci.oraclecloud.com
• smtp.email.eu-frankfurt-1.oci.oraclecloud.com
See Configure SMTP Connection for more information.
2. Generate SMTP credentials for Email Delivery. UTL_SMTP uses credentials to
authenticate with Email Delivery servers when you send email.
See Generate SMTP Credentials for a User for more information.
3. Create an approved sender for Email Delivery. Complete this step for all email
addresses you use as the "From" with UTL_SMTP.MAIL.
See Managing Approved Senders for more information.
4. Create a credential object and use DBMS_CLOUD_NOTIFICATION.SEND_DATA to send
the output of a query as an email.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'EMAIL_CRED',
username => 'username',
password => 'password'
);
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_DATA(
provider => 'email',
credential_name => 'EMAIL_CRED',
query  => 'SELECT tablespace_name FROM dba_tablespaces',
params => json_object(
    'recipient' value 'mark@example.com, suresh@example.com',
    'to_cc'     value 'nicole@example.com, jordan@example.com',
    'to_bcc'    value 'manisha@example.com',
    'subject'   value 'Test subject',
    'type'      value 'json',
    'title'     value 'mytitle',
    'message'   value 'This is the message',
    'smtp_host' value 'smtp.email.example.com',
    'sender'    value 'approver_sender@example.com')
);
END;
/
Use the params parameter to specify the sender, smtp_host, subject, recipient, the CC
and BCC recipients, the message, the data type, and the title as string values.
• sender: specifies the Email ID of the approved sender from Step 3.
• smtp_host: specifies the SMTP host name from step 2.
• subject: specifies the subject of the email. The maximum size is 100 characters.
• recipient: This specifies the email IDs of recipients. Use a comma between email IDs
when there are multiple recipients.
• to_cc: specifies the email IDs that are receiving a CC of the email. Use a comma
between email IDs when there are multiple CC recipients.
• to_bcc: specifies the email IDs that are receiving a BCC of the email. Use a comma
between email IDs when there are multiple BCC recipients.
• message: specifies the message text.
• type: specifies the output format as either CSV or JSON.
• title: specifies the title of the attachment containing the SQL output. Because the title is
used to generate a file name, it must contain only letters, digits, underscores, hyphens,
or dots.
Notes for sending the results of a query as mail:
• You can only use DBMS_CLOUD_NOTIFICATION for mail notifications with Autonomous
Database version 19.21 and above.
• The maximum message size for use with DBMS_CLOUD_NOTIFICATION.SEND_DATA for
mail notification is 32k bytes.
See SEND_DATA Procedure for more information.
The Slack app is installed in a Slack Workspace, which in turn has Channels
where messages can be sent. The bot token of the Slack app must have the
following permission scopes defined:
channels:read
chat:write
files:write
Tip:
If you cannot run the CREATE_CREDENTIAL procedure successfully, ask the
ADMIN user to grant you execute access on the DBMS_CLOUD packages.
The credential's username is SLACK_TOKEN and the password is the bot token.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'SLACK_CRED',
username => 'SLACK_TOKEN',
password => 'xoxb-34....96-34....52-zW....cy');
END;
/
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE (
host => 'slack.com',
lower_port => 443,
upper_port => 443,
ace => xs$ace_type(
privilege_list => xs$name_list('http'),
principal_name => 'EXAMPLE_INVOKING_USER',
principal_type => xs_acl.ptype_db));
END;
/
See Configuring Access Control for External Network Services for more information.
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_MESSAGE(
provider => 'slack',
credential_name => 'SLACK_CRED',
message => 'Alert from Autonomous Database...',
params => json_object('channel' value 'C0....08'));
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_DATA(
provider => 'slack',
credential_name => 'SLACK_CRED',
query => 'SELECT username, account_status, expiry_date FROM USER_USERS
WHERE rownum < 5',
params => json_object('channel' value 'C0....08',
'type' value 'csv'));
END;
/
Use the params parameter to specify the Slack channel and the data type:
• channel: Specifies the Slack channel ID.
The Channel ID is a unique ID for a channel and is different from the channel
name. In Slack, when you view channel details you can find the Channel ID on the
“About” tab. See How to Find your Slack Team ID and Slack Channel ID for more
information.
• type: Specifies the output type. Valid values are: 'csv' or 'json'.
See SEND_DATA Procedure for more information.
Related Topics
• SEND_DATA Procedure
The SEND_DATA procedure sends the results of the specified query to a provider.
Tip:
After your app is approved by the IT Admin, you can provide the bot id and
secret key to other users to install the app within Teams in the org.
7. After the app is approved by the IT admin and the requested permissions are granted,
use the app's bot ID and secret key to create the credential object and generate a bot
token.
8. To send a query result to a Microsoft Teams channel, obtain the team id and the tenant
id.
Tip:
The team id is located in the team link between /team/ and /conversations. The
tenant id is located after "tenantId=" at the end of the team link. This link is
found by clicking the three dots next to the team name and selecting Get link
to team.
For example:
https://teams.microsoft.com/l/team/teamID/conversations?groupId=groupid&tenantId=tenantid
9. Obtain the channelID.
Tip:
channelID is located in the channel link between /channel/ and the channel name.
This link is found by clicking the three dots next to the channel name and
selecting Get link to channel.
For example:
https://teams.microsoft.com/l/channel/channelID/channel_name?
groupId=groupid&tenantId=tenantid
10. Create a credential object to access the Microsoft Teams app from Autonomous
Database.
Tip:
If you cannot run the CREATE_CREDENTIAL procedure successfully, ask the
ADMIN user to grant you execute access on the DBMS_CLOUD packages.
The credential's username is the bot_id and the password is the bot key.
For example:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'TEAMS_CRED',
    username        => 'bot_id',
    password        => 'bot_secret');
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_MESSAGE(
provider => 'msteams',
credential_name => 'TEAMS_CRED',
message => 'text from new teams api',
params => json_object('channel' value 'channelID'));
END;
/
Example:
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_DATA(provider => 'msteams',
credential_name => 'TEAMS_CRED',
query => 'SELECT tablespace_name FROM dba_tablespaces',
params => json_object('tenant'  value '5b743bc******c0286',
                      'team'    value '0ae401*********5d2bd',
                      'channel' value '19%3a94be023*****%40thread.tacv2',
                      'title'   value 'today',
                      'type'    value 'csv'));
END;
/
Use the params parameter to specify the tenant, team, channel, title, and data type in string
values.
• tenant: specifies the tenant ID obtained from Step 8 in Prepare to Send Microsoft Teams
Notifications from Autonomous Database.
• team: specifies the team ID obtained from Step 8 in Prepare to Send Microsoft Teams
Notifications from Autonomous Database.
• channel: specifies the channel ID obtained from Step 9 in Prepare to Send Microsoft
Teams Notifications from Autonomous Database.
• title: specifies the title of the file. The title can contain only letters, digits, underscores,
and hyphens. The file name that appears in Microsoft Teams is a concatenation of the
title parameter and a timestamp to ensure uniqueness. The maximum title size is 50
characters.
For example: 'title'_'timestamp'.'format'
• type: This specifies the output format. Valid values are CSV or JSON.
Note:
The maximum file size supported when using
DBMS_CLOUD_NOTIFICATION.SEND_DATA for Microsoft Teams is 4MB.
This example creates the JOB_STATUS table and INSERT_JOB_STATUS procedure to insert
the session-specific values into the table.
You must be logged in as the ADMIN user or have CREATE ANY PROCEDURE system
privilege to create a job notification handler procedure.
Note:
An ORA-27405 is returned when you specify an invalid owner or object name as
the job notification handler procedure.
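The table and procedure bodies are not reproduced above; a minimal sketch, assuming the
handler receives the notification as a JSON CLOB payload (the parameter name and type
are assumptions):
CREATE TABLE job_status (notification CLOB);

CREATE OR REPLACE PROCEDURE insert_job_status (message IN CLOB)
AS
BEGIN
  -- Record each scheduler notification payload
  INSERT INTO job_status (notification) VALUES (message);
  COMMIT;
END;
/

CREATE OR REPLACE PROCEDURE send_notification (message IN CLOB)
AS
BEGIN
  -- Delegate to the insert helper; registered below as the handler
  insert_job_status(message);
END;
/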
BEGIN
DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('job_notification_handler','ADMIN.SEND_NOTIFICATION');
END;
/
VALUE
---------------
"ADMIN"."SEND_NOTIFICATION"
You must grant the EXECUTE privilege to allow other users to use the job notification handler.
For example:
GRANT EXECUTE ON ADMIN.SEND_NOTIFICATION TO DWUSER;
1. Create a scheduler job. For example:
BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'DWUSER.MY_JOB',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN null; END;',
enabled => FALSE,
auto_drop => FALSE);
END;
/
2. Add email notifications for the job using DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION.
For example:
BEGIN
DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION(
job_name => 'DWUSER.MY_JOB',
recipients => 'PLACEHOLDER_STRING',
subject => 'Job Notification-%job_owner%.%job_name%-%event_type%',
body => '%event_type% occurred at %event_timestamp%.
%error_message%',
events => 'job_started, job_succeeded, job_completed');
END;
/
This procedure adds notifications for the DWUSER.MY_JOB job. The notifications are
sent whenever any of the specified job state events is raised.
See ADD_JOB_EMAIL_NOTIFICATION Procedure for more information.
3. Enable the scheduler job.
EXEC DBMS_SCHEDULER.ENABLE('DWUSER.MY_JOB');
{"job_owner":"DWUSER","job_name":"MY_JOB","job_class_name":"DEFAULT_JOB_CL
A
SS","event_type":"JOB_STARTED","event_timestamp":"12-JAN-23
08.13.46.193306
PM
UTC","error_code":0,"error_msg":null,"sender":null,"recipient":"data_lo
ad_pipeline","subject":"Job Notification-DWUSER.MY_JOB-
JOB_STARTED","msg_te
xt":"JOB_STARTED occurred at 12-JAN-23 08.13.46.193306 PM UTC.
","comments"
:"User defined job notification handler"}
{"job_owner":"DWUSER","job_name":"MY_JOB","job_class_name":"DEFAULT_JOB_CL
A
SS","event_type":"JOB_SUCCEEDED","event_timestamp":"12-JAN-23
08.13.46.2863
44 PM
UTC","error_code":0,"error_msg":null,"sender":null,"recipient":"data_
load_pipeline","subject":"Job Notification-DWUSER.MY_JOB-
JOB_SUCCEEDED","ms
g_text":"JOB_SUCCEEDED occurred at 12-JAN-23 08.13.46.286344 PM UTC.
","com
ments":"User defined job notification handler"}
EXEC DBMS_SCHEDULER.REMOVE_JOB_EMAIL_NOTIFICATION('DWUSER.MY_JOB');
BEGIN
DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE ('job_notification_handler','');
END;
/
https://github.com/oracle/oci-toolkit-eclipse/releases
Then follow the installation instructions and details about how to get started in this
step-by-step walkthrough:
New Eclipse Plugin for Accessing Autonomous Database (ATP/ADW)
Note:
Promotion of Always Free to paid is supported only if the Always Free instance
has database release 19c.
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_FEATURE(
feature_name => 'JAVAVM' );
END;
/
This initiates the request to install Oracle Java on the Autonomous Database instance.
See ENABLE_FEATURE Procedure for more information.
2. Restart the Autonomous Database instance.
See Restart Autonomous Database for more information.
After you restart the Autonomous Database instance, Oracle JVM is enabled on the instance.
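To verify, you can check the JAVAVM component status and the JDK version; a sketch,
assuming the standard DBA_REGISTRY view and DBMS_JAVA package:
SELECT status, version FROM DBA_REGISTRY WHERE comp_id = 'JAVAVM';

SELECT DBMS_JAVA.GET_JDK_VERSION FROM DUAL;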
STATUS VERSION
------ ----------
VALID 19.0.0.0.0
GET_JDK_VERSION
---------------
1.8.0_331
ERROR at line 1:
ORA-29548: Java system class reported: release of Java system classes in the
database (19.0.0.0.220118 1.8) does not match that of the oracle executable
(19.0.0.0.220419 1.8).
During the maintenance window, while the Java patching phase is active, Java session
calls do not respond or you see the ORA-29548 error. After the maintenance window
completes, Java usage is restored.
You can use the events NewMaintenanceSchedule and
ScheduledMaintenanceWarning to be notified of Oracle Java patching. See
Information Events on Autonomous Database for more information.
See About Scheduled Maintenance and Patching for more information.
Capture a Workload
The first step in using Database Replay is to capture the production workload.
When you begin workload capture on the production system, all requests from external
clients directed to Oracle Database are tracked and stored in binary files called capture files.
A workload capture results in the creation of two subdirectories, cap and capfiles, which
contain the capture files.
The capture files provide all pertinent information about the client request, including
transaction details, bind values, and SQL text.
These capture files are platform independent and can be transported to another system.
• Capture a Workload on an Autonomous Database Instance
Run DBMS_CLOUD_ADMIN.START_WORKLOAD_CAPTURE to initiate workload capture on your
Autonomous Database instance.
BEGIN
DBMS_CLOUD_ADMIN.START_WORKLOAD_CAPTURE(
capture_name => 'test',
duration => 60);
END;
/
Note:
You must subscribe to the Information event
com.oraclecloud.databaseservice.autonomous.database.information to
be notified at the start of START_WORKLOAD_CAPTURE. See Information Events
on Autonomous Database for more information.
Example:
BEGIN
DBMS_CLOUD_ADMIN.CANCEL_WORKLOAD_CAPTURE;
END;
/
This cancels the current workload capture and enables refresh on the refreshable
clone.
You can query the DBA_CAPTURE_REPLAY_STATUS view to check the cancel workload
status.
See DBA_CAPTURE_REPLAY_STATUS View for more information.
See CANCEL_WORKLOAD_CAPTURE Procedure for more information.
BEGIN
DBMS_CLOUD_ADMIN.FINISH_WORKLOAD_CAPTURE;
END;
/
To run this procedure you must be logged in as the ADMIN user or have the EXECUTE
privilege on DBMS_CLOUD_ADMIN.
You can query the ID, NAME, START_TIME, and END_TIME columns of the
DBA_WORKLOAD_CAPTURES view to retrieve the details of your workload capture. See
DBA_WORKLOAD_CAPTURES for more information.
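For example:
SELECT id, name, start_time, end_time
  FROM DBA_WORKLOAD_CAPTURES;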
The workload capture file is uploaded to the Object Store as a zip file.
Note:
You must subscribe to the Information event
com.oraclecloud.databaseservice.autonomous.database.information to be
notified about the completion of FINISH_WORKLOAD_CAPTURE as well as the Object
Storage link to download the capture file. This PAR URL is contained in the
captureDownloadURL field of the event and is valid for 7 days from the date of
generation. See Information Events on Autonomous Database for more information.
Note:
This step is not applicable when you are replaying the workload on a full clone.
– If the replay target is a refreshable clone, you should refresh it to the capture
start time and then disconnect it.
You can retrieve the capture start time by querying the START_TIME column
from the DBA_WORKLOAD_CAPTURES view on the Autonomous Database instance
where the workload was captured. See DBA_WORKLOAD_CAPTURES for
more information.
– Replay the workload capture.
Example to replay a workload on a refreshable clone:
BEGIN
DBMS_CLOUD_ADMIN.REPLAY_WORKLOAD(
capture_name => 'CAP_TEST1');
END;
/
This example downloads the capture files from the Object Storage, replays the
captured workload, and uploads a replay report after a replay.
The CAPTURE_NAME parameter specifies the name of the workload capture. This
parameter is mandatory.
Example to replay a workload on a full clone:
BEGIN
DBMS_CLOUD_ADMIN.REPLAY_WORKLOAD(
capture_name => 'CAP_TEST1',
capture_source_tenancy_ocid =>
'OCID1.TENANCY.REGION1..ID1',
capture_source_db_name => 'ADWFINANCE');
END;
/
Note:
If there are multiple captures with the same capture name, REPLAY_WORKLOAD
uses the latest capture. Oracle recommends that you use a unique capture
name for each capture to prevent confusion on which capture you are
replaying.
This example downloads the capture files from the Object Storage, replays the
captured workload, and uploads a replay report after a replay.
The CAPTURE_NAME parameter specifies the name of the workload capture. This
parameter is mandatory.
The CAPTURE_SOURCE_TENANCY_OCID parameter specifies the source tenancy OCID of
the workload capture. This parameter is mandatory when running the workload
capture in a full clone.
Note:
You must subscribe to the Information event
com.oraclecloud.databaseservice.autonomous.database.information to be
notified about the start and completion of the REPLAY_WORKLOAD, as well as the
Object Storage link to download the replay reports.
The PAR URL is contained in the replayDownloadURL field of the event and is valid
for 7 days from the date of generation. The PAR URL points to a zip file that
contains a replay report in HTML and an AWR report. See Information Events on
Autonomous Database for more information.
Capture a Workload
The first step in using Database Replay is to capture the production workload.
When you begin workload capture on the production system, all requests from external
clients directed to Oracle Database are tracked and stored in binary files called capture files.
A workload capture results in the creation of two subdirectories, cap and capfiles,
which contain the capture files.
The capture files provide all pertinent information about the client request, including
transaction details, bind values, and SQL text.
These capture files are platform independent and can be transported to another
system.
See Workload Capture to capture a workload on an on-premises database.
BEGIN
DBMS_CLOUD_ADMIN.REPLAY_WORKLOAD(
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
credential_name => 'CRED_TEST',
synchronization => TRUE,
process_capture => TRUE);
END;
/
This downloads the capture files contained in the Object Storage location specified in
the location_uri parameter, and replays the workload capture from the capture files.
The replay generates and uploads the replay and Automatic Workload Repository
reports to the Object Storage location specified in the location_uri parameter.
The credential_name parameter specifies the credential to access the Object Storage
bucket. The credential that you supply must have write privileges on the bucket; write
privileges are required to upload the replay report to the bucket.
If you do not specify a credential_name value, then DEFAULT_CREDENTIAL is used.
Note:
You must maintain the same logical state of the capture and replay databases at
the start of the capture time.
You can query the DBA_CAPTURE_REPLAY_STATUS view to check the workload replay status.
Note:
You must subscribe to the Information event
com.oraclecloud.databaseservice.autonomous.database.information to be
notified about the start and completion of the REPLAY_WORKLOAD as well as the
Object Storage link to download the replay reports. This PAR URL is contained in
the replayDownloadURL field of the event and is valid for 7 days from the date of
generation. See Information Events on Autonomous Database for more information.
DBA_CAPTURE_REPLAY_STATUS View
The DBA_CAPTURE_REPLAY_STATUS view provides the most recent status for your workload.
The values in the view are updated when you execute FINISH_WORKLOAD_CAPTURE,
CANCEL_WORKLOAD_CAPTURE or REPLAY_WORKLOAD.
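For example, a quick status check:
SELECT * FROM DBA_CAPTURE_REPLAY_STATUS;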
DEFINE repo_name='test_repo';
DEFINE cred_name='GITHUB_CRED';
VAR repo clob
BEGIN
  :repo := DBMS_CLOUD_REPO.INIT_REPO(
    params => JSON_OBJECT('provider'        value 'github',
                          'repo_name'       value '&repo_name',
                          'credential_name' value '&cred_name',
                          'owner'           value '<myuser>')
  );
END;
/
DEFINE repo_name='test_repo';
DEFINE cred_name='AZURE_REPO_CRED';
VAR repo clob
BEGIN
  :repo := DBMS_CLOUD_REPO.INIT_REPO(
    params => JSON_OBJECT('provider'        value 'azure',
                          'repo_name'       value '&repo_name',
                          'credential_name' value '&cred_name',
                          'organization'    value '<myorg>',
                          'project'         value '<myproject>')
  );
END;
/
DEFINE repo_name='test_repo';
DEFINE cred_name='AWS_REPO_CRED';
VAR repo clob
BEGIN
  :repo := DBMS_CLOUD_REPO.INIT_REPO(
    params => JSON_OBJECT('provider'        value 'aws',
                          'repo_name'       value '&repo_name',
                          'credential_name' value '&cred_name',
                          'region'          value 'us-east-1')
  );
END;
/
2. To update a repository:
3. To list repositories:
4. To delete a repository:
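For items 2, 3, and 4 above, the calls are not shown; a minimal sketch, assuming the
DBMS_CLOUD_REPO UPDATE_REPOSITORY, LIST_REPOSITORIES, and DELETE_REPOSITORY
subprograms (parameter names are assumptions) and a handle initialized with INIT_REPO:
DECLARE
  l_repo CLOB;
  l_list CLOB;
BEGIN
  l_repo := DBMS_CLOUD_REPO.INIT_REPO(
    params => JSON_OBJECT('provider' value 'github',
                          'repo_name' value 'test_repo',
                          'credential_name' value 'GITHUB_CRED',
                          'owner' value '<myuser>'));
  -- Rename the repository (parameter name is an assumption)
  DBMS_CLOUD_REPO.UPDATE_REPOSITORY(
    repo     => l_repo,
    new_name => 'test_repo_renamed');
  -- List repositories visible to the credential
  l_list := DBMS_CLOUD_REPO.LIST_REPOSITORIES(repo => l_repo);
  -- Delete the repository
  DBMS_CLOUD_REPO.DELETE_REPOSITORY(repo => l_repo);
END;
/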
BEGIN
DBMS_CLOUD_REPO.CREATE_BRANCH (
repo => l_repo,
branch_name => 'test_branch',
parent_branch => 'main'
);
END;
/
BEGIN
DBMS_CLOUD_REPO.DELETE_BRANCH (
repo => l_repo,
branch_name => 'test_branch'
);
END;
/
BEGIN
DBMS_CLOUD_REPO.MERGE_BRANCH (
repo => l_repo,
branch_name => 'test_branch',
target_branch_name => 'main'
);
END;
/
5. You can list commits in a branch in Cloud Code Repository based on the following;
• Repository
• Branch
• The file path of the branch
• SHA/commit_id
Note:
SHA is a 40-character checksum string composed of hexadecimal
characters (0–9 and a–f).
To list commits based on the repository, commit_id and file path of the branch:
To list commits based on the repository, and file path of the branch:
To list commits based on the repository, branch_name and file path of the branch:
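The calls for these variants are not shown above; a minimal sketch, assuming the
DBMS_CLOUD_REPO.LIST_COMMITS function accepts repo, branch_name, file_path, and
commit_id arguments (use commit_id in place of branch_name for the commit-based variant):
DECLARE
  l_repo    CLOB;
  l_commits CLOB;
BEGIN
  l_repo := DBMS_CLOUD_REPO.INIT_REPO(
    params => JSON_OBJECT('provider' value 'github',
                          'repo_name' value 'test_repo',
                          'credential_name' value 'GITHUB_CRED',
                          'owner' value '<myuser>'));
  -- Commits on a branch, restricted to one file path
  l_commits := DBMS_CLOUD_REPO.LIST_COMMITS(
    repo        => l_repo,
    branch_name => 'test_branch',
    file_path   => 'sub_dir/test1.sql');
END;
/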
BEGIN
  DBMS_CLOUD_REPO.EXPORT_SCHEMA(
    repo        => l_repo,
    schema_name => 'USER1',
    file_path   => 'myschema_ddl.sql',
    filter_list =>
      to_clob('[
        { "match_type":"equal",
          "type":"table"
        },
        { "match_type":"not_equal",
          "type":"view"
        },
        { "match_type":"in",
          "type":"table",
          "name": " ''EMPLOYEE_SALARY'',''EMPLOYEE_ADDRESS'' "
        },
        { "match_type":"equal",
          "type":"sequence",
          "name": "EMPLOYEE_RECORD_SEQ"
        },
        { "match_type":"like",
          "type":"table",
          "name": "%OFFICE%"
        }
      ]')
  );
END;
/
This example exports the metadata of the USER1 schema into the l_repo repository. The
export includes the metadata of the tables EMPLOYEE_SALARY and EMPLOYEE_ADDRESS, and any
table name that contains OFFICE. It also exports the EMPLOYEE_RECORD_SEQ sequence and
excludes the views in the schema.
See EXPORT_SCHEMA Procedure for more information.
2. To create a file:
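A minimal sketch, assuming the repository handle in :repo and that PUT_FILE takes the
file contents as a BLOB (the file path and contents are hypothetical):
BEGIN
  DBMS_CLOUD_REPO.PUT_FILE(
    repo      => :repo,
    file_path => 'test1.sql',  -- hypothetical file path
    contents  => TO_BLOB(UTL_RAW.cast_to_raw('CREATE TABLE t1 (x VARCHAR2(30))'))
  );
END;
/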
3. To update a file:
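A sketch, assuming that calling PUT_FILE again for an existing path overwrites the file:
BEGIN
  DBMS_CLOUD_REPO.PUT_FILE(
    repo      => :repo,
    file_path => 'test1.sql',
    contents  => TO_BLOB(UTL_RAW.cast_to_raw('CREATE TABLE t1 (x VARCHAR2(60))'))
  );
END;
/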
4. To list files:
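A sketch, assuming LIST_FILES returns the file list as a CLOB:
SELECT DBMS_CLOUD_REPO.LIST_FILES(repo => :repo) files
FROM dual;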
5. To delete a file:
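A sketch, deleting the file created above:
BEGIN
  DBMS_CLOUD_REPO.DELETE_FILE(
    repo      => :repo,
    file_path => 'test1.sql'
  );
END;
/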
Note:
If you are using a Resource Principal for authentication, you must configure the
necessary policies for OCI Functions access. See Details for Functions for more
information.
2. Create a catalog.
A catalog is a collection of wrapper functions that reference and call their respective
cloud functions via their API endpoints.
The following example creates a catalog for Oracle Cloud Infrastructure Functions:
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
credential_name => 'OCI_CRED',
catalog_name => 'OCI_DEMO_CATALOG',
service_provider => 'OCI',
cloud_params => '{"region_id":"phx",
"compartment_id":"compartment_id"}'
);
END;
/
To list the functions in the catalog:
VAR function_list clob

BEGIN
  DBMS_CLOUD_FUNCTION.LIST_FUNCTIONS (
    credential_name => 'OCI_CRED',
    catalog_name    => 'OCI_DEMO_CATALOG',
    function_list   => :function_list
  );
END;
/
SEARCH_RESULTS
--------------------------------------------------------------------------------
    "invokeEndpoint" : "https://dw.us.func.oci.oraclecloud_example.com"
  }
]
BEGIN
DBMS_CLOUD_FUNCTION.SYNC_FUNCTIONS (
catalog_name => 'OCI_DEMO_CATALOG'
);
END;
/
PL/SQL procedure successfully completed.
This creates PL/SQL wrappers for any new functions added to the catalog and removes
wrappers for functions that have been deleted from the catalog.
Run the following query to verify the sync.
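The query itself is not included in this excerpt. One way to list the wrappers created
in the current schema (a sketch; the exact query may differ):
SELECT object_name
FROM   user_objects
WHERE  object_type IN ('FUNCTION', 'PROCEDURE')
ORDER  BY object_name;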
OBJECT_NAME
--------------------------------------------------------------------------------
CREATE_PAR
FINTECH
JWT_CODEC
OCI-OBJECTSTORAGE-CREATE-PAR-PYTHON
RUN_DBT
Note:
Make a note of the current user; you need it in order to run this command.
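The statement that produced the following output is not shown in this excerpt. A
minimal sketch of a return type consistent with the invocation below (the attribute
names status and output are taken from the example; the sizes are hypothetical):
CREATE OR REPLACE TYPE fintech_rt AS OBJECT (
  status VARCHAR2(1000),  -- hypothetical size
  output VARCHAR2(4000)   -- hypothetical size
);
/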
Type created.
DESC fintech_fun
COLUMN STATUS format a30
COLUMN OUTPUT format a30
DECLARE
l_comp fintech_rt;
BEGIN
l_comp := fintech_fun(command=>'tokenize',value => 'PHI_INFORMATION');
DBMS_OUTPUT.put_line ('Status of the function = '|| l_comp.status);
DBMS_OUTPUT.put_line ('Response of the function = '|| l_comp.output);
END;
/
This invokes the fintech_fun cloud function by calling the function reference
ocid1.funfn.oci.phx.aaaaaa_example in the OCI_DEMO_CATALOG catalog.
6. You can drop an existing function using the DROP_FUNCTION procedure. For example:
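A sketch, assuming the wrapper created earlier is named fintech_fun:
BEGIN
  DBMS_CLOUD_FUNCTION.DROP_FUNCTION (
    catalog_name  => 'OCI_DEMO_CATALOG',
    function_name => 'fintech_fun'
  );
END;
/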
7. You can drop an existing catalog using the DROP_CATALOG procedure. For example:
BEGIN
DBMS_CLOUD_FUNCTION.DROP_CATALOG (
catalog_name => 'OCI_DEMO_CATALOG'
);
END;
/
Note:
To access AWS Lambda functions, you need to configure the necessary
policies. See Creating an IAM policy to access AWS Lambda resources and
Using resource-based policies for Lambda for more information.
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
credential_name => 'AWS_CRED',
catalog_name => 'AWS_DEMO_CATALOG',
service_provider => 'AWS',
cloud_params => '{"region_id":"ap-northeast-1"}'
);
END;
/
To list the functions in the catalog:
VAR function_list clob

BEGIN
  DBMS_CLOUD_FUNCTION.LIST_FUNCTIONS (
    credential_name => 'AWS_CRED',
    catalog_name    => 'AWS_DEMO_CATALOG',
    function_list   => :function_list
  );
END;
/
SEARCH_RESULTS
--------------------------------------------------------------------------------
    "invokeEndpoint" : "https://swiy3.lambda-url.ap-north-1.on.aws_example/"
  },
  {
    "functionName" : "testlambda_example",
    "functionArn" : "arn:aws:lambda:ap-north-1:378:func:testlambda_example",
    "invokeEndpoint" : "https://swiy3.lambda-url.ap-north-1.on.aws_example/"
  },

SEARCH_RESULTS
--------------------------------------------------------------------------------
  {
    "functionName" : "hellp-python_example",
    "functionArn" : "arn:aws:lambda:ap-north-1:378:func:hellp-python_example",
    "invokeEndpoint" : "https://swiy3.lambda-url.ap-north-1.on.aws_example/"
  },
  {
    "functionName" : "testlam_example",
    "functionArn" : "arn:aws:lambda:ap-north-1:378:func:testlam_example",

SEARCH_RESULTS
--------------------------------------------------------------------------------
    "invokeEndpoint" : "https://swiy3.lambda-url.ap-north-1.on.aws_example/"
  }
]
BEGIN
DBMS_CLOUD_FUNCTION.SYNC_FUNCTIONS (
catalog_name => 'AWS_DEMO_CATALOG'
);
END;
/
PL/SQL procedure successfully completed.
This creates PL/SQL wrappers for any new functions added to the catalog and removes
wrappers for functions that have been deleted from the catalog.
Run the following query to verify the sync.
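As with the OCI catalog, the query is not included in this excerpt; a sketch listing
the wrappers in the current schema:
SELECT object_name
FROM   user_objects
WHERE  object_type IN ('FUNCTION', 'PROCEDURE')
ORDER  BY object_name;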
OBJECT_NAME
--------------------------------------------------------------------------------
TESTLAMBDA
HELLP-PYTHON
TESTLAM
TEST3
SUMOFNUMBERS
Note:
Make a note of the current user; you need it in order to run this command.
BEGIN
  DBMS_CLOUD_FUNCTION.CREATE_FUNCTION (
    credential_name => 'AWS_CRED',
    catalog_name    => 'AWS_DEMO_CATALOG',
    function_name   => 'aws_testlambda',
    function_id     => 'arn:aws:lambda:ap-northeast-1:378079562280:function:hellp-python'
  );
END;
/
PL/SQL procedure successfully completed.
5. After the function is created, you can DESCRIBE and invoke it.
DESC AWS_TESTLAMBDA
COLUMN STATUS format a30
COLUMN OUTPUT format a30
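The statement that produced the output below is not shown in this excerpt;
presumably the wrapper was invoked with a query such as the following (a sketch):
SELECT AWS_TESTLAMBDA(NULL) FROM dual;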
AWS_TESTLAMBDA(NULL)
----------------------------------------------------
{"STATUS":"200","RESPONSE_BODY":"Hello Python!!"}
BEGIN
DBMS_CLOUD_FUNCTION.DROP_CATALOG (
catalog_name => 'AWS_DEMO_CATALOG'
);
END;
/
1. To access Azure Functions, you need to use an Azure Service Principal with Autonomous
Database. You must grant the Website Contributor role to the Azure Service Principal
for the Azure function app under its Access control (IAM).
See the following for more information:
• Use Azure Service Principal to Access Azure Resources
• Azure built-in roles
• Assign Azure roles using the Azure portal
2. Create a catalog.
A catalog is a collection of wrapper functions that reference and call their respective
cloud functions via their API endpoints.
The following example creates a catalog for Azure Functions:
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
credential_name => 'AZURE$PA',
catalog_name => 'AZURE_DEMO_CATALOG',
service_provider => 'AZURE',
cloud_params =>
'{"subscription_id":"XXXXXXXXXXXXXXXXXXXXXXXX_example"}'
);
END;
/
To list the functions in the catalog:
VAR function_list clob

BEGIN
  DBMS_CLOUD_FUNCTION.LIST_FUNCTIONS (
    credential_name => 'AZURE$PA',
    catalog_name    => 'AZURE_DEMO_CATALOG',
    function_list   => :function_list
  );
END;
/
BEGIN
DBMS_CLOUD_FUNCTION.SYNC_FUNCTIONS (
catalog_name => 'AZURE_DEMO_CATALOG'
);
END;
/
This creates PL/SQL wrappers for any new functions added to the catalog and
removes wrappers for functions that have been deleted from the catalog.
Run the following query to verify the sync.
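Again, the query is not included in this excerpt; a sketch listing the wrappers in
the current schema:
SELECT object_name
FROM   user_objects
WHERE  object_type IN ('FUNCTION', 'PROCEDURE')
ORDER  BY object_name;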
Note:
Make a note of the current user; you need it in order to run this command.
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_FUNCTION (
credential_name => 'AZURE$PA',
catalog_name => 'AZURE_DEMO_CATALOG',
function_name => 'azure_testfunc',
function_id => 'function_id_path',
input_args => :function_args
);
END;
/
Note:
The maximum length of the function name is limited to 100 characters.
DESC AZURE_TESTFUNC
BEGIN
DBMS_CLOUD_FUNCTION.DROP_CATALOG (
catalog_name => 'AZURE_DEMO_CATALOG'
);
END;
/
External procedures are only supported when your Autonomous Database is on a private
endpoint. The EXTPROC agent instance is hosted on a private subnet, and the Autonomous
Database accesses the EXTPROC agent through a Reverse Connection Endpoint (RCE).
Note:
Autonomous Database only supports C language external procedures.
When you define the C procedure, use one of the following prototypes:
• Kernighan and Ritchie style prototypes. For example:
void UpdateSalary(x)
float x;
...
• ISO/ANSI prototypes other than numeric data types that are less than full width
(such as float, short, char). For example:
void UpdateSalary(double x)
...
• Other data types that do not change size under default argument promotions.
This example changes size under default argument promotions:
void UpdateSalary(float x)
...
This creates the shared object library extproc.so. The UpdateSalary procedure,
defined in the previous step, is contained in the extproc.so library. Shared object
(.so) libraries are dynamically loaded at run time.
This takes you to the Oracle Autonomous Database EXTPROC Agent details page.
1. On the Oracle Autonomous Database EXTPROC Agent page, under Type Stack, perform
the following:
• From the Version drop-down list, select the package version of the stack. By default,
the menu displays the latest version.
• From the Compartment drop-down list, select the name of the compartment where
you want to launch the instance.
Note:
If you don't have permission to launch the instance in the selected
compartment, the instance is launched in the root compartment.
• Select the I have reviewed and accept the Oracle Standard Terms and
Restrictions checkbox.
2. Click Launch Stack.
This takes you to the Create stack page, where you can create a stack for the
EXTPROC agent.
2. Click Next.
This takes you to the Configure variables page which enables you to configure
variables for the infrastructure resources that the stack creates when you run the apply
job for this execution plan.
3. On the Configure variables page enter the information in the areas: Configure the
EXTPROC Agent, Network Configuration, and Compute configuration.
a. Provide information in the Configure the EXTPROC Agent area.
• External Libraries: Provide a comma-separated list of the libraries that you want
to allow to be invoked from your Autonomous Database. For example: extproc.so,
extproc1.so.
After you create the stack, you must copy the libraries to the /u01/app/oracle/
extproc_libs directory on the EXTPROC agent VM.
• Wallet Password: Provide the wallet password.
The wallet and a self-signed certificate are generated for mutual TLS
authentication between the Autonomous Database and the EXTPROC agent VM.
The wallet is created in the /u01/app/oracle/extproc_wallet directory.
Note:
After the wallet is created, the wallet password cannot be
changed.
– Use Existing VCN and Subnet: Select this option to create the EXTPROC
agent using an existing VCN. This creates the EXTPROC agent instance in
the provided subnet.
When you select this option, provide the following information for the existing
VCN and Subnet:
* Under Virtual Cloud Network:
From the Existing VCN drop-down list choose an existing VCN. If the
specified VCN does not exist, a new VCN is created.
* Under EXTPROC Subnet:
From the Existing Subnet drop-down list choose an existing subnet.
When you choose to use an existing VCN and subnet, add an ingress
rule for the EXTPROC agent instance's port 16000. You must also add an
egress rule on the public subnet.
See Configuring Network Access with Private Endpoints for more
information.
• EXTPROC Agent Access type: Choose one of the following options from
the drop-down list.
– Secure access from specific ADB-S Private Endpoint databases
in your VCN: Choose this option to allow only specified private
endpoint IPs inside your Virtual Cloud Network (VCN) to connect to
your EXTPROC agent.
When this option is chosen, you provide a list of allowed private
endpoint IP Addresses in the next step.
– Secure access from all ADB-S Private Endpoint databases in
your VCN: Choose this option to allow any private endpoint inside
your Virtual Cloud Network (VCN) to connect to your EXTPROC agent.
• Private Endpoint IP Addresses
Provide a list of private endpoint IP addresses separated by comma (,) for
the Private Endpoint IP Addresses variable. For example, 10.0.0.0,
10.0.0.1.
Note:
This field only shows when you select Secure access from
specific ADB-S Private Endpoint databases in your VCN for
the EXTPROC Agent Access type.
4. Click Next.
This takes you to the Review page.
5. On the Review page perform the following steps:
a. Verify the configuration variables.
b. Select the Run apply checkbox under Run apply on the created stack?
c. Click Create.
Note:
This area does not show variables that have default values or variables
that you have not changed.
Resource Manager runs the apply job to create stack resources accordingly. This
takes you to the Job details page and the job state is Accepted. When the apply
job starts the status is updated to In Progress.
Note:
The information you need to connect to the instance created as part of the
stack is available in the Application Information tab.
To execute remote procedures at the EXTPROC agent instance, the Autonomous Database and
the EXTPROC agent connect using Mutual Transport Layer Security (mTLS). With mTLS,
clients connect through a TCPS (Secure TCP) database connection using standard TLS 1.2
with a trusted client certificate authority (CA) certificate. See About Mutual TLS
(mTLS) Authentication for more information.
Note:
You can also obtain and use a public certificate issued by a Certificate Authority
(CA).
As a prerequisite, you must export the wallet to Object Storage from the /u01/app/oracle/
extproc_wallet directory on the VM where EXTPROC runs.
• Do not rename the wallet file. The wallet file in Object Storage must be named
cwallet.sso.
2. Create credentials to access your Object Storage where you store the wallet file
cwallet.sso. See CREATE_CREDENTIAL Procedure for information about the
username and password parameters for different object storage services.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not
required if you enable resource principal credentials. See About Using Resource
Principal to Access Oracle Cloud Infrastructure Resources for more information.
3. Create a directory on Autonomous Database for the wallet file cwallet.sso.
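The directory-creation statement is not included in this excerpt; a minimal sketch,
where the directory name WALLET_DIR matches the GET_OBJECT call below and the path is
hypothetical:
CREATE DIRECTORY wallet_dir AS 'wallet_dir';
Then download the wallet file into the directory: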
BEGIN
  DBMS_CLOUD.GET_OBJECT (
    credential_name => 'DEF_CRED_NAME',
    object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/cwallet.sso',
    directory_name  => 'WALLET_DIR'
  );
END;
/
Follow these steps to create a library in your Autonomous Database and register C
routines as an external procedure in the library:
1. Create a library.
An external procedure is a C language routine stored in a library. To invoke
external procedures with Autonomous Database, you create a library.
BEGIN
  DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
    library_name               => 'demolib',
    library_listener_url       => 'remote_extproc_hostname:16000',
    library_wallet_dir_name    => 'wallet_dir',
    library_ssl_server_cert_dn => 'CN=VM Hostname',
    library_remote_path        => '/u01/app/oracle/extproc_libs/library name'
  );
END;
/
This creates the demolib library in your Autonomous Database and registers the dynamic
link library in your database. The EXTPROC agent instance is pre-configured to host
external procedures on port 16000.
See CREATE_CATALOG Procedure for more information.
Query the DBA_CLOUD_FUNCTION_CATALOG and USER_CLOUD_FUNCTION_CATALOG views to
retrieve the list of all the catalogs and libraries in your database.
Query the USER_CLOUD_FUNCTION_ERRORS view to list any errors generated during the
connection validation to the remote library location.
2. After you create the library, use DBMS_CLOUD_FUNCTION.CREATE_FUNCTION to create
PL/SQL wrapper functions that refer to the external procedures (C functions).
Example:
DECLARE
  plsql_params clob := TO_CLOB('{"sal": "IN, FLOAT", "comm" :"IN, FLOAT"}');
  external_params clob := TO_CLOB('sal FLOAT, sal INDICATOR SHORT, comm FLOAT, comm INDICATOR SHORT, RETURN INDICATOR SHORT, RETURN FLOAT');
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_FUNCTION (
LIBRARY_NAME => 'demolib',
FUNCTION_NAME => '"PercentComm"',
PLSQL_PARAMS => plsql_params,
EXTERNAL_PARAMS => external_params,
API_TYPE => 'FUNCTION',
WITH_CONTEXT => FALSE,
RETURN_TYPE => 'FLOAT'
);
END;
/
This creates the PercentComm function and registers the PercentComm external procedure
in the DEMOLIB library.
The PercentComm function in the library is a reference to the respective external
procedure whose name is referenced by the FUNCTION_ID parameter.
In this example, the FUNCTION_ID parameter is not provided, so the value provided for
the FUNCTION_NAME parameter is used as the FUNCTION_ID.
Example:
DECLARE
  plsql_params clob := TO_CLOB('{"row_id": "IN,CHAR"}');
  external_params clob := TO_CLOB('CONTEXT, row_id STRING, row_id LENGTH SB4');
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_FUNCTION (
LIBRARY_NAME => 'demolib',
FUNCTION_NAME => 'UpdateSalary_local',
FUNCTION_ID => '"UpdateSalary"',
PLSQL_PARAMS => plsql_params,
EXTERNAL_PARAMS => external_params,
API_TYPE => 'PROCEDURE',
WITH_CONTEXT => TRUE
);
END;
/
DESC "PercentComm";
4. You can drop an existing function using the DROP_FUNCTION procedure. For example:
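The DROP_FUNCTION example is not included in this excerpt. A sketch, assuming
DROP_FUNCTION accepts the same LIBRARY_NAME parameter as CREATE_FUNCTION:
BEGIN
  DBMS_CLOUD_FUNCTION.DROP_FUNCTION (
    LIBRARY_NAME  => 'demolib',
    FUNCTION_NAME => '"PercentComm"'
  );
END;
/
5. You can drop an existing library using the DROP_CATALOG procedure. For example: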
BEGIN
DBMS_CLOUD_FUNCTION.DROP_CATALOG (
LIBRARY_NAME => 'demolib'
);
END;
/
Analyze
Provides information on the built-in tools that allow you to explore your data with reports,
analytics, and visualization.
• Build Reports and Dashboards Using Oracle Analytics
You can use Oracle Analytics Cloud and Oracle Analytics Desktop with Autonomous
Database.
• Creating Dashboards, Reports, and Notebooks with Autonomous Database
Autonomous Database includes support for building dashboards, reports, and notebooks
for data analysis using Oracle Analytics Desktop and Oracle Machine Learning
Notebooks.
• Use Oracle Spatial with Autonomous Database
Oracle Spatial Database is included in Autonomous Database, allowing developers and
analysts to get started easily with location intelligence analytics and mapping services.
• The Data Analysis Tool
The Data Analysis tool enables you to create Analytic Views with multidimensional
metadata.
• The Catalog Page
The Catalog page displays information about entities in the Oracle Autonomous
Database.
• The Data Insights Page
The Data Insights page displays information about patterns and anomalies in the data of
entities in your Oracle Autonomous Database.
• Use Oracle OLAP with Autonomous Database
Oracle OLAP is available on Autonomous Database. Oracle OLAP supports uses such
as planning, forecasting, budgeting, and analysis.
Oracle Analytics Desktop makes it easy to visualize your Autonomous Database data so you
can focus on exploring interesting data patterns. Just connect to Autonomous Database,
select the elements that you’re interested in, and let Oracle Analytics Desktop find the best
way to visualize it. Choose from a variety of visualizations to look at data in a specific way.
Oracle Analytics Desktop also gives you a preview of the self-service
visualization capabilities included in Oracle Analytics Cloud, Oracle's industrial-strength cloud
analytics platform. Oracle Analytics Cloud extends the data exploration and visualization
experience by offering secure sharing and collaboration across the enterprise, additional data
sources, greater scale, and a full mobile experience including proactive self-learning analytics
delivered to your device. Try Oracle Analytics Desktop for personal analytics and to sample a
taste of Oracle’s broader analytics portfolio.
Oracle Analytics Desktop’s benefits include:
• A personal, single-user desktop application.
• Offline availability.
• Completely private analysis.
• Full control of data source connections.
• Lightweight single-file download.
• No remote server infrastructure.
• No administration tasks.
See the User’s Guide for Oracle Analytics Desktop for information on connecting Oracle
Analytics Desktop to Autonomous Database.
To get started with Oracle Analytics Desktop use the following link to visit the software
download page which includes more information about system requirements and provides
instructions for installing Oracle Analytics Desktop on different platforms:
Oracle Analytics Desktop Download Details
Work with Oracle Machine Learning User Interface for Data Access, Analysis, and
Discovery
You can use Oracle Machine Learning Notebooks to access data and for data
discovery, analytics, and notebooks.
To access Oracle Machine Learning Notebooks you can use the Oracle Cloud
Infrastructure Console or Database Actions.
To access Oracle Machine Learning Notebooks from the Oracle Cloud Infrastructure
Console:
1. From the Display Name column, select an Autonomous Database.
Oracle Machine Learning Notebooks allows you to access your data in your database and
build notebooks with the following:
• Data Ingestion and Selection
• Data Viewing and Discovery
• Data Graphing, Visualization, and Collaboration
• Data Analysis
You can also create and run SQL statements and create and run SQL scripts that access
your data in your database.
Note:
SDO_GCDR.ELOC_GEOCODE and SDO_GCDR.ELOC_GEOCODE_AS_GEOM functions
are only available on Autonomous Database.
Note:
If you do not see the Data Analysis card then your database user is missing the
required DWROLE role.
The Data Analysis home page consists of three parts: Analyses, Analytic Views and Tables.
Analyses
The top section of the homepage comprises a list of Analyses. Use the search field
to search for Analyses you create.
Analyses are built on one or more Analytic Views. The Analyses card displays the name
of the analysis. Click Actions (three vertical dots) to open the context menu.
The actions available are:
• View: Opens the Analysis View page in a new window where you can view the
analysis.
• Edit: Opens the selected Analysis page where you can edit the reports present in
the analysis.
• Rename: Allows you to rename the Analysis. Click Save to apply the new name.
• Delete: Opens the delete Analysis dialog where you can delete the analysis.
Analytic Views
The bottom section of the homepage displays list of existing Analytic Views. Each Analytic
View card displays the name of the Analytic View. The Actions icon enables you to manage
the Analytic View. Click Actions (three vertical dots) to open the context menu. The actions
available are:
• Analyze: Opens the Analytic View browser and the Analysis View page in a new window
where you can view the Analysis.
• Data Quality: Opens the Data Quality page where the tool validates the selected Analytic
View for errors and lists them out.
• Export: Allows you to export the Analytic View to Tableau and PowerBI.
• Edit Analytic View: Opens the Edit Analytic View dialog box where you can edit the
properties of the selected Analytic View.
• Compile Analytic View: This option compiles the Analytic View and returns compilation
errors if there are any.
• Show DDL: Displays the DDL statements for the Analytic View.
• Delete Analytic View: Deletes the selected Analytic View.
The +Create button enables you to create Analysis and create Analytic View from the
home page.
You can select both hierarchies and measures from Analytic Views. Hierarchies are DB
objects that allow users to define relationships between various levels or generations of
dimension members. As the name implies, hierarchies organize data using hierarchical
relationships. With this tool you can analyze and visualize data in different Points of View
(POV). You can export the metadata and visualize it with sophisticated tools like Oracle
Analytics Cloud (OAC) and Tableau.
Advantages of the Data Analysis tool
With the Data Analysis tool you can:
• Visualize, analyze and inspect your data clearly and efficiently with pivot tables
• Calculate total number of errors present in the Analytic View you create and provide
solutions to minimize the errors
• Automatically display meaningful insights to help you make better decisions
• Analyze your data across dimensions with support for hierarchical aggregation and drill-
down
• Share your Analytic Views with the tool of your choice over various options of raw
data consumption to draw meaningful insights and make them accessible to any
user
By identifying relationships among tables and columns, Analytic Views enable your
system to optimize queries. They also open new avenues for analyzing data. These
avenues include data insights, improved hierarchy navigation, and the addition of
hierarchy-aware calculations.
This tool runs complex and hierarchical SQL queries along with SQL extensions in the
background, which simplifies real-time calculations. It makes complex data more
accessible and easier to understand.
Note:
Read these topics for detailed descriptions on Analyses and Analytic Views:
• Searching and obtaining information about Analytic Views
• Creating Analytic Views
• Working with Analyses
• Viewing Analyses
• Workflow to build analyses
• Creating Analyses
– Creating Reports
– Saving Analyses
• Searching and obtaining information about Analytic Views
When you first open the Data Analysis page, it displays the list of schemas and
Analytic Views. With Select Schema, you can select a preferred Schema from a
list of schemas available in the drop-down.
• Creating Analytic Views
You can create Analytic Views and view information about them. You can also edit
and perform other actions on them.
• Working with Analyses
Analyses are a collection of multiple reports on a single page, which provides
quick access to multiple data analyses collected from different Analytic Views.
• Viewing Analyses
Analyses provide you an insight into the performance of your data.
• Workflow to build Analyses
Here is the workflow to build an analysis.
• Creating Analyses
Use the Data Analysis tool to create and edit your Analyses. The analysis provides you
a customized view of Analytic View data. An analysis consists of one or more reports
that display the results of the analysis.
• Creating Reports
A single report you generate analyzes an AV based on the Levels and measures you
select.
• Using Calculation Templates
The Data Analysis tool provides templates for all of the calculations typically in demand
for business intelligence applications.
• Oracle Autonomous Database Add-in for Excel
The Oracle Autonomous Database Add-in for Excel integrates Microsoft Excel
spreadsheets with the Autonomous Database to retrieve and analyze data from Analytic
Views in the database. You can run custom SQL queries and view their results in the
worksheet.
• Oracle Autonomous Database add-on for Google Sheets
The Oracle Autonomous Database add-on for Google Sheets enables you to query
tables using SQL or Analytic Views using a wizard directly from Google Sheets for
analysis.
When you create an Analytic View, you identify a fact table that contains the data to
inspect. The Generate Hierarchies and Measures button looks at the contents of that
table, identifies any hierarchies in the fact table, and searches for other tables that
may contain related hierarchies.
While creating an Analytic View, you can enable or disable the following advanced
options:
• Autonomous Aggregate Cache, which uses the dimensional metadata of the
Analytic View to manage a cache and that improves query response time.
• Analytic View Transparency Views, which presents Analytic Views as regular
database views and enables you to use your analytic tools of choice while gaining
the benefits of Analytic Views.
• Analytic View Base Table Query Transformation, which enables you to use your
existing reports and tools without requiring changes to them.
The Search for Dimension Tables check box when selected, enables you to search
for dimension tables while generating hierarchies and measures.
After the hierarchies and measures are generated, they are displayed under their
respective tabs. Review the hierarchies and measures created for you.
Specify the Name, Fact Table and select Advanced Options in the General tab of
Create Analytic View pane. Click Create to generate an Analytic View.
You can view all the fact tables and views associated with the analytic view.
In the filter field, you can either manually look for the source or start typing to search for the
fact table or views from the list of available fact tables and views. After typing the full name of
the source, the tool automatically matches the fact table or view.
Select Generate and Add hierarchy from Source to generate analysis and hierarchies
associated with the source data you select.
Select Find and Add Joins to link all the data sources with the fact table. You can add
multiple join entries for a single hierarchy.
Click OK to select the source.
The Generating Hierarchies and Measures dialog box displays the progress of analyzing the
dimension tables and creating the hierarchies. When the process completes, click Close.
Note:
When you add a hierarchy from the data source, you see the new hierarchy
in the list of hierarchies in the Hierarchies tab. You can navigate between the
Data Sources tab, the Hierarchies tab, the Measures tab, the Calculations
tab. You can add a hierarchy from a source that is not connected by
navigating back to the Data Sources tab.
Select Remove Hierarchy Source to remove the hierarchies you create from the data
sources. You cannot remove hierarchies generated from the fact table you select from
this option.
Expand Joins to view the Hierarchy Source, Hierarchy Column, and the Fact
column mapped with the Analytic View. Joins is visible only when the hierarchy
table differs from the fact table. You can add multiple join entries for a single hierarchy.
Expand Sources to view the fact table associated with the Analytic View. The data
model expands to include the data from the source that you added.
Pointing to an item displays the name, application, type, path and the schema of the
table. Click the Actions (three vertical dots) icon at the right of the item to display a
menu to expand or collapse the view of the table.
An expanded item displays the columns of the table. Pointing to a column displays the
name, application, type, path, and schema of the column.
The lines that connect the dimension tables to the fact table indicate the join paths between
them. Pointing to a line displays information about the join paths of the links between the
tables. If the line connects a table that is collapsed, then the line is dotted. If the line connects
two expanded tables, then the line is solid and connects the column in the dimension table to
the column in the fact table.
To remove a hierarchy, select the hierarchy you want to remove from the list and
click Remove Hierarchy.
Select Move Up or Move Down to position the order of the Hierarchy in the resulting
view.
Click Switch Hierarchy to Measure to change the hierarchy you select to a measure
in the Measures list.
You can also Add Hierarchy and Add Hierarchy From Table by right-clicking the
Hierarchy tab.
If you click on a hierarchy name, a dialog box displays the Hierarchy Name and
Source.
To change the source, select a different source from the drop-down list.
Select Add Level to add a level to the hierarchy. Click Remove Level to remove the
selected level from the hierarchy.
To view the data in the fact table and statistics about the data, click the Preview Data button.
In the Preview and Statistics pane, the Preview tab shows the columns of the table and the
data in the columns. The Statistics tab shows the size of the table and the number of rows
and columns.
If you click a particular level in the Hierarchy tab, a dialog box displays its respective Level
Name, Level Key, Alternate Level Key, Member Name, Member Caption, Member
Description, Source, and Sort By drop-down. To change any of the field values, enter the
value in the appropriate field.
Note:
You can enter multiple level keys, and specify the Member Name, Member Caption,
Member Description, and Sort By values.
Member Captions and Member Descriptions generally represent detailed labels for objects.
These are typically end-user-friendly names. For example, you can caption a hierarchy
representing geography areas named GEOGRAPHY_HIERARCHY as "Geography" and
specify its description as "Geographic areas such as cities, states, and countries."
To see the measures for the Analytic View, click Measures tab. To immediately create the
Analytic View, click Create. To cancel the creation, click Cancel.
To alternatively add a measure from the data source, right-click the Measures tab.
This pops up a list of columns that can be used as measures. Select one measure
from the list.
You can exclude a column from the measures by right-clicking the Measures tab and
selecting Remove Measure.
Click Switch Measure to Hierarchy to change the measure you select to hierarchy in
the Hierarchies list.
You can specify a measure as the default measure for the analytic view; otherwise, the first
measure in the definition is the default. Select the Default Measure from the drop-down.
To add a measure, right-click the Measures tab and select Add Measure. To remove a
measure, select the particular measure you want to remove, right-click on it and select
Remove Measure.
You can select a different column for a measure from the Column drop-down list. You can
select a different operator from the Expression drop-down list.
In creating an analytic view, you must specify one or more hierarchies and a fact table that
has at least one measure column and a column to join to each of the dimension tables
outside of the fact table.
Note:
You can create the measures without increasing the size of the database since the
calculated measures do not store the data. However, they may slow performance.
You need to decide which measures to calculate on demand.
The Data Analysis tool provides easy-to-use templates for creating calculated measures.
Once you create a calculated measure, it appears in the list of measures of the Analytic
View. You can create a calculated measure at any time, and it is then available for
querying in SQL.
Click Add Calculated Measure to add calculations to the measures. You can view the
new calculation with system generated name in the Calculations tab.
Click the newly created calculated measure.
In the Measure Name field, enter the name of the calculated measure.
You can select preferred category of calculation from a list of options such as Prior and
Future Period, Cumulative Aggregates, Period To Date, Parallel Period, Moving
Aggregates, Share, Qualified Data Reference, and Ranking using the Calculation
Category drop-down.
Your choice of category of calculation dynamically changes the Calculation Template.
For more details on how to use Calculation templates, see Using Calculation
Templates.
Select the Measure and Hierarchy on which you want to base the calculated
measures.
Select Offset value by clicking the up or the down arrow. The number specifies the
number of members to move either forward or backward from the current member.
The ordering of members within a level is dependent on the definition of the attribute
dimension used by the hierarchy. The default value is 0 which represents POSITION
FROM BEGINNING.
The Expression field lists the expressions which the calculated measure uses.
On the creation of the Analytic view, the calculated measure appears in the navigation
tree in the Calculated Measures folder.
Click Create. A confirmation dialog box appears that asks for your confirmation. Select
Yes to proceed with the creation of Analytic View.
After creating the Analytic View, you will view a success message informing you of its
creation.
On editing the Analytic View you create, you can view the calculated measure in the
navigation tree in the Calculations folder.
Click the Tour icon for a guided tour of the worksheet highlighting salient features and
providing information if you are new to the interface.
Click the help icon to open the contextual or online help for the page you are viewing.
Click Show DDL to generate Data Definition Language statements for the analytic view.
Viewing Analyses
Analyses provide you an insight into the performance of your data.
You can use the Analysis and the Analyze pane to search or browse Analytic Views, view
their analysis, or reports you have access to. Clicking on the Analyses takes you to a page
where you can view the Analyze pane. Here you can view default hierarchy level and
measures selected. You can drag and drop any levels and measures from the Analytic View
browser to rows/columns and Values in the drop area respectively. This defines your analysis
criteria. Once the values are dropped, the Data Analysis tool generates a query internally.
The tool displays the results of the analysis in the form of reports in the Analyses that
match your analysis criteria. You can add multiple reports to the Analysis, and you can
examine and analyze the reports and save them as a new analysis. You save the Analysis
as a whole rather than individual reports; once saved, all the reports are part of that
single Analysis. Reports are unnamed.
Creating Analyses
Use the Data Analysis tool to create and edit your Analyses. The analysis provides
you a customized view of Analytic View data. An analysis consists of one or more
reports that display the results of the analysis.
You can create a Basepay analysis and add content to track your team's pay. You can
view the analysis in a pivot view or tabular view or in the form of charts. You can create
an analysis that displays these three views.
In this example, you create a new analysis called New_Analysis.
1. On the Data Analysis home page, click the Create Analysis button.
2. You can make changes to existing Analyses by adding different values from
hierarchies and measures.
3. The AV name you view on the report represents the AV used to create the report.
With a different Analytic View you can create a different report.
Note:
You must have at least one report to build an analysis.
4. To edit the existing analysis, in the Analytic View browser, select the objects to
analyze in the navigation pane and drag and drop them to the drop area in
Columns, Rows or Values and Filters of the Analyze tab.
5. The report updates based on the artifacts (levels and measures) you select.
The new analysis which contains the updated report will now be visible in the Data
Analysis home page for further editing.
• Saving Analyses
You can save the personalized settings you made for the Analysis and use them
on any other Analyses for future reference.
Saving Analyses
You can save the personalized settings you made for the Analysis and use them on
any other Analyses for future reference.
You won't have to make these decisions manually every time you open the Analyses
page if you save these preferences.
1. Open the analysis for editing from the Data Analysis home page. Select Edit from
the Analysis tile you wish to edit.
2. Click on the Save As icon. Enter a descriptive name for your Analysis.
3. Click Save.
Creating Reports
A single report you generate analyzes an AV based on the Levels and measures you
select.
You can add multiple reports to the newly created analysis. Each report is independent
of the others. To add a report to the Analysis:
1. Open the Analysis for editing from the Data Analysis home page. Select Edit from the
Analysis tile you wish to edit.
2. Click on the + Report icon to add one or more reports to the Analysis. You can use a
report to add configured Analyses to the Analyses page.
3. Click on the report to select it. The resize arrow in the report resizes the report window.
4. Click the cross icon on the selected report to delete the report from the Analysis.
5. The header displays the name of the Analytic View you select.
6. You can expand or collapse the report with their respective arrows.
When you add a report to the Analysis, the report provides the following actions:
• Edit SQL
• Performance
• Rename Report: Click Rename Report to rename it. Click Save Report to save the
current report.
• Delete Report: Click Delete Report to delete the report.
Edit SQL
You can view the SQL output when you click Edit SQL. The lower right pane in SQL displays
the output of the operation executed in the SQL editor. The following figure shows the output
pane on the SQL page.
The Performance menu displays the PL/SQL procedures in the worksheet area which
describes the reports associated with the Analytic Views.
The top part of the performance output consists of the worksheet editor for running
SQL statements and an output pane to view the results in different forms. You can
view the results in a Diagram View, Chart View, Clear Output from the SQL editor,
Show info about the SQL statements and Open the performance menu in a new tab.
The following figure shows the output pane of the performance menu:
• Advanced View: This is the default view of the query when you click Performance.
It displays data from the SQL query in a mixed tabular/tree view. There is a Diagram View
icon that you can use to switch back to the diagram view.
• Chart View: Displays data from the SQL query in the form of charts.
• Print Diagram: Prints the diagram.
• Save to SVG: Saves the diagram to a file.
• Zoom In, Zoom Out: If a step is selected in the diagram, clicking the Zoom In icon
ensures that it remains at the center of the screen.
• Fit Screen: Fits the entire diagram in the visible area.
• Actual Size: Sets the zoom factor to 1.
• Expand All: Displays all steps in the diagram.
• Reset Diagram: Resets the diagram to the initial status, that is, only three levels of steps
are displayed.
• Show Info: Shows the SELECT statement used by the Explain Plan functionality.
• Open in New Tab: Opens the diagram view in a new tab for better viewing and
navigation. The diagram is limited to the initial SELECT statement.
• Min Visible Total CPU Cost(%): Defines the threshold to filter steps with total CPU cost
less than the provided value. Enter a value between 0 and 100. There is no filtering for 0.
• Plan Notes: Displays the Explain Plan notes.
Properties in Explain Plan Diagram
Double-click or press Enter on a selected step to open the Properties slider, which provides
more information about that step. See PLAN_TABLE in Oracle Database Reference for a
description of each property. The Properties slider shows:
• Information for that step extracted from PLAN_TABLE, in tabular format, with nulls
excluded. You can select JSON to view the properties in JSON format.
1. Table browser: Select SQL Query from the drop-down to create a report on a SQL
query. When you select SQL Query, the Table browser displays the available tables
and their corresponding columns. You can drill down into tables to view their
corresponding columns.
2. SQL Worksheet editor with the Run icon: You can view this component only when you
generate a report on a SQL Query and not an Analytic View.
The SQL editor area enables you to run SQL statements and PL/SQL scripts from the
tables you want to query displayed on the Table browser. By default, the SQL editor
displays the Select * statement to display all the columns from the first table. Click Run to
run the statements in the editor.
3. Output pane: The output pane, when you view the results of a SQL Query,
consists of the following tabs:
• Query Result: Displays the results of the most recent Run Statement
operation in a display table.
• Explain Plan: Displays the plan for your query using the Explain Plan
command. For more information, refer to the Explain Plan Diagram in
Creating Reports section.
• Autotrace: Displays the session statistics and execution plan from
v$sql_plan when running an SQL statement using the Autotrace feature.
4. Modes of visualization in the Query Result tab: You can select any of the four
modes to visualize the results of the SQL query report you generate.
The four modes of visualization, when you view the reports generated on a SQL
query, are:
• Base Query: This is the default view. The query written in the SQL editor is
the Base Query.
• Table: You can view the SQL results in tabular form. By selecting this view, a
Column drop zone appears which enables you to drag and drop selected
columns from the Table browser. By dropping the selected columns in the drop
zone, you can view only those columns in the Query Result tab. Select the
cross mark beside the Column name to remove it from the drop zone.
• Pivot: You can view the results of the SQL query in pivot format. By selecting
this format, a Columns, Rows, and Values drop zone appears where you can
drag and drop the selected columns, rows or values from the Tables browser.
Note:
Values must be a NUMBER type.
• Chart: You can view the SQL results in the form of a chart. By selecting this view, an
X-axis and Y-axis drop zone appears. Drag and drop selected columns from the
Table browser to the drop zone. Only columns with a NUMERIC data type can be
dropped on the Y-axis; otherwise, the display fails with a Must be a NUMBER type
error. You can add multiple values to the Y-axis. To view the chart for only a
particular Y-axis value, select that value from the drop-down.
5. Modes of visualization of reports generated on an Analytic View: You can select any
of the three modes to visualize the results of the report you generate on an Analytic View.
The three modes of visualization when you view the reports generated on an Analytic
View are:
• Table: You can view the SQL results in tabular form. By selecting this view, a Rows
and Filters drop zone appears which enables you to drag and drop selected
Hierarchies and Measures from the Analytic View browser. This way you can view
the report results that consist of the selected hierarchies and measures.
• Pivot: You can view the results of the Analytic View report in the pivot format. By
selecting this format, a Columns, Rows, Values, and Filters drop zone appears where
you can drag and drop the selected hierarchies and measures from the Analytic View
browser. Note: Values must be a NUMBER type.
• Chart: You can view the report you generate on an Analytic View in the form of a
chart. By selecting this view, an X-axis, Y-axis, and Filters drop zone appears. Drag
and drop selected hierarchies and measures from the Analytic View browser to the
drop zone. Only columns with a NUMERIC data type can be dropped on the Y-axis;
otherwise, the display fails with a Must be a NUMBER type error. You can add
multiple values to the Y-axis. You can select Horizontal or Vertical from the
drop-down to view horizontal or vertical charts, respectively. You also have the
option to select Area Chart, Bar Chart, Line Chart, or Pie Chart from the drop-down.
6. Faceted search panel: For the reports you generate on a SQL query and an Analytic
View, you can view a Faceted search column. For a SQL report, this panel allows you to
add filters to the report. The tool generates a filter for each value in the column that is
retrieved from the query result. You can filter different columns on the faceted search
panel and view the results in the Query result to get only the data you wish to view. You
can view the data retrieved from the SQL query in either text or visual format. For reports
you generate on an Analytic View, select Faceted from the radio button. This filter
behaves differently than the Faceted search you generate on an SQL Query. See Adding
filters to a report you generate on an Analytic View.
The Analyses page when you create a report on an SQL query looks like this:
The Analyses page when you create a report on an Analytic View looks like this:
The following topics describe how to create a report and access the Analyses page:
Creating Reports
1. You can create Reports in either of the following ways:
• On a Query
• On an Analytic View
2. Adding filters to a report you generate on an Analytic View
Note:
By default, you will view "Select * from <Tablename>" for the table you select.
4. You will view a funnel icon in the Query Result tab which displays a filter with the
REGION column as Asia. The Query Result will display only the records with
REGION as Asia.
2. You can select more filters by selecting the values from the faceted search panel, or by
clicking the funnel.
3. Clicking the funnel icon displays all the values of the Employee name column. Select
Jones to filter the report results further, displaying the salary of employees named Scott
and Jones. You can view the values in a list view or a multi-select view.
Cumulative Aggregates
Cumulative calculations start with the first time period and calculate up to the current
member, or start with the last time period and calculate back to the current member.
The tool provides several aggregation methods for cumulative calculations:
• Cumulative Average: Calculates a running average across time periods.
• Cumulative Maximum: Calculates the maximum value across time periods.
• Cumulative Minimum: Calculates the minimum value across time periods.
• Cumulative Total: Calculates a running total across time periods.
You can choose the measure, the time dimension, and the hierarchy. For selecting the
time range see "Choosing a Range of Time Periods" in Oracle OLAP User’s Guide.
These are the results of a query against the calculated measure, which displays
values for the descendants of calendar year 2021. The minimum value for quarters
begins with Q1-21 and ends with Q4-21, and for months begins with Jan-21 and ends
with Dec-21.
TIME TIME_LEVEL SALES MIN_SALES
-------- -------------------- ---------- ----------
Q1.21 CALENDAR_QUARTER 32977874 32977874
Q2.21 CALENDAR_QUARTER 35797921 32977874
Q3.21 CALENDAR_QUARTER 33526203 32977874
These are the results of a query against the calculated measure. The PRIOR_PERIOD column
shows the value of Sales for the preceding period at the same level in the Calendar hierarchy.
TIME TIME_LEVEL SALES PRIOR_PERIOD
-------- -------------------- ---------- ------------
2020 CALENDAR_YEAR 136986572 144290686
2021 CALENDAR_YEAR 140138317 136986572
Q1.20 CALENDAR_QUARTER 31381338 41988687
Q2.20 CALENDAR_QUARTER 37642741 31381338
Q3.20 CALENDAR_QUARTER 32617249 37642741
Q4.20 CALENDAR_QUARTER 35345244 32617249
Q1.21 CALENDAR_QUARTER 36154815 35345244
Q2.21 CALENDAR_QUARTER 36815657 36154815
Q3.21 CALENDAR_QUARTER 32318935 36815657
Q4.21 CALENDAR_QUARTER 34848911 32318935
Period to Date
Period-to-date functions perform a calculation over time periods with the same parent
up to the current period.
These functions calculate period-to-date:
• Period to Date: Calculates the values up to the current time period.
• Period to Date Period Ago: Calculates the data values up to a prior time period.
• Difference From Period to Date Period Ago: Calculates the difference in data
values up to the current time period compared to the same calculation up to a prior
period.
• Percent Difference From Period To Date Period Ago: Calculates the percent
difference in data values up to the current time period compared to the same
calculation up to a prior period.
When creating a period-to-date calculation, you can choose from these aggregation
methods:
• Sum
• Average
• Maximum
• Minimum
You also choose the measure, the time dimension, and the hierarchy.
These are the results of a query against the calculated measure. The MIN_TO_DATE
column displays the current minimum SALES value within the current level and year.
TIME TIME_LEVEL SALES MIN_TO_DATE
-------- -------------------- ---------- -----------
Q1.21 CALENDAR_QUARTER 36154815 36154815
Q2.21 CALENDAR_QUARTER 36815657 36154815
Q3.21 CALENDAR_QUARTER 32318935 32318935
Q4.21 CALENDAR_QUARTER 34848911 32318935
JAN-21 MONTH 13119235 13119235
FEB-21 MONTH 11441738 11441738
MAR-21 MONTH 11593842 11441738
APR-21 MONTH 11356940 11356940
MAY-21 MONTH 13820218 11356940
JUN-21 MONTH 11638499 11356940
JUL-21 MONTH 9417316 9417316
AUG-21 MONTH 11596052 9417316
SEP-21 MONTH 11305567 9417316
OCT-21 MONTH 11780401 9417316
NOV-21 MONTH 10653184 9417316
DEC-21 MONTH 12415325 9417316
Parallel Period
Parallel periods are at the same level as the current time period, but have different parents in
an earlier period. For example, you may want to compare current sales with sales for the
prior year at the quarter and month levels.
The Data Analysis tool provides several functions for parallel periods:
• Parallel Period: Calculates the value of the parallel period.
• Difference From Parallel Period: Calculates the difference in values between the
current period and the parallel period.
• Percent Difference From Parallel Period: Calculates the percent difference in values
between the current period and the parallel period.
To identify the parallel period, you specify a level and the number of periods before the
current period. You can also decide what happens when two periods do not exactly match,
such as comparing daily sales for February (28 days) with January (31 days).
You also choose the measure, the time dimension, and the hierarchy.
These are the results of a query against the calculated measure, which lists the months for
two calendar quarters. The parallel month has the same position within the previous quarter.
The prior period for JUL-21 is APR-21, for AUG-21 is MAY-21, and for SEP-21 is JUN-21.
TIME PARENT SALES LAST_QTR
-------- ---------- ---------- ----------
APR-21 CY2006.Q2 11356940 13119235
MAY-21 CY2006.Q2 13820218 11441738
JUN-21 CY2006.Q2 11638499 11593842
JUL-21 CY2006.Q3 9417316 11356940
AUG-21 CY2006.Q3 11596052 13820218
SEP-21 CY2006.Q3 11305567 11638499
Moving Aggregates
Moving aggregates are performed over the time periods surrounding the current period.
The Data Analysis tool provides several aggregation methods for moving calculations:
• Moving Average: Calculates the average value for a measure over a fixed number of
time periods.
• Moving Maximum: Calculates the maximum value for a measure over a fixed number of
time periods.
• Moving Minimum: Calculates the minimum value for a measure over a fixed number of
time periods.
• Moving Total: Returns the total value for a measure over a fixed number of time periods.
You can choose the measure, the time dimension, and the hierarchy. You can also
select the range, as described in "Choosing a range of time periods" in Oracle OLAP
User’s Guide , and the number of time periods before and after the current period to
include in the calculation.
These are the results of a query against the calculated measure, which displays
values for the descendants of calendar year 2021. Each value of Minimum Sales is the
smallest among the current value and the values immediately before and after it. The
calculation is performed over all members of a level in the cube.
TIME TIME_LEVEL SALES MIN_SALES
-------- -------------------- ---------- ----------
Q1.21 CALENDAR_QUARTER 32977874 32977874
Q2.21 CALENDAR_QUARTER 35797921 32977874
Q3.21 CALENDAR_QUARTER 33526203 33526203
Q4.21 CALENDAR_QUARTER 41988687 31381338
JAN-21 MONTH 11477898 10982016
FEB-21 MONTH 10982016 10517960
MAR-21 MONTH 10517960 10517960
APR-21 MONTH 11032057 10517960
MAY-21 MONTH 11432616 11032057
JUN-21 MONTH 13333248 11432616
JUL-21 MONTH 12070352 11108893
AUG-21 MONTH 11108893 10346958
SEP-21 MONTH 10346958 10346958
OCT-21 MONTH 14358605 10346958
NOV-21 MONTH 12757560 12757560
DEC-21 MONTH 14872522 12093518
Share
Share calculates the ratio of a measure's value for the current dimension member to
the value for a related member of the same dimension.
You can choose whether the related member is:
• Top of hierarchy: Calculates the ratio of each member to the total.
• Member's parent: Calculates the ratio of each member to its parent.
• Member's ancestor at level: Calculates the ratio of each member to its ancestor,
that is, a member at a specified level higher in the hierarchy.
When creating a share calculation, you can choose the measure, dimension, and
hierarchy. You also have the option of multiplying the results by 100 to get percentages
instead of fractions.
Share Example
This template defines a calculated measure using SHARE:
Share of measure SALES in PRODUCT.PRIMARY hierarchy of the PRODUCT dimension as
a ratio of top of hierarchy.
These are the results of a query against the calculated measure. The TOTAL_SHARE column
displays the percent share of the total for the selected products.
PRODUCT PROD_LEVEL SALES TOTAL_SHARE
-------------------- --------------- ---------- -----------
Total Product TOTAL 144290686 100
Hardware CLASS 130145388 90
Desktop PCs FAMILY 78770152 55
Portable PCs FAMILY 19066575 13
CD/DVD FAMILY 16559860 11
Software/Other CLASS 14145298 10
Accessories FAMILY 6475353 4
Operating Systems FAMILY 5738775 4
Memory FAMILY 5430466 4
Modems/Fax FAMILY 5844185 4
Monitors FAMILY 4474150 3
Documentation FAMILY 1931170 1
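For a flat result set, a comparable share-of-total calculation can be written with the
RATIO_TO_REPORT window function. This sketch assumes a hypothetical PRODUCT_SALES table
and is not the SQL the tool generates:

SELECT product,
       sales,
       -- each row's share of the grand total, expressed as a percentage
       ROUND(RATIO_TO_REPORT(sales) OVER () * 100) AS total_share
FROM   product_sales
ORDER  BY sales DESC;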
Rank
Rank orders the values of a dimension based on the values of the selected measure. When
defining a rank calculation, you choose the dimension, a hierarchy, and the measure.
You can choose a method for handling identical values:
• Rank: Assigns the same rank to identical values and leaves a gap in the sequence. For
example, it may return 1, 2, 3, 3, 5 for a series of five dimension members.
• Dense Rank: Assigns the same minimum rank to identical values with no gaps, so there
may be fewer ranks than there are members. For example, it may return 1, 2, 3, 3, 4 for
a series of five dimension members.
• Average Rank: Assigns the same average rank to identical values. For example, it may
return 1, 2, 3.5, 3.5, 5 for a series of five dimension members.
You can also choose the group in which the dimension members are ranked:
• Member's level: Ranks members at the same level.
• Member's parent: Ranks members with the same parent.
• Member's ancestor at level: Ranks members with the same ancestor at a specified
level higher in the hierarchy.
Rank Example
This template defines a calculated measure using Rank:
Rank members of the PRODUCT dimension and PRODUCT.PRIMARY hierarchy based on measure
SALES. Calculate rank using RANK method with member's parent in order lowest to
highest. Rank NA (null) values nulls last.
These are the results of a query against the calculated measure in which the products are
ordered by RANK:
PRODUCT SALES RANK
-------------------- ---------- ----------
Monitors 4474150 1
Memory 5430466 2
Modems/Fax 5844185 3
CD/DVD 16559860 4
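The Rank and Dense Rank methods correspond to the standard SQL RANK and DENSE_RANK
window functions. An illustrative query against a hypothetical PRODUCT_SALES table:

SELECT product,
       sales,
       RANK()       OVER (ORDER BY sales NULLS LAST) AS rnk,
       -- DENSE_RANK leaves no gaps in the sequence after ties
       DENSE_RANK() OVER (ORDER BY sales NULLS LAST) AS dense_rnk
FROM   product_sales
ORDER  BY sales;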
• Click the Download Add-in icon in the Microsoft Excel tab to download the Oracle
Autonomous Database Add-in for Excel.
Note:
After running the installer on Windows, the add-in automatically creates a
network share and adds the shared location as a trusted catalog location for
Office add-ins. A catalog is used to store the manifest for the Excel Add-in. It
enables publishing and management of the Excel add-in as well as other add-
ins that are available in the Office Store and licensed for corporate use. You can
acquire the Excel add-in by specifying the shared manifest folder as a trusted
catalog.
Note:
You must have Administrator privileges to successfully install the Excel add-in
for Oracle Autonomous Database.
Note:
You can change the functionality of the installer after the initial installation. Re-run
the installer and choose the option you prefer. You can either repair your existing
installation by deleting it and selecting the installed trusted catalog, or add another
manifest to the working installation.
No add-ins will now be available from the Shared folder of the Office Add-ins window.
To uninstall the Oracle Autonomous Database Add-in for Excel for Mac:
• Enter the following command in the terminal to remove the manifest.xml file.
rm ~/Library/Containers/com.microsoft.Excel/Data/Documents/wef/
manifest.xml
The Oracle Autonomous Database Add-in is uninstalled from Excel for Mac.
Note:
After uninstalling the add-in, if you re-install it from a different Autonomous
Database (ADB), the add-in attempts to load the old ADB. Check whether the
location (share path) of the shared manifest folder points to the right
location. Refer to Configuring the Excel Trusted Add-in
Catalog in the FAQs for Troubleshooting errors with Excel Add-in chapter for
more details.
Selecting the Native SQL icon or the Query Wizard icon from the ribbon launches the Oracle
Autonomous Database wizard in the Excel task pane.
Click Move in the drop-down of the wizard pane to move the wizard to your preferred
location.
The Resize option in the drop-down resizes the query window. When you select this option, you
can resize the wizard window by moving the double-headed arrow sideways. The wizard
expands when you move the arrow to the left and contracts when you drag it to the right.
Click Move out of Tab to move the add-in out of the task pane.
Click Close to close the wizard.
Connection management
Each time you start the add-in for Excel, you must establish a connection.
With the connections feature, you can connect to multiple autonomous databases from a
single add-in. You can create multiple connections; however, only one connection can be
active at a time. Ensure that the autonomous database from which you download the add-in
is available for connecting to other autonomous databases.
The connection panel lets you connect to the autonomous database through a connection
where you provide the login credentials and access the autonomous database.
With the Connections icon, you can:
• Create or delete multiple connections using a single add-in.
• Share connection information by exporting and importing connection information to a file.
• View existing connections.
• Refresh connections to retrieve updated data from the autonomous database and
view the connection status of the Excel Add-in.
Selecting Connections opens the Connections wizard. You must reinstall the add-in if
you had already installed the add-in once before.
Note:
This is an implicit type of connection. Refer to Authenticate using Implicit
connection to learn more about implicit connections.
The Connections wizard consists of the following four buttons at the header:
• Refresh: You can refresh the connection with this icon. A green icon beside
the connection name indicates the connection is active. A red icon beside the
connection name indicates that you are not connected to the database.
• Add: Select Add to add a connection. Refer to the Add a Connection section for
more details.
• Export: Select Export to export connections. Refer to the Share a Connection
section for more details.
• Import: Select Import to launch the import wizard to choose a connection file.
These files are in JSON format. Refer to the Share a Connection section for more
details.
Add a connection
You can add a connection to an autonomous database. Adding a connection lets you
specify the connection credentials for the schema of the autonomous database you
will connect to.
This connection allows you to use the database from Excel.
1. Click the Add button on the header of the Connections pane to add a connection. This
opens an Add new connection dialog box.
2. Specify the following fields on the Add new connection dialog box:
• Alias: Enter an alias name for the Autonomous Database URL. For readability,
Oracle recommends using a name different from the actual URL.
• Autonomous Database URL: Enter the URL of the Autonomous Database you wish
to connect to. Copy the entire URL from the web UI of the autonomous database. For
example, enter or copy a link of the form "https://<hostname>-
<databasename>.adb.<region>.oraclecloudapps.com/" to connect to the database.
This will be provided to you by the administrator.
• Schema: Enter the schema you use for this connection.
• Client ID: Enter the client ID for this connection. Refer to the Generate the client ID
for a connection section to generate the client ID of this implicit connection and paste
it into this field.
Click the Copy implicit connection query template button to get the PL/SQL code that
generates the client ID required in the Client ID field. For details, refer to the "Generate
Client ID for a connection" section below.
3. Click Save to save the connection.
You should be able to view the new connection now.
Import a Connection
You can import a connection file that is in a JSON format without specifying connection
credentials to the database.
Before importing the connection file from the Database Actions instance, you need to
download the connection file.
1. From the Downloads page of the Database Actions instance, click the Download
Connection File icon to download the connection file in your system.
3. Drag and drop the connection file from your system to the drop area of the wizard.
After the connection file loads, select the check box beside the connection file you
downloaded to import the file in the wizard.
4. Click OK to proceed.
5. Click the three vertical dots beside the connection file and select Connect.
Note:
If you see a red icon beside the connection file, the connection is inactive
or incorrect. Click the three vertical dots beside the connection file
and select Edit to update the connection file. Ensure the Autonomous
Database URL is correct and click Save. An example of a correct URL is
“https://<hostname>-<databasename>.adb.<region>.oraclecloudapps.com/"
6. Specify the schema name in the username field and the corresponding password in the
credentials screen. Click Sign in to log in to the autonomous database.
You will see "Active Connection" beside the connection name.
The Copy implicit connection query template button copies the connection query
template. The template consists of PL/SQL code that generates an OAuth client ID.
You can copy and run this PL/SQL code in the worksheet editor to create the client ID.
This section describes how to generate a client ID.
1. In the Development section of the Database Actions Launchpad, select the SQL
card. This opens the SQL page.
2. Paste the implicit connection query template you copied as explained in the
previous section. Here is a sample of the implicit connection query template.
COMMIT;
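The template is built around the ORDS OAUTH PL/SQL package. As a rough sketch only,
it runs along the following lines; all values shown here are placeholders, and you
should run the actual template copied from the add-in unchanged:

BEGIN
  -- Register an OAuth client for the implicit flow (placeholder values only)
  OAUTH.create_client(
    p_name            => 'PROVIDE_A_UNIQUE_CLIENT_NAME',
    p_grant_type      => 'implicit',
    p_owner           => 'MY_SCHEMA',
    p_description     => 'Client for the Excel add-in',
    p_redirect_uri    => 'https://<hostname>-<databasename>.adb.<region>.oraclecloudapps.com/',
    p_support_email   => 'admin@example.com',
    p_privilege_names => NULL);
  COMMIT;
END;
/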
Note:
Do not change the template; otherwise you might see errors and the client ID will
not be created. The p_redirect_uri field is auto-generated and is different for each
autonomous database.
6. Click the Run Script icon on the worksheet toolbar to run the PL/SQL code.
7. The following is sample output shown in the Script Output tab after you run the
PL/SQL code.
Elapsed: 00:00:00.113
----------------------------------------
NAME CLIENT_ID
---------------------------------- ------------------------
Client OohrmcjhzmXh3skoeEusXA..
[PROVIDE_A_UNIQUE_CLIENT_NAME] hFEVl8GSn1yR3cSAP3KL2w..
Elapsed: 00:00:00.170
2 rows selected.
8. Copy the client ID from the first line of the script output, or copy the client ID
corresponding to the client name you provided in the implicit connection query
template. In the above example, the client ID is
OohrmcjhzmXh3skoeEusXA..
9. Paste the client ID into the Client ID field of the Add new connection dialog box.
Refer to Add a connection for more details.
Once you have created a new connection, you can view it in the Connections panel.
A connection in the panel lists the following:
• An alias at the top. For example, test is the alias.
• The URL of the autonomous database, with the schema you connect to, at the
bottom of the connection panel.
• A connection status indicator that identifies whether the connection is
connected. A red cross mark indicates there is no connection, whereas a
check mark indicates the connection is successful.
• An Actions icon at the right of the connection panel.
Click the Actions icon on the connection. You can perform the following actions on the
selected connection:
• Connect: Click Connect to connect the add-in with the autonomous database. This
opens the login page of Oracle Database Actions for the autonomous database you wish to
connect to.
Enter the schema name in the username field and the corresponding password.
Note:
The add-in for Excel asks for your permission the first time you log in to the
database. Select Approve to proceed with the login.
After you connect to Oracle Database Actions, the Database Actions home page
appears.
• Activate:
When you are connected to multiple autonomous databases, there can be only one
active connection. Click Activate to make the selected connection active. The active
connection is displayed at the top of the panel. You can expand or collapse the active
connection. Expand it to view details such as the autonomous database URL, the
schema, and the connection status.
You can collapse it to save space; the collapsed form shows the alias and the status of
the connection.
• Edit: This button lets you edit the existing connection. Selecting Edit opens the same
dialog you see when you add a connection, where you can review or edit connection
details such as the Alias, Autonomous Database URL, Schema, or Client ID of the
existing connection.
• Duplicate: Select Duplicate to clone the connection from the list of connections
displayed in the Connections panel. This creates a copy of the connection without having
to enter the details again.
• Disconnect: Select Disconnect to disconnect from the connection. Once the connection
disconnects, you will see a red cross mark beside the connection name. This indicates
that the connection is terminated.
• Remove: Select Remove to remove the connection from the list of connections
displayed in the Connections panel.
Share a connection
You can import or export a connection using the Import and Export buttons on the
Connections panel.
• Import: Click Import and select a connection file from your local device. Once you import
the connection file, the connection appears in the Connections panel with a check
box beside it. Select the check box and click OK to add the connection to the list of
connections in the panel. The add-in copies the connection information, which you can
use as a new connection. With the import feature, you do not have to enter the
connection information to add a new connection.
• Export: The Export button exports an existing connection, which you can import later.
Clicking Export displays a check box beside each connection in the connections
list. Select the connections you wish to export; multiple selections are allowed.
After you select the connections, click OK. Once the connection file is exported,
the add-in for Excel downloads the connection file (a *.json file) to your
local device. The exported connection file is named
spreadsheet_addin_connections.json.
To run a query using the add-in, start Excel and create a blank workbook using the
standard Excel workbook file format.
1. In the Excel ribbon, select Autonomous Database.
2. Click the Native SQL icon from the ribbon. This opens an Oracle Autonomous
Database dialog box in the Excel Task Pane with Tables and Views icons and a
search field beside them.
3. Select Tables to view all the existing tables in the schema. Click Views to view the existing
views in the schema.
4. Right-click the table whose data you want to query and choose Select to view
all the columns of the table. The column names are displayed in the Write a Query
section. You can also click the table and view individual columns. Click the Run
button to run the commands in the query editor. The query results are displayed in the
worksheet you select.
Note:
You will see an error message if you click the Run icon while the query editor
is empty.
5. You can click the + sign beside the Select worksheet drop-down to display the results in a
new worksheet.
6. The worksheet also displays information such as the timestamp, the user who created
and ran the query, and the autonomous database URL.
To run another query follow these steps:
1. Click the eraser icon to clear the previous query from the SQL editor and write a
new query.
2. In the Select worksheet drop-down, select a new sheet, Sheet 2 in this case. The
Add-in adds a sheet for the user. If you choose to work on the same sheet, the
Add-in refreshes the data in the existing worksheet.
3. Click the Run icon to display the query results.
The worksheet displays the result of the query all at once. While this behavior works for
most scenarios, for large data sets the query result might exceed 10K rows. Although you
can view the first 10K rows, a confirmation window appears asking whether you want
to view the rest of the result.
Select Yes to view the entire result set. Loading the entire data may take a while. You
must fetch the entire query result before working with pivot tables; otherwise, aggregation
in pivot tables can produce incorrect results.
Note:
Close the Query Wizard panel to cancel the operation of fetching the result.
2. Filter panel: The Filter panel displays to the right of the Analytic View panel. You
can create filter conditions and add calculations to the query in this panel.
3. Query Result panel: The Query Result panel displays to the right of the Filter
panel. Once you select the filter criteria and determine what calculated measures
to add to your query, you run the query. You can view and revise the SQL query.
After the SQL query runs, you view the query results in the worksheet. You can
select the output format of the result here. You can view the results in tabular
To query an analytic view and explore the Query Wizard menu in the MS Excel ribbon:
1. On the ribbon, select the Query Wizard icon.
2. Selecting the Query Wizard opens an Oracle Autonomous Database dialog box in the
Excel Task Pane.
3. Select an existing Analytic View from the drop-down in the Analytic View pane. As you
select the Analytic View, it appears on the Analytic View field.
4. Select your choice of measures, hierarchies, and levels from the list of available measures,
hierarchies, and levels associated with the Analytic View. Click Next.
5. The wizard window progresses to the Filter pane where you can add or edit filters to
query.
6. Under Add or Edit filter conditions, do the following.
• Select the column name and the attribute name from the drop-down. The values of
the attribute change dynamically with the change in column names.
• Select an operator in the Operator field to apply to the values that you specify in the
Value field.
• Specify a value or values from the list that contains members of the column that you
select. You can also enter the value into the Value field manually. For example, you
can select > in the Operator field to use only values greater than the value that you
select in the Value list. If you select 100,000 from the Value list, then the filter uses
values from the column that are greater than 100,000. You can use this
information in an analysis to focus on products that are performing well.
• Click Add Filter to add another filter condition.
7. Under Add or Edit Calculations, do the following.
• Specify the column whose values you want to include in the group or
calculated item.
• In the Calc expression field, enter a custom calculation expression to apply
to the column value. You can add functions or conditional
expressions.
8. Click Next to progress to the Query Result.
9. You can view, edit, and review the query you have generated from the Query
Review editor.
10. Select Remove empty columns to remove empty columns from the result.
11. Select Column per level to retrieve all levels of a hierarchy in a single column.
12. Select the worksheet from the drop-down where you want to view the result.
14. You can view the result of the query in the worksheet you select.
15. You can always modify the query in the Oracle Autonomous Database dialog box
editor even after results are generated.
16. Select Table in the Query Result pane to view the results in the worksheet in a
tabular format.
17. Select Pivot in the Query Result pane to view the results in a new worksheet in
Pivot format.
See https://support.microsoft.com/en-gb/office/create-a-pivottable-to-analyze-
worksheet-data-a9a84538-bfe9-40a9-a8e9-f99134456576 for more information on
Pivot tables.
levels of detail within them. Drilling is a way to navigate through data in views quickly and
easily.
• Drill down to display data in more detail, which displays more members.
• Drill up to display less data.
When you drill down in a table, the detail level data is added to the current data.
For example, when you drill from a month, the table displays data for the month and for the
days in that month.
1. Hover over the cell in the spreadsheet that contains a + sign.
2. Click the + sign beside the member you want to drill.
The details are added to the pivot table.
You can now create, manage, and run queries directly with the analytic views in the
autonomous database and create powerful data driven reports.
Why is the My Add-ins icon from the Insert ribbon in the MS Excel workbook greyed
out?
Even before installing the Excel add-in, sometimes the My Add-ins icon from the Insert
ribbon in the MS Excel workbook appears to be greyed out.
1. From the File menu in Excel ribbon, go to Account and select Manage Settings from
the Account page.
2. Ensure that you select Turn on optional connected experiences.
3. From the File menu in Excel ribbon, go to Options and select the Trust Center option
from Excel Options.
4. Click Trust Center Settings and ensure that you deselect Disable all Application Add-
ins (if selected) from the Add-ins tab in the Trust Center dialog box.
5. Select the Trusted Add-in Catalogs menu from the Trust Center dialog box and ensure
that you deselect the Don’t Allow any web add-ins to start checkbox.
Why doesn’t the sign-in page of the Excel Add-in load or appear?
At times you might encounter issues with the Excel Add-in even after it has loaded correctly.
For example, the add-in fails to load or is inaccessible. Check that the versions of
Excel and the operating system you use are compatible.
If the compatibility is correct and the sign-in page to the Excel Add-in still does not show up,
or it does not load properly, we recommend applying all pending Windows, Office, and
browser updates.
1. From the Windows Start menu, select Settings, Update & Security, and then Windows
Update.
2. If updates are available on the Windows Update page, review the updates and click
Install Now.
Note:
The details of applying Windows updates can vary from version to version;
if required, check with your system administrator for assistance.
Checking is only required the first time you use the installer, or if the shared manifest
folder is changed. The change occurs during uninstalling and re-pointing to a new
ADB.
To remove the catalog from the trusted table and add a new catalog pointing to a
different address:
• Select the Catalog you want to remove from the trusted catalog table and click
Remove.
• Enter the correct share path of the shared manifest folder in the Catalog url field and
click Add catalog to add the shared folder to the trusted catalog.
Restart Excel to make the new shared folder active and access the add-in.
Why doesn’t the add-in work correctly even after configuring the Excel trusted Add-in
catalog?
Let’s say you configure the Excel trusted add-in catalog after re-installing the add-in but even
then, it does not load correctly. Sometimes the database server changes are not reflected in
Excel even after you set the share path of the shared manifest folder as a trusted add-in
catalog. Clear the Office cache to resolve this issue.
Refer to this page https://docs.microsoft.com/en-us/office/dev/add-ins/testing/clear-
cache#clear-the-office-cache-on-windows to clear the Office cache on Windows and Mac.
Clearing the Office cache unloads the Excel add-in. Install the add-in and check the
configuration of the Excel trusted add-in catalog. This should solve the issue of incorrect
loading of the Excel add-in.
• This opens a Download screen with Microsoft Excel and Google Sheets tabs. Click the
Google Sheets tab and select Download Add-in.
You can now view a zip file in the Downloads folder of your system. Extract the contents of
the zip file onto your system.
To set up the Oracle Autonomous Database add-on for Google Sheets, you must import the
files in the oracleGoogleAddin folder to Google Apps Script.
Note:
Importing the files is a one-time activity; typically, this is done by an
administrator.
After you import or upload the files to Google Apps Script, follow these steps to complete the
setup of the Oracle Autonomous Database add-on for Google Sheets:
• Deploy the Google Script as a web app.
• Generate Client ID and Client Secret keys using UI.
Clasp is an open-source tool for developing and managing Google Apps Script projects
from your terminal.
Note:
Clasp is written in Node.js and distributed via the Node Package Manager
(NPM) tool. You must install Node.js version 4.7.4 or later to use clasp.
1. Enter sheet.new in the address bar of the web browser to open a new Google Sheet.
Make sure you are logged in with your Google account.
2. Select Apps Script from the Extensions menu. You can view the Apps Script
editor window.
3. Select the Code.gs file, which exists by default, in the Apps Script editor
window. Click the vertical dots beside the Code.gs file. Select Delete to delete
the existing Code.gs file.
4. After you install Node.js, enter the following npm command at the command
prompt to install clasp. Run this command from the location where you
downloaded and extracted the oracleGoogleAddin folder.
npm install -g @google/clasp
After you install clasp, the clasp command is available from any directory on
your computer.
5. Enter the following command to log in and authorize management of your Google
account's Apps Script projects.
clasp login
Once this command runs, the system launches the default browser and asks you
to sign in to the Google account where your Google Apps Script project will be
stored. Select Allow so that clasp can access your Google account.
Note:
If you have not enabled the Apps Script API in Google Apps Script, the above
command will not succeed. Enable the API by visiting the https://
script.google.com/home/usersettings site and turning the Google Apps Script
API On.
6. In your existing Google Apps Script project, click the Project Settings in your left pane.
Click Copy to Clipboard to copy the Script ID under IDs.
7. Go back to the command prompt and enter the following command, replacing the
placeholder with the Script ID you copied in the previous step:
clasp clone <scriptID>
8. Push all the files from your folder to the Google Apps Script files by specifying the
following command:
clasp push
This command uploads all of the script project's files from your computer to Google Apps
Script files.
9. Go to the newly created Google Sheet and click on the Extensions menu and select
Apps Script. Under Files, you can view all the files present in the oracleGoogleAddin
folder.
10. After you import or upload the files to Google Apps Script, follow these steps to complete
the setup of the Oracle Autonomous Database add-on for Google Sheets:
• Deploy the Google Script as a web app
• Generate Client ID and Client Secret keys using UI
Note:
https://script.google.com/macros/s/AKfycbwFITvtYvGDSsrun22g7TrbrfV-
bUVoWKs7OrA_3rtRAlmcGFe8bejNprZML7gFPzQ/exec
This is an example of the Web application deployment URL. You will paste this value
into the Google Sheet Redirect URL field in the Download Connection File
section of Working with Connections when you download the connection file from
Database Actions or when you manually create a connection from the Google
Sheet to the Autonomous Database.
10. You can close the Apps Script browser tab at this point and navigate back to the
Google Sheets browser tab. You are now ready to create a connection to the
Autonomous Database.
11. Be sure to save the worksheet after you upload all the files to Apps Script. Click the
Refresh button once you have uploaded all the files. You can now view a new
Note:
Generate the OAuth Client ID and OAuth Client Secret fields by using the UI.
You generate the client keys by accessing the Autonomous Database instance URL
appended with oauth/clients.
For example, if your instance is "https://localhost:port/ords/schemaName", you need to sign in
to the link "https://localhost:port/ords/schemaName/_sdw/?nav=rest-workshop&rest-
workshop=oauth-clients" to generate a new client. Be sure to include the trailing slash.
1. Sign in to Database Actions with the "https://machinename.oraclecloudapps.com/ords/
SchemaName/oauth/clients/" link. You can view the OAuth Clients page at "https://
localhost:port/ords/schemaName/_sdw/?nav=rest-workshop&rest-workshop=oauth-
clients".
3. From the Grant type drop-down, select the type of client connection you want. You
can select the following options:
• AUTH_CODE: Select this option for an explicit connection. This is the more
secure method and is preferred.
• IMPLICIT: Select this option for an implicit connection. Use this grant type
when the autonomous database is in a private subnet or within a
customer firewall.
4. Enter the following fields. The fields with an asterisk (*) are mandatory:
• Name: Name of the client.
• Description: Description of the purpose of the client.
• Redirect URI: The web application deployment URL.
• Support URI: Enter the URI where end users can contact the client for
support. Example: https://script.google.com/
• Support Email: Enter the email where end users can contact the client for
support.
• Logo: Select an image from your local system to insert a logo for your new
client.
5. Navigate to the Roles tab to select the roles of the client. This is not a mandatory
field. You have the following role options:
• OAuth Client Developer
• RESTful Services
• SODA Developer
• SQL Developer
• Schema Administrator
6. Progress to the Allowed Origins tab. Specify and add the list of URL prefixes in the text
field. This is not a mandatory field.
7. Progress to the Privileges tab to add any privilege. You do not require any privilege to
create an OAuth Client.
8. Click Create to create the new OAuth Client. This registers the OAuth Client which you
can view on the OAuth Clients page.
9. Click the show icon to view the Client ID and the Client Secret fields.
Selecting the Download Connection File button opens a Download Connection File
wizard. Specify the following field values in the wizard:
• Google Sheet Redirect URL: This is the Web application deployment URL you
copied from the Deploy the Google Script as a Web app section.
• Response Type:
– Select Token as the response type when the autonomous database is in a
private subnet or within a customer firewall. When creating a connection using
this response type, choose the implicit connection type.
– Select CODE as the response type when you need a more secure method;
this method is preferred. When creating a connection using this
response type, choose the explicit connection type.
Click Download to download the JSON connection file to your local system.
You can import this connection from the Import tab in the Connections wizard. This will
automatically generate all the field values without having to manually fill in the fields.
The above methods require a web application deployment URL, which you obtain when
you deploy a Google Script as a web app.
The following sections demonstrate how to connect using implicit and explicit connections.
Google Sheets needs permission to access the Autonomous Database. You must first
complete the authorization to connect to the autonomous database. The add-on requires
one-time authentication for the setup.
1. On the Google Sheet, click Ask Oracle and select Connections.
2. Selecting Connections requires one-time Google authentication. Clicking Connections
opens a pop-up window that asks for your permission to run the authorization. Click
Continue.
3. You will now see a window informing you that the application requests access to
sensitive information in your Google account. Click Advanced and select the Go to
Untitled project (unsafe) link.
4. Selecting the link opens a new window that asks you to confirm that you trust the
application. Click Allow to continue. You have now completed the setup.
5. Select Connections from the Ask Oracle menu in the Google sheet.
7. Selecting Add Connection opens an Add Connection wizard in the connection list panel.
When you select the Implicit option, you can view the following fields:
• OAuth Client ID: The client_id you generate using the Create New Client wizard in
the UI. Refer to the Generate Client ID and Client Secret using the UI section.
• Schema Name: Specify the name of the schema.
When you select Authorization Code, you can view the following fields:
• OAuth Client ID: The client_id you generate using the Create New Client wizard in
the UI. Refer to the Generate Client ID and Client Secret using the UI section.
• OAuth Client Secret: The client_secret you generate using the Create New
Client wizard in the UI. Refer to the Generate Client ID and Client Secret using the
UI section.
• Schema Name: Specify the name of the schema.
Click Save.
After you click Save, you can view the new connection in the connection list panel. The
connection list displays the name of the connection, the name of the schema, and the
OAuth type you granted. However, the connection is still in a disconnected state.
9. Click the three vertical dots beside the connection name and perform any of the following
operations:
Connect: Select Connect to connect to the Autonomous Database and change the
status of the connection to active. Selecting Connect opens the sign-in page of the
autonomous database. After you log in, you will see a page stating that the database
access is granted to you. Close the window and return to Google Sheets. You will now
see that the connection is active.
Edit: Select Edit to update any value of the connection. Click Save to update the edited
values.
Duplicate: Select Duplicate to create a duplicate connection.
Remove: Select Remove to remove the connection from the connection list.
10. You can export an existing connection without having to add a connection again. Click
the Export tab in the Connections wizard to export the selected connection, select the
11. Import an existing connection by clicking the Import tab in the Connections
wizard. Drag and drop the connection file saved on your local system into the
drop area to import the connection. You can import the
connection file you downloaded from the Download Connection File section in
2. Clicking Register opens a pop-up window that asks for your permission to run the
authorization. Click Continue. Selecting Continue redirects you to the Google
Accounts page, where you need to select your Gmail account.
3. You will now see a window informing you that the application requests access to
sensitive information in your Google account. Click Advanced and select the Go to
Untitled project (unsafe) link.
4. Selecting the link opens a new window that asks you to confirm that you trust the
application. Click Allow to continue.
5. You have now completed the setup. Select Register from the Ask Oracle menu in the
Google sheet.
This opens an Oracle Autonomous Database wizard in the Google sheet. Specify the
following fields:
• ADB URL: Enter the ADB URL. For example, https://
DBADDRESS.oraclecloudapps.com/ords/USER.
• OAuth Client ID: client_id you generate during authentication.
• OAuth Client Secret: client_secret you generate during authentication. Refer to the
Create Connections with the Google spreadsheet section for more details.
6. Select Authorize.
After successful authorization of the credentials, you can view the Query Wizard, Native
SQL, Analyses and Reports, Clear Data, Delete All Sheets, About, and Sign Out
menu items under Ask Oracle.
4. You will see the default query in the query editor area. You can select any of three
modes to visualize the results of the SQL query report you generate:
• Base Query: This is the default view. The query written in the SQL editor is the
base query.
• Table: You can view the SQL results in tabular form. Selecting this view displays a column
drop zone, which enables you to drag and drop selected columns from the
Table browser. By dropping the selected columns in the drop zone, you view only
those columns in the result data generated in the worksheet. Select the cross mark
beside a column name to remove it from the drop zone.
• Pivot: You can view the results of the SQL query in pivot format. Selecting this
format displays X and Y drop zones where you can drag and drop the selected
columns from the Tables browser.
5. Click the funnel icon (Faceted Filter) to add filters to the result. The wizard generates a
filter for each value in the column that is retrieved from the query result. You can filter
different columns on the faceted filter panel and view the results in the worksheet,
so that you see only the data you want. For example, to view the customer reports
for males, click the faceted filter and select Male under Gender. Click Save to view the
results.
To query an Analytic View and explore the Data Analysis menu in Google Sheets:
1. On the Google Sheet, select the menu item Ask Oracle > Data Analysis. Selecting Data
Analysis opens a Data Analysis wizard in the Google task pane.
2. Select AV from the AV or Query drop-down, select qteam from the schema drop-down,
and select the AV from the available AVs.
3. You can select either of two modes to visualize the results of the AV query you
generate:
• Table: You can view the AV query results in tabular form. Selecting this view displays a
column drop zone, which enables you to drag and drop selected columns
from the Table browser. By dropping the selected columns in the drop zone, you
view only those columns in the result data generated in the worksheet. Select the
cross mark beside a column name to remove it from the drop zone.
• Chart: You can view the results of the AV query in chart format. Selecting this
format displays X and Y drop zones where you can drag and drop the selected
hierarchies and measures from the AV browser.
Note:
Only measures can be dropped on the Y axis.
4. Click the funnel icon to view the Faceted Filter list. The wizard generates a filter for
each value in the column that is retrieved from the query result. You can filter
different columns on the faceted filter panel and view the results in the worksheet,
so that you see only the data you want. For example, to view the movies with the
genre Drama, select Drama from the faceted filter and click Save.
5. Click Run to view the results, restricted to the Drama genre, in the worksheet you
select.
3. The Oracle Autonomous Database wizard opens Tables and Views icons and a search
field beside them.
select the consumer group options, such as low, medium, and high. You can edit the
The reports and charts can be viewed as a range of chart types, namely bar charts,
area charts, line charts, and pie charts. Reports provide analytical insights
that you create from Analytic Views. An analysis can contain multiple reports. The
Analyses and Reports icon enables you to retrieve analyses and reports from the
Autonomous Database.
View Analysis
To view Analysis and explore the Analyses and Reports menu:
1. Select Analysis under Output format.
2. Use the Select an Analysis drop-down to choose the Analysis you want to view.
3. Click View Analysis to view the analysis in the Google Sheet.
View Report
To view reports:
1. Select the Analyses and Reports menu from the Ask Oracle menu. This opens
the Analyses and Reports wizard.
2. Select Report under Output Format.
3. Use Select an Analysis drop-down under Choose Analysis to choose the
Analysis you want to view.
4. After you select the Analysis, to view the report present in the Analysis, click the Select a
report drop-down and select the report you wish to view.
5. Click View Report Detail to view more information about the report: Analytic View Name,
Type of visualization, and rows, columns, and values you select while creating the report.
6. Select the worksheet from the drop-down where you would want to view the report.
7. Click View Report to view the report in the sheet you select from the previous
step. You can now view the report in the worksheet you select.
Clear Sheet
Once the add-on runs the query and retrieves the data into the worksheet, you can
view the Timestamp, User, AV-query, and SQL-query of the Analytic View in the
automatically generated query results.
The worksheet displays the result of the query all at once. If, for example, you
want to modify the query and generate the query result in the same sheet, you must
clear the existing data in the sheet.
To clear query results in the Google sheet, click the Ask Oracle menu item and select
Clear Sheet.
This option erases all types of data, including images and formatting, in the selected
sheet.
About menu
Use this option to view details about the add-on.
The About menu from Ask Oracle displays whether the add-on is connected to the server,
the ORDS version, the add-on version, the ORDS schema version, and the database version.
Sign Out
Use this option to sign out.
Select Sign Out menu from Ask Oracle to sign out from the database session.
Share or Publish
Once you generate the query results in the Google Sheet, you can share them with other
users. Sharing creates a copy of the worksheet and sends it with the design tools
hidden and worksheet protection turned on.
Note:
If you do not see the Catalog card then your database user is missing the required
DWROLE role.
The following topics describe the Catalog page and how to use it.
• About the Catalog Page
• Sorting and Filtering Entities
• Searching for Entities
• Viewing Entity Details
• About the Catalog Page
Use the Catalog page to get information about the entities in and available to your Oracle
Autonomous Database. You can see the data in an entity, the sources of that data, the
objects that are derived from the entity, and the impact on derived objects from changes
in the sources.
• Registering Cloud Links to access data
Cloud Links enable you to remotely access read only data on an Autonomous Database
instance.
• Sorting and Filtering Entities
On the Catalog page, you can sort and filter the displayed entities.
To view details about an entity, click the Actions icon at the right of the entity
entry, then click View Details.
• Editing Tables
You can create and edit objects using the Edit Table wizard, available from the Edit
menu in Actions (three vertical dots) beside the table entity.
– OCID: Access to the data set is allowed for the specific Autonomous Database
instances identified by OCID.
Click OK to finish the registration of the selected table with the cloud link.
Editing Tables
You can create and edit objects using the Edit Table wizard, available from the Edit menu in
Actions (three vertical dots) beside the table entity.
Clicking Edit from the Actions menu opens the Edit Table wizard. The table properties are
grouped in several panes, and you can visit the panes in any order to edit a table.
• Schema: Database schema in which the table exists.
• Name: Name of the table.
The different panes in the dialog are described in the following sections:
• Columns Pane
• Primary Key Pane
• Unique Keys Pane
• Indexes Pane
Columns Pane
Specifies properties for each column in the table.
General tab
Lists the columns available in the table. To add a column, click Add Column (+). A new
row is added to the table below. Select the row and enter the details for the column. To
delete a column, select the row and click Remove Column (-). To move a column up
or down in the table, select it and use the up-arrow and down-arrow icons.
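When you apply such a change to an existing table, the DDL pane (described below)
generates the corresponding ALTER statement. A sketch of the kind of statement produced
for adding a column, with hypothetical names:

ALTER TABLE employees ADD (hire_notes VARCHAR2(400));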
Primary Key Pane
Specifies properties for the table's primary key.
• Name: Name of the constraint to be associated with the primary key definition.
• Enabled: If this option is checked, the primary key constraint is enforced: that is,
the data in the primary key column (or set of columns) must be unique and not
null.
• Index: Name of the index to which the primary key refers.
• Tablespace: Name of the tablespace associated with the index.
• Available Columns: Lists the columns that are available to be added to the
primary key definition. You can select multiple attributes, if required, for the primary
key.
• Selected Columns: Lists the columns that are included in the primary key
definition.
To add a column to the primary key definition, select it in Available Columns and
click the Add (>) icon; to remove a column from the primary key definition, select it
in Selected Columns and click the Remove (<) icon. To move all columns from
available to selected (or the reverse), use the Add All (>>) or Remove All (<<)
icon. To move a column up or down in the primary key definition, select it in
Selected Columns and use the arrow buttons.
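A primary key defined in this pane ultimately results in DDL along these lines; this is
a sketch with hypothetical names, and the actual statement is generated for you in the
DDL pane:

ALTER TABLE orders ADD CONSTRAINT orders_pk PRIMARY KEY (order_id);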
Unique Keys Pane
Lists the unique constraints defined for the table.
To add a column to a unique constraint definition, select it in Available Columns and
click the Add (>) icon; to remove a column, select it in Selected Columns and click the
Remove (<) icon. To move all columns from available to selected (or the reverse), use the
Add All (>>) or Remove All (<<) icon. To move a column up or down in the unique constraint definition,
select it in Selected Columns and use the arrow buttons.
Indexes Pane
Lists the indexes defined for the table.
To add an index, click Add Index (+); to delete an index, select it and click Remove Index (-).
• Name: Name of the index.
• Type: The type of Oracle index. Non-unique means that the index can contain multiple
identical values; Unique means that no duplicate values are permitted; Bitmap stores
rowids associated with a key value as a bitmap.
• Tablespace: Name of the tablespace for the index.
• Expression: A column expression is an expression built from columns, constants, SQL
functions, and user-defined functions. When you specify a column expression, you create
a function-based index.
• Available Columns and Selected Columns: Columns selected for the index. To select a
column, click the column in the Available Columns box, and then click the Add
Selected Columns icon to move it to the Selected Columns box.
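For example, specifying a column expression such as UPPER(last_name) produces a
function-based index; a sketch of the resulting DDL with hypothetical names:

CREATE INDEX emp_upper_name_ix ON employees (UPPER(last_name));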
Comments Pane
Enter descriptive comments in this pane.
This is optional.
Storage Pane
Enables you to specify storage options for the table.
When you create or edit a table or an index, you can override the default storage
options.
• Organization: Specifies whether the table is stored and organized with an index
(Index), without an index (Heap), or as an external table (External).
• Tablespace: Name of the tablespace for the table or index.
• Logging: ON means that the table creation and any subsequent INSERT
operations against the table are logged in the redo log file. OFF means that these
operations are not logged in the redo log file.
• Row Archival: YES enables in-database archiving, which allows you to archive
rows within the table by marking them as invisible.
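These options correspond to clauses of the CREATE TABLE statement. A sketch of a
statement using some of them, with hypothetical names (note that some storage clauses
may be restricted on Autonomous Database):

CREATE TABLE sales_archive (
  sale_id NUMBER PRIMARY KEY,
  notes   VARCHAR2(100)
)
NOLOGGING
ROW ARCHIVAL;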
– Trusted: Enables the use of dimension and constraint information that has
been declared trustworthy by the database administrator but that has not been
validated by the database. If the dimension and constraint information is valid,
performance may improve. However, if this information is invalid, then the
refresh procedure may corrupt the materialized view even though it returns a
success status.
DDL Pane
You can review and save the SQL statements that are generated when creating or
editing the object. If you want to make any changes, go back to the relevant panes and
make the changes there.
For a new table, click CREATE to view the generated DDL statements.
When you edit table properties, click UPDATE to view the generated ALTER
statements.
For a new table, the UPDATE tab will not be available. When you are finished, click
Apply.
Output Pane
Displays the results of the DDL commands.
If there are any errors, go to the appropriate pane, fix the errors, and run the
commands again. You can save to a text file or clear the output.
Show User Preferences button, a list of recently viewed objects, and several
suggested searches. You can enter a search string, click on one of the recent objects
to see its details, or click on Show search suggestions icon to view suggestions on
the right side of the page.
As soon as you select the Catalog menu from the Data Studio menu bar, the page is
displayed as below.
After you have finished constructing the search query, select Save the Search from
the drop-down next to the magnifier icon to save the current search query.
You can also select Use as Query Scope from the drop-down at the end of the
search field to turn the current search query into a saved query scope.
Click X to clear your searches in the Catalog field.
For details about how to construct a search string, see Searching for Entities.
2. Toolbar
The toolbar appears after you run an initial search. It contains these buttons:
• Sort By
To select sorting values, click the Sort By button to open the list of options.
Then click the Ascending or Descending icon next to one or more of the
sorting values. For example, if you select the Ascending icon next to Entity
name and the Descending icon next to Entity type, the entities will be sorted
in alphabetical order by entity name and then in reverse alphabetical order by
entity type.
Click Reset in the list to clear the choices in the list.
The sorting values you choose are listed next to the Sort by label beneath the
toolbar. Click the X icon on a sorting value to remove it.
• Page size
By default, up to 25 entities are displayed on the page. If you want more
entities on a page, select a number from this list.
• Refresh
Click to refresh the entities shown on the page, based on the current search
string.
Click Open Card view to display entities as cards arranged into one or
two columns; click Open Grid View to display entities as rows in a table.
Click Action and select View Details to view the details of an entity. In
the Card view and the List view, you can also click the name of the entity to
show its details.
• Saved Search Suggestions
Click to open or close the Saved Search Suggestions panel.
Click Show User Preferences to set the behavior of the Catalog. When you set
these options, they take effect immediately and are also saved as the default behavior for
the page. Clicking Show User Preferences opens the General tab, where you can view
the following options.
• Show system tables
Select this option to include system tables in the search results.
• Show private tables
Select this option to include private tables in the search results.
• Page size
Select the number of entities to display on the page.
• Entities view
Select how entities are shown on the page. There are three options: Card view
(default) displays entities as cards arranged into one or two columns; Grid view
displays entities as rows in a table; and List view displays entities in a single column
of panels.
• Save last 5 search queries
Select this option to save the five previous search queries that you entered into the
Search catalog field. When the Search catalog field is empty and you click in the
field, the last five search queries are listed in the drop-down list. (Any predefined
searches selected from the Suggestions panel won't appear in this list, unless you
edited them to make a new search.)
Use the Query Scopes tab to view, create, and delete query
scopes. You can search for catalogs and save the search using query scopes. See
Searching for Entities for more about this tab.
Click the Saved Searches tab to save the current search query per the criteria
you specify or to create a new saved search.
4. Filters panel
Select one or more filter values to limit the entities shown on the page. Only those entities
that match the filter values are shown. That is, the items returned by a search are filtered
by these filter settings. Selecting all or none of the options shows all entities. See Sorting
and Filtering Entities.
5. Sort by settings
When you set sorting values by using the Sort By control in the toolbar
(see above), the settings are displayed in small boxes beneath the toolbar. You can
delete a setting by clicking the X icon in the box, or you can change the settings by
using the Sort By control again.
Click the name of the entity or click Action to view details about the entity. See
Viewing Entity Details.
Sort Entities
To set a sort order for the entities on the page,
3. Click the Ascending icon or the Descending icon next to that value.
After you've selected a value, it is displayed in a box next to the Sort by label
beneath the toolbar.
4. You can set one or more parameters. If you want to add a parameter to your
search, repeat the steps above. For example, you can set Entity name (ASC) and
then set Created on (DESC). Both appear as separate boxes next to the Sort by
label beneath the toolbar.
To remove a sort value, you can do either of the following:
• Click the X icon in the box that shows the sort value (next to the Sort by label
beneath the toolbar).
Filter Entities
Restrict which entities returned by a search are displayed on the page by
setting filters in the Filters panel on the left side of the Catalog page. Select one or
more filter values. Only those entities that are returned by a search and that match the
filter values are shown. Selecting all or none of the options shows all entities returned
by the search.
By default, system tables and private tables are not displayed. To display them, click
Catalog User Preferences in the toolbar and then click the Show system tables
slider or the Show private tables slider, or both.
You can view, create, and edit previously saved query scopes in this wizard. The
query scopes are categorized based on who created them.
• Select Custom to view, create, edit, or delete the query scopes created by you.
• Select Predefined to view the query scopes that are already defined by
Database Actions. This option does not allow you to create, edit, or delete
query scopes.
• Select All to view all the query scopes. This includes Custom and Predefined
query scopes.
Specify the following fields in the Query Scope tab.
• Name: Enter the name of the query scope. This is a mandatory field.
• Label: Enter a descriptive name here. This is a mandatory field; you will use it
to refer to the query scope.
• Definition: Enter the definition that creates the search scope. This is the same
search criteria you enter in the Search Catalog field.
Click Create to create the query scope. Click Cancel to cancel its creation.
Once you have created the new query scope, it is visible in the list of query scopes in
the Catalog User Preferences wizard. Click New to create a new query scope.
This wizard also appears on selecting Catalog User Preferences. See Catalog User
Preferences for details.
2. Specify the search criteria in the Search Catalog field. For example, you might search
for a specific entity type and a specific owner.
3. Select Save the Search from the drop-down next to the magnifier icon.
4. Clicking the icon opens the Catalog User Preferences wizard.
5. Enter the name of the saved search in the Title field. The wizard automatically generates
the Scope of the search. The definition of the search is also created automatically by
concatenating the fields used in the Search Catalog field, for example, type: TABLE
AND owner: ADPTEST.
6. Optionally, enter a description of the search in the Description field.
7. Select Create to create the saved search, or click Cancel to cancel its creation.
After the saved search is created, it appears in the list of Saved Searches.
You can change the columns displayed in the search results by clicking the pencil icon in the
Actions column. Click the delete icon in the Actions column to delete a saved search.
The saved searches you create are available for selection in the Saved Search panel on the
right of the Catalog page.
Click New to create a new saved search.
This wizard also appears on selecting Catalog User Preferences. See Catalog User
Preferences for details.
Basic Structure
The query string consists of a set of search terms. Here are some examples.
• sales
• type:TABLE
• owner!=SH
• #deployment
• type:TABLE,VIEW
Combine search terms by using the two Boolean operators AND and OR:
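For example (illustrative combinations, reusing the terms shown above):
• type:TABLE AND owner:SH
• sales OR type:VIEW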
Search Terms
Search terms come in three forms:
• Simple Search Terms
• Property Search Terms
• Classification Search Terms
• If the operator is : or =, then the condition works like an IN LIST. For example, name:X,Y
is equivalent to entity_name IN ('X', 'Y') and to (entity_name = 'X' OR
entity_name = 'Y').
• If the operator is !=, then the condition works like a NOT IN LIST. For example, type!
=TABLE,VIEW is equivalent to entity_type NOT IN ('TABLE', 'VIEW') and to
(entity_type != 'TABLE' AND entity_type != 'VIEW').
The property name must be one of the following predefined strings. Property names are case
sensitive, so TYPE, for example, is not supported.
Some properties apply to all entity types, and some properties apply only to a specific entity
type, as shown in the two tables below.
The following table shows those properties that apply to all entity types.
Property Meaning
application The name of the entity application as defined by the
ALL_LINEAGE_APPLICATIONS view. Examples include DATABASE and
INSIGHTS.
created The timestamp when the entity was created.
dateCreated The date when the entity was created. This is the same as created,
but truncated to the nearest day.
dateUpdated The date when the entity was last updated. This is the same as
updated, but truncated to the nearest day.
daysSinceCreated The number of days since the entity was created. The value is zero if
the entity was created today.
daysSinceUpdated The number of days since the entity was last updated. The value is
zero if the entity was updated today.
link The name of the database link where the entity is defined. This can be
used to search entities in other, linked, databases.
local The value YES if the entity is defined within the database itself;
otherwise, the value NO. Tables or Insights defined in a schema are
examples of local entities. Objects in cloud storage, such as CSV or
Parquet files, are examples of entities that are not local.
name The name of the entity.
namespace The namespace of the entity as defined by
ALL_LINEAGE_NAMESPACES.
oracleMaintained The value YES, if the entity is created and maintained by Oracle;
otherwise, the value NO. The ALL_TABLES view is an example of an
Oracle maintained entity.
owner The owner of the entity.
parent The full entity path of the parent entity, if it exists.
parentName The name of the parent entity, if it exists.
parentPath The full entity path of the parent entity, if it exists.
parentType The type of the parent, if it exists.
path The full path of the entity.
rootName The name of the outermost containing entity. If the entity has no parent,
then the rootName is equal to the entity name. If the entity does have a
parent, then the rootName is defined, recursively, as the rootName of
the parent entity.
rootNamespace The namespace of the outermost containing entity. If the entity has no
parent, then the rootNamespace is equal to the entity namespace. If
the entity does have a parent, then the rootNamespace is defined,
recursively, as the rootNamespace of the parent entity.
type The entity type, as defined by ALL_LINEAGE_ENTITY_TYPES.
updated The timestamp when the entity was last updated.
Some searchable properties are specific to certain entity types. The following table
shows those entity-type-specific properties. When searching for entities with these
properties, it will speed up your search to specify the entity type. For example, instead
of just searching on numRows, specify the entity type that has the numRows property:
TABLE AND numRows > 10
Note:
You can find the list of entity-type-specific properties by running the following query
on the SQL page:
select *
from (
select
entity_type,
JSON_QUERY(annotation, '$.searchProperties') properties
from all_lineage_entity_types)
where properties is not null;
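The row for the TABLE entity type, for example, can be expected to include the numRows
property used in the example above.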
Property search terms compare a property with a value by using one of the following
operators.
Operator Meaning
= Equal to
!= Not equal to
> Greater than
>= Greater than or equal to
< Less than
<= Less than or equal to
: Equal to
For example, type:TABLE is the same as type=TABLE.
~= Represents REGEXP_LIKE
For example, the following two search items are equivalent:
• sales
• name~=sales
The value of the property search term is a string. As with the simple search term, this
string can be unquoted, single-quoted, or double-quoted. Unquoted strings are
converted to uppercase and quoted strings are left as they are.
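For example, owner:sh and owner:"SH" both match the SH schema, because the unquoted
value sh is converted to uppercase, while owner:"sh" matches only an owner whose name is
stored in lowercase.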
• #caption/"ITALIAN"
If the name of the specified language contains a space (for example, "CANADIAN FRENCH"),
you must either enclose the name in quotes or replace the space with an underscore
character (or both). The following are equivalent:
• #caption/Canadian_French
• #caption/"CANADIAN FRENCH"
• #caption/"Canadian_French"
If you use a classification name on its own as a search term, then the search returns entities
that have any non-null value for that classification. If you've defined a classification called
DEPLOYED, for example, then you can see all entities with the classification by using a simple
term:
• #deployed
As with other search terms, you can negate it using NOT or - (hyphen). To exclude all entities
with the DEPLOYED classification, you can use either of the following searches:
• NOT #DEPLOYED
• -#deployed
You can also use classifications with operator/value pairs, with the same semantics used by
property search items; for example:
• #caption~=sales
• #description/Spanish=Ventas
Preview
Preview displays the data of the entity. For a table, the Preview displays the columns of the
table and the data in those columns. You can sort the data in the column into ascending or
descending order by clicking the up or down arrow to the right of the column name.
Describe
For an analytic view, the Describe tab has information about the entity, and displays
the hierarchies, levels, level depth, dimension tables, level columns, and number of
distinct values for a level.
Lineage
Lineage displays all known information about the upstream dependencies of the entity,
and therefore how the entity was created and how it is linked to other entities.
For example, for a table that you created in your database, the lineage is just the table.
For a table that you created by loading a CSV file from cloud storage, the lineage
includes the ingest directive for the data load and the CSV file that is the source of the
data.
Pointing to the name of an item in the lineage displays the table name, the application
that created it, the type of entity, the path to it, and the schema it is in.
Arrows point from an entity to the entity that it derives from. For example, for a table
created in a data load job, an arrow points from the table to the ingest job and an
arrow points from the ingest job to the CSV file. If you point to an arrow, then a Links
Information box appears that shows information about the relationship between the
two entities.
To view more details about an item, click the Actions icon for the item, then click
Expand. For a table, the columns of the table are displayed. Pointing to the name of
the table or of a column displays the name, application, type, path, and schema of the
table or column. To collapse the display, click the Actions icon, then click Collapse.
You can increase or decrease the size of the displayed objects by using the + (plus)
and - (minus) keys. You can reposition the objects by grabbing a blank spot in the
display and dragging vertically or horizontally.
The lineage for some entities, such as analytic views, is more complex. An analytic
view entity deployed for a business model has links to columns in a fact table and to
hierarchies. The fact table has links to attribute dimensions and, for a data load job, to
an ingest directive for the job. The ingest directive has a link to the source file. The
attribute dimensions have links to tables for the dimensions. Those tables have links to
ingest directives that have links to source files.
Impact
Impact shows all known information about the downstream use of an entity, and
therefore how a change in the definition of an entity may affect other entities that
depend on it. For example, if a table is used in an analytic view, a change to one of the
column definitions in the table may affect the mapping from that column to the analytic
view.
Classifications
For an analytic view and its attribute dimensions, hierarchies, and measures, the
Classifications tab displays classifications and their values. Classifications are
metadata that applications can use to present information about analytic views. When
creating a business model, you may specify the values for the Caption and Description
classifications.
Optimize
For an analytic view, the Optimize tab has information about caches created for the analytic
view. A cache may exist if the advanced option Enable Autonomous Aggregate Cache was
selected for the business model for which the analytic view is deployed.
Statistics
Statistics display information about the entity. For example, the statistics for a table include
the size of the table and the numbers of rows and columns. They also include the names of
the columns, their data types, the number of distinct values and null values, the maximum
and minimum values, and other information.
The data is represented in the form of a histogram, a column statistic that provides more
detailed information about the data distribution in a table's columns.
The histograms in the statistics pane can be of the following types:
• Frequency: In a frequency histogram, each distinct column value corresponds to a single
bucket of the histogram. Since each value has its own dedicated bucket, some buckets
may have many values, whereas others have few.
• Top-frequency: A top-frequency histogram is a variation on a frequency histogram that
ignores non-popular values that are statistically insignificant.
• Height-Balanced: In this histogram, column values are divided into buckets so that each
bucket contains approximately the same number of rows.
• Hybrid: A hybrid histogram combines characteristics of both height-based histograms and
frequency histograms. This approach enables the optimizer to obtain better selectivity
estimates in some situations.
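To see which histogram type the database created for each column of a table, you can
query the USER_TAB_COL_STATISTICS view from the SQL page; the table name SALES
below is only an illustrative example:
select column_name, histogram
from user_tab_col_statistics
where table_name = 'SALES';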
Refer to the Gathering Statistics for Tables section for more details.
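For reference, statistics (including histograms) are typically gathered with the DBMS_STATS
package; a minimal sketch, again assuming a table named SALES in your own schema:
begin
  -- gather optimizer statistics for the SALES table in the current schema
  dbms_stats.gather_table_stats(
    ownname => user,
    tabname => 'SALES');
end;
/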
Job Report
Job Report displays a report of the total rows loaded and failed for a specific table.
You can view the name of the table, the time the table was loaded, and the time taken to
process the load.
Data Definition
Data Definition displays the Oracle Autonomous Database DDL that created the entity.
Click the Actions icon next to a table entity to perform the following operations:
• Select View Details to view details about the table.
• Select Gather Statistics to display new statistics after you modify the table’s structure.
Refer to Gathering Statistics for Tables for more detailed information.
• Select Register to Cloud Link to register the table for remote access by a selected
audience that you define. See the Registering Cloud Links to Access Data section for
more information.
• Select Add to Share to include an entity in one or more shares. Click any of the
available shares in the list that is displayed. The list displays the name of each share.
• Select Edit to edit the properties of the table. Refer to Editing Table for more information.
About Insights
You can generate insights for a table or for the analytic view deployed for data
analysis.
The insights that Data Insights generates for the analytic view of a business model can
be more useful than those for a table because of the additional metadata that an
analytic view provides.
Insights highlight data points as potentially anomalous when the actual value for a
measure, filtered on pairs of analytic view hierarchy values or table column values, is
considerably higher or lower than the expected value calculated across all hierarchy or
column values. Insights highlight unexpected patterns, which you may want to investigate.
Insights are automatically generated by various analytic functions built into the
database. The results of the insight analysis appear as a series of bar charts in the
Data Insights dashboard.
Data Insights uses the following steps to generate insights:
1. Finds the values of a measure, for example Sales, across all of the distinct pairs of
the hierarchy or column values for the measure. If Sales has the hierarchies or
columns Marital Status, Age Band, Income Level, and Gender, then the pairs
would be the values of each distinct value of each hierarchy or column paired to
each distinct value of each of the other hierarchies or columns. For example, if the
values of Marital Status are Married and Single, and the values of Age Band are A,
B, and C, then the pairs would be Married and A, Married and B, Married and C,
Single and A, Single and B, and Single and C. Each distinct value of Marital Status
would also be paired with each distinct value of Income Level and Gender, and so on.
2. Estimates an expected value for the measure for each hierarchy or column pair.
3. Calculates the actual value for the measure for each hierarchy or column pair, for
example Marital Status = S, Age Band = C, and then the difference between the actual
value and the expected value.
4. Scores all of the differences and selects the largest variations between the actual and
expected values to highlight as potential insights.
The resulting insights highlight cases where the measure value is significantly larger or
smaller for a given hierarchy or column value pair than expected, for example much higher
Sales where Marital Status = S and Age Band = C.
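The following SQL sketches the idea behind steps 1 through 3. It assumes a hypothetical
SALES table with MARITAL_STATUS and AGE_BAND columns and an AMOUNT_SOLD measure,
and uses a simple overall average as the expected value; Data Insights itself uses more
sophisticated estimates:
select marital_status, age_band, actual_value, expected_value,
       actual_value - expected_value as difference
from (
  -- actual value of the measure for each hierarchy or column pair
  select marital_status,
         age_band,
         sum(amount_sold) as actual_value,
         -- naive expected value: the average across all pairs
         avg(sum(amount_sold)) over () as expected_value
  from sales
  group by marital_status, age_band)
order by abs(actual_value - expected_value) desc;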
Insights for analytic views tend to use the higher levels of a hierarchy because the differences
between the estimated and actual values are generally larger than they are for lower level
attributes. For example, the difference in dollars between the estimated and actual sales for
the entire USA are generally larger than the difference between the estimated and actual
sales for a town with a population under 1000. The difference is calculated in absolute values,
not percentages.
Insights for tables categorize columns as dimension columns or measure columns based on
their data types and cardinality. A VARCHAR2 column is always categorized as a dimension,
but a NUMBER column may be either a dimension or a measure. For example, a NUMBER
column for YEAR values that has only 10 distinct values in a table with 1 million rows is
assumed to be a dimension.
Generate Insights
To generate insights for a table or business model, do the following:
1. In the Schema field, select a schema.
2. In the Analytic View/Table field, select an analytic view or a table.
3. In the Column field, select a column that contains data about which you want to gain
insights.
4. Click Search.
A confirmation notice announces that the request for insights has been successfully
submitted. Dismiss the notice by clicking the Close (X) icon in the notice.
A progress bar indicates that the search is in progress and shows when it has completed.
The insights appear in the Data Insights dashboard as a series of bar charts.
To refresh the display of the insights, click Refresh. To have refreshes occur automatically,
click Enable Auto Refresh.
Click Recent Searches to view the list of previous insights searches.
Select View Errors to see a log of any errors that occurred during insight generation. The
results appear in a new browser tab.
Manage the Service
• Lifecycle Operations
You can start, stop, restart, rename, and terminate an Autonomous Database instance.
• Update Compute and storage limits
You can increase the storage or CPU cores of an Autonomous Database instance. You can
also enable auto scaling.
• Update Billing Model, License Type, or Update Instance to a Paid Instance
Describes how to view and update your license type and Oracle Database Edition for
Autonomous Database. Also shows how to update the billing model to the ECPU
compute model and shows how to update your instance from free (Always Free) to a paid
instance.
• Cloning and Moving an Autonomous Database
• Use Refreshable Clones with Autonomous Database
Autonomous Database provides cloning where you can choose to create a full clone of
the active instance, create a metadata clone, or create a refreshable clone. With a
refreshable clone the system creates a clone that can be easily updated with changes
from the source database.
• Manage Autonomous Database Built-in Tools
Autonomous Database includes built-in tools that you can enable and disable when you
provision or clone a database, or at any time for an existing database.
• Use Autonomous Database Events
• Manage Time Zone File Updates on Autonomous Database
• Monitor Autonomous Database Availability
Shows you the steps to view availability information for an Autonomous Database
instance.
• Monitor Regional Availability of Autonomous Databases
The regional availability is represented as a list of daily uptime percentages of
Autonomous Databases across a set of services in each available region.
Lifecycle Operations
You can start, stop, restart, rename, and terminate an Autonomous Database instance.
• Provision Autonomous Database
Follow these steps to provision a new Autonomous Database instance using the Oracle
Cloud Infrastructure Console.
• Start Autonomous Database
Describes the steps to start an Autonomous Database instance.
• Stop Autonomous Database
Describes the steps to stop an Autonomous Database instance.
• Restart Autonomous Database
Describes the steps to restart an Autonomous Database instance.
• Rename Autonomous Database
Shows you the steps to change the database name for an Autonomous Database
instance.
• Update the Display Name for an Autonomous Database Instance
Shows you the steps to change the display name for an Autonomous Database instance.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• Choose your region. See Switching Regions for information on switching regions
and working in multiple regions.
• Choose your Compartment. See Compartments for information on using and
managing compartments.
On the Autonomous Databases page, perform the following steps:
1. Click Create Autonomous Database.
2. Provide basic information for the Autonomous Database.
• Compartment. See Compartments for information on using and managing
compartments.
• Display name: Specify a user-friendly description or other information that
helps you easily identify the resource. The display name does not have to be
unique.
The default display name is a generated 16-character string that matches the
default database name.
• Database name: Specify the database name; it must consist of letters and
numbers only. The maximum length is 30 characters. The same database
name cannot be used for multiple Autonomous Databases in the same
tenancy in the same region.
The default database name is a generated 16-character string that matches
the default display name.
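For example, SALESDW1 is a valid database name, but SALES-DW is not, because it
contains a hyphen.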
3. Choose a workload type. Select the workload type for your database from the
choices:
• Data Warehouse
• Transaction Processing
• JSON
• APEX
4. Choose a deployment type.
• Serverless
Run Autonomous Database on serverless architecture.
• Dedicated Infrastructure
Run Autonomous Database on dedicated Exadata infrastructure.
Select Serverless to create an Autonomous Database Serverless instance.
See Create an Autonomous Database on Dedicated Exadata Infrastructure for steps to
create your instance on dedicated Exadata infrastructure.
5. Configure the database (ECPU compute model)
• Always Free: Select to show Always Free options.
You can only create a free instance in the tenancy's Home region.
• Choose database version: Select the database version. The available database
version is 19c.
With Always Free selected, the available database versions are: 19c and 21c.
• ECPU count: Specify the number of CPUs for your database. The minimum value is
2.
Your license type determines the ECPU count maximum. For example, if your
license type is Bring your own license (BYOL) with Oracle Database Standard Edition
(SE), the ECPU count maximum is 32.
• Compute auto scaling: By default compute auto scaling is enabled to allow the
system to automatically use up to three times more CPU and IO resources to meet
workload demand. If you do not want to use compute auto scaling then deselect this
option.
See Use Auto Scaling for more information.
• Storage: Specify the storage you wish to make available to your database.
Depending on your workload type you have these options:
– Data Warehouse: Specify your storage in Terabytes (TB).
– Transaction Processing: Specify your storage in Gigabytes (GB) or Terabytes
(TB). Enter the size in the Storage field. Select GB or TB for the Storage unit
size.
• By default, the IO capacity of your database depends on the number of ECPUs you
provision. When you provision 384 TB of storage, your database is entitled to the full
IO capacity of the Exadata infrastructure, independent of the number of ECPUs you
provision.
• Storage auto scaling: By default storage auto scaling is disabled. Select if you want
to enable storage auto scaling to allow the system to automatically expand to use up
to three times more storage.
See Use Auto Scaling for more information.
• Show advanced options: Click to show the compute model options or if you
want to create or join an elastic pool:
– Enable elastic pool:
See Create or Join a Resource Pool While Provisioning or Cloning an
Instance for more information.
– Compute model: Shows the selected compute model.
Click Change compute model to change the compute model. After you
select a different compute model, click Save.
* ECPU
Use the ECPU compute model for Autonomous Database. ECPUs are
based on the number of cores elastically allocated from a pool of
compute and storage servers.
* OCPU
Use the legacy OCPU compute model if your tenancy is using the
OCPU model and you want to continue using OCPUs. The OCPU
compute model is based on the physical core of a processor with hyper-threading
enabled.
Note:
OCPU is a legacy billing metric and has been retired for
Autonomous Data Warehouse (Data Warehouse workload
type) and Autonomous Transaction Processing
(Transaction Processing workload type). Oracle
recommends using ECPUs for all new and existing
Autonomous Database deployments. See Oracle Support
Document 2998742.1 for more information.
Note:
After you provision your Autonomous Database you can change the network
access option you select for the instance.
Click Add contact and in the Contact email field, enter a valid email address. To
enter multiple Contact email addresses, repeat the process to add up to 10
customer contact emails.
See View and Manage Customer Contacts for Operational Issues and
Announcements for more information.
11. (Optional) Click Show advanced options to select advanced options.
• Encryption key
Encrypt using an Oracle-managed key: By default Autonomous Database
uses Oracle-managed encryption keys. Using Oracle-managed keys,
Autonomous Database creates and manages the encryption keys that protect
your data and Oracle handles rotation of the TDE master key.
Encrypt using a customer-managed key in this tenancy: If you select this option,
a master encryption key from an Oracle Cloud Infrastructure Vault in the same
tenancy is used to generate the TDE master key on Autonomous Database.
Encrypt using a customer-managed key located in a remote tenancy: If you
select this option, a master encryption key in the Oracle Cloud Infrastructure
Vault located in a remote tenancy is used to generate the TDE master key on
Autonomous Database.
See Use Customer-Managed Encryption Keys on Autonomous Database for
more information.
• Maintenance
Patch level: By default the patch level is Regular. Select Early to configure
the instance with the early patch level. Note: you cannot change the patch
level after you provision an instance.
See Set the Patch Level for more information.
• Management
Choose a character set and a national character set for your database.
See Choose a Character Set for Autonomous Database for more information.
• Tools
If you want to view or customize the tools configuration, select the tools tab.
See Configure Autonomous Database Built-in Tools when you Provision or
Clone an Instance for more information.
• Tags
If you want to use Tags, enter the Tag key and Tag value. Tagging is a
metadata system that allows you to organize and track resources within your
tenancy. Tags are composed of keys and values which can be attached to
resources.
See Tagging Overview for more information.
12. Optionally, you can save the resource configuration as a stack by clicking Save as
Stack. You can then use the stack to create the resource through the Resource
Manager service.
Enter the following details on the Save as Stack dialog, and click Save.
• Name: Optionally, enter a name for the stack.
On the Oracle Cloud Infrastructure console the Lifecycle State shows Provisioning until the
new database is available.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
1. On the Details page, from the More actions drop-down list, select Start.
Start is only shown for a stopped instance.
2. Click Start to confirm.
Notes for starting a stopped Autonomous Database instance:
• When an Autonomous Database instance is started, Autonomous Database CPU billing
is initiated, billed by the second with a minimum usage period of one minute.
• When an Autonomous Database remains stopped for over 7 days it may take a few
minutes or longer to start up, depending on the size of the database. To avoid this extra
startup time, Oracle recommends starting your Autonomous Database instance once
every 7 days.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
1. On the Details page, from the More actions drop-down list, select Stop.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
1. On the Details page, from the More actions drop-down list, select Restart.
2. In the confirmation dialog, select Restart to confirm.
When concurrent operations such as scaling are active, the confirmation dialog also
asks you to confirm pausing or canceling the concurrent operation. See Concurrent
Operations on Autonomous Database for more information.
The Autonomous Database instance state changes to "Restarting". After the restart
completes successfully, the instance state is "Available".
Note:
When an Autonomous Database instance is restarted, Autonomous
Database CPU billing is initiated, billed by the second with a minimum usage
period of one minute.
• A single tenancy cannot contain two Autonomous Databases with the same name in the
same region.
• The database rename operation changes the connection strings required to connect to
the database. Thus, after you rename a database you must download a new wallet for
any existing instance wallets that you use to connect to the database (and any
applications also must be updated to use the new connection string and the new wallet to
connect to the database).
If you are using a regional wallet, you can continue to use the existing wallet to connect
to any databases that were not renamed. However, if you want to connect to a renamed
database, you must download a new regional wallet.
See Download Client Credentials (Wallets) for more information.
Note:
The rename operation terminates all connections to the database. After the rename
operation completes, you can reconnect to the database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To rename an Autonomous Database instance, do the following:
1. On the Details page, from the More actions drop-down list, select Rename Database.
This shows the Rename Database dialog.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To update the display name on an Autonomous Database instance:
1. On the Details page, from the More actions drop-down list, select Update display
name.
Note:
Terminating Autonomous Database permanently deletes the instance and
removes all automatic backups. You cannot recover a terminated database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
1. On the Details page, from the More actions drop-down list, select Terminate.
2. On the Terminate Autonomous Database page enter the database name to
confirm that you want to terminate the database.
When concurrent operations such as scaling are active, the confirmation dialog also
asks you to confirm pausing or canceling the concurrent operation. See Concurrent
Operations on Autonomous Database for more information.
3. Click Terminate Autonomous Database.
Note:
There are limitations for terminating a database when Autonomous Data
Guard is enabled with a cross-region standby database. See Terminate a
Cross-Region Standby Database for more information.
Note:
The READ_ONLY session parameter applies when the database is in Read/Write
mode. If the database is in Read-Only mode, setting the session parameter value to
TRUE or to FALSE does not change the operation mode for the session.
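As an illustration, assuming the database itself is in Read/Write mode, you can place your
own session in Read-Only mode with:
-- applies only to this session; per the note above, it does not change the
-- session's operation mode if the database itself is in Read-Only mode
alter session set read_only = true;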
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To set the Auto Start/Stop Schedule, do the following:
1. On the Details page, from the More actions drop-down list, select Auto Start/Stop
Schedule.
2. For each day you want to schedule, select the start time check box and stop time check
box and select a start time and a stop time.
The drop-down list allows you to schedule start times and stop times hourly, on the
hour, specified in Coordinated Universal Time (UTC).
Selecting both a start time and a stop time is not required. You can select just a
start time or just a stop time.
3. Click Apply to apply the schedule.
4. In the confirmation dialog, click Apply.
While the system applies the Auto Start/Stop Schedule, the lifecycle state changes
to Updating. When the operation completes the lifecycle state shows Available.
The Auto Start/Stop Schedule field under General Information on the
Autonomous Database Details page lists the days with an Auto Start/Stop
Schedule enabled, and includes an Edit link.
To remove start and stop times for a day from an Auto Start/Stop Schedule, edit the
schedule and deselect the schedule for that day. To remove all scheduled start and
stop times and disable Auto Start/Stop Schedule, deselect the start and stop check
boxes for all days in the schedule.
Notes for Autonomous Database Auto Start/Stop Schedule:
• A scheduled stop works the same way as a manual stop; when the scheduled stop
occurs, any running database sessions are stopped.
• You can perform a manual start for a database, if necessary, after a database has
stopped due to a scheduled stop.
See Start Autonomous Database for more information.
• You can perform a manual stop for a database, if necessary, after a database has
started due to a scheduled start.
See Stop Autonomous Database for more information.
• The Lifecycle State must be Available to view or to edit the Auto Start/Stop
Schedule.
• Always Free Autonomous Database does not support Auto Start/Stop Scheduling.
• A database that has been stopped by a schedule remains stopped until the next scheduled
start time or until you start the database. Likewise, a database that has been started by a
schedule remains running until the next scheduled stop time or until you stop the database.
• When an Autonomous Database remains stopped for over 7 days it may take a few
minutes or longer to start up, depending on the size of the database. To avoid this extra
startup time, Oracle recommends starting your Autonomous Database instance once
every 7 days.
The following table describes how lifecycle operations interact with an ongoing scaling
operation.
Action Description
Stop Stop pauses scaling. The scaling restarts when the database starts.
Restart Restart pauses scaling. The scaling restarts when the database starts.
Restore Restore pauses scaling. The scaling restarts when the restore
completes if resources allow.
Switchover Switchover pauses scaling. The scaling restarts when the switchover
completes. The scaling also occurs on the standby database after the
switchover.
Failover Failover pauses scaling. The scaling restarts when the failover
completes. The scaling also occurs on the standby database if it is
available after the failover.
Terminate Scaling stops and the database is terminated.
Change Database Mode (Read-Write or Read-Only) A mode change pauses scaling. The
scaling restarts when the database starts after the mode change completes.
The following table describes how lifecycle operations interact with an ongoing long-term
backup.
Action Description
Stop Stop cancels an ongoing long-term backup. The long-term backup
does not restart when the database starts.
Restart Restart cancels an ongoing long-term backup. The long-term
backup does not restart when the database starts.
Restore Restore cancels an ongoing long-term backup.
Switchover Switchover cancels an ongoing long-term backup. The long-term
backup does not restart after the switchover completes.
Failover Failover cancels an ongoing long-term backup. The long-term
backup does not restart after the failover completes.
Terminate Terminate cancels an ongoing long-term backup.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
1. On the Details page, click Manage resource allocation.
2. In the Manage resource allocation area, add resources for your scale request.
• Enter a value or click the up arrow to select a value for ECPU count (OCPU count if
your database uses OCPUs). The default is no change.
• Storage: Specify the storage you wish to make available to your database.
Depending on your workload type and your compute model, you have these options:
– Data Warehouse: Specify your storage in Terabytes (TB).
– JSON: Specify your storage in Terabytes (TB).
– Transaction Processing: Specify your storage in Gigabytes (GB) or Terabytes
(TB). Enter the size in the Storage field. Select GB or TB for the Storage unit
size. GB units are only available when the Workload type is Transaction
Processing and the Compute model is ECPU.
The default is no change.
Note:
Clicking Shrink initiates the Shrink storage operation. See Shrink Storage for
more information.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
1. On the Details page, click Manage resource allocation.
2. In the Manage resource allocation area, select the change in resources for your scale
request:
• Enter a value or click the down arrow to select a value for the ECPU count (OCPU
count if your database uses OCPUs). The default is no change.
• Storage: Specify the storage you wish to make available to your database.
Depending on your workload type and your compute model, you have these options:
– Data Warehouse: Specify your storage in Terabytes (TB).
– JSON: Specify your storage in Terabytes (TB).
– Transaction Processing: Specify your storage in Gigabytes (GB) or Terabytes
(TB). Enter the size in the Storage field. Select GB or TB for the Storage unit
size. GB units are only available when the Workload type is Transaction
Processing and the Compute model is ECPU.
The default is no change.
Note:
Clicking Shrink initiates the Shrink storage operation. See Shrink
Storage for more information.
When you click Apply with a resource change, the Lifecycle State changes to Scaling in
Progress. After the Lifecycle State changes to Available, the changes apply immediately.
Note the following restrictions for scaling down storage:
• Scaling down storage is not allowed if the Autonomous Database instance contains either
of the following:
– Advanced Queuing tables
– Tables with LONG columns
• If you have columns with the ROWID data type, the ROWIDs that these column values point
to may change during the scale down storage operation.
• Tables that contain the following may be moved offline during a scale down operation.
DML operations on these tables may be blocked for the duration of the move and the
table indexes for these tables may become unusable until the scale down operation
completes:
– Tables with bitmap join indexes
– Nested tables
– Object tables
– Partitioned tables with domain indexes
• In the OCPU compute model, when the OCPU count is 128, the database can use up
to 128 x 3 OCPUs (384 OCPUs) when auto scaling is enabled.
To see the average number of OCPUs used during an hour you can use the
"Number of OCPUs allocated" graph on the Overview tab on the Database
Dashboard card in Database Actions. See Database Dashboard Overview for
more information.
Enabling compute auto scaling does not change the concurrency and parallelism
settings for the predefined services. See Manage Concurrency and Priorities on
Autonomous Database for more information.
Note:
Your license type determines the ECPU count maximum. For example, if
your license type is Bring your own license (BYOL) with Oracle Database
Standard Edition (SE), the maximum allowed value for ECPU count is 32.
With compute auto scaling enabled you can use up to ECPU count x 3
ECPUs, but this license restricts the number of ECPUs you can use to a
maximum of 32 ECPUs, with or without compute auto scaling enabled.
When compute auto scaling is enabled, your database may use, and you may be billed
for, additional CPU consumption as needed by your workload, up to three times (3x)
the number of base CPUs (as shown in the ECPU count or OCPU count field on the
Oracle Cloud Infrastructure Console).
• Your billed CPU usage per hour while your database is running is based on the
base number of CPUs you selected for your database, plus any additional CPU
usage due to auto-scaling.
• A stopped Autonomous Database instance has zero CPU usage.
• CPU usage is measured each second, in units of whole CPUs, and averaged
across an hour. If your database is running for less than an hour or auto-scales for
only part of an hour, it is billed per second for the average CPU consumption
during that hour. The minimum billed usage period is one minute.
For example, if your database has ECPU count 4 with Compute auto scaling
enabled:
• Assume in hour one your database is Available for the entire hour and CPU
utilization is below 4 ECPUs. Your database will be billed for the usage of 4
ECPUs.
• Assume in hour two your database is Available for the entire hour and CPU
utilization is below 4 ECPUs for 30 minutes (50% of the hour) and auto scales to 8
ECPUs for 30 minutes (the other 50% of the hour). The usage for this period, for
billing, is 6 ECPUs (based on the average per-second ECPU usage for hour two).
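The hour-two value is straightforward arithmetic:
(4 ECPUs x 30 minutes + 8 ECPUs x 30 minutes) / 60 minutes = 6 ECPUs billed for that hour.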
See Add CPU or Storage Resources or Enable Auto Scaling for the steps to enable
compute auto scaling.
Note:
Reducing temp tablespace requires a database restart.
If you disable Storage auto scaling and the used storage is greater than the reserved
base storage, as specified by the storage shown in the Storage field on the Oracle
Cloud Infrastructure Console, Autonomous Database shows a warning on the disable
storage auto scaling confirmation dialog. The warning lets you know that the reserved
base storage value will be increased to the nearest TB greater than the actual storage
usage, and shows the new reserved base storage value.
To see the Autonomous Database instance storage usage, you can view the "Storage
allocated" and "Storage used" graphs on the Overview tab by clicking the Database
Dashboard card in Database Actions. See Database Dashboard Overview for more
information.
See Add CPU or Storage Resources or Enable Auto Scaling for the steps to enable
storage auto scaling.
Shrink Storage
When the storage used in the database is significantly lower than the allocated
storage, the shrink operation reduces the allocated storage.
To understand storage allocation and the shrink operation, note the following:
• Reserved base storage: the base amount of storage you select for the database
when you provision or scale the database, excluding any auto-scaled value. The
reserved base storage is shown in the Storage field on the Oracle Cloud
Infrastructure Console.
• Allocated storage: the amount of storage physically reserved for all database
tablespaces (excluding sample schema tablespaces). This number also includes
the free space in these tablespaces.
• Used storage: the amount of storage actually used in all tablespaces (excluding
the sample schema tablespaces). The used storage excludes the free space in
these tablespaces. Used storage is the storage actually used by database objects,
such as tables and indexes, including internally used temp space.
• Maximum storage: the maximum storage reserved. When storage auto scaling
is disabled, the maximum storage equals the reserved base storage. When
storage auto scaling is enabled, the maximum storage is three times the reserved
base storage (maximum = reserved base x 3).
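For example, with a reserved base storage of 1 TB and storage auto scaling enabled,
the maximum storage is 1 TB x 3 = 3 TB.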
Note:
The Shrink operation is not available with Always Free Autonomous
Database.
To shrink storage:
1. On the Details page, click Manage resource allocation.
2. In the Manage resource allocation area, select Shrink.
Note:
The Shrink operation is a long running operation.
View and Update Your License and Oracle Database Edition on Autonomous
Database (ECPU Compute Model)
Describes how to view and update your license type and Oracle Database Edition for
Autonomous Database.
Note:
See View and Update Your License and Oracle Database Edition on Autonomous
Database (OCPU Compute Model) if you are using the OCPU compute model.
The License type field on the Autonomous Database Information tab shows your license and
Oracle Database Edition. For example:
License type: Bring Your Own License (BYOL), Enterprise Edition
Perform the following prerequisite steps as necessary:
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To change your license type or Oracle Database Edition:
1. On the Details page, from the More actions drop-down list, select Update license and
Oracle Database edition.
2. On the Update license and Oracle Database edition page select a license and Oracle
Database Edition:
• Bring your own license (BYOL): Select if your organization already owns Oracle
database software licenses. Bring your existing database software licenses to the
database cloud service.
• Choose an Oracle Database Edition:
When you select Bring your own license (BYOL), you also choose an Oracle
Database Edition. The edition you select is based on the license you bring to
Autonomous Database and changes the maximum value that you can select for the
ECPU count. The choices are:
Oracle Database Enterprise Edition (EE): For this license type the maximum
allowed value for ECPU count is 512, however you may contact your Oracle account
team to request more ECPUs. With compute auto scaling enabled you can use up to
ECPU count x 3 ECPUs. For example, if you set the ECPU count to 512, you can
use up to 1,536 ECPUs.
Oracle Database Standard Edition (SE): For this license type the maximum allowed
value for ECPU count is 32. With compute auto scaling enabled you can use up to
ECPU count x 3 ECPUs. This license restricts the number of ECPUs you can use to
a maximum of 32 ECPUs, with or without compute auto scaling enabled.
To dismiss this message, click Update license and Oracle Database edition and set
a license type and Oracle Database Edition.
For information on Bring your own license (BYOL) and other licensing options for
Oracle Cloud Infrastructure, see the following:
• Lower your TCO for Oracle Database Standard Edition with the Bring Your Own
License (BYOL) program
• For BYOL policies, see Oracle Cloud Services contracts. BYOL policies are
described in the Oracle PaaS and IaaS Universal Credits Service Descriptions
document under Cloud Service Descriptions.
• Frequently asked questions: Oracle Bring Your Own License (BYOL)
• Cloud pricing
Notes for performing the Update license and Oracle Database edition operation:
• If you try to switch to License type: Bring your own license (BYOL) Standard
Edition when the base ECPU count is above 32, you see the following message:
You cannot select Oracle Database Standard Edition unless your ECPU
count is a total of thirty-two (32) or less ECPUs. Use the Manage Scaling
option to adjust your ECPU count before changing to Standard Edition.
In this case, lower your ECPU count before you change the license type. See
Remove CPU or Storage Resources or Disable Auto Scaling for more information.
• If you switch to License type: Bring your own license (BYOL) Standard Edition
when the base ECPU count is 32 and compute auto scaling is enabled, you see
the following message:
Compute auto scaling will be disabled with a base ECPU count of 32,
under an Oracle Database Standard Edition license.
In this case, Compute auto scaling is disabled and the ECPU count is set to 32.
• If you switch to License type: Bring your own license (BYOL) Standard Edition
when the base ECPU count is below 32 and above 8, with compute auto scaling
enabled, you see the following message:
Your compute auto scaling maximum will be set to 32 ECPUs, under an
Oracle Database Standard Edition license.
In this case, the maximum number of ECPUs with compute auto scaling is 32.
View and Update Your License and Oracle Database Edition on Autonomous
Database (OCPU Compute Model)
Describes how to view and update your license type and Oracle Database Edition for
Autonomous Database.
Note:
See View and Update Your License and Oracle Database Edition on Autonomous
Database (ECPU Compute Model) if you are using the ECPU compute model.
The License Type field on the Autonomous Database Information tab shows your license
and Oracle Database Edition. For example:
License Type: Bring Your Own License (BYOL), Enterprise Edition
Perform the following prerequisite steps as necessary:
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To change your license type or Oracle Database Edition:
1. On the Details page, from the More actions drop-down list, select Update license and
Oracle Database edition.
2. On the Update license and Oracle Database edition page select a license and Oracle
Database Edition:
• Bring Your Own License (BYOL): Select if your organization already owns Oracle
database software licenses. Bring your existing database software licenses to the
database cloud service.
• Choose an Oracle Database Edition:
When you select Bring your own license (BYOL), you also choose an Oracle
Database Edition. The choices are:
Oracle Database Enterprise Edition (EE): For this license type the maximum
allowed value for OCPU count is 128, however you may contact your Oracle account
team to request more than 128 OCPUs. With auto scaling enabled you can use up to
OCPU count x 3 OCPUs. For example, if you set the OCPU count to 128, you can
use up to 384 OCPUs.
Oracle Database Standard Edition (SE): For this license type the maximum allowed
value for OCPU count is 8. With auto scaling enabled you can use up to OCPU
count x 3 OCPUs. This license restricts the number of OCPUs you can use to a
maximum of 8 OCPUs, with or without auto scaling enabled.
The edition you select is based on the license you bring to Autonomous Database
and changes the maximum value that you can select for the OCPU count.
To dismiss this message, click Update license and Oracle Database edition and set
a license type and Oracle Database Edition.
For information on Bring Your Own License (BYOL) and other licensing options for
Oracle Cloud Infrastructure, see the following:
• Lower your TCO for Oracle Database Standard Edition with the Bring Your Own
License (BYOL) program
• For BYOL policies, see Oracle Cloud Services contracts. BYOL policies are
described in the Oracle PaaS and IaaS Universal Credits Service Descriptions
document under Cloud Service Descriptions.
• Frequently asked questions: Oracle Bring Your Own License (BYOL)
• Cloud pricing
Notes for performing the Update license and Oracle Database edition operation:
• If you try to switch to License Type: Bring your own license (BYOL) Standard
Edition when the base OCPU count is above 8, you see the following message:
You cannot select Oracle Database Standard Edition unless your OCPU
count is a total of eight (8) or less OCPUs. Use the Manage Scaling
option to adjust your OCPU count before changing to Standard Edition.
In this case, lower your OCPU count before you change the license type. See
Remove CPU or Storage Resources or Disable Auto Scaling for more information.
• If you switch to License Type: Bring your own license (BYOL) Standard Edition
when the base OCPU count is 8 and Compute auto scaling is enabled, you see
the following message:
Compute auto scaling will be disabled with a base OCPU count of 8,
under an Oracle Database Standard Edition license.
In this case, Compute auto scaling is disabled and the OCPU count is set to 8.
• If you switch to License Type: Bring your own license (BYOL) Standard Edition
when the base OCPU count is below 8 and above 2, with Compute auto scaling
enabled, you see the following message:
Your compute auto scaling maximum will be set to 8 OCPUs, under an
Oracle Database Standard Edition license.
In this case, the maximum number of OCPUs with Compute auto scaling
enabled is 8.
Note:
If you update an Autonomous Database instance to use the ECPU compute model,
you cannot revert to the OCPU compute model.
See Compute Models in Autonomous Database for details on the Autonomous Database
compute models.
Perform the following prerequisite steps as necessary:
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next
to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To update to the ECPU compute model:
1. On the Details page, under Resource allocation, in the OCPU count field click Update
to ECPU model.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Always Free Autonomous
Database instance from the links under the Display name column.
Note:
Promotion of Always Free to a paid Autonomous Database is supported only if the
database version for the Always Free Autonomous Database is Oracle Database
19c.
1. Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
2. From the Oracle Cloud Infrastructure left navigation menu click Oracle Database, and
then click Autonomous JSON Database.
Note: You can subscribe to the information event AJDNonJsonStorageExceeded to be informed when the 20 GB limit is exceeded. See About Information Events on Autonomous Database.
3. Choose your JSON Database from those listed in the compartment by clicking its
name in the Display name column.
4. Do one of the following:
• From the More actions drop-down list, select Change workload type.
• On the Autonomous Database information tab, under General
information, next to Workload type, click Edit.
5. Click Convert to confirm that you want to convert this database to Autonomous
Transaction Processing.
6. If you were using the refreshable clone feature with your Autonomous JSON
Database then re-create the clone after promotion to Autonomous Transaction
Processing. See Using Refreshable Clones with Autonomous Database.
See Autonomous Database Billing Summary for more information.
Note:
Depending on the cloning options you choose, cloning an Autonomous
Database instance copies your database files to a new instance. Depending
on the size of your database and where you are cloning your database,
expect the cloning operation to take significantly longer than provisioning a
new Autonomous Database instance. There is no downtime associated with
cloning and the cloning operation has no impact on applications running on
the source.
Note:
The cross tenancy cloning option is only
available using the CLI or the
Autonomous Database REST APIs. This
option is not available using the Oracle
Cloud Infrastructure Console.
Note:
The where clause is optional and provides a more fine-grained way to grant access
to a specific database.
These policies apply to cloning within a tenancy. See Prerequisites for Cross Tenancy
Cloning for cross tenancy cloning policies.
See Getting Started with Policies for more information.
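For reference, a minimal sketch of such policies follows. The group name, compartment name, and OCID are placeholders, and the exact statements your tenancy needs may differ:

Allow group DBAdmins to manage autonomous-databases in compartment MyCompartment
Allow group DBAdmins to manage autonomous-databases in compartment MyCompartment where target.id = 'ocid1.autonomousdatabase.oc1..<unique_ID>'

The second form uses the optional where clause to limit the grant to a single database.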
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• Choose your region. See Switching Regions for information on switching regions and
working in multiple regions.
• Choose your Compartment. See Compartments for information on using and managing
compartments.
Note:
The unavailable cloning options are grayed out. See Clone Autonomous
Database to Change Workload Type for more information on cross workload
cloning.
Note:
After you clone your Autonomous Database you can change the network
access option you select for the cloned instance.
• License included
The default license type is license included. With this option, when you create a
database by cloning you subscribe to new database software licenses and the
database cloud service.
• Switch to Bring your own license (BYOL)
Select this to show more license options. Select this if your organization already
owns Oracle Database software licenses and you want to bring your existing
database software licenses to the database cloud service. See Cloud pricing for
information on Bring your own license (BYOL) and other licensing options for Oracle
Cloud Infrastructure cloud service pricing.
When you select to switch to Bring your own license (BYOL), you also select an
Oracle Database edition. The Oracle Database edition you select is based on the
license you bring to Autonomous Database and changes the maximum value that
you can select for the ECPU count. The choices are:
– Oracle Database Enterprise Edition (EE): For this license type the maximum
allowed value for ECPU count is 512; however, you may contact your Oracle
account team to request more ECPUs. With compute auto scaling enabled you
can use up to ECPU count x 3 ECPUs. For example, if you set the ECPU count
to 512, you can use up to 1,536 ECPUs.
– Oracle Database Standard Edition (SE): For this license type the maximum
allowed value for ECPU count is 32. With compute auto scaling enabled you can
use up to ECPU count x 3 ECPUs; however, this license restricts the total number
of ECPUs you can use to a maximum of 32 ECPUs, with or without compute auto
scaling enabled.
See Use Auto Scaling for more information.
11. (Optional) Provide contacts for operational notifications and announcements
Click Add contact and in the Contact email field, enter a valid email address. If the
database you are cloning has a customer contact list, the list is copied. To enter multiple
Contact email addresses, repeat the process to add up to 10 customer contact emails.
See View and Manage Customer Contacts for Operational Issues and Announcements
for more information.
12. (Optional) Click Show advanced options to select advanced options.
• Encryption key
Encrypt using an Oracle-managed key: By default Autonomous Database uses
Oracle-managed encryption keys. Using Oracle-managed keys, Autonomous
Database creates and manages the encryption keys that protect your data and
Oracle handles rotation of the TDE master key.
Encrypt using a customer-managed key in this tenancy: If you select this option, a
master encryption key from an Oracle Cloud Infrastructure Vault in the same tenancy
is used to generate the TDE master key on Autonomous Database.
Encrypt using a customer-managed key located in a remote tenancy: If you select this
option, a master encryption key in the Oracle Cloud Infrastructure Vault located in a
remote tenancy is used to generate the TDE master key on Autonomous Database.
See Use Customer-Managed Encryption Keys on Autonomous Database for more
information.
• Maintenance
Patch level: By default, the patch level is the patch level of the source
database. Select Early to configure the instance with the early patch level.
When cloning a source database with Early patch level, you can only choose
the Early patch level for your clone.
See Set the Patch Level for more information.
• Management
Shows the character set and national character set for your database.
See Choose a Character Set for Autonomous Database for more information.
• Tools
If you want to view or customize the tools configuration, select the Tools tab.
See Configure Autonomous Database Built-in Tools when you Provision or
Clone an Instance for more information.
• Tags
If you want to use Tags, enter the Tag key and Tag value. Tagging is a
metadata system that allows you to organize and track resources within your
tenancy. Tags are composed of keys and values which can be attached to
resources.
See Tagging Overview for more information.
13. Click Create Autonomous Database clone.
On the Oracle Cloud Infrastructure console the State shows Provisioning... until the
new database is available.
If you created a cross-region clone, a new tab shows the newly provisioned clone's
Oracle Cloud Infrastructure Console and the region shown is the region you selected
when you created the clone.
See Notes for Cloning Autonomous Database for additional information on cloning.
See Cloning an Autonomous Database for information on using the API.
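For reference, a request of this kind through the OCI CLI might look like the following minimal sketch. The create-from-clone subcommand and flag names mirror the CreateAutonomousDatabaseCloneDetails REST attributes; treat them as assumptions to verify against your installed CLI version, and all OCIDs and names are placeholders:

oci db autonomous-database create-from-clone \
  --compartment-id ocid1.compartment.oc1..<unique_ID> \
  --source-id ocid1.autonomousdatabase.oc1..<unique_ID> \
  --clone-type FULL \
  --db-name myclone \
  --compute-model ECPU \
  --compute-count 2 \
  --data-storage-size-in-tbs 1 \
  --admin-password <password>

Use METADATA as the --clone-type value for a metadata-only clone.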
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
1. Choose your region. See Switching Regions for information on switching regions
and working in multiple regions.
2. Choose your Compartment. See Compartments for information on using and
managing compartments.
3. Select an Autonomous Database instance from the list in your compartment.
4. On the Details page, from the More actions drop-down list, select Create clone.
5. On the Create Autonomous Database clone page, choose the clone type from the
choices:
• Full clone: creates a new database with the source database’s data and metadata.
• Refreshable clone: creates a read-only full clone that can be easily refreshed with
the source database’s data.
See Use Refreshable Clones with Autonomous Database for more information.
• Metadata clone: creates a new database with the source database’s metadata
without the data.
6. In the Configure clone source area, select the Clone source option:
• Clone from database instance: This creates a clone from a running database. See
Clone an Autonomous Database Instance for details and steps with this selection.
• Clone from a backup: This selection creates a database clone using a backup.
Select this option.
7. In the Configure clone source area, select the Backup clone type:
• Point in time clone: Enter a timestamp to clone in the Enter Timestamp field.
• Select the backup from a list: Enter From and To dates to narrow the list of
backups then select a backup to use for the clone source.
• Latest backup timestamp: See Clone an Autonomous Database from Latest
Backup for information on this clone option.
Note:
The unavailable cloning options are grayed out. See Clone Autonomous
Database to Change Workload Type for more information on cross workload
cloning.
• Choose database version: Select the database version. The available database
version is 19c.
• ECPU count: Specify the number of ECPUs for your database. The minimum value for
the number of ECPUs is 2.
Your license type determines the ECPU count maximum. For example, if your
license type is Bring your own license (BYOL) with Oracle Database Standard Edition
(SE), the ECPU count maximum is 32.
• Compute auto scaling: By default compute auto scaling is enabled to allow the
system to automatically use up to three times more CPU and IO resources to meet
workload demand. If you do not want to use compute auto scaling then deselect this
option.
See Use Auto Scaling for more information.
• Storage: Specify the storage you wish to make available to your database.
Depending on your workload type you have these options:
– Data Warehouse: Specify your storage in Terabytes (TB).
For a Full Clone, the minimum storage that you can specify is the source
database's actual used space rounded to the next TB.
– Transaction Processing: Specify your storage in Gigabytes (GB) or Terabytes
(TB). Enter the size in the Storage field. Select GB or TB for the Storage unit
size.
For a Full Clone, the minimum storage that you can specify is the source
database's actual used space rounded to the next GB.
By default, the IO capacity of your database depends on the number of ECPUs you
provision. When you provision 384 TB of storage, your database is entitled to the full
IO capacity of the Exadata infrastructure, independent of the number of ECPUs you
provision.
• Storage auto scaling: By default, storage auto scaling is disabled. Select this option if
you want to enable storage auto scaling to allow the system to automatically expand to
use up to three times more storage.
See Use Auto Scaling for more information.
• Show advanced options: Click to show the compute model option:
– Enable elastic pool:
Note:
After you clone your Autonomous Database you can change the network
access option you select for the cloned instance.
• License included
The default license type is license included. With this option, when you create a
database by cloning you subscribe to new database software licenses and the
database cloud service.
• Switch to Bring your own license (BYOL)
Select this to show more license options. Select this if your organization already
owns Oracle Database software licenses and you want to bring your existing
database software licenses to the database cloud service. See Cloud pricing for
information on Bring your own license (BYOL) and other licensing options for Oracle
Cloud Infrastructure cloud service pricing.
When you select to switch to Bring your own license (BYOL), you also select an
Oracle Database edition. The Oracle Database edition you select is based on the
license you bring to Autonomous Database and changes the maximum value that
you can select for the ECPU count. The choices are:
– Oracle Database Enterprise Edition (EE): For this license type the maximum
allowed value for ECPU count is 512; however, you may contact your Oracle
account team to request more ECPUs. With compute auto scaling enabled you
can use up to ECPU count x 3 ECPUs. For example, if you set the ECPU count
to 512, you can use up to 1,536 ECPUs.
– Oracle Database Standard Edition (SE): For this license type the maximum
allowed value for ECPU count is 32. With compute auto scaling enabled you can
use up to ECPU count x 3 ECPUs; however, this license restricts the total number
of ECPUs you can use to a maximum of 32 ECPUs, with or without compute auto
scaling enabled.
See Use Auto Scaling for more information.
15. (Optional) Provide contacts for operational notifications and announcements
Click Add contact and in the Contact email field, enter a valid email address. If the
database you are cloning has a customer contact list, the list is copied. To enter multiple
Contact email addresses, repeat the process to add up to 10 customer contact emails.
See View and Manage Customer Contacts for Operational Issues and Announcements
for more information.
16. (Optional) Click Show advanced options to select advanced options.
• Encryption Key
Encrypt using an Oracle-managed key: By default Autonomous Database uses
Oracle-managed encryption keys. Using Oracle-managed keys, Autonomous
Database creates and manages the encryption keys that protect your data and
Oracle handles rotation of the TDE master key.
Encrypt using a customer-managed key in this tenancy: If you select this option, a
master encryption key from an Oracle Cloud Infrastructure Vault in the same tenancy
is used to generate the TDE master key on Autonomous Database.
Encrypt using a customer-managed key located in a remote tenancy: If you select this
option, a master encryption key in the Oracle Cloud Infrastructure Vault located in a
remote tenancy is used to generate the TDE master key on Autonomous Database.
See Use Customer-Managed Encryption Keys on Autonomous Database for more
information.
• Maintenance
Patch level: By default, the patch level is the patch level of the source database.
Select Early to configure the instance with the early patch level. When cloning a
source database with Early patch level, you can only choose the Early patch
level for your clone.
See Set the Patch Level for more information.
• Management
Shows the character set and national character set for your database.
See Choose a Character Set for Autonomous Database for more information.
• Tools
If you want to view or customize the tools configuration, select the Tools tab.
See Configure Autonomous Database Built-in Tools when you Provision or
Clone an Instance for more information.
• Tags
If you want to use Tags, enter the Tag key and Tag value. Tagging is a
metadata system that allows you to organize and track resources within your
tenancy. Tags are composed of keys and values which can be attached to
resources.
See Tagging Overview for more information.
17. Click Create Autonomous Database clone.
On the Oracle Cloud Infrastructure console the State shows Provisioning... until the
new database is available.
Notes:
See Notes for Cloning Autonomous Database for additional information on cloning.
See Cloning an Autonomous Database for information on using the API.
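For illustration, the equivalent operation through the OCI CLI might look like the following sketch, cloning from the source database's backup at a given timestamp. The create-from-backup-timestamp subcommand and flags mirror the CreateAutonomousDatabaseFromBackupTimestampDetails REST attributes and are assumptions to verify against your CLI version; all values are placeholders:

oci db autonomous-database create-from-backup-timestamp \
  --autonomous-database-id ocid1.autonomousdatabase.oc1..<unique_ID> \
  --timestamp 2024-04-01T10:00:00Z \
  --clone-type FULL \
  --compartment-id ocid1.compartment.oc1..<unique_ID> \
  --db-name backupclone \
  --admin-password <password>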
• Clone an Autonomous Database from Latest Backup
When you choose to clone from the latest backup, the latest backup is selected as the
clone source. You can choose this option if a database becomes unavailable, or
whenever you want to create a clone based on the most recent backup.
Note:
The cross tenancy cloning option is only available using the CLI or the Autonomous
Database REST APIs. This option is not available using the Oracle Cloud
Infrastructure Console.
• All clone types are supported: the cloned database can be a Full clone, a
Metadata clone, or a Refreshable clone.
• A clone can be created from a source Autonomous Database instance or from a
backup (using the latest backup, a specified backup, or by selecting a long-term
backup).
• The source Autonomous Database instance can use either the ECPU or OCPU
compute model. Depending on your workload type, you can clone from a source
that uses the OCPU compute model to a clone that uses the ECPU compute
model (this is allowed for the Data Warehouse and the Transaction Processing
workload types).
• The cloned database can be in the same region or in a different region (cross-
region).
• The cross tenancy cloning option does not support cloning with customer
managed keys on the source. See Manage Encryption Keys on Autonomous
Database for more information on customer managed keys.
b. Under Identity click Domains and select an identity domain (or create a new identity
domain).
c. Under Identity domain, click Groups.
d. To add a group, click Create group.
e. On the Create group page, enter a Name and a Description.
For example, enter the Name: DestinationGroup.
f. On the Create group page, click Create to save the group.
g. On the Group page, click Assign user to groups and select the users you want to
add to the group.
h. Click Add.
i. On the Group page, from the Group information tab, copy the OCID for use in Step
2.
2. On the source tenancy, define OCI Identity and Access Management policies for the
source Autonomous Database instance.
a. On the source tenancy, in the Oracle Cloud Infrastructure Console click Identity &
Security.
b. Under Identity, click Policies.
c. To write a policy, click Create Policy.
d. On the Create Policy page enter a Name and a Description.
e. On the Create Policy page, select Show manual editor.
f. In the policy builder, add policies so that the group in the destination tenancy is
allowed to create a clone using an Autonomous Database instance on the source
tenancy as the clone source.
For example, define the following generic policies:
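For instance, a minimal sketch of such generic policies, referencing the DestinationGroup created in Step 1 (both OCIDs are placeholders):

Define tenancy DestinationTenancy as ocid1.tenancy.oc1..<unique_ID>
Define group DestinationGroup as ocid1.group.oc1..<unique_ID>
Admit group DestinationGroup of tenancy DestinationTenancy to manage autonomous-databases in tenancy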
Note:
The where clause is optional and provides a more fine-grained way
to grant access to a specific database.
For example, set these policies on the source tenancy to allow cross tenancy
cloning:
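For example, a sketch of the same statements narrowed to a single source database with the optional where clause (the compartment name and all OCIDs are placeholders):

Define tenancy DestinationTenancy as ocid1.tenancy.oc1..<unique_ID>
Define group DestinationGroup as ocid1.group.oc1..<unique_ID>
Admit group DestinationGroup of tenancy DestinationTenancy to manage autonomous-databases in compartment SourceCompartment where target.id = 'ocid1.autonomousdatabase.oc1..<unique_ID>'

The destination tenancy typically also needs a matching Endorse policy for the group; see Prerequisites for Cross Tenancy Cloning for the complete set of statements.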
Note:
If these policies are revoked, cross tenancy cloning is no longer allowed.
Note:
The cross tenancy cloning option is only available using the CLI or the Autonomous
Database REST APIs. This option is not available using the Oracle Cloud
Infrastructure Console.
• For information about using the API and signing requests, see REST APIs and
Security Credentials.
• For information about SDKs, see Software Development Kits and Command Line
Interface.
Note:
The cross tenancy cloning option is only available using the CLI or the
Autonomous Database REST APIs. This option is not available using the
Oracle Cloud Infrastructure Console.
Note:
See Create a Cross Tenancy Refreshable Clone to create a cross
tenancy refreshable clone.
Use the CreateAutonomousDatabase API to create a cross tenancy clone by cloning from
a backup of an existing Autonomous Database.
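As a rough sketch of the request shape when cloning from a backup timestamp (field values are placeholders; see the CreateAutonomousDatabaseFromBackupTimestampDetails reference below for the authoritative attribute list):

POST /20160918/autonomousDatabases
{
  "compartmentId" : "ocid1.compartment.oc1..<unique_ID>",
  "source" : "BACKUP_FROM_TIMESTAMP",
  "autonomousDatabaseId" : "ocid1.autonomousdatabase.oc1..<unique_ID>",
  "timestamp" : "2024-04-01T10:00:00.000Z",
  "cloneType" : "FULL",
  "dbName" : "crosstenancyclone",
  "adminPassword" : "<password>",
  "computeModel" : "ECPU",
  "computeCount" : 2,
  "dataStorageSizeInTBs" : 1
}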
See the following for information on the REST API:
• CreateAutonomousDatabase
• CreateAutonomousDatabaseFromBackupDetails Reference
• CreateAutonomousDatabaseFromBackupTimestampDetails Reference
• For information about using the API and signing requests, see REST APIs and Security
Credentials.
• For information about SDKs, see Software Development Kits and Command Line
Interface.
The following are not allowed when you clone and select a different workload type:
• Creating a cross workload refreshable clone.
• Cloning to a more restrictive workload type, such as from Autonomous Transaction
Processing or Autonomous Data Warehouse database to Autonomous JSON
Database or APEX Service.
See the following for steps to create an Autonomous Database clone:
• Clone an Autonomous Database Instance
• Clone an Autonomous Database from a Backup
• If you define a network Access Control List (ACL) on the source database, the
currently set network ACL is cloned to the new database. If a database is cloned
from a backup, the current source database's ACL is applied (not the ACL that
was valid at the time of the backup).
• If you create a clone and the source database has an access control list (ACL) and
you specify the private endpoint network access option, Virtual cloud network for
the target database, the ACL is not cloned to the new database. In this case, you
must define security rules within your Network Security Group (or groups) to
control traffic to and from your target database (instead of using the access control
rules that were specified in the ACL on the clone source). See Configure Private
Endpoints When You Provision or Clone an Instance for more information.
• Cloning an Autonomous Database instance copies your database files to a new instance.
There is no downtime associated with cloning and the cloning operation has no impact on
applications running on the source.
• For a Metadata Clone, the APEX Apps and the OML Projects and Notebooks are copied
to the clone. For a Metadata Clone, the underlying database data of the APEX App or
OML Notebook is not cloned.
• The Autonomous Database Details page for an Autonomous Database instance that was
created by cloning includes the Cloned From field. This field displays the name of the
source database from which the clone was created.
• Metadata Clone: the first load into a table after the database is cloned clears the
statistics for that table and updates the statistics with the new load.
For more information on Optimizer Statistics, see Optimizer Statistics Concepts.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
1. Choose your region. See Switching Regions for information on switching regions
and working in multiple regions.
2. Choose your Compartment. See Compartments for information on using and
managing compartments.
3. Select an Autonomous Database instance from the list in your compartment.
4. On the Details page, from the More actions drop-down list, select Move resource.
5. In the Move Autonomous Database to a different compartment page, select the new
compartment.
Note:
Refreshable clones have a one week refresh age limit. If you do not perform a
refresh within a week, the refreshable clone is no longer refreshable. After a
refreshable clone passes the refresh time limit, you can use the instance as a read-only
database or you can disconnect from the source to make the database a read/write
(standard) database.
Create: You can create a refreshable clone from an Autonomous Database instance. You
can create more than one refreshable clone using the same Autonomous Database instance
as a source. See Create a Refreshable Clone for an Autonomous Database Instance for the
steps to create a refreshable clone.
View: You view a refreshable clone from the Oracle Cloud Infrastructure console
Autonomous Database Details page. See View Refreshable Clones for an Autonomous
Database Instance for more information.
Start or Restart: When a refreshable clone is stopped, as indicated by the Lifecycle State
Stopped, you can start the database. When a refreshable clone is available, as indicated by
the Lifecycle State Available, you can restart the database.
Refresh: For a refreshable clone, you can refresh the clone with data from the source
database. See Refresh a Refreshable Clone on Autonomous Database for more information.
Edit Automatic Refresh Policy: When you enable the automatic refresh option, a refreshable
clone automatically refreshes from the source database at regular intervals. By default,
automatic refresh is disabled and you must manually refresh at least once every 7 days. See
Edit Automatic Refresh Policy for Refreshable Clone for more information.
Disconnect Clone from Source: You can disconnect a refreshable clone from the source
database to make the clone a standard read/write database. See Disconnect a Refreshable
Clone from the Source Database for more information.
Reconnect Refreshable Clone: When a clone database is disconnected, you can use the
clone as a standard read/write database. There is a 24 hour reconnect period during which
you can reconnect the clone to its source database. After the reconnect period, the
refreshable clone is disassociated from the source database and reconnecting to the source
database is not allowed. See Reconnect a Refreshable Clone to the Source Database for
more information.
Stop: When a refreshable clone is stopped, database operations are not available and
charging for OCPU usage on the refreshable clone stops.
Terminate: If you want to terminate a refreshable clone, select More actions and Terminate.
Terminating a refreshable clone disassociates the clone database from the source database.
Note:
The automatic refresh option is only available when the source database uses the
ECPU compute model.
Note the following for a refreshable clone with automatic refresh enabled:
• You can manually refresh a refreshable clone between automatic refreshes. If you
manually refresh to a refresh point later than the next refresh point specified for an
automatic refresh, the automatic refresh may fail. Subsequent refreshes are attempted at
the next scheduled automatic refresh.
• If a refreshable clone is stopped, disconnected, or already at a later timestamp than the
specified automatic refresh point, the automatic refresh fails.
• If a scheduled refresh is missed because the source database was stopped, the
refreshable clone was disconnected, or the refreshable clone was manually refreshed to
a later point, the missed refresh is skipped. The refreshable clone refreshes at the next
scheduled interval.
• When an automatic refresh is missed, Autonomous Database does the following:
– Shows a banner on the Oracle Cloud Infrastructure Console.
– Generates the automatic refresh failed event. See Autonomous Database Event
Types for more information.
• If no refresh is performed within 7 days, the refreshable clone can no longer be
refreshed.
When a refreshable clone has not been refreshed within seven days and the refreshable
clone has exceeded its maximum refresh time, you have the following options:
– You can continue to use the refreshable clone as a read-only database. The
refreshable clone is not refreshable and the data on the refreshable clone reflects the
state of the source database at the time of the last successful refresh.
– You can disconnect the refreshable clone from the source database. This
disconnects the refreshable clone from the source Autonomous Database instance.
See Disconnect a Refreshable Clone from the Source Database for more
information.
Refreshable Clone Refresh Timing and Disconnecting from the Source Database
A refreshable clone with Automatic refresh disabled has a one week refresh age limit
and a banner on the Oracle Cloud Infrastructure console displays the date and time up
to which you can refresh the refreshable clone. The banner also includes a Refresh
clone button.
When a refreshable clone that has automatic refresh disabled is not refreshed within
seven (7) days of the last refresh, the banner message changes to indicate that the
clone can no longer be refreshed.
The button in the banner changes to Disconnect clone from source database.
When a refreshable clone has not been refreshed within seven days and the
refreshable clone has exceeded its maximum refresh time, you have the following
options:
• You can continue to use the refreshable clone as a read-only database. The
refreshable clone is not refreshable and the data on the refreshable clone reflects
the state of the source database at the time of the last successful refresh.
• You can disconnect the refreshable clone from the source database. This
disconnects the refreshable clone from the source Autonomous Database
instance.
See Disconnect a Refreshable Clone from the Source Database for more
information.
When a refreshable clone has exceeded the maximum refresh time, if you want to use
a refreshable clone that can be refreshed from the source database, you must create a
new refreshable clone. If you create a new refreshable clone, you might also want to
terminate the refreshable clone that is no longer able to refresh from the source
database.
When a disconnected database is not reconnected within 24 hours from the disconnect time,
the Oracle Cloud Infrastructure Console removes the reconnect refreshable clone banner.
When a disconnected refreshable clone has exceeded the reconnect period, you have the
following options:
• You can use the database as a standard Autonomous Database; there is no longer an
option to reconnect the database to the source database.
• If you want to use a refreshable clone that can be refreshed from the source database,
then you must create a new refreshable clone. If you create a new refreshable clone,
then you might want to terminate the disconnected clone that is no longer able to refresh
from the source database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
To create a refreshable clone do the following:
1. On the Details page, from the More actions drop-down list, select Create clone.
2. On the Create Autonomous Database clone page, choose the clone type
Refreshable clone from the choices:
• Full clone: creates a new database with the source database’s data and
metadata.
• Refreshable clone: To create a refreshable clone, select this clone type.
• Metadata clone: creates a new database with the source database’s
metadata without the data.
3. Provide basic information for the Autonomous Database clone.
• Choose your preferred region. Use the current region or select a different
region. When you create a refreshable clone, the list of available regions only
shows a remote region if your tenancy is subscribed to the remote region (you
must be subscribed to the paired remote region for the target region where you
are cloning your database). See Autonomous Database Cross-Region Paired
Regions for more information.
• Create in compartment. See Compartments for information on using and
managing compartments.
• Source database name: This field is read-only and shows the name of the
source database.
• Display name: Specify a user-friendly description or other information that
helps you easily identify the database.
You can use the name provided, of the form Clone-of-DBname, or change this
to the name you want to use to identify the database. The supplied DBname is
the name of the source database that you are cloning.
• Database name: Specify the database name; it must consist of letters and
numbers only. The maximum length is 30 characters. The same database
name cannot be used for multiple Autonomous Databases in the same
tenancy in the same region.
Note:
The storage for a refreshable clone is set to the same size as on the source
database. To change the storage size for a refreshable clone, you must change
the storage value on the source database.
5. Select Enable automatic refresh to specify that the refreshable clone automatically
refreshes. By default, Enable automatic refresh is deselected and you must manually
refresh at least once every 7 days. When you select Enable automatic refresh, the
dialog shows two fields:
• Frequency of refresh: specifies the refresh frequency in hours or days. The
minimum is one hour and the maximum is 7 days. The default Frequency of refresh
value is 1 hour.
• Data lag: specifies the data lag in minutes, hours, or days; the minimum is 0 minutes
and the maximum is 7 days. This value specifies the amount of time the data refresh
lags behind the source, where a value of 0 specifies that the refreshable clone
refreshes to the latest available timestamp. The default Data lag value is 0.
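These two console fields correspond to attributes on the refreshable clone REST model. As a rough sketch only, the attribute names below are assumptions drawn from the CreateRefreshableAutonomousDatabaseCloneDetails reference, with values equivalent to a 1 hour frequency and 0 data lag:

{
  "refreshableMode" : "AUTOMATIC",
  "autoRefreshFrequencyInSeconds" : 3600,
  "autoRefreshPointLagInSeconds" : 0
}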
6. Choose network access
Note:
After you clone your Autonomous Database you can change the network
access option you select for the cloned instance.
See View and Manage Customer Contacts for Operational Issues and Announcements
for more information.
9. (Optional) Click Show advanced options to select advanced options.
• Encryption key
A refreshable clone uses the encryption key of the source Autonomous Database.
See Use Customer-Managed Encryption Keys on Autonomous Database for more
information.
• Maintenance
Patch level: By default, the patch level is the patch level of the source database.
Select Early to configure the instance with the early patch level. When cloning a
source database with Early patch level, you can only choose the Early patch level
for your clone.
See Set the Patch Level for more information.
• Management
Shows the character set and national character set for your database.
See Choose a Character Set for Autonomous Database for more information.
• Tools
Refreshable clones inherit their tool status for each built-in tool from the refreshable
clone's source database. If the source of a refreshable clone changes its tool
configuration, the change is reflected in the refreshable clone after the next refresh.
See Configure Autonomous Database Built-in Tools for more information.
• Tags
If you want to use Tags, enter the Tag key and Value. Tagging is a metadata system
that allows you to organize and track resources within your tenancy. Tags are
composed of keys and values which can be attached to resources.
See Tagging Overview for more information.
10. Click Create Autonomous Database clone.
On the Oracle Cloud Infrastructure console the State shows Provisioning... until the
refreshable clone is available.
After the provisioning completes the Lifecycle state shows Available and the Mode is Read-
only.
After you download the wallet for the refreshable clone and connect to the database, you can
begin using the database to perform read-only operations such as running queries or building
reports and notebooks.
When automatic refresh is disabled, after you create a refreshable clone, the Oracle Cloud
Infrastructure console shows a banner with a message indicating the date prior to which the
next refresh must be completed (the banner shows the 7 day refresh limit).
On the Details page, under Clone information, the console shows the Refresh point,
indicating the timestamp of the source database's data with which the clone was last
refreshed.
If an Autonomous Database has attached refreshable clones, you can view the
refreshable clones as follows:
1. Choose your region. See Switching Regions for information on switching regions
and working in multiple regions.
2. Choose your Compartment. See Compartments for information on using and
managing compartments.
3. Select an Autonomous Database instance from the list in your compartment.
4. On the Autonomous Database details page, under Resources, click
Refreshable clones.
This shows the list of refreshable clones for the Autonomous Database instance.
If you click the actions icon at the end of a row for a refreshable clone in the list, you can
select an action to perform for the refreshable clone.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To edit auto refresh options:
1. On the clone source, on the Details page, under Resources, select Refreshable
clones.
2. In the Refreshable clones area, under Display name, select a refreshable clone.
3. On the refreshable clone, under the Clone information area, in the Auto refresh field
click Edit.
• Data lag: specifies the data lag in minutes, hours, or days; the minimum is 0
minutes and the maximum is 7 days. This value specifies the amount of time
the data refresh lags behind the source, where a value of 0 specifies that the
refreshable clone refreshes to the latest available timestamp. The default Data
lag value is 0.
5. Click Save.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
You can refresh a refreshable clone as follows:
1. On the Details page, under Clone information, in the Clone source field click the
Refresh link.
This shows the Refresh clone dialog.
2. In the Refresh clone dialog, specify the Refresh point timestamp.
The refresh point specifies the timestamp of the source database data to which the
refreshable clone is refreshed.
The timestamp can be a minimum of 1 minute in the past and a maximum of 7 days in
the past. The timestamp must be later than (that is, after) the last refresh point. Thus, if
you just created a refreshable clone, you must wait some time before you can refresh it.
3. Click Refresh clone.
On the Oracle Cloud Infrastructure console the Lifecycle state shows Updating until the
refresh operation completes. While the database is updating, connections and queries wait
until the refresh completes.
You can see the status of the refresh operation under Work requests.
On the Oracle Cloud Infrastructure Console, under Clone Information the Refresh point
shows the timestamp of the source database's data that the clone was refreshed to.
Note:
Refreshable clones have a one week refresh age limit. If you do not perform a
refresh within a week, the refreshable clone is no longer refreshable. The refresh
by date is shown in the Oracle Cloud Infrastructure Console banner. In the case
where a refreshable clone is not refreshable, you can disconnect the database from
the source to make it a read/write (standard) database. See Disconnect a
Refreshable Clone from the Source Database for more information.
When you disconnect a refreshable clone, the refreshable clone is disassociated from
the source database. This converts the database from a refreshable clone to a regular
database. Following the disconnect operation, you can reconnect the disconnected
database to the source database. The reconnect operation is limited to a 24 hour
period.
There are several ways to find refreshable clones. See View Refreshable Clones for
an Autonomous Database Instance for more information.
Perform the following prerequisite steps as necessary:
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
Starting from the Source Database Console

Disconnect a refreshable clone from the source database and convert the refreshable clone
to a read/write database:
1. On the Details page, under Resources select Refreshable clones.
This shows the list of refreshable clones.
2. In the row for the refreshable clone you want to disconnect, click the actions icon at the
end of the row and select Disconnect clone from source database.
3. In the Disconnect refreshable clone dialog enter the source database name to
confirm disconnecting the clone.
4. Click Disconnect refreshable clone.
Starting from the Clone Database Console

Disconnect a refreshable clone from the source database and convert the refreshable clone
to a read/write database:
1. On the Details page, under Clone information, in the Clone source field click the
Disconnect link.
2. In the Disconnect refreshable clone dialog enter the source database name to
confirm disconnecting the clone.
3. Click Disconnect refreshable clone.
The Autonomous Database Lifecycle state of the cloned database changes to
Updating. When the disconnect operation completes the Lifecycle state changes to
Available and the Mode shows Read/Write.
After you disconnect a refreshable clone, the Oracle Cloud Infrastructure Console
updates with the following changes:
• The Display name on the Autonomous Databases page updates and does not
show the Refreshable clone indicator for the cloned instance.
• On the disconnected instance, the Autonomous Database Details page updates
and does not show the Refreshable clone indicator, and the Clone information
area is removed.
• On the disconnected instance, the Autonomous Database details page displays a banner
indicating the date and time up to which you can reconnect the database to the source.
The banner also includes a Reconnect refreshable clone button.
See Reconnect a Refreshable Clone to the Source Database for more information.
• On the source database, on the Autonomous Database Details page, when you click
Refreshable clones under Resources, the list no longer shows an entry for the
disconnected clone.
Notes for disconnecting a refreshable clone:
• Disconnecting a refreshable clone from the source may fail if the source database has
scaled down its storage and the refreshable clone has a larger amount of data than the
source. In this case, you have the following options:
– You may scale up the storage on the source database temporarily before
disconnecting, and then scale the source back down after disconnecting.
– Refresh the clone to a refresh point after the storage scale down occurred.
• After you disconnect a refreshable clone you have 24 hours to reconnect. After the
reconnect period is over, the reconnect operation is not available. If you do not reconnect
a disconnected refreshable clone, the clone is a standard Autonomous Database and
there is no longer an option to reconnect the database to the source database.
• A disconnected refreshable clone is no longer associated with the source database. To
use the database or to initiate the reconnect operation, you must know the name of the
refreshable clone database that was disconnected from the source database. You must
initiate the reconnect operation from the disconnected clone database. You cannot
reconnect a disconnected clone from the source database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
Reconnect a database and restore all data back to the time when the refreshable clone was
disconnected from the source database as follows:
1. On the Details page, from the More actions drop-down list, select Reconnect
refreshable clone.
This shows the Reconnect Refreshable Clone dialog.
2. In the Reconnect Refreshable Clone dialog, enter the source database name to
confirm.
Note:
All data and metadata inserted, updated, or deleted from the database
where you run the reconnect operation will be lost when the database is
reconnected to the source database.
1. Perform the prerequisite steps to define the OCI Identity and Access Management
policies to authorize cross tenancy cloning.
See Prerequisites for Cross Tenancy Cloning for details.
Note:
If the policies you specify to allow cross tenancy cloning are subsequently
revoked, cross-tenancy cloning is no longer allowed. In addition, for a
refreshable clone, when the policies are revoked, actions on the remote tenancy
that require contact with the source tenancy, such as disconnect and refresh,
will fail.
2. On the tenancy where you want to create the refreshable clone, use the CLI or call the
REST API.
You run the CLI or call the REST API on the destination tenancy in the destination region
where you want to create the refreshable clone. The source database resides in a
different tenancy and, if specified, in a different region. This creates either a cross-tenancy
refreshable clone in the same region or a cross-tenancy refreshable clone in a different
region (cross-region).
For example, with the CLI:
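The following is a minimal sketch only. The create-refreshable-clone subcommand and flag names mirror the corresponding REST attributes and are assumptions to verify against your installed CLI version; OCIDs and names are placeholders, and the --source-id references the database in the source tenancy:

oci db autonomous-database create-refreshable-clone \
  --compartment-id ocid1.compartment.oc1..<destination_unique_ID> \
  --source-id ocid1.autonomousdatabase.oc1..<source_unique_ID> \
  --db-name refreshclone \
  --refreshable-mode MANUAL \
  --compute-model ECPU \
  --compute-count 2 \
  --data-storage-size-in-tbs 1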
the database console and in the billing for the refreshable clone. When the refreshable
clone is refreshed to a refresh point after the scale up or down operation, the system
makes a corresponding change to the refreshable clone storage (scaling up or scaling
down the refreshable clone storage to match the source database).
• You cannot use the rename operation on a refreshable clone instance or on a database
that is the source for a refreshable clone.
• When the patch level of a source Autonomous Database instance is Regular, in regions
that support the Early patch level you can set the patch level of a clone to Early. See Set
the Patch Level for more information.
• When the patch level of a source Autonomous Database instance is Early, you can only
choose the Early patch level for your clone (the refreshable clone must use the same
Early patch level). See Set the Patch Level for more information.
• Automatic Workload Repository (AWR) data and reports are not available for refreshable
clones. In addition, the graphs that rely on AWR data are not available, including the
following graphs:
– The Running SQL statements graph on the Overview tab shown on the Database
Dashboard card in Database Actions.
– The SQL statement response time graph on the Overview tab shown on the
Database Dashboard card in Database Actions.
– The Time period graphs on the Monitor tab shown on the Database Dashboard card
in Database Actions.
– The Performance Hub graph data older than one hour is not available.
• For an Oracle Autonomous JSON Database (workload type JSON Database), note the
following when reconnecting to the source database:
– If, after you disconnect the refreshable clone, you promote both the clone and the
source to Oracle Autonomous Transaction Processing (workload type Transaction
Processing), you can reconnect the database to the source.
– If, after you disconnect the refreshable clone, you promote the source database to
Oracle Autonomous Transaction Processing (workload type Transaction
Processing) and do not promote the disconnected clone, the disconnected clone
must also be promoted to Oracle Autonomous Transaction Processing (workload
type Transaction Processing) before you perform the reconnect operation.
– If, after you disconnect the refreshable clone, you promote the disconnected database
to Oracle Autonomous Transaction Processing (workload type Transaction
Processing), you can still reconnect to the source, but the reconnected database
remains in the promoted state.
See Promote to Autonomous Transaction Processing for more information.
Oracle APEX: Oracle APEX is a low-code development platform that you can use to build
scalable, secure enterprise applications that can be deployed anywhere. See Create
Applications with Oracle APEX in Autonomous Database for more information.
Database Actions: Oracle Database Actions is a web-based interface that uses Oracle
REST Data Services to provide development, data tools, administration and monitoring
features for Oracle Autonomous Database. See Connect with Built-In Oracle Database
Actions for more information.
Graph Studio: Graph Studio automates the creation of knowledge (RDF) and property
graphs and includes interactive tooling for query, analysis, and visualization of these graphs
in the Autonomous Database. You must log in as a graph-enabled user to access Graph
Studio. Create this user in Database Actions. See Using Oracle Graph with Autonomous
Database for more information.
Oracle ML User Interface: Oracle Machine Learning User Interface provides immediate
access to the Oracle Machine Learning components and functionality on Autonomous
Database, including OML Notebooks, OML AutoML UI, OML Models, and template example
notebooks. This includes:
• OML Notebooks
• OML AutoML UI
• OML for Python
• OML for R
• OML Services
Data Transforms: Oracle Data Transforms allows you to design graphical data
transformations in the form of data flows and workflows. The data flows define how the data
is moved and transformed between different systems, while the workflows define the
sequence in which the data flows run.
Web Access (ORDS): Oracle REST Data Services (ORDS) provides HTTPS interfaces for
working with the contents of your Oracle Database in one or more REST-enabled schemas.
See Develop with Oracle REST Data Services on Autonomous Database for more
information.
MongoDB API: Oracle Database API for MongoDB enables MongoDB-compatible clients
and drivers to connect directly to Autonomous Database. See Using Oracle Database API
for MongoDB for more information.
SODA Drivers: Simple Oracle Document Access (SODA) is a set of APIs that let you work
with JSON documents managed by the Oracle Database without needing to use SQL.
SODA drivers are available for REST, Java, Node.js, Python, PL/SQL, and C. See Use
SODA for REST with Autonomous Database for more information.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
1. Navigate to the Autonomous Database details page.
2. Select the Tool configuration tab.
This displays the tools configuration details and status.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
1. On the Autonomous Database details page, select the Tool configuration tab.
2. Click Edit tool configuration.
3. Select or deselect a tool in the Enable tool field to enable or disable the tool.
4. (Optional) When enabled, Graph Studio, Oracle Machine Learning User Interface,
and Data Transforms provide default ECPU count and Maximum idle time
values and let you configure these values.
Change the ECPU count value if you want to provide more or less resources for a
built-in tool.
Change the Maximum idle time value to specify the time, in minutes, after which
an idle Graph Studio, Data Transforms, or Oracle Machine Learning VM is
terminated.
See About Configuring Built-in Tools Compute Resource Limits for more
information.
5. Click Apply to apply your changes.
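The same configuration can be applied programmatically. As a rough sketch, the update APIs accept a list of per-tool settings similar to the following fragment; the attribute names follow the DatabaseTool REST model and are assumptions to verify against the UpdateAutonomousDatabase reference, with placeholder values:

"dbToolsDetails" : [
  {
    "name" : "GRAPH_STUDIO",
    "isEnabled" : true,
    "computeCount" : 2,
    "maxIdleTimeInMinutes" : 240
  }
]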
The Lifecycle State changes to Updating. When the request completes, the
Lifecycle State shows Available.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
1. On the Autonomous Database details page, select the Tool configuration tab.
2. Click Edit tool configuration.
3. Select or deselect a tool in the Enable tool field to enable or disable the tool.
4. Click Apply to apply your changes.
The Lifecycle State changes to Updating. When the request completes, the
Lifecycle State shows Available.
Configure Autonomous Database Built-in Tools when you Provision or Clone an Instance
(ECPU compute model)
Describes how to enable or disable Autonomous Database built-in tools when you
provision or clone an instance.
When you are cloning, note the following:
• When you are cloning and you select Full Clone or Metadata Clone, the built-in
tools default settings are based on the defaults for the workload type of the clone.
In this case, the defaults are not based on source database built-in tools settings.
• When you are cloning and you select Refreshable Clone, the refreshable clone
inherits the built-in tool status from the source database. Editing the built-in tool
status for a refreshable clone is not possible during or after provisioning. If the
source database of a refreshable clone changes its tool configuration, the change is
reflected in the refreshable clone after the next refresh.
Perform the steps to provision or clone an instance:
• Provision Autonomous Database
• Clone an Autonomous Database Instance
1. Click Show advanced options to select advanced options.
2. In advanced options, select the Tools tab.
3. Click Edit tool configuration.
4. Select or deselect a tool in the Enable tool field to enable or disable the tool.
5. (Optional) When enabled, Graph Studio, Oracle Machine Learning User Interface,
and Data Transforms provide default ECPU count and Maximum idle time
values and let you configure these values.
Change the ECPU count value if you want to provide more or less resources for a
built-in tool.
Change the Maximum idle time value to specify the time, in minutes, after which
an idle Graph Studio, Data Transforms, or Oracle Machine Learning VM is
terminated.
See About Configuring Built-in Tools Compute Resource Limits for more
information.
6. Click Apply to apply the configuration changes and return to the provision or clone steps.
Configure Autonomous Database Built-in Tools when you Provision or Clone an Instance (OCPU
compute model)
Describes how to enable or disable Autonomous Database built-in tools when you provision
or clone an instance.
When you are cloning, note the following:
• When you are cloning and you select Full Clone or Metadata Clone, the built-in tools
default settings are based on the defaults for the workload type of the clone. In this case,
the defaults are not based on source database built-in tools settings.
• When you are cloning and you select Refreshable Clone, the refreshable clone inherits
the built-in tool status from the source database. Editing the built-in tool status for a
refreshable clone is not possible during or after provisioning. If the source database of a
refreshable clone changes its tool configuration, the change is reflected in the refreshable
clone after the next refresh.
Perform the steps to provision or clone an instance:
• Provision Autonomous Database
• Clone an Autonomous Database Instance
1. Click Show advanced options to select advanced options.
2. In advanced options, select the Tools tab.
3. Click Edit tool configuration.
4. Select or deselect a tool in the Enable tool field to enable or disable the tool.
5. Click Apply to apply the configuration changes and return to the provision or clone steps.
• Disabling ORDS affects built-in tools such as Database Actions, APEX, and MongoDB
API. Therefore, if you disable ORDS, this also disables APEX, Database Actions, and
MongoDB API.
• When ORDS is disabled, Database Actions, APEX, and MongoDB API are disabled. If
you subsequently enable ORDS, this does not automatically enable APEX, Database
Actions, or the MongoDB API. You must manually enable each built-in tool that depends
on ORDS: enable ORDS first, then you can enable APEX, Database Actions, or the
MongoDB API.
• Always Free Autonomous Database does not provide configuration options for
Autonomous Database tools and does not allow you to enable or disable Autonomous
Database tools.
• By default when you provision an Autonomous Database with Autonomous Data
Warehouse or Autonomous Transaction Processing workload types, the MongoDB API is
disabled. If MongoDB API is disabled, you can enable it during provisioning or cloning, or
in an existing database.
You can enable MongoDB API only if you have an ACL or if you are on a private
endpoint. See Configure Access for MongoDB for more information.
• For JSON Database, Graph Studio is not supported and enabling or disabling Graph
Studio for JSON Database is not possible.
For JSON Database, MongoDB API is enabled by default and you can disable it during
provisioning or cloning, or in an existing database.
• When you are cloning and you select Full Clone or Metadata Clone, the built-in tools
default settings are based on the defaults for the workload type of the clone. In this case,
the defaults are not based on source database built-in tools settings.
• When you are cloning and you select Refreshable Clone, the refreshable clone inherits
the built-in tool status from the source database. Editing the built-in tool status for a
refreshable clone is not possible during or after provisioning. If the source database of a
refreshable clone changes its tool configuration, the change is reflected in the refreshable
clone after the next refresh.
• When you provision or clone an Autonomous Database with the APEX workload type
(APEX service), the only supported built-in tools are Oracle APEX, Web Access (ORDS),
and Database Actions. In the APEX service, you can only disable Database Actions.
• All existing sessions end after you disable a built-in tool. In addition, disabling Oracle
APEX or Web Access (ORDS) also affects the existing applications that are based on
these tools. For example, a user application based on ORDS does not continue to work
after you disable ORDS.
Event Description
AdminPasswordWarning Provides a message that the Autonomous Database ADMIN
password is expiring within 30 days or is expired.
• If the ADMIN password is within 30 days of being
unusable, you receive a warning indicating the date
when the ADMIN password can no longer be used.
• If the ADMIN password expires and is no longer usable,
Autonomous Database reports an event that specifies
that the ADMIN user password has expired and must be
reset.
AutomaticFailoverBegin When an automatic failover begins for the Autonomous
Database and the database is unavailable an
AutonomousDatabase-AutomaticFailoverBegin event
is generated.
The event will only be triggered if you are using Autonomous
Data Guard.
AutomaticFailoverEnd When an automatic failover that was triggered is complete
and the database is now available an
AutonomousDatabase-AutomaticFailoverEnd event is
generated.
The event will only be triggered if you are using Autonomous
Data Guard.
DatabaseDownBegin The Autonomous Database instance cannot be opened, or
the services such as high, low, medium, tp, or tpurgent are
not started or available.
The following conditions do not trigger DatabaseDownBegin:
• Operations performed during the maintenance window
• Load balancer, network, or backup related issues
• A user stopping the instance
This event will not be triggered if you are using Autonomous
Data Guard and the standby database is not available due to
any of these conditions.
DatabaseDownEnd The database is recovered from the down state, meaning the
Autonomous Database instance is opened with its services,
following a DatabaseDownBegin event. DatabaseDownEnd
is triggered only if there was a preceding
DatabaseDownBegin event.
The following conditions do not trigger DatabaseDownEnd:
• Operations performed during the maintenance window
• A user starting the instance
If you are using Autonomous Data Guard and the primary
database goes down, this triggers a DatabaseDownBegin
event. If the system fails over to the standby database, this
triggers a DatabaseDownEnd event.
DatabaseInaccessibleBegin The Autonomous Database instance is using customer-
managed keys and the database becomes inaccessible
(state shows Inaccessible).
The following conditions trigger
DatabaseInaccessibleBegin:
• Oracle Cloud Infrastructure Vault master encryption key
is deleted.
• Oracle Cloud Infrastructure Vault master encryption key
is disabled.
• Autonomous Database instance is not able to reach the
Oracle Cloud Infrastructure Vault.
DatabaseInaccessibleEnd If the Autonomous Database instance recovers from the
Inaccessible state that generated a
DatabaseInaccessibleBegin event, when the database
state changes to Available, the database triggers a
DatabaseInaccessibleEnd event.
FailedLoginWarning If the number of failed login attempts reaches 3 × the total
number of users within the last three (3) hours, the database triggers a
FailedLoginWarning event.
Failed login attempts could be due to a user entering an
invalid username or password, or due to a connection
timeout. The event is triggered when the failed login attempts
threshold is exceeded within the monitored time period (the
last three hours).
InstanceRelocateBegin If an Autonomous Database is relocated to another Exadata
infrastructure due to server maintenance, hardware refresh, a
hardware issue, or as part of a resource scale up, the
InstanceRelocateBegin event is triggered at the start of
the relocation process.
InstanceRelocateEnd If an Autonomous Database is relocated to another Exadata
infrastructure due to server maintenance, hardware refresh, a
hardware issue, or as part of a resource scale up, the
InstanceRelocateEnd event is triggered at the end of the
relocation process.
WalletExpirationWarning This event is generated when Autonomous Database
determines that a wallet is due to expire in less than six (6)
weeks. This event is reported at most once per week.
This event is triggered when there is a connection that uses
the wallet that is due to expire.
Use the event type Autonomous Database - Critical to specify critical events in
Event rules.
Event Description
AJDNonJsonStorageExceeded This event is generated when an Autonomous JSON
Database has exceeded the maximum storage limit of 20GB
of data stored outside of SODA collections. This limit does
not apply to data stored in SODA collections or objects
associated with SODA collections, such as indexes or
materialized views.
In addition to this event, an email is sent to the account
owner. You must either reduce your usage of non-SODA-
related data to below the 20GB limit or promote the
Autonomous JSON Database to Autonomous Transaction
Processing. See Promote to Autonomous Transaction
Processing for more information.
APEXUpgradeAvailable This event is generated when you are using Oracle APEX
and a new release becomes available.
APEXUpgradeBegin This event is generated when you are using Oracle APEX
and your Autonomous Database instance begins an upgrade
to a new Oracle APEX release or begins to apply an Oracle
APEX Patch Set Bundle.
APEXUpgradeEnd This event is generated when you are using Oracle APEX
and your Autonomous Database instance completes an
upgrade to a new Oracle APEX release or completes the
installation of an Oracle APEX Patch Set Bundle.
DatabaseConnection This event is generated if a connection is made to the database
from a new IP address (a connection has not been made from that IP
address in the last 30 days).
InactiveConnectionsDetected This event is generated when the number of inactive
database connections detected is above a certain ratio
compared to all database connections for the Autonomous
Database instance. Subscribing to this event can help you
keep track of unused connections.
This event is generated once per day, when the following is
true:
• The number of inactive connections, where the elapsed
time for the inactive connection is greater than 24 hours,
is greater than 10% of the total number of connections in
any state.
For this calculation, inactive connections are connections
where the status is "INACTIVE".
Use the following query to report detailed information on
sessions that have been inactive for more than 24 hours:
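The query itself is not reproduced above; a minimal sketch that reports this information, using the standard V$SESSION view (LAST_CALL_ET is the number of seconds since the session's last call), might look like:
SELECT sid, serial#, username, machine, status, last_call_et
FROM v$session
WHERE status = 'INACTIVE'
AND last_call_et > 24 * 60 * 60; -- inactive for more than 24 hours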
Long term backup ended This event is triggered when a long term backup completes.
Long term backup schedule disabled This event is triggered when a long term backup schedule is disabled.
Long term backup schedule enabled / updated This event is triggered when a long term backup schedule is either enabled or updated.
Long term backup started This event is triggered when a long term backup starts.
MaintenanceBegin This event is triggered when the maintenance starts and
provides the start timestamp for the maintenance (this event
does not provide the scheduled start time).
MaintenanceEnd This event is triggered when the maintenance ends and
provides the end timestamp for the maintenance (this event
does not provide the scheduled end time).
NewMaintenanceSchedule This event is generated when the maintenance date is
updated and the new date is shown on the Oracle Cloud
Infrastructure Console.
When a Java version update is included in the maintenance, the event
description field indicates this with the following message: This
maintenance involves Java version update,
please expect down time of Java service during
this maintenance window.
OperatorAccess This event is triggered when operator access is detected for
the database. You can query the access details from the
DBA_OPERATOR_ACCESS view with the request ID provided in
the event description.
The OperatorAccess event is generated once every 24
hours.
ScheduledMaintenanceWarning This event is generated when the instance is 24 hours from a
scheduled maintenance and again when the instance is 1
hour (60 minutes) from the scheduled maintenance.
When a Java version update is included in the maintenance, the event
description field indicates this with the following message:
This maintenance involves Java version update,
please expect down time of Java service during
this maintenance window.
WorkloadCaptureBegin This event is triggered when a workload capture is initiated.
WorkloadCaptureEnd This event is triggered when a workload capture completes
successfully.
This generates a pre-authenticated (PAR) URL to download
the capture reports.
This URL is contained in the captureDownloadURL field of
the event and is valid for 7 days from the date of generation.
WorkloadReplayBegin This event is triggered when a workload replay is initiated.
WorkloadReplayEnd This event is triggered when a workload replay completes
successfully.
This generates a pre-authenticated (PAR) URL to download
the replay reports.
This URL is contained in the replayDownloadURL field of
the event and is valid for 7 days from the date of generation.
Note:
When Autonomous Data Guard is enabled, any of these events that occur on
the standby database do not trigger an Information event.
Note:
When you subscribe to an event category, either Critical or Information, you
are notified when any events in the category occur. For example, to get
notified when the database goes down, subscribe to the Autonomous
Database Critical event type.
Note:
By default, both AUTO_DST_UPGRADE and AUTO_DST_UPGRADE_EXCL_DATA are disabled.
You can enable one or the other of these options, but not both.
With every Daylight Saving Time (DST) version release, there are DST file changes
introduced to make existing data comply with the latest DST rules. Applying a change
to the database time zone files not only affects the way new data is handled, but
potentially alters data stored in TIMESTAMP WITH TIME ZONE columns.
When a load or import operation results in the following time zone related error, this
indicates that the version of your time zone files is out of date and you need to update
to the latest version available for your database:
ORA-39405: Oracle Data Pump does not support importing from a source
database with TSTZ version n+1
into a target database with TSTZ version n.
You can enable the AUTO_DST_UPGRADE feature to automatically upgrade an instance
to use the latest available version of the time zone files and automatically update the
rows in your database to use the latest time zone data. The latest version of the time
zone files is applied, and values in TIMESTAMP WITH TIME ZONE columns are updated,
at the next database restart. However, depending on your database usage, the
required database restart might restrict you from using AUTO_DST_UPGRADE to upgrade
to the latest time zone files.
You can enable AUTO_DST_UPGRADE_EXCL_DATA to update your time zone files. This
option allows you to immediately update your database if you encounter import/export
errors due to a version mismatch of time zone files (where you receive the ORA-39405
error).
Enabling AUTO_DST_UPGRADE_EXCL_DATA on your database can be beneficial in the
following cases:
• Data on the instance is in UTC and the database is not impacted by DST time
zone file updates.
• The instance does not have any data in TIMESTAMP WITH TIME ZONE columns.
• Data in the instance uses dates that are not altered with new time zone or daylight
saving time policies.
In this case, when your Autonomous Database instance does not contain rows that are
adversely impacted by the new time zone rules and you encounter the ORA-39405 error,
you can enable the AUTO_DST_UPGRADE_EXCL_DATA option to update to the latest
version of the time zone files. The AUTO_DST_UPGRADE_EXCL_DATA option prioritizes the
success of Oracle Data Pump jobs over the data consistency issues due to changing
DST.
See Oracle Support Document 406410.1 to help you determine whether time zone
changes will affect your database.
Note:
Oracle recommends enabling the AUTO_DST_UPGRADE option when these
limiting cases do not apply to your database.
• Possible Data Inconsistency: Oracle recommends that you maintain your database on
the latest time zone file version by enabling AUTO_DST_UPGRADE. This option automatically
updates the rows on your database to use the latest time zone data.
The alternative option, AUTO_DST_UPGRADE_EXCL_DATA allows you to maintain your
database using the latest time zone version without requiring a database restart;
however, data that might be affected by the updated time zone files is not updated, which
could lead to possible data inconsistency.
• Database Restart: Oracle recommends that you maintain your database on the latest
time zone file version by enabling AUTO_DST_UPGRADE. This option requires a restart to
upgrade to the latest time zone files version and during the restart updates the rows for
existing data of TIMESTAMP WITH TIME ZONE data type to use the latest version. Enabling
AUTO_DST_UPGRADE can affect database restart time. A restart can require extra time
compared to a restart without this option enabled when there are new time zone files and
there is data that needs to be updated when the database restarts.
When you enable AUTO_DST_UPGRADE_EXCL_DATA, the database is upgraded to the
latest version of the time zone files at the time you enable this option, and during each
maintenance window, if newer time zone files are available, Autonomous Database
updates the database to use the latest version. Enabling AUTO_DST_UPGRADE_EXCL_DATA does
not require that you restart your Autonomous Database instance to keep up to date with
the latest version of the time zone files.
See Datetime Data Types and Time Zone Support for more information.
Note:
By default, both AUTO_DST_UPGRADE and AUTO_DST_UPGRADE_EXCL_DATA are disabled.
You can enable one or the other of the automatic time zone file update options, but
not both.
ORA-39405: Oracle Data Pump does not support importing from a source
database with TSTZ version n+1
into a target database with TSTZ version n.
To enable AUTO_DST_UPGRADE:
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_FEATURE(
feature_name => 'AUTO_DST_UPGRADE');
END;
/
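You can then verify the setting, for example by querying DBA_CLOUD_CONFIG (a sketch, assuming this view exposes the PARAM_NAME and PARAM_VALUE columns shown in the sample output below):
SELECT param_name, param_value
FROM dba_cloud_config
WHERE LOWER(param_name) = 'auto_dst_upgrade';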
PARAM_NAME PARAM_VALUE
----------------------- ----------------------------------------
auto_dst_upgrade enable
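A similar sketch checks the latest available time zone file version:
SELECT param_name, param_value
FROM dba_cloud_config
WHERE LOWER(param_name) = 'latest_timezone_version';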
PARAM_NAME PARAM_VALUE
----------------------- -----------
latest_timezone_version 38
You can query the DB_NOTIFICATIONS view to see if the time zone version is the
latest version:
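A minimal sketch, assuming DB_NOTIFICATIONS records time zone notifications under the type TIMEZONE VERSION; if the query returns no rows (0 rows selected), the database is already on the latest version:
SELECT * FROM db_notifications WHERE type = 'TIMEZONE VERSION';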
When AUTO_DST_UPGRADE is enabled, Autonomous Database upgrades to the latest
version of the time zone files (when you next restart, or stop and then start the
database). The columns in your database with the data type TIMESTAMP WITH TIME
ZONE are converted to the new time zone version during the restart.
To disable AUTO_DST_UPGRADE:
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
feature_name => 'AUTO_DST_UPGRADE');
END;
/
Note:
By default, both AUTO_DST_UPGRADE and AUTO_DST_UPGRADE_EXCL_DATA are disabled.
You can enable one or the other of the automatic time zone file update options, but
not both.
When a load or import operation results in the following time zone related error, this indicates
that your time zone files are out of date and you need to update to the latest version available
for your database:
ORA-39405: Oracle Data Pump does not support importing from a source
database with TSTZ version n+1
into a target database with TSTZ version n.
To enable AUTO_DST_UPGRADE_EXCL_DATA:
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_FEATURE(
feature_name => 'AUTO_DST_UPGRADE_EXCL_DATA');
END;
/
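To confirm the setting, the same DBA_CLOUD_CONFIG sketch applies (again assuming the PARAM_NAME and PARAM_VALUE columns shown in the sample output below):
SELECT param_name, param_value
FROM dba_cloud_config
WHERE LOWER(param_name) = 'auto_dst_upgrade_excl_data';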
PARAM_NAME PARAM_VALUE
---------------------------- ----------------------------------------
auto_dst_upgrade_excl_data enabled
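Likewise, a sketch for checking the latest available time zone file version:
SELECT param_name, param_value
FROM dba_cloud_config
WHERE LOWER(param_name) = 'latest_timezone_version';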
PARAM_NAME PARAM_VALUE
----------------------- -----------
latest_timezone_version 38
You can query the DB_NOTIFICATIONS view to see if the time zone version is the
latest version:
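The same DB_NOTIFICATIONS sketch applies here; no rows returned (0 rows selected) indicates the database is already on the latest version:
SELECT * FROM db_notifications WHERE type = 'TIMEZONE VERSION';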
To disable AUTO_DST_UPGRADE_EXCL_DATA:
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
feature_name => 'AUTO_DST_UPGRADE_EXCL_DATA');
END;
/
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To view Database Availability for an Autonomous Database instance:
1. On the Details page under General Information, click the Database Availability link in the
Lifecycle State field.
Managing and Viewing Maintenance Windows, Patching Options, Work Requests, and Customer Contacts
Topics
View Patch and Maintenance Window Information, Set the Patch Level
Autonomous Database uses predefined maintenance windows to automatically patch your
database. You can view maintenance and patch information and see details for Autonomous
Database maintenance history. When you provision your database you can select a patch
level.
• About Scheduled Maintenance and Patching
All Autonomous Database instances are automatically assigned to a maintenance
window and different instances can have different maintenance windows.
• View Maintenance Event History
You can view Autonomous Database maintenance event history for details about past
maintenance events, such as the title, state, start time, and stop time.
• View Patch Details
You can view Autonomous Database patch information, including a list of resolved issues
and components.
• Set the Patch Level
When you provision or clone an Autonomous Database instance you can select a patch
level to apply to upcoming patches. There are two patch level options: Regular and
Early.
Note:
Your database remains available during the maintenance window. Your
existing database connections may get disconnected briefly; however, you
can immediately reconnect and continue using your database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To view maintenance history, do the following:
1. On the Autonomous Database Details page, under Maintenance, click View History.
2. The Oracle Cloud Infrastructure Console displays the Maintenance History page.
3. (Optional) Use the State selector to filter events by state.
For example, if you select Succeeded, the Maintenance History page shows only the
maintenance events with the Succeeded state.
The Maintenance History page shows details for each maintenance event, including the
following:
• Title: The name of the maintenance event.
• Maintenance Type: Planned or Unplanned.
• Resource Type: The type of the resource on which the maintenance event occurs:
Database or Infrastructure.
• State: Succeeded, Failed, or In progress.
• Start Time: Maintenance start time.
• End Time: Maintenance end time.
Note:
Maintenance event history is available starting with maintenance events after
February 2021.
3. For an issue of interest, query the view to obtain details for the issue.
See View Maintenance Status Notifications for details about the patches applied
during maintenance.
Note:
When cloning a source database with Early patch level, you can only choose Early
patch level for your clone.
• Autonomous Data Guard is only available for instances with patch level Regular.
When you configure an Autonomous Database instance with patch level Early,
you cannot enable Autonomous Data Guard.
• Always Free Autonomous Database instances do not provide the Early patch level
option.
• When the patch level of a source Autonomous Database instance is Regular, in
regions that support the Early patch level you can set the patch level of a clone to
Early.
• When the patch level of a source Autonomous Database instance is Early, you
can only choose the Early patch level for your clone.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the links
under the Display name column.
To view or manage customer contact email addresses:
1. On the Autonomous Database Details page, under Maintenance, in the Customer
Contacts field, click the Manage link.
This shows the Manage Contacts page. On the Manage Contacts page you can view the
email contact list.
2. To add a contact, click Add Contacts.
3. Enter the contact email address.
Oracle recommends adding an administrator group's email address over an individual's
email address when possible, so as to not miss important notifications or
announcements.
4. To add additional contacts, click Add Contacts again.
5. Click Add Contacts at the bottom of the page to add the new contacts.
The lifecycle state changes to Updating while the customer contact list is updated.
• You can continuously move this audit data to the object store for purposes such as
integration with Security Information and Event Management (SIEM) tools. See
Create and Configure a Pipeline for Export with Timestamp Column for more
information on using pipelines to continuously export query results to object store.
DBA_OPERATOR_ACCESS View
The DBA_OPERATOR_ACCESS view describes top level statements executed in the Autonomous
Database instance by Oracle Cloud Infrastructure operations.
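For example, a simple sketch that lists the recorded operator actions; you can filter on the request ID supplied in an OperatorAccess event description as needed:
SELECT * FROM dba_operator_access;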
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
To view the Overview tab that shows general information about utilization, do the following:
1. Access Database Actions as the ADMIN user.
See Access Database Actions as ADMIN for more information.
2. On the Database Actions Launchpad, under Monitoring, click Database Dashboard.
Note:
You can bookmark the Launchpad URL and go to that URL directly without logging
in to the Oracle Cloud Infrastructure Console. If you log out and use the bookmark,
then you need to enter the ADMIN username and password, and click Sign in. See
Set the ADMIN Password in Autonomous Database if you need to change the
password for the ADMIN user.
Provisioned storage is the amount of storage you select when you provision the
instance or when you modify storage by scaling storage.
Storage allocated is the amount of storage physically allocated to all data
tablespaces and temporary tablespaces and includes the free space in these
tablespaces. This does not include storage for the sample schemas.
Storage used is the amount of storage actually used in all data and temporary
tablespaces. This does not include storage for the sample schemas. The storage
used is the storage in the Autonomous Database as follows:
– Storage used by all database objects. Note: the chart does not include storage
for the sample schemas as they do not count against your storage.
– Storage for files users put in the file system.
– Storage used by temporary tablespaces.
– Used storage excludes the free space in the data and temporary tablespaces.
By default the chart does not show the used storage. Select Storage used to
expand the chart to see used storage (the values are calculated when you open
the chart).
For an Autonomous JSON Database the chart shows an additional field showing
the percentage of storage used that is not storing JSON documents.
Note:
If you drop an object, the space continues to be consumed until you empty the
recycle bin. See Purging Objects in the Recycle Bin for more information.
See Use Sample Data Sets in Autonomous Database for information on sample schemas
SH and SSB.
• CPU utilization (%) for ECPU Compute Model: This chart shows the historical CPU
utilization of the service:
– Compute auto scaling disabled: this chart shows hourly data. A data point shows the
average CPU utilization for that hour. For example, a data point at 10:00 shows the
average CPU utilization for 9:00-10:00.
The utilization percentage is reported with respect to the number of ECPUs the
database is allowed to use. For example, if the database has four (4) ECPUs, the
percentage in this graph is based on 4 ECPUs.
– Compute auto scaling enabled: For databases with compute auto scaling enabled the
utilization percentage is reported with respect to the maximum number of ECPUs the
database is allowed to use, which is three times the number of ECPUs. For example,
if the database has four (4) ECPUs with auto scaling enabled, the percentage in this
graph is based on 12 ECPUs.
• CPU utilization (%) for OCPU Compute Model: This chart shows the historical CPU
utilization of the service:
– Compute auto scaling disabled: this chart shows hourly data. A data point
shows the average CPU utilization for that hour. For example, a data point at
10:00 shows the average CPU utilization for 9:00-10:00.
The utilization percentage is reported with respect to the number of OCPUs
the database is allowed to use. For example, if the database has four (4)
OCPUs, the percentage in this graph is based on 4 OCPUs.
– Compute auto scaling enabled: For databases with compute auto scaling
enabled the utilization percentage is reported with respect to the maximum
number of OCPUs the database is allowed to use. For example, if the
database has four (4) OCPUs with auto scaling enabled, the percentage in this
graph is based on 12 OCPUs.
• Running SQL statements: This chart shows the average number of running SQL
statements historically. This chart shows hourly data. A data point shows the
running SQL statements for that hour. For example, a data point at 10:00 shows
the average number of running SQL statements for 9:00-10:00.
Note:
Database Dashboard does not show this chart when the Autonomous Database
instance workload type is Data Warehouse.
The default retention period for performance data is thirty (30) days. The CPU utilization,
running statements, and average SQL response time charts show data for the last eight (8)
days by default.
You can change the retention period by modifying the Automatic Workload Repository
retention setting with the PL/SQL procedure
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(). The maximum retention you can
set is 30 days. See Oracle Database PL/SQL Packages and Types Reference.
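For example, a minimal sketch that sets the AWR retention to the 30-day maximum (the retention parameter is specified in minutes):
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 43200); -- 30 days x 24 hours x 60 minutes
END;
/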
If you need to store more performance data you can use the Operations Insights AWR Hub.
See Analyze Automatic Workload Repository (AWR) Performance Data for more information.
Note:
The default view in the Monitor tab is real-time. This view shows
performance data for the last hour.
This chart shows the average number of queued SQL statements in each consumer
group.
See Manage Concurrency and Priorities on Autonomous Database for detailed
information on consumer groups.
To see earlier data click Time period. The default retention period for performance data is
thirty (30) days. By default in the Time Period view the charts show information for the last
eight (8) days.
In the time period view you can use the calendar to look at a specific time period in the past
30 days. You can also use the time slider to change the period for which performance data is
shown.
Note:
The retention time can be changed by changing the Automatic Workload Repository
retention setting with the PL/SQL procedure
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS. Be aware that increasing
the retention time results in more storage usage for performance data. See Oracle
Database PL/SQL Packages and Types Reference.
Use Database Actions to Monitor Active Session History Analytics and SQL
Statements
In Database Actions the Performance Hub card provides information about Active Session
History (ASH) analytics and current and past monitored SQL statements.
Perform the following prerequisite step as necessary:
• Access Database Actions as the ADMIN user. See Access Database Actions as ADMIN
for more information.
• The Performance Hub Page
• Click Selector to display the navigation menu. Under Monitoring, select Performance
Hub.
Note:
The Performance Hub page is available in the following user interface languages:
French, Japanese, Korean, Traditional Chinese, and Simplified Chinese. If you
change the language to German, Spanish, Italian, or Portuguese in Preferences,
the Performance Hub page reverts to English.
• Time Range Area: Use the controls in time range area at the top of the page to
specify the time period for which you want to view performance data.
• ASH Analytics Tab: Use this tab to explore ASH (Active Session History)
information across a variety of different dimensions for the specified time period.
• SQL Monitoring Tab: Use this tab to view the top 100 SQL statement executions
by different dimensions for the specified time period, and to view details of SQL
statement executions you select.
See Performance Hub in the Database service documentation for more information.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database,
and then click Autonomous Database.
1. On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
2. From the Autonomous Database details page click Performance Hub.
Component Description
Top Area The top of the Performance Hub page shows the following:
• Quick Select: Use to quickly set the time range to one of Last
Hour, Last 8 Hours, Last 24 Hours, or Last Week.
• Time Range: Use to set the date and time range for Performance
Hub to monitor.
• Time Zone: Select an entry from this list to specify times based on
Coordinated Universal Time (UTC), your local web browser time
(Browser), or the database time zone setting (Database).
• Activity Summary (Average Active Sessions): shows active
sessions during the selected time range. It displays the average
number of active sessions broken down by CPU, User I/O, and
Wait. It also shows the Max CPU usage.
You can hide or show the activity summary by selecting Hide
Activity Summary.
• Reports: Select to view the Automatic Workload Repository
(AWR) report.
The AWR report collects, processes, and maintains performance
statistics for problem detection and self-tuning purposes. This data
is both in memory and stored in the database.
Active Session History (ASH) Analytics ASH Analytics is displayed by default. This tab shows
Active Session History (ASH) analytics charts to explore the Active
Session History data.
You can drill down into database performance across multiple
dimensions such as Consumer Group, Wait Class, SQL ID, and User
Name. Select an Average Active Sessions dimension and view the top
activity for that dimension for the selected time period.
For more information, see ASH Analytics.
SQL Monitoring SQL statements are only monitored if they've been running for at least
five seconds or if they're run in parallel. The table displays monitored
SQL statement executions by dimensions including Last Active Time,
CPU Time, and Database Time. The table displays currently running
SQL statements and SQL statements that completed, failed, or were
terminated. The columns in the table provide information for monitored
SQL statements including Status, Duration, and SQL ID.
For more information, see SQL Monitoring.
Automatic Database Diagnostic Monitor (ADDM) The ADDM tab provides access to analysis
information gathered by the Automatic Database Diagnostic Monitor
(ADDM) tool.
ADDM analyzes AWR (Automatic Workload Repository) snapshots on
a regular basis, locates root causes of any performance problems,
provides recommendations for correcting the problems, and identifies
non-problem areas of the system.
AWR is a repository of historical performance data; therefore, ADDM
can analyze performance issues after the event, often saving time and
resources in reproducing a problem.
For more information, see Automatic Database Diagnostic Monitor
(ADDM).
Workload Use the Workload tab to visually monitor the database workload to
identify spikes and bottlenecks.
This tab shows four chart areas that show database workload in
various ways:
• CPU Statistics: Charts CPU usage.
• Wait Time Statistics: Shows the wait time across the database's
foreground sessions, divided by wait classes.
• Workload Profile: Charts user (client) workload on the database.
• Sessions: Shows the number of sessions, successful logons, and
current logons.
For more information, see Workload.
Blocking Sessions Blocking Sessions hierarchically lists sessions that are waiting or are
blocked by sessions that are waiting. You can set the minimum wait
time required for sessions to be displayed in the list, and you can view
a variety of information about a session to determine whether to let it
continue or to end it.
For more information, see Blocking Sessions.
For example, if you create an Autonomous Database with a Data Warehouse workload
type and specify the database name as DB2022, your service names are:
• db2022_high
• db2022_medium
• db2022_low
If you connect using the db2022_low service, the connection uses the LOW consumer
group.
The basic characteristics of these consumer groups are:
• HIGH: Highest resources, lowest concurrency. Queries run in parallel.
• MEDIUM: Less resources, higher concurrency. Queries run in parallel.
Picking one of the predefined services provides concurrency values that work well
for most applications. In cases where selecting one of the default services does
not meet your application’s performance needs, you can use the MEDIUM service
and modify the concurrency limit. For example, when you run single-user
benchmarks, you can set the concurrency limit of the MEDIUM service to 1 in
order to obtain the highest degree of parallelism (DOP).
Depending on your compute model, ECPU or OCPU, see the following for more
information.
– Change MEDIUM Service Concurrency Limit (ECPU Compute Model)
– Change MEDIUM Service Concurrency Limit (OCPU Compute Model)
• LOW: Least resources, highest concurrency. Queries run serially.
Note:
After connecting to the database using one service, do not attempt to
manually switch that connection to a different service by simply changing the
consumer group of the connection. When you connect using a service,
Autonomous Database performs more actions to configure the connection
than just setting its consumer group. You can use the procedure
CS_SESSION.SWITCH_SERVICE to switch to a different service.
See SWITCH_SERVICE Procedure for more information.
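For example, a session connected with the LOW service could be switched to the HIGH service as follows (a minimal sketch; the documented procedure takes the target service name as a string):
BEGIN
  CS_SESSION.SWITCH_SERVICE('HIGH');
END;
/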
For example, if you create an Autonomous Database with a Transaction Processing workload
type and specify the database name as DB2022, your connection service names are:
• db2022_tpurgent
• db2022_tp
• db2022_high
• db2022_medium
• db2022_low
If you connect using the db2022_tp service, the connection uses the TP consumer group.
Note:
After connecting to the database using one service, do not attempt to
manually switch that connection to a different service by simply changing the
consumer group of the connection. When you connect using a service,
Autonomous Database performs more actions to configure the connection
than just setting its consumer group. You can use the procedure
CS_SESSION.SWITCH_SERVICE to switch to a different service.
See SWITCH_SERVICE Procedure for more information.
Note:
Sessions that are idle for more than 48 hours are terminated whether they
are holding resources or not.
Service Concurrency
The consumer groups of the predefined service names provide different levels of
performance and concurrency. The available service names are different depending on
your workload: Data Warehouse, Transaction Processing, or JSON Database.
Picking one of the predefined services provides concurrency values that work well for
most applications. In cases where selecting one of the default services does not meet
your application’s performance needs, you can use the MEDIUM service and modify
the concurrency limit. For example, when you run single-user benchmarks, you can
set the concurrency limit of the MEDIUM service to 1 in order to obtain the highest
degree of parallelism (DOP).
In this topic, the "number of ECPUs" is the ECPU count shown in the Oracle Cloud
Infrastructure Console. Likewise, if your database uses the OCPU compute model, the
"number of OCPUs" is the OCPU count.
Note:
OCPU is a legacy billing metric and has been retired for Autonomous Data
Warehouse (Data Warehouse workload type) and Autonomous Transaction
Processing (Transaction Processing workload type). Oracle recommends using
ECPUs for all new and existing Autonomous Database deployments. See Oracle
Support Document 2998742.1 for more information.
• Service Concurrency Limits for Data Warehouse Workloads (ECPU Compute Model)
The tnsnames.ora file provided with the credentials zip file contains three database
service names identifiable as high, medium, and low for Autonomous Database with Data
Warehouse workloads.
• Service Concurrency Limits for Transaction Processing Workloads (ECPU Compute
Model)
The tnsnames.ora file provided with the credentials zip file contains five database
service names identifiable as tpurgent, tp, high, medium, and low for Autonomous
Database with Transaction Processing or JSON workloads.
• Service Concurrency Limits for Data Warehouse Workloads (OCPU Compute Model)
The tnsnames.ora file provided with the credentials zip file contains three database
service names identifiable as high, medium and low for Autonomous Database with Data
Warehouse workloads.
• Service Concurrency Limits for Transaction Processing and JSON Database Workloads
(OCPU Compute Model)
The tnsnames.ora file provided with the credentials zip file contains five database
service names identifiable as tpurgent, tp, high, medium, and low for Autonomous
Database with Transaction Processing or with JSON Database workloads.
Service Concurrency Limits for Data Warehouse Workloads (ECPU Compute Model)
The tnsnames.ora file provided with the credentials zip file contains three database service
names identifiable as high, medium, and low for Autonomous Database with Data Warehouse
workloads.
The following shows the details for the number of concurrent statements for each connection
service for Data Warehouse workloads, with Compute auto scaling disabled, and with
Compute auto scaling enabled.
Note:
The values in this table apply when the number of ECPUs is equal to or greater
than 4.
When the number of ECPUs is 2, all services use the concurrency limit 150. When
the number of ECPUs is 3, all services use the concurrency limit 225 (this applies
for Compute auto scaling enabled or disabled).
When these concurrency levels are reached for the MEDIUM and HIGH consumer
groups, new SQL statements in that consumer group will be queued until one or more
running statements finish. With the LOW consumer group, when the concurrency limit
is reached you will not be able to connect new sessions.
For example, with auto scaling disabled on an Autonomous Database instance with 32
ECPUs, the HIGH consumer group will be able to run 3 concurrent SQL statements
when the MEDIUM consumer group is not running any statements. The MEDIUM
consumer group will be able to run 8 concurrent SQL statements when the HIGH
consumer group is not running any statements. The LOW consumer group will be able
to run 2400 concurrent SQL statements.
Note:
The HIGH consumer group can run at least 1 SQL statement when the
MEDIUM consumer group is also running statements.
For this example with auto scaling enabled on an Autonomous Database instance with
32 ECPUs, the HIGH consumer group will be able to run 9 concurrent SQL statements
when the MEDIUM consumer group is not running any statements. The MEDIUM
consumer group will be able to run 24 concurrent SQL statements when the HIGH
consumer group is not running any statements. The LOW consumer group will be able
to run 2400 concurrent SQL statements.
The following table shows sample concurrency limits for a database with 32 ECPUs,
with Compute auto scaling disabled and with Compute auto scaling enabled.

Database Service  Concurrency Limit (Auto Scaling Disabled)  Concurrency Limit (Auto Scaling Enabled)
HIGH              3                                           9
MEDIUM            8                                           24
LOW               2400                                        2400
See Change MEDIUM Service Concurrency Limit (ECPU Compute Model) for more
information.
Service Concurrency Limits for Transaction Processing Workloads (ECPU Compute Model)
The tnsnames.ora file provided with the credentials zip file contains five database service
names identifiable as tpurgent, tp, high, medium, and low for Autonomous Database with
Transaction Processing or JSON workloads.
The following shows the details for the default number of concurrent statements for each
connection service for Transaction Processing or JSON workloads.
Note:
The values in this table apply when the number of ECPUs is equal to or greater
than 4.
When the number of ECPUs is 2, all services use the concurrency limit 150. When
the number of ECPUs is 3, all services use the concurrency limit 225 (this applies
for Compute auto scaling enabled or disabled).
See Change MEDIUM Service Concurrency Limit (ECPU Compute Model) for more
information.
Service Concurrency Limits for Data Warehouse Workloads (OCPU Compute Model)
The tnsnames.ora file provided with the credentials zip file contains three database service
names identifiable as high, medium and low for Autonomous Database with Data Warehouse
workloads.
The following shows the details for the number of concurrent statements for each connection
service for Data Warehouse workloads, with OCPU auto scaling disabled, and with OCPU
auto scaling enabled.
Note:
The values in this table apply when the number of OCPUs is greater than 1. With 1
OCPU, the concurrency limit for each service is 300 and the DOP is 1.
When these concurrency levels are reached for the MEDIUM and HIGH consumer
groups, new SQL statements in that consumer group will be queued until one or more
running statements finish. With the LOW consumer group, when the concurrency limit
is reached you will not be able to connect new sessions.
For example, for an Autonomous Database instance with 16 OCPUs and OCPU auto
scaling disabled, the HIGH consumer group will be able to run 3 concurrent SQL
statements when the MEDIUM consumer group is not running any statements. The
MEDIUM consumer group will be able to run 20 concurrent SQL statements when the
HIGH consumer group is not running any statements. The LOW consumer group will
be able to run 4800 concurrent SQL statements. The HIGH consumer group can run at
least 1 SQL statement when the MEDIUM consumer group is also running statements.
When OCPU auto scaling is enabled, for the HIGH and MEDIUM consumer groups the
concurrency values will be three times larger.
The following table shows sample concurrency limits for a database with 16 OCPUs,
with OCPU auto scaling disabled and with OCPU auto scaling enabled.

Database Service  Concurrency Limit (Auto Scaling Disabled)  Concurrency Limit (Auto Scaling Enabled)
HIGH              3                                           9
MEDIUM            20                                          60
LOW               4800                                        4800
Service Concurrency Limits for Transaction Processing and JSON Database Workloads
(OCPU Compute Model)
The tnsnames.ora file provided with the credentials zip file contains five database
service names identifiable as tpurgent, tp, high, medium, and low for Autonomous
Database with Transaction Processing or with JSON Database workloads.
The following shows the details for the default number of concurrent statements for
each connection service for Transaction Processing or JSON Database workloads,
with OCPU auto scaling disabled, and with OCPU auto scaling enabled.
Note:
The values in this table apply when the number of OCPUs is greater than 1.
With 1 OCPU, the concurrency limit for each service is 300 and the DOP is
1.
Note:
Changing the concurrency limit is only allowed for an instance that has four (4) or
more ECPUs.
For example, with Compute auto scaling disabled, if your instance is configured with 400
ECPUs, by default Autonomous Database provides a concurrency limit of 100 for the
MEDIUM service:
0.25125 × 400 ECPUs = 100.5, truncated to 100 sessions (up to 100 concurrent queries). A
decimal result is always truncated.
In this example the MEDIUM service supports an application with up to 100 concurrent
queries with DOP of 4. If you only need 50 concurrent queries and you want a higher DOP
you can decrease the concurrency limit and the database increases the DOP. To do this, set
the MEDIUM service concurrency limit to 50. When you change the concurrency limit the
database calculates and sets the DOP based on the concurrency limit you select and the
number of ECPUs. For this example, with the concurrency limit set to 50, the new DOP is 12.
With Compute auto scaling enabled, the DOP is set to a value three times greater. In this
example the DOP value would be 36.
You can change the concurrency limit for the MEDIUM service in Database Actions or using
the PL/SQL package CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE.
Follow these steps to change the MEDIUM service concurrency limit in Database Actions:
1. Access Database Actions as the ADMIN user.
See Access Database Actions as ADMIN for more information.
2. On the Database Actions Launchpad, under Administration, click Set Resource
Management Rules.
3. On the Set Resource Management Rules page, select the Concurrency limit tab.
4. For the MEDIUM service, change the value to the desired concurrency limit by
entering a value or by clicking the Decrement or Increment icons.
If the concurrency limit you specify is not valid based on the number of ECPUs,
you receive a message listing the valid range of values for your instance.
5. Click Save Changes.
6. Click OK.
To reset the concurrency limit for the MEDIUM service to its default value, click Load
Default Values and click Save Changes.
As an alternative to using the Set Resource Management Rules page in Database Actions,
you can use PL/SQL to change the concurrency limit for the MEDIUM service:
1. Change the concurrency limit with the procedure
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE. For example:
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'MEDIUM',
concurrency_limit => 2);
END;
/
If the concurrency_limit you specify is not valid, based on the number of ECPUs, the
procedure shows an error message listing the valid range of values for your instance.
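2. Check the current concurrency limit and DOP values for all consumer groups. A sketch, assuming the CS_RESOURCE_MANAGER.LIST_CURRENT_RULES table function is available on your instance:
SELECT * FROM TABLE(CS_RESOURCE_MANAGER.LIST_CURRENT_RULES());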
This query returns the list of values for all consumer groups. After you modify the
concurrency limit as specified in Step 1, check the MEDIUM service CONCURRENCY_LIMIT
and DEGREE_OF_PARALLELISM values to verify your changes.
3. After you change the concurrency limit for the MEDIUM service, test your application by
connecting with the MEDIUM service to verify that the customized concurrency limit
meets your performance objectives.
When you want to go back to the default values, use the
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES PL/SQL procedure to revert to the default
settings for the MEDIUM service.
For example:
BEGIN
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'MEDIUM',
concurrency_limit => TRUE);
END;
/
• Changing the concurrency limit is only allowed for the MEDIUM service.
• Changing the concurrency limit is only allowed when the number of ECPUs is
greater than or equal to 4.
• Changing the concurrency limit also changes the degree of parallelism (in some
cases the value does not change, depending on the magnitude of the difference
between the old concurrency limit and the new value you set).
• The concurrency limit you set must be in the range:
– With Compute auto scaling disabled: between 1 and .75 x the number of
ECPUs
– With Compute auto scaling enabled: between 1 and 2.25 x the number of
ECPUs
• The MEDIUM service sets the following concurrency limit and DOP values by default:
– Concurrency limit with Compute auto scaling disabled: 0.25125 × number of ECPUs
when the number of ECPUs ≥ 8 (a decimal result is truncated); 2 when the number of
ECPUs is in the range 4 ≤ ECPUs < 8.
– Concurrency limit with Compute auto scaling enabled: 0.75375 × number of ECPUs
when the number of ECPUs ≥ 8 (a decimal result is truncated); 6 when the number of
ECPUs is in the range 4 ≤ ECPUs < 8.
– DOP (with auto scaling disabled or enabled): 4 when the number of ECPUs ≥ 8, or
TRUNC(ECPU/2) when the number of ECPUs < 8.
• By changing the value of the concurrency limit, the DOP of the MEDIUM service
can go as low as 2 and as high as .75 * number of ECPUs (if Compute auto
scaling is disabled) or 2.25 x number of ECPUs (if Compute auto scaling is
enabled).
See Use Auto Scaling for information on Compute auto scaling.
• At any time you can return to the default values for the MEDIUM service
concurrency limit and DOP.
your application’s performance needs, you can use the MEDIUM service and modify the
concurrency limit. For example, when you run single-user benchmarks, you can set the
concurrency limit of the MEDIUM service to 1 in order to obtain the highest degree of
parallelism (DOP).
Note:
Changing the concurrency limit is only allowed for an instance that has two (2) or
more OCPUs.
For example, if your instance is configured with 100 OCPUs, by default Autonomous
Database provides a concurrency limit of 126 for the MEDIUM service:
1.26 × 100 OCPUs = 126 sessions (up to 126 concurrent queries)
In this example using the MEDIUM service supports an application with up to 126 concurrent
queries with DOP of 4. If you only need 50 concurrent queries and you want a higher DOP
you can decrease the concurrency limit and thus increase the DOP. To do this, set the
MEDIUM service concurrency limit to 50. When you change the concurrency limit the system
calculates and sets the DOP based on the concurrency limit you select and the number of
OCPUs. For this example, with the concurrency limit set to 50, the new DOP is 12. When
OCPU auto scaling is enabled, the DOP is set to a value three times greater. In this example
the DOP value would be 36.
You can change the concurrency limit for the MEDIUM service in Database Actions or using
the PL/SQL package CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE.
Follow these steps to change the MEDIUM service concurrency limit in Database Actions:
1. Access Database Actions as the ADMIN user.
See Access Database Actions as ADMIN for more information.
2. On the Database Actions Launchpad, under Administration, click Set Resource
Management Rules.
3. On the Set Resource Management Rules page, select the Concurrency limit tab.
4. For the MEDIUM service, change the value to the desired concurrency limit by
entering a value or by clicking the Decrement or Increment icons.
If the concurrency limit you specify is not valid, based on the number of OCPUs,
you will receive a message such as the following, listing the valid range of values
for your instance:
This error message example is from an instance with 100 OCPUs (the 300
maximum value shown is 3 x number of OCPUs).
5. Click Save Changes.
6. Click OK.
To reset the concurrency limit for the MEDIUM service to its default value, click Load
Default Values and click Save Changes.
• Change MEDIUM Service Concurrency Limit with PL/SQL Procedure
UPDATE_PLAN_DIRECTIVE (OCPU Compute Model)
As an alternative to using the Set Resource Management Rules card in
Database Actions, you can use PL/SQL to change the concurrency limit for the
MEDIUM service.
• Change MEDIUM Service Concurrency Limit Notes (OCPU Compute Model)
Related Topics
• Change MEDIUM Service Concurrency Limit (ECPU Compute Model)
If your application requires customized concurrency, you can modify the
concurrency limit for your Autonomous Database MEDIUM service.
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'MEDIUM',
concurrency_limit => 2);
END;
/
If the concurrency_limit you specify is not valid, based on the number of OCPUs, you
will receive a message such as the following, listing the valid range of values for your
instance:
This procedure returns the list of values for all consumer groups. After you modify the
concurrency limit as specified in Step 1, check the MEDIUM service CONCURRENCY_LIMIT
and DEGREE_OF_PARALLELISM values to verify your changes.
3. After you change the concurrency limit for the MEDIUM service, test your
application by connecting with the MEDIUM service to verify that the customized
concurrency limit meets your performance objectives.
When you want to go back to the default values, use the
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES PL/SQL procedure to revert to the
default settings for the MEDIUM service.
For example:
BEGIN
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group =>
'MEDIUM', concurrency_limit => TRUE);
END;
/
• Changing the concurrency limit is only allowed for the MEDIUM service.
• Changing the concurrency limit is only allowed when the number of OCPUs is
greater than 1.
• Changing the concurrency limit also changes the degree of parallelism (in some cases the value does not change, depending on the magnitude of the difference between the old concurrency limit and the new value you set).
• The concurrency limit you set must be in the range:
– With OCPU auto scaling disabled: between 1 and 3 × the number of OCPUs
– With OCPU auto scaling enabled: between 1 and 9 × the number of OCPUs
• The MEDIUM service sets the following concurrency limit and DOP values by
default:
– Concurrency limit with OCPU auto scaling disabled: 1.26 × the number of OCPUs when the number of OCPUs ≥ 4; 5 when the number of OCPUs < 4
– Concurrency limit with OCPU auto scaling enabled: 3.78 × the number of OCPUs when the number of OCPUs ≥ 4; 15 when the number of OCPUs < 4
– DOP (with OCPU auto scaling disabled or enabled): 4 when the number of OCPUs ≥ 4, or the number of OCPUs when the number of OCPUs < 4
• By changing the value of the concurrency limit, the DOP of the MEDIUM service can go as low as 2 and as high as 2 × the number of OCPUs (if compute auto scaling is disabled) or 6 × the number of OCPUs (if compute auto scaling is enabled).
See Use Auto Scaling for information on compute auto scaling.
• At any time you can return to the default values for the MEDIUM service
concurrency limit and DOP.
The predefined job_class values, TPURGENT, TP, HIGH, MEDIUM and LOW map to the
corresponding consumer groups. These job classes allow you to specify the consumer group
a job runs in with DBMS_SCHEDULER.CREATE_JOB.
For example, use the following to create a single regular job to run in the HIGH consumer group:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'update_sales',
job_type => 'STORED_PROCEDURE',
job_action => 'OPS.SALES_PKG.UPDATE_SALES_SUMMARY',
start_date => '28-APR-19 07.00.00 PM Australia/Sydney',
repeat_interval => 'FREQ=DAILY;INTERVAL=2',
end_date => '20-NOV-19 07.00.00 PM Australia/Sydney',
auto_drop => FALSE,
job_class => 'HIGH',
comments => 'My new job');
END;
/
You can set CPU/IO shares in Database Actions or using the PL/SQL package
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE.
To use Database Actions to change the CPU/IO share values for consumer groups:
1. Access Database Actions as the ADMIN user.
See Access Database Actions as ADMIN for more information.
2. On the Database Actions Launchpad, under Administration, click Set Resource
Management Rules.
3. Select the CPU/IO shares tab to set CPU/IO share values for consumer groups.
4. Set the desired CPU/IO share value for a consumer group by entering a value or
by clicking the Decrement or Increment icons.
5. Click Save Changes.
6. Click OK.
To reset CPU/IO shares values to the defaults, click Load Default Values and click
Save Changes to apply the populated values.
As an alternative to using Database Actions, you can use the PL/SQL procedure
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE to change the CPU/IO share values
for consumer groups.
For example, on an Autonomous Data Warehouse database, run the following script as the ADMIN user to set the CPU/IO shares to 8, 2, and 1 for the consumer groups HIGH, MEDIUM, and LOW respectively. This allows the consumer group HIGH to use 4 times more CPU/IO resources than the consumer group MEDIUM and 8 times more CPU/IO resources than the consumer group LOW:
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'HIGH', shares => 8);
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'MEDIUM', shares => 2);
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'LOW', shares => 1);
END;
/
For example, on an Autonomous Transaction Processing database, run the following script as the ADMIN user to set the CPU/IO shares to 12, 4, 2, 1, and 1 for the consumer groups TPURGENT, TP, HIGH, MEDIUM, and LOW respectively. This allows the consumer group TPURGENT to use 3 times more CPU/IO resources than the consumer group TP and 12 times more CPU/IO resources than the consumer group MEDIUM:
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'TPURGENT', shares => 12);
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'TP', shares => 4);
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'HIGH', shares => 2);
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'MEDIUM', shares => 1);
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group => 'LOW', shares => 1);
END;
/
When you want to go back to the default shares values you can use the PL/SQL procedure
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES to revert to the default settings.
For example, on an Autonomous Data Warehouse database, run the following script as the
ADMIN user to set the CPU/IO shares to default values for the consumer groups HIGH,
MEDIUM, and LOW:
BEGIN
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'HIGH', shares => TRUE);
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'MEDIUM', shares => TRUE);
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'LOW', shares => TRUE);
END;
/
Similarly, on an Autonomous Transaction Processing database, run the following script as the ADMIN user to revert the CPU/IO shares for the consumer groups TPURGENT, TP, HIGH, MEDIUM, and LOW to their default values:
BEGIN
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'TPURGENT', shares =>
TRUE);
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'TP', shares => TRUE);
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'HIGH', shares => TRUE);
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'MEDIUM', shares => TRUE);
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(consumer_group => 'LOW', shares => TRUE);
END;
/
As an alternative to Database Actions, you can use the PL/SQL procedure CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE to set runtime usage rules. For example, the following sets a 1000 MB I/O limit and a 120 second elapsed time limit for SQL statements running in the HIGH consumer group:
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group =>
'HIGH', io_megabytes_limit => 1000, elapsed_time_limit => 120);
END;
/
To reset the values and lift the limits, you can set the values to null:
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(consumer_group =>
'HIGH', io_megabytes_limit => null, elapsed_time_limit => null);
END;
/
Fast ingest optimizes high-frequency, single-row data inserts from applications such as Internet of Things (IoT) sensors and cameras. For these applications, data might be collected and written to the database in high volumes for later analysis.
Fast ingest is very different from normal Oracle Database transaction processing where data
is logged and never lost once "written" to the database (that is, committed). In order to
achieve the maximum ingest throughput, the normal Oracle transaction mechanisms are
bypassed, and it is the responsibility of the application to check to see that all data was
indeed written to the database. Special APIs have been added that can be called to check if
the data has been written to the database.
For information on fast ingest and the steps involved in using this feature, refer to Using Fast
Ingest in Database Performance Tuning Guide.
In addition, the following are required to use Fast Ingest on Autonomous Database:
• Enable the Optimizer to Use Hints:
To use fast ingest with Autonomous Database, you must enable the optimizer to use hints
by setting the optimizer_ignore_hints parameter to FALSE at the session or system
level, as appropriate.
Depending on your Autonomous Database workload type, by default
optimizer_ignore_hints may be set to FALSE at the system level. See Manage
Optimizer Statistics on Autonomous Database for more information.
• Create a Table for Fast Ingest:
Prerequisites for Fast Ingest Table lists the limitations for tables to be eligible for fast ingest (tables with the specified characteristics cannot use fast ingest). A short sketch follows this list.
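Where the prerequisites are met, the flow is to create a table that is memoptimized for write and then insert with the MEMOPTIMIZE_WRITE hint. A minimal sketch; the table and column names are illustrative, not from the source:
-- Allow the optimizer to honor hints, as required for fast ingest
ALTER SESSION SET optimizer_ignore_hints = FALSE;

-- Create a table that is eligible for fast ingest
CREATE TABLE sensor_data (
  sensor_id NUMBER,
  reading   NUMBER,
  ts        TIMESTAMP
) MEMOPTIMIZE FOR WRITE;

-- Buffered, deferred write; the application must verify durability
-- using the fast ingest APIs described in the tuning guide
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO sensor_data VALUES (101, 98.6, SYSTIMESTAMP);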
Note:
To view metrics you must have the required access as specified in an Oracle
Cloud Infrastructure policy (whether you're using the Console, the REST API,
or another tool). See Getting Started with Policies for information on policies.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
To view metrics for an Autonomous Database instance:
1. On the Details page, under Resources, click Metrics.
2. There is a chart for each metric. In each chart you can select the Interval and
Statistic or use the default values.
The following list shows the default Oracle Cloud Infrastructure Console metrics. See
Available Metrics: oci_autonomous_database for a list of the database metrics and
dimensions.
Note:
Availability is calculated based on the "Monthly Uptime Percentage" described in the Oracle PaaS and IaaS Public Cloud Services Pillar Document under Delivery Policies (see Autonomous Database Availability Service Level Agreement).
Note:
To view logs and audit trials you must have the required access as specified
in an Oracle Cloud Infrastructure policy (whether you're using the Console,
the REST API, or another tool). See Getting Started with Policies for
information on policies.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu next to Oracle Cloud.
• From the left navigation list click Observability & Management. Under
Monitoring, click Service Metrics.
To use the metrics service to view Autonomous Database metrics:
1. On the Service Metrics page, under Compartment select your compartment.
2. On the Service Metrics page, under Metric Namespace select
oci_autonomous_database.
3. If there are multiple Autonomous Databases in the compartment you can show
metrics aggregated across the Autonomous Databases by selecting Aggregate
Metric Streams.
4. If you want to limit the metrics you see, next to Dimensions click Add (click Edit if
you have already added dimensions).
a. In the Dimension Name field select a dimension.
b. In the Dimension Value field select a value.
c. Click Done.
In the Edit dimensions dialog click +Additional Dimension to add an additional
dimension. Click x to remove a dimension.
To create an alarm on a specific metric, click Options and select Create an Alarm on this
Query. See Managing Alarms for information on setting and using alarms.
Note:
Database Management features such as AWR Explorer and Performance
Hub on Managed Database Details are unavailable for Autonomous
Databases. However, Performance Hub for Autonomous Databases
continues to be a free feature that you can access from the Autonomous
Database Details page without enabling Database Management.
For further information, see Monitor Autonomous Database with
Performance Hub.
Note:
If you enable SQL Tracing, your application performance for the session may be degraded while trace collection is enabled. This negative performance impact is expected due to the overhead of collecting and saving trace data.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password'
);
END;
/
See CREATE_CREDENTIAL Procedure for details on the arguments for username and
password parameters for different object storage services.
3. Set initialization parameters to specify the Cloud Object Storage URL for a bucket for
SQL trace files and to specify the credentials to access the Cloud Object Storage.
a. Set database property DEFAULT_LOGGING_BUCKET to specify the logging bucket on
Cloud Object Storage.
For example, if you create the bucket with Oracle Cloud Infrastructure Object
Storage:
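A minimal sketch of the statement; the region, namespace, and bucket in the URL are placeholders you must replace with your own values:
ALTER DATABASE PROPERTY SET
   DEFAULT_LOGGING_BUCKET = 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/';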
b. Set database property DEFAULT_CREDENTIAL to specify the credential the database uses to write trace files to the logging bucket. Including the schema name with the credential is required; in this example the schema is "ADMIN". For example:
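A minimal sketch, assuming the credential created earlier in this procedure (DEF_CRED_NAME, owned by ADMIN):
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.DEF_CRED_NAME';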
Note:
If you enable SQL tracing, your application performance for the session may be degraded while trace collection is enabled. This negative performance impact is expected due to the overhead of collecting and saving trace data.
Before you enable SQL tracing you must configure the database to save SQL Trace
files. See Configure SQL Tracing on Autonomous Database for more information.
To enable SQL tracing, do the following:
1. (Optional) Set a client identifier for the application. This step is optional but is
recommended. SQL tracing uses the client identifier as a component of the trace
file name when the trace file is written to Cloud Object Store.
For example:
BEGIN
DBMS_SESSION.SET_IDENTIFIER('sqlt_test');
END;
/
2. (Optional) Set a module name for the application. This step is optional but is
recommended. SQL tracing uses the module name as a component of the trace
file name when the trace file is written to Cloud Object Store.
For example:
BEGIN
DBMS_APPLICATION_INFO.SET_MODULE('modname', null);
END;
/
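3. Enable SQL tracing for the session. A minimal sketch, assuming the standard session-level parameter applies:
ALTER SESSION SET SQL_TRACE = TRUE;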
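To disable SQL tracing:
1. Disable tracing for the session. A minimal sketch, assuming the same session-level parameter used to enable tracing:
ALTER SESSION SET SQL_TRACE = FALSE;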
2. (Optional) As needed for your environment, reset the database property DEFAULT_LOGGING_BUCKET to clear the value for the logging bucket on Cloud Object Storage.
For example:
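A minimal sketch; setting the property to an empty value clears the logging bucket:
ALTER DATABASE PROPERTY SET DEFAULT_LOGGING_BUCKET = '';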
When you disable SQL tracing, the tracing data collected while the session runs with tracing
enabled is copied to a table and sent to a trace file on Cloud Object Store. You have two
options to view trace data:
• View and analyze SQL Trace data in the trace file saved to Cloud Object Store. See View
Trace File Saved to Cloud Object Store on Autonomous Database for more information.
• View and analyze SQL Trace data saved to the view SESSION_CLOUD_TRACE. See View
Trace Data in SESSION_CLOUD_TRACE View on Autonomous Database for more
information.
The SQL Trace facility writes the trace data collected in the session to Cloud Object Store in
the following format:
default_logging_bucket/sqltrace/clientID/moduleName/sqltrace_numID1_numID2.trc
• numID1_numID2: two identifiers that the SQL Trace facility provides. The numID1 and numID2 numeric values uniquely distinguish each trace file name from those of other sessions that are tracing and creating trace files in the same bucket in Cloud Object Storage.
When the database service supports parallelism and a session runs a parallel
query, the SQL Trace facility can produce multiple trace files with different numID1
and numID2 values.
Note:
When SQL tracing is enabled and disabled multiple times within the same
session, each trace iteration generates a separate trace file in Cloud Object
Store. To avoid overwriting previous traces that were generated in the
session, subsequently generated files follow the same naming convention
and add a numeric suffix to the trace file name. This numeric suffix starts
with the number 1 and is incremented by 1 for each tracing iteration
thereafter.
For example, the following is a sample generated trace file name when you set the client identifier to "sqlt_test" and the module name to "modname":
sqltrace/sqlt_test/modname/sqltrace_5415_56432.trc
You can run TKPROF to translate the trace file into a readable output file.
1. Copy the trace file from Object Store to your local system.
2. Navigate to the directory in which the trace file is saved.
3. Run the TKPROF utility from the operating system prompt using the following
syntax:
tkprof filename1 filename2 [waits=yes|no] [sort=option] [print=n]
[aggregate=yes|no] [insert=filename3] [sys=yes|no] [table=schema.table]
[explain=user/password] [record=filename4] [width=n]
The input and output files are the only required arguments.
4. To view online Help, invoke TKPROF without arguments.
See "Tools for End-to-End Application Tracing" in Oracle Database SQL Tuning Guide
for information about using the TKPROF utility.
DESC SESSION_CLOUD_TRACE
The ROW_NUMBER specifies the ordering for trace data found in the TRACE column. Each line of
trace output written to a trace file becomes a row in the table and is available in the TRACE
column.
After you disable SQL tracing for the session, you can run queries on the
SESSION_CLOUD_TRACE view.
For example:
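A minimal sketch of such a query, ordering the trace lines by their position in the trace output:
SELECT trace
FROM   session_cloud_trace
ORDER  BY row_number;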
The data in SESSION_CLOUD_TRACE persists for the duration of the session. After you log out or
close the session, the data is no longer available.
If SQL Trace is enabled and disabled multiple times within the same session,
SESSION_CLOUD_TRACE shows the trace data for all the iterations cumulatively. Thus, re-
enabling tracing in a session after previously disabling tracing does not remove the trace data
produced by the earlier iteration.
This topic covers Oracle Autonomous Database Serverless metrics. See Monitor
Databases with Autonomous Database Metrics for information on Oracle Autonomous
Database on Dedicated Exadata Infrastructure metrics.
• DISPLAYNAME
The friendly name of the Autonomous Database.
• REGION
The region in which the Autonomous Database resides.
• RESOURCEID
The OCID of the Autonomous Database.
• RESOURCENAME
The name of the Autonomous Database.
The metrics listed in the following table are automatically available for any Autonomous
Database that you create. You do not need to enable monitoring on the resource to get these
metrics.
Note:
Valid alarm intervals are 5 minutes or greater due to the frequency at which these
metrics are emitted. See To create an alarm for details on creating alarms.
In the following table, metrics that are marked with an asterisk (*) can be viewed only on the
Service Metrics page of the Oracle Cloud Infrastructure console. All metrics can be filtered
by the dimensions described in this topic.
AND sales.channel_id=channels.channel_id
AND times.calendar_month_desc IN ('2000-09', '2000-10')
AND country_iso_code='US'
GROUP BY channel_desc;
For more information on the SH schema see Sample Schemas and Schema
Diagrams.
For more information on database services, see Predefined Database Service Names
for Autonomous Database.
Check the repository to find newer versions of the Autonomous Database Free Container Image.
• The following Autonomous Database built-in tools are not supported:
– Graph
– Oracle Machine Learning
– Data Transforms
• When Autonomous Database runs in a container, the container provides a local
Autonomous Database instance. The container image does not include the features that
are only available through the Oracle Cloud Infrastructure Console or the APIs. Some
features that are available in-database and also available through the Oracle Cloud
Infrastructure Console can still be used through in-database commands, such as
resetting the ADMIN password.
Note:
When you run Autonomous Database Free Container Image in a
container, you can start an instance, stop an instance, or restart an
instance by starting, stopping or restarting the container.
For more details and additional information, search for "Oracle Autonomous
Database Free" on Oracle Cloud Infrastructure Registry.
GitHub Packages:
For example, you can use the podman command to pull the Autonomous Database Free Container Image from GitHub Packages. The following podman run command starts a container from the image:
podman run -d \
-p 1521:1522 \
-p 1522:1522 \
-p 8443:8443 \
-p 27017:27017 \
-e WORKLOAD_TYPE='ATP' \
-e WALLET_PASSWORD=*** \
-e ADMIN_PASSWORD=*** \
--cap-add SYS_ADMIN \
--device /dev/fuse \
--name adb-free \
container-registry.oracle.com/database/adb-free:latest
Command notes:
• The WORKLOAD_TYPE can be either ATP or ADW. The default value is ATP.
• By default, the database is named either MYATP or MYADW, depending on the passed
WORKLOAD_TYPE value. Optionally, if you want a different database name than the
default, you can set the DATABASE_NAME parameter. The database name can contain
only alphanumeric characters.
• On container startup, the corresponding MY<WORKLOAD_TYPE>.pdb is downloaded
and plugged in from a public Object Storage bucket.
• The wallet is generated using the provided WALLET_PASSWORD.
• It is necessary to change the ADMIN_PASSWORD upon initial login.
• For an OFS mount, the container starts with SYS_ADMIN capability. Also, virtual
device /dev/fuse must be accessible.
• The -p options specify the ports that are forwarded to the container process:
– 1521: TLS
– 1522: mTLS
– 8443: HTTPS port for ORDS / APEX and Database Actions
– 27017: Mongo API
If the container must connect through a proxy, pass the proxy environment variables when starting the container. For example:
podman run -d \
-p 1521:1522 \
-p 1522:1522 \
-p 8443:8443 \
-p 27017:27017 \
-e WORKLOAD_TYPE='ATP' \
-e WALLET_PASSWORD=*** \
-e ADMIN_PASSWORD=*** \
-e http_proxy=http://example-corp-proxy.com:80/ \
-e https_proxy=http://example-corp-proxy.com:80/ \
-e no_proxy=localhost,127.0.0.1 \
-e HTTP_PROXY=http://example-corp-proxy.com:80/ \
-e HTTPS_PROXY=http://example-corp-proxy.com:80/ \
-e NO_PROXY=localhost,127.0.0.1 \
--cap-add SYS_ADMIN \
--device /dev/fuse \
--name adb-free \
container-registry.oracle.com/database/adb-free:latest
Available Commands
You can view the list of available commands using the following command:
adb-cli --help
Usage: adb-cli [OPTIONS] COMMAND [ARGS]...
ADB-S Command Line Interface (CLI) to perform container-runtime
database operations
Options:
-v, --version Show the version and exit.
--help Show this message and exit.
Commands:
add-database
change-password
Add Database
You can add a database using the following command:
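A sketch of the command; the flag names and the password are assumptions, so verify them with adb-cli add-database --help:
adb-cli add-database --workload-type "ATP" --admin-password "Welcome_12345"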
Change Password
You can change the admin password using the following command:
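A sketch of the command; the flag name and the database name are assumptions, so verify them with adb-cli change-password --help:
adb-cli change-password --database-name "MYATP"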
ORDS/APEX/Database Actions
The container hostname is used to generate self-signed SSL certificates to serve HTTPS
traffic on port 8443. Oracle APEX and Database Actions can be accessed using the container
host (or simply localhost).
• Oracle APEX: https://localhost:8443/ords/apex
• Database Actions: https://localhost:8443/ords/sql-developer
Note:
For additional databases plugged in using adb-cli add-database command, use
the URL formats https://localhost:8443/ords/{database_name}/apex
and https://localhost:8443/ords/{database_name}/sql-developer
to access APEX and Database Actions respectively.
The TNS alias mappings for these connect strings are in $TNS_ADMIN/tnsnames.ora. See Manage Concurrency and Priorities on Autonomous Database for information on the service names in tnsnames.ora.
export TNS_ADMIN=/scratch/tls_wallet
3. If you want to connect to a remote host where the Autonomous Database Free
Container Image is running, replace localhost in $TNS_ADMIN/tnsnames.ora
with the remote host FQDN.
For example:
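One way to do the replacement, assuming GNU sed and a hypothetical host name myhost.example.com:
sed -i 's/localhost/myhost.example.com/g' $TNS_ADMIN/tnsnames.ora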
For example, use sqlplus to connect to the Transaction Processing workload Autonomous Database instance:
sqlplus admin/password@myatp_low_tls
For example, use sqlplus to connect to the Data Warehouse workload Autonomous
Database instance:
sqlplus admin/password@myadw_low_tls
To trust the container's self-signed certificate on your client host, copy the certificate out of the container and update the CA trust store:
podman cp adb-free:/u01/app/oracle/wallets/tls_wallet/adb_container.cert adb_container.cert
sudo cp adb_container.cert /etc/pki/ca-trust/source/anchors
sudo update-ca-trust
Then connect using the standard aliases. For example, use sqlplus to connect to the Transaction Processing workload Autonomous Database instance:
sqlplus admin/password@myatp_low
For example, use sqlplus to connect to the Data Warehouse workload Autonomous
Database instance:
sqlplus admin/password@myadw_low
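The inspect output shown below is typical of a named volume; a sketch of creating and inspecting it (the volume name matches the output):
podman volume create adb_container_volume
podman volume inspect adb_container_volume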
[
{
"Name": "adb_container_volume",
"Driver": "local",
"Mountpoint": "/share/containers/storage/volumes/
adb_container_volume/_data",
"CreatedAt": "2023-09-11T21:23:34.305877073Z",
"Labels": {},
"Scope": "local",
"Options": {},
"MountCount": 0,
"NeedsCopyUp": true,
"NeedsChown": true
}
]
3. Start the source container, mounting the volume to /u01/data in the container.
For example:
podman run -d \
-p 1521:1522 \
-p 1522:1522 \
-p 8443:8443 \
-p 27017:27017 \
-e WORKLOAD_TYPE='ATP' \
-e WALLET_PASSWORD=*** \
-e ADMIN_PASSWORD=*** \
--cap-add SYS_ADMIN \
--device /dev/fuse \
--name source_adb_container \
--volume adb_container_volume:/u01/data \
container-registry.oracle.com/database/adb-free:latest
4. This example assumes you have previously created your data in the APP_USER schema.
5. Export the data from the APP_USER schema to the container volume.
a. Connect as ADMIN and create the ORA_EXP_DIR directory pointing to /u01/data.
sqlplus admin/**************@myadw_high
SQL> exec
DBMS_CLOUD_CONTAINER_ADMIN.create_export_directory('/u01/data');
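You can then confirm the directory path; a sketch of one way, using the standard dictionary view:
SELECT directory_path
FROM   dba_directories
WHERE  directory_name = 'ORA_EXP_DIR';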
DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/data
b. Run the export job in schema mode for both the ADMIN and APP_USER schemas.
DECLARE
h1 NUMBER;
s VARCHAR2(1000):=NULL;
errorvarchar VARCHAR2(100):= 'ERROR';
tryGetStatus NUMBER := 0;
success_with_info EXCEPTION;
PRAGMA EXCEPTION_INIT(success_with_info, -31627);
BEGIN
h1 := dbms_datapump.OPEN(operation => 'EXPORT', job_mode =>
'SCHEMA', job_name => 'EXPORT_MY_ADW_4', version => 'COMPATIBLE');
tryGetStatus := 1;
dbms_datapump.add_file(handle => h1, filename =>
'EXPORT_MY_ADW.LOG', directory => 'ORA_EXP_DIR', filetype =>
DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
dbms_datapump.metadata_filter(handle => h1, name =>
'SCHEMA_EXPR', VALUE => 'IN(''ADMIN'', ''APP_USER'')');
dbms_datapump.add_file(handle => h1, filename => 'MY_ADW_%U.DMP',
directory => 'ORA_EXP_DIR', filesize => '500M', filetype =>
DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.start_job(handle => h1, skip_current => 0,
abort_step => 0);
dbms_datapump.detach(handle => h1);
errorvarchar := 'NO_ERROR';
EXCEPTION
WHEN OTHERS THEN
BEGIN
IF ((errorvarchar = 'ERROR')AND(tryGetStatus=1)) THEN
DBMS_DATAPUMP.DETACH(h1);
END IF;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
RAISE;
END;
/
6. Verify the export log and check for any errors and for successful completion.
7. Stop and remove the source container.
Note:
The adb_container_volume volume persists across container restarts and removals.
8. Start the destination container, mounting the same volume to /u01/data. For example:
podman run -d \
-p 1521:1522 \
-p 1522:1522 \
-p 8443:8443 \
-p 27017:27017 \
-e WORKLOAD_TYPE='ATP' \
-e WALLET_PASSWORD=*** \
-e ADMIN_PASSWORD=*** \
--cap-add SYS_ADMIN \
--device /dev/fuse \
--name dest_adb_container \
--volume adb_container_volume:/u01/data \
container-registry.oracle.com/database/adb-free:latest
DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/data
DECLARE
h1 NUMBER;
s VARCHAR2(1000):=NULL;
errorvarchar VARCHAR2(100):= 'ERROR';
tryGetStatus NUMBER := 0;
success_with_info EXCEPTION;
PRAGMA EXCEPTION_INIT(success_with_info, -31627);
BEGIN
h1 := dbms_datapump.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA',
job_name => 'IMPORT_MY_ADW_4', version => 'COMPATIBLE');
tryGetStatus := 1;
dbms_datapump.add_file(handle => h1, filename => 'IMPORT_MY_ADW.LOG',
directory => 'ORA_EXP_DIR', filetype =>
DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR',
VALUE => 'IN(''ADMIN'', ''APP_USER'')');
dbms_datapump.add_file(handle => h1, filename => 'MY_ADW_%U.DMP',
directory => 'ORA_EXP_DIR', filesize => '500M', filetype =>
DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step
=> 0);
dbms_datapump.detach(handle => h1);
errorvarchar := 'NO_ERROR';
EXCEPTION
WHEN OTHERS THEN
BEGIN
IF ((errorvarchar = 'ERROR')AND(tryGetStatus=1)) THEN
DBMS_DATAPUMP.DETACH(h1);
END IF;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
RAISE;
END;
/
4
Tutorials
• Load and Analyze Your Data with Autonomous Database
Learn about Autonomous Database Serverless and learn how to create an Autonomous
Database in just a few clicks. Then load data into your database, query it, and visualize it.
• Self-service Tools for Everyone Using Oracle Autonomous Database
Introduces the suite of data tools built into Oracle Autonomous Database.
• Manage and Monitor Autonomous Database
Connect using secure wallets and monitor your Autonomous Database instances.
• Integrate, Analyze and Act on All Data using Autonomous Database
• Load the Autonomous Database with Data Studio
• QuickStart Java applications with Autonomous Database
Follow these easy steps to connect your Java applications to Autonomous Database
using the Oracle JDBC driver and Universal Connection Pool.
Data Studio - Self-service tools for everyone using Oracle Autonomous Database
This is an overview Data Studio workshop to introduce you to different tools using a sample
data analysis task. After this workshop you will have a working knowledge on how to use
these tools.
The tools covered are:
• Catalog
• Data Load
• Data Transforms
• Data Analysis
• Data Insight
After completing this workshop, you can look at the individual in-depth workshops for
each tool.
This workshop provides practical examples of how to load data from several different
systems, as well as how to link to, and feed data in from, data in cloud storage systems.
5
Features
Users benefit by using data stored in multiple places without needing to understand the complexities of the data lake.
Note:
Security principals are not required for data that is stored publicly.
Load Data
You can use DBMS_CLOUD data loading APIs as part of your ETL processing flows.
DBMS_CLOUD APIs simplify the loading process by encapsulating the complexities of
dealing with different data sources and producing highly efficient loading jobs. You
specify the source properties (for example, data location, file type, and columns for
unstructured files) and their mapping to a target table. The documentation describes
the details of loading data from a variety of sources and describes debugging data
loads, including the processing steps and bad records that were encountered.
Loading data involves the following:
• Creating credentials
• Loading different types of files, including JSON, data pump, delimited text, Parquet, Avro,
and ORC.
• Replication using Oracle GoldenGate, and more.
Some of the key DBMS_CLOUD APIs for loading data are:
• COPY_DATA Procedure: Loads data from object storage sources. Specify the source and how that source maps to a target table. The API supports a variety of file types, including delimited text, Parquet, Avro, and ORC, and there are numerous features for logging the processing (a short example follows this list).
• COPY_COLLECTION Procedure: Loads JSON data from object storage into collections.
• SEND_REQUEST Function and Procedure: The most flexible approach,
DBMS_CLOUD.SEND_REQUEST sends and processes REST requests to web service
providers. Use DBMS_CLOUD.SEND_REQUEST in your own PL/SQL procedures to load data
that is not available in an accessible file store (for example, weather data from
government services).
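For illustration, a minimal DBMS_CLOUD.COPY_DATA sketch; the target table, credential name, and file URI are placeholders you must replace:
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'SALES',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales*.csv',
    format          => json_object('type' value 'csv', 'skipheaders' value '1'));
END;
/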
You can also use data pipelines for continuous incremental data loading from your data lake
(as data arrives on object store it is loaded to a database table). Data import pipelines
leverage the same underlying loading capabilities provided by DBMS_CLOUD.
The DBMS_CLOUD_PIPELINE package offers a set of APIs to create, manage, and run a
pipeline, including:
• CREATE_PIPELINE Procedure: Creates a new pipeline.
• SET_ATTRIBUTE Procedure: Allows you to specify pipeline properties, including: a
description of the source data and how it maps to a target, the load interval, the job
priority and more.
• START_PIPELINE Procedure: Starts a pipeline for continuous incremental data loading.
See Using Data Pipelines for Continuous Load and Export for more information; a short sketch follows.
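For illustration, a minimal load pipeline sketch; the pipeline name, credential, location, and table are placeholders, and the attribute names follow the DBMS_CLOUD_PIPELINE documentation:
BEGIN
  DBMS_CLOUD_PIPELINE.CREATE_PIPELINE(
    pipeline_name => 'MY_LOAD_PIPELINE',
    pipeline_type => 'LOAD',
    description   => 'Continuously load new files from object store');

  DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE('MY_LOAD_PIPELINE', 'credential_name', 'DEF_CRED_NAME');
  DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE('MY_LOAD_PIPELINE', 'location',
    'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales/');
  DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE('MY_LOAD_PIPELINE', 'table_name', 'SALES');
  DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE('MY_LOAD_PIPELINE', 'format', json_object('type' value 'csv'));
  DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE('MY_LOAD_PIPELINE', 'interval', '20');

  DBMS_CLOUD_PIPELINE.START_PIPELINE('MY_LOAD_PIPELINE');
END;
/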
Link Data
Autonomous Database external tables are used to link to data. An external table accesses
the files in object store directly without loading the data. This is ideal for cases where you
may want to quickly explore data that is in object storage to understand its value. Applications
query external tables as they would any other table in Autonomous Database; the location of the source is transparent. This means that any SQL-based tool or application can query
across the data warehouse and data lake, finding new insights that would have otherwise
been difficult to achieve.
External tables support a variety of file types, including: delimited text, JSON, Parquet, Avro,
and ORC. Review the documentation to learn more about external concepts, including the
external table types, validating sources, and leveraging Oracle Cloud Infrastructure Data
Catalog to automatically generate external tables.
Listed below are a few of the DBMS_CLOUD APIs for working with external tables:
• CREATE_EXTERNAL_TABLE Procedure: Creates an external table on files in object storage.
• VALIDATE_EXTERNAL_TABLE Procedure: Validates that the source files are consistent with the external table definition.
• CREATE_EXTERNAL_PART_TABLE Procedure: Creates an external partitioned table on files in object storage.
• CREATE_HYBRID_PART_TABLE Procedure: Creates a hybrid partitioned table with some partitions in the database and others on external files.
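For illustration, a minimal DBMS_CLOUD.CREATE_EXTERNAL_TABLE sketch; the table, columns, credential, and file URI are placeholders:
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'SALES_EXT',
    credential_name => 'DEF_CRED_NAME',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales/*.csv',
    column_list     => 'PROD_ID NUMBER, CUST_ID NUMBER, AMOUNT_SOLD NUMBER',
    format          => json_object('type' value 'csv', 'skipheaders' value '1'));
END;
/
Queries against SALES_EXT then read the files in place; no data is loaded.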
The location of data is completely transparent to the application. The application simply
connects to Autonomous Database and then uses all of the Oracle SQL query language to
query across your data sets.
This allows you to:
• Correlate information from data lake and data warehouse.
• Access data from any SQL tool or application.
• Preserve your investment in tools and skill sets.
• Safeguard sensitive data using Oracle Database advanced security policies.
Advanced Analytics
Integrating the various types of data allows business analysts to apply Autonomous
Database's built-in analytics across all data and you do not need to deploy specialized
analytic engines.
Using Autonomous Database set up as a data lakehouse eliminates costly data
replication, security challenges and administration overhead. Most importantly, it
allows cross-domain analytics. For example, machine learning models can easily take
advantage of spatial analytics to answer questions like, "What is the predicted impact
of the movement of a hurricane on our product shipments?"
Autonomous Database builds in a range of advanced analytic capabilities, such as the machine learning and spatial analytics mentioned above, that you can take advantage of.
Business analysts and data stewards apply business terms and categories to the data. You now have a collaborative environment for data providers and consumers to search assets based on names, business terms, tags, and custom properties.
Autonomous Database integrates with Data Catalog, simplifying the administrative
process and promoting semantic consistency across the lakehouse. The data lake
metadata harvested by Data Catalog is automatically synchronized with Autonomous
Database. This allows business analysts to immediately query data in the object store
and combine that information with data in Autonomous Database using their favorite
SQL based tool and application.
View details about how to set up the connection to Data Catalog.
You can navigate to the Data Load page, bring in the new contents, and return to your analysis in progress.
Load your data from diverse sources such as files, databases, and cloud storage, all consolidated in a single location. Some of the cloud storage services the Data Studio tool supports are Oracle Cloud Infrastructure (OCI) Object Storage, Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage.
Once your data is available, analyze it, generate insights, and create reports.
On the right of the Data Studio Overview page, refer to the Getting Started panel to learn more about this tool.
Note:
You will lose your data once you click Database Actions in the header or click the selector icon.
Let us run an analysis with data from different sources and generate insights from it.
Load Data
Select the Data Load menu from the navigation pane in the Data Studio Overview
page to load data from files on your local device, from remote databases, or from
cloud storage buckets.
• Data Load Jobs: You can select all the existing data load jobs and retrieve them as per
your convenience from this menu. Refer to the Checking Data Load Jobs section for
more details.
• Cloud Locations: You can view all the existing cloud storage links from this option. This
option enables you to edit or delete a link. You can also create new cloud storage links.
Refer to the Managing Cloud Storage Connections chapter for more details.
After you load the data using the data load job, it becomes available in the Data Load page. You can later inspect, review, and delete the table by clicking the Actions icon. You can also click the Reload Cart button to reload the current cart if you want to make changes to the table. This data will be the source data of the Analytic View. Use the breadcrumbs at the top of the page to navigate back to the home page.
Follow the steps described in The Data Load Page chapter in this book to load data from
Oracle Cloud Infrastructure to your Autonomous Database.
Data Analysis
Analytic views provide metadata that highlight the parts of your data that are most important
to your organization. By creating an AV, you can improve the value of automatic insights and
the analysis of your data because the system has this metadata.
Select the Data Analysis menu from the navigation pane in the Data Studio Overview page
to create Analytic Views and analyze the data from the Analytic View. The table you loaded
using the Load Data feature is used to create the Analytic View. You can select both hierarchies and measures from Analytic Views. Analytic Views use diagrams, symbols, and text to represent how the data flows and how its parts connect. The Data Analysis tool helps you find structural errors in the Analytic View you create, spotting anything from null values in a join column to duplicated fields. You can develop a visual representation of your analysis in the form of tables, pivot tables, and bar charts. You can also export the Analytic View to Tableau or Power BI for better visualization.
Create Analytic Views by referring to The Data Analysis Tool chapter in this book.
The Related Insights panel on the right works with the measures associated with the Analytic View to visualize data in the form of bar charts.
Automatic Insights
Select the Automatic Insights menu from the navigation pane in the Data Studio Overview
page to generate insights automatically. The insights appear in the Data Insights dashboard
as a series of bar charts.
By selecting the user, Analytic View, and the column name, you can view the data points of
the actual values that deviate considerably from the forecast values. This type of predictive
analytics will enable you to understand a pattern better and make better decisions. Then you
can act on your data to optimize it or to improve its performance.
For example, consider that the fact table for the insights records values about different
companies, and the measures of the fact table are Value (USD), Acquisition date and
Acquisition year. Then you can generate insights for the Value measure. The dashboard will
have a series of charts labeled Acquisition date and Acquisition year. Suppose you want to view the value of your company A: click the chart whose values you want to display for a particular range of Acquisition year. Clicking a chart displays more details about it.
Generate automatic insights by referring to The Data Insights Page chapter in this book.
A chart then displays the values and insights.
According to the insights generated by the tool, you notice that your company is lagging behind the expected value by a significant amount. To take the lead, you need to change a few of the factors that affect your organization's sales.
The insights that Data Insights generates for the analytic view are more useful than
those for a table because of the additional metadata that an analytic view provides.
Catalog
Select the Catalog menu from the navigation pane in the Data Studio Overview page
to view information about the upstream dependencies of the entity, how the entity was
created and how it is linked to other entities.
This page lists details about the database objects such as cloud storage links and
tables you create from the Data Analysis tool and the Data load tool. It also displays
lists of the database dictionary objects such as tables, columns, and Analytic Views.
Use the Catalog page to locate objects in the catalog and perform tasks specific to
those objects.
Refer to The Catalog Page chapter of this book for more information on the Catalog
page.
High Availability
You can configure and manage Autonomous Data Guard and maintain continuous
availability using disaster recovery.
• Use Application Continuity on Autonomous Database
Autonomous Database provides application continuity features for making
connections to the database.
• Use Standby Databases with Autonomous Data Guard for Disaster Recovery
• Backup and Restore Autonomous Database Instances
This section describes backup and recovery tasks on Autonomous Database.
• Use OraMTS Recovery Feature on Autonomous Database
Use Oracle MTS (OraMTS) Recovery Service to resolve an in-doubt Microsoft
Transaction Server transaction.
• Notification. FAN is the first step to hiding outages. FAN notifies clients and breaks them out of their current network wait when an outage occurs. This avoids stalling applications for long network waits. For Autonomous Database, FAN is handled at the driver and by the Autonomous Database cloud connection manager. FAN notification automatically triggers closing idle connections, opening new connections in the new service location, and allows a configurable time for active work to complete in the soon-to-be-shutdown service location. The major third-party JDBC mid-tiers, such as IBM WebSphere, allow for the same behavior when configured with UCP. For JDBC-based applications that cannot use UCP, Oracle provides solutions using Oracle Drivers and connection tests. On Autonomous Database, FAN for planned maintenance is sent in-band.
• Recovery. After the client is notified, failover handling with Transparent Application
Continuity (TAC) or Application Continuity (AC) re-establishes a connection to the
Autonomous Database and replays in-flight, uncommitted, work when possible. By
replaying in-flight work, the application can usually continue executing without
knowing that any failure happened.
You enable Application Continuity on Autonomous Database in one of two
configurations, depending on the application:
• Application Continuity (AC)
Application Continuity hides outages for thin Java-based applications and for Oracle Call Interface and ODP.NET based applications, with support for open-source drivers such as Node.js and Python. Application Continuity rebuilds the session by recovering the session from a known point, which includes session states and transactional states. Application Continuity rebuilds all in-flight work. The application continues as it was, seeing a slightly delayed execution time when a failover occurs.
• Transparent Application Continuity (TAC)
Transparent Application Continuity (TAC) transparently tracks and records session
and transactional state so the database session can be recovered following
recoverable outages. This is done with no reliance on application knowledge or
application code changes, allowing Transparent Application Continuity to be
enabled for your applications. Application transparency and failover are achieved
by consuming the state-tracking information that captures and categorizes the
session state usage as the application issues user calls.
See Overview of Application Continuity for more information on Application Continuity.
Note:
By default Application Continuity is disabled on Autonomous Database.
• Application Continuity (AC): Set this failover option using the procedure DBMS_APP_CONT_ADMIN.ENABLE_AC. For example, as the ADMIN user, to enable Application Continuity for the TPURGENT service with FAILOVER_RESTORE set to LEVEL1 and the replay timeout set to 10 minutes:
execute DBMS_APP_CONT_ADMIN.ENABLE_AC(
'databaseid_tpurgent.adb.oraclecloud.com', 'LEVEL1', 600);
• Transparent Application Continuity (TAC): Set this failover option using the procedure DBMS_APP_CONT_ADMIN.ENABLE_TAC. The ENABLE_TAC procedure takes three parameters: SERVICE_NAME, the service name to change; FAILOVER_RESTORE, set to AUTO to select Transparent Application Continuity (TAC); and REPLAY_INITIATION_TIMEOUT, the replay timeout that specifies how many seconds after a request is submitted to allow that request to replay.
For example, as the ADMIN user, to enable Transparent Application Continuity for the TP
service with the replay timeout set to 20 minutes:
execute DBMS_APP_CONT_ADMIN.ENABLE_TAC(
'databaseid_tp.adb.oraclecloud.com', 'AUTO', 1200);
To disable failover for a service, use the procedure DBMS_APP_CONT_ADMIN.DISABLE_FAILOVER. For example, as the ADMIN user, to disable failover for the TP service:
execute DBMS_APP_CONT_ADMIN.DISABLE_FAILOVER(
'databaseid_tp.adb.oraclecloud.com');
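You can check the current setting for each service; a sketch of one way, assuming the V$SERVICES dictionary view:
SELECT name, failover_type
FROM   v$services
ORDER  BY name;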
Notice that the FAILOVER_TYPE for the high service has no value, indicating that Application Continuity is disabled.
The FAILOVER_TYPE value for the high service is now AUTO, indicating that Transparent
Application Continuity (TAC) is enabled and the FAILOVER_TYPE value for the tpurgent
service is now TRANSACTION, indicating that Application Continuity (AC) is enabled.
Use the following for JDBC connections using Oracle driver version 12.1 or earlier:
alias =
(DESCRIPTION =
(CONNECT_TIMEOUT= 15)(RETRY_COUNT=50)
(RETRY_DELAY=3)
(ADDRESS_LIST =
(LOAD_BALANCE=on)
(ADDRESS = (PROTOCOL = TCP)(HOST=scan-host)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME = service-name)))
• For JDBC and ODP clients, the pool connection wait time should be configured to
be longer than the CONNECT_TIMEOUT in the connect string.
• Do not use Easy Connect Naming on the client because such connections do not
have high-availability capabilities.
See Download Client Credentials (Wallets) for information on the tnsnames.ora file.
It is recommended to set the memory allocation for the initial Java heap size (-Xms) and the maximum heap size (-Xmx) to the same value. This prevents using system resources on growing and shrinking the memory heap.
3. When using the Universal Connection Pool (UCP), disable Fast Connection Failover. For
example:
PoolDataSource.setFastConnectionFailoverEnabled(false)
<ons>
<servers>
<!-- Do not enter any values -->
</servers>
</ons>
Also, for Autonomous Database Serverless, do not configure the <fan> section:
<fan>
<!-- only possible values are "trace" or "error" -->
<subscription_failure_action>
</subscription_failure_action>
</fan>
<ons>
<servers>
<!-- Do not enter any values -->
</servers>
</ons>
Returning connections to the connection pool is a general recommendation regardless of whether you use FAN to drain or connection tests to drain.
For example:
SQL> EXECUTE
dbms_app_cont_admin.add_sql_connection_test('SELECT COUNT(1) FROM DUAL');
SQL> EXECUTE
dbms_app_cont_admin.enable_connection_test(dbms_app_cont_admin.sql_test,
'SELECT COUNT(1) FROM DUAL');
SQL> SELECT * FROM DBA_CONNECTION_TESTS;
Configure the same connection test that is enabled in your database at your
connection pool or application server. Also configure flushing and destroying the pool
on connection test failure to at least two times the maximum pool size or MAXINT.
If you would like to use connection tests that are local to the driver and cannot use
UCP’s full FAN support:
• Enable validate-on-borrow=true
• Set the Java system properties:
– -Doracle.jdbc.fanEnabled=true
– -Doracle.jdbc.defaultConnectionValidation=SOCKET
And then use one of the following tests:
• java.sql.Connection.isValid(int timeout)
• oracle.jdbc.OracleConnection.pingDatabase()
• oracle.jdbc.OracleConnection.pingDatabase(int timeout)
• A HINT at the start of your test SQL:
– /*+ CLIENT_CONNECTION_VALIDATION */
If you would like to use the OCI driver directly, use OCI_ATTR_SERVER_STATUS. This is
the only method that is a code change. In your code, check the server handle when
borrowing and returning connections to see if the session is disconnected. During
maintenance, the value of OCI_ATTR_SERVER_STATUS is set to
OCI_SERVER_NOT_CONNECTED. When using OCI session pool, this connection check is
done for you.
The following code sample shows how to use OCI_ATTR_SERVER_STATUS:
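The following is a minimal sketch in C; srvhp and errhp are assumed to be the OCIServer and OCIError handles from your existing connection setup:
#include <oci.h>

/* Returns 1 if the server connection behind a pooled session is still
   usable, 0 otherwise. */
static int connection_is_usable(OCIServer *srvhp, OCIError *errhp)
{
    ub4 server_status = 0;

    /* Read the server status attribute from the server handle. */
    if (OCIAttrGet(srvhp, OCI_HTYPE_SERVER, &server_status, NULL,
                   OCI_ATTR_SERVER_STATUS, errhp) != OCI_SUCCESS)
        return 0;

    /* During maintenance the status is set to OCI_SERVER_NOT_CONNECTED. */
    return server_status != OCI_SERVER_NOT_CONNECTED;
}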
• Oracle strongly recommends that you use the latest client drivers. Oracle Database 19c
client drivers and later provide full support for Application Continuity (AC) and for
Transparent Application Continuity (TAC). Use one of the following supported clients
drivers:
– Oracle JDBC Replay Driver 19c or later. This is a JDBC driver feature provided with
Oracle Database 19c for Application Continuity
– Oracle Universal Connection Pool (UCP) 19c or later with Oracle JDBC Replay
Driver 19c or later
– Oracle Weblogic Server 12c with Active GridLink, or third-party JDBC application
servers using UCP with Oracle JDBC Replay Driver 19c or later
– Java connection pools or standalone Java applications using Oracle JDBC Replay
Driver 19c or later
– Oracle Call Interface Session Pool 19c or later
– SQL*Plus 19c (19.8) or later
– ODP.NET pooled, Unmanaged Driver 19c or later ("Pooling=true" default in 12.2 and
later)
– Oracle Call Interface based applications using 19c OCI driver or later
Side Effects
When a database request includes an external call, such as sending mail or transferring a file, this is termed a side effect.
Side effects are external actions; they do not roll back. When replay occurs, there is a choice as to whether side effects should be replayed. Many applications choose to repeat side effects such as journal entries and sending mail, as duplicate executions cause no problem. For Application Continuity, side effects are replayed unless the request or user call is explicitly disabled for replay. Conversely, as Transparent Application Continuity is on by default, TAC does not replay side effects. Capture is disabled and re-enabled at the next implicit boundary created by TAC.
Do not embed COMMIT in PL/SQL and Avoid Commit on Success and Autocommit
It is recommended practice to use a top-level commit (OCOMMIT, COMMIT(), or OCITransCommit). If your application is using COMMIT embedded in PL/SQL, or AUTOCOMMIT, or COMMIT ON SUCCESS, it may not be possible to recover following an outage or timeout.
PL/SQL is not reentrant. Once a commit in PL/SQL has executed, that PL/SQL block cannot be resubmitted. Applications either need to unpick the commit, which is not sound because that data may have been read, or, for batch, use a checkpoint and restart technique. When using AUTOCOMMIT or COMMIT ON SUCCESS, the output is lost.
If your application is using a top-level commit, then there is full support for Transparent Application Continuity (TAC), Application Continuity (AC), and TAF Select Plus. If your application is using COMMIT embedded in PL/SQL, or AUTOCOMMIT, or COMMIT ON SUCCESS, it may not be possible to replay for cases where the call including the COMMIT did not run to completion.
Note:
Autonomous Data Guard is available with the Data Warehouse and Transaction
Processing workload types. Autonomous Data Guard is not available with the JSON
and APEX workload types.
By selecting from the disaster recovery options that Autonomous Database provides, you can choose the features and options that meet your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements.
By default, each Autonomous Database instance provides a local Backup-Based Disaster
Recovery peer database.
To add automatic failover and to lower your Recovery Time Objective (RTO), you can use a
local Autonomous Data Guard standby database.
To use the most resilient disaster recovery option that Autonomous Database offers, you can
use a local Autonomous Data Guard standby database and a cross-region Autonomous Data
Guard standby.
In addition, other options using Backup-Based Disaster Recovery enable you to provide lower
cost and higher Recovery Time Objective (RTO) disaster recovery options, as compared to
Autonomous Data Guard. See Use Backup-Based Disaster Recovery for details on Backup-
Based Disaster Recovery.
• Autonomous Data Guard with Local Standby
When you use an Autonomous Data Guard standby database in the current region,
Autonomous Database provisions a local standby database and monitors the primary
database; if the primary database goes down, the standby instance automatically
assumes the role of the primary instance.
• Autonomous Data Guard with Cross-Region Standby
• Autonomous Data Guard Recovery Time Objective (RTO) and Recovery Point Objective
(RPO)
• Autonomous Data Guard Operations
Autonomous Database provides the following operations with Autonomous Data Guard:
Licensing options have the same values after a failover to the standby database or after
you perform a switchover.
• OML Notebooks: Notebooks and users created in the primary database are available in
the standby.
• APEX Data and Metadata: APEX information created in the primary database is copied
to the standby.
• ACLs: The Access Control List (ACL) of the primary database is duplicated for the
standby.
• Private Endpoint: The private endpoint from the primary database applies to the
standby.
• APIs or Scripts: Any APIs or scripts you use to manage the Autonomous Database
continue to work without any changes after a failover operation or after you perform a
switchover.
• Client Application Connections: Client applications do not need to change their
connection strings to connect to the database after a failover to the standby database or
after you perform a switchover.
• Wallet Based Connections: You can continue using your existing wallets to connect to
the database after a failover to the standby database or after you perform a switchover.
private endpoints and adding tagging to define keys and values that are not replicated
between the primary database and the remote standby.
Note:
Autonomous Data Guard does not perform automatic failover for a cross-
region standby. If the primary database is unavailable and a local standby is
unavailable, you can perform a manual failover to make the standby
database assume the primary role.
• Wallet Based Connections: After a failover or after you perform a switchover, you must download a wallet from the current primary database and connect using its connection strings. See Cross-Region Disaster Recovery Connection Strings and Wallets for more information.
• Autonomous Database Tools: After a failover or after you perform a switchover, the tools have different URLs on the current primary database (the tools' URLs do not change for a switchover or failover to a local standby):
– Database Actions
– Oracle APEX
– Oracle REST Data Services (ORDS)
– Graph Studio
– Oracle Machine Learning Notebooks
– Data Transforms
– MongoDB API
• Oracle Cloud Infrastructure Object Storage Usage: After you fail over or switch over from the primary database to the standby database, on the current primary database the credentials and the URLs that provide access to Object Storage continue to work as they did before the failover or the switchover, providing access to the following:
– External tables
– External partitioned tables
– External hybrid partitioned tables
Note:
This applies when the Object Storage is available. For the rare scenarios when Object Storage is not available, Oracle recommends having Object Storage backups or replication to a different region. If the Object Storage resource you used with the primary before a switchover or failover is not available, you can update your user credentials and the parameters that set Object Storage URLs so that they point to an available region's Object Storage. See Using Replication for more information.
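As a concrete illustration of that kind of update, the sketch below (the credential name, bucket URI, and table definition are hypothetical) recreates the credential and an external table so that they reference a replicated bucket in an available region, using the documented DBMS_CLOUD procedures:

BEGIN
  -- Point the credential at the Object Storage identity for the available region.
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'oci_user@example.com',
    password        => 'auth-token-value');

  -- Recreate the external table against the replicated bucket
  -- (drop the old definition first if it already exists).
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'SALES_EXT',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mytenancy/b/replica_bucket/o/sales*.csv',
    format          => JSON_OBJECT('type' VALUE 'csv'),
    column_list     => 'sale_id NUMBER, amount NUMBER');
END;
/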
After you add a cross-region standby database, you can view the role in the Disaster
recovery area on the details page. The role is one of:
• The Role shows Primary on the primary database.
• After a switchover or failover, the same database shows the Role Standby.
• After you convert a cross-region peer to a snapshot standby, the database shows
the Role Snapshot Standby.
To view details for the peer, on the Autonomous Database Information page, under
Resources select Disaster recovery:
• Standby (local): the Peer role column shows Standby and the database has the
same display name in the Peer Autonomous Database column. The Region
column shows the name of the current region.
• Standby (cross-region): the Peer role column shows Standby for a remote standby database and the database has the same display name with an "_Remote" extension in the Peer Autonomous Database column. You can click the link to access the remote database. The Region column shows the name of the remote region.
• Snapshot standby (local): the Peer role column shows Snapshot standby and
the database has the same display name in the Peer Autonomous Database
column. The Region column shows the name of the remote region.
Primary role, and the restore operation is not available from the Oracle Cloud
Infrastructure Console on the Standby database.
customer-managed key and you add an Autonomous Data Guard cross-region standby,
the Region list in the Add peer database dialog only shows the regions that contain the
replicated vault and keys. If you don't see a remote region listed, you need to replicate
your vault and keys to the region where you want your standby database (this must be a
paired region).
• Switching to customer-managed keys is allowed on the primary when you have an
Autonomous Data Guard cross-region standby. In the case when the database is using
Oracle-managed keys and you switch to customer-managed keys on the primary, you
only see the keys that are replicated in both the primary and the standby regions. The
Manage Encryption Key Vault and Master encryption key lists only show vaults and
keys that are replicated across both the primary and the standby regions. If you don't see
a key listed, replicate your vault and keys to a paired region.
See the following for more information:
• Replicating Vaults and Keys
• Overview of Vault
• Autonomous Database Cross-Region Paired Regions
Autonomous Data Guard Recovery Time Objective (RTO) and Recovery Point Objective
(RPO)
Autonomous Data Guard monitors the primary database and if the instance goes
down, the local standby instance assumes the role of the primary instance, according
to the Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
If a local Autonomous Data Guard standby instance is not available and you have
enabled cross-region disaster recovery, you can manually fail over to the cross-region
standby.
If you do not add a cross-region Autonomous Data Guard standby, you have the option
to add a cross-region Backup-Based Disaster Recovery peer. See Backup-Based
Disaster Recovery Recovery Time Objective (RTO) and Recovery Point Objective
(RPO) for details on the RTO and RPO with Backup-Based Disaster Recovery.
The RTO is the maximum amount of time required to restore database connectivity to
a standby database after a manual or automatic failover is initiated. The RPO is the
maximum duration of potential data loss on the primary database.
Note:
When the standby database is stopped, the standby state shows
Standby. A standby database never shows the Stopped state.
Note:
When you change your disaster recovery type to use a local standby database, you
also have the option to add a second cross-region disaster recovery option, either a
cross-region Autonomous Data Guard standby database or a cross-region backup
copy peer.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload, click one of: Autonomous Data Warehouse or
Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To change your disaster recovery type to add a local Autonomous Data Guard standby
database:
1. On the Autonomous Database Details page, in the Resources area select Disaster
recovery.
2. In the row showing the database's disaster recovery details, click the Actions icon (three dots) at the end of the row and select Update disaster recovery.
This shows the Update disaster recovery page.
3. Select Autonomous Data Guard.
4. In the Automatic Failover with data loss limit in seconds field, accept the
default data loss limit of 0, or enter a custom value for the automatic failover data
loss limit.
See Automatic Failover with a Standby Database for more information.
5. Click Submit.
The Autonomous Database Lifecycle State changes to Updating.
The State column shows Provisioning while Autonomous Database provisions the
standby database.
After some time the Lifecycle State shows Available and the standby database
provisioning continues.
When provisioning completes the DR Type column shows Autonomous Data
Guard.
Note:
While you add a new standby database, the primary database is available for
read/write operations. There is no downtime on the primary database.
Notes for changing the disaster recovery type to a local Autonomous Data Guard standby
database:
• Autonomous Database generates an Enable Autonomous Data Guard work request. To
view the request, under Resources click Work Requests.
The work request may complete to 100% before you see Autonomous Data Guard in the
DR Type column when you select Disaster recovery under Resources. The provisioning
process takes several minutes.
• While you add a local standby database, when the Lifecycle State field shows
Updating, the following actions are disabled for the primary database:
– Move Resource. See Move an Autonomous Database to a Different Compartment for
information on moving an instance.
– Stop. See Stop Autonomous Database for information on stopping an instance.
– Restart. See Restart Autonomous Database for information on restarting an instance.
– Restore. See Restore and Recover your Autonomous Database for information on
restoring.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload, click one of: Autonomous Data Warehouse or
Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To add a cross-region Autonomous Data Guard standby:
1. On the Autonomous Database Details page, under Resources, click Disaster
recovery.
2. Under Disaster recovery, click Add peer database.
3. In the Region field, select a region.
The region list shows the available remote regions where you can create a cross-region standby. When you add a standby database, the list of available regions only shows a remote region if your tenancy is subscribed to that remote region.
In these Network access for standby fields you specify the private endpoint's VCN
and Subnet on the remote region where the standby is created.
Note:
If you change your network access on the source database to enable a
private endpoint after the standby is created, you must manually access the
standby to enable a private endpoint on the peer.
Note:
While you add a standby database, the primary database is available for read/
write operations. There is no downtime on the primary database.
When provisioning completes, on the Autonomous Database Details page under Disaster
recovery the Oracle Cloud Infrastructure Console shows the following:
• Role shows Primary
• The Local field shows either Backup-based or Autonomous Data Guard. When the
local field shows Backup-based, there is also a link, Upgrade to Autonomous Data
Guard. Click this link to upgrade your local disaster recovery to Autonomous Data Guard.
• The Cross-region field shows Autonomous Data Guard.
When you enable a cross-region standby, the standby database created in the remote region
has the same display name as the primary database with the extension: "_Remote".
When you click Disaster recovery under Resources, the Peer Autonomous Database column shows the standby database name and provides a link. Click the link to go to the Oracle Cloud Infrastructure Console for the remote standby database.
Notes for adding a cross-region standby database:
• Autonomous Database generates the Enable cross-region disaster recovery work
request. To view the request, under Resources click Work requests.
• After you add a cross-region (remote) standby database, the wallet and connection string
from the primary database will contain only the hostname of the primary database, and
the wallet and connection string from the remote database will contain only the
hostname of the remote database. This applies for both instance and regional
wallets.
See Cross-Region Disaster Recovery Connection Strings and Wallets for more
information.
• If you select Enable cross-region backup replication to disaster recovery
peer, it can take between several minutes and several hours to replicate the
backups to the remote region, depending on the size of the backups. After
backups are replicated, when you select Backups under Resources on the peer
database's Oracle Cloud Infrastructure Console, you will see the list of replicated
backups.
• While you add a standby database and the Lifecycle State shows Updating, the
following actions are disabled for the primary database:
– Move Resource. See Move an Autonomous Database to a Different
Compartment for information on moving an instance.
– Stop. See Stop Autonomous Database for information on stopping an
instance.
– Restart. See Restart Autonomous Database for information on restarting an
instance.
– Restore. See Restore and Recover your Autonomous Database for
information on restoring.
• See Cross-Region Autonomous Data Guard Notes and Notes for Customer-
Managed Keys with Autonomous Data Guard for information on using customer-
managed keys and for additional notes for using Autonomous Data Guard with a
cross-region standby.
• Enable or Disable Backup Replication for an Existing Cross Region Standby
2. In a row that lists a cross-region standby, click the Actions icon (three dots) at the end of the row and select Update disaster recovery.
This shows the Update disaster recovery page.
Perform a Switchover
When you perform a switchover, the primary database becomes the standby database and
the standby database becomes the primary database, with no data loss.
A switchover is typically done to test failover to the standby database for audit or certification
reasons, or to test your application's failover procedures when you have added an
Autonomous Data Guard standby database.
For switchover to a standby database, the Oracle Cloud Infrastructure Console on the primary database shows a Switchover link under the Disaster recovery area when both the primary database and a standby database are available. You can perform a switchover when
the primary database Lifecycle State shows Available or Stopped, and a standby database
is available (the State field shows Standby).
To see the local or cross-region standby database state, under Resources click Disaster recovery and, for the standby database listed in the Peer Autonomous Database column, check that the State shows Standby.
Using the Autonomous Database API, you can initiate the switchover operation at any
time. See Use the API for more information.
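The switchover operation is exposed through the OCI REST API (SwitchoverAutonomousDatabase, POST /20160918/autonomousDatabases/{autonomousDatabaseId}/actions/switchover). As a hedged sketch only, where the credential name, region endpoint, and OCID are placeholders and any REST client or SDK can issue the same call, a database with DBMS_CLOUD and a suitable API-signing credential could invoke it like this:

DECLARE
  resp DBMS_CLOUD_TYPES.resp;
BEGIN
  resp := DBMS_CLOUD.SEND_REQUEST(
    credential_name => 'OCI_API_CRED',  -- hypothetical API-signing credential
    uri             => 'https://database.us-ashburn-1.oraclecloud.com/20160918/autonomousDatabases/'
                       || 'ocid1.autonomousdatabase.oc1..exampleid/actions/switchover',
    method          => DBMS_CLOUD.METHOD_POST);
  -- Report the HTTP status returned by the control plane.
  DBMS_OUTPUT.PUT_LINE('HTTP status: ' || DBMS_CLOUD.GET_RESPONSE_STATUS_CODE(resp));
END;
/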
• Perform a Switchover to a Local Standby
• Perform a Switchover to a Cross-Region Standby
• Notes for Performing a Switchover
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload, click one of: Autonomous Data Warehouse
or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
To perform a switchover:
1. On the Autonomous Database Details page, under Disaster recovery, in the
Local field, click Switchover.
As an alternative, to initiate a switchover you can select More actions and
Switchover.
2. In the Confirm switchover to peer dialog, enter the peer database name to
confirm that you want to switch over.
3. In the Confirm switchover to peer dialog, click Confirm switchover to peer.
When concurrent operations such as scaling are active, the confirmation also
confirms either pausing or canceling the concurrent operation. See Concurrent
Operations on Autonomous Database for more information.
The database Lifecycle State changes to Updating. To see the state of the peer database, under Resources click Disaster recovery. The State shows Role change in progress.
When the switchover completes, Autonomous Database reports the time of the last switchover when you hover over the information icon in the Role field.
See Notes for Performing a Switchover for more information.
Note:
For a cross-region switchover you must initiate the switchover from the Standby
database.
• During the switchover most of the actions on the Oracle Cloud Infrastructure
Console are not available and the Autonomous Database Information page shows
the Lifecycle State with the value Updating.
• The switchover operation keeps the original state of the primary database. If the
primary database is stopped when you perform a switchover, the primary database
is stopped after the switchover.
• Autonomous Database generates the Switchover Autonomous Database work
request. To view the request, under Resources click Work Requests.
• After a switchover or failover to the standby, the standby becomes the primary:
– The Oracle Cloud Infrastructure Metrics display information about the primary
database.
– The graphs on the Database Dashboard card in Database Actions display
information about the primary database.
– The graphs and metrics do not contain information about the database that
was the primary database, before the switchover or failover operation.
• You cannot cancel a cross-region switchover operation after the switchover begins
and the State shows Role change in progress. Your options are:
– Try or retry a switchover or a failover until the operation succeeds.
– File a service request at Oracle Cloud Support or contact your support
representative.
Autonomous Data Guard for a particular standby, either local or remote, is not enabled
while the system is provisioning the new standby database. After Autonomous Data
Guard completes the provisioning step for the standby database and it becomes
available, you then have a new standby database with Autonomous Data Guard enabled
on it.
• After automatic failover completes, Autonomous Database reports the time of the last failover when you hover over the information icon in the Role field.
• After an automatic Autonomous Data Guard failover, if there was a regional failure, when the region comes back online the standby database is automatically reconnected or, if required, reprovisioned.
If the primary database has failed or is unreachable and the conditions for Autonomous Data Guard automatic failover have not been met, the Oracle Cloud Infrastructure Console shows a banner indicating that automatic failover did not succeed, provides a reason (such as possible data loss above the default 0 RPO or above the limit you set for the automatic failover data loss limit), and provides a link for you to initiate a manual failover. See Perform Manual Failover to a Local Standby Database for more information.
Note:
Autonomous Data Guard automatic failover is disabled when the Lifecycle State is
either: Restore in Progress or Upgrading.
See Autonomous Data Guard Recovery Time Objective (RTO) and Recovery Point Objective
(RPO) for more information.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload, click one of: Autonomous Data Warehouse
or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
To initiate a manual failover when the primary database is unavailable and the local
standby is available:
1. On the Details page, under Disaster recovery, in the Local field, click Failover.
This shows the Confirm Manual Failover to peer dialog, along with information
on possible data loss that may result if you perform the manual failover to standby.
2. In the Confirm manual failover to peer dialog, enter the Autonomous Database
name to confirm that you want to failover.
3. In the Confirm manual failover to peer dialog, click Confirm manual failover to
peer.
See Notes for Manual Failover with a Standby Database for information on steps that
Autonomous Data Guard takes after failover completes.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload, click one of: Autonomous Data Warehouse or
Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To initiate a manual failover to a cross-region standby:
1. On the standby database, perform a switchover. See Perform a Switchover to a Cross-
Region Standby for details.
2. If the switchover attempt in Step 1 fails, on the standby database the Role field shows a
Failover link. On the standby database, click the Failover link.
This shows the Confirm manual failover to standby dialog, along with information on
possible data loss that may result if you perform the manual failover to the standby
database.
3. In the Confirm manual failover to standby dialog, enter the Autonomous Database
name to confirm that you want to failover.
4. In the Confirm manual failover to standby dialog, click Confirm manual failover to
standby.
When concurrent operations such as scaling are active, the confirmation also confirms
either pausing or canceling the concurrent operation. See Concurrent Operations on
Autonomous Database for more information.
See Notes for Manual Failover with a Standby Database for information on steps that
Autonomous Data Guard takes after failover completes.
This information also shows when you hover over the information icon in the Role field.
• After a manual Autonomous Data Guard failover, if there was a regional failure, when the region comes back online the standby database is automatically reconnected or, if required, reprovisioned.
While operating as a snapshot standby, updates from the source database are still sent to the snapshot standby, and you are protected if the source database region encounters a failure; however, the updates are not applied to the snapshot standby until the database is converted back to a disaster recovery peer.
• Snapshot Standby Features
Provides information on snapshot standby features.
• Snapshot Standby Operations
After you create a snapshot standby you can perform almost all database operations on
the snapshot standby. There are some operations that are not allowed on a snapshot
standby.
• Snapshot Standby Reconnect Time
A banner on the Oracle Cloud Infrastructure Console indicates the date and time when
the snapshot standby automatically reconnects to the source database. At the time
indicated on the banner, Autonomous Database converts a snapshot standby back to the
Standby role.
• Any changes on the snapshot standby from the time it was converted to a snapshot standby until the time it reconnects to the source are discarded. This means all changes, including metadata, that are inserted, updated, or deleted while the database operates as a snapshot standby are lost when the snapshot standby reconnects to its source database.
• All changes that occurred on the primary are replicated to the remote region while
the database operates as a snapshot standby, but the changes are not applied to
the snapshot standby. The changes that occur on the primary during this period
are applied to the snapshot standby when it is converted back to a disaster
recovery peer.
• On the Oracle Cloud Infrastructure Console, the role updates from Role:
Snapshot standby to Role: Standby or Role: Backup copy, depending on the
disaster recovery type.
The following operations are available on a snapshot standby:
• Convert to Snapshot Standby: You can convert a cross-region peer to a snapshot standby. See Convert Cross Region Disaster Recovery Peer to a Snapshot Standby for the steps to convert a peer database to a snapshot standby.
• Start or Restart: When a snapshot standby is stopped, as indicated by the Lifecycle State Stopped, you can start the database. When a snapshot standby is available, as indicated by the Lifecycle State Available, you can restart the database or stop the database.
• Convert Snapshot Standby Back to Disaster Recovery Peer: When a snapshot standby is in the Snapshot standby role, the database operates as a read-write database. A snapshot standby has a two day (48 hour) limit up to which it can stay in the Snapshot standby role. If you do not manually convert the snapshot standby back within two days, the snapshot standby automatically converts back to a disaster recovery peer. See Convert Snapshot Standby Back to a Cross-Region Disaster Recovery Peer for more information.
• Stop: When a snapshot standby is stopped, database operations are not available and charging for CPU usage on the snapshot standby stops.
• Terminate: You are not allowed to terminate a snapshot standby. You can reconnect the snapshot standby to the primary. See Convert Snapshot Standby Back to a Cross-Region Disaster Recovery Peer for more information.
• Create Refreshable Clone: You are not allowed to create a refreshable clone on a snapshot standby.
• Disaster Recovery Peer: You are not allowed to add an Autonomous Data Guard standby database or a Backup-Based Disaster Recovery peer to a snapshot standby.
A banner on the Oracle Cloud Infrastructure Console indicates the date and time when the snapshot standby automatically reconnects to the source database. At the time indicated on the banner, Autonomous Database converts the snapshot standby back to the Standby role.
Note:
When a snapshot standby is not reconnected within 48 hours, the snapshot standby
automatically reconnects to the source database.
Note:
All data, including metadata, that is inserted, updated, or deleted in the database
during the disconnect period will be lost when a snapshot standby reconnects to its
source database. All changes that occur on the primary during the disconnect
period are applied to the standby when it reconnects to the source database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload, click one of: Autonomous Data Warehouse or
Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
1. On the Primary Region Autonomous Database instance, on the Autonomous Database
Details page under Resources, select Disaster recovery.
2. Access the remote region peer.
On the Primary Region Autonomous Database instance, the Disaster recovery information area shows the Peer Autonomous Database column.
Under the Peer Autonomous Database column, click the link to access the cross-region peer.
3. On the cross-region peer, from the More actions drop-down list, select Convert to
snapshot standby database.
4. On the Convert to snapshot standby database page, enter the source database
name to confirm the disconnect.
5. Click Convert to snapshot standby database.
The Autonomous Database Lifecycle State changes to Updating.
Note:
While you convert to a snapshot standby, the primary database is
available for read/write operations. There is no downtime on the primary
database.
Note:
All data, including metadata, that is inserted, updated, or deleted in the snapshot
standby database during the disconnect period will be lost when the snapshot
standby reconnects to its source database.
All changes on the primary that were sent to the snapshot standby but not applied
during the disconnect period will be applied to the standby when it reconnects to the
source database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload, click one of: Autonomous Data Warehouse or
Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
1. On the snapshot standby, from the More actions drop-down list, select Reconnect to
source peer database.
2. On the Reconnect to source peer database page, enter the source database
name to confirm the reconnect.
3. Click Convert to disaster recovery peer.
The Autonomous Database Lifecycle State changes to Updating.
Note:
While you reconnect the standby to the source database, the primary
(source) database is available for read/write operations. There is no
downtime on the primary database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload, click one of: Autonomous Data Warehouse
or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To change the disaster recovery type:
1. On the Autonomous Database Details page, under Resources, select Disaster
recovery.
2. In the row showing the disaster recovery details, click the Actions icon (three dots) and
select Update DR type.
This shows the Update disaster recovery type page.
3. Select Backup-based disaster recovery.
4. Click Submit.
5. In the Disable Autonomous Data Guard dialog, click Disable Autonomous Data Guard.
While the standby database is terminating, the Lifecycle State changes to Updating.
Autonomous Database generates the Disable Autonomous Data Guard work request. To
view the request, under Resources, click Work Requests.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload, click one of: Autonomous Data Warehouse
or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
To change your disaster recovery type to use a cross-region backup-copy peer:
1. On the Primary Role Autonomous Database, on the Autonomous Database
Details page under Resources, select Disaster recovery.
2. Access the remote standby.
On the primary Autonomous Database instance the Disaster recovery
information area shows the remote peer. The remote peer has the same display
name as the primary database, with an "_Remote" extension.
Under Peer Autonomous Database, click the link to access the cross-region
peer.
3. On the remote standby database, click Update disaster recovery under Disaster
recovery on the Oracle Cloud Infrastructure Console.
4. On the Update disaster recovery type page, select Backup-based disaster
recovery.
5. Click Submit.
The Autonomous Database Lifecycle State changes to Updating.
Note:
While you update the disaster recovery type, the primary database is
available for read/write operations. There is no downtime on the primary
database.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload, click one of: Autonomous Data Warehouse
or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
To terminate a cross-region standby database:
1. On the primary database, on the Autonomous Database Details page, under
Resources select Disaster recovery.
4. On the Terminate Autonomous Database page enter the database name to confirm
that you want to terminate the cross-region standby database.
5. Click Terminate Autonomous Database.
While the standby database is terminating, the Lifecycle State changes to Terminating.
Note:
If there is a local Autonomous Data Guard standby database, terminating the cross-
region standby does not change the local disaster recovery option.
Note the following when your disaster recovery option includes a cross-region standby:
• A cross-region standby database cannot be terminated from the primary database.
• When there is a cross-region standby database, you must terminate the cross-region
standby before you terminate the primary database. If you attempt to terminate the
primary, the system shows an error message.
In this case, after you terminate the cross-region (remote) standby you can
terminate the Primary role database.
After Autonomous Data Guard is disabled, terminate the database. See Terminate an
Autonomous Database Instance for more information.
• If you manually terminated the standby database, verify that the cross-region standby count is 0 (that is, there is no remote peer).
Note:
When Autonomous Data Guard is enabled with a remote standby, events are only
sent to the instance with the Primary role.
See the following Oracle Cloud Infrastructure topics for information on events and
notifications:
• Overview of Events
• Autonomous Database Event Types
• Notifications Overview
• The Number of ECPUs (OCPUs if your database uses OCPUs) allocated graph and the CPU utilization graph on the Database Dashboard card in Database Actions display the ECPUs (OCPUs if your database uses OCPUs) allocated and the CPU utilization for the Primary database. These graphs do not include information about a local Standby database or about a remote Standby database.
The CPU Utilization metrics on the Oracle Cloud Infrastructure Console metrics page display the CPU utilization for the Primary database. Other metrics on this page also apply to the Primary database. These metrics do not include information about the local Standby database or remote Standby database.
• After a switchover or failover to the Standby, the Standby becomes the Primary and the graphs on the Database Dashboard in Database Actions and the Oracle Cloud Infrastructure Console metrics page display information about the Primary database. The graphs and metrics do not contain information about the database that was the Primary before the switchover or failover operation.
• Automatic Failover to a local Standby is disabled during a Restore in Progress operation.
• Automatic Failover to a local Standby is disabled when Upgrading a Database.
• When the Lifecycle State field for the Primary database shows Stopped, the Standby
database is also stopped. You may still perform a switchover when the Primary database
is Stopped.
• Cross-Region Autonomous Data Guard Notes
Note the following for using Autonomous Database with an Autonomous Data Guard
remote standby database:
a6gxf2example9ep_high = (description_list=
  (failover=on) (load_balance=off)
  (description= (retry_count=15)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=adb.us-ashburn-1.oraclecloud.com))
    (connect_data=(service_name=mqssyowmexample_a6gxf2example9ep_high.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))
  (description= (retry_count=15)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
    (connect_data=(service_name=mqssyowmexample_a6gxf2example9ep_high.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes))))
a6gxf2example9ep_low = (description_list=
  (failover=on) (load_balance=off)
  (description= (retry_count=15)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=adb.us-ashburn-1.oraclecloud.com))
    (connect_data=(service_name=mqssyowmexample_a6gxf2example9ep_low.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))
  (description= (retry_count=15)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
    (connect_data=(service_name=mqssyowmexample_a6gxf2example9ep_low.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes))))

a6gxf2example9ep_medium = (description_list=
  (failover=on) (load_balance=off)
  (description= (retry_count=15)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=adb.us-ashburn-1.oraclecloud.com))
    (connect_data=(service_name=mqssyowmexample_a6gxf2example9ep_medium.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes)))
  (description= (retry_count=15)(retry_delay=3)
    (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
    (connect_data=(service_name=mqssyowmexample_a6gxf2example9ep_medium.adb.oraclecloud.com))
    (security=(ssl_server_dn_match=yes))))
Automatic Backups
Autonomous Database performs periodic backups including full, cumulative, and
incremental backups, to ensure data reliability and recoverability.
The choices for automatic backups are different depending on your compute model:
• OCPU compute model
Autonomous Database automatically backs up your database for you. The
retention period for backups is 60 days. You can restore and recover your
database to any point-in-time in this retention period.
• ECPU compute model
By default Autonomous Database automatically backs up your database for you
and the retention period for backups is 60 days. Optionally, you can choose the
backup retention period in days, from 1 to 60. You can restore and recover your
database to any point-in-time in the specified retention period.
Billing for automatic backups depends on the compute model that you select when you
provision or clone an Autonomous Database instance:
• OCPU compute model: the storage for automatic backups is included in the cost
of database storage.
• ECPU compute model: the storage for backups is billed separately and in
addition to database storage.
See How Is Autonomous Database Billed? for more information.
Recovery
You can initiate recovery for your Autonomous Database using the Oracle Cloud Infrastructure Console. Autonomous Database automatically restores and recovers your database to the point-in-time you specify or from the backup you select from a list of backups.
See Restore and Recover your Autonomous Database for more information.
Listing Backups
The list of backups available for recovery is shown on the Autonomous Database
details page under Backups. Click Backups under Resources to show the backups.
On the Oracle Cloud Infrastructure Console, the Autonomous Database details page Backup area shows the Total backup storage field. This field shows the total storage
being billed, including storage for automatic backups and, if there are long-term backups, the long-term backup storage.
See View Backup Information and Backups on Autonomous Database for more information.
• Long-Term Backups on Autonomous Database
Using the long-term backup feature you can create a long-term backup as a one-time backup or as a scheduled long-term backup. You select the retention period for long-term backups in a range from a minimum of 3 months to a maximum of 10 years.
Editing the backup retention period is only available for instances using the ECPU
compute model. See Compute Models in Autonomous Database for more information.
Note:
The storage for backups incurs additional costs. See ECPU Compute Model
Billing Information for more information.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
To change the automatic backup retention period:
1. On the Autonomous Database Details page, under Backup, in the Automatic
backup retention period field, click Edit.
Note:
Long-term backups incur additional costs. See Long-Term Backups on
Autonomous Database for more information.
The Autonomous Database instance must have at least one automatic backup before you can create a long-term backup. After you provision or clone an Autonomous Database instance, you may need to wait up to 4 hours before an automatic backup is available. See View Backup Information and Backups on Autonomous Database to find out if a backup exists.
Perform the following prerequisite steps as necessary:
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
To create a long-term backup:
1. On the Autonomous Database Details page, under Resources, click Backups.
2. On the Autonomous Database Details page, under Backups, click Create long-
term backup.
3. Set the long-term backup retention period.
In the Years, Months, and Days fields, accept the default value, enter a value, or
increment the values to increase or decrease the retention period for each of these
choices.
4. If you want to begin the backup immediately, do not select Schedule long-term
backup and skip to the next step.
If you want to create a backup schedule, select Schedule long-term backup.
If you click the Actions icon (three dots) at the end of a row for a backup in the Backups list, this shows the actions you can perform.
• Actions for Automatic Backups:
– Restore: See Restore and Recover your Autonomous Database for more information.
– Create clone: See Clone an Autonomous Database from a Backup for more
information.
• Actions for long-term backups:
– Create clone: See Clone an Autonomous Database from a Backup for more information.
– Edit retention period: This allows you to change the retention period for a long-term
backup.
– Delete: This allows you to remove a long-term backup.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select an Autonomous Database from the
links under the Display name column.
To restore and recover your database, do the following:
1. On the Autonomous Database Details page, from the More actions drop-down
list, select Restore to display the Restore prompt.
2. In the Restore prompt, select Enter Timestamp or Select Backup to restore to a
point in time or to restore from a specified backup.
• Enter Timestamp: Enter a timestamp to restore to in the Enter Timestamp
calendar field.
Click the Calendar icon to show the date and timestamp calendar selector. If
automatic backups are enabled on your instance, you can select a timestamp
to restore and recover your database to any point-in-time in the retention
period.
Manually edit the values in the Enter Timestamp field if you prefer to use a
more granular timestamp.
• Select Backup: Select a backup from the list of backups. Limit the number of
backups you see by specifying a period using the From and To calendar
fields.
3. Click Restore.
Note:
Restoring Autonomous Database puts the database in the unavailable state
during the restore operation. You cannot connect to a database in that state.
The only lifecycle management operation supported in unavailable state is
Terminate.
When concurrent operations such as scaling are active, the confirmation also confirms
either pausing or canceling the concurrent operation. See Concurrent Operations on
Autonomous Database for more information.
The details page shows Lifecycle State: Restore In Progress.
4. When the restore operation finishes, your Autonomous Database instance opens and the Lifecycle State shows Available.
At this point you can connect to your Autonomous Database instance and check your data to validate that the restore point you specified was correct. After checking your data, if you find
that the restore date you specified was not the one you needed, you can initiate
another restore operation to another point in time.
Notes:
• After the long-term retention period is over, the backup is deleted and is no longer retained.
• The encryption key option that is selected when a long-term backup is created applies
when you create a clone from the long-term backup. If a customer-managed master
encryption key is in use at the time a long-term backup is created, then the same
customer-managed master encryption key must be available to access a database that
you clone from the long-term backup.
See Manage Encryption Keys on Autonomous Database for more information.
• While backing up a database, the database is fully functional. However, during a long-
term backup certain lifecycle management operations, such as stopping the database
may affect the backup. See Long-Term Backup with Concurrent Operations for more
information.
• Cloning from a long-term backup creates a new database with the most current available database version, which is possibly not the original database version in use at the time when the long-term backup was created.
For example, if a long-term backup is taken on a database with version 19c, and then 4 years later the long-term backup is used to clone a new database, the new database may only offer the latest database version (for example, database version 23c, if version 19c is not available).
• Long-term backups are only accessible while a database exists. If you terminate an
Autonomous Database instance, you no longer have access to long-term backups. If you
want to retain long-term backups and minimize costs:
1. Drop all tables and scale down the database to the minimum storage.
2. Stop the database.
This preserves the long-term backups that were created for the Autonomous Database
instance.
Note:
Each computer can only have one instance of Oracle MTS (OraMTS)
Recovery Service installed.
• For your OraMTS Recovery Service, you must deploy the VM in the same private
network as the database.
• You must configure an OCI Private Load Balancer (LBaaS) and the Load Balancer
(LBaaS) must be able to access the VM on port 2030. See Load Balancer Management
for more information.
• Your database must be able to communicate with the Load Balancer (LBaaS) on port 443. To enable this you need an egress rule for port 443 in the VCN's security list or in the network security group.
• Your Load Balancer (LBaaS) must also be able to receive the communication from the
database. To enable this you need an ingress rule for your Load Balancer (LBaaS) for
port 443.
• Reserve a domain name with a domain provider.
• Generate an SSL certificate for the domain.
• You must configure a secure HTTPS endpoint using OCI Load Balancer to ensure that
the communication between the Autonomous Database and the MTS server uses HTTPS
protocol with SSL encryption. See Configuring Network Access with Private Endpoints
and Submit an HTTP Request to a Private Host with UTL_HTTP for more information.
BEGIN
  DBMS_CLOUD_ADMIN.ENABLE_FEATURE(
    feature_name => 'ORAMTS',
    params       => JSON_OBJECT('location_uri' VALUE 'https://mymtsserver.mycorp.com')
  );
END;
/
The first example enables the OraMTS Recovery Service on your Autonomous Database.
The feature_name parameter specifies the name of the feature to enable. The ORAMTS value
indicates that you are enabling the OraMTS recovery service feature for your database.
The location_uri parameter specifies the HTTPS URL for the OraMTS server in a customer
network.
The second example is a SQL statement that you can run to verify that the OraMTS
Recovery Service is enabled for your Autonomous Database.
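A plausible form of that verification query, assuming the feature is surfaced through the DBA_CLOUD_CONFIG view as other DBMS_CLOUD_ADMIN features are (check your release's documentation for the authoritative statement), is:

-- Hypothetical check: confirm the ORAMTS feature is recorded as enabled.
SELECT param_name, param_value
  FROM dba_cloud_config
 WHERE LOWER(param_name) = 'oramts';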
BEGIN
  DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
    feature_name => 'ORAMTS');
END;
/
Note:
Backup-Based Disaster Recovery (backup copy) is available in all Autonomous
Database workload types. Backup-Based Disaster Recovery is not available with
Always Free Autonomous Database.
Topics
• Backup-Based Disaster Recovery Recovery Time Objective (RTO) and Recovery Point
Objective (RPO)
When you perform a failover with Backup-Based Disaster Recovery enabled, the
peer instance assumes the role of the primary instance, according to the Recovery Time
Objective (RTO) and Recovery Point Objective (RPO).
• Replicating Backups to a Cross-Region Backup Based Disaster Recovery Peer
When you add a cross-region Backup-Based Disaster Recovery peer you can enable
cross-region backup replication for automatic backups.
Backup-Based Disaster Recovery Recovery Time Objective (RTO) and Recovery Point Objective
(RPO)
When you perform a failover with Backup-Based Disaster Recovery enabled, the peer
instance assumes the role of the primary instance, according to the Recovery Time Objective
(RTO) and Recovery Point Objective (RPO).
The RTO is the maximum amount of time required to restore database connectivity to a
backup copy database after a failover is initiated. The RPO is the maximum duration of
potential data loss, in minutes, on the primary database.
Backup-Based Disaster Recovery RTO and RPO numbers are:
The cross-region backup replication incurs an additional cost. See Oracle Autonomous
Database Serverless Features Billing for more information.
See the following for more information:
• Add a Cross-Region Disaster Recovery Peer
• Enable or Disable Backup Replication for an Existing Cross Region Peer
Note the following for cross-region backup replication:
• After a switchover or a failover, while the cross-region database is in the primary
role, backups are taken on the current primary and are replicated to the current
(remote) peer.
• When using Backup-Based Disaster Recovery with a cross-region peer, this
feature is supported for all workload types.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
To view the current Disaster recovery configuration for your Autonomous Database, do
the following:
On the Autonomous Database Details page, in the Resources area click Disaster
recovery. The DR Type field indicates the type of disaster recovery, either Backup-
Based Disaster Recovery (Backup copy) or Autonomous Data Guard.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To add a cross-region Backup-Based Disaster Recovery peer, do the following:
1. On the Autonomous Database Details page, under Disaster recovery, in the Cross-region field, click the Add peer database link; alternatively, in the Resources area click Disaster recovery.
In these Network access for standby fields you specify the private endpoint's VCN and Subnet on the remote region where the standby is created. See Configure Private Endpoints for more information.
Note:
If you change your network access on the source database to enable
a private endpoint after the standby is created, you must manually
access the standby to enable a private endpoint on the peer.
Note:
While you add a new peer, the primary database is available for read/write
operations. There is no downtime on the primary database.
2. In a row that lists a cross-region standby, click the Actions icon (three dots) at the end of the row and select Update disaster recovery.
This shows the Update disaster recovery page.
2. In the row showing the database's disaster recovery details, click the Actions icon (three dots) at the end of the row and select Update disaster recovery type.
3. On the Update disaster recovery type page, choose Autonomous Data Guard.
4. Click Submit. This starts provisioning an Autonomous Data Guard standby database.
5. The disaster recovery type is changed to Autonomous Data Guard, as shown in the DR Type column.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
To terminate a cross-region (remote) peer:
1. On the Primary database, on the Autonomous Database Details page under
Resources, select Disaster recovery.
2. Access the Oracle Cloud Infrastructure Console for the remote region peer.
On the Primary database, the Disaster recovery information area shows the Peer Autonomous Database. The remote peer has the same display name as the Primary database, but with an "_Remote" extension.
Under Peer Autonomous Database, click the link for the remote peer to access
the cross-region peer.
3. On the Oracle Cloud Infrastructure Console for the remote peer, on the Details
page, from the More actions drop-down list, select Terminate.
4. On the Terminate Autonomous Database page, enter the database name to
confirm that you want to terminate the cross-region peer.
5. Click Terminate Autonomous Database.
While the peer is terminating, the Lifecycle State changes to Terminating.
There are limitations when an instance has a cross-region Backup-Based Disaster Recovery peer, as follows:
• A peer in the remote region cannot be disabled from the primary database.
• When Backup-Based Disaster Recovery is enabled with a cross-region peer, you
must terminate the cross-region peer before you terminate the primary role
database. If you attempt to terminate the primary, the system gives you an error.
In this case, after you terminate the cross-region (remote) peer you can terminate
the primary database.
To see the peer state, under Resources click Disaster recovery and for the peer listed in
the Peer Autonomous Database column, check that the State shows Standby.
Using the Autonomous Database API, you can initiate the switchover operation at any time.
See Use the API for more information.
• Perform a Switchover to a Local Backup Copy Peer
When you perform a switchover the primary database becomes the peer, and the peer
becomes the primary database, with no data loss.
• Perform a Switchover to a Cross-Region Backup Copy Peer
• Notes for Performing a Switchover to a Backup Copy Peer
Provides notes for Backup-Based Disaster Recovery switchover.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To perform a switchover, do the following:
1. On the Autonomous Database Details page, under Disaster recovery, in the Role field,
click Switchover.
As an alternative, to initiate a switchover you can select More actions and then Switchover, or under Resources select Disaster recovery and then Switchover.
2. In the Confirm switchover to peer dialog, in the Select peer list, choose the peer to
switchover.
3. Enter the database name to confirm that you want to switch over.
4. Click Confirm switchover to peer.
When concurrent operations such as scaling are active, the confirmation also
confirms either pausing or canceling the concurrent operation. See Concurrent
Operations on Autonomous Database for more information.
The database Lifecycle State changes to Updating. To see the state of the peer,
under Resources, click Disaster recovery where the State column shows Role
Change in Progress.
When the switchover completes, Backup-Based Disaster Recovery does the following:
• The Backup-Based Disaster Recovery resource information is updated to reflect
the switchover. Select Disaster recovery under Resources to see the updated
information.
• Autonomous Database reports the time in the Role Changed On field.
Note:
For a cross-region switchover you must initiate the switchover from the
cross-region peer.
Perform a Failover
When the primary database goes down, with Backup-Based Disaster Recovery you can
perform a manual failover to make the local peer the primary database.
Backup-Based Disaster Recovery does not provide an automatic failover option. If you want
to provide automatic failover, where the system monitors the primary instance and
automatically fails over to a local standby database in certain scenarios, you need to change
the disaster recovery type for the local instance to use Autonomous Data Guard.
With both a local Backup-Based Disaster Recovery peer and a cross-region Backup-Based
Disaster Recovery peer, Oracle recommends that you attempt a manual failover to the local
peer first (not to the cross-region peer).
Depending on how you enable Backup-Based Disaster Recovery, there are different steps to
perform a manual failover to a peer:
• When you configure Backup-Based Disaster Recovery with only a local peer:
When you have a local peer and the switchover is not successful, the Oracle Cloud Infrastructure console shows a banner with information about why the switchover was not successful, and a Failover link appears in the Role field that you can click to initiate a failover to the local peer. The Failover link only shows when the Primary database is unavailable and a peer is available; that is, the Primary database Lifecycle State field shows Unavailable and the local peer is available. Using the API, you can initiate a manual failover at any time. See Use the API for information on using the API.
To see the peer status, under Resources click Disaster recovery and for the peer
listed in the Peer Autonomous Database column check that the State field shows
Available or Stopped.
• When you use Backup-Based Disaster Recovery with both a local peer and a
cross-region (remote) peer:
With Backup-Based Disaster Recovery enabled with both a local peer and a cross-
region peer, and the local peer is available, Oracle recommends that you attempt a
manual failover to the local peer first (not to the cross-region peer).
If a local peer is unavailable or a manual failover to the local peer fails, you can
perform a manual switchover to the cross-region peer. If the switchover to the
cross-region peer fails, on the peer the Oracle Cloud Infrastructure console shows
a failover link in the Role field that you can click to initiate a manual failover to the
peer.
When you initiate a manual failover, the database fails over to the peer according to the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) targets. See Backup-Based Disaster Recovery Recovery Time Objective (RTO) and Recovery Point Objective (RPO) for more information.
Perform the following prerequisite steps as necessary:
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the
links under the Display name column.
To initiate a manual failover to a cross-region backup copy, do the following:
1. On the peer, perform a switchover. See Perform a Switchover to a Local Backup
Copy Peer for details.
2. If the switchover attempt in Step 1 fails, on the peer the Role field shows a
Failover link. On the peer, click the Failover link.
This shows the Confirm manual failover to peer dialog, along with information
on possible data loss that may result if you perform the manual failover to the peer.
3. In the Confirm manual failover to peer dialog, enter the Autonomous Database
name to confirm that you want to failover.
4. In the Confirm manual failover to peer dialog, click Confirm manual failover to
peer.
When concurrent operations such as scaling are active, the confirmation also confirms
either pausing or canceling the concurrent operation. See Concurrent Operations on
Autonomous Database for more information.
To initiate a manual failover when the Primary database is unavailable and the local
Peer is available, do the following:
1. On the Details page, under Disaster recovery, in the Role field, click Failover.
This shows the Confirm manual failover to peer dialog, along with information on
possible data loss that may result if you perform the manual failover to peer.
2. In the Confirm manual failover to peer dialog, enter the Autonomous Database name
to confirm that you want to failover.
3. In the Confirm manual failover to peer dialog, click Confirm manual failover to peer.
When the failover completes, Backup-Based Disaster Recovery does the following:
• You can see any data loss associated with the manual failover in the message on the Oracle Cloud Infrastructure console banner, and by hovering over the information icon in the Role field. The manual failover data loss is specified in minutes.
• After a manual failover with Backup-Based Disaster Recovery, if there was a regional failure, when the region comes back online the peer is automatically reconnected or, if required, reprovisioned.
• After you perform a manual failover to the cross-region peer, the cross-region peer
becomes the primary database. In this case, if a local Autonomous Data Guard standby
was enabled, a local Standby will be created and attached. If a local Autonomous Data
Guard was not enabled before the failover in the primary database, as is the default, you
have a local backup copy.
Snapshot standby storage usage is billed based on the storage of the snapshot
standby plus 1 x the storage of the primary database.
Topics
• About Disaster Recovery Snapshot Standby Databases
• Convert Cross Region Disaster Recovery Peer to a Snapshot Standby
• Convert Snapshot Standby Back to a Cross-Region Disaster Recovery Peer
See Manage User Roles and Privileges on Autonomous Database for more
information.
You can enable Flashback Time Travel for an existing table or a new table that you are
creating.
1. To enable Flashback Time Travel when you create a table:
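The following is a minimal sketch of both cases; the employee table and its columns are illustrative, and flashback_archive is the single Flashback Data Archive provided with each Autonomous Database instance:
-- Enable Flashback Time Travel when creating a table
CREATE TABLE employee (
    emp_id    NUMBER,
    emp_name  VARCHAR2(100)
) FLASHBACK ARCHIVE flashback_archive;

-- Enable Flashback Time Travel for an existing table
ALTER TABLE employee FLASHBACK ARCHIVE flashback_archive;

-- Disable Flashback Time Travel for the table
ALTER TABLE employee NO FLASHBACK ARCHIVE;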
After you enable Flashback Time Travel for a table you can disable Flashback Time
Travel. See Disable Flashback Time Travel for a Table for more information.
For example, to set the Flashback Time Travel retention time to 365 days:
BEGIN
    DBMS_CLOUD_ADMIN.SET_FLASHBACK_ARCHIVE_RETENTION(
        retention_days => 365);
END;
/
Example to purge all historical Flashback Time Travel data:
BEGIN
    DBMS_CLOUD_ADMIN.PURGE_FLASHBACK_ARCHIVE(
        scope => 'ALL');
END;
/
Example to purge the historical Flashback Time Travel data before a specified timestamp:
BEGIN
DBMS_CLOUD_ADMIN.PURGE_FLASHBACK_ARCHIVE
(scope => 'TIMESTAMP', before_timestamp => '12-JUL-2023 10:24:00');
END;
/
Example to purge Flashback Time Travel historical data older than 1 day:
BEGIN
DBMS_CLOUD_ADMIN.PURGE_FLASHBACK_ARCHIVE
(scope => 'TIMESTAMP', before_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
END;
/
Example to purge the historical Flashback Time Travel data before the specified SCN:
BEGIN
DBMS_CLOUD_ADMIN.PURGE_FLASHBACK_ARCHIVE
(scope => 'SCN', before_scn => '5928826');
END;
/
View                         Description
*_FLASHBACK_ARCHIVE          Displays information about Flashback Time Travel files.
*_FLASHBACK_ARCHIVE_TS       Displays tablespaces of Flashback Time Travel files.
*_FLASHBACK_ARCHIVE_TABLES   Displays information about tables that have Flashback Time Travel enabled.
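For example, a quick sketch querying the DBA-level view to list tables that have Flashback Time Travel enabled; the column names follow the standard Oracle data dictionary definition:
SELECT owner_name, table_name, flashback_archive_name
  FROM dba_flashback_archive_tables;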
• After enabling Flashback Time Travel on a table, Oracle recommends initially waiting at least 20 seconds before inserting data into the table, and waiting up to 5 minutes before using Flashback Query on the table (see the Flashback Query sketch after this list).
• There is one Flashback Data Archive per Autonomous Database instance, named
flashback_archive, and you cannot create additional Flashback Data Archives.
• You cannot drop the Flashback Data Archive, flashback_archive, within an Autonomous
Database instance.
• You cannot create, modify, or drop tablespaces for the Flashback Data Archive. Hence, you cannot run the statements that add, modify, or remove Flashback Data Archive tablespaces.
• If you enable Flashback Time Travel on a table, but Automatic Undo Management (AUM)
is disabled, error ORA-55614 occurs when you try to modify the table.
• To enable Flashback Time Travel on a table, the table cannot use any of the following
Flashback Time Travel reserved words as column names:
– STARTSCN
– ENDSCN
– RID
– XID
– OP
– OPERATION
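The following is a minimal Flashback Query sketch, reusing the illustrative employee table from the earlier examples; the interval is arbitrary:
SELECT * FROM employee
    AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);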
Security
Autonomous Database stores all data with inbuilt security. Only authenticated users and
applications can access the data when they connect to the database.
• Security Features Overview
Describes some of the robust security features that Autonomous Database provides.
• Security and Authentication in Oracle Autonomous Database
Oracle Autonomous Database stores all data in encrypted format in the database. Only
authenticated users and applications can access the data when they connect to the
database.
• Security Testing Policies on Autonomous Database
Autonomous Database is subject to the Oracle Cloud Security Testing policy which
describes when and how you may conduct certain types of security testing of Oracle
Cloud Infrastructure services, including vulnerability and penetration tests, as well as
tests involving data scraping tools.
• Manage Encryption Keys on Autonomous Database
Describes how to use customer-managed encryption keys with Autonomous Database,
and if you are using customer-managed encryption keys, shows how to rotate the keys,
switch to Oracle-managed encryption keys, or view the encryption key history.
• Configure Network Access with Access Control Rules (ACLs) and Private
Endpoints
Provides details on how to configure network access with access control rules or
using a private endpoint and describes the secure client connection options. Also
describes how to enable support for TLS connections (require mutual TLS only or
allow both mutual TLS and TLS authentication).
• Audit Autonomous Database
Autonomous Database provides auditing that allows you to monitor Oracle
database activities.
• Rotate Wallets for Autonomous Database
• Use Oracle Database Vault with Autonomous Database
Oracle Database Vault implements powerful security controls for your database.
These unique security controls restrict access to application data by privileged
database users, reducing the risk of insider and outside threats and addressing
common compliance requirements.
• IAM Policies for Autonomous Database
Provides information on IAM policies required for API operations on Autonomous
Database.
• Use Oracle Data Safe with Autonomous Database
Provides information on using Oracle Data Safe on Autonomous Database.
• Break Glass Access for SaaS on Autonomous Database
Autonomous Database supports break glass access for SaaS providers. Break
glass access allows a SaaS operations team, when explicitly authorized by a
SaaS customer, to access a customer's database to perform critical or emergency
operations.
• Encrypt Data While Exporting or Decrypt Data While Importing
For greater security you can encrypt data that you export to Object Storage. When
data on Object Storage is encrypted you can decrypt the data you import or when
you use the data in an external table.
Note:
Oracle Autonomous Database supports the standard security features of the
Oracle Database including privilege analysis, network encryption, centrally
managed users, secure application roles, transparent sensitive data
protection, and others. Additionally, Oracle Autonomous Database adds
Label Security, Database Vault, Data Safe, and other advanced security
features at no additional cost.
• Configuration Management
• Data Encryption
Oracle Autonomous Database uses always-on encryption that protects data at rest
and in transit. Data at rest and in motion is encrypted by default. Encryption cannot
be turned off.
• Data Access Control
Securing access to your Oracle Autonomous Database and your data uses
several different kinds of access control:
• Auditing Overview on Autonomous Database
Oracle Autonomous Database provides robust auditing capabilities that enable
you to track who did what on the service and on specific databases.
Comprehensive log data allows you to audit and monitor actions on your
resources, which helps you to meet your audit requirements while reducing
security and operational risk.
• Assessing the Security of Your Database and its Data
Oracle Autonomous Database integrates with Oracle Data Safe to help you
assess and secure your databases.
• Regulatory Compliance Certification
Oracle Autonomous Database meets a broad set of international and industry-
specific compliance standards.
Configuration Management
The Oracle Autonomous Database provides standard, hardened security configurations that reduce the time and money spent managing configurations across your databases. Security patches and updates are applied automatically, so you don't spend time, money, or attention keeping security up to date. These capabilities protect your databases and data from costly and potentially disastrous security vulnerabilities and breaches.
Data Encryption
Oracle Autonomous Database uses always-on encryption that protects data at rest
and in transit. Data at rest and in motion is encrypted by default. Encryption cannot be
turned off.
Autonomous Database uses AES256 tablespace encryption; each database has its own encryption key, and any backups have their own different encryption keys.
By default, Oracle Autonomous Database creates and manages all the master encryption
keys used to protect your data, storing them in a secure PKCS 12 keystore on the same
systems where the database resides. If your company security policies require, Oracle
Autonomous Database can instead use keys you create and manage in the Oracle Cloud
Infrastructure Vault service. For more information, see About Master Encryption Key
Management on Autonomous Database.
Additionally, customer-managed keys can be rotated when needed in order to meet your
organization's security policies.
Note: When you clone a database, the new database gets its own new set of encryption
keys.
Database Vault
Oracle Database Vault comes preconfigured and ready to use. You can use its powerful security controls to restrict access to application data by privileged database users, reducing the risk of insider and outside threats and addressing common compliance requirements.
You configure controls to block privileged account access to application data and control
sensitive operations inside the database. You configure trusted paths to add additional
security controls to authorized data access, database objects, and database commands.
Database Vault secures existing database environments transparently, eliminating costly and time-consuming application changes.
Before using Database Vault, be sure to review Use Oracle Database Vault with Autonomous
Database to gain an understanding of the impact of configuring and enabling Database Vault.
For detailed information on implementing Database Vault features, refer to Oracle Database
Vault Administrator’s Guide.
Allow group <group-name> to <control-verb> <resource-type> in compartment <compartment-name>
• group <group-name> specifies the "Who" by providing the name of an existing IAM
group.
• to <control-verb> specifies the "How" using one of these predefined control
verbs:
– inspect: the ability to list resources of the given type, without access to any
confidential information or user-specified metadata that may be part of that
resource.
– read: inspect plus the ability to get user-specified metadata and the actual
resource itself.
– use: read plus the ability to work with existing resources, but not to create or
delete them. Additionally, "work with" means different operations for different
resource types.
– manage: all permissions for the resource type, including creation and deletion.
• <resource-type> specifies the "What" using a predefined resource-type. The
resource-type values for infrastructure resources are:
– autonomous-databases
– autonomous-backups
You may create policy statements that refer to the tag-namespaces resource-type
value if tagging is used in your tenancy.
• in compartment <compartment-name> specifies the "Where" by providing the
name of an existing IAM compartment.
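For example, the following statement grants an IAM group full control over Autonomous Databases in one compartment; the group and compartment names are illustrative:
Allow group adb-admins to manage autonomous-databases in compartment Production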
For information about how the IAM service and its components work and how to use
them, see Overview of Oracle Cloud Infrastructure Identity and Access Management.
For quick answers to common questions about IAM, see the Identity and Access
Management FAQ.
Oracle Data Safe helps you understand the sensitivity of your data, evaluate risks to
data, mask sensitive data, implement and monitor security controls, assess user
security, monitor user activity, and address data security compliance requirements in
your databases.
You use Oracle Data Safe to identify and protect sensitive and regulated data in your Oracle Autonomous Database by registering your database with Data Safe. Then, you can use the Data Safe console directly from the Details page of your database.
For more information about using Data Safe, see Use Oracle Data Safe Features.
Certification Description
C5 The Cloud Computing Compliance Controls Catalog (C5)
CSA STAR The Cloud Security Alliance (CSA) Security Trust, Assurance and
Risk (STAR)
Cyber Essentials and Cyber Essentials Plus (UK)   Oracle Cloud Infrastructure has achieved Cyber Essentials and Cyber Essentials Plus certification in these regions:
• UK Gov South (London)
• UK Gov West (Newport)
DESC (UAE) Dubai Electronic Security Center CSP Security Standard
DoD IL4/IL5 DISA Impact Level 5 authorization in these regions:
• US DoD East (Ashburn)
• US DoD North (Chicago)
• US DoD West (Phoenix)
ENS High (Spain) The Esquema Nacional de Seguridad with the level of
accreditation High.
FedRAMP High Federal Risk and Authorization Management Program (U.S.
Government Regions only)
FSI (S. Korea) The Financial Security Institute
HDS The French Public Health Code requires healthcare organizations
that control, process, or store health or medical data to use
infrastructure, hosting, and platform service providers that are
Hébergeur de Données de Santé (HDS) accredited and certified
HIPAA Health Insurance Portability and Accountability Act
HITRUST The Health Information Trust Alliance
IRAP (Australia) The Infosec Registered Assessors Program. Sydney and
Melbourne regions
ISMS (S. Korea) The Information Security Management System
ISO/IEC 27001:2013 International Organization for Standardization 27001
ISO/IEC 27017:2015 Code of Practice for Information Security Controls Based on
ISO/IEC 27002 for Cloud Services
ISO/IEC 27018:2014 Code of Practice for Protection of Personally Identifiable
Information (PII) In Public Clouds Acting as PII Processors
MeitY (India) The Ministry of Electronics and Information Technology
MTCS (Singapore) Multi-Tier Cloud Service (MTCS) Level 3
PASF (UK OC4) Police Assured Secure Facilities (PASF) in these regions:
• UK Gov South (London)
• UK Gov West (Newport)
PCI DSS Payment Card Industry Data Security Standard is a set of
requirements intended to ensure that all companies that process,
store, or transmit credit card information maintain a secure
environment
SOC 1 System and Organization Controls 1
SOC 2 System and Organization Controls 2
For more information and a complete list of certifications, see Oracle Cloud Compliance.
Caution:
The customer-managed encryption key is stored in Oracle Cloud Infrastructure
Vault, external to the database host. If the customer-managed encryption key is
disabled or deleted, the database will be inaccessible.
Note:
Using the Oracle Cloud Infrastructure Console you can rotate an Oracle
Cloud Infrastructure Vault master encryption key with the Rotate Key
command. This is a separate action and does not result in a new master
encryption key for your Autonomous Database. To rotate the master
encryption key of your Autonomous Database, create a new master
encryption key in Oracle Cloud Infrastructure Vault and follow the steps
described below.
Note:
You must use these options when you create the key:
• Key Shape: Algorithm: AES (Symmetric key used for Encrypt and
Decrypt)
• Key Shape: Length: 256 bits
For more information, see To create a new master encryption key and Overview of Key
Management.
3. Create dynamic group and policy statements for the dynamic group to enable access to
Oracle Cloud Infrastructure resources (Vaults and Keys).
This step depends on whether the vault is on the same tenancy as the Autonomous
Database instance or on a different tenancy:
• Vault and keys are in the same tenancy as the Autonomous Database instance. See
Create Dynamic Group and Policies for Customer Managed Keys with Vault in Same
Tenancy as Database for more information.
• Vault and keys are in a different tenancy. See Create Dynamic Group and Policies for
Customer Managed Keys with Vault in Different Tenancy than the Database for more
information.
You must replicate the vault and keys to use customer-managed encryption keys with
Autonomous Data Guard with a remote Standby database. Replication of vaults and keys
across regions is only possible if you select the virtual private vault option when you create
the vault.
See Autonomous Data Guard with Customer Managed Keys for more information.
• Create Dynamic Group and Policies for Customer Managed Keys with Vault in
Same Tenancy as Database
Create dynamic group and policies to provide access to the vault and keys for
customer-managed keys when the vault and keys are in the same tenancy as the
Autonomous Database instance.
• Create Dynamic Group and Policies for Customer Managed Keys with Vault in
Different Tenancy than the Database
Perform these steps to use customer-managed keys when the Autonomous
Database instance and vaults and keys are in different tenancies.
Create Dynamic Group and Policies for Customer Managed Keys with Vault in Same
Tenancy as Database
Create dynamic group and policies to provide access to the vault and keys for
customer-managed keys when the vault and keys are in the same tenancy as the
Autonomous Database instance.
1. Create a dynamic group to make the master encryption key accessible to the
Autonomous Database instance.
a. In the Oracle Cloud Infrastructure console click Identity & Security.
b. Under Identity click Domains and select an identity domain (or create a new
identity domain).
c. Under Identity domain, click Dynamic groups.
d. Click Create dynamic group and enter a Name, a Description, and a rule.
• Create Dynamic Group for an existing database:
You can specify that an Autonomous Database instance is part of the
dynamic group. The dynamic group in the following example includes only
the Autonomous Database whose OCID is specified in the resource.id
parameter:
resource.id = '<your_Autonomous_Database_instance_OCID>'
• Alternatively, to create a dynamic group that covers any Autonomous Database in a compartment, specify the compartment OCID:
resource.compartment.id = '<your_Compartment_OCID>'
e. Click Create.
2. Write policy statements for the dynamic group to enable access to Oracle Cloud
Infrastructure resources (vaults and keys).
a. In the Oracle Cloud Infrastructure console click Identity & Security and click
Policies.
b. To write policies for a dynamic group, click Create Policy, and enter a Name
and a Description.
c. Use the Policy Builder to create a policy for vault and keys in the local tenancy.
For example, the following allows the members of the dynamic group
DGKeyCustomer1 to access the vaults and keys in the compartment named training:
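A sketch of such a policy, assuming the standard OCI Vault policy verbs and resource-types (use vaults, use keys):
Allow dynamic-group DGKeyCustomer1 to use vaults in compartment training
Allow dynamic-group DGKeyCustomer1 to use keys in compartment training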
This sample policy applies for a single compartment. You can specify that a policy
applies for your tenancy, a compartment, a resource, or a group of resources.
To use customer-managed keys with Autonomous Data Guard with a remote
standby, the following policy is also required:
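As a hedged sketch only, cross-region replication generally needs broader (manage) permissions on the vault and keys; confirm the exact statements required for your tenancy:
Allow dynamic-group DGKeyCustomer1 to manage vaults in compartment training
Allow dynamic-group DGKeyCustomer1 to manage keys in compartment training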
Create Dynamic Group and Policies for Customer Managed Keys with Vault in Different Tenancy
than the Database
Perform these steps to use customer-managed keys when the Autonomous Database
instance and vaults and keys are in different tenancies.
In this case, you need to supply OCID values when you change to customer-managed keys.
In addition, you need to define dynamic groups and policies that allow the Autonomous
Database instance to use vaults and keys in a different tenancy.
1. Copy the master encryption key OCID.
2. Copy the vault OCID.
3. Copy the tenancy OCID (the remote tenancy that contains vaults and keys).
4. On the tenancy with the Autonomous Database instance, create a dynamic group.
a. In the Oracle Cloud Infrastructure console, on the tenancy with the Autonomous
Database instance, click Identity & Security.
b. Under Identity click Domains and select an identity domain (or create a new identity
domain).
c. Under Identity domain, click Dynamic groups.
d. Click Create dynamic group and enter a Name, a Description, and a rule.
• Create Dynamic Group for an existing database:
resource.id = '<your_Autonomous_Database_instance_OCID>'
• Alternatively, to create a dynamic group that covers any Autonomous Database in a compartment, specify the compartment OCID:
resource.compartment.id = '<your_Compartment_OCID>'
e. Click Create.
5. On the tenancy with the Autonomous Database instance, define the policies to
allow access to vaults and keys (where the vaults and keys are on a different
tenancy).
a. In the Oracle Cloud Infrastructure console click Identity & Security.
b. Under Identity click Policies.
c. To write a policy, click Create Policy.
d. On the Create Policy page, enter a Name and a Description.
e. On the Create Policy page, select Show manual editor.
f. In the policy builder, add policies so that the Autonomous Database instance is
able to access vaults and keys located in the different tenancy. Also add
policies for the IAM group that the IAM user belongs to so that the Oracle
Cloud Infrastructure Console for the Autonomous Database instance can show
details about the key that resides in a different tenancy.
For example, in the generic policy, call the tenancy with the Autonomous Database
instance Tenancy-1 and the tenancy with vaults and keys, Tenancy-2:
Copy the following policy and replace the variables and names with the values you
define, where the dynamic group name ADB-DynamicGroup is the dynamic group you
created in Step 4:
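A sketch of that generic policy, assuming the standard OCI cross-tenancy Define and Endorse statements (the OCID value is a placeholder):
Define tenancy Tenancy-2 as <Tenancy-2_OCID>
Endorse dynamic-group ADB-DynamicGroup to use vaults in tenancy Tenancy-2
Endorse dynamic-group ADB-DynamicGroup to use keys in tenancy Tenancy-2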
For example, the following allows the members of the dynamic group
DGKeyCustomer1 to access the remote vaults and keys in the tenancy named
training2:
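Under the same assumptions, a concrete version might look like the following sketch:
Define tenancy training2 as <training2_tenancy_OCID>
Endorse dynamic-group DGKeyCustomer1 to use vaults in tenancy training2
Endorse dynamic-group DGKeyCustomer1 to use keys in tenancy training2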
For example define the following on the remote tenancy to allow the members
of the dynamic group DGKeyCustomer1 and the group REMGROUP to access the
remote vaults and keys in the tenancy named training2:
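A sketch of the matching statements on the remote tenancy, assuming the standard OCI Admit syntax; Tenancy-1 names the tenancy with the Autonomous Database instance:
Define tenancy Tenancy-1 as <Tenancy-1_OCID>
Admit dynamic-group DGKeyCustomer1 of tenancy Tenancy-1 to use vaults in tenancy
Admit dynamic-group DGKeyCustomer1 of tenancy Tenancy-1 to use keys in tenancy
Admit group REMGROUP of tenancy Tenancy-1 to use vaults in tenancy
Admit group REMGROUP of tenancy Tenancy-1 to use keys in tenancy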
Caution:
The customer-managed encryption key is stored in Oracle Cloud
Infrastructure Vault, external to the database host. If the customer-managed
encryption key is disabled or deleted, the database will be inaccessible.
For details on using customer-managed keys where the Vault is located in a remote
tenancy, see Use Customer-Managed Encryption Key Located in a Remote Tenancy.
On Autonomous Database you can choose customer-managed keys as follows:
• While provisioning, under Advanced Options, on the Encryption Key tab.
• While cloning, under Advanced Options, on the Encryption Key tab.
Follow these steps if your Autonomous Database is using Oracle-managed keys and
you want to switch to customer-managed encryption keys with the vault in the local
tenancy, or if you are using customer-managed encryption keys and you want to rotate
the master key.
1. Perform the required customer-managed encryption key prerequisite steps as
necessary. See Prerequisites to Use Customer-Managed Encryption Keys on
Autonomous Database for more information.
2. On the Details page, from the More actions drop-down list, select Manage
encryption key.
3. On the Manage encryption key page, select Encrypt using customer-managed
key in this tenancy.
If you are already using customer-managed keys and you want to rotate the TDE master encryption key, follow these steps and select a key that is different from the currently selected master encryption key.
4. Select a Vault.
Click Change Compartment to select a vault in a different compartment.
5. Select a Master encryption key.
Click Change Compartment to select a master encryption key in a different
compartment.
When cross-region Autonomous Data Guard is enabled the Vault and Master
encryption key values show the keys that are replicated in both the primary and the
remote standby region.
6. Click Save Changes.
The Lifecycle State changes to Updating. When the request completes, the Lifecycle State
shows Available.
After the request completes, on the Oracle Cloud Infrastructure Console, the key information
shows on the Autonomous Database Information page under the heading Encryption. This
area shows the Encryption Key field with a link to the Master Encryption Key and the
Encryption Key OCID field with the Master Encryption Key OCID.
See Notes for Using Customer-Managed Keys with Autonomous Database for more
information.
The Lifecycle State changes to Updating. When the request completes, the Lifecycle State
shows Available.
After the request completes, on the Oracle Cloud Infrastructure Console, the key information
shows on the Autonomous Database Information page under the heading Encryption. This
area shows the Encryption Key field with a link to the Master Encryption Key and the
Encryption Key OCID field with the Master Encryption Key OCID.
To view the key history from the Oracle Cloud Infrastructure Console:
1. On the Autonomous Database Details page, under Resources, click Key History.
2. This shows the Key History.
See Notes for Using Customer-Managed Keys with Autonomous Database for more
information.
Caution for Customer-Managed Encryption Keys when Oracle Cloud Infrastructure Vault is
Unreachable
After you switch to customer-managed keys, some database operations might be affected when Oracle Cloud Infrastructure Vault is unreachable. If the database is unable to access Oracle Cloud Infrastructure Vault for some reason, such as a network outage, then Autonomous Database handles the outage as follows:
• There is a 2-hour grace period where the database remains up and running.
• If Oracle Cloud Infrastructure Vault is not reachable at the end of the 2-hour grace period,
the database Lifecycle State is set to Inaccessible. In this state existing connections are
dropped and new connections are not allowed.
• If Autonomous Data Guard is enabled, during or after the 2-hour grace period you can
manually try to perform a failover operation. Autonomous Data Guard automatic failover
is not triggered when you are using customer-managed encryption keys and the Oracle
Cloud Infrastructure Vault is unreachable.
• If Autonomous Database is stopped, then you cannot start the database when the Oracle
Cloud Infrastructure Vault is unreachable.
In this case, you can see the associated work request by clicking Work Requests under Resources on the Oracle Cloud Infrastructure console.
Caution for Using Customer-Managed Encryption Keys When the Master Key is Disabled or
Deleted
After you switch to customer-managed keys, some database operations might be affected if
the Oracle Cloud Infrastructure Vault key is disabled or deleted.
• For disable/delete key operations where the Oracle Cloud Infrastructure Vault Master
Encryption Key State is any of the following:
– DISABLING
– DISABLED
– DELETING
– DELETED
– SCHEDULING_DELETION
– PENDING_DELETION
The database becomes Inaccessible after the Oracle Cloud Infrastructure Vault key
lifecycleState goes into one of these states. When the Oracle Cloud Infrastructure Vault
key is in any of these states, Autonomous Database drops existing connections and does
not allow new connections.
The database can take up to 15 minutes to become inaccessible after the Oracle Cloud Infrastructure Vault key operation (Autonomous Database checks Oracle Cloud Infrastructure Vault at 15-minute intervals).
• If you disable or delete the Oracle Cloud Infrastructure Vault key used by your
Autonomous Database while Autonomous Data Guard is enabled, Autonomous Data
Guard will not perform an automatic failover.
• If you enable a disabled key, the database automatically goes into the Available state.
• If you delete the master key you can check the key history in the Oracle Cloud
Infrastructure Console to find the keys that were used for the database. The history
shows you which Oracle Cloud Infrastructure Vault key OCIDs were used, along with an
activation timestamp. If one of the older keys from the history list is still available in the
Vault, then you can restore the database to the time when the database was using that
key, or you can clone from a backup to create a new database with that timestamp.
Note:
When you are using Oracle-managed keys you can only switch to customer-
managed keys from the primary database. Likewise, when you are using
customer-managed keys you can only rotate keys or switch to Oracle-
managed keys from the primary database. The Manage encryption key
option under More actions is disabled for a standby database.
Note the following when you are using Autonomous Data Guard with a standby
database and customer-managed keys:
• In order for a remote standby to use the same master encryption key as primary,
the master encryption key must be replicated to the remote region. Replication of
vaults across regions is only possible if you select the virtual private vault option
when you create the vault.
See the following for more information:
– Autonomous Data Guard with Customer Managed Keys
– Overview of Vault
• The primary database and the standby database use the same key. In the event of
a switchover or failover to the standby, you can continue to use the same key.
• When you rotate the customer-managed key on the primary database, this is
reflected on the standby database.
• When you switch from customer-managed keys to Oracle-managed keys the
change is reflected on the standby database.
• When the Oracle Cloud Infrastructure Vault is unreachable, there is a 2-hour grace period
before the database goes into the INACCESSIBLE state. You can perform a failover during
or after this 2-hour grace period.
• If you disable or delete the Oracle Cloud Infrastructure master encryption key that your
Autonomous Database is using with customer-managed keys while Autonomous Data
Guard is enabled, Autonomous Data Guard does not perform an automatic failover.
• Autonomous Database does not support using customer-managed encryption keys with a
refreshable clone. You cannot create a refreshable clone from a source database that
uses customer-managed encryption keys. Additionally, you cannot switch to a customer-
managed encryption key on a source database that has one or more refreshable clones.
Configure Network Access with Access Control Rules (ACLs) and Private
Endpoints
Provides details on how to configure network access with access control rules or using a
private endpoint and describes the secure client connection options. Also describes how to
enable support for TLS connections (require mutual TLS only or allow both mutual TLS and
TLS authentication).
• Configuring Network Access with Access Control Rules (ACLs)
Specifying an access control list blocks all IP addresses that are not in the ACL from accessing the database. After you specify an access control list, the Autonomous
Database only accepts connections from addresses on the access control list and the
database rejects all other client connections.
• Configuring Network Access with Private Endpoints
You can specify that Autonomous Database uses a private endpoint inside your Virtual
Cloud Network (VCN) in your tenancy. You can configure a private endpoint during
provisioning or cloning your Autonomous Database, or you can switch to using a private
endpoint in an existing database that uses a public endpoint. This allows you to keep all
traffic to and from your database off of the public internet.
• Update Network Options to Allow TLS or Require Only Mutual TLS (mTLS)
Authentication on Autonomous Database
Describes how to update the secure client connection authentication options, Mutual TLS
(mTLS) and TLS.
After you specify an access control list, the Autonomous Database only accepts connections from addresses on the access control list and the database rejects all other client connections.
• Configure Access Control Lists When You Provision or Clone an Instance
When you provision or clone Autonomous Database with the Secure access from
allowed IPs and VCNs only option, you can restrict network access by defining
an Access Control List (ACL).
• Configure Access Control Lists for an Existing Autonomous Database Instance
You can control and restrict access to your Autonomous Database by specifying
network access control lists (ACLs). On an existing Autonomous Database
instance with a public endpoint you can add, change, or remove ACLs.
• Change from Private to Public Endpoints with Autonomous Database
If your Autonomous Database instance is configured to use a private endpoint you
can change the configuration to use a public endpoint.
• Access Control List Restrictions and Notes
Describes restrictions and notes for access control rules on Autonomous
Database.
2. In the Choose network access area, specify the access control rules by selecting an IP
notation type and entering Values appropriate for the type you select:
• IP Address:
In the Values field enter the IP address. An IP address specified in a network ACL entry is the public IP address of the client, visible on the public internet, that you want to grant access. For example, for an Oracle Cloud Infrastructure VM, this is the IP address shown in the Public IP field on the Oracle Cloud Infrastructure console for that VM.
Note:
Optionally click Add My IP Address to add your current IP address to the
ACL entry.
• CIDR Block:
In the Values field enter the CIDR block. The CIDR block specified is the public CIDR block of the clients, visible on the public internet, that you want to grant access.
• Virtual Cloud Network:
Use this option to specify the VCN for use with an Oracle Cloud Infrastructure
Service Gateway. See Access to Oracle Services: Service Gateway for more
information.
– In Virtual Cloud Network field select the VCN that you want to grant access
from. If you do not have the privileges to see the VCNs in your tenancy this list is
empty. In this case use the selection Virtual Cloud Network (OCID) to specify
the OCID of the VCN.
– Optionally, in the IP Addresses or CIDRs field enter private IP addresses or
private CIDR blocks as a comma separated list to whitelist specific clients in the
VCN.
• Virtual Cloud Network (OCID):
Use this option to specify the VCN (OCID) for use with an Oracle Cloud Infrastructure
Service Gateway. See Access to Oracle Services: Service Gateway for more
information.
– In the Values field enter the OCID of the VCN you want to grant access from.
– Optionally, in the IP Addresses or CIDRs field enter private IP addresses or
private CIDR blocks as a comma separated list to whitelist specific clients in the
VCN.
If you want to specify multiple IP addresses or CIDR ranges within the same VCN, do not
create multiple ACL entries. Use one ACL entry with the values for the multiple IP
addresses or CIDR ranges separated by commas.
3. Click + Access Control Rule to add a new value to the access control list.
4. Click x to remove an entry.
You can also clear the value in the IP Addresses or CIDR Blocks field to remove an
entry.
5. Optionally select Require mutual TLS (mTLS) authentication. This option becomes available after you enter an IP notation type and a value.
2. In the dialog, under Access Type, select Secure access from allowed IPs and
VCNs only and specify the access control rules by selecting an IP notation type
and values:
• IP Address:
In the Values field enter the IP address. An IP address specified in a network ACL entry is the public IP address of the client, visible on the public internet, that you want to grant access. For example, for an Oracle Cloud Infrastructure VM, this is the IP address shown in the Public IP field on the Oracle Cloud Infrastructure console for that VM.
Note:
Optionally click Add My IP Address to add your current IP address to the
ACL entry.
• CIDR Block:
In the Values field enter the CIDR block. The CIDR block specified is the public CIDR block of the clients, visible on the public internet, that you want to grant access.
• Virtual Cloud Network:
Use this option to specify the VCN for use with an Oracle Cloud Infrastructure
Service Gateway. See Access to Oracle Services: Service Gateway for more
information.
– In Virtual Cloud Network field select the VCN that you want to grant access
from. If you do not have the privileges to see the VCNs in your tenancy this list is
empty. In this case use the selection Virtual Cloud Network (OCID) to specify
the OCID of the VCN.
– Optionally, in the IP Addresses or CIDRs field enter private IP addresses or
private CIDR blocks as a comma separated list to allow specific clients in the
VCN.
• Virtual Cloud Network (OCID):
Use this option to specify the VCN for use with an Oracle Cloud Infrastructure
Service Gateway. See Access to Oracle Services: Service Gateway for more
information.
– In the Values field enter the OCID of the VCN you want to grant access from.
– Optionally, in the IP Addresses or CIDRs field enter private IP addresses or
private CIDR blocks as a comma separated list to allow specific clients in the
VCN.
If you want to specify multiple IP addresses or CIDR ranges within the same VCN, do not
create multiple ACL entries. Use one ACL entry with the values for the multiple IP
addresses or CIDR ranges separated by commas.
3. Click + Access Control Rule to add a new value to the access control list.
4. Click x to remove an entry.
You can also clear the value in the IP Addresses or CIDR Blocks field to remove an
entry.
5. Click Update.
If the Lifecycle State is Available when you click Update, the Lifecycle State changes to Updating until the ACL is set. The database is still up and accessible; there is no downtime. When the update is complete, the Lifecycle State returns to Available and the network ACLs from the access control list are in effect.
3. In the dialog, under Configure access control rules specify rules by selecting an IP
notation type and values:
• IP Address:
In the Values field enter the IP address. An IP address specified in a network ACL entry is the public IP address of the client, visible on the public internet, that you want to grant access. For example, for an Oracle Cloud Infrastructure VM, this is the IP address shown in the Public IP field on the Oracle Cloud Infrastructure console for that VM.
Note:
Optionally click Add My IP Address to add your current IP address to the
ACL entry.
• CIDR Block:
In the Values field enter the CIDR block. The CIDR block specified is the public CIDR block of the clients, visible on the public internet, that you want to grant access.
• Virtual Cloud Network:
– In Virtual Cloud Network field select the VCN that you want to grant access
from. If you do not have the privileges to see the VCNs in your tenancy this list is
empty. In this case use the selection Virtual Cloud Network (OCID) to specify
the OCID of the VCN.
– Optionally, in the IP Addresses or CIDRs field enter private IP addresses or
private CIDR blocks as a comma separated list to allow specific clients in the
VCN.
• Virtual Cloud Network (OCID):
– In the Values field enter the OCID of the VCN you want to grant access from.
– Optionally, in the IP Addresses or CIDRs field enter private IP addresses or
private CIDR blocks as a comma separated list to allow specific clients in the
VCN.
If you want to specify multiple IP addresses or CIDR ranges within the same VCN, do not
create multiple ACL entries. Use one ACL entry with the values for the multiple IP
addresses or CIDR ranges separated by commas.
4. Click + Access Control Rule to add a new value to the access control list.
5. Click x to remove an entry.
You can also clear the value in the IP Addresses or CIDR Blocks field to remove an
entry.
6. Click Update.
7. In the Confirm dialog, type the Autonomous Database name to confirm the change.
8. In the Confirm dialog, click Update.
The Lifecycle State changes to Updating until the operation completes.
Notes for changing from private endpoint to public endpoint network access:
• After updating the network access type all database users must obtain a new wallet and
use the new wallet to access the database. See Download Client Credentials (Wallets)
for more information.
• After the update completes, you can change or define new access control rules (ACLs) for the public endpoint. See Configure Access Control Lists for an Existing Autonomous Database Instance for more information.
• The URLs for Database Actions and the Database Tools are different when a database uses a private endpoint compared to a public endpoint. After changing from a private endpoint to a public endpoint, click Database Actions on the Oracle Cloud Infrastructure Console to find the updated Database Actions URL, and in Database Actions click the appropriate cards to find the updated Database Tools URLs.
The IAM service uses groups, compartments and policies to control which cloud
users can access which resources. In particular, a policy defines what kind of access a
group of users has to a particular kind of resource in a particular compartment. For
more information, see Getting Started with Policies.
In addition to the policies required to provision and manage an Autonomous Database,
some network policies are needed to use private endpoints. The following table lists
the IAM policies required for a cloud user to add a private endpoint.
Note:
The listed policies are the minimum requirements to add a private endpoint.
You can also use a policy rule that is broader. For example, if you set the
policy rule:
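A broader rule of this kind might be the following sketch; the group name is illustrative, and virtual-network-family is the aggregate networking resource-type:
Allow group NetworkAdmins to manage virtual-network-family in tenancy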
This rule also works because it is a superset that contains all the required
policies.
Note:
If you select Private endpoint access only, this only allows connections from
the specified private network (VCN), from peered VCNs, and from on-prem
networks connected to your VCN. You can configure an Autonomous Database
instance on a private endpoint to allow connections from on-prem networks.
See Example: Connecting from Your Data Center to Autonomous Database for
an example.
If you want to allow connections from public IP addresses, then you need to select either
Secure access from everywhere or Secure access from allowed IPs and VCNs only
when you provision or clone your Autonomous Database.
2. Select a Virtual cloud network in your compartment or if the VCN is in a different
compartment click Change Compartment and select the compartment that contains the
VCN and then select a virtual cloud network.
See VCNs and Subnets for more information.
3. Select the Subnet in your compartment to attach the Autonomous Database to or if the
Subnet is in a different compartment click Change Compartment and select the
compartment that contains the Subnet and then select a subnet.
See VCNs and Subnets for more information.
This specifies a hostname prefix for the Autonomous Database and associates a DNS name with the database instance, in the following form:
hostname_prefix.adb.region.oraclecloud.com
Note:
Incoming and outgoing connections are limited by the combination of
ingress and egress rules defined in NSGs and the Security Lists defined
with the VCN. When there are no NSGs, ingress and egress rules defined
in the Security Lists for the VCN still apply. See Security Lists for more
information on working with Security Lists.
Note:
If you select Private endpoint access only, this only allows connections
from the specified private network (VCN), from peered VCNs, and from
on-prem networks connected to your VCN. Thus, you can configure an
Autonomous Database instance on a private endpoint to allow
connections from on-prem networks. See Example: Connecting from
Your Data Center to Autonomous Database for an example.
This specifies a hostname prefix for the Autonomous Database and associates a
DNS name with the database instance, in the following form:
hostname_prefix.adb.region.oraclecloud.com
Note:
Incoming and outgoing connections are limited by the combination of
ingress and egress rules defined in NSGs and the Security Lists defined
with the VCN. When there are no NSGs, ingress and egress rules defined
in the Security Lists for the VCN still apply. See Security Lists for more
information on working with Security Lists.
Note:
With ROUTE_OUTBOUND_CONNECTIONS not set to PRIVATE_ENDPOINT, all
outgoing connections to the public internet pass through the Network
Address Translation (NAT) Gateway of the service VCN. In this case, if the
target host is on a public endpoint the outgoing connections are not subject
to the Autonomous Database instance private endpoint VCN or NSG egress
rules.
When you configure a private endpoint for your Autonomous Database instance and
set ROUTE_OUTBOUND_CONNECTIONS to PRIVATE_ENDPOINT, this setting changes the
handling of outbound connections for the following:
• Database links
• APEX_LDAP, APEX_MAIL, and APEX_WEB_SERVICE
• UTL_HTTP, UTL_SMTP, and UTL_TCP
• DBMS_LDAP
• CMU with Microsoft Active Directory
See Use Microsoft Active Directory with Autonomous Database for more
information.
To set ROUTE_OUTBOUND_CONNECTIONS:
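A minimal sketch, assuming the ALTER DATABASE PROPERTY syntax Autonomous Database uses for instance-level properties and that the value is exposed through DATABASE_PROPERTIES:
ALTER DATABASE PROPERTY SET ROUTE_OUTBOUND_CONNECTIONS = 'PRIVATE_ENDPOINT';

-- Verify the current setting
SELECT property_value
  FROM database_properties
 WHERE property_name = 'ROUTE_OUTBOUND_CONNECTIONS';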
If the property is not set, the query does not return any rows.
• This property only applies for database links that you create after you set the property to
the value PRIVATE_ENDPOINT. Thus, database links that you created prior to setting the
property continue to use the NAT Gateway of the service VCN and are not subject to the
Autonomous Database instance private endpoint's egress rules.
• Only set ROUTE_OUTBOUND_CONNECTIONS to the value PRIVATE_ENDPOINT when you are
using Autonomous Database with a private endpoint.
• By default, when you are accessing other private endpoints the connection is subject to
your VCN's egress rules. Setting ROUTE_OUTBOUND_CONNECTIONS has no effect in this
case. The ROUTE_OUTBOUND_CONNECTIONS property applies when you want outgoing
connections to follow the private endpoint egress rules even when accessing public
endpoints.
See NAT Gateway for more information on Network Address Translation (NAT) gateway.
– Network Security Groups: This field includes links to the NSG(s) configured
with the private endpoint.
• After provisioning or cloning completes, you can change the Autonomous
Database configuration to use a public endpoint.
See Change from Private to Public Endpoints with Autonomous Database for
information on changing to a public endpoint.
• You can specify up to five NSGs to control access to your Autonomous Database.
• You can change the private endpoint Network Security Group (NSG) for the
Autonomous Database.
To change the NSG for a private endpoint, do the following:
1. On the Autonomous Databases page select an Autonomous Database from
the links under the Display name column.
2. On the Autonomous Database Details page, under Network in the Network
Security Groups field, click Edit.
• You can connect your Oracle Analytics Cloud instance to your Autonomous
Database that has a private endpoint using the Data Gateway like you do for an
on-premises database. See Configure and Register Data Gateway for Data
Visualization for more information.
• The following Autonomous Database tools are supported in databases configured
with a private endpoint:
– Database Actions
– Oracle APEX
– Oracle Graph Studio
– Oracle Machine Learning Notebooks
– Oracle REST Data Services
– Oracle Database API for MongoDB
Additional configuration is required to access these Autonomous Database tools
from on-premises environments. See Example: Connecting from Your Data Center
to Autonomous Database to learn more.
Accessing Oracle APEX, Database Actions, Oracle Graph Studio, or Oracle REST
Data Services using a private endpoint from on-premises environments without
completing the additional private endpoint configuration shows the error:
404 Not Found
• After you update the network access to use a private endpoint, the URL for the
Database Tools is different compared to using a public endpoint. You can find the
updated URLs on the console, after changing from a public endpoint to a private
endpoint.
• In addition to the default Oracle REST Data Services (ORDS) preconfigured with
Autonomous Database, you can configure an alternative ORDS deployment that
provides more configuration options and that can be used with private endpoints.
See About Customer Managed Oracle REST Data Services on Autonomous
Database to learn about an alternative ORDS deployment that can be used with
private endpoints.
• Modifying a private IP address is not allowed after you provision or clone an instance, whether the IP address was automatically assigned or you entered a value in the Private IP address field.
Consider an Autonomous Database instance that has a private endpoint in the VCN named "Your VCN". The VCN includes two subnets: "SUBNET B" (CIDR 10.0.1.0/24) and "SUBNET A" (CIDR 10.0.2.0/24).
The Network Security Group (NSG) associated with the Autonomous Database instance is
shown as "NSG 1 - Security Rules". This Network Security Group defines security rules that
allow incoming and outgoing traffic to and from the Autonomous Database instance. Define a
rule for the Autonomous Database instance as follows:
• For Mutual TLS authentication, add a stateful ingress rule to allow connections from the
source to the Autonomous Database instance; the source is set to the address range you
want to allow to connect to your database, IP Protocol is set to TCP, and the Destination
Port Range is set to 1522.
• For TLS authentication, add a stateful ingress rule to allow connections from the
source to the Autonomous Database instance; the source is set to the address
range you want to allow to connect to your database, IP Protocol is set to TCP,
and the Destination Port Range is set to 1521.
• To use Oracle APEX, Database Actions, and Oracle REST Data Services, add
port 443 to the NSG rule.
The following figure shows a sample stateful security rule to control traffic for the
Autonomous Database instance:
After you configure the security rules, your application can connect to the Autonomous
Database instance using the client credentials wallet. See Download Client
Credentials (Wallets) for more information.
See Network Security Groups for information on configuring Network Security Groups.
To connect from your data center, you connect the on-premises network to the VCN with FastConnect and then set up a Dynamic Routing Gateway (DRG). To resolve the Autonomous Database private endpoint, which is a Fully Qualified Domain Name (FQDN), you must add an entry to your on-premises client's hosts file (for example, the /etc/hosts file on Linux machines). For example:
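The IP address and FQDN below are placeholders for the values shown on your Autonomous Database Details page:
10.0.2.7   dbname.adb.us-phoenix-1.oraclecloud.com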
To use Oracle APEX, Database Actions, and Oracle REST Data Services, add another entry
with the same IP. For example:
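Again with placeholder values, the second entry uses the same IP address with the tools FQDN (the oraclecloudapps.com domain shown here is an assumption based on the typical tools URL format; use the URLs shown on your console):
10.0.2.7   dbname.adb.us-phoenix-1.oraclecloudapps.com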
• For Mutual TLS authentication, create a stateful rule to allow ingress traffic from the data center. This is a stateful ingress rule with the source set to the address range you want to allow to connect to your database (for example, the data center CIDR range 172.16.0.0/16), the protocol set to TCP, and the destination port set to 1522.
• For TLS authentication, create a stateful rule to allow ingress traffic from the data center. This is a stateful ingress rule with the source set to the address range you want to allow to connect to your database (for example, the data center CIDR range 172.16.0.0/16), the protocol set to TCP, and the destination port set to 1521.
• To use Oracle APEX, Database Actions, and Oracle REST Data Services, add
port 443 to the NSG rule.
The following figure shows the security rule that controls traffic for the Autonomous
Database instance:
After you configure the security rule, your on-premises database application can connect to the Autonomous Database instance using the client credentials wallet. See Download Client Credentials (Wallets) for more information.
Update Network Options to Allow TLS or Require Only Mutual TLS (mTLS)
Authentication on Autonomous Database
Describes how to update the secure client connection authentication options, Mutual
TLS (mTLS) and TLS.
• Network Access Prerequisites for TLS Connections
Describes the network access configuration prerequisites for TLS connections.
• Update your Autonomous Database Instance to Allow both TLS and mTLS
Authentication
• Update your Autonomous Database Instance to Require mTLS and Disallow TLS
Authentication
If your Autonomous Database instance is configured to allow TLS connections,
you can update the instance to require mTLS connections and disallow TLS
connections.
See Configuring Network Access with Access Control Rules (ACLs) for more information.
• When an Autonomous Database instance is configured with a private endpoint you can
use TLS authentication to connect to the database. To validate that a private endpoint is
defined, in the Network area on the Autonomous Database Details page view the
Access Type field. This field shows Virtual Cloud Network when a private endpoint is
defined.
See Configuring Network Access with Private Endpoints for more information.
Note:
When an Autonomous Database instance is configured with the network access
type: Allow secure access from everywhere, you can only use TLS connections
to connect to the database if you specify ACLs to restrict access.
Update your Autonomous Database Instance to Allow both TLS and mTLS Authentication
If your Autonomous Database instance is configured to only allow mTLS connections, you
can update the instance to allow both mTLS and TLS connections.
When you update your configuration to allow both mTLS and TLS, you can use both
authentication types at the same time and connections are no longer restricted to require
mTLS authentication.
You can allow TLS connections when network access is configured as follows:
• With network access configured with ACLs defined.
• With network access configured with a private endpoint defined.
Note:
When you configure your Autonomous Database instance network access with
ACLs or a private endpoint, the ACLs or the private endpoint apply for both mTLS
and TLS connections.
Perform the network access configuration prerequisites. See Network Access Prerequisites
for TLS Connections for more information.
Perform the following steps as necessary:
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To change the Autonomous Database instance to allow TLS authentication, do the following:
1. On the Autonomous Database Details page, under Network, click Edit in the Mutual
TLS (mTLS) Authentication field.
This shows the Edit Mutual TLS Authentication page.
2. Deselect Require mutual TLS (mTLS) authentication.
3. Click Update.
The Autonomous Database Lifecycle State changes to Updating.
After some time, the Lifecycle State shows Available and the Mutual TLS
(mTLS) Authentication field changes to show Not Required.
After you define ACLs or configure a private endpoint and the Mutual TLS (mTLS)
Authentication field shows Not Required, the ACLs or the private endpoint you
specify apply to all connection types (mTLS and TLS).
Depending on the type of client, TLS connections have the following support with
Autonomous Database:
• If the client is connecting with JDBC Thin using TLS authentication, the client can connect without providing a wallet (a sample TLS connect descriptor is shown after this list). See Connect with JDBC Thin Driver for more information.
• If the client is connecting with managed ODP.NET or ODP.NET Core versions
19.13 or 21.4 (or above) using TLS authentication, the client can connect without
providing a wallet. See Connect Microsoft .NET, Visual Studio Code, and Visual
Studio Without a Wallet for more information.
• If the client is connecting with SQL*Net and Oracle Call Interface (OCI), and for
certain other connection types with TLS authentication, the clients must provide
the CA certificate in a wallet. See Prepare for Oracle Call Interface, ODBC, and
JDBC OCI Connections Using TLS Authentication and Connect SQL*Plus Without
a Wallet for more information.
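As an illustration of the walletless TLS connections described above, a TLS connect descriptor generally has the following shape. The host, port, and service name here are placeholders; the exact connect string for your database is shown on the Database connection page:
(description=
  (retry_count=20)(retry_delay=3)
  (address=(protocol=tcps)(port=1521)(host=dbname.adb.us-phoenix-1.oraclecloud.com))
  (connect_data=(service_name=xxxxxxxx_dbname_high.adb.oraclecloud.com))
  (security=(ssl_server_dn_match=yes)))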
Update your Autonomous Database Instance to Require mTLS and Disallow TLS Authentication
If your Autonomous Database instance is configured to allow TLS connections, you can
update the instance to require mTLS connections and disallow TLS connections.
Note:
When you update an Autonomous Database instance to require Mutual TLS
(mTLS) connections, existing TLS connections are disconnected.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page select your Autonomous Database from the links
under the Display name column.
To change the Autonomous Database instance to require mTLS authentication and to not
allow TLS authentication, do the following:
1. On the Autonomous Database Details page, under Network, click Edit in the Mutual
TLS (mTLS) Authentication field.
This shows the Edit Mutual TLS Authentication page.
2. Select Require mutual TLS (mTLS) authentication.
3. Click Update.
The Autonomous Database Lifecycle State changes to Updating.
After some time, the Lifecycle State shows Available and the Mutual TLS
(mTLS) Authentication field changes to show Required.
Depending on the number and type of audit policies you use and the amount of
activity, over time the audit trail can grow to use a large amount of storage.
Autonomous Database provides the following ways to limit the storage required for
audit data:
• Each Autonomous Database instance runs an automated purge job once a day to
remove all audit records older than fourteen (14) days.
• Users with the AUDIT_ADMIN role can purge audit records manually using the
DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL procedure. See DBMS_AUDIT_MGMT for
more information.
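For example, a minimal sketch of a manual purge, assuming the unified audit trail and purging only records before the last archive timestamp:
BEGIN
  -- Purge unified audit trail records older than the last archive timestamp
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => TRUE);
END;
/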
If you need a longer audit data retention period than 14 days, use Oracle Data Safe to
retain audit data. See Extend Audit Record Retention with Oracle Data Safe on
Autonomous Database for more information.
Autonomous Database audits and logs every operation carried out in your database by
the Oracle Cloud Infrastructure Operations teams. See View Oracle Cloud
Infrastructure Operations Actions for more information on how to audit Operations
activities.
a. Open the Oracle Cloud Infrastructure Console by clicking the navigation menu next to Oracle Cloud.
b. From the Oracle Cloud Infrastructure left navigation menu click Oracle Database and
then, depending on your workload click one of: Autonomous Data Warehouse,
Autonomous JSON Database, or Autonomous Transaction Processing.
c. On the Autonomous Databases page select an Autonomous Database from the links
under the Display Name column.
2. Register your Autonomous Database instance with Oracle Data Safe.
a. On the Autonomous Database Details page, under Data Safe, click Register.
View and Manage Oracle Data Safe Audit Trails on Autonomous Database
Data Safe uses audit trails to define where to retrieve the audit data and to collect
Autonomous Database audit records. During the registration process, Oracle Data
Safe discovers the audit trails and creates an audit trail resource.
Oracle Data Safe lists resources on the Audit Trails page. To access the Audit Trails
page, under Security Center in Data Safe click Activity Auditing, and then, on the
Activity Auditing page under Related Resources click Audit Trails. You can discover
new audit trails at any time and remove audit trail resources in Oracle Data Safe as
needed.
First register your Autonomous Database instance with Oracle Data Safe. See
Register Oracle Data Safe on Autonomous Database for more information.
When the Autonomous Database is stopped or restarted, the following happens:
• The audit trail switches to a retrying state and Data Safe makes multiple attempts
to reconnect for four (4) hours. The Audit Trail Collection State field shows:
RETRYING.
In this case, if the Autonomous Database (target database) starts, the audit trail
automatically resumes.
• After four hours, the audit trail switches to a stopped state. The Audit Trail
Collection State field shows: STOPPED_NEEDS_ATTN.
In this case, when the Autonomous Database (target database) starts and the
audit trail collection state is STOPPED_NEEDS_ATTN, you can manually resume
the audit trail.
You can also manually stop or delete the audit trail. Deleting the audit trail does not
remove audit records that have already been collected. Those records remain in Data
Safe until the retention period is reached.
See View and Manage Audit Trails for more information.
View and Manage Audit Policies with Oracle Data Safe on Autonomous Database
Use Oracle Data Safe to set audit policies for your Autonomous Database instance.
First register your Autonomous Database instance with Oracle Data Safe. See Register
Oracle Data Safe on Autonomous Database for more information.
After your Autonomous Database instance is registered, access Oracle Data Safe to set audit
policies.
See View and Manage Audit Policies for more information.
Note:
If you want to terminate all connections immediately after the wallet
rotation completes, Oracle recommends that you restart the Autonomous
Database instance. This provides the highest level of security for your
database.
Note:
If you want to terminate all connections immediately after the wallet
rotation completes, Oracle recommends that you restart the Autonomous
Database instances in the region. This provides the highest level of
security for your database.
To immediately rotate the client certification key for a given database or for all
Autonomous Database instances that a cloud account owns in a region:
1. Navigate to the Autonomous Database details page.
2. Click Database connection.
3. On the Database connection page select the Wallet type:
• Instance wallet: Wallet rotation for a single database only; this provides a
database-specific wallet rotation.
• Regional wallet: Wallet rotation for all Autonomous Databases for a given tenant
and region (this option rotates the client certification key for all service instances that
a cloud account owns).
4. Click Rotate wallet.
5. Enter the name as shown in the dialog to confirm the wallet rotation.
6. In the Rotate Wallet dialog, click Rotate.
The Database Connection page shows: Rotation in Progress.
After the rotation completes, the Wallet last rotated field shows the last rotation date and
time.
Oracle recommends you provide a database-specific instance wallet to end users and for
application use whenever possible, with Wallet type set to Instance wallet when you use
Download wallet. Regional wallets should only be used for administrative purposes that
require potential access to all Autonomous Databases within a region.
You can also use the Autonomous Database API to rotate wallets using
UpdateAutonomousDatabaseRegionalWallet and UpdateAutonomousDatabaseWallet. See
Autonomous Database Wallet Reference for more information.
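As a sketch only, rotating a single instance wallet through the REST API is an UpdateAutonomousDatabaseWallet request against the wallet endpoint. The HTTP verb, endpoint, region, OCID, and payload shape shown here are assumptions based on OCI Database API conventions; consult the Autonomous Database Wallet Reference for the exact request:
PUT https://database.us-phoenix-1.oraclecloud.com/20160918/autonomousDatabases/ocid1.autonomousdatabase.oc1.phx.uniqueId/wallet
{
  "shouldRotate": true
}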
Note:
After the grace period completes, if you want to terminate any connections
using the old wallet, Oracle recommends that you restart the Autonomous
Database instance.
– For the region whose certification key is rotated, both regional and database-specific instance wallets will be void. After the grace period expires, you have to download new regional or instance wallets to connect to any database in the region.
– After the grace period expires, existing connections using the old wallets
continue to work.
Note:
After the grace period completes, if you want to terminate any
connections using the old wallets, Oracle recommends that you restart
every Autonomous Database instance in the region.
To rotate the client certification key with a grace period for a given database or for all Autonomous Database instances that a cloud account owns in a region:
1. Navigate to the Autonomous Database details page.
2. Click Database connection.
3. On the Database connection page select the Wallet type:
• Instance wallet: Wallet rotation for a single database only; this provides a
database-specific wallet rotation.
• Regional wallet: Wallet rotation for all Autonomous Databases for a given
tenant and region (this option rotates the client certification key for all service
instances in the region that a cloud account owns).
4. Click Rotate wallet.
5. Select After a grace period.
6. In the Grace period (in hours) area, either enter a value in the text field or use
the slider to select a value.
7. Enter the name as shown in the dialog to confirm the wallet rotation.
8. In the Rotate Wallet dialog, click Rotate.
The Database Connection page shows: Rotation in Progress.
After the rotation completes, the Wallet last rotated field shows the last rotation date and
time.
Notes for wallet rotation with a grace period:
• Always Free Autonomous Databases only support immediate wallet rotation (wallet
rotation with a grace period is not supported).
• Oracle recommends you provide a database-specific instance wallet to end users and for
application use whenever possible, with Wallet type set to Instance wallet when you use
Download wallet. Regional wallets should only be used for administrative purposes that
require potential access to all Autonomous Databases within a region.
You can also use the Autonomous Database API to rotate wallets using
UpdateAutonomousDatabaseRegionalWallet and UpdateAutonomousDatabaseWallet.
See Autonomous Database Wallet Reference for more information.
1. Configure Oracle Database Vault, specifying the Database Vault owner and account manager users:
EXEC DBMS_CLOUD_MACADM.CONFIGURE_DATABASE_VAULT('adb_dbv_owner', 'adb_dbv_acctmgr');
Where:
• adb_dbv_owner is the Oracle Database Vault owner.
• adb_dbv_acctmgr is the account manager.
See CONFIGURE_DATABASE_VAULT Procedure for more information.
2. Enable Oracle Database Vault:
EXEC DBMS_CLOUD_MACADM.ENABLE_DATABASE_VAULT;
You can verify the configuration and status by querying the DBA_DV_STATUS view:
SELECT * FROM DBA_DV_STATUS;

NAME                 STATUS
-------------------- -----------
DV_CONFIGURE_STATUS  TRUE
DV_ENABLE_STATUS     TRUE
Note:
Autonomous Database maintenance operations such as backups and patching are
not affected when Oracle Database Vault is enabled.
See Disable Oracle Database Vault on Autonomous Database for information on disabling
Oracle Database Vault.
To disable Oracle Database Vault, run:
EXEC DBMS_CLOUD_MACADM.DISABLE_DATABASE_VAULT;
You can verify the status by querying the DBA_DV_STATUS view:
SELECT * FROM DBA_DV_STATUS;

NAME                 STATUS
-------------------- -----------
DV_CONFIGURE_STATUS  TRUE
DV_ENABLE_STATUS     FALSE
1. As a user granted the DV_ACCTMGR and DV_ADMIN roles, you can disable user management for specified components.
2. To disable user management for a specified component, for example the APEX component, use the following command:
EXEC DBMS_CLOUD_MACADM.DISABLE_USERMGMT_DATABASE_VAULT('APEX');
To re-enable user management for a specified component, for example the APEX component, use the following command:
EXEC DBMS_CLOUD_MACADM.ENABLE_USERMGMT_DATABASE_VAULT('APEX');
• AUTONOMOUS_DATABASE_DELETE
• AUTONOMOUS_DATABASE_INSPECT
• AUTONOMOUS_DATABASE_UPDATE
• AUTONOMOUS_DB_BACKUP_CONTENT_READ
• AUTONOMOUS_DB_BACKUP_CREATE
• AUTONOMOUS_DB_BACKUP_INSPECT
• NETWORK_SECURITY_GROUP_UPDATE_MEMBERS
• VNIC_ASSOCIATE_NETWORK_SECURITY_GROUP
Resource-Types
An aggregate resource-type covers the list of individual resource-types that directly follow. For example, writing one policy to allow a group to have access to the autonomous-database-family is equivalent to writing two separate policies for the group that would grant access to the autonomous-databases and autonomous-backups resource-types. For more information, see Resource-Types.
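As an illustration (the group and compartment names below are placeholders), the single aggregate policy:
Allow group ADB-Admins to manage autonomous-database-family in compartment Production
stands in for the equivalent individual policies:
Allow group ADB-Admins to manage autonomous-databases in compartment Production
Allow group ADB-Admins to manage autonomous-backups in compartment Production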
Resource-Types for Autonomous Database
Aggregate Resource-Type:
autonomous-database-family
Individual Resource-Types:
autonomous-databases
autonomous-backups
Supported Variables
General variables are supported. See General Variables for All Requests for more
information.
Additionally, you can use the target.workloadType variable to scope a policy to a specific Autonomous Database workload type, as shown in the following example:
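The group, compartment, and workload type value shown here are illustrative placeholders:
Allow group ADW-Admins to manage autonomous-databases in compartment Production where target.workloadType = 'ADW'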
The following tables show the Permissions and API operations covered by each verb.
For information about permissions, see Permissions.
Note:
The resource family covered by autonomous-database-family can be used
to grant access to database resources associated with all the Autonomous
Database workload types.
• Your organization's policies require that you monitor your databases and retain audit
records.
• You need to protect against common database attacks coming from risks such as
compromised accounts.
• Your developers need to use copies of production data for work on a new application and
you're wondering what kinds of sensitive information the production data contains.
• You need to make sure that staff changes haven't left dormant user accounts on your
databases.
Oracle Data Safe provides the following:
• Security Assessment: Configuration errors and configuration drift are significant
contributors to data breaches. Use security assessment to evaluate your database's
configuration and compare it to Oracle and industry best practices. Security assessment
provides reports on areas of risk and notifies you when configurations change.
• User Assessment: Many breaches start with a compromised user account. User
Assessment helps you spot the riskiest database accounts, those accounts which if
compromised could cause the most damage. User Assessment helps you take proactive
steps to secure these accounts. User Assessment Baselines make it easy to know when
new accounts are added, or when an account's privileges are modified. You can use
Oracle Cloud Infrastructure Events to receive proactive notifications when a database
deviates from its baseline.
• Data Discovery: Provides support to locate and to manage sensitive data in your
applications. Data discovery scans your database for over 150 different types of sensitive
data and helps you to understand what types and how much sensitive data you are
storing. Use the data discovery reports to formulate audit policies, develop data masking
templates, and create effective access control policies.
• Data Masking: Minimize the amount of sensitive data your organization maintains to help
you meet compliance requirements and satisfy data privacy regulations. Data masking
helps you remove risk from your non-production databases by replacing sensitive
information with masked data. With reusable masking templates, over 50 included
masking formats, and the ability to easily create custom formats for your organization's
unique requirements, data masking can streamline your application development and
testing operations.
• Activity Auditing: Activity auditing collects audit records from databases and helps you
manage audit policies. Understanding and reporting on user activity, data access, and
changes to database structures supports regulatory compliance requirements and can
aid in post-incident investigations. Audit insights make it easy to identify inefficient audit
policies, while alerts based on audit data proactively notify you of risky activity.
Note:
One (1) million audit records per database per month are included for your
Autonomous Database if using the audit collection for Activity Auditing in Oracle
Data Safe.
• read-only: provides read only access to the instance. This is the default access
type and includes default roles: CREATE SESSION, SELECT ANY TABLE, SELECT ANY
DICTIONARY, SELECT_CATALOG_ROLE.
• read/write: provides read/write access to the instance. The default roles for this
type are: CREATE SESSION, SELECT ANY TABLE, SELECT ANY DICTIONARY,
SELECT_CATALOG_ROLE, INSERT ANY TABLE, and UPDATE ANY TABLE.
• admin: provides admin access to the instance. The default roles for this type are:
CREATE SESSION and PDB_DBA.
Before you enable the SAAS_ADMIN user to access a database you must obtain values for the
required parameters.
Parameter: Description
isEnabled: Specifies a Boolean value. Use TRUE to enable.
password: Specifies the password for the SAAS_ADMIN user. If you specify secretId you cannot specify password. The password you provide as a parameter must conform to the Autonomous Database password requirements. See About User Passwords on Autonomous Database for more information.
secretId: Specifies the value of a secret's Oracle Cloud Infrastructure Vault secret OCID. If you specify password you cannot specify secretId. See Overview of Vault for more information. The password you provide as a secret in the Oracle Cloud Infrastructure Vault must conform to the Autonomous Database password requirements. See About User Passwords on Autonomous Database for more information.
secretVersionNumber: Specifies the version number of the secret specified with the secretId. This parameter is optional. The default is to use the latest secret version. This parameter only applies when secretId is also specified.
accessType: One of: read-only, read/write, or admin. The default is read-only.
duration: Specifies the duration in hours, in the range of 1 hour to 24 hours. The default is 1 hour.
1. Use the API or the CLI and specify a value for the password to enable SAAS_ADMIN with a
password.
For example:
POST https://dbaas-api.svc.ad3.us-phoenix-1/20160918/autonomousDatabases/
ocid1.autonomousdatabase.oc1.phx.uniqueId/actions/configureSaasAdminUser
{ "isEnabled": true,
"password": password,
"accessType": "READ_ONLY",
"duration": 17
}
See configureSaasAdminUser for more information.
2. Verify that the SAAS_ADMIN user is enabled. For example:
POST https://dbaas-api.svc.ad3.us-phoenix-1/20160918/autonomousDatabases/ocid1.autonomousdatabase.oc1.phx.uniqueId/actions/getSaasAdminUserStatus
{ "isEnabled": true,
"accessType": "READ_ONLY",
"timeSaasAdminUserEnabled": "2023-11-23T01:59:07.032Z"
}
This response shows that the SAAS_ADMIN user is enabled and that access type is
READ_ONLY. The timestamp shows the time when SAAS_ADMIN was enabled (time is
in UTC).
See getSaasAdminUserStatus for more information.
To enable SAAS_ADMIN with a secretId with the secret stored in Oracle Cloud
Infrastructure Vault:
1. Use the API or the CLI and specify an OCID value for the secretId.
For example:
POST https://dbaas-api.svc.ad3.us-phoenix-1/20160918/
autonomousDatabases/ocid1.autonomousdatabase.oc1.phx.uniqueId/
actions/configureSaasAdminUser
{ "isEnabled": true,
"secretId": "ocid1.key.co1.ap-
mumbai-1.example..aaaaaaaauq5ok5nq3bf2vwetkpqsoa",
"accessType": "READ_ONLY",
"duration": 20
}
Specifying a secret version is optional. If you specify a secret version in the API call with
secretVersionNumber, the specified secret version is used. If you do not specify a secret
version the call uses the latest secret version.
See configureSaasAdminUser for more information.
2. Verify that SAAS_ADMIN user is enabled.
For example:
POST https://dbaas-api.svc.ad3.us-phoenix-1/20160918/autonomousDatabases/
ocid1.autonomousdatabase.oc1.phx.uniqueId/actions/getSaasAdminUserStatus
{ "isEnabled": true,
"accessType": "READ_ONLY",
"timeSaasAdminUserEnabled": "2023-11-23T01:59:07.032Z"
}
This response shows that the SAAS_ADMIN user is enabled and the access type is
READ_ONLY. The timestamp shows the time when the user was enabled (time is in UTC).
See getSaasAdminUserStatus for more information.
By default, access expires after one hour if the duration parameter is not set when SAAS_ADMIN is enabled. If the duration parameter is set when SAAS_ADMIN is enabled, access expires after the specified number of hours. As an alternative to letting the access expire based on either the default expiration time or the duration you specify, you can use configureSaasAdminUser to explicitly disable SAAS_ADMIN user access.
POST https://dbaas-api.svc.ad3.us-phoenix-1/20160918/autonomousDatabases/
ocid1.autonomousdatabase.oc1.phx.uniqueId/actions/configureSaasAdminUser
{
"isEnabled": false
}
POST https://dbaas-api.svc.ad3.us-phoenix-1/20160918/autonomousDatabases/
ocid1.autonomousdatabase.oc1.phx.uniqueId/actions/getSaasAdminUserStatus
{ "isEnabled": false
}
The presence of a value for auth_revoker means the access has been revoked, and shows the user who revoked access.
The auth_end column shows a planned end time. If the authorization expired at the end of the duration specified when the SAAS_ADMIN user was enabled, the planned time is the same as the actual time. If the planned and the actual times are different, SAAS_ADMIN authorization was revoked before the duration expired.
For example, if SAAS_ADMIN is enabled with a duration of 2 hours, and after 1 hour access is disabled by calling the configureSaasAdminUser API to disable the SAAS_ADMIN user, the auth_end planned and actual times will be different.
If this query shows no rows, then the SAAS_ADMIN user does not exist; it may have been created and dropped, or it was never created.
Topics
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'OBJ_STORE_CRED',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object
Storage service you are using.
See CREATE_CREDENTIAL Procedure for more information.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not
required if you enable resource principal credentials. See Use Resource Principal
to Access Oracle Cloud Infrastructure Resources for more information.
3. Create a credential to store the encryption key (the encryption key to be used for
encrypting data).
When you encrypt data using DBMS_CRYPTO encryption algorithms you store the
encryption key in a credential. The key is specified in the password field in a
credential you create with DBMS_CLOUD.CREATE_CREDENTIAL.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'ENC_CRED_NAME',
username => 'Any_username',
password => 'password'
);
END;
/
As an alternative you can create a credential to store the key in a vault. For
example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'ENC_CRED_NAME',
params => JSON_OBJECT ('username' value
'Any_username',
'region' value 'Region',
'secret_id' value
'Secret_id_value'));
END;
/
Note:
The username parameter you specify in the credential that stores the key
can be any string.
Use the format parameter with the encryption option. The encryption type specifies the DBMS_CRYPTO encryption algorithm to use to encrypt the table data, and the credential_name value is the credential that specifies the secret (encryption key).
For example:
BEGIN
DBMS_CLOUD.EXPORT_DATA (
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/encrypted.csv',
query => 'SELECT * FROM ADMIN.employees',
format => json_object(
'type' value 'csv',
'encryption' value json_object(
'type' value DBMS_CRYPTO.ENCRYPT_AES256 +
DBMS_CRYPTO.CHAIN_CBC + DBMS_CRYPTO.PAD_PKCS5,
'credential_name' value 'ENC_CRED_NAME'))
);
END;
/
This encrypts and exports the data from the EMPLOYEES table into a CSV file.
See DBMS_CRYPTO Algorithms for more information on encryption algorithms.
In this example, namespace-string is the Oracle Cloud Infrastructure object storage
namespace and bucketname is the bucket name. See Understanding Object Storage
Namespaces for more information.
See EXPORT_DATA Procedure and DBMS_CLOUD Package Format Options for
EXPORT_DATA for more information.
After you encrypt files with DBMS_CLOUD.EXPORT_DATA using DBMS_CRYPTO encryption algorithms, you have these options for using or importing the files you exported:
• You can use DBMS_CLOUD.COPY_DATA or DBMS_CLOUD.COPY_COLLECTION with the same
encryption algorithm options and the key to decrypt the files.
See Decrypt and Load Data Using DBMS_CRYPTO Algorithms for more information.
• You can query the data in an external table by supplying the same encryption algorithm
options and the key to decrypt the files, with any of the following procedures:
– DBMS_CLOUD.CREATE_EXTERNAL_TABLE
– DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
– DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
For DBMS_CLOUD.CREATE_HYBRID_PART_TABLE this option is only applicable to the
Object Storage files.
See Decrypt and Load Data Using DBMS_CRYPTO Algorithms for more information.
• On a system that is not an Autonomous Database you can use the DBMS_CRYPTO package
with the same algorithm options and the key to decrypt the files.
Note that the key is stored as a VARCHAR2 in the credential in Autonomous Database but
DBMS_CRYPTO uses RAW type for the key parameter.
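For instance, assuming the key was stored in the credential as a hex-encoded string (an assumption for this sketch), converting it for use with DBMS_CRYPTO on the non-Autonomous system might look like:
DECLARE
  -- Hypothetical hex-encoded 256-bit key as stored in the credential's password field
  l_key_hex VARCHAR2(64) := 'A1B2C3D4E5F60718293A4B5C6D7E8F90A1B2C3D4E5F60718293A4B5C6D7E8F90';
  l_key     RAW(32);
BEGIN
  -- DBMS_CRYPTO expects the key as RAW; convert the stored VARCHAR2 value
  l_key := HEXTORAW(l_key_hex);
  -- l_key can now be passed as the key parameter to DBMS_CRYPTO.DECRYPT
END;
/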
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'OBJ_STORE_CRED',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object
Storage service you are using.
See CREATE_CREDENTIAL Procedure for more information.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not
required if you enable resource principal credentials. See Use Resource Principal
to Access Oracle Cloud Infrastructure Resources for more information.
3. Create a user-defined callback function to encrypt data.
For example:
This creates the ENCRYPTION_FUNC encryption function. This function encrypts data using
a stream or block cipher with a user supplied key.
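The function body is not reproduced here. As a rough sketch only, assuming the callback receives the export data as a BLOB and returns the encrypted BLOB, and using a hardcoded AES-256 key purely for illustration:
CREATE OR REPLACE FUNCTION encryption_func (data IN BLOB)
RETURN BLOB
AS
  l_encrypted BLOB;
  -- Illustrative key only; generate and protect a real key properly
  l_key       RAW(32) := HEXTORAW('A1B2C3D4E5F60718293A4B5C6D7E8F90A1B2C3D4E5F60718293A4B5C6D7E8F90');
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_encrypted, TRUE);
  -- Block cipher: AES-256 in CBC mode with PKCS5 padding
  DBMS_CRYPTO.ENCRYPT(
    dst => l_encrypted,
    src => data,
    typ => DBMS_CRYPTO.ENCRYPT_AES256 + DBMS_CRYPTO.CHAIN_CBC + DBMS_CRYPTO.PAD_PKCS5,
    key => l_key);
  RETURN l_encrypted;
END encryption_func;
/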
Note:
You must create an encryption key to be used as a value in the KEY parameter.
See DBMS_CRYPTO Operational Notes for more information on generating the
encryption key.
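For instance, a 256-bit key can be generated with DBMS_CRYPTO.RANDOMBYTES; a minimal sketch:
DECLARE
  l_key RAW(32);
BEGIN
  -- 32 random bytes = 256-bit AES key
  l_key := DBMS_CRYPTO.RANDOMBYTES(32);
  DBMS_OUTPUT.PUT_LINE(RAWTOHEX(l_key));
END;
/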
4. Run DBMS_CLOUD.EXPORT_DATA with the format parameter, include the encryption option
and specify a user_defined_function.
For example:
BEGIN
DBMS_CLOUD.EXPORT_DATA (
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/encrypted.csv',
query => 'SELECT * FROM ADMIN.emp',
format => json_object(
'type' value 'csv',
'encryption' value
json_object('user_defined_function' value 'admin.encryption_func'))
);
END;
/
This encrypts the data from the specified query on the EMP table and exports the data as a CSV file on Cloud Object Storage. The format parameter with the encryption value specifies the user-defined encryption function to use to encrypt the data.
Note:
You must have EXECUTE privilege on the encryption function.
Note:
This option is only supported for Object Storage files less than 4 GB.
Topics
As a prerequisite you must have encrypted files and uploaded the files into Object
Storage. This example uses a CSV file and it is assumed that the file is encrypted
using DBMS_CRYPTO.ENCRYPT_AES256 + DBMS_CRYPTO.CHAIN_CBC +
DBMS_CRYPTO.PAD_PKCS5 algorithm and uploaded to your Cloud Object Storage.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'OBJ_STORE_CRED',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object Storage
service you are using.
See CREATE_CREDENTIAL Procedure for more information.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
3. Create a credential to store the key using DBMS_CLOUD.CREATE_CREDENTIAL. For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'ENC_CRED_NAME',
username => 'Any_username',
password => 'password'
);
END;
/
As an alternative you can create credentials to store the key in a vault. For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'ENC_CRED_NAME',
params => JSON_OBJECT ('username' value 'Any_username',
'region' value 'Region',
'secret_id' value
'Secret_id_value'));
END;
/
Note:
The username parameter you specify in the credential that stores the key can be
any string.
This creates the ENC_CRED_NAME credential which is a vault secret credential, where the
secret (decryption/encryption key) is stored as a secret in the Oracle Cloud Infrastructure
Vault.
See CREATE_CREDENTIAL Procedure for more information.
BEGIN
DBMS_CLOUD.COPY_DATA (
table_name => 'CSV_COPY_DATA',
credential_name => 'OCI$RESOURCE_PRINCIPAL',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/encrypted.csv',
format => json_object( 'type' value 'csv',
'encryption' value json_object('type' value
DBMS_CRYPTO.ENCRYPT_AES256 + DBMS_CRYPTO.CHAIN_CBC +
DBMS_CRYPTO.PAD_PKCS5,'credential_name' value 'ENC_CRED_NAME'))
);
END;
/
This decrypts the ENCRYPTED.CSV file in Object Storage. The data is then loaded into the CSV_COPY_DATA table. The format parameter encryption option value specifies a DBMS_CRYPTO encryption algorithm to be used to decrypt the data.
See DBMS_CRYPTO Algorithms for more information on encryption algorithms.
In this example, namespace-string is the Oracle Cloud Infrastructure object
storage namespace and bucketname is the bucket name. See Understanding
Object Storage Namespaces for more information.
For detailed information about the parameters, see COPY_DATA Procedure.
For detailed information about the available format parameters you can use with DBMS_CLOUD.COPY_DATA, see DBMS_CLOUD Package Format Options.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'OBJ_STORE_CRED',
username => 'user1@example.com',
password => 'password'
);
END;
/
The values you provide for username and password depend on the Cloud Object Storage
service you are using.
See CREATE_CREDENTIAL Procedure for more information.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
3. Create a user-defined decryption callback function.
For example:
(The body of the example decryption function is not included here.)
This creates the DECRYPTION_FUNC decryption function. This function decrypts data
using a stream or block cipher with a user supplied key. The user supplied key in
the example is stored in Oracle Cloud Infrastructure Vault and is retrieved
dynamically by making a REST call to Oracle Cloud Infrastructure Vault service.
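As a rough sketch only of what such a callback could look like, with the key hardcoded for illustration rather than fetched from Oracle Cloud Infrastructure Vault as in the documented example, and assuming the callback receives and returns BLOBs:
CREATE OR REPLACE FUNCTION decryption_func (data IN BLOB)
RETURN BLOB
AS
  l_decrypted BLOB;
  -- Illustrative key only; the documented example retrieves it from OCI Vault
  l_key       RAW(32) := HEXTORAW('A1B2C3D4E5F60718293A4B5C6D7E8F90A1B2C3D4E5F60718293A4B5C6D7E8F90');
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_decrypted, TRUE);
  -- AES-256 in CBC mode with PKCS5 padding, matching the encryption settings
  DBMS_CRYPTO.DECRYPT(
    dst => l_decrypted,
    src => data,
    typ => DBMS_CRYPTO.ENCRYPT_AES256 + DBMS_CRYPTO.CHAIN_CBC + DBMS_CRYPTO.PAD_PKCS5,
    key => l_key);
  RETURN l_decrypted;
END decryption_func;
/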
4. Run DBMS_CLOUD.COPY_DATA with the format option encryption, specifying the user-defined function you created to decrypt the data.
BEGIN
DBMS_CLOUD.COPY_DATA (
table_name => 'CSV_COPY_DATA',
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/encrypted.csv',
format => json_object(
'type' value 'csv',
'encryption' value
json_object('user_defined_function' value 'admin.decryption_func'))
);
end;
/
This decrypts the ENCRYPTED.CSV file in Object Storage. The data is then loaded into the CSV_COPY_DATA table. The format parameter encryption option value specifies a user-defined function name to use to decrypt the data.
Note:
You must have the EXECUTE privilege on the user-defined function.
BEGIN
DBMS_STATS.GATHER_SCHEMA_STATS('SH', options=>'GATHER AUTO');
END;
/
This example gathers statistics for all tables that have stale statistics in the SH schema.
You can enable optimizer hints in your SQL statements by setting OPTIMIZER_IGNORE_HINTS to FALSE at the session or system level using ALTER SESSION or ALTER SYSTEM. For example, the following command enables hints in your session:
ALTER SESSION
SET OPTIMIZER_IGNORE_HINTS=FALSE;
You can also enable PARALLEL hints in your SQL statements by setting
OPTIMIZER_IGNORE_PARALLEL_HINTS to FALSE at the session or system level
using ALTER SESSION or ALTER SYSTEM. For example, the following command enables
PARALLEL hints in your session:
ALTER SESSION
SET OPTIMIZER_IGNORE_PARALLEL_HINTS=FALSE;
Note:
The automatic statistics gathering maintenance window is different from the maintenance window on the Oracle Cloud Infrastructure console. The Oracle Cloud Infrastructure maintenance window shows system patching information.
For more information on automatic statistics gathering maintenance window times and
automatic optimizer statistics collection, see Database Administrator’s Guide.
Managing Optimizer Hints with Transaction Processing and JSON Database Workloads
Autonomous Database with Transaction Processing and JSON Database workloads honors
optimizer hints and PARALLEL hints in SQL statements by default. You can disable optimizer
hints by setting the parameter OPTIMIZER_IGNORE_HINTS to TRUE at the session or system
level using ALTER SESSION or ALTER SYSTEM. For example, the following command disables
hints in your session:
ALTER SESSION
SET OPTIMIZER_IGNORE_HINTS=TRUE;
You can also disable PARALLEL hints in your SQL statements by setting
OPTIMIZER_IGNORE_PARALLEL_HINTS to TRUE at the session or system level using ALTER
SESSION or ALTER SYSTEM.
ALTER SESSION
SET OPTIMIZER_IGNORE_PARALLEL_HINTS=TRUE;
1. Enable automatic indexing:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
This enables automatic indexing in a database and creates any new auto indexes as
visible indexes, so that they can be used in SQL statements.
2. Use the DBMS_AUTO_INDEX package to report on the automatic task and to set automatic
indexing preferences.
Note:
When automatic indexing is enabled, index compression for auto indexes is
enabled by default.
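As a brief sketch of the reporting and preference APIs mentioned in step 2 (the retention value below is illustrative):
-- Retrieve a report of recent automatic indexing activity
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM DUAL;

-- Example preference: retain automatic indexing logs for 90 days
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '90');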
To disable automatic indexing, set AUTO_INDEX_MODE to OFF:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');
When you use SODA with Autonomous Database the following restrictions apply:
• Automatic indexing is not supported for SQL and PL/SQL code that uses the SQL/
JSON function json_exists. See SQL/JSON Condition JSON_EXISTS for more
information.
• Automatic indexing is not supported for SODA query-by-example (QBE).
See Managing Auto Indexes for more information.
Note:
Automatic partitioning does not interfere with existing partitioning strategies
and is complementary to manual partitioning in Autonomous Database.
Manually partitioned tables are excluded as candidates for automatic
partitioning.
Note:
Automatic partitioning requires explicit calls to the DBMS_AUTO_PARTITION PL/SQL
APIs to recommend and apply automatic partitioning to an Autonomous Database.
When invoked, automatic partitioning identifies candidate tables for automatic partitioning,
evaluates partition schemes, and implements a partitioning strategy.
When you invoke automatic partitioning it performs the following tasks:
1. Identifies candidate tables for automatic partitioning by analyzing the workload for
selected candidate tables.
By default, automatic partitioning uses the workload information collected in an
Autonomous Database for analysis. Depending on the size of the workload, a
sample of queries might be considered.
2. Evaluates partition schemes based on workload analysis, quantification, and
verification of the performance benefits:
a. Candidate empty partition schemes with synthesized statistics are created
internally and analyzed for performance.
b. The candidate scheme with the highest estimated IO reduction is chosen as
the optimal partitioning strategy and is internally implemented to test and verify
performance.
c. If a candidate partition scheme does not improve performance beyond
specified performance and regression criteria, automatic partitioning is not
recommended.
3. Implements the optimal partitioning strategy, if configured to do so, for the tables
analyzed by the automatic partitioning procedures.
To have automatic partitioning implement recommendations automatically, set AUTO_PARTITION_MODE to IMPLEMENT:
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE','IMPLEMENT');
To generate recommendations without implementing them, set the mode to REPORT ONLY:
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE','REPORT ONLY');
To disable automatic partitioning, set the mode to OFF:
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE','OFF');
Note:
This mode does not disable existing automatically partitioned tables.
Note:
When automatic partitioning is invoked, all schemas and tables in user-managed
schemas are considered for automatic partitioning if both the inclusion and
exclusion lists are empty.
• Assuming the inclusion list and the exclusion list are empty, add the HR schema and the
SH.SALES table to the exclusion list, preventing only those objects from automatic
partitioning analysis.
BEGIN
DBMS_AUTO_PARTITION.CONFIGURE(
PARAMETER_NAME => 'AUTO_PARTITION_SCHEMA',
PARAMETER_VALUE => 'HR',
ALLOW => FALSE);
DBMS_AUTO_PARTITION.CONFIGURE(
PARAMETER_NAME => 'AUTO_PARTITION_TABLE',
PARAMETER_VALUE => 'SH.SALES',
ALLOW => FALSE);
END;
/
• After the previous example runs, use the following to remove the HR schema from the
exclusion list.
BEGIN
DBMS_AUTO_PARTITION.CONFIGURE(
PARAMETER_NAME => 'AUTO_PARTITION_SCHEMA',
PARAMETER_VALUE => 'HR',
ALLOW => NULL);
END;
/
• Use the following command to remove all schemas from the exclusion list.
BEGIN
DBMS_AUTO_PARTITION.CONFIGURE(
PARAMETER_NAME => 'AUTO_PARTITION_SCHEMA',
PARAMETER_VALUE => NULL,
ALLOW => TRUE);
END;
/
• Assuming the inclusion and exclusion lists are empty, the following example adds the HR
schema to the inclusion list. As soon as the inclusion list is no longer empty, only
schemas in the inclusion list are considered.
With this example, only the HR schema is a candidate for automatic partitioning.
BEGIN
DBMS_AUTO_PARTITION.CONFIGURE(
PARAMETER_NAME => 'AUTO_PARTITION_SCHEMA',
PARAMETER_VALUE => 'HR',
ALLOW => TRUE);
END;
/
BEGIN
DBMS_AUTO_PARTITION.CONFIGURE(
PARAMETER_NAME => 'AUTO_PARTITION_REPORT_RETENTION',
PARAMETER_VALUE => '365');
END;
/
create the auxiliary table and verify performance. You will also temporarily need additional space of 1 to 1.5 times the size of your largest candidate table.
3. Apply the recommendation.
Any recommendation can be implemented with the APPLY_RECOMMENDATION procedure in
the database where the recommendation analysis occurred. Alternatively, any
recommendation can be extracted from the database used for analysis and applied to
any database, such as a production system. The script needed for manual modification is
stored in column MODIFY_TABLE_DDL in the DBA_AUTO_PARTITION_RECOMMENDATION view.
Oracle recommends applying automatic partitioning to your database at off-peak times. While your tables are modified into automatically partitioned tables, the conversion adds resource requirements to your system, such as additional CPU and IO. Automatic partitioning requires as much as 1.5 times the size of the table being modified as additional free space, depending on concurrent ongoing DML operations on those tables.
DECLARE
  report CLOB := NULL;
BEGIN
  report := DBMS_AUTO_PARTITION.REPORT_ACTIVITY();
END;
/
Generate a report, in HTML format, of automatic partitioning operations for MAY 2021
This example generates a report containing basic information about the automatic partitioning
operations for the month of MAY 2021. The report is generated in the HTML format, and it
includes only a summary of automatic partitioning operations.
DECLARE
  report CLOB := NULL;
BEGIN
  report := DBMS_AUTO_PARTITION.REPORT_ACTIVITY(
    ACTIVITY_START => TO_TIMESTAMP('2021-05-01', 'YYYY-MM-DD'),
    ACTIVITY_END   => TO_TIMESTAMP('2021-06-01', 'YYYY-MM-DD'),
    TYPE           => 'HTML',
    SECTION        => 'SUMMARY',
    LEVEL          => 'BASIC' );
END;
/
DECLARE
  report CLOB := NULL;
BEGIN
  report := DBMS_AUTO_PARTITION.REPORT_LAST_ACTIVITY();
END;
/
BEGIN
DBMS_AUTO_PARTITION.CONFIGURE(
PARAMETER_NAME => 'AUTO_PARTITION_MODE',
PARAMETER_VALUE => 'REPORT ONLY');
END;
/
SELECT DBMS_AUTO_PARTITION.VALIDATE_CANDIDATE_TABLE(
TABLE_OWNER => 'TPCH',
TABLE_NAME => 'LINEITEM')
FROM DUAL;
If the table is a valid candidate for automatic partitioning, the function returns VALID. Otherwise, the violation criteria are shown.
See VALIDATE_CANDIDATE_TABLE Function for a list of criteria for eligible
candidate tables.
Additionally, query the same view to get the performance analysis report generated for
the workload after the table was partitioned according to the recommendation.
SELECT REPORT
FROM DBA_AUTO_PARTITION_RECOMMENDATIONS
WHERE RECOMMENDATION_ID = :RECOMMENDATION_ID;
5. After manual validation of the recommendation, apply the recommendation. If you are
applying the recommendation in the database where the recommendation analysis has
taken place, apply the recommendation by executing the APPLY_RECOMMENDATION
procedure.
BEGIN
DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION(
RECOMMENDATION_ID => :RECOMMENDATION_ID);
END;
/
If you want to apply the recommendation to a different database, such as your production
environment, extract the modification DDL. Then, run the extracted modification DDL in
your target database. The query to extract the modification DDL is as follows:
SELECT MODIFY_TABLE_DDL
FROM DBA_AUTO_PARTITION_RECOMMENDATIONS
WHERE RECOMMENDATION_ID = :RECOMMENDATION_ID;
BEGIN
  -- DBMS_AUTO_PARTITION RECOMMENDATION_ID C3F7A59E085C2F25E05333885A0A55EA
  -- FOR TABLE "TPCH"."LINEITEM"
  -- GENERATED AT 06/04/2021 20:52:29
  DBMS_AUTO_PARTITION.BEGIN_APPLY(EXPECTED_NUMBER_OF_PARTITIONS => 10);
  EXECUTE IMMEDIATE
    'ALTER TABLE "TPCH"."LINEITEM"
       MODIFY PARTITION BY LIST(SYS_OP_INTERVAL_HIGH_BOUND
         ("L_SHIPDATE", INTERVAL ''10'' MONTH, TIMESTAMP ''1992-01-01 00:00:00''))
       AUTOMATIC /* SCORE=23533.11; */
       (PARTITION P_NULL VALUES(NULL))
       AUTO ONLINE PARALLEL';
  DBMS_AUTO_PARTITION.END_APPLY;
EXCEPTION WHEN OTHERS THEN
  DBMS_AUTO_PARTITION.END_APPLY;
  RAISE;
END;
6. To verify that the table was automatically partitioned, query the catalog views.
Use this query to identify when automatic partitioning was applied to a given table.
BEGIN
DBMS_AUTO_PARTITION.CONFIGURE(
4. Use this query to drill down in the report for a specific table that was analyzed in the run, the TPCH.LINEITEM table in this example.
SELECT REPORT
FROM DBA_AUTO_PARTITION_RECOMMENDATIONS
WHERE RECOMMENDATION_ID = :RECOMMENDATION_ID
AND TABLE_OWNER = 'TPCH'
AND TABLE_NAME = 'LINEITEM';
BEGIN
DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION(
RECOMMENDATION_ID => :RECOMMENDATION_ID);
END;
/
Alternatively, apply recommendations for a specific table that was analyzed, the TPCH.LINEITEM table in this example.
BEGIN
DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION(
RECOMMENDATION_ID => :RECOMMENDATION_ID,
TABLE_OWNER => 'TPCH',
TABLE_NAME => 'LINEITEM');
END;
/
Note:
Recommendations of automatic partitioning generated by the
RECOMMEND_PARTITION_METHOD function have a time limit, specified by the
TIME_LIMIT parameter, with a default of 1 day. If you are analyzing a large
system with many candidate tables, a single invocation may not generate a
recommendation for all tables that can be partitioned. You can safely invoke
the recommendation for auto partitioning repeatedly to generate
recommendations for additional tables. When the function is invoked and
zero rows are in DBA_AUTO_PARTITION_RECOMMENDATIONS for the
RECOMMENDATION_ID, then the function did not find any additional candidate
tables for automatic partitioning.
BEGIN
DBMS_AUTO_PARTITION.CONFIGURE(
PARAMETER_NAME => 'AUTO_PARTITION_MODE',
PARAMETER_VALUE => 'IMPLEMENT');
END;
/
4. Use the REPORT_LAST_ACTIVITY function to retrieve the report on the actions taken during
the last run.
Column: Description
PARAMETER_NAME: Name of the configuration parameter
PARAMETER_VALUE: Value of the configuration parameter
LAST_MODIFIED: Time, in UTC, at which the parameter value was last modified
MODIFIED_BY: User who last modified the parameter value

Column: Description
TABLE_OWNER: Owner of the table
TABLE_NAME: Name of the table
Column: Description
PARTITION_METHOD: Recommended partition method. See CONFIGURE Procedure.
PARTITION_KEY: Recommended partition key. NULL means that analysis is complete and the recommendation is not to partition the table.
GENERATE_TIMESTAMP: Time, in UTC, when this recommendation was generated.
RECOMMENDATION_ID: ID used with DBMS_AUTO_PARTITION APIs to get additional information about this recommendation.
RECOMMENDATION_SEQ: When a recommendation ID has recommendations for multiple tables, provides the order in which the recommendations were generated. Performance reports are generated assuming that earlier recommendations have been applied. For example, the report for RECOMMENDATION_SEQ = 2 assumes recommendations have been applied for both RECOMMENDATION_SEQ = 1 and RECOMMENDATION_SEQ = 2.
MODIFY_TABLE_DDL: DDL that would be, or was, used to apply the recommendation.
APPLY_TIMESTAMP_START: Time, in UTC, when application of this recommendation was started. NULL if the recommendation was not applied.
APPLY_TIMESTAMP_END: Time, in UTC, when application of this recommendation finished. NULL if the recommendation was not applied or if application has not finished.
REPORT: SQL Performance Analyzer report from SQL execution on the database after the recommendation is applied.
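For example, a quick sketch of listing generated recommendations from this view:
SELECT recommendation_id, recommendation_seq, table_owner, table_name,
       partition_method, generate_timestamp
  FROM dba_auto_partition_recommendations
 ORDER BY generate_timestamp DESC;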
• When using RDF graphs, querying external data sources using SPARQL federated
queries is not supported.
• Triple-level security in RDF graphs using Oracle Label Security is not supported. See Fine-Grained Access Control for RDF Data for more information.
Database roles must be set to use Graph Studio on an Autonomous Database instance. See
Required Roles to Access Tools from Database Actions for more information.
• About Oracle Graph Studio on Autonomous Database
• Building Applications Using Graph APIs with Autonomous Database
If you are developing an application using graphs, you can also access Property Graph
APIs and RDF Graph APIs with Autonomous Database without Graph Studio.
When you create an Analytic View, you identify a fact table that contains the data to
inspect. The Generate Hierarchies and Measures button looks at the contents of that
table, identifies any hierarchies in the fact table, and searches for other tables that
may contain related hierarchies.
While creating an Analytic View, you can enable or disable the following advanced
options:
• Autonomous Aggregate Cache, which uses the dimensional metadata of the
Analytic View to manage a cache and that improves query response time.
• Analytic View Transparency Views, which presents Analytic Views as regular database
views and enables you to use your analytic tools of choice while gaining the benefits of
Analytic Views.
• Analytic View Base Table Query Transformation, which enables you to use your existing
reports and tools without requiring changes to them.
The Search for Dimension Tables check box, when selected, enables you to search for dimension tables while generating hierarchies and measures.
After the hierarchies and measures are generated, they are displayed under their
respective tabs. Review the hierarchies and measures created for you.
Specify the Name and Fact Table, and select Advanced Options in the General tab of the Create Analytic View pane. Click Create to generate an Analytic View.
You can view all the fact tables and views associated with the analytic view.
In the filter field, you can either manually look for the source or start typing to search for the fact table or view from the list of available fact tables and views. After you type the full name of the source, the tool automatically matches the fact table or view.
Select Generate and Add Hierarchy from Source to generate the analysis and hierarchies associated with the source data you select.
Select Find and Add Joins to link all the data sources with the fact table. You can add
multiple join entries for a single hierarchy.
Click OK to select the source.
The Generating Hierarchies and Measures dialog box displays the progress of analyzing the
dimension tables and creating the hierarchies. When the process completes, click Close.
Note:
When you add a hierarchy from the data source, you see the new hierarchy
in the list of hierarchies in the Hierarchies tab. You can navigate between the
Data Sources tab, the Hierarchies tab, the Measures tab, the Calculations
tab. You can add a hierarchy from a source that is not connected by
navigating back to the Data Sources tab.
Select Remove Hierarchy Source to remove the hierarchies you created from the data sources. You cannot use this option to remove hierarchies generated from the fact table you selected.
Expand Joins to view the Hierarchy Source, the Hierarchy Column, and the Fact column mapped with the Analytic View. Joins are visible only when the hierarchy table differs from the fact table. You can add multiple join entries for a single hierarchy.
Expand Sources to view the fact table associated with the Analytic View. The data
model expands to include the data from the source that you added.
Pointing to an item displays the name, application, type, path and the schema of the
table. Click the Actions (three vertical dots) icon at the right of the item to display a
menu to expand or collapse the view of the table.
An expanded item displays the columns of the table. Pointing to a column displays the
name, application, type, path, and schema of the column.
The lines that connect the dimension tables to the fact table indicate the join paths between
them. Pointing to a line displays information about the join paths of the links between the
tables. If the line connects a table that is collapsed, then the line is dotted. If the line connects
two expanded tables, then the line is solid and connects the column in the dimension table to
the column in the fact table.
To remove a hierarchy, select the hierarchy you want to remove from the list and
click Remove Hierarchy.
Select Move Up or Move Down to position the order of the Hierarchy in the resulting
view.
Click Switch Hierarchy to Measure to change the hierarchy you select to a measure
in the Measures list.
You can also Add Hierarchy and Add Hierarchy From Table by right-clicking the
Hierarchy tab.
If you click on a hierarchy name, a dialog box displays the Hierarchy Name and
Source.
To change the source, select a different source from the drop-down list.
Select Add Level to add a level to the hierarchy. Click Remove Level to remove the
selected level from the hierarchy.
To view the data in the fact table and statistics about the data, click the Preview Data button.
In the Preview and Statistics pane, the Preview tab shows the columns of the table and the
data in the columns. The Statistics tab shows the size of the table and the number of rows
and columns.
If you click a particular level in the Hierarchy tab, a dialog box displays its Level
Name, Level Key, Alternate Level Key, Member Name, Member Caption, Member
Description, Source, and Sort By drop-down. To change any of the field values, enter the
value in the appropriate field.
Note:
You can enter multiple values for the level keys, Member Name, Member Caption,
Member Description, and Sort By.
Member Captions and Member Descriptions generally represent detailed labels for objects.
These are typically end-user-friendly names. For example, you can caption a hierarchy
representing geography areas named GEOGRAPHY_HIERARCHY as "Geography" and
specify its description as "Geographic areas such as cities, states, and countries."
To see the measures for the Analytic View, click Measures tab. To immediately create the
Analytic View, click Create. To cancel the creation, click Cancel.
Alternatively, to add a measure from the data source, right-click the Measures tab.
This displays a list of columns that can be used as measures. Select one measure
from the list.
You can exclude a column from the measures by right-clicking the Measures tab and
selecting Remove Measure.
Click Switch Measure to Hierarchy to change the measure you select to a hierarchy in
the Hierarchies list.
You must specify a measure as the default measure for the analytic view; otherwise, the first
measure in the definition is the default. Select Default Measure from the drop-down.
To add a measure, right-click the Measures tab and select Add Measure. To remove a
measure, select the particular measure you want to remove, right-click on it and select
Remove Measure.
You can select a different column for a measure from the Column drop-down list. You can
select a different operator from the Expression drop-down list.
In creating an analytic view, you must specify one or more hierarchies and a fact table that
has at least one measure column and a column to join to each of the dimension tables
outside of the fact table.
Note:
You can create the measures without increasing the size of the database since the
calculated measures do not store the data. However, they may slow performance.
You need to decide which measures to calculate on demand.
The Data Analysis tool provides easy-to-use templates for creating calculated measures.
Once you create a calculated measure, it appears in the list of measures of the Analytic
View. You can create a calculated measure at any time, and it is then available for querying
in SQL.
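Because the Analytic View and its calculated measures are queryable in SQL, a minimal
sketch of such a query follows. The names SALES_AV and TIME_HIER are hypothetical;
substitute the Analytic View and hierarchy names generated by the tool:

SELECT time_hier.member_name AS time_period,
       sales
FROM   sales_av HIERARCHIES (time_hier)
WHERE  time_hier.level_name = 'YEAR'
ORDER  BY time_hier.hier_order;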
Click Add Calculated Measure to add calculations to the measures. You can view the
new calculation, with a system-generated name, in the Calculations tab.
Click the newly created calculated measure.
In the Measure Name field, enter the name of the calculated measure.
You can select preferred category of calculation from a list of options such as Prior and
Future Period, Cumulative Aggregates, Period To Date, Parallel Period, Moving
Aggregates, Share, Qualified Data Reference, and Ranking using the Calculation
Category drop-down.
Your choice of category of calculation dynamically changes the Calculation Template.
For more details on how to use Calculation templates, see Using Calculation
Templates.
Select the Measure and Hierarchy on which you want to base the calculated
measures.
Select Offset value by clicking the up or the down arrow. The number specifies the
number of members to move either forward or backward from the current member.
The ordering of members within a level is dependent on the definition of the attribute
dimension used by the hierarchy. The default value is 0, which represents POSITION
FROM BEGINNING.
The Expression field lists the expressions which the calculated measure uses.
On the creation of the Analytic view, the calculated measure appears in the navigation
tree in the Calculated Measures folder.
Click Create. A confirmation dialog box appears. Select Yes to proceed with the
creation of the Analytic View.
After the Analytic View is created, a success message informs you of its creation.
On editing the Analytic View you create, you can view the calculated measure in the
navigation tree in the Calculations folder.
Click the Tour icon for a guided tour of the worksheet highlighting salient features and
providing information if you are new to the interface.
Click the help icon to open the contextual or online help for the page you are viewing.
Click Show DDL to generate Data Definition Language statements for the analytic view.
Use Select AI to Generate SQL from Natural Language Prompts
Usage Guidelines
These guidelines promote effective and proper use of natural language prompts for SQL
generation and an enhanced user experience.
Intended Use
This feature is intended for the generation and running of SQL queries resulting from
user-provided natural language prompts. It automates what a user could do manually
based on their schema metadata in combination with a large language model (LLM) of
their choice.
While any prompt can be provided, including those that do not relate to the production
of SQL query results, Select AI focuses on SQL query generation. Select AI enables
submitting general requests with the chat action.
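For example, a general request that is not tied to your schema might look like the
following (the prompt itself is hypothetical):

SQL> select ai chat what is a relational database;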
WARNING:
Large language models (LLMs) have been trained on a broad set of text
documentation and content, typically from the Internet. As a result, LLMs
may have incorporated patterns from invalid or malicious content, including
SQL injection. Thus, while LLMs are adept at generating useful and relevant
content, they also can generate incorrect and false information including SQL
queries that produce inaccurate results and/or compromise security of your
data.
The queries generated on your behalf by the user-specified LLM provider will
be run in your database. Your use of this feature is solely at your own risk,
and, notwithstanding any other terms and conditions related to the services
provided by Oracle, constitutes your acceptance of that risk and express
exclusion of Oracle’s responsibility or liability for any damages resulting from
that use.
Note:
Users must have an account with the AI provider and provide their credentials
through DBMS_CLOUD_AI objects that the Autonomous Database uses.
In addition to specifying tables and views in the AI profile, you can also specify tables
mapped with external tables, including those described in Query External Data with Data
Catalog. This enables you to query data not just inside the database, but also data stored in a
data lake's object store.
Note:
Network ACL is not applicable for OCI Generative AI.
Configure DBMS_CLOUD_AI
To configure DBMS_CLOUD_AI:
1. Grant the EXECUTE privilege on the DBMS_CLOUD_AI package to the user who wants
to use Select AI.
By default, only the ADMIN user is granted the EXECUTE privilege. The ADMIN user
can grant the EXECUTE privilege to other users.
2. Grant network ACL access to the user who wants to use Select AI and for the AI
provider endpoint.
The ADMIN user can grant network ACL access. See APPEND_HOST_ACE
Procedure for more information.
3. Create a credential to enable access to your AI provider.
See CREATE_CREDENTIAL Procedure for more information.
The following example grants the EXECUTE privilege to ADB_USER:
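GRANT EXECUTE ON DBMS_CLOUD_AI TO ADB_USER;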
The following example grants ADB_USER the privilege to use the api.openai.com
endpoint.
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'api.openai.com',
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'ADB_USER',
principal_type => xs_acl.ptype_db)
);
END;
/
Parameter Description
host The host, which can be the name or the IP address of the host. You can use
a wildcard to specify a domain or an IP subnet. The host or domain name is
not case sensitive.
For OpenAI, use api.openai.com.
For Cohere, use api.cohere.ai.
For Azure OpenAI Service, use <azure_resource_name>.openai.azure.com.
See Profile Attributes to learn more about azure_resource_name.
ace The access control entries (ACE). The XS$ACE_TYPE type is provided to
construct each ACE entry for the ACL. For more details, see Creating ACLs
and ACEs.
DBMS_CLOUD.CREATE_CREDENTIAL Parameters
Parameter Description
credential_name The name of the credential to be stored. The credential_name parameter must
conform to Oracle object naming conventions, which do not allow spaces or
hyphens.
username The username and password arguments together specify your AI provider
credentials. The username is a user-specified user name.
password The username and password arguments together specify your AI provider
credentials. The password is your AI provider secret API key, and depends on the provider:
• OpenAI: See Use OpenAI to get your API keys.
• Cohere: See Use Cohere to get your API keys.
• Azure OpenAI Service: See Use Azure OpenAI Service to get your API keys
and how to configure the service.
Note:
If you are using the Azure OpenAI Service principal to authenticate, you can
skip the DBMS_CLOUD.CREATE_CREDENTIAL procedure. See Examples of
Using Select AI for an example of authenticating using the Azure OpenAI
Service principal.
• Use OpenAI
To enable OpenAI to generate SQL from natural language prompts, obtain API keys from
your OpenAI paid account.
• Use Cohere
To enable Cohere to generate SQL from natural language prompts, obtain API keys from
your Cohere paid account.
• Use Azure OpenAI Service
To enable Azure OpenAI Service to generate SQL from natural language prompts,
configure and provide access to the AI provider.
Use OpenAI
To enable OpenAI to generate SQL from natural language prompts, obtain API keys from
your OpenAI paid account.
You can find your secret API key in your User settings.
Use Cohere
To enable Cohere to generate SQL from natural language prompts, obtain API keys
from your Cohere paid account.
Click Dashboard, and click API Keys on the left navigation. Copy the default API key
or create another key. See API-Keys for more information.
Note:
You must run DBMS_CLOUD_AI.SET_PROFILE in each new database session
(connection) before you use SELECT AI.
The following example with the OpenAI provider creates an AI profile called OPENAI and sets
the OPENAI profile for the current user session.
-- Create AI profile
--
SQL> BEGIN
DBMS_CLOUD_AI.create_profile(
'OPENAI',
'{"provider": "openai",
"credential_name": "OPENAI_CRED",
"object_list": [{"owner": "SH", "name": "customers"},
{"owner": "SH", "name": "sales"},
{"owner": "SH", "name": "products"},
{"owner": "SH", "name": "countries"}]
}');
END;
/
--
-- Enable AI profile in current session
--
SQL> EXEC DBMS_CLOUD_AI.set_profile('OPENAI');
Note:
You cannot run PL/SQL statements, DDL statements, or DML statements using the
AI keyword.
Syntax
The syntax for running an AI prompt is:
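A sketch of the general form, based on the actions listed below (see the SELECT AI
documentation for the authoritative syntax):

SELECT AI [runsql | showsql | narrate | chat | explainsql] <natural_language_prompt>;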
Parameters
The following are the values available for the action parameter:
Parameter Description
runsql Runs the SQL generated from a natural language prompt. This is the
default action, and specifying it is optional.
showsql Displays the SQL statement for a natural language prompt.
narrate Explains the output of the prompt in natural language. This option
sends the SQL result to the AI provider to produce a natural language
summary.
chat Generates a response directly from the LLM based on the prompt. If
conversation in the DBMS_CLOUD_AI.CREATE_PROFILE function is
set to true, this option includes content from prior interactions or
prompts, potentially including schema metadata.
explainsql Explains the SQL generated from the prompt in natural language.
This option sends the generated SQL to the AI provider to produce a
natural language explanation.
Usage Notes
• Select AI is not supported in Database Actions or APEX Service. In those
environments, you can use only the DBMS_CLOUD_AI.GENERATE function.
• The AI keyword is supported only in a SELECT statement.
• You cannot run PL/SQL statements, DDL statements, or DML statements using
the AI keyword.
• The sequence is SELECT followed by AI. These keywords are not case-sensitive.
After a DBMS_CLOUD_AI.SET_PROFILE is configured, the text after SELECT AI is a
natural language prompt. If an AI profile is not set, SELECT AI reports an error.
• Special character usage rules apply according to Oracle guidelines. For example,
use single quotes twice if you are using an apostrophe in a sentence (see the
example after this list).
• LLMs are subject to hallucinations and results are not always correct:
– It is possible that SELECT AI may not be able to run the generated SQL for a
specific natural language prompt.
– It is possible that SELECT AI may not be able to generate SQL for a specific natural
language prompt.
In such a scenario, SELECT AI responds with information to assist you in generating valid
SQL.
• Use the chat action, with SELECT AI chat, to learn more about SQL constructs. For
better results with the chat action, use database views or tables with contextual column
names or consider adding column comments explaining values stored in the columns.
• To access DBA or USER views, see DBMS_CLOUD_AI Views.
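As an example of the quoting rule above, double any single quote inside the prompt
(the prompt itself is hypothetical):

SQL> select ai how many customers have the last name O''Brien;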
CUSTOMER_COUNT
--------------
55500
RESPONSE
----------------------------------------------------
SELECT COUNT(*) AS total_customers
FROM SH.CUSTOMERS
RESPONSE
------------------------------------------------------
There are a total of 55,500 customers in the database.
RESPONSE
--------------------------------------------------------------------------------
It is impossible to determine the exact number of customers that exist as it
constantly changes due to various factors such as population growth, new
businesses, and customer turnover. Additionally, the term "customer" can refer
to individuals, businesses, or organizations, making it difficult to provide a
specific number.
RESPONSE
--------------------------------------------------------------------------------
SELECT COUNT(*) AS customer_count
FROM SH.CUSTOMERS AS c
WHERE c.CUST_STATE_PROVINCE = 'San Francisco' AND
c.CUST_MARITAL_STATUS = 'Married';
Explanation:
- We use the 'SH' table alias for the 'CUSTOMERS' table for better
readability.
- The query uses the 'COUNT(*)' function to count the number of rows
that match the given conditions.
- The 'WHERE' clause is used to filter the results:
- 'c.CUST_STATE_PROVINCE = 'San Francisco'' filters customers who
have 'San Francisco' as their state or province.
- 'c.CUST_MARITAL_STATUS = 'Married'' filters customers who have
'Married' as their marital status.
The result of this query will give you the count of customers in San
Francisco who are married, using the column alias 'customer_count' for
the result.
Remember to adjust the table and column names based on your actual
schema if they differ from the example.
Feel free to ask if you have more questions related to SQL or database
in general.
Note:
Only an ADMIN user can grant EXECUTE privileges and run the network ACL
procedure.
--
-- Create Credential for AI provider
--
SQL> EXEC DBMS_CLOUD.CREATE_CREDENTIAL('OPENAI_CRED', 'OPENAI', '<your api
token>');
--
-- Create AI profile
--
SQL>
BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
'OPENAI',
'{"provider":
"openai",
"credential_name":
"OPENAI_CRED",
"object_list": [{"owner": "SH", "name": "customers"},
{"owner": "SH", "name": "countries"},
{"owner": "SH", "name":
"supplementary_demographics"},
{"owner": "SH", "name": "profits"},
{"owner": "SH", "name": "promotions"},
{"owner": "SH", "name": "products"}],
"conversation": "true"
}');
END;
/
--
-- Enable AI profile in current session
--
SQL> EXEC DBMS_CLOUD_AI.SET_PROFILE('OPENAI');
--
-- Use AI
--
SQL> select ai how many customers exist;
CUSTOMER_COUNT
--------------
55500
MARRIED_CUSTOMERS
-----------------
18
SQL> select ai showsql how many customers in San Francisco are married;
RESPONSE
--------------------------------------------------------------------------------
SELECT COUNT(*) AS married_customers_count
FROM SH.CUSTOMERS c
WHERE c.CUST_CITY = 'San Francisco'
AND c.CUST_MARITAL_STATUS = 'Married'
SQL> select ai narrate what are the top 3 customers in San Francisco;
RESPONSE
--------------------------------------------------------------------------------
The top 3 customers in San Francisco are:
RESPONSE
--------------------------------------------------------------------------------
Autonomous Database is a cloud-based database service provided by Oracle. It is
designed to automate many of the routine tasks involved in managing a database,
such as patching, tuning, and backups. Autonomous Database uses machine learning
and automation to optimize performance, security, and availability, allowing
users to focus on their applications and data rather than database
administration
SQL> select ai explainsql how many customers in San Francisco are married;
RESPONSE
--------------------------------------------------------------------------------
SELECT COUNT(*) AS customer_count
FROM SH.CUSTOMERS AS c
WHERE c.CUST_STATE_PROVINCE = 'San Francisco' AND c.CUST_MARITAL_STATUS =
'Married';
Explanation:
- We use the 'SH' table alias for the 'CUSTOMERS' table for better
readability.
- The query uses the 'COUNT(*)' function to count the number of rows that
match the given conditions.
- The 'WHERE' clause is used to filter the results:
- 'c.CUST_STATE_PROVINCE = 'San Francisco'' filters customers who have
'San Francisco' as their state or province.
- 'c.CUST_MARITAL_STATUS = 'Married'' filters customers who have 'Married'
as their marital status.
The result of this query will give you the count of customers in San
Francisco who are married, using the column alias 'customer_count' for the
result.
Remember to adjust the table and column names based on your actual schema if
they differ from the example.
Feel free to ask if you have more questions related to SQL or database in
general.
Note:
Only an ADMIN user can grant EXECUTE privileges and run the network ACL
procedure.
--
-- Grant Network ACL for Cohere endpoint
--
SQL> BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'api.cohere.ai',
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'ADB_USER',
principal_type => xs_acl.ptype_db)
);
END;
/
--
-- Create AI profile
--
SQL> BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
'COHERE',
'{"provider": "cohere",
"credential_name": "COHERE_CRED",
"object_list": [{"owner": "SH", "name": "customers"},
{"owner": "SH", "name": "sales"},
{"owner": "SH", "name": "products"},
{"owner": "SH", "name": "countries"}]
}');
END;
/
--
-- Enable AI profile in current session
--
SQL> EXEC DBMS_CLOUD_AI.SET_PROFILE('COHERE');
--
-- Use AI
--
SQL> select ai how many customers exist;
CUSTOMER_COUNT
--------------
55500
--
-- Grant Network ACL for Azure OpenAI endpoint
--
SQL> BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => '<azure_resource_name>.openai.azure.com',
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'ADMIN',
principal_type => xs_acl.ptype_db)
);
END;
/
--
-- Create AI profile
--
SQL>
BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
'AZUREAI',
'{"provider": "azure",
"azure_resource_name": "<azure_resource_name>",
"azure_deployment_name": "<azure_deployment_name>",
"credential_name": "AZURE_CRED",
"object_list": [{"owner": "SH", "name": "customers"},
{"owner": "SH", "name": "countries"},
{"owner": "SH", "name": "supplementary_demographics"},
{"owner": "SH", "name": "profits"},
{"owner": "SH", "name": "promotions"},
{"owner": "SH", "name": "products"}],
"conversation": "true"
}');
END;
/
--
-- Enable AI profile in current session
--
SQL> EXEC DBMS_CLOUD_AI.SET_PROFILE('AZUREAI');
--
-- Use AI
--
SQL> select ai how many customers exist;
CUSTOMER_COUNT
--------------
55500
MARRIED_CUSTOMERS
-----------------
18
SQL> select ai showsql how many customers in San Francisco are married;
RESPONSE
--------------------------------------------------------------------------------
SELECT COUNT(*) AS married_customers_count
FROM SH.CUSTOMERS c
WHERE c.CUST_CITY = 'San Francisco'
AND c.CUST_MARITAL_STATUS = 'Married'
SQL> select ai narrate what are the top 3 customers in San Francisco;
RESPONSE
--------------------------------------------------------------------------------
The top 3 customers in San Francisco are:
RESPONSE
--------------------------------------------------------------------------------
Autonomous Database is a cloud-based database service provided by Oracle. It is
designed to automate many of the routine tasks involved in managing a database,
such as patching, tuning, and backups. Autonomous Database uses machine learning
and automation to optimize performance, security, and availability, allowing
users to focus on their applications and data rather than database
administration tasks. It offers both Autonomous Transaction Processing (ATP)
for transactional workloads and Autonomous Data Warehouse (ADW) for analytical
workloads. Autonomous Database provides high performance, scalability, and
reliability, making it an ideal choice for modern cloud-based applications.
SQL> select ai explainsql how many customers in San Francisco are married;
RESPONSE
--------------------------------------------------------------------------------
SELECT COUNT(*) AS customer_count
FROM SH.CUSTOMERS AS c
WHERE c.CUST_STATE_PROVINCE = 'San Francisco' AND c.CUST_MARITAL_STATUS =
'Married';
Explanation:
- We use the 'SH' table alias for the 'CUSTOMERS' table for better readability.
- The query uses the 'COUNT(*)' function to count the number of rows that
match the given conditions.
- The 'WHERE' clause is used to filter the results:
- 'c.CUST_STATE_PROVINCE = 'San Francisco'' filters customers who have
'San Francisco' as their state or province.
- 'c.CUST_MARITAL_STATUS = 'Married'' filters customers who have 'Married'
as their marital status.
The result of this query will give you the count of customers in San
Francisco who are married, using the column alias 'customer_count' for the
result.
Remember to adjust the table and column names based on your actual schema if
they differ from the example.
Feel free to ask if you have more questions related to SQL or database in
general.
Note:
Only an ADMIN user can grant EXECUTE privileges and run the network ACL
procedure.
-- Copy the consent URL from the CLOUD_INTEGRATIONS view and consent to the
-- ADB-S application.
SQL> select param_value from CLOUD_INTEGRATIONS where param_name =
'azure_consent_url';
PARAM_VALUE
--------------------------------------------------------------------------------
https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/authorize?
client_id=<client_id>&response_type=code&scope=User.read
-- On the Azure OpenAI IAM console, search for the Azure application name
and assign the permission to the application.
-- You can get the application name in the cloud_integrations view.
SQL> select param_value from CLOUD_INTEGRATIONS where param_name =
'azure_app_name';
PARAM_VALUE
--------------------------------------------------------------------------------
ADBS_APP_DATABASE_OCID
--
-- Grant Network ACL for Azure OpenAI endpoint
--
SQL> BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => '<azure_resource_name>.openai.azure.com',
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'ADB_USER',
principal_type => xs_acl.ptype_db)
);
END;
/
--
-- Create AI profile
--
SQL>
BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
'AZUREAI',
'{"provider": "azure",
"credential_name": "AZURE$PA",
"object_list": [{"owner": "SH", "name": "customers"},
{"owner": "SH", "name": "countries"},
{"owner": "SH", "name": "supplementary_demographics"},
{"owner": "SH", "name": "profits"},
{"owner": "SH", "name": "promotions"},
{"owner": "SH", "name": "products"}],
"azure_resource_name": "<azure_resource_name>",
"azure_deployment_name": "<azure_deployment_name>"
}');
END;
/
--
-- Enable AI profile in current session
--
SQL> EXEC DBMS_CLOUD_AI.SET_PROFILE('AZUREAI');
--
-- Use AI
--
SQL> select ai how many customers exist;
CUSTOMER_COUNT
--------------
55500
MARRIED_CUSTOMERS
-----------------
18
SQL> select ai showsql how many customers in San Francisco are married;
RESPONSE
--------------------------------------------------------------------------------
SELECT COUNT(*) AS married_customers_count
FROM SH.CUSTOMERS c
WHERE c.CUST_CITY = 'San Francisco'
AND c.CUST_MARITAL_STATUS = 'Married'
SQL> select ai narrate what are the top 3 customers in San Francisco;
RESPONSE
--------------------------------------------------------------------------------
The top 3 customers in San Francisco are:
RESPONSE
--------------------------------------------------------------------------------
Autonomous Database is a cloud-based database service provided by Oracle. It is
designed to automate many of the routine tasks involved in managing a database,
such as patching, tuning, and backups. Autonomous Database uses machine learning
and automation to optimize performance, security, and availability, allowing
users to focus on their applications and data rather than database
administration tasks. It offers both Autonomous Transaction Processing (ATP)
for transactional workloads and Autonomous Data Warehouse (ADW) for analytical
workloads. Autonomous Database provides high performance, scalability, and
reliability, making it an ideal choice for modern cloud-based applications.
SQL> select ai explainsql how many customers in San Francisco are married;
RESPONSE
--------------------------------------------------------------------------------
SELECT COUNT(*) AS customer_count
FROM SH.CUSTOMERS AS c
WHERE c.CUST_STATE_PROVINCE = 'San Francisco' AND c.CUST_MARITAL_STATUS =
'Married';
Explanation:
- We use the 'SH' table alias for the 'CUSTOMERS' table for better
readability.
- The query uses the 'COUNT(*)' function to count the number of rows that
match the given conditions.
- The 'WHERE' clause is used to filter the results:
- 'c.CUST_STATE_PROVINCE = 'San Francisco'' filters customers who have
'San Francisco' as their state or province.
- 'c.CUST_MARITAL_STATUS = 'Married'' filters customers who have 'Married'
as their marital status.
The result of this query will give you the count of customers in San
Francisco who are married, using the column alias 'customer_count' for the
result.
Remember to adjust the table and column names based on your actual schema if
they differ from the example.
Feel free to ask if you have more questions related to SQL or database in
general.
Note:
OCI Generative AI uses cohere.command as the default model if you do not
specify the model_name. To learn more about the parameters, see Profile
Attributes.
SQL> BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'GENAI_CRED',
user_ocid => 'ocid1.user.oc1..aaaa...',
tenancy_ocid => 'ocid1.tenancy.oc1..aaaa...',
private_key => '<your_api_key>',
fingerprint => '<your_fingerprint>'
);
END;
/
--
-- Create AI profile
--
SQL> BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
'GENAI',
'{"provider": "oci",
"credential_name": "GENAI_CRED"
}');
END;
/
--
-- Enable AI profile in current session
--
SQL> EXEC DBMS_CLOUD_AI.SET_PROFILE('GENAI');
--
-- Use AI
--
RESPONSE
--------------------------------------------------------------------------------
Autonomous Database is a cloud-based database service provided by Oracle. It is
designed to automate many of the routine tasks involved in managing a database,
such as patching, tuning, and backups. Autonomous Database uses machine learning
and automation to optimize performance, security, and availability, allowing
users to focus on their applications and data rather than database
administration tasks. It offers both Autonomous Transaction Processing (ATP)
for transactional workloads and Autonomous Data Warehouse (ADW) for analytical
workloads. Autonomous Database provides high performance, scalability, and
reliability, making it an ideal choice for modern cloud-based applications.
• To get access to all Generative AI resources in the entire tenancy, use the
following policy:
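A typical form of such a policy, per OCI IAM policy conventions (the group name is
hypothetical):

allow group ADB-GENAI-USERS to manage generative-ai-family in tenancy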
Note:
OCI Generative AI uses cohere.command as the default model if you do not
specify the model_name. To learn more about the parameters, see Profile
Attributes.
--
-- Create AI profile
--
SQL> BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
'GENAI',
'{"provider": "oci",
"credential_name": "OCI$RESOURCE_PRINCIPAL"
}');
END;
/
--
-- Enable AI profile in current session
--
SQL> EXEC DBMS_CLOUD_AI.SET_PROFILE('GENAI');
--
-- Use AI
--
RESPONSE
--------------------------------------------------------------------------------
Autonomous Database is a cloud-based database service provided by Oracle. It is
designed to automate many of the routine tasks involved in managing a database,
such as patching, tuning, and backups. Autonomous Database uses machine learning
and automation to optimize performance, security, and availability, allowing
users to focus on their applications and data rather than database
administration tasks. It offers both Autonomous Transaction Processing (ATP)
for transactional workloads and Autonomous Data Warehouse (ADW) for analytical
workloads. Autonomous Database provides high performance, scalability, and
reliability, making it an ideal choice for modern cloud-based applications.
SQL> BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'GENAI_CRED',
user_ocid => 'ocid1.user.oc1..aaa',
tenancy_ocid => 'ocid1.tenancy.oc1..aaa',
private_key => '<your_api_key>',
fingerprint => '<your_fingerprint>'
);
END;
/
SQL> BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
'GENAI',
'{"provider": "oci",
"model": "meta.llama-2-70b-chat",
"oci_runtimetype": "LLAMA"}');
END;
/
SQL> BEGIN
DBMS_CLOUD_AI.SET_ATTRIBUTE(
'GENAI', 'credential_name', 'GENAI_CRED');
END;
/
SQL>
BEGIN
DBMS_CLOUD_AI.SET_ATTRIBUTE(
END;
--------------------------------------------------------------------------------
Subject: Action-packed movie recommendations for you!
Dear Gilbert,
I hope this email finds you well! I wanted to reach out to you today to
recommend two action-thriller movies that are currently available on our
MovieStream service. I think you'll really enjoy them!
The first movie I recommend is "John Wick" starring Keanu Reeves. This
movie follows the story of a retired hitman who seeks vengeance against a
powerful crime lord and his army of assassins. The action scenes are intense
and non-stop, and Keanu Reeves delivers an outstanding performance.
RESPONSE
--------------------------------------------------------------------------------
The second movie I recommend is "Mission: Impossible - Fallout" starring
Tom Cruise. This movie follows Ethan Hunt and his team as they try to prevent
a global catastrophe. The action scenes are heart-stopping and the
stunts are truly impressive. Tom Cruise once again proves why he's one of
the greatest action stars of all time.
Both of these movies are sure to keep you on the edge of your seat and
provide plenty of thrills and excitement. They're available to stream now on
MovieStream, so be sure to check them out!
If you have any questions or need assistance with MovieStream, please
don't hesitate to reach out to me. I'm always here to help.
Thank you for being a valued customer, and I hope you enjoy the movies!
RESPONSE
--------------------------------------------------------------------------------
Best regards,
[Your Name]
MovieStream Customer Service
--------------------------------------------------------------------------------
Rock climbing is an exhilarating and challenging sport that's perfect for
athletes looking to push their limits and test their strength, endurance, and
mental toughness. Whether you're a seasoned athlete or just starting out, rock
climbing offers a unique and rewarding experience that will have you hooked
from the very first climb. With its combination of physical and mental
challenges, rock climbing is a great way to build strength, improve
flexibility, and develop problem-solving skills. Plus, with the supportive
community of climbers and the breathtaking views from the top of the climb,
you'll be hooked from the very first climb. So, if you're ready to take on a
new challenge and experience the thrill of adventure, then it's time to get
started with rock climbing!
-- TABLE1
COMMENT ON TABLE table1 IS 'Contains movies, movie titles and the year
it was released';
COMMENT ON COLUMN table1.c1 IS 'movie ids. Use this column to join to
other tables';
COMMENT ON COLUMN table1.c2 IS 'movie titles';
COMMENT ON COLUMN table1.c3 IS 'year the movie was released';
-- TABLE2
COMMENT ON TABLE table2 IS 'transactions for movie views - also known
as streams';
COMMENT ON COLUMN table2.c1 IS 'day the movie was streamed';
COMMENT ON COLUMN table2.c2 IS 'genre ids. Use this column to join to
other tables';
COMMENT ON COLUMN table2.c3 IS 'movie ids. Use this column to join to
other tables';
COMMENT ON COLUMN table2.c4 IS 'customer ids. Use this column to join
to other tables';
COMMENT ON COLUMN table2.c5 IS 'device used to stream, watch or view
the movie';
COMMENT ON COLUMN table2.c6 IS 'sales from the movie';
COMMENT ON COLUMN table2.c7 IS 'number of views, watched, streamed';
-- TABLE3
COMMENT ON TABLE table3 IS 'Contains the genres';
COMMENT ON COLUMN table3.c1 IS 'genre id. use this column to join to other
tables';
COMMENT ON COLUMN table3.c2 IS 'name of the genre';
BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
profile_name => 'myprofile',
attributes =>
'{"provider": "azure",
"azure_resource_name": "my_resource",
"azure_deployment_name": "my_deployment",
"credential_name": "my_credential",
"comments":"true",
"object_list": [
{"owner": "moviestream", "name": "table1"},
{"owner": "moviestream", "name": "table2"},
{"owner": " moviestream", "name": "table3"}
]
}'
);
DBMS_CLOUD_AI.SET_PROFILE(
profile_name => 'myprofile'
);
END;
/
-- Prompts
select ai what are our total views;
RESPONSE
-------------------------------------------------
TOTAL_VIEWS
-----------
97890562
RESPONSE
-------------------------------------------------------------------------
SELECT SUM(QUANTITY_SOLD) AS total_views
FROM "moviestream"."table"
DEVICE TOTAL_VIEWS
-------------------------- -----------
mac 14719238
iphone 20793516
ipad 15890590
pc 14715169
galaxy 10587343
pixel 10593551
lenovo 5294239
fire 5296916
8 rows selected.
select ai showsql what are our total views broken out by device;
RESPONSE
--------------------------------------------------------------------------------
SELECT DEVICE, COUNT(*) AS TOTAL_VIEWS
FROM "moviestream"."table"
GROUP BY DEVICE
Terminology
Definitions of some of the terms used in the Select AI feature are provided.
The following terms are related to the Select AI feature:
Term Definition
Database Credential Database credentials are authentication credentials used to
access and interact with databases. They typically consist of a user name and a
password, sometimes supplemented by additional authentication factors like
security tokens. These credentials are used to establish a secure connection
between an application or user and a database, ensuring that only authorized
individuals or systems can access and manipulate the data stored within the
database.
Hallucination in LLM Hallucination in the context of Large Language Models refers
to a phenomenon where the model generates text that is incorrect, nonsensical,
or unrelated to the input prompt. Despite being a result of the model's attempt
to generate coherent text, these instances can contain information that is
fabricated, misleading, or purely imaginative. Hallucination can occur due to
biases in training data, lack of proper context understanding, or limitations in
the model's training process.
IAM Oracle Cloud Infrastructure Identity and Access Management (IAM) lets you
control who has access to your cloud resources. You can control what type of
access a group of users has and to which specific resources. To learn more, see
Overview of Identity and Access Management.
Large Language Model (LLM) Large Language Models refer to advanced artificial
intelligence models that are trained on massive amounts of text data to
understand and generate human-like language, software code, and database
queries. These models are capable of performing a wide range of natural language
processing tasks, including text generation, translation, summarization,
question answering, sentiment analysis, and more. LLMs are typically neural
network-based architectures that learn patterns, context, and semantics from the
input data, enabling them to generate coherent and contextually relevant text.
Natural Language Prompts Natural language prompts are human-readable instructions
or requests provided to guide generative AI models, such as Large Language
Models. Instead of using specific programming languages or commands, users can
interact with these models by entering prompts in a more conversational or
natural language form. The models then generate output based on the provided
prompt.
Network Access Control List (ACL) A Network Access Control List is a set of rules
or permissions that define what network traffic is allowed to pass through a
network device, such as a router, firewall, or gateway. ACLs are used to control
and filter incoming and outgoing traffic based on various criteria such as IP
addresses, port numbers, and protocols. They play a crucial role in network
security by enabling administrators to manage and restrict network traffic to
prevent unauthorized access, potential attacks, and data breaches.
Cache Messages with Singleton Pipes and Using Pipes for Persistent Messaging
(Figure: Database Pipe — User 1 calls PACK_MESSAGE to place a message on the pipe,
and User 2 calls UNPACK_MESSAGE to read it.)
• Implicit Pipe: Automatically created when a message is sent with an unknown pipe
name using the DBMS_PIPE.SEND_MESSAGE function.
• Explicit Pipe: Created using the DBMS_PIPE.CREATE_PIPE function with a user specified
pipe name.
• Public Pipe: Accessible by any user with EXECUTE permission on DBMS_PIPE package.
• Private Pipe: Accessible by sessions with the same user as the pipe creator.
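As a minimal sketch of these distinctions (the pipe name DEMO_PIPE is hypothetical),
the following creates an explicit private pipe and sends a message on it; sending to a
name that was never created with DBMS_PIPE.CREATE_PIPE would instead create an
implicit pipe:

DECLARE
  l_status INTEGER;
BEGIN
  -- Explicit, private pipe
  l_status := DBMS_PIPE.CREATE_PIPE(pipename => 'DEMO_PIPE', private => TRUE);
  -- Pack an attribute into the local buffer, then send it on the pipe
  DBMS_PIPE.PACK_MESSAGE('hello');
  l_status := DBMS_PIPE.SEND_MESSAGE('DEMO_PIPE');
END;
/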
Singleton Pipes provide the ability to cache a single message in the memory of the
Autonomous Database instance.
The following shows the general workflow for using singleton pipes.
(Figure: Singleton Pipe workflow — create the singleton pipe, send a message to cache
data, and concurrent reading sessions read the cached message.)
– A Singleton Pipe can cache one message in the pipe, hence the name
"singleton".
– The message in a Singleton Pipe can consist of multiple fields, up to a
total message size of 32,767 bytes.
– DBMS_PIPE supports the ability to pack multiple attributes in a message using
DBMS_PIPE.PACK_MESSAGE procedure.
– For a Public Singleton Pipe, the message can be received by any database
session with execute privilege on DBMS_PIPE package.
– For Private Singleton Pipe, the message can be received by sessions with the
same user as the creator of the Singleton Pipe.
• High Message Throughput for Reads
– Singleton Pipes cache the message in the pipe until it is invalidated or purged.
Database sessions can concurrently read a message from the Singleton Pipe.
– Receiving a message from a Singleton Pipe is a non-blocking operation.
• Message Caching
– A message is cached in a Singleton Pipe using DBMS_PIPE.SEND_MESSAGE.
– If there is an existing cached message in the Singleton Pipe, then
DBMS_PIPE.SEND_MESSAGE overwrites the previous message to maintain only
one message in the Singleton Pipe.
• Message Invalidation
– Explicit Invalidation: purges the pipe with the procedure DBMS_PIPE.PURGE or
by overwriting the message using DBMS_PIPE.SEND_MESSAGE.
– Automatic Invalidation: a message can be invalidated automatically after the
specified shelflife time has elapsed.
• No Eviction from Database Memory
– Singleton Pipes do not get evicted from Oracle Database memory.
– An Explicit Singleton Pipe continues to reside in database memory until it is
removed using DBMS_PIPE.REMOVE_PIPE or until the database restarts.
– An Implicit Singleton Pipe stays in database memory as long as there is a
cached message in the pipe.
• The cache function can be specified when reading a message from a Singleton Pipe
using DBMS_PIPE.RECEIVE_MESSAGE.
• When there is no message in the Singleton Pipe, DBMS_PIPE.RECEIVE_MESSAGE calls the
cache function.
• When the message shelflife time has elapsed, the database automatically populates a
new message in the Singleton Pipe.
Using a cache function simplifies working with Singleton Pipes. You do not need to handle
failure cases for receiving a message from an empty pipe. In addition, a cache function
ensures there is no cache-miss when you read messages from a Singleton Pipe, providing
maximum use of the cached message.
Singleton Pipe
Cache Function
Create
Singleton Pipe Update Message
Cached in Cache
Message
Cache miss/expire Send Message
Invoke Cache Function to Cache Data
Read
Message
When you define a cache function, the function name must be fully qualified with the owner
schema:
• OWNER.FUNCTION_NAME
• OWNER.PACKAGE.FUNCTION_NAME
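The following is a minimal sketch of a cache function. The names
(ADMIN.PIPE_CACHE_FUNC, MY_PIPE1) are hypothetical, and the sketch assumes a
contract in which the function creates the private singleton pipe, caches a
placeholder message, and returns the message text; consult the DBMS_PIPE
documentation for the exact cache function contract:

CREATE OR REPLACE FUNCTION ADMIN.PIPE_CACHE_FUNC RETURN VARCHAR2
AS
  l_status  INTEGER;
  l_message VARCHAR2(32767) := 'This is a placeholder cache message for an empty pipe';
BEGIN
  -- Create the private singleton pipe (returns 0 if it already exists and is usable)
  l_status := DBMS_PIPE.CREATE_PIPE(
                pipename  => 'MY_PIPE1',
                private   => TRUE,
                singleton => TRUE,
                shelflife => 3600);
  -- Cache the placeholder message in the pipe
  DBMS_PIPE.PACK_MESSAGE(l_message);
  l_status := DBMS_PIPE.SEND_MESSAGE('MY_PIPE1');
  RETURN l_message;  -- assumption: the function returns the cached message text
END;
/

A reader would then pass the fully qualified function name in the cache function
parameter of DBMS_PIPE.RECEIVE_MESSAGE (cache_func => 'ADMIN.PIPE_CACHE_FUNC' in
this sketch; the parameter name is an assumption).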
DBMS_PIPE.UNPACK_MESSAGE(l_msg);
PIPE ROW(msg_types.t_rcv_row(l_msg));
END LOOP;
RETURN;
END;
1. Create an explicit singleton pipe named MY_PIPE1 with the shelflife parameter set to
3600 (seconds).
DECLARE
l_status INTEGER;
BEGIN
l_status := DBMS_PIPE.CREATE_PIPE(
pipename => 'MY_PIPE1',
private => TRUE,
singleton => TRUE,
shelflife => 3600);
END;
/
STATUS
----------
0
MESSAGE
--------------------------------------------------------------------------------
This is a real message that you can get multiple times
This is a real message that you can get multiple times
EXEC DBMS_PIPE.PURGE('&pipename');
SELECT DBMS_PIPE.REMOVE_PIPE('&pipename') status FROM DUAL;
Note:
The current session user invoking DBMS_PIPE.RECEIVE_MESSAGE must
have the required privilege to execute the cache function.
2. Receive with a cache function and confirm that the message populates the pipe.
The pipe must exist as a private pipe created in the cache function.
MESSAGE
---------------
This is a placeholder cache message for an empty pipe
MESSAGE
---------------
This is a placeholder cache message for an empty pipe
This is a placeholder cache message for an empty pipe
Note:
Oracle recommends creating an explicit pipe before you send or receive
messages with persistent messaging. Creating an explicit pipe with
DBMS_PIPE.CREATE_PIPE ensures that the pipe is created with the access
permissions you want, either public or private (by setting the private
parameter).
The following shows the general workflow for DBMS_PIPE with persistent messaging:
(Figure: Cloud Pipe — one session packs and sends a message that is stored in Cloud
Object Store, and another session receives and unpacks it.)
Existing applications using DBMS_PIPE can continue to operate with minimal changes.
You can configure existing applications that use DBMS_PIPE with a credential object and
location URI using a logon trigger or using some other initialization routine. After
setting the DBMS_PIPE credential and location URI, no other changes are needed to
use persistent messaging. All subsequent use of the pipe stores the messages in
Cloud Object Store instead of in database memory. This allows you to change the
storage method for messages from in-memory to persistent Cloud Object Storage,
with minimal changes.
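A minimal sketch of such an initialization routine follows, using a schema-level logon
trigger; the trigger name, credential name, and bucket URI are hypothetical:

CREATE OR REPLACE TRIGGER pipe_defaults_trg
  AFTER LOGON ON SCHEMA
BEGIN
  -- Point DBMS_PIPE at Cloud Object Store for persistent messages
  DBMS_PIPE.SET_CREDENTIAL_NAME('my_persistent_pipe_cred');
  DBMS_PIPE.SET_LOCATION_URI('https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/');
END;
/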
• Messages can be sent and retrieved across multiple Autonomous Database instances in
the same region or across regions.
• Persistent messages are guaranteed to either be written or read by exactly one process.
This prevents message content inconsistency due to concurrent writes and reads. Using
a persistent messaging pipe, DBMS_PIPE allows only one operation, sending a message
or receiving a message, to be active at a given time. However, if an operation is not
possible due to an ongoing operation, the process retries periodically until the timeout
value is reached.
• DBMS_PIPE uses DBMS_CLOUD to access Cloud Object Store. Messages can be stored in
any of the supported Cloud Object Stores. See DBMS_CLOUD Package File URI
Formats for more information.
• DBMS_PIPE uses DBMS_CLOUD to access Cloud Object Store and all supported credential
types are available:
– DBMS_CLOUD.CREATE_CREDENTIAL: See CREATE_CREDENTIAL Procedure for more
information.
– Policy and role based credentials: See Accessing Cloud Resources by Configuring
Policies and Roles for more information.
To use a public pipe, the user (database session) must have execute privilege on DBMS_PIPE.
For a public pipe using persistent messaging and storing messages to Cloud Object Store,
the user (database session) must have execute privilege on DBMS_CLOUD and execute
privilege on the credential object (or you can create a credential object that is allowed to
access the location URI that contains the message).
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'my_persistent_pipe_cred',
username => 'adb_user@example.com',
password => 'password'
);
END;
/
This operation stores the credentials in the database in an encrypted format. You
can use any name for the credential name. Note that this step is required only
once unless your object store credentials change. After you store the credentials
you can then use the same credential name to access Cloud Object Store to send
and receive messages with DBMS_PIPE.
For detailed information about the parameters, see CREATE_CREDENTIAL
Procedure. For Oracle Cloud Infrastructure Object Storage, it is required that the
credential uses native Oracle Cloud Infrastructure authentication.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not
required if you enable resource principal credentials. See Use Resource Principal
to Access Oracle Cloud Infrastructure Resources for more information.
Note:
Some tools like SQL*Plus and SQL Developer use the ampersand
character (&) as a special character. If you have the ampersand
character in your password use the SET DEFINE OFF command in those
tools as shown in the example to disable the special character and get
the credential created properly.
2. Create an explicit pipe to send and retrieve messages. For example, create a pipe
named ORDER_PIPE.
DECLARE
r_status INTEGER;
BEGIN
r_status := DBMS_PIPE.CREATE_PIPE(pipename => 'ORDER_PIPE');
END;
/
4. Use DBMS_PIPE procedures to set the default access credential and location URI to
store persistent messages to Cloud Object Store.
BEGIN
DBMS_PIPE.SET_CREDENTIAL_NAME('my_persistent_pipe_cred');
DBMS_PIPE.SET_LOCATION_URI('https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/');
END;
/
These procedures set the default credential name and the default location URI for use
with DBMS_PIPE procedures.
If you use Oracle Cloud Infrastructure Object Storage to store messages, you can use
Oracle Cloud Infrastructure Native URIs or Swift URIs. However, the location URI and the
credential must match in type as follows:
• If you use a native URI format to access Oracle Cloud Infrastructure Object Storage,
you must use Native Oracle Cloud Infrastructure Signing Keys authentication in the
credential object.
• If you use Swift URI format to access Oracle Cloud Infrastructure Object Storage, you
must use an auth token authentication in the credential object.
See SET_CREDENTIAL_NAME Procedure and SET_LOCATION_URI Procedure for
more information.
5. Pack and send a message on the pipe.
DECLARE
l_result INTEGER;
l_date DATE;
BEGIN
l_date := sysdate;
DBMS_PIPE.PACK_MESSAGE(l_date); -- date of order
DBMS_PIPE.PACK_MESSAGE('C123'); -- order number
DBMS_PIPE.PACK_MESSAGE(5); -- number of items in order
DBMS_PIPE.PACK_MESSAGE('Printers'); -- type of item in order
l_result := DBMS_PIPE.SEND_MESSAGE(
pipename => 'ORDER_PIPE',
credential_name => DBMS_PIPE.GET_CREDENTIAL_NAME,
location_uri => DBMS_PIPE.GET_LOCATION_URI);
IF l_result = 0 THEN
DBMS_OUTPUT.put_line('DBMS_PIPE sent order successfully');
END IF;
END;
/
When you are on the same Autonomous Database instance and the pipe exists,
you do not need to run DBMS_PIPE.CREATE_PIPE before you receive a message.
This applies when the pipe was created on the same instance, as shown in Create
an Explicit Persistent Pipe and Send a Message.
2. Receive a message from the pipe.
DECLARE
message1 DATE;
message2 VARCHAR2(100);
message3 INTEGER;
message4 VARCHAR2(100);
l_result INTEGER;
BEGIN
DBMS_PIPE.SET_CREDENTIAL_NAME('my_persistent_pipe_cred');
DBMS_PIPE.SET_LOCATION_URI('https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/');
l_result := DBMS_PIPE.RECEIVE_MESSAGE (
pipename => 'ORDER_PIPE',
timeout => DBMS_PIPE.MAXWAIT,
credential_name => DBMS_PIPE.GET_CREDENTIAL_NAME,
location_uri => DBMS_PIPE.GET_LOCATION_URI);
IF l_result = 0 THEN
DBMS_PIPE.unpack_message(message1);
DBMS_PIPE.unpack_message(message2);
DBMS_PIPE.unpack_message(message3);
DBMS_PIPE.unpack_message(message4);
END IF;
END;
/
When you are on the same Autonomous Database instance, the credential already exists
and you do not need to run DBMS_CLOUD.CREATE_CREDENTIAL to receive a message. This
applies when the pipe was created on the same instance, as shown in Create an Explicit
Persistent Pipe and Send a Message.
See SET_CREDENTIAL_NAME Procedure and SET_LOCATION_URI Procedure for more
information.
See RECEIVE_MESSAGE Function for more information.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'my_persistent_pipe_cred',
username => 'adb_user@example.com',
password => 'password'
);
END;
/
This operation stores the credentials in the database in an encrypted format. You can use
any name for the credential name. Note that this step is required only once unless your
object store credentials change. Once you store the credentials you can then use the
same credential name to access the Cloud Object Store to send and receive messages
with DBMS_PIPE.
For detailed information about the parameters, see CREATE_CREDENTIAL Procedure.
Creating a credential to access Oracle Cloud Infrastructure Object Store is not required if
you enable resource principal credentials. See Use Resource Principal to Access Oracle
Cloud Infrastructure Resources for more information.
Note:
Some tools like SQL*Plus and SQL Developer use the ampersand
character (&) as a special character. If you have the ampersand
character in your password use the SET DEFINE OFF command in those
tools as shown in the example to disable the special character and get
the credential created properly.
2. Create an explicit pipe with the same name as the pipe that sent the message. For
example, create a pipe named ORDER_PIPE.
DECLARE
r_status INTEGER;
BEGIN
r_status := DBMS_PIPE.CREATE_PIPE(pipename => 'ORDER_PIPE');
END;
/
4. Use DBMS_PIPE procedures to set the default access credential and location URI
for Object Store so that DBMS_PIPE can access the persistent message.
BEGIN
DBMS_PIPE.SET_CREDENTIAL_NAME('my_persistent_pipe_cred');
DBMS_PIPE.SET_LOCATION_URI('https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/');
END;
/
These procedures set the default credential name and the default location URI for
use with DBMS_PIPE procedures.
If you use Oracle Cloud Infrastructure Object Storage to store messages, you can
use Oracle Cloud Infrastructure Native URIs or Swift URIs. However, the location
URI and the credential must match in type as follows:
• If you use a native URI format to access Oracle Cloud Infrastructure Object
Storage, you must use Native Oracle Cloud Infrastructure Signing Keys
authentication in the credential object.
• If you use Swift URI format to access Oracle Cloud Infrastructure Object
Storage, you must use an auth token authentication in the credential object.
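For illustration, using the same placeholder namespace and bucket as the examples above, the two URI styles look like the following (a sketch; substitute your own region, namespace, and bucket):

Native URI format:
https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/

Swift URI format:
https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/namespace-string/bucketname1/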
Next, receive the message from the pipe:

DECLARE
    message1 DATE;
    message2 VARCHAR2(100);
    message3 INTEGER;
    message4 VARCHAR2(100);
    l_result INTEGER;
BEGIN
    DBMS_PIPE.SET_CREDENTIAL_NAME('my_persistent_pipe_cred');
    DBMS_PIPE.SET_LOCATION_URI('https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/');
    l_result := DBMS_PIPE.RECEIVE_MESSAGE (
        pipename        => 'ORDER_PIPE',
        timeout         => DBMS_PIPE.MAXWAIT,
        credential_name => DBMS_PIPE.GET_CREDENTIAL_NAME,
        location_uri    => DBMS_PIPE.GET_LOCATION_URI);
    IF l_result = 0 THEN
        DBMS_PIPE.UNPACK_MESSAGE(message1);
        DBMS_PIPE.UNPACK_MESSAGE(message2);
        DBMS_PIPE.UNPACK_MESSAGE(message3);
        DBMS_PIPE.UNPACK_MESSAGE(message4);
    END IF;
END;
/
1. Remove the pipe when it is no longer needed.

DECLARE
    l_result INTEGER;
BEGIN
    l_result := DBMS_PIPE.REMOVE_PIPE('ORDER_PIPE');
END;
/
The REMOVE_PIPE function removes the pipe from the Autonomous Database instance where it runs; however, REMOVE_PIPE does not affect other Autonomous Database instances that have a pipe with the same name using the same location URI.
2. On the Autonomous Database instance where you run DBMS_PIPE.REMOVE_PIPE, verify that the pipe is removed.
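For example, one way to check is to query the V$DB_PIPES view (a sketch, assuming your user has access to this view):

SELECT name
FROM v$db_pipes
WHERE name = 'ORDER_PIPE';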
No rows selected
Query Text in Object Storage
for example, PDF and DOC (MS Word) formats. You can configure a refresh rate that indicates the frequency, in minutes, at which the index is refreshed to pick up any new uploads or deletes.
A local table named with the standard suffix $TXTIDX (for example, index_name$TXTIDX) is created when you create an index on the object storage, and you can use this table to perform a search using the CONTAINS keyword.
See Indexing with Oracle Text for more information.
You can include a stop word list when you specify the stop_words format option with
DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX.
See Indexing with Oracle Text for more information on Oracle Text stop words and working
with binary files.
1. Create a credential object to access the source location.
See CREATE_CREDENTIAL Procedure for more information.
2. Run the DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX procedure to create a text index on
the object storage files.
BEGIN
    DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX (
        credential_name => 'DEFAULT_CREDENTIAL',
        location_uri    => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/ts_data/',
        index_name      => 'EMP',
        format          => JSON_OBJECT ('refresh_rate' value 10)
    );
END;
/
This example creates a text index EMP on the object storage files located at the URI
specified in the location_uri parameter. The refresh_rate option in the format
parameter specifies that the index EMP is refreshed at an interval of 10 minutes.
This creates a local table named EMP$TXTIDX. You can use the table EMP$TXTIDX to perform a search using CONTAINS.
For example:
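The query itself is not preserved in this extract; the following is a sketch, assuming the CONTAINS predicate is applied to the indexed object_name column of the EMP$TXTIDX reference table:

SELECT object_name
FROM EMP$TXTIDX
WHERE CONTAINS(object_name, 'king') > 0;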
This query returns the object or file names that contain the string king.
See Text Index Reference Table for more information.
You can query an external table using the EXTERNAL MODIFY clause to retrieve the
actual records.
Note:
The external table EMPEXTTAB is a sample external table that is created on the same location_uri.
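For example, a minimal sketch, assuming EMPEXTTAB was created with DBMS_CLOUD.CREATE_EXTERNAL_TABLE over the same bucket and that the file name is taken from the index search results:

SELECT *
FROM EMPEXTTAB
EXTERNAL MODIFY (
    LOCATION ('https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/ts_data/data_1_20220531T165402Z.json')
);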
See Querying External Data with Autonomous Database for more information.
See CREATE_EXTERNAL_TEXT_INDEX Procedure for more information.
See Configure Policies and Roles to Access Resources for more information.
To drop the text index, run DBMS_CLOUD.DROP_EXTERNAL_TEXT_INDEX:

BEGIN
    DBMS_CLOUD.DROP_EXTERNAL_TEXT_INDEX (
        index_name => 'EMP'
    );
END;
/
You can query the index_name$TXTIDX table to search for a string using the CONTAINS keyword. For example, when you call the DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX procedure with the INDEX_NAME value EMP, it creates the text reference table EMP$TXTIDX.
The text reference table has the following columns:
• object_name: the name of the file on the object storage that contains the searched text string.
• object_path: the URI of the object storage bucket or folder that contains the file.
• mtime: the last modified timestamp of the object storage file. This is the time when the file was last accessed by DBMS_CLOUD.
For example:
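The query for this output is not preserved in this extract; the following is a sketch, again assuming the CONTAINS predicate applies to the reference table:

SELECT object_path, object_name
FROM EMP$TXTIDX
WHERE CONTAINS(object_name, 'king') > 0;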
OBJECT_PATH                                                                                 OBJECT_NAME
------------------------------------------------------------------------------------------- ------------------------------------
https://objectstorage.us-phoenix-1.oraclecloud.com/n/example1/b/adbs_data_share/o/ts_data/  data_2_20221026T195313585601Z.json
This query returns the file names and the location URIs of the object storage files that contain the text string king, in either upper- or lowercase.
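A query such as the following (a minimal sketch) lists each indexed file with its last modified timestamp:

SELECT object_name, mtime
FROM EMP$TXTIDX;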
OBJECT_NAME MTIME
----------------------------- -------------------------------------
data_1_20220531T165402Z.json 31-MAY-22 04.54.02.979000 PM +00:00
data_1_20220531T165427Z.json 31-MAY-22 04.54.27.997000 PM +00:00
This query returns the file name and last modified timestamp of the object files on which the index EMP is created.
You can query the ALL_SCHEDULER_JOB_RUN_DETAILS view to obtain the status and any error
reported by the index creation job.
The name of the DBMS_SCHEDULER job is derived from the INDEX_NAME parameter specified
when you call DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX.
To query the ALL_SCHEDULER_JOB_RUN_DETAILS view, you must be logged in as the ADMIN user
or have READ privilege on the ALL_SCHEDULER_JOB_RUN_DETAILS view.
For example, the following SELECT statement with a WHERE clause on job_name shows the run
details for the job:
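A sketch of such a query, assuming the job name for the index EMP follows the index_name$JOB pattern shown below:

SELECT job_name, status, error#, additional_info
FROM all_scheduler_job_run_details
WHERE LOWER(job_name) = LOWER('EMP$JOB');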
You can also query for the existence of an index creation scheduler job.
For example:
SELECT state
FROM all_scheduler_jobs
WHERE LOWER(job_name) = LOWER('index_name$JOB');
Use Oracle Workspace Manager on Autonomous Database

To enable Oracle Workspace Manager, run DBMS_CLOUD_ADMIN.ENABLE_FEATURE with the feature name OWM:
BEGIN
    DBMS_CLOUD_ADMIN.ENABLE_FEATURE(
        feature_name => 'OWM');
END;
/
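You can verify that the feature is enabled, for example with the following query (a sketch, assuming the DBA_CLOUD_CONFIG view is available to your user):

SELECT param_name, param_value
FROM dba_cloud_config
WHERE LOWER(param_name) = 'owm';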
PARAM_NAME PARAM_VALUE
---------- -----------
owm        enabled
4. (Optional) If you have existing data using Oracle Workspace Manager on another
database that you want to migrate to Autonomous Database, Oracle Workspace
Manager provides import and export procedures that enable you to migrate the data. See
Export_Schemas and Import_Schemas for additional information.
5. Continue with the imported schemas or the new schemas.
See ENABLE_FEATURE Procedure for more information.
To disable Oracle Workspace Manager, run DBMS_CLOUD_ADMIN.DISABLE_FEATURE:

BEGIN
    DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
        feature_name => 'OWM');
END;
/
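Re-running the same DBA_CLOUD_CONFIG query (the sketch shown above) confirms that the feature is disabled:

SELECT param_name, param_value
FROM dba_cloud_config
WHERE LOWER(param_name) = 'owm';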
0 rows selected.

Use and Manage Elastic Pools on Autonomous Database
Note:
Elastic pools are only available for Autonomous Database instances that use
the ECPU compute model.
Note:
By default each instance in an elastic pool is automatically assigned a
maintenance window. By selecting a pool shape that is 1024 ECPUs or greater,
you have the option of assigning a custom 2-hour maintenance window during
which the leader and all elastic pool members are patched together. To select a
custom maintenance window for your elastic pool, file a Service Request at
Oracle Cloud Support.
• Pool Capacity: The pool capacity is the maximum number of ECPUs that an elastic pool
can use, and is four times (x4) the pool size.
Note:
In these examples Autonomous Data Guard is not enabled. See About
Elastic Pools with Autonomous Data Guard Enabled for information on using
elastic pools with Autonomous Data Guard.
The following are examples of Autonomous Database instances that could be in an elastic pool with a pool size of 128 ECPUs and a pool capacity of 512 ECPUs:
• Each of the following is valid for pool members in an elastic pool with a pool size of 128 ECPUs:
– 1 instance with 512 ECPUs, for a total of 512 ECPUs
– 128 instances with 4 ECPUs, for a total of 512 ECPUs
– 256 instances with 2 ECPUs, for a total of 512 ECPUs
• Similarly, each of the following is valid for pool members in an elastic pool with a pool size of 128 ECPUs:
– 1 instance with 128 ECPUs, 2 instances with 64 ECPUs, 32 instances with 4 ECPUs, and 64 instances with 2 ECPUs, for a total of 512 ECPUs
– 256 instances with 1 ECPU, 64 instances with 2 ECPUs, for a total of 384 ECPUs, which is less than the pool capacity of 512 ECPUs
instance, and in this example you would pay for 1024 ECPUs. Using an elastic pool
provides up to 87% compute cost savings.
After you create an elastic pool, the total ECPU usage for a given hour is charged to
the Autonomous Database instance that is the pool leader. With the exception of the
pool leader, individual Autonomous Database instances that are pool members are not
charged for ECPU usage while they are members of an elastic pool.
Elastic pool billing is as follows:
• If the total aggregated peak ECPU utilization is equal to or below the pool size for
a given hour, you are charged for the pool size number of ECPUs (one times the
pool size).
After an elastic pool is created ECPU billing continues at a minimum of one times
the pool size rate, even when databases that are part of the pool are stopped. This
applies to pool member databases and to the pool leader.
In other words, if the aggregated peak ECPU utilization of the pool is less than or
equal to the pool size for a given hour, you are charged for the pool size number of
ECPUs (one times the pool size). This represents up to 87% compute cost
savings over the case in which these databases are billed separately without
using elastic pools.
• If the aggregated peak ECPU utilization of the pool leader and the members
exceeds the pool size at any point in time in a given billing hour:
– Aggregated peak ECPU utilization of the pool is equal to or less than two times the pool size number of ECPUs: For usage that is greater than one times the pool size number of ECPUs and up to and including two times the pool size number of ECPUs in a given billing hour, hourly billing is two times the pool size number of ECPUs.
In other words, if the aggregated peak ECPU utilization of the pool exceeds
the pool size, but is less than or equal to two times the pool size for a given
hour, you are charged for twice the pool size number of ECPUs (two times the
pool size). This represents up to 75% compute cost savings over the case
in which these databases are billed separately without using elastic pools.
– Aggregated peak ECPU utilization of the pool is equal to or less than four times the pool size number of ECPUs: For usage that is greater than two times the pool size number of ECPUs and up to and including four times the pool size number of ECPUs in a given billing hour, hourly billing is four times the pool size number of ECPUs.
In other words, if the aggregated peak ECPU utilization of the pool exceeds
twice the pool size for a given hour, you are charged for four times the pool
size number of ECPUs (four times the pool size). This represents up to 50%
compute cost savings over the case in which these databases are billed
separately without using elastic pools.
For example, consider an elastic pool with a pool size of 128 ECPUs and a pool
capacity of 512 ECPUs:
– Case-1: The aggregated peak ECPU utilization of the pool leader and the
members is 40 ECPUs between 2:00pm and 2:30pm, and 128 ECPUs
between 2:30pm and 3:00pm.
The elastic pool is billed 128 ECPUs, one times the pool size, for this billing
hour (2-3pm). This case applies when the peak aggregated ECPU usage of
the elastic pool for the billing hour is less than or equal to 128 ECPUs.
– Case-2: The aggregated peak ECPU utilization of the pool leader and the members
is 40 ECPUs between 2:00pm and 2:30pm, and 250 ECPUs between 2:30pm and
3:00pm.
The elastic pool is billed 256 ECPUs, two times the pool size, for this billing hour
(2-3pm). This case applies when the peak aggregated ECPU usage of the elastic
pool for the billing hour is less than or equal to 256 ECPUs and greater than 128
ECPUs.
– Case-3: The aggregated peak ECPU utilization of the pool leader and the members
is 80 ECPUs between 2:00pm and 2:30pm, and 509 ECPUs between 2:30pm and
3:00pm.
The elastic pool is billed 512 ECPUs, four times the pool size, for this billing hour
(2-3pm). This case applies when the peak aggregated ECPU usage of the elastic
pool for the billing hour is less than or equal to 512 ECPUs and greater than 256
ECPUs.
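The tiering reduces to a simple calculation. The following PL/SQL block is an illustrative sketch only (the block and its variable names are hypothetical, not an Oracle API); it maps an hour's aggregated peak ECPU utilization to the billed ECPUs for the three cases above:

DECLARE
    pool_size  CONSTANT NUMBER := 128;   -- pool capacity is 4 x pool size = 512
    peak_usage NUMBER := 250;            -- aggregated peak ECPU utilization for the hour
    billed     NUMBER;
BEGIN
    IF peak_usage <= pool_size THEN
        billed := pool_size;             -- one times the pool size
    ELSIF peak_usage <= 2 * pool_size THEN
        billed := 2 * pool_size;         -- two times the pool size
    ELSE
        billed := 4 * pool_size;         -- four times the pool size (the pool capacity)
    END IF;
    DBMS_OUTPUT.PUT_LINE('Billed ECPUs for the hour: ' || billed);  -- prints 256 for Case-2
END;
/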
See How to Achieve up to 87% Compute Cost Savings with Elastic Resource Pools on
Autonomous Database for more details.
For example, assume there is an elastic pool with a pool size of 128 ECPUs; if in a
given billing hour the aggregated peak ECPU utilization of the pool leader and the
members is 80 ECPUs for the billing hour, and during this hour the combined total
ECPU utilization for instances using built-in tools is 30 ECPUs, the leader is charged
for the pool size (128 ECPUs), plus the built-in tool ECPU usage (30 ECPUs), for a
total of 158 ECPUs for that hour.
– 1 instance with 128 ECPUs with cross-region Autonomous Data Guard enabled
(using a total of 128 ECPUs from the pool).
– 64 instances with 2 ECPUs each with both local and cross-region Autonomous Data
Guard enabled (using a total of 256 ECPUs from the pool).
– 128 instances with 1 ECPU, each with cross-region Autonomous Data Guard
enabled (using 128 ECPUs from the pool).
The following operations are available to a pool leader:
• Create an elastic pool: The Autonomous Database instance that creates an elastic pool is the pool leader. See Create Join or Manage an Elastic Pool for more information.
• Remove an elastic pool member: An elastic pool leader can remove a member from the elastic pool. See As Pool Leader Remove Members from an Elastic Pool for more information.
• Terminate an elastic pool: When an elastic pool has no pool members, the pool leader can terminate the elastic pool. See Terminate an Elastic Pool for more information.
• Modify elastic pool size: An elastic pool leader can modify the pool size. See Change the Elastic Pool Shape for more information.
• List pool members: A pool leader can list pool members. See List Elastic Pool Members for more information.

The following operations are available to pool members:
• Add instance to elastic pool: An Autonomous Database instance can be added as a pool member as long as the instance is one of the supported workload types, the instance uses the ECPU compute model, and the instance is not a pool member of a different pool. The supported workload types are: Transaction Processing, Data Warehouse, JSON Database, or APEX. See Join an Existing Elastic Pool for more information.
• Remove an elastic pool member: An elastic pool member can remove itself from the elastic pool. See Remove Pool Members from an Elastic Pool for more information.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu, click Oracle Database and then click Autonomous Transaction Processing.
• On the Autonomous Databases page, select an Autonomous Database from the links under the Display name column.
To create an elastic pool:
Note:
To create an elastic pool the instance must use the ECPU compute model
and the workload type must be Transaction Processing.
After you create an elastic pool, click Manage resource allocation to display the elastic pool
information. In the Manage resource allocation area, the Elastic pool field shows Enabled,
the Pool role field shows Leader, and the Pool ECPU count field shows the pool size you
selected.
Note:
Only a pool leader can modify the pool shape.
Note:
Decreasing the CPU allocation, Pool ECPU count, to a value that cannot
accommodate all the members of the elastic pool is not allowed.
For example, for an elastic pool with a Pool ECPU count of 256 ECPUs and a pool capacity
of 1024 ECPUs: If the elastic pool contains eight (8) Autonomous Database instances with 80
ECPUs each for a total of 640 ECPUs, the elastic pool leader cannot decrease the Pool
ECPU count to 128 ECPUs. In this case, if the pool size were reduced to 128 ECPUs, the
pool capacity would be 512 ECPUs, which is less than the total allocation for the pool
members (640 ECPUs).
Note:
To create an elastic pool the instance must use the ECPU compute model and the
workload type you select must be Transaction Processing.
1. In the Configure the database area, click Show advanced options to show advanced
options.
2. Deselect Compute auto scaling.
3. Select Enable elastic pool.
4. Select Create an elastic pool.
5. In the Pool ECPU count field, select a pool size from the list of pool shapes.
The valid values that you can select are: 128, 256, 512, 1024, 2048, or 4096.
Note:
To join an elastic pool the instance must use the ECPU compute model and
the workload type must be one of Transaction Processing, Data
Warehouse, JSON Database, or APEX.
1. In the Configure the database area, click Show advanced options to show advanced
options.
2. Deselect Compute auto scaling.
3. Select Enable elastic pool.
4. Select Join an existing elastic pool.
5. In the Select pool leader in compartment field choose a pool leader in a compartment.
a. Use the compartment shown or click Change Compartment to select a different
compartment.
b. Select a pool leader from the list of available pool leaders in the selected
compartment.
If you click the actions icon (three dots) at the end of a row in the list, you can select an action to perform for the member. The possible actions are:
• View details: Shows the member's details in the Oracle Cloud Infrastructure Console.
• Copy OCID: Copies the member's Autonomous Database instance OCID.
• Remove from pool: Brings up a dialog where you can confirm to remove the
Autonomous Database instance from the pool.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu, click Oracle Database and then, depending on your workload, click one of: Autonomous Data Warehouse, Autonomous JSON Database, or Autonomous Transaction Processing.
• On the Autonomous Databases page, select an Autonomous Database from the links under the Display name column.
As a pool member, you can remove your instance from an elastic pool:
1. On the Details page, under Resource allocation, in the Elastic pool field click
Leave pool.
This shows the Leave pool confirmation dialog.
2. In the Leave pool confirmation dialog, enter the database name.
3. Click Leave.
When you click Leave, the Lifecycle state changes to Scaling in Progress. After
the Lifecycle state changes to Available the changes apply immediately.
• As Pool Leader Remove Members from an Elastic Pool
An elastic pool leader can remove pool members from an elastic pool.
2. Click the actions icon (three dots) at the end of a row for the instance you want to remove and, in the drop-down list, select Remove from pool.
This shows the Remove from pool confirmation dialog.
3. Click Leave to confirm.
When you click Leave, the Lifecycle state changes to Scaling in Progress. After the
Lifecycle state changes to Available the changes apply immediately.
Note:
Terminating an elastic pool is only allowed when there are no pool members in the
elastic pool.
• Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
• From the Oracle Cloud Infrastructure left navigation menu click Oracle Database
and then click Autonomous Transaction Processing.
• On the Autonomous Databases page select the Autonomous Database that is the
pool leader from the links under the Display Name column.
To terminate an elastic pool:
1. On the Autonomous Database Details page, select Manage resource allocation.
2. Select Terminate pool.
3. Click Apply to terminate the elastic pool.
When you click Apply, the Lifecycle State changes to Scaling in Progress. After the
Lifecycle State changes to Available the changes apply immediately.
Migrate Oracle Databases to Autonomous Database
Migration Prerequisites
Before starting your migration, it is recommended to run the Cloud Premigration Advisor Tool
(CPAT), which helps you evaluate your source database for compatibility with the
Autonomous Database.
CPAT identifies potential actions you may need to take before or during the migration,
prioritizing their importance and suggesting resolutions. Some migration tools and services
automatically run this advisor.
See Cloud Premigration Advisor Tool for more information.
Online Migration
If your source database needs to be always online and you can only take minimal downtime
for the migration, you can use one of the online migration services and tools below,
independent of your database size. Note that both options require the source database to be
on version 11.2.0.4 or later. If your source database is using an older version, you need to
upgrade it first if you want to use one of the online migration methods.
Following are the online migration methods:
• OCI Database Migration: OCI Database Migration, a managed cloud service, provides
validated, cross-version, fault-tolerant, and incremental Oracle Database migrations for
online and offline use cases. You can choose this option if you want a fully managed
cloud service that handles the migration for you.
See OCI Database Migration for more information.
• Zero Downtime Migration (ZDM): Zero Downtime Migration (ZDM) is a tool with a
command line interface that you install and run on a host you provision. Zero Downtime
Migration provides online and offline migration options. You can use this option if you
want more control over the migration.
See Zero Downtime Migration for more information.
Note:
These methods also run the Cloud Premigration Advisor Tool (CPAT) to check
your source database for compatibility with the Autonomous Database and use
Oracle Data Pump and Oracle GoldenGate to carry out the migration for you.
Offline Migration
If you can take your source database offline for the migration, you can use one of the
following migration options in addition to OCI Database Migration and Zero Downtime
Migration (ZDM).
Note:
This is a manual method; you must orchestrate the export/import manually.
Note:
These methods require more manual steps, coding, and advanced
knowledge of each method. You must also migrate your source schema
metadata using your own DDL scripts or tools such as Oracle Data Pump
before initiating the data load using these methods. These options may be
preferable for small data sets for which you want to avoid installing additional
tools or using cloud migration services.
See Attach Network File System to Autonomous Database for attaching NFS directories
to your Autonomous Database.
See Load Data or Query Data from Files in a Directory for more information about loading
data from files on a filesystem.
• Materialized Views: Using database links from your Autonomous Database to your
source database, you can also create materialized views in your target database. As
changes happen on your source tables, you can refresh the materialized views to keep
them updated during the migration.
See Managing Read-Only Materialized Views for more information about using
materialized views to replicate data.
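For example, a minimal sketch, assuming a database link named SOURCE_DB_LINK already exists from your Autonomous Database to the source database and that the source has an ORDERS table (both names are hypothetical):

-- Create a read-only materialized view over the database link.
CREATE MATERIALIZED VIEW orders_mv
    REFRESH FORCE ON DEMAND
    AS SELECT * FROM orders@SOURCE_DB_LINK;

-- Refresh on demand during the migration to pick up source changes.
BEGIN
    DBMS_MVIEW.REFRESH('ORDERS_MV');
END;
/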
Note:
Deciding the migration tool or service depends on multiple factors such as your
source database, source data format, data volume, and complexity. To help you
identify the most optimal solution for migrating your data to the Autonomous
Database, Oracle provides an advisory utility called Oracle Cloud Migration Advisor.
See Migrate Oracle Databases to OCI for more information about this utility.
Reference

Previous Feature Announcements
December 2023

• Cross Tenancy Cloning: You can clone an Autonomous Database instance from one tenancy (the source tenancy) to a different tenancy (the destination tenancy). The cross tenancy cloning option is only available using the CLI or the Autonomous Database REST APIs; it is not available using the Oracle Cloud Infrastructure Console. See Cross Tenancy Cloning for more information.
• Invoke Azure Remote Functions: You can invoke Azure remote functions in your Autonomous Database as SQL functions. See Steps to Invoke Azure Function as SQL Functions for more information.
• Create Database Links to Oracle RAC Database: You can create database links from an Autonomous Database to a target Oracle RAC database and specify multiple hostnames. See Create Database Links from Autonomous Database to an Oracle Database on a Private Endpoint for more information.
• New Cloud Links Views: Obtain information on the Cloud Link specific privileges granted to all users or to the current user. See Monitor and View Cloud Links Information for more information.
• Documentation Addition: Load Data from AWS S3: Provides an example for loading data into an Autonomous Database instance from AWS S3. See Load Data into Autonomous Database from AWS S3 for more information.
November 2023

• Oracle APEX 23.2: Autonomous Database uses Oracle APEX Release 23.2. See Create Applications with Oracle APEX in Autonomous Database for more information.
• Select AI with Azure OpenAI Service: Autonomous Database can interact with AI service providers. Select AI now supports the Azure OpenAI Service, OpenAI, and CohereAI. One way LLMs can work with Oracle Database is by generating SQL from natural language prompts that allow you to talk to your database. See Use Select AI to Generate SQL from Natural Language Prompts for more information.
• Break Glass Access with SAAS_ADMIN User: Autonomous Database supports break glass access for SaaS providers. Break glass access allows a SaaS operations team, when explicitly authorized by a SaaS customer, to access a customer's database to perform critical or emergency operations. See Break Glass Access for SaaS on Autonomous Database for more information.
• Database Links Without a Wallet (TLS): You can create database links from one Autonomous Database instance to a publicly accessible Autonomous Database without a wallet (TLS). See Create Database Links from Autonomous Database to a Publicly Accessible Autonomous Database without a Wallet (TLS) for more information.
October 2023

• Vault Secret Credential Objects: You can use vault secret credentials to access cloud resources or to access other databases (use vault secret credentials anywhere username/password type credentials are required). Supported vaults are: Oracle Cloud Infrastructure Vault, Azure Key Vault, AWS Secrets Manager, and GCP Secret Manager. See Use Vault Secret Credentials for more information.
• Documentation Addition: Billing Information: Documentation includes summary and detailed Autonomous Database billing information. See How Is Autonomous Database Billed? for more information.
• Documentation Addition: Oracle Database Migration Information: Documentation includes information on migrating your Oracle Database to Autonomous Database. See Migrate Oracle Databases to Autonomous Database for more information.
• ServiceNow Support for Database Links with Oracle-Managed Heterogeneous Connectivity: Autonomous Database support for Oracle-managed heterogeneous connectivity makes it easy to create database links to ServiceNow. When you use database links with Oracle-managed heterogeneous connectivity, Autonomous Database configures and sets up the connection. See Create Database Links to Non-Oracle Databases with Oracle-Managed Heterogeneous Connectivity for more information.
• Invoke Oracle Cloud Infrastructure Functions or AWS Lambda Remote Functions: You can invoke Oracle Cloud Infrastructure Functions and AWS Lambda remote functions in your Autonomous Database as SQL functions. See Invoke User Defined Functions for more information.
September 2023

• Use Select AI to Generate SQL from Natural Language Prompts: Autonomous Database can interact with AI service providers, including OpenAI and CohereAI, to provide access to Generative AI capability from Oracle Database that uses Large Language Models (LLMs). One way LLMs can work with Oracle Database is by generating SQL from natural language prompts that allow you to talk to your database. See Use Select AI to Generate SQL from Natural Language Prompts for more information.
• Cloud Links Usability and Security: Cloud Links provide a cloud-based method to collaborate between Autonomous Databases. Cloud Links now allow you to set a data set owner. Cloud Links can be configured to require an additional authorization step, where a data set only allows specified databases to access the data set. When you register a data set, you can also specify refreshable clones to offload access to the data set. See Use Cloud Links for Read Only Data Access on Autonomous Database for more information.
• Elastic Pools: Use an elastic pool to consolidate your Autonomous Database instances, in terms of their allocation of compute resources, and to provide up to 87% cost savings. See Use and Manage Elastic Pools on Autonomous Database for more information.
• Autonomous Database Free Container Image: Use the Oracle Autonomous Database Free Container Image to run Autonomous Database in a container in your own environment, without requiring access to the Oracle Cloud Infrastructure Console or to the internet. See Use the Oracle Autonomous Database Free Container Image for more information.
• ECPU-based Databases with Transaction Processing Workload Type Support per-Gigabyte Increments of Storage: You may now provision, clone, and scale using Gigabyte (GB) increments of storage for ECPU-based Autonomous Databases provisioned with the Transaction Processing workload type. The minimum storage allowed for an Autonomous Database instance in the ECPU compute model with the Transaction Processing workload is 20 GB. See Provision Autonomous Database for more information.
• Google Analytics Support for Database Links with Oracle-Managed Heterogeneous Connectivity: Use Autonomous Database Oracle-managed heterogeneous connectivity to create database links to Google Analytics. When you use database links with Oracle-managed heterogeneous connectivity, Autonomous Database configures and sets up the connection. See Create Database Links to Non-Oracle Databases with Oracle-Managed Heterogeneous Connectivity for more information.
August 2023

• DBMS_SHARE Package: You can use the subprograms and views in the DBMS_SHARE package to share data with external systems. See DBMS_SHARE Package for more information.
• Logical Partition Change Tracking (LPCT): Logical Partition Change Tracking enables you to create logical partitions on base tables. It evaluates the staleness of the base tables for individual logical partitions without using a materialized view log or requiring any of the tables used in the materialized view to be partitioned. Logical Partition Change Tracking provides the capability to leverage the user-supplied logical partitioning information of the base tables of a materialized view for more fine-grained, partition-level tracking of stale data for both refresh and rewrite purposes. While classical Partition Change Tracking (PCT) relies on the physical partitioning of tables, LPCT has no dependency on tables being physically partitioned; you can use LPCT with both partitioned and nonpartitioned tables. See Logical Partition Change Tracking and Materialized Views for more information.
• Update OCPU to ECPU Compute Model: You can update an Autonomous Database instance from the OCPU billing model to the ECPU billing model. See Update to ECPU Billing Model on Autonomous Database for more information.
• Choose Backup Retention Period with ECPU Compute Model: With the ECPU billing model you have the option to select the backup retention period for automatic backups, with a retention period between 1 day and up to 60 days. See Edit Automatic Backup Retention Period on Autonomous Database for more information.
• AWS Glue Data Catalog Integration: Autonomous Database allows you to synchronize with Amazon Web Services (AWS) Glue Data Catalog metadata. You can query data stored in S3 from Autonomous Database without having to manually derive the schema for the external data sources and create external tables. See Query External Data with AWS Glue Data Catalog for more information.
July 2023

• Track Table Changes with Flashback Time Travel: Use Flashback Time Travel to view past states of database objects or to return database objects to a previous state without using point-in-time media recovery. See Tracking Table Changes with Flashback Time Travel for more information.
• Database Actions Quick Links: Database Actions on the Oracle Cloud Infrastructure Console provides quick links to select Database Actions cards or to select the Database Actions Launchpad. See Access Database Actions as ADMIN for more information.
• Object Storage and REST Endpoint Management: DBMS_CLOUD supports a number of preconfigured Object Store and REST endpoints. DBMS_CLOUD also allows you to access additional customer-managed endpoints. See DBMS_CLOUD Endpoint Management for more information.
• detectfieldorder DBMS_CLOUD Format Option: The detectfieldorder format option specifies that the fields in external data files are in a different order than the columns in the table. Specify this option with DBMS_CLOUD.COPY_DATA, DBMS_CLOUD.CREATE_EXTERNAL_TABLE, DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE, or DBMS_CLOUD.CREATE_HYBRID_PART_TABLE to detect the order of fields using the first row of each external data file and map it to the columns of the table. See DBMS_CLOUD Package Format Options for more information.
• Vault Secret Credential Objects: You can create vault secret credentials with secrets stored in Oracle Cloud Infrastructure Vault. You can use vault secret credentials to access cloud resources or to access other databases (use vault secret credentials anywhere username/password type credentials are required). See Use Vault Secret Credentials for more information.
• Export to Directory with DBMS_CLOUD.EXPORT_DATA: Export files as text or as Oracle Data Pump dump files to a directory. See Export Data to a Directory for more information.
June 2023

• Oracle Cloud Infrastructure Signing Key Based Authentication for Oracle Data Pump Import: Oracle Data Pump import using impdp supports credentials created using Oracle Cloud Infrastructure key based attributes. See Import Data Using Oracle Data Pump on Autonomous Database for more information.
• Configurable Automatic Failover Data Loss Limit for a Local Standby Database: For Autonomous Data Guard you can specify an automatic failover data loss limit between 0 and 3600 seconds. Autonomous Data Guard performs automatic failover to a local standby database when a local standby database is available and the system can guarantee either the default zero data loss RPO or up to the data loss limit you specify. See Automatic Failover with a Standby Database for more information.
• Oracle Workspace Manager: Oracle Workspace Manager provides an infrastructure that enables applications to create workspaces and group different versions of table row values in different workspaces. Use Oracle Workspace Manager to version-enable one or more user tables in the database. See Using Oracle Workspace Manager on Autonomous Database for more information.
• Automatic Failover Begin and Automatic Failover End Events: The AutonomousDatabase-AutomaticFailoverBegin and AutonomousDatabase-AutomaticFailoverEnd events are generated when an automatic failover begins and ends. These events are only triggered if you are using Autonomous Data Guard. See About Critical Events on Autonomous Database for more information.
• Operator Access Event: The OperatorAccess event is generated when operator access is detected for the database. The Autonomous Database operations team never accesses your data unless you explicitly grant permission through a Service Request for a specified duration. See Information Events on Autonomous Database for more information.
• Built-in Tool Availability Service Level Objective (SLO): Oracle will use commercially reasonable efforts to have the listed built-in tools meet the availability objectives as defined in the Service Level Objective (SLO) documentation. See Availability Service Level Objectives (SLOs) for more information.
• CMU-AD with Active Directory Servers on a Private Endpoint: There are now two options for configuring Autonomous Database with Centrally Managed Users (CMU) with Microsoft Active Directory:
– Active Directory (AD) servers publicly accessible: the Active Directory servers are accessible from Autonomous Database through the public internet.
– Active Directory (AD) servers reside on a private endpoint: the Active Directory servers reside on a private endpoint and are not accessible from Autonomous Database through the public internet.
See Use Microsoft Active Directory with Autonomous Database for more information.
May 2023

• Oracle APEX 23.1: Autonomous Database uses Oracle APEX Release 23.1. See Creating Applications with Oracle APEX in Autonomous Database for more information.
• Google BigQuery Support for Database Links with Oracle-Managed Heterogeneous Connectivity: Autonomous Database supports Oracle-managed heterogeneous connectivity to create database links to Google BigQuery. When you use database links with Oracle-managed heterogeneous connectivity, Autonomous Database configures and sets up the connection. See Create Database Links to Non-Oracle Databases with Oracle-Managed Heterogeneous Connectivity for more information.
• Log File Options for DBMS_CLOUD: DBMS_CLOUD provides format parameter options for logging customization: enablelogs, logprefix, logdir, and logretention. See DBMS_CLOUD Package Format Options for more information.
• RESULT_CACHE_MODE Parameter is Modifiable at Session and System Level: Set the RESULT_CACHE_MODE parameter to specify which queries are eligible to store result sets in the result cache. Only query execution plans with the result cache operator will attempt to read from or write to the result cache. See RESULT_CACHE_MODE for more information.
• Stripe Financial Views: Stripe is an online payment processing and credit card processing platform for businesses. The Stripe views allow you to query views created on top of Stripe APIs with DBMS_CLOUD to get Stripe information such as products, invoices, plans, accounts, subscriptions, and customers. See Stripe Views on Autonomous Database for more information.
• Oracle GoldenGate Parallel Replicat (Integrated Mode): Integrated Parallel Replicat (iPR) supports GoldenGate replication apply for procedural replication, Automatic CDR, and DML handlers. See Deciding Which Apply Method to Use and About Parallel Replicat for more information.
• Persistent Messaging in DBMS_PIPE: The DBMS_PIPE package has extended functionality on Autonomous Database to support persistent messaging, where messages are stored in Cloud Object Store. See Use Persistent Messaging with Messages Stored in Cloud Object Store for more information.
• Access Oracle Cloud Infrastructure Logging Data with Autonomous Database Views: Using the Oracle Cloud Infrastructure Logging Interface you can access log data from an Autonomous Database instance in relational format. You can query log data in Oracle Cloud Infrastructure across all compartments and regions. See Oracle Cloud Infrastructure Logging Interface Views for more information.
April 2023

• Oracle Spatial Features for Geocoding Address Data and Reverse Geocoding Location Data: Oracle Spatial on Autonomous Database includes features for geocoding address data and for reverse geocoding longitude/latitude data to a street address. See Using Oracle Spatial with Autonomous Database for more information.
• Kerberos Authentication for CMU-AD: You can configure Autonomous Database to use Kerberos authentication for CMU with Microsoft Active Directory users. This configuration allows CMU Active Directory (CMU-AD) users to access an Autonomous Database instance using Kerberos credentials.
• User Defined Notification Handler for Scheduler Jobs: Database Scheduler provides an email notification mechanism to track the status of periodically running or automated jobs. In addition, the Database Scheduler also supports a user-defined PL/SQL Scheduler job notification handler procedure. Adding a scheduler job notification handler procedure allows you to monitor scheduled or automated jobs running in your Autonomous Database. See User Defined Notification Handler for Scheduler Jobs for more information.
• Database Links with Customer-Managed Heterogeneous Connectivity to Non-Oracle Databases on a Private Endpoint: You can create database links from an Autonomous Database to a customer-managed Oracle Database Gateway to access non-Oracle databases that are on a private endpoint. See Create Database Links with Customer-Managed Heterogeneous Connectivity to Non-Oracle Databases on a Private Endpoint for more information.
• Oracle Machine Learning Notebooks Early Adopter: Oracle Machine Learning Notebooks Early Adopter is an enhanced web-based notebook platform for data engineers, data analysts, R and Python users, and data scientists. You can write code and text, create visualizations, and perform data analytics including machine learning. Notebooks work with interpreters in the back end. In Oracle Machine Learning, notebooks are available in a project within a workspace, where you can create, edit, delete, copy, move, and even save notebooks as templates. See Get Started with Notebooks Early Adopter for Data Analysis and Data Visualization for more information.
• Salesforce Support for Database Links with Oracle-Managed Heterogeneous Connectivity: Autonomous Database supports Oracle-managed heterogeneous connectivity to create database links to Salesforce databases. When you use database links with Oracle-managed heterogeneous connectivity, Autonomous Database configures and sets up the connection. See Create Database Links to Non-Oracle Databases with Oracle-Managed Heterogeneous Connectivity for more information.
• Snapshot Standbys for Cross-Region Disaster Recovery: A disaster recovery cross-region peer can be converted to a snapshot standby. This converts either an Autonomous Data Guard cross-region standby database or a cross-region Backup-Based Disaster Recovery peer to a read-write database for up to two days. See Convert Cross Region Peer to Snapshot Standby for more information.
• Singleton Pipe in DBMS_PIPE: Singleton Pipe is an addition to the DBMS_PIPE package that allows you to cache a custom message. Using a Singleton Pipe you can send and retrieve a custom message and share the message across multiple database sessions with concurrent reads. See Caching Messages with Singleton Pipes for more information.
• Migrate Databases Across Regions with Minimal Downtime Using Autonomous Data Guard: You may now terminate, as well as perform all other Primary database actions on, a remote Autonomous Data Guard standby when it becomes the Primary database after a switchover or failover. This enables you to use a remote peer for migrating your database to the remote region. See Using Standby Databases with Autonomous Data Guard for Disaster Recovery for more information.
• Send Microsoft Teams Notifications: You can send messages, alerts, or the output of a query from Autonomous Database to a Microsoft Teams channel. See Send Microsoft Teams Notifications from Autonomous Database for more information.
• Send Email Using DBMS_CLOUD_NOTIFICATION: You can send email to a public SMTP endpoint using the Oracle Cloud Infrastructure Email Delivery service. See Send Email from Autonomous Database Using DBMS_CLOUD_NOTIFICATION for more information.
March 2023

• Cloud Links: Cloud Links provide a cloud-based method to collaborate between Autonomous Databases. Using Cloud Links you can access read-only data based on the cloud identities of Autonomous Database instances. The scope of collaboration can be the region where an Autonomous Database resides, individual tenancies, compartments, or particular Autonomous Database instances. See Using Cloud Links for Read Only Data Access on Autonomous Database for more information.
• Backup-Based Disaster Recovery: Backup-Based Disaster Recovery uses backups to instantiate a peer database at the time of switchover or failover. For local Backup-Based Disaster Recovery, existing local backups are utilized and there are no additional costs. Cross-Region Backup-Based Disaster Recovery is also available and incurs additional costs. See Using Backup-Based Disaster Recovery for more information.
• Documentation Addition: Oracle Machine Learning with Autonomous Database: Provides information on options for using Oracle Machine Learning with Autonomous Database. See Machine Learning with Autonomous Database for more information.
• Export Data to Cloud Object Storage as Parquet: Use DBMS_CLOUD.EXPORT_DATA to export data as text. The text format export options are CSV, JSON, Parquet, or XML. See Export Data to Object Store as Text (CSV, JSON, Parquet, or XML) for more information.
• Oracle Real Application Testing Capture and Replay: Use Capture and Replay from one Autonomous Database instance to another Autonomous Database instance. This enables you to compare workloads across different Autonomous Database instances. These Autonomous Database instances may vary in patch levels, database versions, or regions. See Using Oracle Real Application Testing for more information.
• Availability Domain Information: You can obtain tenancy details, including the Availability Domain (AD), for an Autonomous Database instance. See Obtain Tenancy Details for more information.
• Provision an Autonomous Database Instance and Save as Stack: The provisioning option Save as Stack allows you to create and deploy an Autonomous Database instance using Resource Manager. See Provision Autonomous Database and Overview of Resource Manager for more information.
• Oracle Client for Microsoft Tools (OCMT): Oracle Client for Microsoft Tools (OCMT) is a graphical user interface (GUI) native Microsoft Software Installer (MSI) that simplifies ODP.NET setup and provides Autonomous Database connectivity to multiple Microsoft data tools. See Connect Power BI and Microsoft Data Tools to Autonomous Database for more information.
• Long-Term Backups: You can create long-term backups on Autonomous Database with a retention period between three months and up to ten years. When you create a long-term backup you can create a one-time backup or set a schedule to automatically create backups that are taken weekly, monthly, quarterly, or annually (yearly). See About Backup and Recovery on Autonomous Database and Create Long-Term Backups on Autonomous Database for more information.
• Use Directories to Load Data with DBMS_CLOUD Procedures: As an alternative to an object store location URI, you can specify a directory with DBMS_CLOUD procedures to load or unload data from files in a directory, including directories created on attached network file systems. See Load Data from Directories in Autonomous Database for more information.
January 2023

• ECPU Compute Model: Autonomous Database offers two compute models when you create or clone an instance: ECPU and OCPU. See Compute Models in Autonomous Database for more information.
• Configure Built-in Tools: Autonomous Database includes built-in tools that you can enable and disable when you provision or clone a database, or at any time for an existing database. See Managing Autonomous Database Built-in Tools for more information.
• Configure a Custom Private IP Address for a Private Endpoint: You can configure a private endpoint when you provision or clone an Autonomous Database instance, or if your instance is configured to use a public endpoint you can change the configuration to use a private endpoint. Optionally, when you configure a private endpoint you can enter a custom private IP address. Providing a custom IP address allows you to preserve an existing private IP address when you migrate from another system. See Configure Private Endpoints for more information.
• Cross-Region Clone from Backup: Autonomous Database provides the option to create a clone in another region when you are cloning a database and you select Clone from a backup. In this case, you choose your preferred region to select the region where you want to create the clone. See Clone an Autonomous Database from a Backup for more information.
• Send Slack Notifications: You can send Slack notifications from Autonomous Database. This allows you to send messages to a Slack channel or send the results of a query to a Slack channel. See Send Slack Notifications from Autonomous Database for more information.
February 2023

• Maximum Storage Space Metric: The StorageMax metric shows the maximum amount of storage reserved for the database. See Autonomous Database Metrics and Dimensions for more information.
• Oracle Cloud Infrastructure Vault Secret for ADMIN Password: When you create or clone an Autonomous Database instance, or when you reset the ADMIN password, you can use an Oracle Cloud Infrastructure vault secret to specify the ADMIN password. See Use Oracle Cloud Infrastructure Vault Secret for ADMIN Password for more information.
• Inactive Connections Detected Event: The InactiveConnectionsDetected event is generated when the number of inactive database connections detected is above a certain ratio, compared to all database connections for the Autonomous Database instance. Subscribing to this event can help you keep track of unused connections. See About Information Events on Autonomous Database for more information.
• Enable Automatic Time Zone File Updates: Autonomous Database lets you choose to enable the feature that automatically updates time zone files on your Autonomous Database instance. See Manage Time Zone File Version on Autonomous Database for more information.
• Database Relocate and Failed Logon Events: The FailedLoginWarning event is generated when the number of failed login attempts reaches 3 times the number of total users in the last three (3) hours. The InstanceRelocateBegin and InstanceRelocateEnd events are triggered when an Autonomous Database instance is relocated to another Exadata infrastructure due to server maintenance, hardware refresh, a hardware issue, or as part of a resource scale up. See About Critical Events on Autonomous Database for more information.
December 2022

• Data Pipelines for Loading and Exporting: Data pipelines allow you to repeatedly load data from object store, or export data to object store. Load pipelines provide continuous incremental data loading from external sources (as data arrives on object store it is loaded to a database table). Export pipelines provide continuous incremental data exporting to object store (as new data appears in a database table it is exported to object store). See Using Data Pipelines for Continuous Load and Export for more information.
• Credential Objects for Use with UTL_HTTP, UTL_SMTP, and DBMS_LDAP: You can use credential objects to set authentication for use with UTL_HTTP, UTL_SMTP, or DBMS_LDAP. See Use Credential Objects to Set SMTP Authentication, Use Credential Objects to Set HTTP Authentication, and PL/SQL Packages for more information.
• Medium and High Service Limits with Auto Scaling: The number of concurrent statements for the high and medium services increases by three times when OCPU auto scaling is enabled. See Service Concurrency for more information.
• Multiple Data Catalog Support: Instead of each Autonomous Database instance supporting a single connection to a Data Catalog instance, this feature allows connections to multiple Data Catalog instances. See About Querying with Data Catalog for more information.
November 2022
Feature Description
Cloud Code Repository Additions and improvements for Cloud Code Repository in the
Branch Management and DBMS_CLOUD_REPO package for (Git) repository branch
Schema Export and Install management and schema export and install.
See Using and Managing a Cloud Code Repository with
Autonomous Database for more information.
Log File Prefix and DBMS_CLOUD.COPY_DATA supports the format parameter options:
Retention Format Options logretention and logprefix.
for COPY_DATA
See DBMS_CLOUD Package Format Options for more
information.
Cross-Region Autonomous The wallet.zip and the connection strings provided for a
Data Guard: Remove database with cross-region Autonomous Data Guard enabled no
Second Hostname from longer contain both the primary and standby database hostnames
Database Connection in a single connection string or wallet.
Strings To avoid connection retry delays and optimize your disaster
recovery setup, it is recommended to use the Primary database
wallet or connection string to connect to the Primary database,
and the remote database wallet or connection string to connect to
the remote database (when the remote database is Available to
connect to, after a switchover or failover).
See Cross-Region Autonomous Data Guard Connection Strings
and Wallets for more information.
Data Transforms in Data Transforms is a built-in data integration tool and is accessible
Database Actions from Database Actions. Data Transforms is a simple-to-use, drag-
and-drop, no-code tool to load and transform data from
heterogeneous sources to Autonomous Database. Use Data
Transforms for all data integration needs such as building data
warehouses and feeding analytics applications.
See The Data Transforms Page for more information.
Access Network File You can attach a Network File System to a directory location in
System (NFS) Directories your Autonomous Database. This allows you to load data from
Oracle Cloud Infrastructure File Storage in your Virtual Cloud
Network (VCN) or from any other Network File System in on-
premises data centers.
See Access Network File System from Autonomous Database for
more information.
6-14
Chapter 6
Previous Feature Announcements
Feature Description
Customer Managed Keys with Cross-Region Autonomous Data Guard: Autonomous Database fully supports using customer-managed keys with a remote standby database with Autonomous Data Guard. See Autonomous Data Guard with Customer Managed Keys and Notes for Customer-Managed Keys with Autonomous Data Guard for more information.
Cross-Region Refreshable Clones: Create one or more clones in regions other than the region of your primary (source) database. Clones in remote regions can be refreshed from the source database. See Using Refreshable Clones with Autonomous Database for more information.
Use Text Index to Query Text on Object Storage: You can build an Oracle Text index on object store files. This allows you to search the object store for text and use wildcards with your search. See Query Text in Object Storage for more information.
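As a sketch, an index over a bucket of text files might be created as follows; the credential, bucket URI, and index name are hypothetical, and the refresh_rate format option is assumed to control how often new objects are picked up:

BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX(
    credential_name => 'OBJ_STORE_CRED',
    location_uri    => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/docs/o/',
    index_name      => 'DOCS',
    format          => '{"refresh_rate" : 10}');  -- minutes between index refreshes (assumed)
END;
/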
Documentation Addition: Data Lake Capabilities with Autonomous Database: A new section provides information on using Autonomous Database as a data lakehouse. See Using Data Lake Capabilities with Autonomous Database for more information.
Bulk Operations for Files in the Cloud: The PL/SQL package DBMS_CLOUD offers parallel execution support for bulk file upload, download, copy, and transfer activities, which streamlines the user experience and delivers optimal performance for bulk file operations. See Bulk Operations for Files in the Cloud for more information.
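For example, a minimal sketch that downloads every CSV object in a bucket in parallel, assuming the DBMS_CLOUD.BULK_DOWNLOAD parameters shown here and hypothetical credential, bucket, and directory names:

BEGIN
  DBMS_CLOUD.BULK_DOWNLOAD(
    credential_name => 'OBJ_STORE_CRED',
    location_uri    => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o',
    directory_name  => 'DATA_PUMP_DIR',
    regex_filter    => '.*\.csv');  -- only objects whose names end in .csv
END;
/

Companion procedures such as DBMS_CLOUD.BULK_UPLOAD, BULK_COPY, BULK_MOVE, and BULK_DELETE follow the same pattern.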
October 2022
Clone from Backup Select Latest Available Timestamp Option: When you clone an Autonomous Database instance, you can select clone from latest backup. This option selects the latest available backup timestamp as the clone source. You can select this option if a database becomes unavailable and you want to create a clone, or whenever you want to create a clone based on the most recent backup. See Clone Autonomous Database from a Backup for more information.
Use Customer-Managed Encryption Keys Located in a Different Tenancy: You can use customer-managed master encryption keys with the Vault and keys in a different tenancy. The Vault and the Autonomous Database instance can be in different tenancies, but they must be in the same region. See Use Customer-Managed Encryption Key Located in a Remote Tenancy for more information.
Use Google Service Account to Access GCP Resources: You can use a Google service account to access Google Cloud Platform (GCP) resources from an Autonomous Database instance. See Use Google Service Account to Access Google Cloud Platform Resources for more information.
SYSDATE_AT_DBTIMEZONE Parameter is Modifiable at Session and System Level: Depending on the value of SYSDATE_AT_DBTIMEZONE, you see the date and time based either on the default Autonomous Database time zone, Coordinated Universal Time (UTC), or on the time zone that you set in your database. See SYSDATE_AT_DBTIMEZONE for more information.
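For example, to have SYSDATE and SYSTIMESTAMP reflect the database time zone for the current session:

ALTER SESSION SET SYSDATE_AT_DBTIMEZONE = TRUE;
SELECT SYSDATE, SYSTIMESTAMP FROM DUAL;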
View Oracle Cloud Operations Actions: The DBA_OPERATOR_ACCESS view stores information on the actions that Oracle Cloud Infrastructure cloud operations performs on your Autonomous Database. This view does not show results if Oracle Cloud Infrastructure cloud operations has not performed any actions or run any statements in your Autonomous Database instance. See View Oracle Cloud Infrastructure Operations Actions for more information.
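To review any operator activity, query the view directly:

SELECT * FROM DBA_OPERATOR_ACCESS;  -- returns no rows until cloud operations acts on the instance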
September 2022
Expanded List of Databases with Oracle-Managed Heterogeneous Connectivity: Autonomous Database support for Oracle-managed heterogeneous connectivity makes it easy to create database links to non-Oracle databases. When you use database links with Oracle-managed heterogeneous connectivity, Autonomous Database configures and sets up the connection to the non-Oracle database. The list of supported non-Oracle databases is expanded to include Hive and MongoDB. See Create Database Links to Non-Oracle Databases with Oracle-Managed Heterogeneous Connectivity for more information.
Oracle APEX Upgrade Events: The APEXUpgradeBegin and APEXUpgradeEnd events are generated when you are using Oracle APEX and your Autonomous Database instance begins an upgrade or completes an upgrade to a new Oracle APEX release. See About Events Based Notification and Automation on Autonomous Database for more information.
Oracle MTS (OraMTS) Recovery Service: Use the Oracle MTS (OraMTS) Recovery Service to resolve in-doubt Microsoft Transaction Server transactions on Autonomous Database. See Using OraMTS Recovery Feature on Autonomous Database for more information.
August 2022
Documentation Addition: Updated Python Connection Information: You can connect Python applications to your Autonomous Database instance either with a wallet (mTLS) or without a wallet (TLS), using the python-oracledb driver. See Connect Python Applications to Autonomous Database for more information.
Autonomous Data Guard SLA: Autonomous Database provides a higher 99.995% availability SLA service commitment with Autonomous Data Guard enabled with a standby database. The database availability SLA is calculated based on the "Monthly Uptime Percentage" described in the Oracle PaaS and IaaS Public Cloud Services Pillar Document, available under Delivery Policies.
Oracle Machine Learning for R (OML4R): Oracle Machine Learning for R (OML4R) is a component of the Oracle Machine Learning family of products, which integrates R with Autonomous Database. Use Oracle Machine Learning for R to:
• Perform data exploration and data preparation while seamlessly leveraging Oracle Database as a high-performance computing environment.
• Run user-defined R functions on database-spawned and controlled R engines, with system-supported data-parallel and task-parallel capabilities.
• Access and use powerful in-database machine learning algorithms from the R language.
See About Oracle Machine Learning for R for more information.
Synchronization with Data Catalog Enhancements: Use the DBMS_DCAT$SYNC_LOG view to easily access the log for the latest Data Catalog synchronization execution. See DBMS_DCAT$SYNC_LOG View for more information.
Regional Availability Metrics: You can view regional availability metrics by data center for Oracle Autonomous Database. See Monitor Regional Availability of Autonomous Databases for more information.
July 2022
Use Oracle Java: Autonomous Database supports Oracle JVM. Oracle JVM is a standard, Java-compatible environment that runs any pure Java application. See Using Oracle Java on Autonomous Database for more information.
Oracle APEX 22.1: Autonomous Database uses Oracle APEX Release 22.1. See Creating Applications with Oracle APEX in Autonomous Database for more information.
Wallet Rotation with a Grace Period: Autonomous Database allows you to rotate wallets for an Autonomous Database instance, or for all instances that a cloud account owns in a region, with a grace period of 1 hour to 24 hours. See Rotate Wallets for Autonomous Database for more information.
Change Display Name: You can change the display name for an Autonomous Database instance. See Update the Display Name for an Autonomous Database Instance for more information.
Default Database Name: The default database name and the matching default display name supplied when you provision or clone an instance is a generated 16-character string. See Provision Autonomous Database and Clone an Autonomous Database Instance for more information.
Oracle Real Application Testing: Database Replay: You can use Oracle Real Application Testing Database Replay to capture workload from an on-premises or other cloud service database and replay it on an Autonomous Database instance. This enables you to compare workloads between an on-premises database or other cloud service database and an Autonomous Database. See Using Oracle Real Application Testing - Database Replay for more information.
Export Data with Column Names in Header Row: You can optionally write column names as the first line in output files when you use the EXPORT_DATA procedure with CSV output. See EXPORT_DATA Procedure and DBMS_CLOUD Package Format Options for EXPORT_DATA with Text Files (CSV, JSON, and XML) for more information.
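For example, a minimal sketch that writes a header row; the credential, bucket, and query are hypothetical:

BEGIN
  DBMS_CLOUD.EXPORT_DATA(
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/sales_export.csv',
    query           => 'SELECT * FROM sales',
    format          => '{"type" : "csv", "header" : true}');  -- first line holds the column names
END;
/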
June 2022
Character Set Support: The Autonomous Database default database character set is Unicode AL32UTF8 and the default national character set is AL16UTF16. When you provision a database, depending on the workload type, you can select a database character set and a national character set. See Choose a Character Set for Autonomous Database for more information.
Support for Oracle LogMiner: Autonomous Database supports using Oracle LogMiner. See Oracle LogMiner for more information.
Synchronization with Data Catalog Enhancements: Synchronize Data Catalog partitioned metadata with Autonomous Database to create partitioned external tables. See the DBMS_DCAT.RUN_SYNC procedure.
Database Actions Changes: Database Actions includes new features. See Changes in Oracle Database Actions for more information.
Network Security Group (NSG) Configuration with Private Endpoint: For a private endpoint, the incoming and outgoing connections are limited by the combination of ingress and egress rules defined in NSGs and in the Security Lists associated with the private endpoint's VCN. Adding an NSG is now optional; when you do not include an NSG, the ingress and egress rules defined in the Security Lists for the VCN still apply. See Configuring Network Access with Private Endpoints for more information.
Simplified Configuration Steps with CMU for Microsoft Active Directory: You can configure Autonomous Database to authenticate and authorize Microsoft Active Directory users. The configuration steps for enabling Autonomous Database with Centrally Managed Users (CMU) are simplified. See Use Microsoft Active Directory with Autonomous Database for more information.
May 2022
Azure Active Directory (Azure AD) Integration: Azure Active Directory (Azure AD) users can connect to an Autonomous Database instance using Azure OAuth2 access tokens. See Use Azure Active Directory (Azure AD) with Autonomous Database for more information.
OCPU Limit for License Type Bring Your Own License (BYOL): The license type Bring Your Own License (BYOL), Oracle Database Standard Edition, sets the maximum number of OCPUs that you can use to 8. See View and Update Your License and Oracle Database Edition on Autonomous Database for more information.
Kerberos Authentication Support: You can use Kerberos to authenticate Autonomous Database users. See Configure Kerberos Authentication for more information.
Autonomous Data Guard Standby State: Autonomous Data Guard shows a standby database in "Standby" state. See Autonomous Database Standby Database State for more information.
Database Actions Additions: Database Actions provides all the functionality that is available on the Autonomous Database Service Console. The Service Console will be deprecated soon. See Service Console Replacement with Database Actions for more information on where to find Service Console functionality in Database Actions.
Autonomous Data Guard Support for Terraform: Autonomous Data Guard supports operation from Terraform scripts. See the following for more information:
• Use the API
• oci_database_autonomous_database
• Oracle Cloud Infrastructure Provider
• Terraform Registry Data Source: oci_database_autonomous_database
Use Azure Service Principal: You can use an Azure service principal with Autonomous Database to access Azure resources without having to create and store your own credential objects in the database. See Use Azure Service Principal to Access Azure Resources for more information.
Monitor Autonomous Database Availability: You can monitor availability information for your Autonomous Database instance from the Oracle Cloud Infrastructure Console or by viewing metrics. See Monitor Autonomous Database Availability and View Metrics for an Autonomous Database Instance for more information.
Database Names Length Limit: The length limit for database names when you create or clone your database has been increased from 14 to 30 characters. This change provides more flexibility, giving you the option to create longer database names. See Provision Autonomous Database or Clone an Autonomous Database Instance for more information.
Egress Rules for all Outbound Connections with Private Endpoints: When you define a private endpoint for your Autonomous Database instance, you can provide enhanced security by setting a database property to enforce that all outgoing connections to a target host are subject to and limited by the private endpoint's egress rules. See Enhanced Security for Outbound Connections with Private Endpoints for more information.
April 2022
Display Autonomous Database Availability: You can monitor Autonomous Database availability from the Oracle Cloud Infrastructure Console or using metrics. See Monitor Autonomous Database Availability for more information.
OML4Py SQL API for Embedded Python Execution: OML4Py embedded Python execution on Autonomous Database supports a SQL API in addition to the REST API. Embedded execution allows you to run user-defined Python functions in database-spawned and managed Python engines, along with data-parallel and task-parallel automation. See SQL API for Embedded Python Execution with On-premises Database for more information.
Create Database Links from Autonomous Database to Oracle Databases on a Private Endpoint without a Wallet: You can create database links from an Autonomous Database to a target Oracle Database that is on a private endpoint and connect without a wallet (TCP). See Create Database Links from Autonomous Database to Oracle Databases on a Private Endpoint without a Wallet for more information.
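A minimal sketch of a walletless (TCP) link, assuming the DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK parameters shown here; all names are hypothetical:

BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name       => 'PRIVATE_DB_LINK',
    hostname           => 'exampledb.example.com',  -- target on the private endpoint
    port               => 1521,
    service_name       => 'example_service',
    ssl_server_cert_dn => NULL,                     -- no wallet: plain TCP connection
    credential_name    => 'DB_LINK_CRED',           -- credential for the target database user
    directory_name     => NULL,                     -- NULL because no wallet is used
    private_target     => TRUE);                    -- resolve the hostname on the private network
END;
/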
Availability Service Level Objectives (SLOs): Describes the Service Level Objectives (SLOs) for Oracle Autonomous Database. See Availability Service Level Objectives (SLOs) for more information.
Synchronization with Data Catalog Enhancements: The DBMS_DCAT.RUN_SYNC procedure has new parameters to give you more control over which objects are synchronized and how the synchronization is performed. See CREATE_SYNC_JOB Procedure and RUN_SYNC Procedure for more information.
Data Catalog Views: Discover all accessible data catalogs in the current region or across all regions. See ALL_DCAT_GLOBAL_ACCESSIBLE_CATALOGS View and ALL_DCAT_LOCAL_ACCESSIBLE_CATALOGS View for more information.
March 2022
Oracle APEX 21.2: Autonomous Database uses Oracle APEX Release 21.2. See Creating Applications with Oracle APEX in Autonomous Database for more information.
OML4Py Client Access: A new client package for OML4Py provides a "universal" client: from a standalone Python engine on Linux, users can connect to Autonomous Database as well as Oracle Database 19c and newer releases. This supports working with OML4Py on Autonomous Database, local third-party IDEs, and other notebook environments, like JupyterLab. It also allows switching between on-premises and Autonomous Database instances using the same client package. See Install OML4Py Client for Linux for Use With Autonomous Database for more information.
Oracle APEX Upgrade Available Event: The APEXUpgradeAvailable event is generated when you are using Oracle APEX and a new release becomes available. See About Events Based Notification and Automation on Autonomous Database for more information.
Database Links with Oracle-Managed Heterogeneous Connectivity: Autonomous Database support for Oracle-managed heterogeneous connectivity makes it easy to create database links to non-Oracle databases. When you use database links with Oracle-managed heterogeneous connectivity, Autonomous Database configures and sets up the connection to the non-Oracle database. See Create Database Links to Non-Oracle Databases with Oracle-Managed Heterogeneous Connectivity for more information.
Metadata Columns in External Tables: To identify which file each row comes from in your external tables, you can query the columns named file$name and file$path to find the source file name and the Object Store path URL for that file. See External Table Metadata Columns for more information.
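For example, to see how many rows each source object contributed (ext_sales is a hypothetical external table):

SELECT file$name, file$path, COUNT(*) AS rows_from_file
FROM   ext_sales
GROUP  BY file$name, file$path;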
External Tables with Partitioning Specified in Source Files: When you use external file partitioning with the partitions specified in external source files, Autonomous Database analyzes Cloud Object Store path information to determine the partition columns and data types. See Query External Tables with Partitioning Specified in Source Files for more information.
Identity and Access Management (IAM) Authentication Additional Features: You can configure Autonomous Database to use Oracle Cloud Infrastructure Identity and Access Management (IAM) authentication and authorization to allow IAM users to access an Autonomous Database with IAM credentials. New additions for IAM support include the following: defining global user mapping, defining global roles mapping, support for resource principal usage with IAM, and proxy user support. See Use Identity and Access Management (IAM) Authentication with Autonomous Database for more information.
Storage Auto Scaling: With storage auto scaling enabled, the Autonomous Database can expand to use up to three times the reserved base storage. If you need additional storage, the database automatically uses the reserved storage without any manual intervention required. Storage auto scaling is disabled by default. See Storage Auto Scaling for more information.
Documentation Addition: Get Help, Search Forums, and Contact Support: When you use Autonomous Database, sometimes you need to get help from the community or to talk to someone in Oracle support. This documentation addition provides information about getting help by viewing and posting questions on forums and using Oracle Cloud Support to create a support request. See Get Help, Search Forums, and Contact Support for more information.
February 2022
Track Oracle Cloud Infrastructure Resources, Cost and Usage: Autonomous Database provides details on Oracle Cloud Infrastructure resources, cost, and usage. See Track Oracle Cloud Infrastructure Resources, Cost and Usage Reports with Autonomous Database Views for more information.
Critical Events with Customer-Managed Keys: Autonomous Database events are triggered when an instance is using customer-managed keys and the database becomes inaccessible, or when the database recovers from being inaccessible. See About Critical Events on Autonomous Database for more information.
Time Zone Handling in Calls to SYSDATE and SYSTIMESTAMP: The initialization parameter SYSDATE_AT_DBTIMEZONE enables special handling in a session for the date and time value returned in calls to SYSDATE and SYSTIMESTAMP. Depending on the value of SYSDATE_AT_DBTIMEZONE, you see the date and time based either on the default Autonomous Database time zone, Coordinated Universal Time (UTC), or on the time zone that you set in your database. See SYSDATE_AT_DBTIMEZONE for more information.
January 2022
Use a Cloud Code Repository: Autonomous Database provides routines to manage and store files in Cloud Code (Git) repositories. The supported Cloud Code repositories are: GitHub Repository, AWS CodeCommit, and Azure Repos. See Using and Managing a Cloud Code Repository with Autonomous Database for more information.
Oracle Machine Learning Notebooks Repository: Oracle Machine Learning Notebooks repositories are stored in a schema on the Autonomous Database instance. This enables database instance-specific backup and restore of Oracle Machine Learning Notebooks, as well as support for cross-region Autonomous Data Guard. See What's New in Oracle Machine Learning on Autonomous Database for more information.
Oracle Machine Learning Notebooks Import and Export of Jupyter Format Notebooks: Oracle Machine Learning Notebooks supports importing and exporting Jupyter format notebooks. This makes it easier for users to interoperate with other notebook environments. See Export a Notebook and Import a Notebook for more information.
Add My IP Address Button: When you provision an Autonomous Database instance and configure network access with Access Control Lists (ACLs), or when you update network access, click Add My IP Address to add your current IP address to the ACL entry. See Configuring Network Access with Access Control Rules (ACLs) for more information.
Reconnect a Disconnected Refreshable Clone: You can reconnect a refreshable clone to the source database. This allows you to use a refreshable clone as a test database and run DML and make changes while the database is disconnected. When you are done with your testing, you can reconnect to the source database, which refreshes the clone to the point where it was when you disconnected. See Reconnect a Refreshable Clone to the Source Database for more information.
GitHub Raw URL Support in DBMS_CLOUD: Use GitHub Raw URLs with DBMS_CLOUD APIs to access source files that reside on a GitHub Repository. See GitHub Raw URL Format for more information.
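For example, a sketch that loads a CSV file through its raw URL; the repository path, table, and credential are hypothetical (for a public repository the credential can be omitted):

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'EMPLOYEES',
    credential_name => 'GITHUB_CRED',  -- hypothetical credential for a private repository
    file_uri_list   => 'https://raw.githubusercontent.com/myorg/myrepo/main/employees.csv',
    format          => '{"type" : "csv", "skipheaders" : 1}');
END;
/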
View Maintenance Status and Timezone Version Notifications: The DB_NOTIFICATIONS view stores information about maintenance status notifications and timezone version upgrade notifications for your Autonomous Database instance. See View Maintenance Status and Timezone Version Notifications for more information.
One-way TLS Connections on Linux x64 Clients Using Oracle Call Interface without a Wallet: When you use TLS authentication for Oracle Call Interface connections on Linux x64, a wallet is not required if the client program connects with Oracle Instant Client 19.13. When you use TLS authentication for Oracle Call Interface connections on all other platforms, or without Oracle Instant Client 19.13, clients must provide a generic CA root certificate wallet. See Prepare for Oracle Call Interface, ODBC, and JDBC OCI Connections Using TLS Authentication for more information.
December 2021
Oracle Database API for MongoDB: Oracle Database API for MongoDB lets applications interact with collections of JSON documents in Autonomous Database using MongoDB commands. See Using Oracle Database API for MongoDB for more information.
Oracle Data Miner Access to Autonomous Database: Oracle Data Miner in SQL Developer can be used with Oracle Autonomous Database. Oracle Data Miner is a SQL Developer extension that enables users to create, schedule, and deploy analytical workflows through a drag-and-drop user interface. Oracle Data Miner serves as a productivity tool for data scientists and an enabler for citizen data scientists with a no-code machine learning environment that automates some common machine learning steps. See Oracle Data Miner for more information.
Autonomous Data Guard Paired Regions: The Autonomous Data Guard paired region list is expanded with additional regions. Autonomous Data Guard paired regions are remote regions where you can create a cross-region standby database. See Autonomous Data Guard Paired Regions for more information.
DBMS_PIPE PL/SQL Package: The DBMS_PIPE package, which lets two or more sessions in the same instance communicate, is available in Autonomous Database.
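A minimal sketch of two sessions exchanging a message over a named pipe (the pipe name is hypothetical):

-- Session 1: send
DECLARE
  l_status INTEGER;
BEGIN
  DBMS_PIPE.PACK_MESSAGE('order 42 ready');          -- stage the message in the local buffer
  l_status := DBMS_PIPE.SEND_MESSAGE('ORDER_PIPE');  -- 0 indicates success
END;
/

-- Session 2 (same instance): receive
DECLARE
  l_status INTEGER;
  l_text   VARCHAR2(200);
BEGIN
  l_status := DBMS_PIPE.RECEIVE_MESSAGE('ORDER_PIPE', timeout => 10);
  IF l_status = 0 THEN
    DBMS_PIPE.UNPACK_MESSAGE(l_text);  -- l_text now holds 'order 42 ready'
  END IF;
END;
/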
Cross-Region Cloning: When you clone an Autonomous Database instance using the Oracle Cloud Infrastructure Console, you can select your preferred region for the clone. You can create the clone in the current region or in a remote region. The list of available regions to create a clone only shows the regions that you are subscribed to. See Clone an Autonomous Database Instance for more information.
Identity and Access Management (IAM) Authentication: You can configure Autonomous Database to use Oracle Cloud Infrastructure Identity and Access Management (IAM) authentication and authorization to allow IAM users to access an Autonomous Database with IAM credentials. See Use Identity and Access Management (IAM) Authentication with Autonomous Database for more information.
Schedule Start and Stop Times: When you enable an Auto Start/Stop Schedule for an Autonomous Database instance, the instance automatically starts and stops according to the schedule you specify. This allows you to reduce costs by scheduling shutdown periods for times when a system is not in use. See Schedule Start and Stop Times for an Autonomous Database Instance for more information.
PL/SQL API for Switching Consumer Groups: Use the CS_SESSION package to switch the database service and consumer group of an existing session. See CS_SESSION Package for more information.
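For example, a sketch that moves the current session to the HIGH service, assuming the SWITCH_SERVICE procedure name:

BEGIN
  CS_SESSION.SWITCH_SERVICE('HIGH');  -- switch this session's service and consumer group
END;
/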
Export Data as CSV, JSON, or XML Data Files: You can export data from Autonomous Database as text in the following formats: CSV, JSON, or XML. See Move Data to Object Store as CSV, JSON, or XML Using EXPORT_DATA for more information.
Use Database Links with a Target Database on a Private Endpoint: Create database links from an Autonomous Database source to a target Oracle Database that is on a private endpoint. To create a database link to a target database on a private endpoint, the target database must be accessible from the source database's Oracle Cloud Infrastructure VCN. See Create Database Links from Autonomous Database to Oracle Databases on a Private Endpoint for more information.
November 2021
Database Management: The Associated Services area on the Oracle Cloud Infrastructure Console includes a Database Management link. You can use the Database Management service to monitor the health of a single Autonomous Database or a fleet of Autonomous Databases. See Use Database Management Service to Monitor Databases for more information.
Database Actions Access: Access Database Actions from the Oracle Cloud Infrastructure Console using the Database Actions button. See Connect with Built-in Oracle Database Actions for more information.
DBMS_LDAP and UTL_TCP Packages: The DBMS_LDAP and UTL_TCP packages are available, with some restrictions. See Restrictions and Notes for Database PL/SQL Packages for more information.
Concurrent Operations with Lifecycle Management Actions: When you initiate operations that take some time to complete, including scaling the system (Scale Up/Down), these operations no longer prevent you from performing other operations. For example, if you perform certain database lifecycle management actions, such as stopping the database during a long-running operation, then depending on the operation, the long-running operation is either canceled or paused. See Concurrent Operations on Autonomous Database for more information.
October 2021
Import JSON Data from Cloud Object Store: You can import JSON data from Cloud Object Store into a table. This import method supports all the Cloud Object Stores supported by Autonomous Database, and you can use an Oracle Cloud Infrastructure resource principal to access your Oracle Cloud Infrastructure Object Store, or Amazon Resource Names (ARNs) to access AWS Simple Storage Service (S3). See Create Credentials and Copy JSON Data into an Existing Table for more information.
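One way to load such data, sketched with DBMS_CLOUD.COPY_DATA into an existing table whose doc column holds one JSON document per line; the table, credential, and bucket names are hypothetical:

BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      => 'ORDERS',           -- existing table with a JSON document column
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/orders*.json',
    field_list      => 'doc CHAR(5000)',   -- load each record into the doc column
    format          => '{"recorddelimiter" : "0x''0A''"}');  -- newline-delimited documents
END;
/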
Blockchain Tables Support: Blockchain tables are insert-only tables that organize rows into a number of chains. Blockchain tables protect data that records important actions, assets, entities, and documents from fraud and from unauthorized modification or deletion by criminals and hackers. Blockchain tables prevent unauthorized changes made using the database and detect unauthorized changes that bypass the database. See Managing Blockchain Tables for more information.
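A minimal sketch of a blockchain table, with hypothetical names and retention settings:

CREATE BLOCKCHAIN TABLE ledger_audit (
  id      NUMBER,
  payload VARCHAR2(4000))
NO DROP UNTIL 31 DAYS IDLE               -- table cannot be dropped while recently active
NO DELETE UNTIL 16 DAYS AFTER INSERT     -- rows cannot be deleted for 16 days
HASHING USING "SHA2_512" VERSION "v1";   -- chaining hash algorithm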
Immutable Tables Support: Immutable tables are read-only tables that prevent unauthorized data modifications by insiders and accidental data modifications resulting from human errors. See Managing Immutable Tables for more information.
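The corresponding sketch for an immutable table (names and retention values again hypothetical):

CREATE IMMUTABLE TABLE audit_trail (
  id      NUMBER,
  payload VARCHAR2(4000))
NO DROP UNTIL 31 DAYS IDLE
NO DELETE UNTIL 16 DAYS AFTER INSERT;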
Integrate with Oracle Cloud Infrastructure Data Catalog: OCI Data Catalog metadata is synchronized with Autonomous Database, creating schemas and external tables so that you can immediately query data available in object store. See Query External Data with Data Catalog for more information.
Oracle APEX 21.1: Autonomous Database uses Oracle APEX Release 21.1. See Creating Applications with Oracle Application Express in Autonomous Database for more information.
September 2021
TLS and Mutual TLS Connections: Autonomous Database by default supports Mutual TLS (mTLS) connections. You have the option to configure an Autonomous Database instance to support both mTLS and TLS connections. See the following for more information:
• About Connecting to an Autonomous Database Instance
• Update Network Options to Allow TLS or Require Only Mutual TLS (mTLS) Authentication on Autonomous Database
Vanity URLs for APEX, ORDS, and Database Tools: You can configure a custom domain name or vanity URL for your APEX apps, Oracle REST Data Services (ORDS), and developer tools by placing an Oracle Cloud Infrastructure Load Balancer directly in front of your Autonomous Database. See the following for additional details, including prerequisites and step-by-step instructions:
• Access Oracle APEX, Oracle REST Data Services, and Developer Tools Using a Vanity URL
• Access Oracle REST Data Services, Oracle APEX, and Developer Tools Using a Vanity URL
August 2021
Cross-Region Autonomous Data Guard: Autonomous Data Guard allows you to enable a cross-region standby (peer) database to provide data protection and disaster recovery for your Autonomous Database instance. When you enable Autonomous Data Guard, the system creates a standby database that is continuously updated with the changes from the primary database. You can enable Autonomous Data Guard with a standby in the current region, a local standby, or with a standby in a different region, a cross-region standby. You can also enable Autonomous Data Guard with both a local standby and a cross-region standby. See Using Standby Databases with Autonomous Database for Disaster Recovery for more information.
SQL Tracing for Database Session: Use SQL tracing to help you identify the source of an excessive database workload, such as a high load SQL statement in your application. See Perform SQL Tracing on Autonomous Database for more information.
Export Data as JSON to Cloud Object Store: Use the procedure DBMS_CLOUD.EXPORT_DATA to export data to your Cloud Object Store as JSON, by specifying a query. This export method supports all the Cloud Object Stores supported by Autonomous Database, and you can use an Oracle Cloud Infrastructure resource principal to access your Oracle Cloud Infrastructure Object Store, or Amazon Resource Names (ARNs) to access AWS Simple Storage Service (S3). See Move Data to Object Store as JSON Data Using EXPORT_DATA for more information.
Documentation Addition: Load Data from a Source File with Fixed Width Data: The documentation provides an example for loading data from a fixed-width source file to an external table. See Example Loading Data from a Fixed Width File for more information.
Set Patch Level: When you provision or clone an Autonomous Database instance, you can select a patch level to apply upcoming patches. There are two patch level options: Regular and Early. After you provision or clone an Autonomous Database instance and set the patch level to Early, maintenance windows for the instance are scheduled and applied one week before instances with the patch level set to Regular. The Early patch level allows you to use and test upcoming patches before they are applied to all systems. See Set the Patch Level for more information.
July 2021
View Patch Details: You can view Autonomous Database patch information, including a list of resolved issues. See View Patch Information for more information.
Include Developer Tools Links in Wallet README File: The wallet README file includes links for Autonomous Database tools and resources, including: Database Actions, Graph Studio, Oracle APEX, Oracle Machine Learning Notebooks, Autonomous Database Service Console, and SODA drivers. See Wallet README File for more information.
June 2021
Change MEDIUM Service Concurrency Limit from Service Console: If your application requires a customized concurrency limit not available with the predefined services, you can modify the concurrency limit for the MEDIUM service from the Autonomous Database Service Console or using PL/SQL procedures. See Change MEDIUM Service Concurrency Limit for more information.
Automatic Partitioning: Automatic partitioning analyzes and automates partition creation for tables and indexes of a specified schema to improve performance and manageability in Autonomous Database. Automatic partitioning, when applied, is transparent and does not require user interaction or maintenance. Automatic partitioning does not interfere with existing partitioning strategies, and manually partitioned tables are excluded as candidates for automatic partitioning. See Manage Automatic Partitioning on Autonomous Database for more information.
Use Customer-Managed Encryption Keys: Autonomous Database provides two options for Transparent Data Encryption (TDE) to encrypt data in the database:
• Oracle-managed encryption keys
• Customer-managed encryption keys
See Managing Encryption Keys on Autonomous Database for more information.
Autonomous Database RMAN Recovery Catalog: You can use Autonomous Database as a Recovery Manager (RMAN) recovery catalog. A recovery catalog is a database schema that RMAN uses to store metadata about one or more Oracle databases. See Autonomous Database RMAN Recovery Catalog for more information.
Read-Only Mode for Sessions: You can set the Autonomous Database operation mode to read-only for a session. In read-only mode, users for the session can only run queries. See Change Autonomous Database Operation Mode for a Session for more information.
Documentation Addition: CS_RESOURCE_MANAGER Package: The CS_RESOURCE_MANAGER package provides procedures to list and update consumer group parameters, and to revert parameters to default values. See CS_RESOURCE_MANAGER Package for more information.
May 2021
Transparent Application Continuity (TAC) on Autonomous Database: Autonomous Database provides application continuity features for connections to the database. You enable Application Continuity on Autonomous Database in one of two configurations: Application Continuity (AC) or Transparent Application Continuity (TAC). Application Continuity hides outages for thin Java-based applications, Oracle Call Interface based applications, and ODP.NET based applications, with support for open-source drivers such as Node.js and Python. Transparent Application Continuity (TAC) transparently tracks and records session and transactional state so the database session can be recovered following recoverable outages. This is done with no reliance on application knowledge or application code changes. See Using Application Continuity on Autonomous Database for more information.
Amazon Resource Names (ARNs) to Access AWS Resources: You can use Amazon Resource Names (ARNs) to access AWS resources with Autonomous Database. When you use ARN role based authentication with Autonomous Database, you can securely access AWS resources without creating and saving credentials based on long-term AWS IAM access keys. See Use Amazon Resource Names (ARNs) to Access AWS Resources for more information.
Resource Principal to Access Oracle Cloud Infrastructure Resources: When you use a resource principal with Autonomous Database, you or your tenancy administrator define the Oracle Cloud Infrastructure policies in a dynamic group that allow you to access resources. With a resource principal you do not need to create a credential object; Autonomous Database creates and secures the resource principal credentials you use to access the Oracle Cloud Infrastructure resources specified in the dynamic group. See Use Resource Principal to Access Oracle Cloud Infrastructure Resources for more information.
Graph Studio: Graph Studio features include automated modeling to create graphs from database tables, an integrated notebook to run graph queries and analytics, and native graph and other visualizations. You can invoke nearly 60 pre-built graph algorithms and visualize your data with many visualization options. Graph Studio is a fully integrated, automated feature of Autonomous Database. See Using Oracle Graph with Autonomous Database for more information.
April 2021
Change MEDIUM Service Concurrency Limit: If your application requires a customized concurrency limit not available with the predefined services, you can modify the concurrency limit for the MEDIUM service. See Change MEDIUM Service Concurrency Limit for more information.
DBMS_CLOUD REST API Results Cache: The DBMS_CLOUD REST API functions allow you to make HTTP requests and obtain and save results. Saving results in the cache allows you to view past results with the SESSION_CLOUD_API_RESULTS view. Saving and querying historical results of DBMS_CLOUD REST API requests can help you when you need to work with previous results in your applications. See DBMS_CLOUD REST API Results Cache for more information.
View and Manage Customer Contacts for Operational Issues and Announcements: When customer contacts are set, Oracle sends notifications to the specified email addresses to inform you of service-related issues. Contacts in the customer contacts list receive unplanned maintenance notices and other notices, including but not limited to notices for database upgrades and upcoming wallet expiration. See View and Manage Customer Contacts for Operational Issues and Announcements for more information.
Always Free Autonomous JSON Database: You have the option to create a limited number of Always Free Autonomous JSON Databases that do not consume cloud credits. Always Free Autonomous JSON Databases can be created in Oracle Cloud Infrastructure accounts that are in a trial period, have paying status, or are always free. See Always Free Autonomous Database for more information.
Always Free Oracle APEX Application Development: You have the option to create a limited number of Always Free APEX Services that do not consume cloud credits. Always Free APEX Service can be created in Oracle Cloud Infrastructure accounts that are in a trial period, have paying status, or are always free. See Always Free Oracle APEX Application Development for more information.
Expiring Wallets Notification: Starting six weeks before the wallet expiration date, Oracle sends notification emails each week, indicating the wallet expiration date. These emails provide notice before your wallet expires that you need to download a new wallet. You can also use the WalletExpirationWarning event to be notified when a wallet is due to expire. See Download Client Credentials (Wallets) for more information.
Documentation Addition: Oracle Graph: Oracle Graph with Autonomous Database enables you to create graphs from data in your Autonomous Database. With graphs you can analyze your data based on connections and relationships between data entities. See Using Oracle Graph with Autonomous Database for more information.
Documentation Addition: Oracle Spatial: Oracle Spatial with Autonomous Database allows developers and analysts to get started easily with location intelligence analytics and mapping services. See Using Oracle Spatial with Autonomous Database for more information.
Download Data Pump Dump File Set with Script: To support using Oracle Data Pump Import to import a dump file set to a target database, you can use a script that supports substitution characters to download all the dump files from your Object Store in a single command. See Download Dump Files, Run Data Pump Import, and Clean Up Object Store for more information.
March 2021
Autonomous Database Tools for Database Actions: Autonomous Database provides the following data tools:
• Data Load: helps you select data to load from your local computer, from tables in other databases, or from cloud storage. Then you can add the data you select to new or existing tables or views in Autonomous Database. See The Data Load Page for more information.
• Catalog: helps you to see information about the entities in your Autonomous Database, and to see the effect that changing an object has on other objects. Catalog provides a tool for you to examine data lineage and understand the impact of changes. See The Catalog Page for more information.
• Data Insights: crawls a table or business model and discovers hidden patterns, anomalies, and outliers in your data. Insights are automatically generated by built-in analytic functions. The results of the insight analysis appear as bar charts in the Data Insights dashboard. See The Data Insights Page for more information.
• Business Models: describes the business entities that are derived from data in your Autonomous Database schema or from other sources. This allows you to create a semantic model on top of your data, identifying hierarchies, measures, and dimensions. See The Business Models Page for more information.
Maintenance History: You can view Autonomous Database maintenance history to see details about past maintenance, such as the start time and stop time for past maintenance events. See About Autonomous Database Maintenance Windows and History for more information.
Oracle Machine Learning AutoML User Interface: AutoML User Interface (AutoML UI) is an Oracle Machine Learning interface that provides you with no-code automated machine learning. When you create and run an experiment in AutoML UI, it automatically performs algorithm and feature selection, as well as model tuning and selection, thereby enhancing productivity as well as model accuracy and performance. See Get Started with AutoML for more information.
Oracle Machine Learning Models: REST endpoints are available so that you can store Machine Learning models and create scoring endpoints for the models. The REST API for Oracle Machine Learning Services supports both Oracle Machine Learning models and ONNX format models. See REST API for Oracle Machine Learning Services for more information.
ADMIN Password Expiration Warning Event: The AdminPasswordWarning event is generated when the Autonomous Database ADMIN password expires within 30 days or is expired. See About Events Based Notification and Automation on Autonomous Database for more information.
Oracle Machine Learning for Python (OML4Py): Oracle Machine Learning for Python (OML4Py) on Autonomous Database provides a Python API to explore and prepare data, and to build and deploy machine learning models using the database as a high-performance compute engine. OML4Py is available through the Python interpreter in Oracle Machine Learning Notebooks. See Get Started with Notebooks for Data Analysis and Data Visualization for more information.
February 2021
Oracle Database Actions: SQL Developer Web is now called Database Actions. Database Actions is a web-based interface that provides development tools, data tools, administration, and monitoring features for Autonomous Database. Using Database Actions you can load data and run SQL statements, queries, and scripts in a worksheet. See Connect with Built-in Oracle Database Actions for more information.
Query Big Data Service Hadoop (HDFS) Data: Big Data Service provides enterprise-grade Hadoop as a service, with end-to-end security, high performance, and ease of management and upgradeability. After deploying the Oracle Cloud SQL Query Server to Big Data Service, you can easily query data available on Hadoop clusters at scale from Autonomous Database using SQL. See Query Big Data Service Hadoop (HDFS) Data from Autonomous Database for more information.
January 2021
Autonomous Database Events: Autonomous Database generates events that you can subscribe to with Oracle Cloud Infrastructure Events. There are two new Information events: ScheduledMaintenanceWarning and WalletExpirationWarning. See Use Autonomous Database Events for more information.
Private Endpoint Support for Tools (Oracle Machine Learning Notebooks are Supported with Private Endpoints): You can access Oracle APEX, Oracle SQL Developer Web, Oracle REST Data Services, and Oracle Machine Learning Notebooks with private endpoints. See Configuring Network Access with Private Endpoints for more information.
December 2020
Replicate Data and Capture Data with Oracle GoldenGate: Using Oracle GoldenGate you can capture changes from an Oracle Autonomous Database and replicate them to any target database or platform that Oracle GoldenGate supports, including another Oracle Autonomous Database. See Oracle GoldenGate Capture for Oracle Autonomous Database for more information.
Oracle APEX Application Development: Oracle APEX Application Development (APEX Service) is a low-cost Oracle Cloud service offering convenient access to the Oracle Application Express platform for rapidly building and deploying low-code applications. See Oracle APEX Application Development for more information.
Oracle Database 21c with Always Free Autonomous Database: Oracle Database 21c is available with Always Free Autonomous Database. See the following for more information:
• Always Free Autonomous Database
• Always Free Autonomous Database Oracle Database 21c Features
Oracle APEX 20.2: Autonomous Database uses Oracle APEX 20.2. See Creating Applications with Oracle Application Express in Autonomous Database for more information.
November 2020
Amazon S3 Presigned URLs: You can use a presigned URL in any DBMS_CLOUD procedure that takes a URL to access files in Amazon Simple Storage Service, without the need to create a credential. See Amazon S3 Compatible URI Format for more information.
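For example, a sketch that downloads an object through a presigned URL; the URL and directory are hypothetical, and no credential object is needed:

BEGIN
  DBMS_CLOUD.GET_OBJECT(
    credential_name => NULL,  -- presigned URL carries its own authorization
    object_uri      => 'https://mybucket.s3.us-west-2.amazonaws.com/data.csv?X-Amz-Signature=abc123',
    directory_name  => 'DATA_PUMP_DIR');
END;
/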
Azure Blob Storage Shared Access Signatures (SAS) URLs: You can use a Shared Access Signature (SAS) URL in any DBMS_CLOUD procedure that takes a URL to access files in Azure Blob Storage, without the need to create a credential. See Azure Blob Storage URI Format for more information.
October 2020
User-Defined Profiles Minimum Password Length: If you create a user-defined profile with a Password Verification Function (PVF), then the minimum password length can be 8 characters. See Manage Password Complexity on Autonomous Database for more information.
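A sketch, assuming my_pvf is a password verification function you have already created and app_user is an existing user:

-- Attach the PVF through a user-defined profile; with a PVF in place,
-- passwords as short as 8 characters are allowed.
CREATE PROFILE app_users LIMIT PASSWORD_VERIFY_FUNCTION my_pvf;
ALTER USER app_user PROFILE app_users;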
PL/SQL SDK for Oracle Cloud Infrastructure Resources: The PL/SQL SDK for Oracle Cloud Infrastructure enables you to write code to manage Oracle Cloud Infrastructure resources. The PL/SQL SDK is available on all Autonomous Database Serverless products. See PL/SQL SDK for more information.
Documentation Addition: Oracle Database Actions User Management: You can quickly create users and modify account settings for users with Oracle Database Actions. See Create Users on Autonomous Database for more information.
Autonomous Database Events: Autonomous Database generates events that you can subscribe to with Oracle Cloud Infrastructure Events. Subscribing to Autonomous Database events allows you to create automation and to receive notifications when the events occur. See Use Autonomous Database Events for more information.
September 2020
Private Endpoint Support for Tools: You can access Oracle APEX, Oracle SQL Developer Web, and Oracle REST Data Services with private endpoints. See Configuring Network Access with Private Endpoints for more information.
Read-Only and Restricted Modes: You can select an Autonomous Database operation mode. The default mode is Read/Write. If you select Read-Only mode, users can only run queries. In addition, for either mode you can restrict access to only allow privileged users to connect to the database. See Change Autonomous Database Mode to Read/Write or Read-Only for more information.
Refreshable Clones: Autonomous Database provides cloning where you can choose to create a full clone of the active instance, create a metadata clone, or create a refreshable clone. With a refreshable clone, the system creates a clone that can be easily updated with changes from the source database. See Using Refreshable Clones with Autonomous Database for more information.
Documentation Addition: Auditing Autonomous Database: This new chapter describes the auditing features that allow you to track, monitor, and record activity on your database. Auditing can help you detect security risks and improve regulatory compliance for your database. See Auditing Autonomous Database for more information.
August 2020
Autonomous JSON Database: If your database version is Oracle Database 19c or higher, you can provision an Autonomous JSON Database. Autonomous JSON Database is an Oracle Cloud service that is specialized for developing NoSQL-style applications that use JavaScript Object Notation (JSON) documents. See About Autonomous JSON Database for more information.
Amazon S3-Compatible Object Stores: Autonomous Database supports Amazon S3-compatible object stores, such as Oracle Cloud Infrastructure Object Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage. See Load Data from Files in the Cloud for more information.
Asynchronous Requests Using SEND_REQUEST: The DBMS_CLOUD.SEND_REQUEST function supports long-running requests with the additional parameters: async_request_url, wait_for_states, and timeout. See SEND_REQUEST Function for more information.
Rename Database: If your database version is Oracle Database 19c or higher, you can rename your Autonomous Database. See Rename Autonomous Database for more information.
July 2020
Oracle APEX Release 20.1: Autonomous Database supports Oracle APEX Release 20.1. See Creating Applications with Oracle Application Express in Autonomous Database for more information. If your Autonomous Database instance is on Oracle Database 18c, to use APEX 20.1 you must upgrade to Oracle Database 19c. After you upgrade to Oracle Database 19c, APEX is automatically upgraded to APEX 20.1.
Use a Standby Database: Autonomous Database provides the Autonomous Data Guard feature to enable a standby (peer) database to provide data protection and disaster recovery for your Autonomous Database instance. See Using a Standby Database with Autonomous Database for more information.
Create and Alter User Profiles: You can create and alter database user profiles. See Manage User Profiles with Autonomous Database for more information.
Manage Time Zone File Versions: The time zone files are periodically updated to reflect the latest time zone specific changes. Autonomous Database automatically picks up the updated time zone files. See Manage Time Zone File Version on Autonomous Database for more information.
Network Access Changes: If your Autonomous Database network access is configured to use a private endpoint, you can change the configuration to use a public endpoint. Likewise, if your Autonomous Database instance is configured to use a public endpoint, you can change the configuration to use a private endpoint. Autonomous Database network access changes from a public to a private endpoint, or from a private to a public endpoint, are only supported with database versions Oracle Database 19c onwards. See Configuring Network Access with Access Control Rules (ACLs) and Private Endpoints for more information.
Use Database Links to Access Non-Oracle Databases: You can create database links from an Autonomous Database to an Oracle Database Gateway to access non-Oracle databases. See Create Database Links to an Oracle Database Gateway to Access Non-Oracle Databases for more information.
June 2020
ORC Format and Complex Types: Autonomous Database supports loading and querying data in object store in ORC format, in addition to Avro and Parquet. Also, for ORC, Avro, and Parquet structured file types you can load and query complex data types. Support for ORC format and complex types requires Oracle Database 19c. See Load Data from Files in the Cloud and Query External Data with ORC, Parquet, or Avro Source Files for more information.
PUT_OBJECT Maximum File Transfer Size Increase: The DBMS_CLOUD.PUT_OBJECT procedure maximum size limit for file transfers to Oracle Cloud Infrastructure Object Storage is increased to 50 GB. See PUT_OBJECT Procedure for more information.
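For reference, a minimal upload sketch; the credential, bucket, and file names are hypothetical:

BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'OBJ_STORE_CRED',
    object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/export.dmp',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'export.dmp');  -- file in the database directory to upload
END;
/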
Customer Managed Oracle REST Data Services: You can use a customer-managed environment to run Oracle REST Data Services (ORDS) if you want to control the configuration and management of ORDS. Installing and configuring a customer-managed environment for ORDS allows you to run ORDS with configuration options that are not possible using the default Oracle-managed ORDS available with Autonomous Database. See About Customer Managed Oracle REST Data Services on Autonomous Database for more information.
May 2020
Move Selective Data to Object Store: You can export data to Oracle Data Pump dump files on Object Store by specifying a query to select the data to export. See Move Data to Object Store as Oracle Data Pump Files Using EXPORT_DATA for more information.
Obtain Tenancy Details for a Service Request: When you file a service request for Autonomous Database, you need to provide the tenancy details for your instance. Tenancy details for the instance are available on the Oracle Cloud Infrastructure console, or you can obtain these details by querying the database. See Obtain Tenancy Details to File a Service Request for more information.
Upgrade Autonomous Database to Oracle Database 19c: If your Autonomous Database instance is on Oracle Database 18c, you can upgrade to Oracle Database 19c by clicking Upgrade to 19c.
April 2020
SODA Documents and Collections: Autonomous Database supports loading and using Simple Oracle Document Architecture (SODA) documents and collections. SODA allows you to store, search, and retrieve document collections, typically JSON documents, without using SQL. You can also access document collections with SQL/JSON operators, providing you with the simplicity of a document database but with the power of a relational database. See Using JSON Documents with Autonomous Database for more information.
Load Data with Oracle Data Pump Access Driver Dump Files: You can use Oracle Data Pump dump files in the Cloud as source files to load your data. The files for this load type must be exported from the source system using the ORACLE_DATAPUMP access driver in External Tables. See Create Credentials and Load Data Pump Dump Files into an Existing Table for more information.
Create External Tables and You can query Oracle Data Pump dump files in the Cloud by creating
Query Data with Oracle Data an external table. The source files to create this type of external table
Pump Access Driver Dump must be exported from the source system using the ORACLE_DATAPUMP
Files access driver in External Tables.
See Query External Data Pump Dump Files for more information.
Oracle Extensions for IDEs You can use Oracle extensions for IDEs to develop applications on
Autonomous Database. Extensions are available for Eclipse, Microsoft
Visual Studio, and Microsoft Visual Studio Code (VS Code). These
extensions enable you to connect to, browse, and manage Autonomous
Databases in Oracle Cloud directly from the IDE.
See Use Oracle Extensions for IDEs to Develop Applications for more
information.
Per-Second Billing Autonomous Database instance CPU and storage usage is billed by
the second, with a minimum usage period of 1 minute. Previously
Autonomous Database billed in one-hour minimum increments and
partial usage of an hour was rounded up to a full hour.
Stateful Rule Support in Private Endpoints: When you configure a private endpoint, you define one or more security rules in a Network Security Group (NSG); this creates a virtual firewall for your Autonomous Database and allows connections to the Autonomous Database instance. The NSG can now be configured with stateful rules. See Configure Private Endpoints with Autonomous Database for more information.
Wallet zip File Contains README File: The wallet zip file that contains client credentials (wallet files) for your Autonomous Database instance now also includes a README file with wallet expiration information. Wallet files downloaded before April 2020 do not contain this file. See Download Client Credentials (Wallets) for more information.
March 2020
Restart Autonomous Database instance: Use Restart to restart an Autonomous Database instance. See Restart Autonomous Database for more information.
Oracle Data Pump Direct Export to Object Store: Depending on your cloud object store, you can use Oracle Data Pump to export directly to your object store to move data between Autonomous Database and other Oracle databases. See Move Data with Data Pump Export to Object Store for more information.
Oracle Data Pump Pre-authenticated URLs for Dump Files: If your source files reside on Oracle Cloud Infrastructure Object Storage, you can use pre-authenticated URLs with Oracle Data Pump. See Import Data Using Oracle Data Pump on Autonomous Database for more information.
Documentation Addition (Upgrade Autonomous Database to Oracle Database 19c): Provides information on upgrading to Oracle Database 19c if your Autonomous Database instance is on Oracle Database 18c.
Oracle Database Versions by Region: Depending on the region where you provision or clone your database, Autonomous Database supports one or more Oracle Database versions. Oracle Database 19c is available in all commercial regions.
Oracle Application Express Release 19.2: Autonomous Database supports Oracle Application Express Release 19.2. See Creating Applications with Oracle Application Express in Autonomous Database for more information.
February 2020
Private Endpoints: This configuration option assigns a private endpoint (private IP and hostname) to a database and allows traffic only from the VCN you specify; access to the database from all public IPs or VCNs is blocked. This allows you to define security rules at the Network Security Group (NSG) level and to control traffic to your Autonomous Database. See About Network Access Options for more information.
Oracle Database Vault: Oracle Database Vault implements powerful security controls for your database. These unique security controls restrict access to application data by privileged database users, reducing the risk of insider and outside threats and addressing common compliance requirements. See Using Oracle Database Vault with Autonomous Database for more information.
Oracle Application Express Release 19.2: If your database instance is an Always Free Autonomous Database or is a database with Oracle Database release 19c, then your database has Oracle Application Express Release 19.2. See Creating Applications with Oracle Application Express in Autonomous Database for more information.
Database Resident Connection Pool (DRCP): Using DRCP provides you with access to a connection pool in your database that enables a significant reduction in key database resources required to support many client connections. See Use Database Resident Connection Pooling with Autonomous Database for more information.
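To request a pooled server, a client typically changes server=dedicated to server=pooled in its connect string; an illustrative tnsnames.ora entry (host and service names are placeholders) might look like:
mydb_tp = (description=
  (address=(protocol=tcps)(port=1522)(host=adb.us-phoenix-1.oraclecloud.com))
  (connect_data=(service_name=example_tp.adb.oraclecloud.com)(server=pooled))
  (security=(ssl_server_dn_match=yes)))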
January 2020
Clone from a backup: Clone from a backup lets you create a clone by selecting a backup from a list of backups, or with a timestamp for a point-in-time to clone. See Clone Autonomous Database from a Backup for more information.
Check and set MAX_STRING_SIZE value: By default the database uses extended data types and the value of MAX_STRING_SIZE is set to the value EXTENDED. To support migration from older Oracle Databases or applications you can set MAX_STRING_SIZE to the value STANDARD. See Checking and Setting MAX_STRING_SIZE for more information.
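For example, a quick sketch of checking the current setting and switching to STANDARD (see the DBMS_MAX_STRING_SIZE Package reference later in this chapter):
SELECT name, value FROM v$parameter WHERE name = 'max_string_size';

BEGIN
  DBMS_MAX_STRING_SIZE.MODIFY_MAX_STRING_SIZE('STANDARD');
END;
/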
Number of concurrent statements increased depending on the service you use: The maximum number of concurrent statements is increased, depending on the connection service you are using. See Predefined Database Service Names for Autonomous Database for more information.
Documentation addition showing available Oracle Database versions: The documentation shows the Oracle Database version available with Autonomous Database by region. See Oracle Database Version Availability by Region for more information.
DBMS_CLOUD REST API functions: The DBMS_CLOUD REST API functions allow you to make HTTP requests using DBMS_CLOUD.SEND_REQUEST. These functions provide a generic API that lets you call REST APIs from supported cloud services. See DBMS_CLOUD REST APIs for more information.
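A minimal sketch of a GET request (the credential and Object Storage URI are illustrative):
DECLARE
  resp DBMS_CLOUD_TYPES.resp;
BEGIN
  resp := DBMS_CLOUD.SEND_REQUEST(
    credential_name => 'DEF_CRED_NAME',
    uri             => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o',
    method          => DBMS_CLOUD.METHOD_GET);
  DBMS_OUTPUT.PUT_LINE(DBMS_CLOUD.GET_RESPONSE_TEXT(resp));
END;
/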
Documentation addition to describe sending mail with UTL_SMTP: The documentation is updated to provide the steps and sample code for sending email using UTL_SMTP. See Sending Mail with Email Delivery on Autonomous Database for more information.
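As a rough sketch of the UTL_SMTP flow (the SMTP endpoint and addresses are placeholders; the referenced topic covers the additional STARTTLS and authentication steps that Email Delivery requires):
DECLARE
  conn UTL_SMTP.connection;
BEGIN
  conn := UTL_SMTP.open_connection('smtp.example.com', 587);  -- placeholder SMTP relay
  UTL_SMTP.ehlo(conn, 'smtp.example.com');
  UTL_SMTP.mail(conn, 'sender@example.com');                  -- approved sender address
  UTL_SMTP.rcpt(conn, 'recipient@example.com');
  UTL_SMTP.open_data(conn);
  UTL_SMTP.write_data(conn, 'Subject: Test' || UTL_TCP.crlf || UTL_TCP.crlf || 'Hello from ADB');
  UTL_SMTP.close_data(conn);
  UTL_SMTP.quit(conn);
END;
/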
Use ACLs to control access for Autonomous Database tools: You can now use Virtual Cloud Network, Virtual Cloud Network (OCID), IP address, or CIDR block ACLs to control access to Oracle APEX (APEX), RESTful services, and SQL Developer Web. See Use an Access Control List with Autonomous Database for more information.
December 2019
Support for UTL_HTTP and UTL_SMTP PL/SQL Packages with restrictions: Autonomous Database supports the Oracle Database PL/SQL packages UTL_HTTP, UTL_SMTP, and DBMS_NETWORK_ACL_ADMIN with restrictions. See Restrictions for Database PL/SQL Packages for more information.
Microsoft Active Directory Users: You can configure Autonomous Database to authenticate and authorize Microsoft Active Directory users. This allows Active Directory users to access a database using their Active Directory credentials. See Use Microsoft Active Directory with Autonomous Database for more information.
Validate external partitioned tables and hybrid partitioned tables: You can validate individual partitions of an external partitioned table or a hybrid partitioned table with the procedures DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE and DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE. See Validate External Partitioned Data and Validate Hybrid Partitioned Data for more information.
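For example, to validate a single partition (the table and partition names are illustrative):
BEGIN
  DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE(
    table_name     => 'SALES_EXT_PART',
    partition_name => 'P2019');
END;
/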
Use ACL IP address or CIDR block settings to control access for Autonomous Database tools: You can now use IP address and CIDR based ACLs to control access to Oracle APEX (APEX), RESTful services, and SQL Developer Web. See Use an Access Control List with Autonomous Database for more information.
Use Oracle Data Safe: Oracle Data Safe provides features that help you protect sensitive and regulated data in your database. See Safeguard Your Data with Data Safe on Autonomous Database for more information.
November 2019
Use ACLs to specify Oracle Cloud Infrastructure VCNs: Use ACLs to specify Oracle Cloud Infrastructure VCNs that can connect to your Autonomous Database.
SQL Developer Web Data Loading: In SQL Developer Web, in the Worksheet page, you can upload data from local files into an existing table.
Access APEX from Oracle Cloud Infrastructure Console: The Tools Tab provides access to APEX from the Oracle Cloud Infrastructure Console. See Access Oracle Application Express Administration Services for more information.
Access SQL Developer Web from Oracle Cloud Infrastructure Console: The Tools Tab provides access to SQL Developer Web from the Oracle Cloud Infrastructure Console. See Access SQL Developer Web as ADMIN for more information.
Access Oracle Machine Learning from Oracle Cloud Infrastructure Console: The Tools Tab provides access to Oracle Machine Learning User Administration from the Oracle Cloud Infrastructure Console.
View Maintenance Schedule from Oracle Cloud Infrastructure Console: The Autonomous Database Information tab shows the schedule for upcoming maintenance. See About Autonomous Database Maintenance Windows for more information.
New fields with LIST_FILES and LIST_OBJECTS: The functions DBMS_CLOUD.LIST_FILES and DBMS_CLOUD.LIST_OBJECTS produce additional metadata for files and objects. See LIST_FILES Function and LIST_OBJECTS Function for more information.
October 2019
Preview Version for Autonomous Database: Oracle periodically provides a preview version of Autonomous Database that allows you to test your applications and to become familiar with features in the next release of Autonomous Database. See Preview Versions for Autonomous Database for more information.
Download database specific instance wallets or regional wallets: You can download database specific instance wallets or regional wallets. See Download Client Credentials (Wallets) for more information.
Rotate database specific instance wallets or regional wallets: You can rotate database specific instance wallets or regional wallets. See Rotate Wallets for Autonomous Database for more information.
Create partitioned external tables with DBMS_CLOUD: You can create and validate external partitioned tables with the DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE and DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE procedures. See Query External Partitioned Data for more information.
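A sketch of creating a range-partitioned external table (the table, credential, columns, and URIs are illustrative placeholders):
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
    table_name      => 'SALES_EXT_PART',
    credential_name => 'OBJ_STORE_CRED',
    format          => JSON_OBJECT('type' VALUE 'csv', 'delimiter' VALUE ','),
    column_list     => 'sale_year NUMBER, product VARCHAR2(100), amount NUMBER',
    partitioning_clause =>
      'PARTITION BY RANGE (sale_year)
         (PARTITION p2018 VALUES LESS THAN (2019) LOCATION
            (''https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/sales_2018.csv''),
          PARTITION p2019 VALUES LESS THAN (2020) LOCATION
            (''https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/sales_2019.csv''))');
END;
/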
Numeric Formats: With the numberformat and numericcharacters format options, Autonomous Database supports formats to interpret numeric strings correctly. See DBMS_CLOUD Package Format Options for more information.
September 2019
Always Free Autonomous Database: You have the option to create a limited number of Always Free Autonomous Databases that do not consume cloud credits. Always Free databases can be created in Oracle Cloud Infrastructure accounts that are in a trial period, have paying status, or are always free. See Always Free Autonomous Database for more information.
Autonomous Database Metrics: You can monitor the health, capacity, and performance of your databases with metrics, alarms, and notifications. You can use the Oracle Cloud Infrastructure console or Monitoring APIs to view metrics. See Monitor Performance with Autonomous Database Metrics for more information.
Object Store Using Public URL: If your source files reside on an Object Store that provides public URLs, you can use public URLs with DBMS_CLOUD procedures. Public means the Object Storage service supports anonymous, unauthenticated access to the Object Store files. See URI Format Using Public URL for more information.
Work Requests: Work Requests are available in the Oracle Cloud Infrastructure console. Work Requests let you track the progress and completed steps of database lifecycle management operations such as creating, terminating, backing up (manual), restoring, scaling, and cloning an Autonomous Database. See About Work Requests for more information.
August 2019
Create Directory and Drop Directory commands: The data_pump_dir directory is available in a database. You can use CREATE DIRECTORY to create additional directories, and DROP DIRECTORY to drop a directory. See Creating and Managing Directories for more information.
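For example (the directory name and path are illustrative):
CREATE DIRECTORY staging_dir AS 'staging';
DROP DIRECTORY staging_dir;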
July 2019
Performance Hub: The Oracle Cloud Infrastructure console includes Performance Hub for Autonomous Database. You can view real-time and historical performance data from the Performance Hub. See Monitor Autonomous Database with Performance Hub for more information.
Oracle Cloud Infrastructure Native Object Store Authentication: Autonomous Database supports native authentication with Oracle Cloud Infrastructure Object Store. Using native authentication you can use key based authentication to access the Object Store (instead of using a username and password). See CREATE_CREDENTIAL Procedure for more information.
Move to a different Compartment: You can move a database to a different Oracle Cloud Infrastructure compartment. See Move an Autonomous Database to a Different Compartment for more information.
June 2019
Application Express (APEX): Oracle APEX (APEX) is a low-code development platform that enables you to build scalable, secure enterprise applications with world-class features that can be deployed anywhere. Each Autonomous Database instance includes a dedicated instance of Oracle APEX; you can use this instance to create multiple workspaces. A workspace is a shared work area where you can build applications. See About Oracle Application Express for more information.
SQL Developer Web: Oracle SQL Developer Web provides a development environment and a data modeler interface for Autonomous Database. See About SQL Developer Web for more information.
Oracle REST Data Services (ORDS): You can develop and deploy RESTful Services with native Oracle REST Data Services (ORDS) support on a database. See Developing RESTful Services in Autonomous Database for more information.
Auto Scaling: Enabling auto scaling allows a database to use up to three times more CPU and IO resources than the currently specified CPU Core Count. When auto scaling is enabled, if your workload requires additional CPU and IO resources, the database automatically uses the resources without any manual intervention required.
Azure object store with Data Pump: Autonomous Database now supports Microsoft Azure cloud storage with Oracle Data Pump. See Import Data Using Oracle Data Pump on Autonomous Database for more information.
May 2019
Support for creating database links: You can create database links with DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK to access objects on another database. See Use Database Links with Autonomous Database for more information.
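A sketch of the call (all connection values shown are placeholders; the target database's certificate wallet must first be made available as described in the referenced topic):
BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name       => 'SALESLINK',
    hostname           => 'adb.us-phoenix-1.oraclecloud.com',
    port               => '1522',
    service_name       => 'example_high.adb.oraclecloud.com',
    ssl_server_cert_dn => 'CN=adb.us-phoenix-1.oraclecloud.com, O=Oracle Corporation',
    credential_name    => 'DB_LINK_CRED');
END;
/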
Oracle Spatial and Graph with limitations: See Restrictions for Oracle Spatial and Graph for more information.
Oracle Text with limitations: See Restrictions for Oracle Text for more information.
Oracle Cloud Infrastructure Native URI Format Supported with Oracle Data Pump: For importing data from Data Pump files using impdp, if your source files reside on Oracle Cloud Infrastructure Object Storage you can use the Oracle Cloud Infrastructure native URIs in addition to the Swift URIs. See DBMS_CLOUD Package File URI Formats for more information.
Oracle Cloud Infrastructure Pre-Authenticated URI Format Supported with DBMS_CLOUD: If your source files reside on Oracle Cloud Infrastructure Object Storage, you can use Oracle Cloud Infrastructure pre-authenticated URIs. When you create a pre-authenticated request, a unique URL is generated. You can use a pre-authenticated URL in any DBMS_CLOUD procedure that takes a URL to access files in the Oracle Cloud Infrastructure object store, without the need to create a credential. See DBMS_CLOUD Package File URI Formats for more information.
Oracle XML DB with limitations: Oracle XML DB is a high-performance, native XML storage and retrieval technology that is delivered as a part of Oracle Database. Limitations apply to Oracle XML DB with Autonomous Database. See Restrictions for Oracle XML DB for more information.
Oracle Management Cloud support: Oracle Management Cloud allows you to monitor your Autonomous Database availability and performance. See Using Oracle Database Management for Autonomous Databases for more information.
Modify CPU/IO Shares from Service Console: Autonomous Database comes with predefined CPU/IO shares assigned to different consumer groups. You can modify the predefined CPU/IO shares if your workload requires different CPU/IO resource allocations. You can change CPU/IO shares from the Service Console or using a PL/SQL package. See Manage CPU/IO Shares on Autonomous Database for more information.
April 2019
Support for Network Access Control Lists: Autonomous Database now supports setting a network Access Control List (ACL) to restrict access to a specific Autonomous Database. When you set a network ACL, the database only accepts connections from addresses specified on the ACL and rejects all other client connections. See Configuring Network Access with Access Control Rules (ACLs) for more information.
Support for Updating License Type: You can now update your license type from the Oracle Cloud Infrastructure console Actions list. See Update License Type on Autonomous Database for more information.
Access Avro files in Object Stores: Autonomous Database allows you to directly query and load data stored as Apache Avro format files (Apache Parquet format files are also supported). You can create external tables for Parquet or Avro format data files. See DBMS_CLOUD Package Format Options for Parquet and Avro and CREATE_EXTERNAL_TABLE Procedure for Parquet or Avro Files for more information.
Enhanced newline handling for file record delimiter with DBMS_CLOUD: Now, when reading a file, by default DBMS_CLOUD tries to automatically find the correct newline character to use as the record delimiter: either the Windows newline character "\r\n" or the UNIX/Linux newline character "\n". If DBMS_CLOUD finds one of these, it sets the record delimiter for the file. You can specify the record delimiter explicitly if you want to override the default behavior. See DBMS_CLOUD Package Format Options for more information.
Modify CPU/IO Shares: Autonomous Database comes with predefined CPU/IO shares assigned to different consumer groups. You can modify these predefined CPU/IO shares if your workload requires different CPU/IO resource allocations. See Manage CPU/IO Shares on Autonomous Database for more information.
March 2019
Clone Your Database: Autonomous Database provides cloning where you can choose to clone either the full database or only the database metadata. See Cloning a Database with Autonomous Database for more information.
Oracle Cloud Infrastructure Native URI Format Supported with DBMS_CLOUD: If your source files reside on Oracle Cloud Infrastructure Object Storage, you can use the Oracle Cloud Infrastructure native URIs in addition to the Swift URIs. See DBMS_CLOUD Package File URI Formats for more information.
UI/API Unification and Workload Type field: The Oracle Cloud Infrastructure Console and the Oracle Cloud Infrastructure APIs for Autonomous Data Warehouse and Autonomous Transaction Processing are converged to a single, unified framework. These changes allow you to more easily manage both types of Autonomous Databases. There is a new Workload Type field on the console showing values of either “Transaction Processing” or “Data Warehouse”, depending on the type of database you are viewing. You can also select a Workload Type when you provision an Autonomous Database. See Workload Types for more information.
Service Gateway: You can now set up a Service Gateway with Autonomous Database. A service gateway allows connectivity to Autonomous Database from private IP addresses in private subnets without requiring an Internet Gateway in your VCN. See Access Autonomous Database with Service Gateway and Access to Oracle Services: Service Gateway for more information.
Documentation changes: The procedure DBMS_CLOUD.DELETE_ALL_OPERATIONS is now documented. See DELETE_ALL_OPERATIONS Procedure for more information.
February 2019
Application Continuity: You can now enable and disable Application Continuity. Application Continuity masks outages from end users and applications by recovering the in-flight work for impacted database sessions following outages. Application Continuity performs this recovery beneath the application so that the outage appears to the application as a slightly delayed execution. See Using Application Continuity on Autonomous Database for more information.
Documentation changes: Autonomous Database documentation includes a new chapter that describes moving data to other Oracle databases. See Moving Data from Autonomous Database to Other Oracle Databases for more information.
January 2019
Access Parquet files in Object Stores: Autonomous Database allows you to directly query and load data stored as Parquet files in object stores. You can also create external Parquet format tables on files in object stores. See COPY_DATA Procedure for Parquet Files and CREATE_EXTERNAL_TABLE Procedure for Parquet Files for more information.
October 2018
Oracle Cloud Infrastructure Console changes: The Autonomous Database console has a new layout and provides new buttons that improve the usability of Autonomous Database. These changes include a new DB Connection button that makes it easier to download client credentials.
September 2018
Table compression methods: In addition to Hybrid Columnar Compression, all table compression types are now available in Autonomous Database. For more information, see Managing DML Performance and Compression.
Idle timeout changes: The 60-minute idle timeout is now lifted. Idle sessions that do not hold resources required by other sessions are no longer terminated after 60 minutes. For more information, see Manage Concurrency and Priorities on Autonomous Database.
August 2018
The Oracle Cloud Infrastructure page has a new option Autonomous Transaction Processing: The sign-in page for Oracle Cloud Infrastructure now lists the new product, Oracle Autonomous Transaction Processing, in addition to Oracle Autonomous Database.
July 2018
SQL Developer 18.2.0 and later without Keystore Password field for connections: When you connect to Autonomous Database with SQL Developer 18.2.0 or later, you do not need to supply a keystore password. The keystore password was required in previous SQL Developer versions.
June 2018
New management interfaces: Autonomous Database is now provisioned and managed using the native Oracle Cloud Infrastructure interfaces. This provides a more intuitive user interface to make managing your Autonomous Database instances easier, with additional capabilities including sorting and filtering. For more information, see Oracle Help Center.
Better authorization management: With OCI Identity and Access Management (IAM) you can better organize and isolate your Autonomous Database instances using Compartments and control what type of access a user or group of users has.
Built-in auditing: The OCI Audit service records use of Autonomous Database application programming interface (API) endpoints as log events in JSON format. You can view, copy, and analyze audit events with standard log analysis tools or using the Audit service Console, the Audit APIs, or the Java SDK.
Availability of the Phoenix region: Autonomous Database is now available in the Phoenix region in addition to the Ashburn and Frankfurt regions.
User Assistance changes: The documentation, videos, and examples are updated on the Oracle Help Center to include the procedures to create and control Autonomous Database instances with the Oracle Cloud Infrastructure Console. A new Related Resources page shows related resources including the Autonomous Data Warehouse Forum.
Idle timeouts in database services: The idle timeouts for the three database services, high, medium, and low, have now been relaxed. The previous idle timeout setting of 5 minutes now applies only to idle sessions that hold resources needed by other active users. See Manage Concurrency and Priorities on Autonomous Database.
May 2018
Oracle Cloud Infrastructure Object Storage Credentials: The name Swift Password is now Auth token. Use of Swift password in the documentation is now replaced with Auth token.
April 2018
Microsoft Azure Blob Storage integration: Autonomous Database now supports Microsoft Azure Blob Storage for data loading and querying external data. You can load data from or run queries on files residing on Azure Blob Storage. See Loading Data from Files in the Cloud.
Manage files on the local file system: You can now list files residing on the local file system on Autonomous Database; see LIST_FILES Function. You can also remove files from the local file system; see DELETE_FILE Procedure.
Note:
Both SH and SSB are provided as schema-only users, so you cannot unlock or drop those users or set a password. The storage of the sample data sets does not count toward your database storage.
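-- Quantity and revenue by month of 1998, part manufacturer, and customer nation in the AMERICA region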
SELECT
dwdate_hier.member_name as year,
part_hier.member_name as part,
customer_hier.c_region,
customer_hier.member_name as customer,
lo_quantity,
lo_revenue
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier,
customer_hier)
WHERE
dwdate_hier.d_year = '1998'
AND dwdate_hier.level_name = 'MONTH'
AND part_hier.level_name = 'MANUFACTURER'
AND customer_hier.c_region = 'AMERICA'
AND customer_hier.level_name = 'NATION'
ORDER BY
dwdate_hier.hier_order,
part_hier.hier_order,
customer_hier.hier_order;
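-- Quantity and supply cost by month of 1998, manufacturer, Canadian customer city, and the ASIA supplier region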
SELECT
dwdate_hier.member_name as time,
part_hier.member_name as part,
customer_hier.member_name as customer,
supplier_hier.member_name as supplier,
lo_quantity,
lo_supplycost
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier,
customer_hier,
supplier_hier)
WHERE
dwdate_hier.d_year = '1998'
AND dwdate_hier.level_name = 'MONTH'
AND part_hier.level_name = 'MANUFACTURER'
AND customer_hier.c_region = 'AMERICA'
AND customer_hier.c_nation = 'CANADA'
AND customer_hier.level_name = 'CITY'
AND supplier_hier.s_region = 'ASIA'
AND supplier_hier.level_name = 'REGION'
ORDER BY
dwdate_hier.hier_order,
part_hier.hier_order,
customer_hier.hier_order,
supplier_hier.hier_order;
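-- Quantity, revenue, and supply cost by day in April 1998, manufacturer, Canadian customer city, and supplier region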
SELECT
dwdate_hier.member_name as year,
part_hier.member_name as part,
customer_hier.member_name as customer,
supplier_hier.member_name as supplier,
lo_quantity,
lo_revenue,
lo_supplycost
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier,
customer_hier,
supplier_hier)
WHERE
dwdate_hier.d_yearmonth = 'Apr1998'
AND dwdate_hier.level_name = 'DAY'
AND part_hier.level_name = 'MANUFACTURER'
AND customer_hier.c_region = 'AMERICA'
AND customer_hier.c_nation = 'CANADA'
AND customer_hier.level_name = 'CITY'
AND supplier_hier.level_name = 'REGION'
ORDER BY
dwdate_hier.hier_order,
part_hier.hier_order,
customer_hier.hier_order,
supplier_hier.hier_order;
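-- Measures by year and manufacturer for a single supplier (s_suppkey 23997)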
SELECT
dwdate_hier.member_name as year,
part_hier.member_name as part,
supplier_hier.member_name as supplier,
lo_quantity,
lo_extendedprice,
lo_ordtotalprice,
lo_revenue,
lo_supplycost
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier,
supplier_hier)
WHERE
dwdate_hier.level_name = 'YEAR'
AND part_hier.level_name = 'MANUFACTURER'
AND supplier_hier.level_name = 'SUPPLIER'
AND supplier_hier.s_suppkey = '23997';
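-- Measures and container for size-32 parts on a single day (June 10, 1998)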
SELECT
dwdate_hier.member_name as time,
part_hier.p_container,
part_hier.member_name as part,
lo_quantity,
lo_extendedprice,
lo_ordtotalprice,
lo_revenue,
lo_supplycost
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier)
WHERE
dwdate_hier.member_name = 'June 10, 1998 '
AND dwdate_hier.level_name = 'DAY'
AND part_hier.level_name = 'PART'
AND part_hier.p_size = 32;
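-- Profit (revenue minus supply cost) for ivory and coral parts on Fridays in August 1996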
SELECT
dwdate_hier.member_name as time,
part_hier.member_name as part,
part_hier.p_name,
part_hier.p_color,
lo_quantity,
lo_revenue,
lo_supplycost,
lo_revenue - lo_supplycost as profit
FROM ssb.ssb_av
HIERARCHIES (
dwdate_hier,
part_hier)
WHERE
dwdate_hier.d_yearmonth = 'Aug1996'
AND dwdate_hier.d_dayofweek = 'Friday '
AND dwdate_hier.level_name = 'DAY'
AND part_hier.level_name = 'PART'
AND part_hier.p_color in ('ivory','coral')
ORDER BY
dwdate_hier.hier_order,
part_hier.hier_order;
• DBMS_CLOUD Package
The DBMS_CLOUD package provides database routines for working with cloud
resources.
• DBMS_CLOUD_ADMIN Package
The DBMS_CLOUD_ADMIN package provides administrative routines for configuring a
database.
• DBMS_CLOUD_AI Package
The DBMS_CLOUD_AI package facilitates and configures the translation of natural
language prompts to SQL statements.
• DBMS_CLOUD_FUNCTION Package
The DBMS_CLOUD_FUNCTION package allows you to invoke OCI and AWS Lambda
remote functions in your Autonomous Database as SQL functions.
• DBMS_CLOUD_LINK Package
The DBMS_CLOUD_LINK package allows a user to register a table or a view as a data
set for read only access with Cloud Links.
• DBMS_CLOUD_LINK_ADMIN Package
The DBMS_CLOUD_LINK_ADMIN package provides administrative routines to authorize
database users to register and consume Cloud Links data sets.
• DBMS_CLOUD_MACADM Package
This section covers the DBMS_CLOUD_MACADM subprograms provided with
Autonomous Database.
• DBMS_CLOUD_NOTIFICATION Package
The DBMS_CLOUD_NOTIFICATION package allows you to send messages or the
output of a SQL query to a provider.
• DBMS_DATA_ACCESS Package
The DBMS_DATA_ACCESS package provides routines to generate and manage Pre-
Authenticated Request (PAR) URLs for data sets.
• DBMS_PIPE Package
The DBMS_PIPE package lets two or more sessions in the same database instance
communicate.
• DBMS_CLOUD_PIPELINE Package
The DBMS_CLOUD_PIPELINE package allows you to create data pipelines for loading
and exporting data in the cloud. This package supports continuous incremental
data load of files in object store into the database. DBMS_CLOUD_PIPELINE also
supports continuous incremental export of table data or query results from the
database to object store based on a timestamp column.
• DBMS_CLOUD_REPO Package
The DBMS_CLOUD_REPO package provides for the use and management of cloud-hosted
code repositories from Oracle Database. Supported cloud code repositories include
GitHub, AWS CodeCommit, and Azure Repos.
• DBMS_DCAT Package
The DBMS_DCAT package provides functions and procedures to help Autonomous
Database users leverage the data discovery and centralized metadata management
system of OCI Data Catalog.
• DBMS_MAX_STRING_SIZE Package
The DBMS_MAX_STRING_SIZE package provides an interface for checking and changing the
value of the MAX_STRING_SIZE initialization parameter.
• DBMS_AUTO_PARTITION Package
The DBMS_AUTO_PARTITION package provides administrative routines for managing
automatic partitioning of schemas and tables.
• CS_RESOURCE_MANAGER Package
The CS_RESOURCE_MANAGER package provides an interface to list and update consumer
group parameters, and to revert parameters to default values.
• CS_SESSION Package
The CS_SESSION package provides an interface to switch the database service and
consumer group of the existing session.
• DBMS_SHARE Package
The DBMS_SHARE subprograms and views are used to share local data with external
systems.
DBMS_CLOUD Package
The DBMS_CLOUD package provides database routines for working with cloud resources.
Note:
To run DBMS_CLOUD subprograms with a user other than ADMIN you need to
grant EXECUTE privileges to that user. For example, run the following
command as ADMIN to grant privileges to adb_user:
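GRANT EXECUTE ON DBMS_CLOUD TO adb_user;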
CREATE_CREDENTIAL Procedure: This procedure stores cloud service credentials in Autonomous Database.
DROP_CREDENTIAL Procedure: This procedure removes an existing credential from Autonomous Database.
REFRESH_VAULT_CREDENTIAL Procedure: This procedure immediately refreshes the vault secret of a vault secret credential to get the latest version of the vault secret for the specified credential_name in Autonomous Database.
UPDATE_CREDENTIAL Procedure: This procedure updates cloud service credential attributes in Autonomous Database.
• CREATE_CREDENTIAL Procedure
This procedure stores cloud service credentials in Autonomous Database.
• DROP_CREDENTIAL Procedure
This procedure removes an existing credential from Autonomous Database.
• REFRESH_VAULT_CREDENTIAL Procedure
This procedure refreshes the vault secret of a vault secret credential.
• UPDATE_CREDENTIAL Procedure
This procedure updates an attribute with a new value for a specified credential_name.
CREATE_CREDENTIAL Procedure
This procedure stores cloud service credentials in Autonomous Database.
Use stored cloud service credentials to access the cloud service for data loading, for querying
external data residing in the cloud, or for other cases when you use DBMS_CLOUD procedures
with a credential_name parameter. This procedure is overloaded:
Syntax
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name IN VARCHAR2,
username IN VARCHAR2,
password IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name IN VARCHAR2,
user_ocid IN VARCHAR2,
tenancy_ocid IN VARCHAR2,
private_key IN VARCHAR2,
fingerprint IN VARCHAR2);
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name IN VARCHAR2,
params IN CLOB DEFAULT NULL);
Parameters
credential_name: The name of the credential to be stored. The credential_name parameter must conform to Oracle object naming conventions, which do not allow spaces or hyphens.
username: The username and password arguments together specify your cloud service credentials. See the usage notes for what to specify for the username and password for different cloud services.
password: The username and password arguments together specify your cloud service credentials.
user_ocid: Specifies the user's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the User's OCID.
tenancy_ocid: Specifies the tenancy's OCID. See Where to Get the Tenancy's OCID and User's OCID for details on obtaining the Tenancy's OCID.
private_key: Specifies the generated private key. Private keys generated with a passphrase are not supported. You need to generate the private key without a passphrase. See How to Generate an API Signing Key for details on generating a key pair in PEM format.
fingerprint: Specifies a fingerprint. After a generated public key is uploaded to the user's account, the fingerprint is displayed in the console. Use the displayed fingerprint for this argument. See How to Get the Key's Fingerprint and How to Generate an API Signing Key for more details.
params: Specifies credential parameters for one of the following:
• Amazon Resource Names (ARNs) credentials
• Google Analytics or Google BigQuery credentials
• Vault secret credentials for use with username/password type credentials where you store the password in a supported vault:
– Oracle Cloud Infrastructure Vault
– Azure Key Vault
– AWS Secrets Manager
– GCP Secret Manager
To create a vault secret credential you must have EXECUTE privilege on the DBMS_CLOUD package.
Usage Notes
• This operation stores the credentials in the database in an encrypted format.
• You can see the credentials in your schema by querying the user_credentials
table.
• The ADMIN user can see all the credentials by querying the dba_credentials table.
• You only need to create credentials once unless your cloud service credentials change.
Once you store the credentials you can then use the same credential name for
DBMS_CLOUD procedures that require a credential_name parameter.
• This procedure is overloaded. If you provide one of the key based authentication
attributes, user_ocid, tenancy_ocid, private_key, or fingerprint, the call is assumed
to be an Oracle Cloud Infrastructure Signing Key based credential.
• You can list credentials from the view ALL_CREDENTIALS. For example, run the following
command to list credentials:
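SELECT credential_name, username, comments FROM all_credentials;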
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => 'adb_user@example.com',
password => 'password' );
END;
/
Use Auth Token based credentials when you are authenticating calls to OCI Object Storage.
For calls to any other type of Oracle Cloud Infrastructure cloud service, use Oracle Cloud
Infrastructure Signing Key Based Credentials.
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'OCI_KEY_CRED',
    user_ocid       => 'ocid1.user.oc1..aaaaaaaauq54mi7zdyfhw33ozkwuontjceel7fok5nq3bf2vwetkpqsoa',
    tenancy_ocid    => 'ocid1.tenancy.oc1..aabbbbbbaafcue47pqmrf4vigneebgbcmmoy5r7xvoypicjqqge32ewnrcyx2a',
    private_key     => 'MIIEogIBAAKCAQEAtUnxbmrekwgVac6FdWeRzoXvIpA9+0r1.....wtnNpESQQQ0QLGPD8NM//JEBg=',
    fingerprint     => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27');
END;
/
Private keys generated with a passphrase are not supported. You need to generate
the private key without a passphrase. See How to Generate an API Signing Key for
more information.
aws_role_arn: Specifies the Amazon Resource Name (ARN) that identifies the AWS role. If this parameter is not supplied when creating the credential, ORA-20041 is raised.
external_id_type: Optionally set the external_id_type to use the Autonomous Database compartment OCID, database OCID, or tenancy OCID by supplying one of: compartment_ocid, database_ocid, or tenant_ocid. If this parameter is not given when creating the credential, the default value is database_ocid.
For example:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'MY_CRED',
    params => JSON_OBJECT(
      'aws_role_arn'     value 'arn:aws:iam::123456:role/AWS_ROLE_ARN',
      'external_id_type' value 'database_ocid'));
END;
/
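For example, to create a credential for use with a GitHub personal access token: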
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'MY_GITHUB_CRED',
username => 'user@example.com',
password => 'your_personal_access_token' );
END;
/
gcp_oauth2: Specifies OAuth 2.0 access for Google Analytics or Google BigQuery with a JSON object that includes the following parameters and their values:
• client_id: See the Google API Console to obtain the client ID.
• client_secret: See the Google API Console to obtain the client secret.
• refresh_token: A refresh token allows your application to obtain new access tokens.
See Using OAuth 2.0 to Access Google APIs for more information on Google OAuth credentials.
For example:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'GOOGLE_BIGQUERY_CRED',
    params => JSON_OBJECT('gcp_oauth2' value JSON_OBJECT(
      'client_id'     value 'client_id',
      'client_secret' value 'client_secret',
      'refresh_token' value 'refresh_token')));
END;
/
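For example, to create a vault secret credential with the password stored in an Oracle Cloud Infrastructure Vault secret: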
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OCI_SECRET_CRED',
    params => JSON_OBJECT(
      'username'  value 'scott',
      'region'    value 'us-ashburn-1',
      'secret_id' value 'ocid1.vaultsecret.oc1.ap-mumbai-1.example..aaaaaaaauq5ok5nq3bf2vwetkpqsoa'));
END;
/
Notes for using an Oracle Cloud Infrastructure Vault secret to store vault secrets:
• When you use an Oracle Cloud Infrastructure Vault, on the Autonomous Database
instance you must enable principal authentication with
DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL.
• On Oracle Cloud Infrastructure you must specify a policy for the resource principal
to access the secret.
To create a vault secret credential you must have EXECUTE privilege on the DBMS_CLOUD
package.
DROP_CREDENTIAL Procedure
This procedure removes an existing credential from Autonomous Database.
Syntax
DBMS_CLOUD.DROP_CREDENTIAL (
credential_name IN VARCHAR2);
Parameters
credential_name: The name of the credential to be removed.
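For example, a minimal call (assuming a credential named DEF_CRED_NAME exists):
BEGIN
  DBMS_CLOUD.DROP_CREDENTIAL('DEF_CRED_NAME');
END;
/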
REFRESH_VAULT_CREDENTIAL Procedure
This procedure refreshes the vault secret of a vault secret credential.
This procedure lets you immediately refresh the vault secret of a vault secret
credential to get the latest version of the vault secret for the specified
credential_name.
Syntax
DBMS_CLOUD.REFRESH_VAULT_CREDENTIAL (
credential_name IN VARCHAR2);
Parameters
credential_name: The name of the credential to refresh.
Usage Notes
• The ADMIN user can see all the credentials by querying the dba_credentials table.
• You can list credentials from the view ALL_CREDENTIALS. For example, run the
following command to list credentials:
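SELECT credential_name, username, comments FROM all_credentials;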
Example
BEGIN
DBMS_CLOUD.REFRESH_VAULT_CREDENTIAL(
credential_name => 'AZURE_SECRET_CRED');
END;
/
UPDATE_CREDENTIAL Procedure
This procedure updates an attribute with a new value for a specified credential_name.
Use stored credentials for data loading, for querying external data residing in the
Cloud, or wherever you use DBMS_CLOUD procedures with a credential_name
parameter.
Syntax
DBMS_CLOUD.UPDATE_CREDENTIAL (
credential_name IN VARCHAR2,
attribute IN VARCHAR2,
value IN VARCHAR2);
Parameters
credential_name: The name of the credential to be updated.
attribute: Name of the attribute to update. For a username/password type credential, the valid attribute values are USERNAME and PASSWORD. For a credential for an Amazon ARN, the valid attribute values are aws_role_arn and external_id_type. For a credential for Google BigQuery or Google Analytics, the valid attribute values are client_id, client_secret, and refresh_token. Depending on the vault you are using, for vault secret credentials the valid attribute values are:
• Oracle Cloud Infrastructure Vault: secret_id, region
• Azure Key Vault: secret_id, azure_vault_name
• AWS Secrets Manager: secret_id, region
• GCP Secret Manager: secret_id, gcp_project_id
See CREATE_CREDENTIAL Procedure for more information.
value: New value for the specified attribute.
Usage Notes
• The username value is case sensitive. It cannot contain double quotes or spaces.
• The ADMIN user can see all the credentials by querying dba_credentials.
• You only need to create credentials once unless your cloud service credentials change.
Once you store the credentials you can then use the same credential name for
DBMS_CLOUD procedures that require a credential_name parameter.
• You can list credentials from the view ALL_CREDENTIALS. For example, run the following
command to list credentials:
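SELECT credential_name, username, comments FROM all_credentials;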
Examples
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL(
credential_name => 'OBJ_STORE_CRED',
attribute => 'PASSWORD',
value => 'password');
END;
/
BEGIN
DBMS_CLOUD.UPDATE_CREDENTIAL(
credential_name => 'ARN_CRED',
attribute => 'aws_role_arn',
value => 'NEW_AWS_ARN');
END;
/
COPY_COLLECTION Procedure: This procedure loads data into an existing SODA collection, either from Cloud Object Storage or from files in a directory.
COPY_DATA Procedure: This procedure loads data into existing Autonomous Database tables, either from Cloud Object Storage or from files in a directory.
COPY_DATA Procedure for Avro, ORC, or Parquet Files: This procedure, with the format parameter type set to the value orc, parquet, or avro, loads data into existing Autonomous Database tables from ORC, Parquet, or Avro files in the Cloud or in a directory. Similar to text files, the data is copied from the source ORC, Parquet, or Avro file into the preexisting internal table.
COPY_OBJECT Procedure: This procedure copies files from one Cloud Object Storage bucket to another.
CREATE_EXTERNAL_TABLE Procedure: This procedure creates an external table on files in the Cloud or on files in a directory. This allows you to run queries on external data from Autonomous Database.
CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files: This procedure, with the format parameter type set to the value parquet, orc, or avro, creates an external table with either Parquet, ORC, or Avro format files in the Cloud or in a directory. This allows you to run queries on external data from Autonomous Database.
CREATE_EXTERNAL_PART_TABLE Procedure: This procedure creates an external partitioned table on files in the Cloud. This allows you to run queries on external data from Autonomous Database.
CREATE_EXTERNAL_TEXT_INDEX Procedure: This procedure creates a text index on the object store files.
CREATE_HYBRID_PART_TABLE Procedure: This procedure creates a hybrid partitioned table. This allows you to run queries on hybrid partitioned data from Autonomous Database.
DELETE_ALL_OPERATIONS Procedure: This procedure clears either all data load operations logged in the user_load_operations table in your schema or all the data load operations of the specified type, as indicated with the type parameter.
DELETE_FILE Procedure: This procedure removes the specified file from the specified directory on Autonomous Database.
DELETE_OBJECT Procedure: This procedure deletes the specified object on object store.
DROP_EXTERNAL_TEXT_INDEX Procedure: This procedure drops a text index on the object store files.
EXPORT_DATA Procedure: This procedure exports data from Autonomous Database to files in the Cloud based on the result of a query. The overloaded form enables you to use the operation_id parameter. Depending on the format parameter type option specified, the procedure exports rows to the Cloud Object store as text with options of CSV, JSON, Parquet, or XML, or uses the ORACLE_DATAPUMP access driver to write data to a dump file.
GET_OBJECT Procedure and Function: This procedure is overloaded. The procedure form reads an object from Cloud Object Storage and copies it to Autonomous Database. The function form reads an object from Cloud Object Storage and returns a BLOB to Autonomous Database.
LIST_FILES Function: This function lists the files in the specified directory. The results include the file names and additional metadata about the files such as file size in bytes, creation timestamp, and the last modification timestamp.
LIST_OBJECTS Function: This function lists objects in the specified location on object store. The results include the object names and additional metadata about the objects such as size, checksum, creation timestamp, and the last modification timestamp.
MOVE_OBJECT Procedure: This procedure moves an object from one Cloud Object Storage bucket to another one.
PUT_OBJECT Procedure: This procedure is overloaded. In one form the procedure copies a file from Autonomous Database to Cloud Object Storage. In another form the procedure copies a BLOB from Autonomous Database to Cloud Object Storage.
SYNC_EXTERNAL_PART_TABLE Procedure: This procedure simplifies updating an external partitioned table from files in the Cloud. Run this procedure whenever new partitions are added or when partitions are removed from the Object Store source for the external partitioned table.
VALIDATE_EXTERNAL_TABLE Procedure: This procedure validates the source files for an external table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database.
VALIDATE_EXTERNAL_PART_TABLE Procedure: This procedure validates the source files for an external partitioned table, generates log information, and stores the rows that do not match the format options specified for the external table in a badfile table on Autonomous Database.
VALIDATE_HYBRID_PART_TABLE Procedure: This procedure validates the source files for a hybrid partitioned table, generates log information, and stores the rows that do not match the format options specified for the hybrid table in a badfile table on Autonomous Database.
• COPY_COLLECTION Procedure
This procedure loads data into a SODA collection from Cloud Object Storage or
from a directory. If the specified SODA collection does not exist, the procedure
creates it. The overloaded form enables you to use the operation_id parameter.
• COPY_DATA Procedure
This procedure loads data into existing Autonomous Database tables from files in
the Cloud, or from files in a directory. The overloaded form enables you to use the
operation_id parameter.
• COPY_DATA Procedure for Avro, ORC, or Parquet Files
• COPY_OBJECT Procedure
This procedure copies an object from one Cloud Object Storage bucket or folder to
another.
• CREATE_EXTERNAL_PART_TABLE Procedure
This procedure creates an external partitioned table on files in the Cloud, or from
files in a directory. This allows you to run queries on external data from
Autonomous Database.
• CREATE_EXTERNAL_TABLE Procedure
This procedure creates an external table on files in the Cloud or from files in a
directory. This allows you to run queries on external data from Autonomous
Database.
• CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files
• CREATE_EXTERNAL_TEXT_INDEX Procedure
This procedure creates a text index on Object Storage files.
• CREATE_HYBRID_PART_TABLE Procedure
This procedure creates a hybrid partitioned table. This allows you to run queries
on hybrid partitioned data from Autonomous Database using database objects and
files in the Cloud, or database objects and files in a directory.
• DELETE_ALL_OPERATIONS Procedure
This procedure clears either all data load operations logged in the
user_load_operations table in your schema or clears all the data load
operations of the specified type, as indicated with the type parameter.
• DELETE_FILE Procedure
This procedure removes the specified file from the specified directory on
Autonomous Database.
• DELETE_OBJECT Procedure
This procedure deletes the specified object on object store.
• DROP_EXTERNAL_TEXT_INDEX Procedure
This procedure drops a text index on the Object Storage files.
• EXPORT_DATA Procedure
• GET_OBJECT Procedure and Function
This procedure is overloaded. The procedure form reads an object from Cloud Object
Storage and copies it to Autonomous Database. The function form reads an object from
Cloud Object Storage and returns a BLOB to Autonomous Database.
• LIST_FILES Function
This function lists the files in the specified directory. The results include the file names
and additional metadata about the files such as file size in bytes, creation timestamp, and
the last modification timestamp.
• LIST_OBJECTS Function
This function lists objects in the specified location on object store. The results include the
object names and additional metadata about the objects such as size, checksum,
creation timestamp, and the last modification timestamp.
• MOVE_OBJECT Procedure
This procedure moves an object from one Cloud Object Storage bucket or folder to
another.
• PUT_OBJECT Procedure
This procedure is overloaded. In one form the procedure copies a file from Autonomous
Database to the Cloud Object Storage. In another form the procedure copies a BLOB from
Autonomous Database to the Cloud Object Storage.
• SYNC_EXTERNAL_PART_TABLE Procedure
This procedure simplifies updating an external partitioned table from files in the Cloud.
Run this procedure whenever new partitions are added or when partitions are removed
from the Object Store source for the external partitioned table.
• VALIDATE_EXTERNAL_PART_TABLE Procedure
This procedure validates the source files for an external partitioned table, generates log
information, and stores the rows that do not match the format options specified for the
external table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id parameter.
• VALIDATE_EXTERNAL_TABLE Procedure
This procedure validates the source files for an external table, generates log information,
and stores the rows that do not match the format options specified for the external table
in a badfile table on Autonomous Database. The overloaded form enables you to use the
operation_id parameter.
• VALIDATE_HYBRID_PART_TABLE Procedure
This procedure validates the source files for a hybrid partitioned table, generates log
information, and stores the rows that do not match the format options specified for the
hybrid table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id parameter.
COPY_COLLECTION Procedure
This procedure loads data into a SODA collection from Cloud Object Storage or from a
directory. If the specified SODA collection does not exist, the procedure creates it. The
overloaded form enables you to use the operation_id parameter.
Syntax
DBMS_CLOUD.COPY_COLLECTION (
collection_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.COPY_COLLECTION (
collection_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
format IN CLOB DEFAULT NULL,
operation_id OUT NOCOPY NUMBER
);
Parameters
collection_name: The name of the SODA collection into which data will be loaded. If a collection with this name already exists, the specified data will be loaded; otherwise a new collection is created.
credential_name: The name of the credential to access the Cloud Object Storage. You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. See ENABLE_RESOURCE_PRINCIPAL for more information. This parameter is not used when you specify a directory with file_uri_list.
file_uri_list: This parameter specifies either a comma-delimited list of source file URIs or one or more directories and source files. You can use wildcards as well as regular expressions in the file names in Cloud source file URIs. Regular expressions can only be used when the regexuri format parameter is set to TRUE. The characters "*" and "?" are considered wildcard characters when the regexuri parameter is set to FALSE. When the regexuri parameter is set to TRUE, the characters "*" and "?" are part of the specified regular expression pattern. Regular expression patterns are only supported for the file name or subfolder path in your URIs, and the pattern matching is identical to that performed by the REGEXP_LIKE function.
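For example (illustrative URIs; the second form requires the regexuri format option set to TRUE):
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/sales_*.csv'

file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/sales_[0-9]+\.csv'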
format: The options describing the format of the source files. These options are specified as a JSON string. Supported formats are: characterset, compression, encryption, ignoreblanklines, jsonpath, maxdocsize, recorddelimiter, rejectlimit, type, unpackarrays, keyassignment, and keypath. Apart from the mentioned formats for JSON data, Autonomous Database supports other formats too. For the list of format arguments supported by Autonomous Database, see DBMS_CLOUD Package Format Options.
operation_id: Use this parameter to track the progress and final status of the load operation as the corresponding ID in the USER_LOAD_OPERATIONS view.
Example
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'user_name@oracle.com',
    password        => 'password');
  DBMS_CLOUD.COPY_COLLECTION(
    collection_name => 'myCollection',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adbexample/b/json/o/myCollection.json');
END;
/
COPY_DATA Procedure
This procedure loads data into existing Autonomous Database tables from files in the
Cloud, or from files in a directory. The overloaded form enables you to use the
operation_id parameter.
Syntax
DBMS_CLOUD.COPY_DATA (
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
schema_name IN VARCHAR2,
field_list IN CLOB,
format IN CLOB);
DBMS_CLOUD.COPY_DATA (
    table_name      IN VARCHAR2,
    credential_name IN VARCHAR2 DEFAULT NULL,
    file_uri_list   IN CLOB,
    schema_name     IN VARCHAR2,
    field_list      IN CLOB,
    format          IN CLOB,
    operation_id    OUT NOCOPY NUMBER);
Parameters
table_name: The name of the target table on the database. The target table needs to be created before you run COPY_DATA.
credential_name: The name of the credential to access the Cloud Object Storage. You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. See ENABLE_RESOURCE_PRINCIPAL for more information. This parameter is not used when you specify a directory with file_uri_list.
file_uri_list This parameter specifies either a comma-delimited list of source file URIs
or one or more directories and source files.
Cloud source file URIs
You can use wildcards as well as regular expressions in the file names in
Cloud source file URIs.
Regular expressions can only be used when the regexuri format
parameter is set to TRUE.
The characters "*" and "?" are considered wildcard characters when the
regexuri parameter is set to FALSE. When the regexuri parameter is
set to TRUE the characters "*" and "?" are part of the specified regular
expression pattern.
Regular expression patterns are only supported for the file name or
subfolder path in your URIs and the pattern matching is identical to that
performed by the REGEXP_LIKE function.
The format of the URIs depends on the Cloud Object Storage service you
are using, for details see DBMS_CLOUD URI Formats.
See REGEXP_LIKE Condition for more information on REGEXP_LIKE
condition.
Directory
You can specify one directory and one or more file names or use a
comma separated list of directories and file names. The format to specify
a directory is:'MY_DIR:filename.ext'. By default the directory name
MY_DIR is a database object and is case-insensitive. The file name is
case sensitive.
Regular expressions are not supported when specifying the file names in
a directory. You can only use wildcards to specify file names in a
directory. The character "*" can be used as the wildcard for multiple
characters, and the character "?" can be used as the wildcard for a single
character. For example:'MY_DIR:*" or 'MY_DIR:test?'
To specify multiple directories, use a comma separated list of directories:
For example:'MY_DIR1:*, MY_DIR2:test?'
Use double quotes to specify a case-sensitive directory name. For
example:'"my_dir1":*, "my_dir2":Test?'
To include a quote character, use two quotes. For
example:'MY_DIR:''filename.ext'. This specifies the filename
starts with a quote (').
schema_name The name of the schema where the target table resides. The default
value is NULL meaning the target table is in the same schema as the
user running the procedure.
field_list Identifies the fields in the source files and their data types. The default
value is NULL meaning the fields and their data types are determined by
the target table definition. This argument's syntax is the same as the
field_list clause in regular Oracle external tables. For more
information about field_list see Oracle® Database Utilities.
When the format parameter type option value is json, this parameter is
ignored.
For an example using field_list, see CREATE_EXTERNAL_TABLE
Procedure.
format The options describing the format of the source, log, and bad files. For
the list of the options and how to specify the values see DBMS_CLOUD
Package Format Options.
For Avro, ORC, or Parquet file format options, see DBMS_CLOUD
Package Format Options for Avro, ORC, or Parquet.
operation_id Use this parameter to track the progress and final status of the load
operation as the corresponding ID in the USER_LOAD_OPERATIONS view.
Usage Note
The default record delimiter is detected newline. With detected newline, DBMS_CLOUD tries
to automatically find the correct newline character to use as the record delimiter. DBMS_CLOUD
first searches for the Windows newline character \r\n. If it finds the Windows newline
character, this is used as the record delimiter for all files in the procedure. If a Windows
newline character is not found, DBMS_CLOUD searches for the UNIX/Linux newline character
\n, and if it finds one it uses \n as the record delimiter for all files in the procedure. If the
source files use a combination of different record delimiters, you may encounter an error such
as, "KUP-04020: found record longer than buffer size supported". In this case, you
need to either modify the source files to use the same record delimiter or only specify the
source files that use the same record delimiter.
See DBMS_CLOUD Package Format Options for information on the recorddelimiter format
option.
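If all of your source files are known to use the same record delimiter, one option is to state it explicitly rather than rely on detection. A sketch, with illustrative table, credential, and file names:

BEGIN
    DBMS_CLOUD.COPY_DATA(
        table_name      => 'ORDERS',
        credential_name => 'OBJ_STORE_CRED',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adbexample/b/data/o/orders*.txt',
        -- pin the record delimiter instead of using detected newline
        format          => json_object('recorddelimiter' value 'newline')
    );
END;
/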
Example
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'OBJ_STORE_CRED',
username => 'user_name@oracle.com',
password => 'password'
);
DBMS_CLOUD.COPY_DATA(
    table_name      => 'ORDERS',
    schema_name     => 'TEST_SCHEMA',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/adbexample/b/json/o/orde[r]s.tbl.1',
    format          => json_object('ignoreblanklines' value TRUE,
                                   'rejectlimit' value '0',
                                   'dateformat' value 'yyyy-mm-dd',
                                   'regexuri' value TRUE)
    );
END;
/
COPY_DATA Procedure for Avro, ORC, or Parquet Files
This procedure, with the format parameter type set to the value avro, orc, or parquet,
loads data into existing Autonomous Database tables from Avro, ORC, or Parquet files in the
Cloud or from files in a directory.
Syntax
DBMS_CLOUD.COPY_DATA (
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
schema_name IN VARCHAR2 DEFAULT,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the target table on the database. The target table
needs to be created before you run COPY_DATA.
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
This parameter is not used when you specify a directory with
file_uri_list.
file_uri_list This parameter specifies either a comma-delimited list of source file
URIs or one or more directories and source files.
Cloud source file URIs
You can use wildcards as well as regular expressions in the file
names in Cloud source file URIs.
Regular expressions can only be used when the regexuri format
parameter is set to TRUE.
The characters "*" and "?" are considered wildcard characters when
the regexuri parameter is set to FALSE. When the regexuri
parameter is set to TRUE the characters "*" and "?" are part of the
specified regular expression pattern.
Regular expression patterns are only supported for the file name or
subfolder path in your URIs and the pattern matching is identical to
that performed by the REGEXP_LIKE function.
field_list Ignored for Avro, ORC, or Parquet files.
The fields in the source match the external table columns by name.
Source data types are converted to the external table column data
type.
For ORC files, see DBMS_CLOUD Package ORC to Oracle Data
Type Mapping.
For Parquet files, see DBMS_CLOUD Package Parquet to Oracle
Data Type Mapping for details on mapping.
For Avro files, see DBMS_CLOUD Package Avro to Oracle Data
Type Mapping for details on mapping.
format The options describing the format of the source files. For Avro,
ORC, or Parquet files, only two options are supported: see
DBMS_CLOUD Package Format Options for Avro, ORC, or
Parquet.
Usage Notes
• As with other data files, Avro, ORC, and Parquet data loads generate logs that are
viewable in the tables dba_load_operations and user_load_operations.
Each load operation adds a record to dba[user]_load_operations that indicates
the table containing the logs.
The log table provides summary information about the load.
• For Avro, ORC, or Parquet, when the format parameter type is set to the value
avro, orc, or parquet, the BADFILE_TABLE table is always empty.
– For Parquet files, PRIMARY KEY constraint errors throw an ORA error.
– If data for a column encounters a conversion error, for example, the target
column is not large enough to hold the converted value, the value for the
column is set to NULL. This does not produce a rejected record.
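For example, a minimal sketch of this form (the table, credential, and bucket names are illustrative) loading a single Parquet file into an existing table:

BEGIN
    DBMS_CLOUD.COPY_DATA(
        table_name      => 'SALES',
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales.parquet',
        -- type is the required format option for structured files
        format          => json_object('type' value 'parquet')
    );
END;
/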
COPY_OBJECT Procedure
This procedure copies an object from one Cloud Object Storage bucket or folder to
another.
The source and target bucket or folder can be in the same or different Cloud Object
store provider.
When the source and target are in distinct Object Stores or have different accounts
with the same cloud provider, you can give separate credential names for the source
and target locations.
By default, the source credential name is also used for the target location when a target
credential name is not provided.
Syntax
DBMS_CLOUD.COPY_OBJECT (
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_object_uri IN VARCHAR2,
target_object_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
source_credential_name The name of the credential to access the source Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a source_credential_name value, the
credential_name is set to NULL.
source_object_uri Specifies the URI that points to the source Object Storage bucket or folder
location.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage service.
See DBMS_CLOUD URI Formats for more information.
target_object_uri Specifies the URI for the target Object Store.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage service.
See DBMS_CLOUD URI Formats for more information.
target_credential_name The name of the credential to access the target Cloud Object Storage location.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a target_credential_name value, the
target_credential_name is set to the source_credential_name value.
Example
BEGIN
DBMS_CLOUD.COPY_OBJECT (
source_credential_name => 'OCI_CRED',
source_object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/bgfile.csv',
target_object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/myfile.csv'
);
END;
/
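For example, a sketch of a copy between different providers with separate credentials (the Azure account, container, and credential names are illustrative):

BEGIN
    DBMS_CLOUD.COPY_OBJECT (
        source_credential_name => 'OCI_CRED',
        source_object_uri      => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/bgfile.csv',
        target_object_uri      => 'https://blobaccount.blob.core.windows.net/container/myfile.csv',
        target_credential_name => 'AZURE_CRED'
    );
END;
/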
CREATE_EXTERNAL_PART_TABLE Procedure
This procedure creates an external partitioned table on files in the Cloud, or from files in a
directory. This allows you to run queries on external data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE (
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
partitioning_clause IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE (
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
file_uri_list IN VARCHAR2,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
partitioning_clause Specifies the complete partitioning clause, including the location
information for individual partitions.
If you use the partitioning_clause parameter, the
file_uri_list parameter is not allowed.
file_uri_list This parameter specifies either a comma-delimited list of source
file URIs or one or more directories and source files.
Cloud source file URIs
You can use wildcards as well as regular expressions in the file
names in Cloud source file URIs.
Regular expressions can only be used when the regexuri
format parameter is set to TRUE.
The characters "*" and "?" are considered wildcard characters
when the regexuri parameter is set to FALSE. When the
regexuri parameter is set to TRUE the characters "*" and "?" are
part of the specified regular expression pattern.
Regular expression patterns are only supported for the file name
or subfolder path in your URIs and the pattern matching is
identical to that performed by the REGEXP_LIKE function.
This option is only supported with external tables that are created
on a file in the Object Storage.
format The format option partition_columns specifies the
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE column names
and data types of partition columns when the partition columns
are derived from the file path, depending on the type of data file,
structured or unstructured:
• When the DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
includes the column_list parameter and the data files are
unstructured, such as with CSV text files,
partition_columns does not include the data type. For
example, use a format such as the following for this type of
partition_columns specification:
'"partition_columns":["state","zipcode"]'
'"partition_columns":[
{"name":"country",
"type":"varchar2(10)"},
{"name":"year",
"type":"number"},
{"name":"month",
"type":"varchar2(10)"}]'
If the data files are unstructured and the type sub-clause is
specified with partition_columns, the type sub-clause is
ignored.
For object names that are not based on hive format, the order of
the partition_columns specified columns must match the
order as they appear in the object name in the file path specified
in the file_uri_list parameter.
To see all the format parameter options describing the format of
the source files, see DBMS_CLOUD Package Format Options.
Usage Notes
• You cannot call this procedure with both partitioning_clause and
file_uri_list parameters.
• Specifying the column_list parameter is optional with structured data files,
including Avro, Parquet, or ORC data files. If column_list is not specified, the
format parameter partition_columns option must include both name and type.
• The column_list parameter is required with unstructured data files, such as CSV text
files.
• The procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE supports external partitioned
files in the supported cloud object storage services, including:
– Oracle Cloud Infrastructure Object Storage
– Azure Blob Storage
– Amazon S3
– Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object Storage,
Google Cloud Storage, and Wasabi Hot Cloud Storage.
– GitHub Repository
See DBMS_CLOUD URI Formats for more information.
• The procedure DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE supports external partitioned
files in directories, either in a local file system or in a network file system.
• When you call DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE with the file_uri_list
parameter, the types for columns specified in the Cloud Object Store file name must be
one of the following types:
VARCHAR2(n)
NUMBER(n)
NUMBER(p,s)
NUMBER
DATE
TIMESTAMP(9)
• The default record delimiter is detected newline. With detected newline, DBMS_CLOUD
tries to automatically find the correct newline character to use as the record delimiter.
DBMS_CLOUD first searches for the Windows newline character \r\n. If it finds the Windows
newline character, this is used as the record delimiter for all files in the procedure. If a
Windows newline character is not found, DBMS_CLOUD searches for the UNIX/Linux
newline character \n, and if it finds one it uses \n as the record delimiter for all files in the
procedure. If the source files use a combination of different record delimiters, you may
encounter an error such as, "KUP-04020: found record longer than buffer size
supported". In this case, you need to either modify the source files to use the same
record delimiter or only specify the source files that use the same record delimiter.
See DBMS_CLOUD Package Format Options for information on the recorddelimiter
format option.
• The external partitioned tables you create with
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE include two invisible columns file$path and
file$name. These columns help identify which file a record is coming from.
– file$path: Specifies the file path text up to the beginning of the object name.
– file$name: Specifies the object name, including all the text that follows the bucket
name.
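For example, a sketch (assuming an external partitioned table named MYSALES) that uses these invisible columns to count the rows contributed by each source file:

SELECT file$name, COUNT(*)
  FROM mysales
 GROUP BY file$name;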
Examples
Example using the partitioning_clause parameter:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
table_name =>'PET1',
credential_name =>'OBJ_STORE_CRED',
format => json_object('delimiter' value ',', 'recorddelimiter'
value 'newline', 'characterset' value 'us7ascii'),
column_list => 'col1 number, col2 number, col3 number',
partitioning_clause => 'partition by range (col1)
(partition p1 values less than (1000)
location
( ''&base_URL//file_11.txt'')
,
partition p2 values less than (2000)
location
( ''&base_URL/file_21.txt'')
,
partition p3 values less than (3000)
location
( ''&base_URL/file_31.txt'')
)'
);
END;
/
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
table_name => 'PET',
format => json_object('delimiter' value ','),
column_list => 'name varchar2(20), gender varchar2(10),
salary number',
partitioning_clause => 'partition by range (salary)
( -- Use test1.csv in the DEFAULT DIRECTORY
DATA_PUMP_DIR
partition p1 values less than (100) LOCATION
(''test1.csv''),
-- Use test2.csv in a specified directory MY_DIR
partition p2 values less than (300) DEFAULT
DIRECTORY MY_DIR LOCATION
(''test2.csv'') )' );
END;
/
Example using the file_uri_list with the column_list parameter with unstructured data
files:
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
   table_name      => 'MYSALES',
   credential_name => 'DEF_CRED_NAME',
   file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/*.csv',
   column_list     => 'product varchar2(100), units number, country varchar2(100), year number, month varchar2(2)',
   field_list      => 'product, units', -- country, year, and month are not in the file, so they are not listed in the field_list
   format          => '{"type":"csv", "partition_columns":["country","year","month"]}'
  );
END;
/
Example using the file_uri_list without the column_list parameter with structured data
files:
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
table_name => 'MYSALES',
credential_name => 'DEF_CRED_NAME',
file_uri_list => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-
string/b/bucketname/o/*.parquet',
format =>
json_object('type' value 'parquet', 'schema' value 'first',
'partition_columns' value
json_array(
json_object('name' value 'country', 'type' value
'varchar2(100)'),
json_object('name' value 'year', 'type' value 'number'),
json_object('name' value 'month', 'type' value
'varchar2(2)')
)
)
);
END;
/
CREATE_EXTERNAL_TABLE Procedure
This procedure creates an external table on files in the Cloud or from files in a directory. This
allows you to run queries on external data from Autonomous Database.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
This parameter is not used when you specify a directory with
file_uri_list.
file_uri_list This parameter specifies either a comma-delimited list of source
file URIs or one or more directories and source files.
Cloud source file URIs
You can use wildcards as well as regular expressions in the file
names in Cloud source file URIs.
Regular expressions can only be used when the regexuri
format parameter is set to TRUE.
The characters "*" and "?" are considered wildcard characters
when the regexuri parameter is set to FALSE. When the
regexuri parameter is set to TRUE the characters "*" and "?" are
part of the specified regular expression pattern.
Regular expression patterns are only supported for the file name
or subfolder path in your URIs and the pattern matching is
identical to that performed by the REGEXP_LIKE function.
This option is only supported with external tables that are created
on a file in the Object Storage.
field_list Identifies the fields in the source files and their data types. The
default value is NULL meaning the fields and their data types are
determined by the column_list parameter. This argument's
syntax is the same as the field_list clause in regular Oracle
Database external tables. For more information about
field_list see ORACLE_LOADER Access Driver field_list
under field_definitions Clause in Oracle Database Utilities.
format The options describing the format of the source files. For the list of
the options and how to specify the values see DBMS_CLOUD
Package Format Options.
For Avro, ORC, or Parquet format files, see
CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or
Parquet Files.
Usage Notes
• The procedure DBMS_CLOUD.CREATE_EXTERNAL_TABLE supports external partitioned
files in the supported cloud object storage services, including:
– Oracle Cloud Infrastructure Object Storage
– Azure Blob Storage
– Amazon S3
– Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object
Storage, Google Cloud Storage, and Wasabi Hot Cloud Storage.
– GitHub Repository
The credential is a table level property; therefore, the external files must be on the
same object store.
See DBMS_CLOUD URI Formats for more information.
• The default record delimiter is detected newline. With detected newline,
DBMS_CLOUD tries to automatically find the correct newline character to use as the
record delimiter. DBMS_CLOUD first searches for the Windows newline character
\r\n. If it finds the Windows newline character, this is used as the record delimiter
for all files in the procedure. If a Windows newline character is not found,
DBMS_CLOUD searches for the UNIX/Linux newline character \n, and if it finds one it
uses \n as the record delimiter for all files in the procedure. If the source files use
a combination of different record delimiters, you may encounter an error such as,
"KUP-04020: found record longer than buffer size supported". In this case,
you need to either modify the source files to use the same record delimiter or only
specify the source files that use the same record delimiter.
See DBMS_CLOUD Package Format Options for information on the
recorddelimiter format option.
Example
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
table_name =>'WEATHER_REPORT_DOUBLE_DATE',
credential_name =>'OBJ_STORE_CRED',
file_uri_list =>'&base_URL/
Charlotte_NC_Weather_History_Double_Dates.csv',
format => json_object('type' value 'csv', 'skipheaders' value '1'),
field_list => 'REPORT_DATE DATE ''mm/dd/yy'',
REPORT_DATE_COPY DATE ''yyyy-mm-dd'',
ACTUAL_MEAN_TEMP,
ACTUAL_MIN_TEMP,
ACTUAL_MAX_TEMP,
AVERAGE_MIN_TEMP,
AVERAGE_MAX_TEMP,
AVERAGE_PRECIPITATION',
column_list => 'REPORT_DATE DATE,
REPORT_DATE_COPY DATE,
ACTUAL_MEAN_TEMP NUMBER,
ACTUAL_MIN_TEMP NUMBER,
ACTUAL_MAX_TEMP NUMBER,
AVERAGE_MIN_TEMP NUMBER,
AVERAGE_MAX_TEMP NUMBER,
AVERAGE_PRECIPITATION NUMBER');
END;
/
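Once created, the external table can be queried like any other table, for example:

SELECT report_date, actual_mean_temp, actual_max_temp
  FROM WEATHER_REPORT_DOUBLE_DATE
 WHERE ROWNUM <= 10;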
CREATE_EXTERNAL_TABLE Procedure for Avro, ORC, or Parquet Files
This procedure, with the format parameter type set to the value avro, orc, or parquet,
creates an external table with either Avro, ORC, or Parquet format files in the Cloud or in a
directory.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
table_name IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
file_uri_list IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
This parameter is not used when you specify a directory with
file_uri_list.
file_uri_list This parameter specifies either a comma-delimited list of source file
URIs or one or more directories and source files.
Cloud source file URIs
You can use wildcards as well as regular expressions in the file names
in Cloud source file URIs.
Regular expressions can only be used when the regexuri format
parameter is set to TRUE.
The characters "*" and "?" are considered wildcard characters when the
regexuri parameter is set to FALSE. When the regexuri parameter
is set to TRUE the characters "*" and "?" are part of the specified
regular expression pattern.
Regular expression patterns are only supported for the file name or
subfolder path in your URIs and the pattern matching is identical to that
performed by the REGEXP_LIKE function.
This option is only supported with external tables that are created on a
file in the Object Storage.
The format of the URIs depends on the Cloud Object Storage service
you are using, for details see DBMS_CLOUD URI Formats.
See REGEXP_LIKE Condition for more information on REGEXP_LIKE
condition.
Directory
You can specify one directory and one or more file names or use a
comma separated list of directories and file names. The format to
specify a directory is:'MY_DIR:filename.ext'. By default the
directory name MY_DIR is a database object and is case-insensitive.
The file name is case sensitive.
Regular expressions are not supported when specifying the file names
in a directory. You can only use wildcards to specify file names in a
directory. The character "*" can be used as the wildcard for multiple
characters, and the character "?" can be used as the wildcard for a
single character. For example:'MY_DIR:*" or 'MY_DIR:test?'
To specify multiple directories, use a comma separated list of
directories: For example:'MY_DIR1:*, MY_DIR2:test?'
Use double quotes to specify a case-sensitive directory name. For
example:'"my_dir1":*, "my_dir2":Test?'
To include a quote character, use two quotes. For
example:'MY_DIR:''filename.ext'. This specifies the filename
starts with a quote (').
column_list (Optional) This field, when specified, overrides the format->schema
parameter, which specifies that the schema, columns, and data types
are derived automatically. See the format parameter for details.
When the column_list is specified for Avro, ORC, or Parquet source,
the column names must match those columns found in the file. Oracle
data types must map appropriately to the Avro, ORC, or Parquet data
types.
For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data
Type Mapping for details.
For ORC files, see DBMS_CLOUD Package ORC to Oracle Data Type
Mapping for details.
For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type
Mapping for details.
field_list Ignored for Avro, ORC, or Parquet files.
The fields in the source match the external table columns by name.
Source data types are converted to the external table column data type.
For ORC files, see DBMS_CLOUD Package ORC to Oracle Data Type
Mapping
For Parquet files, see DBMS_CLOUD Package Parquet to Oracle Data
Type Mapping for details.
For Avro files, see DBMS_CLOUD Package Avro to Oracle Data Type
Mapping for details.
format For Avro, ORC, or Parquet type source files, see DBMS_CLOUD
Package Format Options for Avro, ORC, or Parquet for details.
Examples
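The full ORC, Avro, and Parquet examples are not reproduced here; the following minimal Parquet sketch (the table, credential, and bucket names are illustrative) shows the pattern, using the schema format option to derive the column list from the first file:

BEGIN
    DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
        table_name      => 'SALES_EXT',
        credential_name => 'DEF_CRED_NAME',
        file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/sales_extended.parquet',
        -- schema value first derives columns and data types from the first Parquet file
        format          => '{"type":"parquet", "schema": "first"}'
    );
END;
/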
CREATE_EXTERNAL_TEXT_INDEX Procedure
This procedure creates a text index on Object Storage files.
The CREATE_EXTERNAL_TEXT_INDEX procedure creates a text index on the Object Storage
files specified at the location_uri location. The index is refreshed at regular intervals
to pick up any new files added to, or files deleted from, the location URI.
Syntax
DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX (
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
index_name IN VARCHAR2,
format IN CLOB DEFAULT NULL
);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage
location. For public, pre-authenticated, or pre-signed bucket URIs, a
NULL can be specified.
See Configure Policies and Roles to Access Resources for more
information.
If you do not supply a credential_name value, the
credential_name is set to a NULL value.
location_uri Specifies the Object Store bucket or folder URI.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage
service. See DBMS_CLOUD URI Formats for more information.
index_name Specifies the name of the index you are building on the files located
at the location_uri location.
This parameter is mandatory.
format Specifies additional configuration options. Options are specified as
a JSON string.
The supported format options are:
refresh_rate: Specifies the frequency in minutes at which the
local index is refreshed. New file uploads and deletions result in an
index refresh. The default value is 5 minutes.
binary_files: Specifies if the contents of the files to be indexed
are binary, for example, PDF or MS Word files. The default value is FALSE.
stop_words: Specifies a list of stop words to use when you create
the index.
The stop_words value can be a list of stop words or a table
of stop words. When a JSON array is provided, the stop_words
parameter is treated as a list; otherwise the stop_words parameter
is treated as a table name whose column "STOP_WORDS" is used to
read in the list of stop words.
You can specify stop words using the following methods:
• JSON Array: For example: format := '{"stop_words":
["king","queen"]}'
• Stop word table name: For example: format :=
'{"stop_words":"STOP_WORDS_TABLE"}'
If you do not supply a format parameter, the format is set to a
NULL value.
Example
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX (
credential_name => 'DEFAULT_CREDENTIAL',
location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/ts_data/',
index_name => 'EMP',
format => JSON_OBJECT ('refresh_rate' value 10)
);
END;
/
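For example, a sketch that combines the refresh_rate option with a JSON array of stop words (the credential, bucket, and index names are illustrative):

BEGIN
    DBMS_CLOUD.CREATE_EXTERNAL_TEXT_INDEX (
        credential_name => 'DEFAULT_CREDENTIAL',
        location_uri    => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/ts_data/',
        index_name      => 'EMP',
        -- refresh every 10 minutes, and skip the listed stop words
        format          => '{"refresh_rate": 10, "stop_words": ["king","queen"]}'
    );
END;
/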
CREATE_HYBRID_PART_TABLE Procedure
This procedure creates a hybrid partitioned table. This allows you to run queries on hybrid
partitioned data from Autonomous Database using database objects and files in the Cloud, or
database objects and files in a directory.
Syntax
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE (
table_name IN VARCHAR2,
credential_name IN VARCHAR2,
partitioning_clause IN CLOB,
column_list IN CLOB,
field_list IN CLOB DEFAULT,
format IN CLOB DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
partitioning_clause Specifies the complete partitioning clause, including the location
information for individual partitions.
To use directories, the partitioning clause supports the LOCATION
and DEFAULT DIRECTORY values.
You can use wildcards as well as regular expressions in the file
names in Cloud source file URIs.
Regular expressions can only be used when the regexuri
format parameter is set to TRUE.
The characters "*" and "?" are considered wildcard characters
when the regexuri parameter is set to FALSE. When the
regexuri parameter is set to TRUE the characters "*" and "?" are
part of the specified regular expression pattern.
Regular expression patterns are only supported for the file name
or subfolder path in your URIs and the pattern matching is
identical to that performed by the REGEXP_LIKE function. Regular
expression patterns are not supported for directory names.
Usage Notes
• The procedure DBMS_CLOUD.CREATE_HYBRID_PART_TABLE supports external partitioned
files in the supported cloud object storage services, including:
– Oracle Cloud Infrastructure Object Storage
– Azure Blob Storage
– Amazon S3
– Amazon S3-Compatible, including: Oracle Cloud Infrastructure Object Storage,
Google Cloud Storage, and Wasabi Hot Cloud Storage.
– GitHub Repository
The credential is a table level property; therefore, the external files must be on the same
object store.
See DBMS_CLOUD URI Formats for more information.
• The procedure DBMS_CLOUD.CREATE_HYBRID_PART_TABLE supports hybrid partitioned files
in directories, either in a local file system or in a network file system.
• The external partitioned tables you create with DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
include two invisible columns file$path and file$name. These columns help identify
which file a record is coming from.
– file$path: Specifies the file path text up to the beginning of the object name.
– file$name: Specifies the object name, including all the text that follows the bucket
name.
Examples
BEGIN
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE(
table_name =>'HPT1',
credential_name =>'OBJ_STORE_CRED',
format => json_object('delimiter' value ',', 'recorddelimiter' value
'newline', 'characterset' value 'us7ascii'),
column_list => 'col1 number, col2 number, col3 number',
partitioning_clause => 'partition by range (col1)
(partition p1 values less than (1000)
external location
( ''&base_URL/file_11.txt'')
,
partition p2 values less than (2000)
external location
( ''&base_URL/file_21.txt'')
,
partition p3 values less than (3000)
)'
);
END;
/
BEGIN
DBMS_CLOUD.CREATE_HYBRID_PART_TABLE(
table_name => 'HPT1',
format => json_object('delimiter' value ',',
'recorddelimiter' value 'newline'),
column_list => 'NAME VARCHAR2(30), GENDER VARCHAR2(10), BALANCE
number',
partitioning_clause => 'partition by range (BALANCE)
(partition p1 values less than (1000) external DEFAULT
DIRECTORY DATA_PUMP_DIR LOCATION (''Scott_male_1000.csv''),
partition p2 values less than (2000) external DEFAULT
DIRECTORY DATA_PUMP_DIR LOCATION (''Mary_female_3000.csv''),
partition p3 values less than (3000))' );
END;
/
DELETE_ALL_OPERATIONS Procedure
This procedure clears either all data load operations logged in the
user_load_operations table in your schema or all the data load operations
of the specified type, as indicated with the type parameter.
Syntax
DBMS_CLOUD.DELETE_ALL_OPERATIONS (
type IN VARCHAR DEFAULT NULL);
Parameters
Parameter Description
type Specifies the type of operation to delete. Type values can be found in
the TYPE column in the user_load_operations table.
If no type is specified all rows are deleted.
Usage Note
• DBMS_CLOUD.DELETE_ALL_OPERATIONS does not delete currently running operations
(operations in a "Running" status).
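For example, assuming COPY appears as a value in the TYPE column of your user_load_operations table, the following sketch clears only the logged copy operations:

BEGIN
    DBMS_CLOUD.DELETE_ALL_OPERATIONS(
        type => 'COPY'
    );
END;
/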
DELETE_FILE Procedure
This procedure removes the specified file from the specified directory on Autonomous
Database.
Syntax
DBMS_CLOUD.DELETE_FILE (
directory_name IN VARCHAR2,
file_name IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE);
Parameters
Parameter Description
directory_name The name of the directory on the Autonomous Database instance.
file_name The name of the file to be removed.
force Ignore and do not report errors if the file does not exist. Valid values are:
TRUE and FALSE. The default value is FALSE.
Note:
To run DBMS_CLOUD.DELETE_FILE with a user other than ADMIN you need to grant
write privileges on the directory that contains the file to that user. For example, run
the following command as ADMIN to grant write privileges to adb_user:

GRANT WRITE ON DIRECTORY data_pump_dir TO adb_user;
Example
BEGIN
DBMS_CLOUD.DELETE_FILE(
directory_name => 'DATA_PUMP_DIR',
file_name => 'exp1.dmp' );
END;
/
DELETE_OBJECT Procedure
This procedure deletes the specified object on object store.
Syntax
DBMS_CLOUD.DELETE_OBJECT (
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
object_uri Object or file URI for the object to delete. The format of the URI depends
on the Cloud Object Storage service you are using, for details see
DBMS_CLOUD URI Formats.
force Ignore and do not report errors if the object does not exist. Valid values are:
TRUE and FALSE. The default value is FALSE.
Example
BEGIN
DBMS_CLOUD.DELETE_OBJECT(
credential_name => 'DEF_CRED_NAME',
object_uri => 'https://objectstorage.us-
ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
exp1.dmp' );
END;
/
DROP_EXTERNAL_TEXT_INDEX Procedure
This procedure drops a text index on the Object Storage files.
The DROP_EXTERNAL_TEXT_INDEX procedure drops the specified index created with the
CREATE_EXTERNAL_TEXT_INDEX procedure.
Syntax
DBMS_CLOUD.DROP_EXTERNAL_TEXT_INDEX (
index_name IN VARCHAR2
);
Parameters
Parameter Description
index_name Specifies the name of the index you are dropping.
The index name must match the name provided at the time of the
index creation.
This parameter is mandatory.
Example
BEGIN
DBMS_CLOUD.DROP_EXTERNAL_TEXT_INDEX (
index_name => 'EMP'
);
END;
/
EXPORT_DATA Procedure
This procedure exports data from Autonomous Database based on the result of a
query. This procedure is overloaded and supports writing files to the cloud or to a
directory.
Based on the format type parameter, the procedure exports files to the Cloud or to a
directory location as text files in CSV, JSON, Parquet, or XML format, or using the
ORACLE_DATAPUMP access driver to write data to an Oracle Data Pump dump file.
Syntax
DBMS_CLOUD.EXPORT_DATA (
file_uri_list IN CLOB,
format IN CLOB,
credential_name IN VARCHAR2 DEFAULT NULL,
query IN CLOB);
DBMS_CLOUD.EXPORT_DATA (
file_uri_list IN CLOB DEFAULT NULL,
format IN CLOB DEFAULT NULL,
credential_name IN VARCHAR2 DEFAULT NULL,
query IN CLOB DEFAULT NULL,
operation_id OUT NOCOPY NUMBER);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
When the credential parameter is not included, this specifies output to a
directory.
file_uri_list There are different forms, depending on the value of the format
parameter and depending on whether you include a credential parameter:
• When the format parameter type value is json: The JSON on
Object Store or to the specified directory location is saved with a
generated file name based on the value of the file_uri_list
parameter. See File Naming for Text Output (CSV, JSON, Parquet,
or XML) for more information.
• When the format parameter type value is datapump, the
file_uri_list is a comma-delimited list of the dump files. This
specifies the files to be created on the Object Store. Use of wildcard
and substitution characters is not supported in the file_uri_list.
• When the credential_name parameter is not specified you provide
a directory name in file_uri_list.
The format of the URIs depends on the Cloud Object Storage service you
are using, for details see DBMS_CLOUD URI Formats.
format A JSON string that provides export format options.
The supported option is:
• type: The type format option is required and must have one of the
values: csv | datapump | json | parquet | xml.
See DBMS_CLOUD Package Format Options for EXPORT_DATA.
query Use this parameter to specify a SELECT statement so that only the
required data is exported. The query determines the contents of the files
you export as text files CSV, JSON, Parquet, or XML, or as dump files.
For example:
SELECT warehouse_id, quantity FROM inventories
For the format type value datapump, see Oracle Data
Pump Export Data Filters and Unloading and Loading Data with the
ORACLE_DATAPUMP Access Driver for more information.
When the format type value is json, each query result is checked
and, if it is not JSON as determined with the function
JSON_OBJECT_T.parse(), DBMS_CLOUD.EXPORT_DATA transforms the
query to include the JSON_OBJECT function to convert the row into JSON.
See JSON_OBJECT_T Object Type for more information.
For example:
SELECT JSON_OBJECT(* RETURNING CLOB) from(SELECT
warehouse_id, quantity FROM inventories)
operation_id Use this parameter to track the progress and final status of the export
operation as the corresponding ID in the USER_LOAD_OPERATIONS view.
Usage Notes:
• The query parameter value that you supply can be an advanced query, if required,
such as a query that includes joins or subqueries.
• Depending on the format parameter specified, DBMS_CLOUD.EXPORT_DATA outputs
the results of the specified query on the Cloud Object Store or to a directory
location in one of these formats:
– CSV, JSON, Parquet, or XML files.
See Export Data to Object Store as Text and Export Data to a Directory for
more information on using DBMS_CLOUD.EXPORT_DATA with CSV, JSON,
Parquet, or XML output files.
– Using the ORACLE_DATAPUMP access driver to write data to a dump file.
• For CSV, JSON, or XML output, by default when a generated file contains 10MB of
data a new output file is created. However, if you have less than 10MB of result
data you may have multiple output files, depending on the database service and
the number of ECPUs (OCPUs if your database uses OCPUs) for the Autonomous
Database instance.
See File Naming for Text Output (CSV, JSON, Parquet, or XML) for more
information.
The default output file chunk size is 10MB for CSV, JSON, or XML. You can
change this value with the format parameter maxfilesize option. See
DBMS_CLOUD Package Format Options for EXPORT_DATA for more
information.
• For Parquet output, each generated file is less than 128MB and multiple output
files may be generated. However, if you have less than 128MB of result data, you
may have multiple output files depending on the database service and the number
of ECPUs (OCPUs if your database uses OCPUs) for the Autonomous Database
instance.
See File Naming for Text Output (CSV, JSON, Parquet, or XML) for more information.
• EXPORT_DATA uses DATA_PUMP_DIR as the default logging directory. So the write privilege
on DATA_PUMP_DIR is required when using ORACLE_DATAPUMP output.
• Autonomous Database export using DBMS_CLOUD.EXPORT_DATA with format parameter
type option datapump only supports Oracle Cloud Infrastructure Object Storage, Oracle
Cloud Infrastructure Object Storage Classic object stores or directory output.
• When you specify DBMS_CLOUD.EXPORT_DATA with the format parameter type option
datapump, the credential_name parameter value cannot be an OCI resource principal.
• Oracle Data Pump divides each dump file part into smaller chunks for faster uploads. The
Oracle Cloud Infrastructure Object Storage console shows multiple files for each dump
file part that you export. The size of the actual dump files will be displayed as zero (0)
and their related file chunks as 10 MB or less. For example:
exp01.dmp
exp01.dmp_aaaaaa
exp02.dmp
exp02.dmp_aaaaaa
Downloading the zero byte dump file from the Oracle Cloud Infrastructure console or
using the Oracle Cloud Infrastructure CLI will not give you the full dump files. To
download the full dump files from the Object Store, use a tool that supports Swift such as
curl, and provide your user login and Swift auth token.
If you import a file with the DBMS_CLOUD procedures that support the format parameter
type with the value 'datapump', you only need to provide the primary file name. The
procedures that support the 'datapump' format type automatically discover and download
the chunks.
When you use DBMS_CLOUD.DELETE_OBJECT, the procedure automatically discovers and
deletes the chunks when the procedure deletes the primary file.
• The DBMS_CLOUD.EXPORT_DATA procedure creates the dump file(s) from the
file_uri_list values that you specify, as follows:
– As more files are needed, the procedure creates additional files from the
file_uri_list.
– The procedure does not overwrite files. If a dump file in the file_uri_list exists,
DBMS_CLOUD.EXPORT_DATA reports an error.
– DBMS_CLOUD.EXPORT_DATA does not create buckets.
• The number of dump files that DBMS_CLOUD.EXPORT_DATA generates is determined when
the procedure runs. The number of dump files that are generated depends on the number
of file names you provide in the file_uri_list parameter, as well as on the number of
Autonomous Database OCPUs available to the instance, the service level, and the
size of the data.
For example, if you use a 1 OCPU Autonomous Database instance or the low
service, then a single dump file is exported with no parallelism, even if you provide
multiple file names. If you use a 4 OCPU Autonomous Database instance with the
medium or high service, then the jobs can run in parallel and multiple dump files
are exported if you provide multiple file names.
• The dump files you create with DBMS_CLOUD.EXPORT_DATA cannot be imported
using Oracle Data Pump impdp. Depending on the database, you can use these
files as follows:
– On an Autonomous Database, you can use the dump files with the DBMS_CLOUD
procedures that support the format parameter type with the value 'datapump'.
You can import the dump files using DBMS_CLOUD.COPY_DATA or you can call
DBMS_CLOUD.CREATE_EXTERNAL_TABLE to create an external table.
– On any other Oracle Database, such as Oracle Database 19c on-premise, you
can import the dump files created with the procedure
DBMS_CLOUD.EXPORT_DATA using the ORACLE_DATAPUMP access driver. See
Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver for
more information.
• The query parameter value that you supply can be an advanced query, if required,
such as a query that includes joins or subqueries.
• The provided directory must exist and you must be logged in as the ADMIN user or
have WRITE access to the directory.
• DBMS_CLOUD.EXPORT_DATA does not create directories.
• The procedure does not overwrite files. For example, if a dump file in the
file_uri_list exists, DBMS_CLOUD.EXPORT_DATA reports an error.
Examples
The following example shows DBMS_CLOUD.EXPORT_DATA with the format type
parameter with the value datapump:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name =>'OBJ_STORE_CRED',
file_uri_list =>'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.dmp',
format => json_object('type' value 'datapump', 'compression'
value 'basic', 'version' value 'latest'),
query => 'SELECT warehouse_id, quantity FROM inventories'
);
END;
/
The following example shows DBMS_CLOUD.EXPORT_DATA with the format type parameter with
the value json:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.json',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'json', 'compression'
value 'gzip'));
END;
/
The following example shows DBMS_CLOUD.EXPORT_DATA with the format type parameter with
the value xml:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp1.xml',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'xml', 'compression' value
'gzip'));
END;
/
The following example shows DBMS_CLOUD.EXPORT_DATA with the format type parameter with
the value csv:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
credential_name => 'OBJ_STORE_CRED',
file_uri_list => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/exp.csv',
query => 'SELECT * FROM DEPT',
format => JSON_OBJECT('type' value 'csv', 'delimiter' value '|',
'compression' value 'gzip', 'header' value true,
'encryption' value JSON_OBJECT('user_defined_function' value 'ADMIN.decryption_callback'))
);
END;
/
The following example shows DBMS_CLOUD.EXPORT_DATA with the format type parameter with
the value datapump, with output written to a directory:
BEGIN
DBMS_CLOUD.EXPORT_DATA(
file_uri_list => 'export_dir:sales.dmp',
format => json_object('type' value 'datapump'),
query => 'SELECT * FROM sales'
);
END;
/
GET_OBJECT Procedure and Function
This procedure is overloaded. The procedure form reads an object from Cloud Object
Storage and copies it to Autonomous Database. The function form reads an object from
Cloud Object Storage and returns a BLOB to Autonomous Database.
Syntax
DBMS_CLOUD.GET_OBJECT (
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
directory_name IN VARCHAR2,
file_name IN VARCHAR2 DEFAULT NULL,
startoffset IN NUMBER DEFAULT 0,
endoffset IN NUMBER DEFAULT 0,
compression IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD.GET_OBJECT(
credential_name IN VARCHAR2 DEFAULT NULL,
object_uri IN VARCHAR2,
startoffset IN NUMBER DEFAULT 0,
endoffset IN NUMBER DEFAULT 0,
compression IN VARCHAR2 DEFAULT NULL)
RETURN BLOB;
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
object_uri Object or file URI. The format of the URI depends on the Cloud
Object Storage service you are using, for details see
DBMS_CLOUD URI Formats.
directory_name The name of the directory on the database.
file_name Specifies the name of the file to create. If file name is not specified,
the file name is taken from after the last slash in the object_uri
parameter. For special cases, for example when the file name
contains slashes, use the file_name parameter.
startoffset The offset, in bytes, from where the procedure starts reading.
endoffset The offset, in bytes, until where the procedure stops reading.
compression Specifies the compression used to store the object. When
compression is set to 'AUTO' the file is uncompressed (the value
'AUTO' implies the object specified with object_uri is
compressed with Gzip).
Note:
To run DBMS_CLOUD.GET_OBJECT with a user other than ADMIN you need to grant WRITE privileges
on the directory to that user. For example, run the following command as ADMIN to grant write privileges
to adb_user:

GRANT WRITE ON DIRECTORY data_pump_dir TO adb_user;
Return Values
The function form reads from Object Store and DBMS_CLOUD.GET_OBJECT returns a BLOB.
Examples
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/
namespace-string/b/bucketname/o/file.txt',
directory_name => 'DATA_PUMP_DIR');
END;
/
SELECT to_clob(
DBMS_CLOUD.GET_OBJECT(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/
namespace-string/b/bucketname/o/file.txt'))
FROM DUAL;
DECLARE
l_blob BLOB := NULL;
BEGIN
l_blob := DBMS_CLOUD.GET_OBJECT(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/file.txt');
END;
/
LIST_FILES Function
This function lists the files in the specified directory. The results include the file names
and additional metadata about the files such as file size in bytes, creation timestamp,
and the last modification timestamp.
Syntax
DBMS_CLOUD.LIST_FILES (
directory_name IN VARCHAR2)
RETURN TABLE;
Parameters
Parameter Description
directory_name The name of the directory on the database.
Usage Notes
• To run DBMS_CLOUD.LIST_FILES with a user other than ADMIN you need to grant
read privileges on the directory to that user. For example, run the following
command as ADMIN to grant read privileges to adb_user:

GRANT READ ON DIRECTORY data_pump_dir TO adb_user;
Example
This is a pipelined function that returns a row for each file. For example, use the
following query to use this function:
SELECT * FROM DBMS_CLOUD.LIST_FILES('DATA_PUMP_DIR');

OBJECT_NAME BYTES CHECKSUM CREATED              LAST_MODIFIED
----------- ----- -------- -------------------- --------------------
cwallet.sso 2965           2018-12-12T18:10:47Z 2019-11-23T06:36:54Z
LIST_OBJECTS Function
This function lists objects in the specified location on object store. The results include the
object names and additional metadata about the objects such as size, checksum, creation
timestamp, and the last modification timestamp.
Syntax
DBMS_CLOUD.LIST_OBJECTS (
credential_name IN VARCHAR2,
location_uri IN VARCHAR2)
RETURN TABLE;
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
location_uri Object or file URI. The format of the URI depends on the Cloud Object
Storage service you are using, for details see DBMS_CLOUD URI
Formats.
Usage Notes
• Depending on the capabilities of the object store, DBMS_CLOUD.LIST_OBJECTS does not
return values for certain attributes and the return value for the field is NULL in this case.
All supported Object Stores return values for the OBJECT_NAME, BYTES, and CHECKSUM
fields.
Support for the CREATED and LAST_MODIFIED fields varies by Object Store.
Example
This is a pipelined function that returns a row for each object. For example, use the
following query to use this function:

SELECT * FROM DBMS_CLOUD.LIST_OBJECTS('OBJ_STORE_CRED',
'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/');
MOVE_OBJECT Procedure
This procedure moves an object from one Cloud Object Storage bucket or folder to
another.
The source and target bucket or folder can be in the same or different Cloud Object
store provider.
When the source and target are in distinct Object Stores or have different accounts
with the same cloud provider, you can give separate credential names for the source
and target locations.
By default, the source credential name is also used for the target location when a target
credential name is not provided.
Syntax
DBMS_CLOUD.MOVE_OBJECT (
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_object_uri IN VARCHAR2,
target_object_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
source_credential_name The name of the credential to access the source Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a source_credential_name value, the
credential_name is set to NULL.
source_object_uri Specifies the URI that points to the source Object Storage bucket or folder
location.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage service.
See DBMS_CLOUD URI Formats for more information.
target_object_uri Specifies the URI for the target Object Storage bucket or folder, where the
files need to be moved.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage service.
See DBMS_CLOUD URI Formats for more information.
target_credential_name The name of the credential to access the target Cloud Object Storage
location.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a target_credential_name value, the
target_credential_name is set to the source_credential_name value.
Example
BEGIN
DBMS_CLOUD.MOVE_OBJECT (
source_credential_name => 'OCI_CRED',
source_object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/bgfile.csv',
target_object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/myfile.csv'
);
END;
/
PUT_OBJECT Procedure
This procedure is overloaded. In one form the procedure copies a file from Autonomous
Database to the Cloud Object Storage. In another form the procedure copies a BLOB from
Autonomous Database to the Cloud Object Storage.
Syntax
DBMS_CLOUD.PUT_OBJECT (
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
directory_name IN VARCHAR2,
file_name IN VARCHAR2);
DBMS_CLOUD.PUT_OBJECT (
credential_name IN VARCHAR2,
object_uri IN VARCHAR2,
contents IN BLOB,
file_name IN VARCHAR2);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
object_uri Object or file URI. The format of the URI depends on the Cloud
Object Storage service you are using, for details see
DBMS_CLOUD URI Formats.
directory_name The name of the directory on the Autonomous Database.
Note:
To run DBMS_CLOUD.PUT_OBJECT with a user other than ADMIN you need to grant read
privileges on the directory to that user. For example, run the following command as ADMIN to
grant read privileges to adb_user:

GRANT READ ON DIRECTORY data_pump_dir TO adb_user;
Example
The following example handles BLOB data after in-database processing and then stores
the data directly in a file in the object store:
DECLARE
my_blob_data BLOB;
BEGIN
/* Some processing producing BLOB data and populating my_blob_data */
DBMS_CLOUD.PUT_OBJECT(
credential_name => 'OBJ_STORE_CRED',
object_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
my_new_file',
contents => my_blob_data);
END;
/
Usage Notes
The maximum size of the object you can transfer depends on your Cloud Object Storage
service.
Oracle Cloud Infrastructure object store does not allow writing files into a public bucket
without supplying credentials (Oracle Cloud Infrastructure allows users to download objects
from public buckets). Thus, you must supply a credential name with valid credentials to store
an object in an Oracle Cloud Infrastructure public bucket using PUT_OBJECT.
SYNC_EXTERNAL_PART_TABLE Procedure
This procedure simplifies updating an external partitioned table from files in the Cloud. Run
this procedure whenever new partitions are added or when partitions are removed from the
Object Store source for the external partitioned table.
Syntax
DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE (
table_name IN VARCHAR2,
schema_name IN VARCHAR2 DEFAULT,
update_columns IN BOOLEAN DEFAULT);
Parameters
Parameter Description
table_name The name of the target table. The target table needs to be created before
you run DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE.
schema_name The name of the schema where the target table resides. The default
value is NULL meaning the target table is in the same schema as the
user running the procedure.
update_columns The new files may introduce a change to the schema. Supported updates include new columns and deleted columns. Updates to existing columns, for example a change in the data type, throw errors.
Default value: FALSE
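Example
A minimal sketch (SALES_EXT_PART is a placeholder for an existing external partitioned table) that synchronizes the table after files were added to or removed from the Object Store source:
BEGIN
  DBMS_CLOUD.SYNC_EXTERNAL_PART_TABLE(
    table_name => 'SALES_EXT_PART');
END;
/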
VALIDATE_EXTERNAL_PART_TABLE Procedure
This procedure validates the source files for an external partitioned table, generates log information, and stores the rows that do not match the format options specified for the external partitioned table in a badfile table on Autonomous Database. The overloaded form enables you to use the operation_id parameter.
Syntax
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (
table_name IN VARCHAR2,
partition_name IN CLOB DEFAULT,
subpartition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE (
table_name IN VARCHAR2,
operation_id OUT NUMBER,
partition_name IN CLOB DEFAULT,
subpartition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
operation_id Use this parameter to track the progress and final status of the
load operation as the corresponding ID in the
USER_LOAD_OPERATIONS view.
partition_name If defined, then only the specified partition is validated. If not specified, then all partitions are read sequentially until rowcount is reached.
subpartition_name If defined, then only the specified subpartition is validated. If not specified, then all external partitions or subpartitions are read sequentially until rowcount is reached.
schema_name The name of the schema where the external table resides. The
default value is NULL meaning the external table is in the same
schema as the user running the procedure.
rowcount Number of rows to be scanned. The default value is NULL
meaning all the rows in the source files are scanned.
partition_key_validation For internal use only. Do not use this parameter.
stop_on_error Determines if the validate should stop when a row is rejected. The
default value is TRUE meaning the validate stops at the first
rejected row. Setting the value to FALSE specifies that the validate
does not stop at the first rejected row and validates all rows up to
the value specified for the rowcount parameter.
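Example
A sketch (the table and partition names are placeholders) that validates the first 100 rows of a single partition:
BEGIN
  DBMS_CLOUD.VALIDATE_EXTERNAL_PART_TABLE(
    table_name     => 'SALES_EXT_PART',
    partition_name => 'SALES_Q1',
    rowcount       => 100);
END;
/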
VALIDATE_EXTERNAL_TABLE Procedure
This procedure validates the source files for an external table, generates log information, and
stores the rows that do not match the format options specified for the external table in a
badfile table on Autonomous Database. The overloaded form enables you to use the
operation_id parameter.
Syntax
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE (
table_name IN VARCHAR2,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(
table_name IN VARCHAR2,
operation_id OUT NOCOPY NUMBER,
schema_name IN VARCHAR2 DEFAULT NULL,
rowcount IN NUMBER DEFAULT 0,
stop_on_error IN BOOLEAN DEFAULT TRUE);
Parameters
Parameter Description
table_name The name of the external table.
operation_id Use this parameter to track the progress and final status of the load
operation as the corresponding ID in the USER_LOAD_OPERATIONS view.
schema_name The name of the schema where the external table resides. The default value
is NULL meaning the external table is in the same schema as the user
running the procedure.
rowcount Number of rows to be scanned. The default value is NULL meaning all the
rows in the source files are scanned.
stop_on_error Determines if the validate should stop when a row is rejected. The default
value is TRUE meaning the validate stops at the first rejected row. Setting the
value to FALSE specifies that the validate does not stop at the first rejected
row and validates all rows up to the value specified for the rowcount
parameter.
If the external table refers to Avro, ORC, or Parquet files then the validate
stops at the first rejected row.
When the external table specifies the format parameter type set to the
value avro, orc, or parquet, the parameter stop_on_error effectively
always has the value TRUE. Thus, the table badfile will always be empty for
an external table referring to Avro, ORC, or Parquet files.
Usage Notes
• DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE works with both partitioned external tables and hybrid partitioned tables. This potentially reads data from all external partitions until rowcount is reached or stop_on_error applies. You do not have control over which partition, or parts of a partition, are read or in which order.
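Example
A sketch using the overloaded form (CHANNELS_EXT is a placeholder table name); the returned operation_id can be looked up in the USER_LOAD_OPERATIONS view:
SET SERVEROUTPUT ON
DECLARE
  l_operation_id NUMBER;
BEGIN
  DBMS_CLOUD.VALIDATE_EXTERNAL_TABLE(
    table_name    => 'CHANNELS_EXT',
    operation_id  => l_operation_id,
    rowcount      => 1000,
    stop_on_error => FALSE);
  DBMS_OUTPUT.PUT_LINE('Operation ID: ' || l_operation_id);
END;
/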
VALIDATE_HYBRID_PART_TABLE Procedure
This procedure validates the source files for a hybrid partitioned table, generates log
information, and stores the rows that do not match the format options specified for the
hybrid table in a badfile table on Autonomous Database. The overloaded form enables
you to use the operation_id parameter.
Syntax
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE (
table_name IN VARCHAR2,
partition_name IN CLOB DEFAULT,
subpartition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE (
table_name IN VARCHAR2,
operation_id OUT NUMBER,
partition_name IN CLOB DEFAULT,
subpartition_name IN CLOB DEFAULT,
schema_name IN VARCHAR2 DEFAULT,
rowcount IN NUMBER DEFAULT,
partition_key_validation IN BOOLEAN DEFAULT,
stop_on_error IN BOOLEAN DEFAULT);
Parameters
Parameter Description
table_name The name of the external table.
operation_id Use this parameter to track the progress and final status of the
load operation as the corresponding ID in the
USER_LOAD_OPERATIONS view.
partition_name If defined, then only the specified partition is validated. If not specified, then all external partitions are read sequentially until rowcount is reached.
subpartition_name If defined, then only the specified subpartition is validated. If not specified, then all external partitions or subpartitions are read sequentially until rowcount is reached.
schema_name The name of the schema where the external table resides. The
default value is NULL meaning the external table is in the same
schema as the user running the procedure.
rowcount Number of rows to be scanned. The default value is NULL
meaning all the rows in the source files are scanned.
partition_key_validation For internal use only. Do not use this parameter.
6-116
Chapter 6
Autonomous Database Supplied Package Reference
Parameter Description
stop_on_error Determines if the validate should stop when a row is rejected. The
default value is TRUE meaning the validate stops at the first
rejected row. Setting the value to FALSE specifies that the validate
does not stop at the first rejected row and validates all rows up to
the value specified for the rowcount parameter.
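Example
A sketch (MYSALES is a placeholder hybrid partitioned table) that validates all external partitions, continuing past rejected rows:
BEGIN
  DBMS_CLOUD.VALIDATE_HYBRID_PART_TABLE(
    table_name    => 'MYSALES',
    stop_on_error => FALSE);
END;
/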
DBMS_CLOUD for Bulk File Management
The subprograms for bulk file operations within the DBMS_CLOUD package are described in this table.
Subprogram Description
BULK_COPY Procedure This procedure copies files from one Cloud Object Storage bucket to another.
BULK_DELETE Procedure This procedure deletes files from a Cloud Object Storage bucket or folder.
BULK_DOWNLOAD Procedure This procedure downloads files from the Cloud Object Store bucket to a directory in Autonomous Database.
BULK_MOVE Procedure This procedure moves files from one Cloud Object Storage bucket to another.
BULK_UPLOAD Procedure This procedure uploads files from a directory in Autonomous Database to the Cloud Object Storage.
• BULK_COPY Procedure
This procedure bulk copies files from one Cloud Object Storage bucket to another. The
overloaded form enables you to use the operation_id parameter.
• BULK_DELETE Procedure
This procedure bulk deletes files from the Cloud Object Storage. The overloaded form
enables you to use the operation_id parameter. You can filter the list of files to be
deleted using a regular expression pattern compatible with REGEXP_LIKE operator.
• BULK_DOWNLOAD Procedure
This procedure downloads files into an Autonomous Database directory from Cloud
Object Storage. The overloaded form enables you to use the operation_id parameter.
You can filter the list of files to be downloaded using a regular expression pattern
compatible with REGEXP_LIKE operator.
• BULK_MOVE Procedure
This procedure bulk moves files from one Cloud Object Storage bucket or folder to
another. The overloaded form enables you to use the operation_id parameter.
• BULK_UPLOAD Procedure
This procedure copies files into Cloud Object Storage from an Autonomous Database
directory. The overloaded form enables you to use the operation_id parameter.
BULK_COPY Procedure
This procedure bulk copies files from one Cloud Object Storage bucket to another. The
overloaded form enables you to use the operation_id parameter.
You can filter the list of files to be copied using a regular expression pattern compatible with the REGEXP_LIKE operator.
The source and target bucket or folder can be located in the same or different Cloud Object Store providers.
When the source and target are in distinct Object Stores, or have different accounts with the same cloud provider, you can give separate credential names for the source and target locations.
By default, the source credential name is also used for the target location.
Syntax
DBMS_CLOUD.BULK_COPY (
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_location_uri IN VARCHAR2,
target_location_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_COPY (
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_location_uri IN VARCHAR2,
target_location_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter Description
source_credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. See ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a source_credential_name value, the credential_name is set to NULL.
source_location_uri Specifies the URI that points to the source Object Storage bucket or folder location.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage
service. See DBMS_CLOUD URI Formats for more information.
target_location_uri Specifies the URI for the target Object Storage bucket or folder,
where the files need to be copied.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage
service. See DBMS_CLOUD URI Formats for more information.
target_credential_name The name of the credential to access the target Cloud Object Storage location.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. See ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a target_credential_name value, the target_credential_name is set to the source_credential_name value.
regex_filter Specifies the REGEX expression to filter files. The REGEX
expression pattern must be compatible with the REGEXP_LIKE
operator.
If you do not supply a regex_filter value, the regex_filter is
set to NULL.
See REGEXP_LIKE Condition for more information.
format Specifies the additional configuration options for the file operation.
These options are specified as a JSON string.
The supported format options are:
• logretention: It accepts an integer value that determines the
duration in days for which the status table is retained for a bulk
operation.
The default value is 2 days.
• logprefix: It accepts a string value that determines the bulk
operation status table name prefix string.
The operation type is the default value. For BULK_COPY, the
default logprefix value is COPYOBJ.
• priority: It accepts a string value that determines the number
of file operations performed concurrently.
An operation with a higher priority consumes more database resources and is completed sooner.
It accepts the following values:
– HIGH: Determines the number of parallel files handled
using the database's ECPU count (OCPU count if your
database uses OCPUs)
– MEDIUM: Determines the number of simultaneous
processes using the concurrency limit for Medium service.
The default value is 4.
– LOW: Process the files in serial order.
The default value is MEDIUM.
The maximum number of concurrent file operations is limited to
64.
If you do not supply a format value, the format is set to NULL.
operation_id Use this parameter to track the progress and final status of the load
operation as the corresponding ID in the USER_LOAD_OPERATIONS
view.
Usage Notes
• An error is returned when the source and target URI point to the same Object
Storage bucket or folder.
Example
BEGIN
DBMS_CLOUD.BULK_COPY (
source_credential_name => 'OCI_CRED',
source_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/o',
target_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/o',
format => JSON_OBJECT ('logretention' value 7, 'logprefix'
value 'BULKOP')
);
END;
/
BULK_DELETE Procedure
This procedure bulk deletes files from the Cloud Object Storage. The overloaded form
enables you to use the operation_id parameter. You can filter the list of files to be
deleted using a regular expression pattern compatible with REGEXP_LIKE operator.
Syntax
DBMS_CLOUD.BULK_DELETE(
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_DELETE (
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a credential_name value, the
credential_name is set to NULL.
location_uri Specifies the URI that points to an Object Storage location.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage
service. See DBMS_CLOUD URI Formats for more information.
regex_filter Specifies the REGEX expression to filter files. The REGEX
expression pattern must be compatible with the REGEXP_LIKE
operator.
If you do not supply a regex_filter value, the regex_filter is
set to NULL.
See REGEXP_LIKE Condition for more information.
format Specifies the additional configuration options for the file operation.
These options are specified as a JSON string.
The supported format options are:
• logretention: It accepts an integer value that determines the duration in days for which the status table is retained for a bulk operation.
The default value is 2 days.
• logprefix: It accepts a string value that determines the bulk
operation status table name prefix string.
The operation type is the default value. For BULK_DELETE, the
default logprefix value is DELETE.
• priority: It accepts a string value that determines the number
of file operations performed concurrently.
An operation with a higher priority consumes more database
resources and is completed sooner.
It accepts the following values:
– HIGH: Determines the number of parallel files handled
using the database's ECPU count (OCPU count if your
database uses OCPUs).
– MEDIUM: Determines the number of simultaneous
processes using the concurrency limit for Medium service.
The default value is 4.
– LOW: Process the files in serial order.
The default value is MEDIUM.
The maximum number of concurrent file operations is limited to
64.
If you do not supply a format value, the format is set to NULL.
operation_id Use this parameter to track the progress and final status of the load
operation as the corresponding ID in the USER_LOAD_OPERATIONS
view.
Example
BEGIN
DBMS_CLOUD.BULK_DELETE (
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
format => JSON_OBJECT ('logretention' value 5, 'logprefix'
value 'BULKDEL')
);
END;
/
BULK_DOWNLOAD Procedure
This procedure downloads files into an Autonomous Database directory from Cloud
Object Storage. The overloaded form enables you to use the operation_id parameter.
You can filter the list of files to be downloaded using a regular expression pattern
compatible with REGEXP_LIKE operator.
Syntax
DBMS_CLOUD.BULK_DOWNLOAD (
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
directory_name IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_DOWNLOAD (
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
directory_name IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a credential_name value, the
credential_name is set to NULL.
location_uri Specifies the URI that points to an Object Storage location.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage
service. See DBMS_CLOUD URI Formats for more information.
directory_name The name of the directory on the Autonomous Database where you want to save the downloaded files.
This parameter is mandatory.
regex_filter Specifies the REGEX expression to filter files. The REGEX
expression pattern must be compatible with the REGEXP_LIKE
operator.
If you do not supply a regex_filter value, the regex_filter is
set to NULL.
See REGEXP_LIKE Condition for more information.
format Specifies the additional configuration options for the file operation.
These options are specified as a JSON string.
The supported format options are:
• logretention: It accepts an integer value that determines the
duration in days for which the status table is retained for a bulk
operation.
The default value is 2 days.
• logprefix: It accepts a string value that determines the bulk operation status table name prefix string.
The operation type is the default value. For BULK_DOWNLOAD, the default logprefix value is DOWNLOAD.
• priority: It accepts a string value that determines the number
of file operations performed concurrently.
An operation with a higher priority consumes more database
resources and is completed sooner.
It accepts the following values:
– HIGH: Determines the number of parallel files handled
using the database's ECPU count (OCPU count if your
database uses OCPUs).
– MEDIUM: Determines the number of simultaneous
processes using the concurrency limit for Medium service.
The default value is 4.
– LOW: Process the files in serial order.
The default value is MEDIUM.
The maximum number of concurrent file operations is limited to
64.
If you do not supply a format value, the format is set to NULL.
operation_id Use this parameter to track the progress and final status of the load
operation as the corresponding ID in the USER_LOAD_OPERATIONS
view.
Example
BEGIN
DBMS_CLOUD.BULK_DOWNLOAD (
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
directory_name => 'BULK_TEST',
format => JSON_OBJECT ('logretention' value 7, 'logprefix'
value 'BULKOP')
);
END;
/
BULK_MOVE Procedure
This procedure bulk moves files from one Cloud Object Storage bucket or folder to
another. The overloaded form enables you to use the operation_id parameter.
You can filter the list of files to be moved using a regular expression pattern compatible with the REGEXP_LIKE operator.
The source and target bucket or folder can be located in the same or different Cloud Object Store providers.
When the source and target are in distinct Object Stores, or have different accounts with the same cloud provider, you can give separate credential names for the source and target locations.
The source credential name is by default also used by the target location when the target credential name is not provided.
Moving files is a two-step operation: the files are first copied to the target location and then deleted from the source once they are successfully copied.
The object is renamed rather than moved if the Object Store allows renaming operations between the source and target locations.
Syntax
DBMS_CLOUD.BULK_MOVE (
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_location_uri IN VARCHAR2,
target_location_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_MOVE (
source_credential_name IN VARCHAR2 DEFAULT NULL,
source_location_uri IN VARCHAR2,
target_location_uri IN VARCHAR2,
target_credential_name IN VARCHAR2 DEFAULT NULL,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter Description
source_credential_name The name of the credential to access the source Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. See ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a source_credential_name value, the credential_name is set to NULL.
source_location_uri Specifies the URI that points to the source Object Storage bucket or folder location.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage service.
See DBMS_CLOUD URI Formats for more information.
target_location_uri Specifies the URI for the target Object Storage bucket or folder, where the
files need to be moved.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage service.
See DBMS_CLOUD URI Formats for more information.
target_credential_name The name of the credential to access the target Cloud Object Storage location.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name when resource principal is enabled. See ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a target_credential_name value, the target_credential_name is set to the source_credential_name value.
regex_filter Specifies the REGEX expression to filter files. The REGEX expression
pattern must be compatible with the REGEXP_LIKE operator.
If you do not supply a regex_filter value, the regex_filter is set to
NULL.
See REGEXP_LIKE Condition for more information.
format Specifies the additional configuration options for the file operation. These
options are specified as a JSON string.
The supported format options are:
• logretention: It accepts an integer value that determines the
duration in days for which the status table is retained for a bulk
operation.
The default value is 2 days.
• logprefix: It accepts a string value that determines the bulk
operation status table name prefix string.
The operation type is the default value. For BULK_MOVE, the default
logprefix value is MOVE.
• priority: It accepts a string value that determines the number of
file operations performed concurrently.
An operation with a higher priority consumes more database
resources and is completed sooner.
It accepts the following values:
– HIGH: Determines the number of parallel files handled using the
database's ECPU count (OCPU count if your database uses
OCPUs).
– MEDIUM: Determines the number of simultaneous processes
using the concurrency limit for Medium service. The default
value is 4.
– LOW: Process the files in serial order.
The default value is MEDIUM.
The maximum number of concurrent file operations is limited to 64.
If you do not supply a format value, the format is set to NULL.
operation_id Use this parameter to track the progress and final status of the load
operation as the corresponding ID in the USER_LOAD_OPERATIONS view.
Example
BEGIN
DBMS_CLOUD.BULK_MOVE (
source_credential_name => 'OCI_CRED',
source_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/o',
target_location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname2/o',
format => JSON_OBJECT ('logretention' value 7,
'logprefix' value 'BULKMOVE')
);
END;
/
Note:
An error is returned when the source and target URI point to the same Object
Storage bucket or folder.
BULK_UPLOAD Procedure
This procedure copies files into Cloud Object Storage from an Autonomous Database
directory. The overloaded form enables you to use the operation_id parameter.
Syntax
DBMS_CLOUD.BULK_UPLOAD (
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
directory_name IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL
);
DBMS_CLOUD.BULK_UPLOAD (
credential_name IN VARCHAR2 DEFAULT NULL,
location_uri IN VARCHAR2,
directory_name IN VARCHAR2,
regex_filter IN VARCHAR2 DEFAULT NULL,
format IN CLOB DEFAULT NULL,
operation_id OUT NUMBER
);
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
You can use 'OCI$RESOURCE_PRINCIPAL' as the credential_name
when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
If you do not supply a credential_name value, the credential_name
is set to NULL.
location_uri Specifies the URI that points to the Object Storage location where the files are uploaded.
This parameter is mandatory.
The format of the URIs depends on the Cloud Object Storage service.
See DBMS_CLOUD URI Formats for more information.
directory_name The name of the directory on the Autonomous Database from which you upload the files.
This parameter is mandatory.
regex_filter Specifies the REGEX expression to filter files. The REGEX expression
pattern must be compatible with REGEXP_LIKE operator.
If you do not supply a regex_filter value, the regex_filter is set to
NULL.
See REGEXP_LIKE Condition for more information.
format Specifies the additional configuration options for the file operation. These
options are specified as a JSON string.
The supported format options are:
• logretention: It accepts an integer value that determines the
duration in days for which the status table is retained for a bulk
operation.
The default value is 2 days.
• logprefix: It accepts a string value that determines the bulk
operation status table name prefix string.
The operation type is the default value. For BULK_UPLOAD, the
default logprefix value is UPLOAD.
• priority: It accepts a string value that determines the number of
file operations performed concurrently.
An operation with a higher priority consumes more database
resources and is completed sooner.
It accepts the following values:
– HIGH: Determines the number of parallel files handled using the
database's ECPU count (OCPU count if your database uses
OCPUs).
– MEDIUM: Determines the number of simultaneous processes
using the concurrency limit for Medium service. The default
value is 4.
– LOW: Process the files in serial order.
The default value is MEDIUM.
The maximum number of concurrent file operations is limited to 64.
If you do not supply a format value, the format is set to NULL.
operation_id Use this parameter to track the progress and final status of the load
operation as the corresponding ID in the USER_LOAD_OPERATIONS view.
Example
BEGIN
DBMS_CLOUD.BULK_UPLOAD (
credential_name => 'OCI_CRED',
location_uri => 'https://objectstorage.us-
phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
directory_name => 'BULK_TEST',
format => JSON_OBJECT ('logretention' value 5,
'logprefix' value 'BULKUPLOAD')
);
END;
/
DBMS_CLOUD REST APIs
This section covers the DBMS_CLOUD REST APIs provided with Autonomous Database.
The DBMS_CLOUD REST API functions allow you to make HTTP requests using
DBMS_CLOUD.SEND_REQUEST and obtain and save results. These functions provide a
generic API that lets you call any REST API with the following supported cloud
services:
• Oracle Cloud Infrastructure
See API Reference and Endpoints for information on Oracle Cloud Infrastructure
REST APIs.
• Amazon Web Services (AWS)
See Guides and API References for information on Amazon Web Services REST
APIs.
• Azure Cloud (support for Azure Cloud REST API calls is limited to the domain "blob.windows.net")
See Azure REST API Reference for information on Azure REST APIs.
• Oracle Cloud Infrastructure Classic
See All REST Endpoints for information on Oracle Cloud Infrastructure Classic
REST APIs.
• GitHub Repository
See GitHub REST API for more information.
DBMS_CLOUD supports GET, PUT, POST, HEAD and DELETE HTTP methods. The REST API
method to be used for an HTTP request is typically documented in the Cloud REST
API documentation.
When you use DBMS_CLOUD.SEND_REQUEST and set the cache parameter to TRUE, results are
saved and you can view past results in the SESSION_CLOUD_API_RESULTS view. Saving and
querying historical results of DBMS_CLOUD REST API requests can help you when you need to
work with your previous results in your applications.
For example, to query recent DBMS_CLOUD REST API results, use the view
SESSION_CLOUD_API_RESULTS:
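A minimal sketch of such a query (uri, timestamp, and response_status_code are among the view's columns, described later in this section):
SELECT uri, timestamp, response_status_code
FROM SESSION_CLOUD_API_RESULTS;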
When you save DBMS_CLOUD REST API results with DBMS_CLOUD.SEND_REQUEST the saved
data is only available within the same session (connection). After the session exits, the saved
data is no longer available.
Use DBMS_CLOUD.GET_API_RESULT_CACHE_SIZE and
DBMS_CLOUD.SET_API_RESULT_CACHE_SIZE to view and set the DBMS_CLOUD REST API cache
size, and to disable caching.
• DBMS_CLOUD REST API Results cache_scope Parameter
When you save DBMS_CLOUD REST API results with DBMS_CLOUD.SEND_REQUEST, access to
the results in SESSION_CLOUD_API_RESULTS is provided based on the value of
cache_scope.
• DBMS_CLOUD REST API SESSION_CLOUD_API_RESULTS View
You can save DBMS_CLOUD REST API results when you set the cache parameter to true
with DBMS_CLOUD.SEND_REQUEST. The SESSION_CLOUD_API_RESULTS view describes the
columns you can use when REST API results are saved.
DBMS_CLOUD REST API Results cache_scope Parameter
When you save DBMS_CLOUD REST API results with DBMS_CLOUD.SEND_REQUEST, access to the
results in SESSION_CLOUD_API_RESULTS is provided based on the value of cache_scope.
By default, cache_scope is 'PRIVATE' and only the current user of the session can access the results. If you set cache_scope to 'PUBLIC', then all session users can access the results.
The default value for cache_scope specifies that each user can only see DBMS_CLOUD.SEND_REQUEST REST API results generated by the procedures they invoke with invoker's rights. When you invoke DBMS_CLOUD.SEND_REQUEST in a session, there are three
possibilities that determine whether the current user can see results in the cache, based on the cache_scope value.
DBMS_CLOUD REST API SESSION_CLOUD_API_RESULTS View
The SESSION_CLOUD_API_RESULTS view describes the columns available when REST API results are saved with the cache parameter set to true in DBMS_CLOUD.SEND_REQUEST.
Column Description
URI The DBMS_CLOUD REST API request URL
TIMESTAMP The DBMS_CLOUD REST API response timestamp
CLOUD_TYPE The DBMS_CLOUD REST API cloud type, such as Oracle Cloud
Infrastructure, AMAZON_S3, and AZURE_BLOB
REQUEST_METHOD The DBMS_CLOUD REST API request method, such as GET, PUT,
HEAD
REQUEST_HEADERS The DBMS_CLOUD REST API request headers
REQUEST_BODY_TEXT The DBMS_CLOUD REST API request body in CLOB
RESPONSE_STATUS_CODE The DBMS_CLOUD REST API response status code, such as
200(OK), 404(Not Found)
RESPONSE_HEADERS The DBMS_CLOUD REST API response headers
RESPONSE_BODY_TEXT The DBMS_CLOUD REST API response body in CLOB
SCOPE The cache_scope set by DBMS_CLOUD.SEND_REQUEST. Valid
values are PUBLIC or PRIVATE.
GET_RESPONSE_HEADERS Function
This function returns the HTTP response headers as JSON data in a JSON object.
Syntax
DBMS_CLOUD.GET_RESPONSE_HEADERS(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN JSON_OBJECT_T;
Parameters
Parameter Description
resp HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST.
Exceptions
GET_RESPONSE_RAW Function
This function returns the HTTP response in RAW format. This is useful if the HTTP response is expected to be in binary format.
Syntax
DBMS_CLOUD.GET_RESPONSE_RAW(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN BLOB;
Parameters
Parameter Description
resp HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST.
Exceptions
GET_RESPONSE_STATUS_CODE Function
This function returns the HTTP response status code as an integer. The status code
helps to identify if the request is successful.
Syntax
DBMS_CLOUD.GET_RESPONSE_STATUS_CODE(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN PLS_INTEGER;
Parameters
Parameter Description
resp HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST.
Exceptions
GET_RESPONSE_TEXT Function
This function returns the HTTP response in TEXT format (VARCHAR2 or CLOB). Usually, most Cloud REST APIs return a JSON response in text format. This function is useful if you expect the HTTP response to be in text format.
Syntax
DBMS_CLOUD.GET_RESPONSE_TEXT(
resp IN DBMS_CLOUD_TYPES.resp)
RETURN CLOB;
Parameters
Parameter Description
resp HTTP Response type returned from DBMS_CLOUD.SEND_REQUEST.
Exceptions
GET_API_RESULT_CACHE_SIZE Function
This function returns the configured result cache size. The cache size value only applies for
the current session.
Syntax
DBMS_CLOUD.GET_API_RESULT_CACHE_SIZE()
RETURN NUMBER;
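For example, to check the cache size configured for the current session:
SELECT DBMS_CLOUD.GET_API_RESULT_CACHE_SIZE() FROM DUAL;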
SEND_REQUEST Function and Procedure
This function and procedure sends an HTTP request, obtains the response, and can optionally wait for an associated asynchronous work request to complete. The function form returns the response as DBMS_CLOUD_TYPES.resp; the procedure form discards it.
Syntax
DBMS_CLOUD.SEND_REQUEST(
credential_name IN VARCHAR2,
uri IN VARCHAR2,
method IN VARCHAR2,
headers IN CLOB DEFAULT NULL,
async_request_url IN VARCHAR2 DEFAULT NULL,
wait_for_states IN DBMS_CLOUD_TYPES.wait_for_states_t DEFAULT NULL,
timeout IN NUMBER DEFAULT 0,
cache IN BOOLEAN DEFAULT FALSE,
cache_scope IN VARCHAR2 DEFAULT 'PRIVATE',
body IN BLOB DEFAULT NULL)
RETURN DBMS_CLOUD_TYPES.resp;
DBMS_CLOUD.SEND_REQUEST(
credential_name IN VARCHAR2,
uri IN VARCHAR2,
method IN VARCHAR2,
headers IN CLOB DEFAULT NULL,
async_request_url IN VARCHAR2 DEFAULT NULL,
wait_for_states IN DBMS_CLOUD_TYPES.wait_for_states_t DEFAULT NULL,
timeout IN NUMBER DEFAULT 0,
cache IN BOOLEAN DEFAULT FALSE,
cache_scope IN VARCHAR2 DEFAULT 'PRIVATE',
body IN BLOB DEFAULT NULL);
Parameters
Parameter Description
credential_name The name of the credential for authenticating with the corresponding
cloud native API.
You can use 'OCI$RESOURCE_PRINCIPAL' as the
credential_name when resource principal is enabled. See
ENABLE_RESOURCE_PRINCIPAL for more information.
uri HTTP URI to make the request.
method HTTP Request Method: GET, PUT, POST, HEAD, DELETE. Use the
DBMS_CLOUD package constant to specify the method.
See DBMS_CLOUD REST API Constants for more information.
headers HTTP Request headers for the corresponding cloud native API in JSON format. The authentication headers are set automatically; pass only custom headers.
async_request_url An asynchronous request URL.
To obtain the URL, select your request API from the list of APIs (see https://docs.cloud.oracle.com/en-us/iaas/api/). Then, navigate to the API for your request in the left pane, for example, Database Services API → Autonomous Database → StopAutonomousDatabase. This page shows the API home and the base endpoint. Append the relative path obtained from your work request's WorkRequest link to the base endpoint.
wait_for_states Wait for states is a status of type:
DBMS_CLOUD_TYPES.wait_for_states_t. The following are valid
values for expected states: 'ACTIVE', 'CANCELED', 'COMPLETED',
'DELETED', 'FAILED', 'SUCCEEDED'.
Multiple states are allowed for wait_for_states. The default
value for wait_for_states is to wait for any of the expected
states: 'ACTIVE', 'CANCELED', 'COMPLETED', 'DELETED', 'FAILED',
'SUCCEEDED'.
timeout Specifies the timeout, in seconds, for asynchronous requests with
the parameters async_request_url and wait_for_states.
Default value is 0. This indicates to wait for completion of the
request without any timeout.
cache If TRUE, the request result is cached in the REST result API cache.
The default value is FALSE, which means REST API requests are not cached.
cache_scope Specifies whether everyone can have access to this request result cache. Valid values: 'PRIVATE' and 'PUBLIC'. The default value is 'PRIVATE'.
body HTTP Request Body for PUT and POST requests.
Exceptions
Usage Notes
• If you are using Oracle Cloud Infrastructure, you must use a Signing Key based
credential value for the credential_name. See CREATE_CREDENTIAL Procedure for
more information.
• The optional parameters async_request_url, wait_for_states, and timeout allow you to handle long-running requests. Using this asynchronous form of send_request, the function waits for the completion status specified in wait_for_states before returning. With these parameters you pass the expected return states in the wait_for_states parameter and use the async_request_url parameter to specify an associated work request; the request does not return immediately. Instead, the request probes the async_request_url until the return state is one of the expected states or the timeout is exceeded (the timeout parameter is optional). If no timeout is specified, the request waits until a state found in wait_for_states occurs.
SET_API_RESULT_CACHE_SIZE Procedure
This procedure sets the maximum cache size for current session. The cache size value only
applies for the current session.
Syntax
DBMS_CLOUD.SET_API_RESULT_CACHE_SIZE(
cache_size IN NUMBER);
Parameters
Parameter Description
cache_size Set the maximum cache size to the specified value cache_size. If the
new maximum cache size is smaller than the current cache size, older
records are dropped until the number of rows is equal to the specified
maximum cache size. The maximum value is 10000.
If the cache size is set to 0, caching is disabled in the session.
The default cache size is 10.
Exceptions
Example
EXEC DBMS_CLOUD.SET_API_RESULT_CACHE_SIZE(101);
Note:
These examples show Oracle Cloud Infrastructure request APIs and require
that you use a Signing Key based credential for the credential_name.
Oracle Cloud Infrastructure Signing Key based credentials include the
private_key and fingerprint arguments.
For example:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'OCI_KEY_CRED',
user_ocid => 'ocid1.user.oc1..aaaaaaaauq54mi7zdyfhw33ozkwuontjceel7fok5nq3bf2vwetkpqsoa',
tenancy_ocid => 'ocid1.tenancy.oc1..aabbbbbbaafcue47pqmrf4vigneebgbcmmoy5r7xvoypicjqqge32ewnrcyx2a',
private_key => 'MIIEogIBAAKCAQEAtUnxbmrekwgVac6FdWeRzoXvIpA9+0r1.....wtnNpESQQQ0QLGPD8NM//JEBg=',
fingerprint => 'f2:db:f9:18:a4:aa:fc:94:f4:f6:6c:39:96:16:aa:27');
END;
/
See CreateBucket for details on the Oracle Cloud Infrastructure Object Storage Service API
for this example.
SET SERVEROUTPUT ON
DECLARE
resp DBMS_CLOUD_TYPES.resp;
BEGIN
-- Send request
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://objectstorage.region.oraclecloud.com/n/namespace-
string/b/',
method => DBMS_CLOUD.METHOD_POST,
body => UTL_RAW.cast_to_raw(
JSON_OBJECT('name' value 'bucketname',
'compartmentId' value
'compartment_OCID'))
);
END;
/
Notes:
• In this example, namespace-string is the Oracle Cloud Infrastructure object storage
namespace and bucketname is the bucket name. See Understanding Object Storage
Namespaces for more information.
• Where: region is an endpoint region. See Object Storage API reference in API
Reference and Endpoints for more information. For example, where region is: us-
phoenix-1.
See DeleteBucket for details on the Oracle Cloud Infrastructure Object Storage
Service API for this example.
SET SERVEROUTPUT ON
DECLARE
resp DBMS_CLOUD_TYPES.resp;
BEGIN
-- Send request
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://objectstorage.region.oraclecloud.com/n/
namespace-string/b/bucketname',
method => DBMS_CLOUD.METHOD_DELETE
);
END;
/
Notes:
• In this example, namespace-string is the Oracle Cloud Infrastructure object
storage namespace and bucketname is the bucket name. See Understanding
Object Storage Namespaces for more information.
• Where: region is an endpoint region. See Object Storage API reference in API
Reference and Endpoints for more information. For example, where region is: us-
phoenix-1.
See ListCompartments for details on the Oracle Cloud Infrastructure Identity and
Access Management Service API for this example.
--
-- List compartments
--
DECLARE
resp DBMS_CLOUD_TYPES.resp;
root_compartment_ocid VARCHAR2(512) := '&1';
BEGIN
-- Send request
dbms_output.put_line('Send Request');
resp := DBMS_CLOUD.send_request(
credential_name => 'OCI_KEY_CRED',
uri => 'https://identity.region.oraclecloud.com/20160918/
compartments?compartmentId=' || root_compartment_ocid,
method => DBMS_CLOUD.METHOD_GET,
headers => JSON_OBJECT('opc-request-id' value 'list-
compartments')
);
dbms_output.put_line('Body: ' || '------------' || CHR(10) ||
DBMS_CLOUD.get_response_text(resp) || CHR(10));
dbms_output.put_line('Headers: ' || CHR(10) || '------------' || CHR(10)
|| DBMS_CLOUD.get_response_headers(resp).to_clob || CHR(10));
dbms_output.put_line('Status Code: ' || CHR(10) || '------------' ||
CHR(10) || DBMS_CLOUD.get_response_status_code(resp));
dbms_output.put_line(CHR(10));
END;
/
Where: region is an endpoint region. See Identity and Access Management (IAM) API
reference in API Reference and Endpoints for more information. For example, where region
is: uk-london-1.
--
-- Sent Work Request Autonomous Database Stop Request with Wait for Status
DECLARE
l_resp DBMS_CLOUD_TYPES.resp;
l_resp_json JSON_OBJECT_T;
l_key_shape JSON_OBJECT_T;
l_body JSON_OBJECT_T;
status_array DBMS_CLOUD_TYPES.wait_for_states_t;
BEGIN
status_array := DBMS_CLOUD_TYPES.wait_for_states_t('SUCCEEDED');
l_body := JSON_OBJECT_T('{}');
l_body.put('autonomousDatabaseId', 'ocid');
-- Send request
dbms_output.put_line(l_body.to_clob);
dbms_output.put_line('Send Request');
l_resp := DBMS_CLOUD.send_request(
credential_name => 'NATIVE_CRED_OCI',
uri => 'https://
database.region.oraclecloud.com/20160918/autonomousDatabases/ocid/actions/
stop',
      method            => DBMS_CLOUD.METHOD_POST,
      body              => UTL_RAW.cast_to_raw(l_body.to_clob),
      -- The async_request_url below is an assumed work-request endpoint
      -- (a sketch); derive the actual URL as described for the
      -- async_request_url parameter.
      async_request_url => 'https://iaas.region.oraclecloud.com/20160918/workRequests',
      wait_for_states   => status_array,
      timeout           => 600);
   dbms_output.put_line('Response Body: ' || DBMS_CLOUD.get_response_text(l_resp));
END;
/
Where: region is an endpoint region. See Identity and Access Management (IAM) API
reference in API Reference and Endpoints for more information. For example, where
region is: uk-london-1.
The ocid is the Oracle Cloud Infrastructure resource identifier. See Resource
Identifiers for more information.
Oracle Cloud Infrastructure Object Storage Native URI Format
If your source files reside on Oracle Cloud Infrastructure Object Storage, use the following native URI format:
https://objectstorage.region.oraclecloud.com/n/namespace-string/b/bucket/o/filename
For example, the Native URI for the file channels.txt in the bucketname bucket in the
Phoenix data center is:
https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/
bucketname/o/channels.txt
1. Open the Oracle Cloud Infrastructure Console by clicking the navigation menu icon next to Oracle Cloud.
2. From the Oracle Cloud Infrastructure left navigation menu click Core Infrastructure.
Under Object Storage, click Object Storage.
3. Under List Scope, select a Compartment.
4. From the Name column, select a bucket.
5. In the Objects area, click View Object Details.
6. On the Object Details page, the URL Path (URI) field shows the URI to access the
object.
Note:
The source files need to be stored in an Object Storage tier bucket. Autonomous
Database does not support buckets in the Archive Storage tier. See Overview of
Object Storage for more information.
Oracle Cloud Infrastructure Object Storage Swift URI Format
If your source files reside on Oracle Cloud Infrastructure Object Storage, you can also use the following Swift URI format:
https://swiftobjectstorage.region.oraclecloud.com/v1/namespace-string/bucket/filename
For example, the Swift URI for the file channels.txt in the bucketname bucket in the
Phoenix data center is:
https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/namespace-
string/bucketname/channels.txt
Note:
The source files need to be stored in an Object Storage tier bucket.
Autonomous Database does not support buckets in the Archive Storage tier.
See Overview of Object Storage for more information.
Oracle Cloud Infrastructure Object Storage URI Format Using Pre-Authenticated Request
URL
If your source files reside on Oracle Cloud Infrastructure Object Storage, you can use Oracle Cloud Infrastructure pre-authenticated URIs. When you create a pre-authenticated request, a unique URL is generated. You can then provide the unique URL to users in your organization, partners, or third parties to access the Object Storage resource target identified in the pre-authenticated request.
Note:
Carefully assess the business requirement for and the security ramifications
of pre‑authenticated access. When you create the pre-authenticated request
URL, note the Expiration and the Access Type to make sure they are
appropriate for your use.
A pre-authenticated request URL gives anyone who has the URL access to
the targets identified in the request for as long as the request is active. In
addition to considering the operational needs of pre-authenticated access, it
is equally important to manage its distribution.
https://objectstorage.region.oraclecloud.com/p/encrypted_string/n/namespace-string/b/
bucket/o/filename
For example, a sample pre-authenticated URI for the file channels.txt in the bucketname
bucket in the Phoenix data center is:
https://objectstorage.us-phoenix-1.oraclecloud.com/p/2xN-uDtWJNsiD910UCYGue/n/
namespace-string/b/bucketname/o/channels.txt
For example:
BEGIN
DBMS_CLOUD.COPY_DATA(
table_name =>'CHANNELS',
file_uri_list =>'https://objectstorage.us-phoenix-1.oraclecloud.com/p/
unique-pre-authenticated-string/n/namespace-string/b/bucketname/o/
channels.txt',
format => json_object('delimiter' value ',') );
END;
/
Note:
A list of mixed URLs is valid. If the URL list contains both pre-authenticated URLs
and URLs that require authentication, DBMS_CLOUD uses the specified
credential_name to access the URLs that require authentication and for the pre-
authenticated URLs the specified credential_name is ignored.
Note:
Carefully assess the business requirement for and the security ramifications of using public URLs. When you use public URLs, the file content is not authenticated, so make sure this is appropriate for your use.
You can use a public URL in any DBMS_CLOUD procedure that takes a URL to access
files in your object store, without the need to create a credential. You need to either
specify the credential_name parameter as NULL or not supply a credential_name
parameter.
For example, the following uses DBMS_CLOUD.COPY_DATA without a credential_name:
BEGIN
DBMS_CLOUD.COPY_DATA(
table_name =>'CHANNELS',
file_uri_list =>'https://objectstorage.us-
ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/
chan_v3.dat',
format => json_object('delimiter' value ',') );
END;
/
Note:
A list of mixed URLs is valid. If the URL list contains both public URLs and
URLs that require authentication, DBMS_CLOUD uses the specified
credential_name to access the URLs that require authentication and for the
public URLs the specified credential_name is ignored.
See Public Buckets for information on using Oracle Cloud Infrastructure public
buckets.
You can use a presigned URL in any DBMS_CLOUD procedure that takes a URL to
access files in Amazon S3 object store, without the need to create a credential. To use
a presigned URL in any DBMS_CLOUD procedure, either specify the credential_name
parameter as NULL, or do not supply a credential_name parameter.
Note:
DBMS_CLOUD supports the standard Amazon S3 endpoint syntax to access your
buckets. DBMS_CLOUD does not support Amazon S3 legacy endpoints. See Legacy
Endpoints for more information.
For example, an Azure Blob Storage URI for the file channels.txt, where adb_user is the storage account and adb is the container, is:
https://adb_user.blob.core.windows.net/adb/channels.txt
Note:
You can use Shared Access Signatures (SAS) URL in any DBMS_CLOUD procedure
that takes a URL to access files in Azure Blob Storage, without the need to create a
credential. To use a Shared Access Signature (SAS) URL, either specify the
credential_name parameter as NULL, or do not supply a credential_name
parameter.
See Grant Limited Access to Azure Storage Resources Using Shared Access
Signatures (SAS) for more information.
Note:
To use DBMS_CLOUD with an Amazon S3 compatible object store you need to provide
valid credentials. See CREATE_CREDENTIAL Procedure for more information.
If your source files reside on a service that supports Amazon S3 compatible URIs, use the
following URI format to access your files:
• Oracle Cloud Infrastructure Object Storage S3 Compatible URL
Object URL Format:
https://mynamespace.compat.objectstorage.region.oraclecloud.com/bucket_name/object_name
Bucket URL Format:
https://mynamespace.compat.objectstorage.region.oraclecloud.com/bucket_name
See Amazon S3 Compatibility and Object Storage Service API for more
information.
• Google Cloud Storage S3 Compatible URL
Object URL Format:
https://bucketname.storage.googleapis.com/object_name
Bucket URL Format:
https://bucketname.storage.googleapis.com/
See Migrating from Amazon S3 to Cloud Storage and Request Endpoints for more
information.
• Wasabi S3 Compatible URL
Object URL Format:
https://bucketname.s3.region.wasabisys.com/object_name
Bucket URL Format:
https://bucketname.s3.region.wasabisys.com/
See S3-compatible API Connectivity for Wasabi Hot Cloud Storage and What are
the service URLs for Wasabi's different regions? for more information.
Note:
For DBMS_CLOUD access with GitHub Raw URLs, repository access is limited to read-only functionality. DBMS_CLOUD APIs that write data, such as DBMS_CLOUD.PUT_OBJECT, are not supported on a GitHub Repository.
As an alternative, use DBMS_CLOUD_REPO.PUT_FILE to upload data to a GitHub Repository.
Use GitHub Raw URLs with DBMS_CLOUD APIs to access source files that reside on a GitHub
Repository. When you browse to a file on GitHub and click the Raw link, this shows the
GitHub Raw URL. The raw.githubusercontent.com domain provides unprocessed versions
of files stored in GitHub repositories.
For example, using DBMS_CLOUD.GET_OBJECT:
BEGIN
DBMS_CLOUD.GET_OBJECT(
credential_name => 'MY_CRED',
object_uri => 'https://raw.githubusercontent.com/myaccount/myrepo/
master/data-management-library/autonomous-database/adb-loading.csv',
directory_name => 'DATA_PUMP_DIR'
);
END;
/
BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
credential_name => 'MY_CRED',
table_name => 'EMPLOYEES_EXT',
file_uri_list => 'https://raw.githubusercontent.com/myaccount/myrepo/
master/data-management-library/autonomous-database/*.csv',
column_list => 'name varchar2(30), gender varchar2(30), salary
number',
format => JSON_OBJECT('type' value 'csv')
);
END;
/
SELECT * FROM employees_ext;
DBMS_CLOUD procedures that take a URL to access a GitHub Repository do not require credentials for GitHub repositories with public visibility. To use a public visibility URL you can specify the credential_name parameter as NULL or not supply a credential_name parameter.
See Setting repository visibility for more information.
DBMS_CLOUD also supports customer-managed endpoint URIs. In those cases, DBMS_CLOUD relies on the proper URI scheme to identify the authentication scheme for the customer-managed endpoint.
Examples:
Customer-managed endpoint using S3-compatible authentication.
This example shows how, for new URIs, customers can add the public or private host name pattern using the DBMS_NETWORK_ACL_ADMIN package. The code block, executed by user ADMIN, enables HTTPS access for user SCOTT to endpoints in the domain *.myprivatesite.com. It then shows how user SCOTT accesses the newly enabled endpoint. Note that the credential MY_CRED for user SCOTT must store the access key and secret key for the S3-compatible authentication performed for the HTTP request indicated by the URI prefix.
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => '*.myprivatesite.com',
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'SCOTT',
principal_type => xs_acl.ptype_db)
);
END;
/
BEGIN
DBMS_CLOUD.get_object(
credential_name => 'MY_CRED',
object_uri => 's3://bucket.myprivatesite.com/file1.csv',
directory_name => 'MY_DIR' );
END;
/
The next example enables ACL access to a public REST endpoint and then sends a request without a credential:
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 'data.cms.gov',
ace => xs$ace_type(privilege_list => xs$name_list('http'),
principal_name => 'SCOTT',
principal_type => xs_acl.ptype_db)
);
END;
/
SELECT DBMS_CLOUD.get_response_text(
DBMS_CLOUD.send_request(
uri => 'public://data.cms.gov/provider-data/api/1/
datastore/imports/a',
method => DBMS_CLOUD.METHOD_GET,
headers => JSON_OBJECT('Accept' VALUE 'application/json')
)
)
FROM DUAL;
DBMS_CLOUD Package Format Options
The format argument in DBMS_CLOUD specifies the format of source files. The two ways to specify the format argument are:
format => '{"format_option" : "format_value" }'
And:
format => json_object('format_option' value 'format_value')
Examples:
format => json_object('type' VALUE 'CSV')
For example:
format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value
'true',
'dateformat' value 'YYYY-MM-DD-HH24-MI-SS',
'blankasnull' value 'true', 'logretention' value 7)
Note:
For Avro, ORC, or Parquet format options, see DBMS_CLOUD Package
Format Options for Avro, ORC, or Parquet.
As noted in the Format Option column, a limited set of format options is valid with DBMS_CLOUD.COPY_COLLECTION or with DBMS_CLOUD.COPY_DATA when the format type is JSON.
detectfieldorder
Value: detectfieldorder: true
Default value: false
Specifies that the fields in the external data files are in a different order than the columns in the table. The order of fields is detected using the first row of each external data file and mapped to the columns of the table. The field names in external data files are compared in a case-insensitive manner with the names of the table columns.
This format option is applicable for the following procedures:
• DBMS_CLOUD.COPY_DATA
• DBMS_CLOUD.CREATE_EXTERNAL_TABLE
• DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE
• DBMS_CLOUD.CREATE_HYBRID_PART_TABLE
Restrictions for detectfieldorder:
• Field names in the data file must appear in the first record line and must not contain any white spaces between the field names.
• The field delimiter in the field names record must be the same as the field delimiter for the data in the file.
• Quoted field names are not supported. The field names in data files are compared, in a case-insensitive manner, with the names of the external table columns.
• Embedded field delimiters are not allowed in the field names.
• The number of columns in the table must match the number of fields in the data files.
• This format option is not applicable for Bigdata or Oracle Data Pump formats, as those formats have precise column metadata information in the binary file format. The text formats CSV, JSON, Parquet, or XML can benefit from this automatic field order detection when the first line contains the field names.
See FIELD NAMES and the description for ALL FILES for more information.
'"partition_columns":
["state","zipcode"]'
'"partition_columns":[
{"name":"country",
"type":"varchar2(10)"},
{"name":"year",
"type":"number"},
{"name":"month",
"type":"varchar2(10)"}]'
If the data files are unstructured and the type sub-
clause is specified with partition_columns, the
type sub-clause is ignored.
For object names that are not based on hive
format, the order of the partition_columns
specified columns must match the order as they
appear in the object name in the file_uri_list.
quote
Syntax: quote: character
Default value: Null, meaning no quote
Specifies the quote character for the fields; when specified, the quote characters are removed during loading.
format => json_object('recorddelimiter' VALUE '''\r\n''')
trimspaces
Syntax: trimspaces: rtrim | ltrim | notrim | lrtrim | ldrtrim
Default value: notrim
Specifies how the leading and trailing spaces of the fields are trimmed. See the description of trim_spec.

truncatecol
Syntax: truncatecol:true
Default value: false
If the data in the file is too long for a field, then this option truncates the value of the field rather than rejecting the row.
And:

format => json_object('format_option' value 'format_value')

Examples:

format => json_object('type' VALUE 'json')

For example:

format => json_object('compression' value 'gzip', 'type' value 'json')
This table covers the format options for DBMS_CLOUD.EXPORT_DATA when the format
parameter type option is one of: CSV, JSON, Parquet, or XML. For other procedures
and other output types, see DBMS_CLOUD Package Format Options for the list of
format options.
fileextension
Valid values: Any file extension.
Default value: Depends on the format type option:
• CSV format: .csv
• JSON format: .json
• PARQUET format: .parquet
• XML format: .xml
Custom file extension to override the default choice for the format type. This applies to text formats with DBMS_CLOUD.EXPORT_DATA: CSV, JSON, Parquet, or XML. If the specified string does not start with a period (dot), then a dot is automatically inserted before the file extension in the final file name.
If no file extension is desired, use the value: fileextension = 'none'
trimspaces
Syntax: trimspaces: rtrim | ltrim | notrim | lrtrim | ldrtrim
Default value: notrim
Specifies how the leading and trailing spaces of the fields are trimmed for CSV format. Trim spaces is applied before quoting the field, if the quote parameter is specified. See the description of trim_spec.
Note: This option only applies with csv type.
And:

format => json_object('format_option' value 'format_value')

Examples:

format => json_object('type' VALUE 'CSV')

For example:

format => json_object('ignoremissingcolumns' value 'true', 'removequotes' value 'true',
                      'dateformat' value 'YYYY-MM-DD-HH24-MI-SS', 'blankasnull' value 'true')
Note:
Complex types, such as maps, arrays, and structs are supported starting with
Oracle Database 19c. See DBMS_CLOUD Package Avro, ORC, and Parquet
Complex Types for information on using Avro complex types.
See DBMS_CLOUD Package Avro, ORC, and Parquet Complex Types for information
on using Avro complex types.
Note:
Complex types, such as maps, arrays, and structs are supported starting with
Oracle Database 19c. See DBMS_CLOUD Package Avro, ORC, and Parquet
Complex Types for information on using Parquet complex types.
See DBMS_CLOUD Package Avro, ORC, and Parquet Complex Types for information on
using Parquet complex types.
After you change the value, you can verify the change by querying the
NLS_SESSION_PARAMETERS view:
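For example, a query along these lines; the parameters shown are the common date and timestamp settings, so adjust the filter for the parameter you changed:

SELECT parameter, value
  FROM nls_session_parameters
 WHERE parameter IN ('NLS_DATE_FORMAT', 'NLS_TIMESTAMP_FORMAT');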
Note:
The complex fields map to VARCHAR2 columns and VARCHAR2 size limits apply.
If your ORC, Parquet, or Avro source files contain complex types, then you can query the
JSON output for these common complex types. For example, the following shows an ORC
file, movie-info.orc, with a complex type (the same complex type handling applies for
Parquet and Avro source files).
Consider the movie-info.orc file with the following schema:
id int
original_title string
overview string
poster_path string
release_date string
vote_count int
runtime int
popularity double
genres array<struct<id:int,name:string>>
Notice that each movie is categorized by multiple genres using an array of genres.
The genres array is an array of structs and each item has an id (int) and a name
(string). The genres array is considered a complex type. You can create a table over
this ORC file using DBMS_CLOUD.CREATE_EXTERNAL_TABLE as follows:
BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name      => 'movie_info',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mytenancy/b/movies/o/movie-info.orc',
    format          => '{"type":"orc", "schema": "first"}');
END;
/
When you create the external table the database automatically generates the columns
based on the schema in the ORC file (if you are using Avro or Parquet, the same
applies). For this example, the DBMS_CLOUD.CREATE_EXTERNAL_TABLE creates a table in
your database as follows:
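The generated definition has this general shape; a sketch (column types are indicative, the complex genres field maps to a VARCHAR2 column, and the access parameters are managed by DBMS_CLOUD):

CREATE TABLE movie_info (
   id             NUMBER(10),
   original_title VARCHAR2(4000),
   overview       VARCHAR2(4000),
   poster_path    VARCHAR2(4000),
   release_date   VARCHAR2(4000),
   vote_count     NUMBER(10),
   runtime        NUMBER(10),
   popularity     BINARY_DOUBLE,
   genres         VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
   ( ... access parameters managed by DBMS_CLOUD ... )
PARALLEL;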
To make the JSON data more useful, you can transform the column using Oracle's JSON
functions. For example, you can use the JSON "." notation as well as the more powerful
transform functions such as JSON_TABLE.
See Simple Dot-Notation Access to JSON Data for information on "." notation.
See SQL/JSON Function JSON_TABLE for information on JSON_TABLE.
The following example shows a query on the table that takes each value of the array and
turns the value into a row in the result set:
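A query along these lines does that; the genre_name column size is illustrative:

SELECT m.original_title,
       m.release_date,
       jt.genre_name,
       m.genres
  FROM movie_info m,
       JSON_TABLE(m.genres, '$[*]'
          COLUMNS (genre_name VARCHAR2(25) PATH '$.name')) jt;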
The JSON_TABLE creates a row for each value of the array (think of an outer join), and the struct is parsed to extract the name of the genre. This produces the following output:

original_title          release_date  genre_name  genres
(500) Days of Summer 2009
Drama [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},
{"id":17,"name":"Horror"},{"id":19,"name":"Western"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
(500) Days of Summer 2009
Comedy [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},
{"id":17,"name":"Horror"},{"id":19,"name":"Western"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
(500) Days of Summer 2009
Horror [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},
{"id":17,"name":"Horror"},{"id":19,"name":"Western"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
(500) Days of Summer 2009
Western [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},
{"id":17,"name":"Horror"},{"id":19,"name":"Western"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
(500) Days of Summer 2009
War [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},
{"id":17,"name":"Horror"},{"id":19,"name":"Western"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
(500) Days of Summer 2009
Romance [{"id":3,"name":"Drama"},{"id":6,"name":"Comedy"},
{"id":17,"name":"Horror"},{"id":19,"name":"Western"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
10,000 BC 2008
Comedy [{"id":6,"name":"Comedy"}]
11:14 2003
Family [{"id":9,"name":"Thriller"},
{"id":14,"name":"Family"}]
11:14 2003
Thriller [{"id":9,"name":"Thriller"},
{"id":14,"name":"Family"}]
127 Hours 2010
Comedy [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"}]
127 Hours 2010
Drama [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"}]
13 Going on 30 2004
Romance [{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
13 Going on 30 2004 Comedy
[{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
13 Going on 30 2004 War
[{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
13 Going on 30 2004 Drama
[{"id":6,"name":"Comedy"},{"id":3,"name":"Drama"},
{"id":18,"name":"War"},{"id":15,"name":"Romance"}]
DBMS_CLOUD Package Avro, ORC, and Parquet to Oracle Column Name Mapping
Describes rules for how Avro, ORC, and Parquet column names are converted to Oracle
column names.
The following are supported in Avro, ORC, and Parquet column names, but may require the use of double quotes for Oracle SQL references in external tables. For ease of use, and to avoid having to use double quotes when referencing column names, if possible do not use the following in Avro, ORC, and Parquet column names:
• Embedded blanks
• Leading numbers
• Leading underscores
• Oracle SQL reserved words
The following table shows various types of Avro, ORC, and Parquet column names, and rules
for using the column names in Oracle column names in external tables.
DBMS_CLOUD Exceptions
The following table describes exceptions for DBMS_CLOUD.
DBMS_CLOUD_ADMIN Package
The DBMS_CLOUD_ADMIN package provides administrative routines for configuring a
database.
• Summary of DBMS_CLOUD_ADMIN Subprograms
This section covers the DBMS_CLOUD_ADMIN subprograms provided with
Autonomous Database.
• DBMS_CLOUD_ADMIN Exceptions
The following table describes exceptions for DBMS_CLOUD_ADMIN.
Subprogram Description

ATTACH_FILE_SYSTEM Procedure
This procedure attaches a file system in a directory on your database.

CANCEL_WORKLOAD_CAPTURE Procedure
This procedure cancels the current workload capture.

CREATE_DATABASE_LINK Procedure
This procedure creates a database link to a target database. There are options to create a database link to another Autonomous Database instance, to an Oracle Database that is not an Autonomous Database, or to a non-Oracle database using Oracle-managed heterogeneous connectivity.

DETACH_FILE_SYSTEM Procedure
This procedure detaches a file system from a directory on your database.

DISABLE_APP_CONT Procedure
This procedure disables database application continuity for the session associated with the specified service name in Autonomous Database.

DISABLE_EXTERNAL_AUTHENTICATION Procedure
This procedure disables external authentication for the Autonomous Database instance.

DISABLE_PRINCIPAL_AUTH Procedure
This procedure revokes principal-based authentication for the specified provider and applies to the ADMIN user or to the specified user.

DISABLE_RESOURCE_PRINCIPAL Procedure
This procedure disables the resource principal credential for the database or for the specified schema.

DROP_DATABASE_LINK Procedure
This procedure drops a database link.

ENABLE_APP_CONT Procedure
This procedure enables database application continuity for the session associated with the specified service name in Autonomous Database.

ENABLE_AWS_ARN Procedure
This procedure enables a user to create AWS ARN credentials in Autonomous Database.

ENABLE_EXTERNAL_AUTHENTICATION Procedure
This procedure enables a user to log on to Autonomous Database using the specified external authentication scheme.

ENABLE_FEATURE Procedure
This procedure enables the specified feature on the Autonomous Database instance.

ENABLE_PRINCIPAL_AUTH Procedure
This procedure enables principal authentication for the specified provider and applies to the ADMIN user or the specified user.

ENABLE_RESOURCE_PRINCIPAL Procedure
This procedure enables the resource principal credential and creates the credential OCI$RESOURCE_PRINCIPAL. With a user name specified, other than ADMIN, the procedure grants the specified schema access to the resource principal credential.

FINISH_WORKLOAD_CAPTURE Procedure
This procedure stops the workload capture and uploads capture files to object storage.

PREPARE_REPLAY Procedure
This procedure prepares replay for the refreshable clone.

PURGE_FLASHBACK_ARCHIVE Procedure
This procedure purges historical data from the Flashback Data Archive.

REPLAY_WORKLOAD Procedure
This procedure is overloaded. It initiates the workload replay.

SET_FLASHBACK_ARCHIVE_RETENTION Procedure
This procedure enables ADMIN users to modify the retention period for Flashback Time Travel flashback_archive.

START_WORKLOAD_CAPTURE Procedure
This procedure initiates a workload capture.
• ATTACH_FILE_SYSTEM Procedure
This procedure attaches a file system in the database.
• CANCEL_WORKLOAD_CAPTURE Procedure
This procedure cancels any ongoing workload capture on the database.
• CREATE_DATABASE_LINK Procedure
• DETACH_FILE_SYSTEM Procedure
This procedure detaches a file system from the database.
• DISABLE_APP_CONT Procedure
This procedure disables database application continuity for the session associated with
the specified service name in Autonomous Database.
• DISABLE_EXTERNAL_AUTHENTICATION Procedure
Disables user authentication with external authentication schemes for the database.
• DISABLE_FEATURE Procedure
This procedure disables the specified feature on the Autonomous Database
instance.
• DISABLE_PRINCIPAL_AUTH Procedure
This procedure revokes principal based authentication for a specified provider on
Autonomous Database and applies to the ADMIN user or to the specified user.
• DISABLE_RESOURCE_PRINCIPAL Procedure
Disable resource principal credential for the database or for the specified schema.
• DROP_DATABASE_LINK Procedure
This procedure drops a database link.
• ENABLE_APP_CONT Procedure
This procedure enables database application continuity for the session associated
with the specified service name in Autonomous Database.
• ENABLE_AWS_ARN Procedure
This procedure enables an Autonomous Database instance to use Amazon
Resource Names (ARNs) to access AWS resources.
• ENABLE_EXTERNAL_AUTHENTICATION Procedure
Enable users to login to the database with external authentication schemes.
• ENABLE_FEATURE Procedure
This procedure enables the specified feature on the Autonomous Database
instance.
• ENABLE_PRINCIPAL_AUTH Procedure
This procedure enables principal authentication on Autonomous Database for the
specified provider and applies to the ADMIN user or the specified user.
• ENABLE_RESOURCE_PRINCIPAL Procedure
Enable resource principal credential for the database or for the specified schema.
This procedure creates the credential OCI$RESOURCE_PRINCIPAL.
• FINISH_WORKLOAD_CAPTURE Procedure
This procedure finishes the current workload capture, stops any subsequent
workload capture requests to the database, and uploads the capture files to Object
Storage.
• PREPARE_REPLAY Procedure
The PREPARE_REPLAY procedure prepares the refreshable clone for a replay.
• PURGE_FLASHBACK_ARCHIVE Procedure
This procedure enables ADMIN users to purge historical data from Flashback Data Archive. You can either purge all historical data from Flashback Data Archive flashback_archive or purge selective data based on timestamps or System Change Number.
• REPLAY_WORKLOAD Procedure
This procedure initiates a workload replay on your Autonomous Database
instance. The overloaded form enables you to replay the capture files from an
Autonomous Database instance, on-premises database, or other cloud service
databases.
• SET_FLASHBACK_ARCHIVE_RETENTION Procedure
This procedure allows ADMIN users to modify the retention period for Flashback
Data Archive flashback_archive.
• START_WORKLOAD_CAPTURE Procedure
This procedure initiates a workload capture on your Autonomous Database instance.
ATTACH_FILE_SYSTEM Procedure
This procedure attaches a file system in the database.
The DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM procedure attaches a file system in your
database and stores information about the file system in the DBA_CLOUD_FILE_SYSTEMS view.
Syntax
DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM (
file_system_name IN VARCHAR2,
file_system_location IN VARCHAR2,
directory_name IN VARCHAR2,
description IN VARCHAR2 DEFAULT NULL,
params IN CLOB DEFAULT NULL
);
Parameters
Parameter Description

file_system_name
Specifies the name of the file system.
This parameter is mandatory.

file_system_location
Specifies the location of the file system. The value you supply with file_system_location consists of a Fully Qualified Domain Name (FQDN) and a file path in the form: FQDN:file_path. For example:
• FQDN: myhost.sub000445.myvcn.oraclevcn.com
For Oracle Cloud Infrastructure File Storage, set the FQDN in Show Advanced Options when you create a file system. See Creating File Systems for more information.
• File Path: /results
This parameter is mandatory.

directory_name
Specifies the directory name for the attached file system. The directory must exist.
This parameter is mandatory.

description
(Optional) Provides a description of the task.

params
A JSON string that provides an additional parameter for the file system.
• nfs_version: Specifies the NFS version to use when NFS is attached (NFSv3 or NFSv4). Valid values: 3, 4. Default value: 3.
Examples:

Attach to an NFSv3 file system:

BEGIN
   DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM(
      file_system_name     => 'FSS',
      file_system_location => 'myhost.sub000445.myvcn.oraclevcn.com:/results',
      directory_name       => 'FSS_DIR',
      description          => 'Source NFS for sales data'
   );
END;
/

Attach to an NFSv4 file system:
BEGIN
   DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM(
      file_system_name     => 'FSS',
      file_system_location => 'myhost.sub000445.myvcn.oraclevcn.com:/results',
      directory_name       => 'FSS_DIR',
      description          => 'Source NFS for sales data',
      params               => JSON_OBJECT('nfs_version' value 4)
   );
END;
/
Usage Notes
• To run this procedure you must be logged in as the ADMIN user or have the
EXECUTE privilege on DBMS_CLOUD_ADMIN.
• You must have WRITE privilege on the directory object in the database to attach a
file system using DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM.
• The DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM procedure can only attach a private
File Storage Service in databases with Private Endpoints enabled.
See OCI File Storage Service and Configuring Network Access with Private
Endpoints for more information.
• The DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM procedure looks up the Network File
System hostname on the customer's virtual cloud network (VCN). The error
"ORA-20000: Mounting NFS fails" is returned if the hostname specified in the
location cannot be located.
• Oracle Cloud Infrastructure File Storage uses NFSv3 to share.
• If you attach to non-Oracle Cloud Infrastructure File Storage systems, the procedure supports NFSv3 and NFSv4.
• If you have an attached NFS server that uses NFSv3 and the NFS version is
updated to NFSv4 in the NFS server, you must run
DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM and then
DBMS_CLOUD_ADMIN.ATTACH_FILE_SYSTEM (using the params parameter with
nfs_version set to 4). This attaches NFS with the matching protocol so that
Autonomous Database can access the NFSv4 Server. Without detaching and then
reattaching, the NFS server will be inaccessible and you may see an error such
as: "Protocol not supported".
CANCEL_WORKLOAD_CAPTURE Procedure
This procedure cancels any ongoing workload capture on the database.
This procedure cancels the current workload capture and enables refresh on the refreshable
clone.
Syntax
DBMS_CLOUD_ADMIN.CANCEL_WORKLOAD_CAPTURE;
Example
BEGIN
DBMS_CLOUD_ADMIN.CANCEL_WORKLOAD_CAPTURE;
END;
/
Usage Note
• To run this procedure you must be logged in as the ADMIN user or have the EXECUTE
privilege on DBMS_CLOUD_ADMIN.
CREATE_DATABASE_LINK Procedure
This procedure creates a database link to a target database in the schema calling the API.
The overloaded forms support the following:
• When you use the gateway_params parameter, this enables you to create a database link
with Oracle-managed heterogeneous connectivity where the link is to a supported non-
Oracle database.
• When you use the rac_hostnames parameter, this enables you to create a database link
from an Autonomous Database on a private endpoint to a target Oracle RAC database.
In this case, you use the rac_hostnames parameter to specify the host names of one or
more individual nodes of the target Oracle RAC database.
Syntax
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name IN VARCHAR2,
hostname IN VARCHAR2,
port IN NUMBER,
service_name IN VARCHAR2,
ssl_server_cert_dn IN VARCHAR2 DEFAULT,
credential_name IN VARCHAR2 DEFAULT,
directory_name IN VARCHAR2 DEFAULT,
gateway_link IN BOOLEAN DEFAULT,
public_link IN BOOLEAN DEFAULT,
private_target IN BOOLEAN DEFAULT,
gateway_params IN CLOB DEFAULT);
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name IN VARCHAR2,
rac_hostnames IN CLOB,
port IN NUMBER,
service_name IN VARCHAR2,
ssl_server_cert_dn IN VARCHAR2 DEFAULT,
credential_name IN VARCHAR2 DEFAULT,
directory_name IN VARCHAR2 DEFAULT,
gateway_link IN BOOLEAN DEFAULT,
public_link IN BOOLEAN DEFAULT,
private_target IN BOOLEAN DEFAULT);
Parameters
Parameter Description
db_link_name The name of the database link to create.
hostname
The hostname for the target database.
Specifying localhost for hostname is not allowed.
When you specify a connection with Oracle-managed heterogeneous connectivity by supplying the gateway_params parameter, note the following:
• When the db_type value is google_bigquery, the hostname is not used and you can provide a value such as example.com.
• When the db_type value is snowflake, the hostname is the Snowflake account identifier. To find your Snowflake account identifier, see Account Identifier Formats by Cloud Platform and Region.
Use this parameter or rac_hostnames; do not use both.
Parameter Description
rac_hostnames Specifies hostnames for the target Oracle RAC database. The
value is a JSON array that specifies one or more individual host
names for the nodes of the target Oracle RAC database. Multiple
host names can be passed in JSON, separated by a ",". For
example:
'["sales1-svr1.domain", "sales1-svr2.domain",
"sales1-svr3.domain"]'
Parameter Description
service_name The service_name for the database to link to. For a target
Autonomous Database, find the service name by one of the
following methods:
• Look in the tnsnames.ora file in the wallet.zip that
you download from an Autonomous Database for your
connection.
• Click Database connection on the Oracle Cloud
Infrastructure Console. In the Connection Strings area,
each connection string includes a service_name entry with
the connection string for the corresponding service. When
both Mutual TLS (mTLS) and TLS connections are allowed,
under TLS authentication select TLS to view the TNS
names and connection strings for connections with TLS
authentication. See View TNS Names and Connection
Strings for an Autonomous Database Instance for more
information.
• Query V$SERVICES view. For example:
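A minimal query of this kind lists the available service names:

SELECT name FROM v$services;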
Parameter Description
directory_name The directory for the cwallet.sso file. The default value for
this parameter is 'data_pump_dir'.
Oracle-managed heterogeneous connectivity is preconfigured
with a wallet that contains most of the common trusted root and
intermediate SSL certificates. The directory_name parameter is
not required when you supply the gateway_params parameter.
Public Endpoint Link to an Autonomous Database Target
without a Wallet:
To connect to an Autonomous Database on a public endpoint
without a wallet (TLS):
• The directory_name parameter must be NULL.
• The ssl_server_cert_dn parameter must be NULL or do
not include this parameter (the default value is NULL).
In addition, to connect to an Autonomous Database with TCP, the
ssl_server_cert_dn parameter must be NULL or do not include
this parameter (the default value is NULL).
Private Endpoint Link without a Wallet:
To connect to a target Oracle Database on a private endpoint
without a wallet:
• The target database must be on a private endpoint.
• The directory_name parameter must be NULL.
• The ssl_server_cert_dn parameter must be NULL or do
not include this parameter (the default value is NULL).
• The private_target parameter must be TRUE.
gateway_link Indicates if the database link is created to another Oracle
Database or to an Oracle Database Gateway.
If gateway_link is set to FALSE, this specifies a database link to
another Autonomous Database or to another Oracle Database.
If gateway_link is set to TRUE, this specifies a database link to a
non-Oracle system. This creates a connect descriptor in the
database link definition that specifies (HS=OK).
When gateway_link is set to TRUE and gateway_params is
NULL, this specifies a database link to a customer-managed
Oracle gateway.
The default value for this parameter is FALSE.
public_link Indicates if the database link is created as a public database link.
To run DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK with this
parameter set to TRUE, the user invoking the procedure must have
EXECUTE privilege on the credential associated with the public
database link and must have the CREATE PUBLIC DATABASE
LINK system privilege. The EXECUTE privilege on the credential
can be granted either by the ADMIN user or by the credential
owner.
The default value for this parameter is FALSE.
Parameter Description
private_target When a database link accesses a hostname that needs to be
resolved in a VCN DNS server, specify the private_target
parameter with value TRUE.
When private_target is TRUE, the hostname parameter must
be a single hostname (on a private endpoint, using an IP address,
a SCAN IP, or a SCAN hostname is not supported).
The default value for this parameter is FALSE.
gateway_params Specify the target database type for Oracle-managed
heterogeneous connectivity to connect to non-Oracle databases.
The db_type value is one of:
• awsredshift
• azure
• db2
• google_analytics
• google_bigquery
* See Usage Notes for additional supported
gateway_params when db_type is google_bigquery.
• hive
* See Usage Notes for additional supported
gateway_params when db_type is hive.
• mongodb
• mysql
• postgres
• salesforce
* See Usage Notes for additional supported
gateway_params when db_type is salesforce.
• servicenow
* See Usage Notes for additional supported
gateway_params when db_type is servicenow.
• snowflake
* See Usage Notes for additional supported
gateway_params when db_type is snowflake.
• NULL
When gateway_params is NULL and gateway_link is set
to TRUE, this specifies a database link to a customer-
managed Oracle gateway.
Specify the parameter with the json_object form. For example:
gateway_params => json_object('db_type' value
'awsredshift')
See Oracle-Managed Heterogeneous Connectivity Database
Types and Ports for required port values for each database type.
When gateway_params is NULL and private_target is TRUE, if directory_name is NULL, a TCP-based database link is created.
When gateway_params is NULL and private_target is TRUE, if directory_name is not NULL, a TCPS-based database link is created.
Usage Notes
• When you specify the gateway_params parameter, for some db_type values, additional
gateway_params parameters are supported:
hive When db_type is hive, the parameter http_path is valid. This parameter
specifies the HttpPath value, if required, to connect to the Hive instance.
salesforce When the db_type is salesforce, the parameter security_token is valid. A security token is a case-sensitive alphanumeric code. Supplying a security_token value is required to access Salesforce. For example:
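A sketch of the parameter; the security_token value shown is a placeholder:

gateway_params => JSON_OBJECT('db_type' value 'salesforce',
                              'security_token' value 'security_token_value')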
servicenow When the db_type is servicenow, obtain and download the ServiceNow REST config file to the specified directory. For example:

exec DBMS_CLOUD.get_object('servicenow_dir_cred',
        'https://objectstorage.<...>/servicenow.rest', 'SERVICENOW_DIR');

Set the file_name value to the name of the REST config file you downloaded, "servicenow.rest". Then you can use the ServiceNow REST config file with either basic authentication or OAuth2.0. See HETEROGENEOUS_CONNECTIVITY_INFO View for samples.
snowflake When the db_type is SNOWFLAKE, the parameters: role, schema, and
warehouse are valid. These values specify a different schema, role, or
warehouse value, other than the default. For example:
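A sketch with placeholder values:

gateway_params => JSON_OBJECT('db_type'   value 'snowflake',
                              'role'      value 'role_name',
                              'schema'    value 'schema_name',
                              'warehouse' value 'warehouse_name')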
• When you use the private_target parameter, note that database links from an
Autonomous Database to a database service that is on a private endpoint are only
supported in commercial regions and US Government regions.
This feature is enabled by default in all commercial regions.
This feature is enabled by default in US Government regions for newly provisioned
databases.
In addition, when you create a Database Link in a schema other than the ADMIN
schema, for example in a schema named adb_user, the adb_user schema must own the
credential you use with DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK.
• Only one wallet file is valid per directory specified with the directory_name parameter.
You can only upload one cwallet.sso at a time to the directory you choose for wallet
files. This means with a cwallet.sso in a directory, you can only create database links
to the databases for which the wallet in that directory is valid. To use multiple
cwallet.sso files with database links you need to create additional directories and put
each cwallet.sso in a different directory.
See Create Directory in Autonomous Database for information on creating directories.
• To create a database link to an Autonomous Database, set GLOBAL_NAMES to FALSE on the source database (non-Autonomous Database). For example:

SQL> ALTER SYSTEM SET GLOBAL_NAMES = FALSE;

System altered.
• When the private_target parameter is TRUE, the hostname parameter specifies a private
host inside the VCN.
Examples
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED',
username => 'adb_user',
password => 'password');
   DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
      db_link_name       => 'SALESLINK',
      hostname           => 'adb.eu-frankfurt-1.oraclecloud.com',
      port               => '1522',
      service_name       => 'example_medium.adb.example.oraclecloud.com',
      ssl_server_cert_dn => 'CN=adb.example.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US',
      credential_name    => 'DB_LINK_CRED');
END;
/
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'AWS_REDSHIFT_LINK_CRED',
username => 'NICK',
password => 'password'
);
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'AWSREDSHIFT_LINK',
hostname => 'example.com',
port => '5439',
service_name => 'example_service_name',
ssl_server_cert_dn => NULL,
credential_name => 'AWS_REDSHIFT_LINK_CRED',
      gateway_params => JSON_OBJECT('db_type' value 'awsredshift'));
END;
/
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'PRIVATE_ENDPOINT_CRED',
username => 'db_user',
password => 'password'
);
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'PRIVATE_ENDPOINT_DB_LINK',
hostname => 'exampleHostname',
port => '1521',
service_name => 'exampleServiceName',
credential_name => 'PRIVATE_ENDPOINT_CRED',
ssl_server_cert_dn => NULL,
directory_name => NULL,
private_target => TRUE);
END;
/
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'GOOGLE_BIGQUERY_CRED',
params => JSON_OBJECT( 'gcp_oauth2' value JSON_OBJECT(
         'client_id'     value 'client_id',
         'client_secret' value 'client_secret',
         'refresh_token' value 'refresh_token')));
DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
db_link_name => 'GOOGLE_BIGQUERY_LINK',
hostname => 'example.com',
port => '443',
service_name => 'example_service_name',
credential_name => 'GOOGLE_BIGQUERY_CRED',
gateway_params => JSON_OBJECT(
'db_type' value 'google_bigquery',
'project' value 'project_name1' ));
END;
/
The table name you specify when you use SELECT with Google BigQuery or Google Analytics
must be in quotes. For example:
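For instance, with the GOOGLE_BIGQUERY_LINK created above and a hypothetical BigQuery table named sales:

SELECT COUNT(*) FROM "sales"@GOOGLE_BIGQUERY_LINK;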
Use the rac_hostnames parameter with a target Oracle RAC database on a private endpoint.
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DB_LINK_CRED1',
username => 'adb_user',
password => 'password');
   DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
      db_link_name       => 'SALESLINK',
      rac_hostnames      => '["sales1-svr1.example.adb.us-ashburn-1.oraclecloud.com",
                              "sales1-svr2.example.adb.us-ashburn-1.oraclecloud.com",
                              "sales1-svr3.example.adb.us-ashburn-1.oraclecloud.com"]',
      port               => '1522',
      service_name       => 'example_high.adb.oraclecloud.com',
      ssl_server_cert_dn => 'CN=adb.example.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US',
      credential_name    => 'DB_LINK_CRED1',
      directory_name     => 'EXAMPLE_WALLET_DIR',
      private_target     => TRUE);
END;
/
DETACH_FILE_SYSTEM Procedure
This procedure detaches a file system from the database.
The DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM procedure detaches a file system from your database and removes the information about the file system from the DBA_CLOUD_FILE_SYSTEMS view.
Syntax
DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM(
file_system_name IN VARCHAR2
);
Parameters
Parameter Description
file_system_name Specifies the name of the file system.
This parameter is mandatory.
Example:

BEGIN
   DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM(
      file_system_name => 'FSS'
   );
END;
/
Usage Notes
• To run this procedure, you must be logged in as the ADMIN user or have the
EXECUTE privilege on DBMS_CLOUD_ADMIN.
• You must have the WRITE privilege on the directory object in the database to detach a file system from a directory using the DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM procedure.
• The DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM procedure can only detach a private
File Storage Service in databases with Private Endpoints enabled.
See OCI File Storage Service and Configuring Network Access with Private
Endpoints for more information.
• The DBMS_CLOUD_ADMIN.DETACH_FILE_SYSTEM procedure looks up the Network File
System hostname on the customer's virtual cloud network (VCN). The error
"ORA-20000: Mounting NFS fails" is returned if the hostname specified in the
location cannot be located.
DISABLE_APP_CONT Procedure
This procedure disables database application continuity for the session associated
with the specified service name in Autonomous Database.
Syntax
DBMS_CLOUD_ADMIN.DISABLE_APP_CONT(
service_name IN VARCHAR2);
Parameters
Parameter Description
service_name The service_name for the Autonomous Database service.
To find service names:
• Look in the tnsnames.ora file in the wallet.zip that you
download from an Autonomous Database for your connection.
• Click Database connection on the Oracle Cloud Infrastructure
Console. In the Connection strings area, each connection string
includes a service_name entry that contains the connection string for
the corresponding service. When both Mutual TLS (mTLS) and TLS
connections are allowed, under TLS authentication select TLS to view
the TNS names and connection strings for connections with TLS
authentication. See View TNS Names and Connection Strings for an
Autonomous Database Instance for more information.
• Query V$SERVICES view. For example:
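A minimal query such as the following lists the service names:

SELECT name FROM v$services;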
Usage Notes
See Overview of Application Continuity for more information on Application Continuity.
Example
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_APP_CONT(
service_name => 'nv123abc1_adb1_high.adb.oraclecloud.com' );
END;
/
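A verification query of this kind (the service name filter is illustrative) shows that FAILOVER_TYPE is now empty for the service:

SELECT name, failover_type
  FROM v$services
 WHERE name LIKE '%adb1_high%';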
NAME FAILOVER_TYPE
------------------------------------------------------- --------------
nv123abc1_adb1_high.adb.oraclecloud.com
DISABLE_EXTERNAL_AUTHENTICATION Procedure
Disables user authentication with external authentication schemes for the database.
Syntax
DBMS_CLOUD_ADMIN.DISABLE_EXTERNAL_AUTHENTICATION;
Exceptions
Example
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_EXTERNAL_AUTHENTICATION;
END;
/
DISABLE_FEATURE Procedure
This procedure disables the specified feature on the Autonomous Database instance.
Syntax
DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
feature_name IN VARCHAR2);
Parameters
Parameter Description
feature_name Specifies the feature type to be disabled. Supported values are:
• 'ORAMTS': Disable OraMTS feature.
• 'AUTO_DST_UPGRADE': Disable AUTO DST feature.
• 'AUTO_DST_UPGRADE_EXCL_DATA': Disable AUTO DST EXCL
DATA feature.
• 'OWM': Disable Oracle Workspace Manager.
This parameter is mandatory.
Examples
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
feature_name => 'ORAMTS');
END;
/
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
feature_name => 'AUTO_DST_UPGRADE');
END;
/
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
feature_name => 'AUTO_DST_UPGRADE_EXCL_DATA');
END;
/
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_FEATURE(
feature_name => 'OWM');
END;
/
Usage Notes
• To disable the OraMTS, AUTO_DST_UPGRADE, AUTO_DST_UPGRADE_EXCL_DATA, or OWM features
for your Autonomous Database instance, you must be logged in as the ADMIN user or
have the EXECUTE privilege on DBMS_CLOUD_ADMIN.
• When both AUTO_DST_UPGRADE and AUTO_DST_UPGRADE_EXCL_DATA are disabled, if new
time zone versions are available the Autonomous Database instance does not upgrade to
use the latest available time zone files.
• Query dba_cloud_config to verify that AUTO_DST_UPGRADE is disabled.
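A sketch of such a check; the same pattern applies for auto_dst_upgrade_excl_data:

SELECT param_name, param_value
  FROM dba_cloud_config
 WHERE LOWER(param_name) = 'auto_dst_upgrade';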
0 rows selected.
DISABLE_PRINCIPAL_AUTH Procedure
This procedure revokes principal based authentication for a specified provider on
Autonomous Database and applies to the ADMIN user or to the specified user.
Syntax
DBMS_CLOUD_ADMIN.DISABLE_PRINCIPAL_AUTH(
provider IN VARCHAR2,
username IN VARCHAR2 DEFAULT 'ADMIN' );
Parameters
Parameter Description
provider Specifies the type of provider.
Valid values:
• AWS
• AZURE
• GCP
• OCI
username
Specifies the user to disable principal-based authentication for.
A null value is valid for the username. If username is not specified, the procedure applies for the "ADMIN" user.
Usage Notes
• When the provider value is AZURE and the username is ADMIN, the procedure
disables Azure service principal based authentication on Autonomous Database
and deletes the Azure application on the Autonomous Database instance.
• When the provider value is AZURE and the username is a user other than the ADMIN
user, the procedure revokes the privileges from the specified user. The ADMIN user
and other users that are enabled to use the Azure service principal can continue to
use ADMIN.AZURE$PA and the application that is created for the Autonomous
Database instance remains on the instance.
Examples
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_PRINCIPAL_AUTH(
provider => 'AZURE',
username => 'SCOTT');
END;
/
BEGIN
DBMS_CLOUD_ADMIN.DISABLE_PRINCIPAL_AUTH(
provider => 'GCP');
END;
/
DISABLE_RESOURCE_PRINCIPAL Procedure
Disable resource principal credential for the database or for the specified schema.
Syntax
DBMS_CLOUD_ADMIN.DISABLE_RESOURCE_PRINCIPAL(
username IN VARCHAR2);
Parameter
Parameter Description
username
Specifies an optional user name: the name of the database schema from which to remove resource principal access.
If you do not supply a username, the username is set to ADMIN and the command removes the OCI$RESOURCE_PRINCIPAL credential.
Exceptions
Usage Notes
• Resource principal is not available with refreshable clones.
• You must set up a dynamic group and policies for the dynamic group before you call
DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL.
See the following for more information on creating policies, creating a dynamic group,
and creating rules:
– Use Resource Principal to Access Oracle Cloud Infrastructure Resources
– Managing Dynamic Groups
– Managing Policies
• Verify that a resource principal credential is enabled by querying one of the views:
DBA_CREDENTIALS or ALL_TAB_PRIVS.
For example, as the ADMIN user query the view DBA_CREDENTIALS:
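A sketch of the check:

SELECT owner, credential_name
  FROM dba_credentials
 WHERE credential_name = 'OCI$RESOURCE_PRINCIPAL';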
OWNER CREDENTIAL_NAME
----- ----------------------
ADMIN OCI$RESOURCE_PRINCIPAL
Example
EXEC DBMS_CLOUD_ADMIN.DISABLE_RESOURCE_PRINCIPAL();
Querying DBA_CREDENTIALS again returns no rows:

No rows selected.
DROP_DATABASE_LINK Procedure
This procedure drops a database link.
Syntax
DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK(
db_link_name IN VARCHAR2,
public_link IN BOOLEAN DEFAULT);
Parameters
Parameter Description
db_link_name The name of the database link to drop.
public_link To run DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK with
public_link set to TRUE, you must have the DROP PUBLIC
DATABASE LINK system privilege.
The default value for this parameter is FALSE.
Example
BEGIN
DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK(
db_link_name => 'SALESLINK' );
END;
/
Usage Notes
After you are done using a database link and you run DBMS_CLOUD_ADMIN.DROP_DATABASE_LINK, remove any stored wallet files to ensure the security of your Oracle database. For example:
• Remove the wallet file in Object Store.
• Use DBMS_CLOUD.DELETE_FILE to remove the wallet file from the data_pump_dir directory or from the user-defined directory where the wallet file was uploaded.
ENABLE_APP_CONT Procedure
This procedure enables database application continuity for the session associated with the
specified service name in Autonomous Database.
Syntax
DBMS_CLOUD_ADMIN.ENABLE_APP_CONT(
service_name IN VARCHAR2);
Parameters
Parameter Description
service_name The service_name for the Autonomous Database service.
To find service names:
• Look in the tnsnames.ora file in the wallet.zip that you
download from an Autonomous Database for your connection.
• Click Database connection on the Oracle Cloud Infrastructure
Console. In the Connection strings area, each connection string
includes a service_name entry that contains the connection string for
the corresponding service. When both Mutual TLS (mTLS) and TLS
connections are allowed, under TLS authentication select TLS to view
the TNS names and connection strings for connections with TLS
authentication. See View TNS Names and Connection Strings for an
Autonomous Database Instance for more information.
• Query V$SERVICES view. For example:
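A minimal query of this kind lists the service names:

SELECT name FROM v$services;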
Usage Notes
See Overview of Application Continuity for more information on Application Continuity.
Example
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_APP_CONT(
service_name => 'nvthp2ht_adb1_high.adb.oraclecloud.com'
);
END;
/
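A query along the same lines verifies the change; the service name filter is illustrative:

SELECT name, failover_type
  FROM v$services
 WHERE name LIKE '%adb1_high%';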
NAME FAILOVER_TYPE
------------------------------------------------------- -------------
nvthp2ht_adb1_high.adb.oraclecloud.com TRANSACTION
ENABLE_AWS_ARN Procedure
This procedure enables an Autonomous Database instance to use Amazon Resource
Names (ARNs) to access AWS resources.
Syntax
DBMS_CLOUD_ADMIN.ENABLE_AWS_ARN(
username IN VARCHAR2 DEFAULT NULL,
grant_option IN BOOLEAN DEFAULT FALSE);
Parameters
Parameter Description
username Name of the user to enable to use Amazon Resource Names (ARNs).
A null value is valid for the username. If username is not specified, the
procedure applies for the "ADMIN" user.
grant_option When username is supplied, if grant_option is TRUE the specified
username can enable Amazon Resource Names (ARNs) usage for
other users.
Example
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_AWS_ARN(
username => 'adb_user');
END;
/
Usage Note
• You must be the ADMIN user to run the procedure
DBMS_CLOUD_ADMIN.ENABLE_AWS_ARN.
See Use Amazon Resource Names (ARNs) to Access AWS Resources for more
information.
ENABLE_EXTERNAL_AUTHENTICATION Procedure
Enable users to login to the database with external authentication schemes.
Syntax
DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
type IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE,
params IN CLOB DEFAULT NULL
);
Parameter
Parameter Description
type
Specifies the external authentication type. Valid values:
• 'OCI_IAM'
• 'AZURE_AD'
• 'CMU'
• 'KERBEROS'
force (Optional) Override a currently enabled external authentication scheme.
Valid values are TRUE or FALSE.
The default value is FALSE.
Parameter Description
params A JSON string that provides additional parameters for the external
authentication.
CMU parameters:
• location_uri: specifies the cloud storage URI for the bucket
where files required for CMU are stored.
If you specify location_uri there is a fixed name directory object
CMU_WALLET_DIR created in the database at the path
'cmu_wallet' to save the CMU configuration files. In this case,
you do not need to supply the directory_name parameter.
• credential_name: specifies the credentials that are used to
download the CMU configuration files from the Object Store to the
directory object.
Default value is NULL which allows you to provide a Public,
Preauthenticated, or pre-signed URL for Object Store bucket or
subfolder.
• directory_name: specifies the directory name where
configuration files required for CMU are stored. If
directory_name is supplied, you are expected to copy the CMU
configuration files dsi.ora and cwallet.sso to this directory
object.
KERBEROS parameters:
• location_uri: specifies the cloud storage URI for the bucket
where the files required for Kerberos are stored.
If you specify location_uri there is a fixed name directory object
KERBEROS_DIR created in the database to save the Kerberos
configuration files. In this case, you do not need to supply the
directory_name parameter.
• credential_name: specifies the credential that is used to download the Kerberos configuration files from the Object Store location to the directory object.
Default value is NULL which allows you to provide a Public,
Preauthenticated, or pre-signed URL for Object Store bucket or
subfolder.
• directory_name: specifies the directory name where files
required for Kerberos are stored. If directory_name is supplied,
you are expected to copy the Kerberos configuration files to this
directory object.
• kerberos_service_name: specifies a name to use as the
Kerberos service name. This parameter is optional.
Default value: When not specified, the kerberos_service_name
value is set to the Autonomous Database instance's GUID.
AZURE_AD parameters:
• tenant_id: Tenant ID of the Azure Account. Tenant Id specifies
the Autonomous Database instance's Azure AD application
registration.
• application_id: Azure Application ID created in Azure AD to
assign roles/schema mappings for external authentication in the
Autonomous Database instance.
• application_id_uri: Unique URI assigned to the Azure Application.
This is the identifier for the Autonomous Database instance. The name must be domain qualified (this supports cross-tenancy resource access).
The maximum length for this parameter is 256 characters.
Exceptions
Usage Notes
• With type OCI_IAM, if the resource principal is not enabled on the Autonomous Database
instance, this routine enables resource principal with
DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL.
• This procedure sets the system parameters IDENTITY_PROVIDER_TYPE and IDENTITY_PROVIDER_CONFIG to the values required for users to access the instance with Oracle Cloud Infrastructure Identity and Access Management authentication and authorization.
Examples
BEGIN
   DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
      type  => 'OCI_IAM',
      force => TRUE );
END;
/
You pass in a directory name that contains the CMU configuration files through params JSON
argument.
BEGIN
   DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
      type   => 'CMU',
      force  => TRUE,
      params => JSON_OBJECT('directory_name' value 'CMU_DIR')
   );
   -- The CMU_DIR directory object must already exist
END;
/
You pass in a location URI pointing to an Object Storage location that contains CMU
configuration files through params JSON argument.
BEGIN
   DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
      type   => 'CMU',
      params => JSON_OBJECT(
         'location_uri'    value 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
         'credential_name' value 'my_credential_name')
   );
END;
/
BEGIN
   DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
      type   => 'AZURE_AD',
      force  => TRUE,
      params => JSON_OBJECT('tenant_id'          VALUE '....',
                            'application_id'     VALUE '...',
                            'application_id_uri' VALUE '.....' ));
END;
/
BEGIN
   DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
      type   => 'KERBEROS',
      force  => TRUE,
      params => JSON_OBJECT('directory_name' value 'KERBEROS_DIR')
   );
   -- The KERBEROS_DIR directory object must already exist
END;
/
You pass in a location URI pointing to an Object Storage location that contains
Kerberos configuration files through params JSON argument:
BEGIN
   DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
      type   => 'KERBEROS',
      force  => TRUE,
      params => JSON_OBJECT(
         'location_uri'    value 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
         'credential_name' value 'my_credential_name')
   );
END;
/
You pass in a service name with the kerberos_service_name in the params JSON argument:
BEGIN
   DBMS_CLOUD_ADMIN.ENABLE_EXTERNAL_AUTHENTICATION(
      type   => 'KERBEROS',
      force  => TRUE,
      params => JSON_OBJECT('directory_name'        value 'KERBEROS_DIR',
                            'kerberos_service_name' value 'oracle' )
   );
   -- The KERBEROS_DIR directory object must already exist
END;
/
After Kerberos is enabled on your Autonomous Database instance, use the following query to
view the Kerberos service name:
ENABLE_FEATURE Procedure
This procedure enables the specified feature on the Autonomous Database instance.
Syntax
DBMS_CLOUD_ADMIN.ENABLE_FEATURE(
feature_name IN VARCHAR2,
params IN CLOB DEFAULT NULL);
Parameters
Parameter Description
feature_name Name of the feature to enable. The supported values are:
• 'JAVAVM': Enable JAVAVM feature.
• 'ORAMTS': Enable OraMTS feature.
• 'AUTO_DST_UPGRADE': Enable AUTO DST feature.
• 'AUTO_DST_UPGRADE_EXCL_DATA': Enable AUTO DST EXCL DATA
feature.
• 'OWM': Enable Oracle Workspace Manager.
This parameter is mandatory.
params A JSON string that provides additional parameters for some features.
The params parameter is:
• location_uri: the location_uri accepts a string value. The value
specifies the HTTPS URL for the OraMTS server in a customer network.
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_FEATURE (
feature_name => 'JAVAVM' );
END;
/
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_FEATURE (
feature_name => 'AUTO_DST_UPGRADE' );
END;
/
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_FEATURE (
feature_name => 'AUTO_DST_UPGRADE_EXCL_DATA' );
END;
/
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_FEATURE(
feature_name => 'ORAMTS',
      params => JSON_OBJECT('location_uri' VALUE 'https://mymtsserver.mycorp.com')
);
END;
/
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_FEATURE(
feature_name => 'OWM' );
END;
/
Usage Notes
• After you run DBMS_CLOUD_ADMIN.ENABLE_FEATURE with feature_name value
'JAVAVM', you must restart the Autonomous Database instance to install Oracle
Java.
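You can query dba_cloud_config to verify that a feature is enabled; a sketch for AUTO_DST_UPGRADE (the same pattern applies for auto_dst_upgrade_excl_data):

SELECT param_name, param_value
  FROM dba_cloud_config
 WHERE LOWER(param_name) = 'auto_dst_upgrade';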
PARAM_NAME PARAM_VALUE
---------------- --------------
auto_dst_upgrade enabled
PARAM_NAME PARAM_VALUE
-------------------------- -----------
auto_dst_upgrade_excl_data enabled
ENABLE_PRINCIPAL_AUTH Procedure
This procedure enables principal authentication on Autonomous Database for the specified
provider and applies to the ADMIN user or the specified user.
Syntax
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider IN VARCHAR2,
username IN VARCHAR2 DEFAULT 'ADMIN',
params IN CLOB DEFAULT NULL);
Parameters
Parameter Description
provider Specifies the type of provider.
Valid values:
• AWS: Enable use of Amazon Resource Names (ARNs)
• AZURE: Enable use of Azure Service Principal
• GCP: Enable use of Google Service Account
• OCI: Enable use of Resource Principal
username Name of the user who has principal authentication usage enabled.
A null value is valid for the username. If username is not specified, the
procedure applies for the "ADMIN" user.
params Specifies the configuration parameters.
When the provider parameter is AWS, GCP, or OCI, params is not
required. The default value is NULL.
grant_option: This parameter is valid for all providers and is a
Boolean value TRUE or FALSE. The default is FALSE.
When TRUE and a username is specified, the specified user can use
ENABLE_PRINCIPAL_AUTH to enable other users.
When the provider parameter is AWS, these options are also
valid:
• aws_role_arn: Specifies the AWS role ARN.
• external_id_type: Specifies the external ID type. Valid values
are: "TENANT_OCID", "DATABASE_OCID", or
"COMPARTMENT_OCID". The default value for external_id_type
is "DATABASE_OCID".
See Perform AWS Management Prerequisites to Use Amazon
Resource Names (ARNs) for details on the values for
external_id_type.
When the provider parameter is AZURE, this option is also valid:
• azure_tenantid: with the value of the Azure tenant ID.
Usage Notes
• When the provider parameter is AZURE, the params parameter must include the
azure_tenantid in the following cases:
– When DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH is called for the first time.
– When DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH is called for the first time
after DBMS_CLOUD_ADMIN.DISABLE_PRINCIPAL_AUTH is called with the provider
parameter AZURE and the username ADMIN.
• When the provider parameter is AWS:
– After you enable ARN on the Autonomous Database instance by running
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH, the credential named AWS$ARN is
available to use with any DBMS_CLOUD API that takes a credential as the input.
Examples
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider => 'AZURE',
username => 'SCOTT',
params => JSON_OBJECT('azure_tenantid' value 'azure_tenantid'));
END;
/
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider => 'GCP',
username => 'SCOTT',
params => JSON_OBJECT(
'grant_option' value 'TRUE' ));
END;
/
BEGIN
DBMS_CLOUD_ADMIN.ENABLE_PRINCIPAL_AUTH(
provider => 'AWS',
username => 'SCOTT',
params => JSON_OBJECT(
'aws_role_arn' value 'arn:aws:iam::123456:role/AWS_ROLE_ARN',
'external_id_type' value 'TENANT_OCID'));
END;
/
ENABLE_RESOURCE_PRINCIPAL Procedure
Enable resource principal credential for the database or for the specified schema. This
procedure creates the credential OCI$RESOURCE_PRINCIPAL.
Syntax
DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(
username IN VARCHAR2,
grant_option IN BOOLEAN DEFAULT FALSE);
Parameter
Parameter Description
username Specifies an optional user name. The name of the database schema to
be granted resource principal access.
If you do not supply a username, the username is set to ADMIN.
grant_option When username is supplied, if grant_option is TRUE the specified
username can enable resource principal usage for other users.
Exceptions
Usage Notes
• You must call DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL with the ADMIN
username or with no arguments before you call
DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL with a username for a database
user schema.
• You must set up a dynamic group and policies for the dynamic group before you
call DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL.
See the following for more information on policies, creating a dynamic group, and
creating rules:
– Use Resource Principal to Access Oracle Cloud Infrastructure Resources
– Managing Dynamic Groups
– Managing Policies
• Enabling the resource principal with
DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL is a one-time operation. You do not
need to enable the resource principal again, unless you run
DBMS_CLOUD_ADMIN.DISABLE_RESOURCE_PRINCIPAL to disable the resource
principal.
• Resource principal is not available with refreshable clones.
Example
EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();

You can verify that the credential is created by querying DBA_CREDENTIALS:

SELECT owner, credential_name FROM dba_credentials
    WHERE credential_name = 'OCI$RESOURCE_PRINCIPAL';

OWNER   CREDENTIAL_NAME
------- ----------------------
ADMIN   OCI$RESOURCE_PRINCIPAL
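As a further sketch, assuming a hypothetical schema named DATA_USER, the following
enables the resource principal for that schema and lets DATA_USER enable other users:

BEGIN
    DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL(
        username     => 'DATA_USER',  -- hypothetical schema name
        grant_option => TRUE);        -- DATA_USER can enable resource principal for other users
END;
/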
FINISH_WORKLOAD_CAPTURE Procedure
This procedure finishes the current workload capture, stops any subsequent workload
capture requests to the database, and uploads the capture files to Object Storage.
Example
BEGIN
    DBMS_CLOUD_ADMIN.FINISH_WORKLOAD_CAPTURE;
END;
/
Usage Notes
• To run this procedure you must be logged in as the ADMIN user or have the EXECUTE
privilege on DBMS_CLOUD_ADMIN.
• When you pass the duration parameter to START_WORKLOAD_CAPTURE, the capture
finishes when it reaches the specified time. However, if you call
FINISH_WORKLOAD_CAPTURE, this stops the workload capture (possibly before the time
specified with the duration parameter).
You can query the DBA_CAPTURE_REPLAY_STATUS view to check the finish workload status.
See DBA_CAPTURE_REPLAY_STATUS View for more information.
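For example, a query such as the following shows the capture status; all columns are
selected here as a sketch, since the view's column list is documented separately:

SELECT * FROM DBA_CAPTURE_REPLAY_STATUS;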
Note:
You must subscribe to the Information event
com.oraclecloud.databaseservice.autonomous.database.information to be
notified about the completion of FINISH_WORKLOAD_CAPTURE as well as the Object
Storage link to download the capture file. This PAR URL is contained in the
captureDownloadURL field of the event and is valid for 7 days from the date of
generation. See Information Events on Autonomous Database for more information.
PREPARE_REPLAY Procedure
The PREPARE_REPLAY procedure prepares the refreshable clone for a replay.
Parameters
Parameter Description
capture_name Specifies the name of the workload capture.
This parameter is mandatory.
Syntax
DBMS_CLOUD_ADMIN.PREPARE_REPLAY(
capture_name IN VARCHAR2);
Example
BEGIN
    DBMS_CLOUD_ADMIN.PREPARE_REPLAY(
        capture_name => 'cap_test1');
END;
/
This example prepares the refreshable clone to replay the workload indicated by the
capture_name parameter, which involves bringing it up to the capture start time and
then disconnecting it.
Usage Note
• To run this procedure you must be logged in as the ADMIN user or have the
EXECUTE privilege on DBMS_CLOUD_ADMIN.
PURGE_FLASHBACK_ARCHIVE Procedure
This procedure enables ADMIN users to purge historical data from Flashback Data
Archive. You can either purge all historical data from the Flashback Data Archive
flashback_archive, or purge selective data based on timestamps or System Change
Numbers (SCN).
Syntax
DBMS_CLOUD_ADMIN.PURGE_FLASHBACK_ARCHIVE(
    scope            IN VARCHAR2,
    before_scn       IN INTEGER DEFAULT NULL,
    before_timestamp IN TIMESTAMP DEFAULT NULL);
Parameters

Parameter Description
scope This specifies the scope to remove data from the Flashback Data Archive.
• all implies PURGE ALL; before_scn and before_timestamp must both be NULL.
• scn implies PURGE BEFORE SCN; before_scn must be non-NULL and before_timestamp must be NULL.
• timestamp implies PURGE BEFORE TIMESTAMP; before_scn must be NULL and before_timestamp must be non-NULL.
before_scn This specifies the System Change Number before which all the data is removed from the flashback archive.
before_timestamp This specifies the timestamp before which all the data is removed from the flashback archive.
Example
BEGIN
    DBMS_CLOUD_ADMIN.PURGE_FLASHBACK_ARCHIVE(
        scope => 'ALL');  -- Purge all historical data from Flashback Data Archive flashback_archive
END;
/
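The following sketch purges data older than a given System Change Number; the SCN
value 123456 is a placeholder for illustration:

BEGIN
    DBMS_CLOUD_ADMIN.PURGE_FLASHBACK_ARCHIVE(
        scope      => 'SCN',
        before_scn => 123456);  -- placeholder SCN; data before this SCN is purged
END;
/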
REPLAY_WORKLOAD Procedure
This procedure initiates a workload replay on your Autonomous Database instance.
The overloaded form enables you to replay the capture files from an Autonomous
Database instance, on-premises database, or other cloud service databases.
Syntax
DBMS_CLOUD_ADMIN.REPLAY_WORKLOAD(
capture_name IN VARCHAR2,
replay_name IN VARCHAR2 DEFAULT NULL,
capture_source_tenancy_ocid IN VARCHAR2 DEFAULT NULL,
capture_source_db_name IN VARCHAR2 DEFAULT NULL);
DBMS_CLOUD_ADMIN.REPLAY_WORKLOAD(
location_uri IN VARCHAR2,
credential_name IN VARCHAR2 DEFAULT NULL,
synchronization IN BOOLEAN DEFAULT TRUE,
process_capture IN BOOLEAN DEFAULT TRUE);
Parameters
Parameter Description
CAPTURE_NAME Specifies the name of the workload capture.
This parameter is mandatory.
REPLAY_NAME Specifies the replay name.
If you do not supply a REPLAY_NAME value, the REPLAY_NAME is auto-generated
with the format REPLAY_RANDOMNUMBER, for example,
REPLAY_1678329506.
CAPTURE_SOURCE_TENANCY_OCID Specifies the source tenancy OCID of the workload capture.
If you do not supply a CAPTURE_SOURCE_TENANCY_OCID value, the
CAPTURE_SOURCE_TENANCY_OCID is set to NULL.
This parameter is only mandatory when running the workload capture in a full
clone.
CAPTURE_SOURCE_DB_NAME Specifies the source database name of the workload capture.
If you do not supply a CAPTURE_SOURCE_DB_NAME value, the
CAPTURE_SOURCE_DB_NAME is set to NULL.
This parameter is only mandatory when running the workload capture in a full
clone.
LOCATION_URI Specifies the URI that points to an Object Storage location that contains the
captured files.
This parameter is mandatory.
CREDENTIAL_NAME Specifies the credential to access the object storage bucket.
If you do not supply a credential_name value, the database's default
credentials are used.
SYNCHRONIZATION Specifies the synchronization method used during workload replay.
• TRUE specifies that the synchronization is based on SCN.
• FALSE specifies that the synchronization is based on TIME.
If you do not supply a synchronization value, the synchronization is set
to TRUE.
PROCESS_CAPTURE Specifies whether or not the capture files need to be processed. It can
be set to FALSE only when you replay the same workload on the target
database repeatedly.
If you do not supply a process_capture value, the process_capture is set
to TRUE.
Examples
BEGIN
DBMS_CLOUD_ADMIN.REPLAY_WORKLOAD(
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname/o',
credential_name => 'CRED_TEST',
synchronization => TRUE,
process_capture => TRUE);
END;
/
You don't need to create a credential to access Oracle Cloud Infrastructure Object
Store if you enable resource principal credentials. See Use Resource Principal to
Access Oracle Cloud Infrastructure Resources for more information.
BEGIN
DBMS_CLOUD_ADMIN.REPLAY_WORKLOAD(
capture_name => 'CAP_TEST1');
END;
/
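When you replay a capture in a full clone, the source tenancy OCID and source
database name are mandatory. A sketch with placeholder values:

BEGIN
    DBMS_CLOUD_ADMIN.REPLAY_WORKLOAD(
        capture_name                => 'CAP_TEST1',
        replay_name                 => 'REPLAY_TEST1',               -- optional; auto-generated if omitted
        capture_source_tenancy_ocid => 'ocid1.tenancy.oc1..aaaa...', -- placeholder tenancy OCID
        capture_source_db_name      => 'SOURCE_DB');                 -- placeholder source database name
END;
/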
Usage Notes for Replaying the Workload from an On-Premises or Other Cloud Service
Database on another Autonomous Database
• To run this procedure you must be logged in as the ADMIN user or have the EXECUTE
privilege on DBMS_CLOUD_ADMIN.
• Before you start replay, you should upload the cap and capfiles subdirectories, which
contain the workload capture files, to the object storage location.
Usage Notes for Replaying the Workload from an Autonomous Database instance on
another Autonomous Database
• To run this procedure you must be logged in as the ADMIN user or have the EXECUTE
privilege on DBMS_CLOUD_ADMIN.
• The replay files are automatically uploaded to the Object Store as a zip file.
• You can query the DBA_CAPTURE_REPLAY_STATUS view to check the workload replay
status.
See DBA_CAPTURE_REPLAY_STATUS View for more information.
Note:
You must subscribe to the Information event
com.oraclecloud.databaseservice.autonomous.database.information to be
notified about the start and completion of the REPLAY_WORKLOAD as well as the
Object Storage link to download the replay reports. This PAR URL is contained in
the replayDownloadURL field of the event and is valid for 7 days from the date of
generation. See Information Events on Autonomous Database for more information.
SET_FLASHBACK_ARCHIVE_RETENTION Procedure
This procedure allows ADMIN users to modify the retention period for Flashback Data
Archive flashback_archive.
Syntax
DBMS_CLOUD_ADMIN.SET_FLASHBACK_ARCHIVE_RETENTION (
retention_days INTEGER);
Parameter Description
retention_days This specifies the length of time in days that the archived data should be retained for.
The value of retention_days must be greater than 0.
Example
BEGIN
    DBMS_CLOUD_ADMIN.SET_FLASHBACK_ARCHIVE_RETENTION(
        retention_days => 90);  -- sets the retention time to 90 days
END;
/
START_WORKLOAD_CAPTURE Procedure
This procedure initiates a workload capture on your Autonomous Database instance.
Syntax
DBMS_CLOUD_ADMIN.START_WORKLOAD_CAPTURE(
capture_name IN VARCHAR2,
duration IN NUMBER DEFAULT NULL);
Parameters
Parameter Description
capture_name Specifies the name of the workload capture.
This parameter is mandatory.
duration Specifies the duration in minutes for which you want to run the workload
capture.
• If you do not supply a duration value, the duration is set to NULL.
• If set to NULL, the workload capture continues until you run the
FINISH_WORKLOAD_CAPTURE procedure.
Example
BEGIN
DBMS_CLOUD_ADMIN.START_WORKLOAD_CAPTURE(
capture_name => 'test');
END;
/
Usage Notes
• To run this procedure you must be logged in as the ADMIN user or have the
EXECUTE privilege on DBMS_CLOUD_ADMIN.
• To measure the impacts of a system change on a workload, you must ensure that
the capture and replay systems are in the same logical state.
• Before initiating a workload capture, you should consider provisioning a
refreshable clone to ensure the same start point for the replay.
Note:
You must subscribe to the Information event
com.oraclecloud.databaseservice.autonomous.database.information to
be notified at the start of START_WORKLOAD_CAPTURE. See Information Events
on Autonomous Database for more information.
DBMS_CLOUD_ADMIN Exceptions
The following table describes exceptions for DBMS_CLOUD_ADMIN.
DBMS_CLOUD_AI Package
The DBMS_CLOUD_AI package facilitates and configures the translation of natural language
prompts to SQL statements.
• Summary of DBMS_CLOUD_AI Subprograms
This section covers the DBMS_CLOUD_AI subprograms provided with Autonomous
Database.
Subprogram Description
CREATE_PROFILE Procedure This procedure creates a new AI profile for translating natural
language prompts to SQL statements.
DISABLE_PROFILE Procedure This procedure disables an AI profile in the current
database.
DROP_PROFILE Procedure This procedure drops an existing AI profile.
ENABLE_PROFILE Procedure This procedure enables an AI profile to use in the
current database.
GENERATE Function This function performs AI translation in a stateless
manner and returns the result.
SET_ATTRIBUTE Procedure This procedure sets AI profile attributes.
SET_PROFILE Procedure This procedure sets the AI profile for the current session.
• CREATE_PROFILE Procedure
The procedure creates a new AI profile for translating natural language prompts to SQL
statements.
• DROP_PROFILE Procedure
The procedure drops an existing AI profile. If the profile does not exist, then the
procedure throws an error.
• ENABLE_PROFILE Procedure
This procedure enables the AI profile that the user specifies. The procedure
changes the status of the AI profile to ENABLED.
• DISABLE_PROFILE Procedure
This procedure disables the AI profile in the current database. The status of the AI
profile is changed to DISABLED by this procedure.
• SET_ATTRIBUTE Procedure
This procedure enables you to set AI profile attributes.
• SET_PROFILE Procedure
This procedure sets the AI profile for the current session.
• GENERATE Function
• Profile Attributes
CREATE_PROFILE Procedure
The procedure creates a new AI profile for translating natural language prompts to
SQL statements.
Syntax
DBMS_CLOUD_AI.CREATE_PROFILE(
    profile_name IN VARCHAR2,
    attributes   IN CLOB DEFAULT NULL,
    status       IN VARCHAR2 DEFAULT NULL,
    description  IN CLOB DEFAULT NULL
);
Parameters
Parameter Description
profile_name A name for the AI profile. The profile name must follow the naming
rules for Oracle SQL identifiers. The maximum length of the profile name is
125 characters.
This is a mandatory parameter.
attributes Profile attributes in JSON format. See AI Profile Attributes for
more details.
The default value is NULL.
status Status of the profile.
The default value is enabled.
description Description for the AI profile.
The default value is NULL.
Example

BEGIN
    DBMS_CLOUD_AI.CREATE_PROFILE(
        profile_name => 'OPENAI',
        attributes   => '{"provider": "openai",
                          "credential_name": "OPENAI_CRED",
                          "object_list": [{"owner": "SH", "name": "SALES"}]
                         }');
END;
/
DROP_PROFILE Procedure
The procedure drops an existing AI profile. If the profile does not exist, then the procedure
throws an error.
Syntax
DBMS_CLOUD_AI.DROP_PROFILE(
profile_name IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE
);
Parameters
Parameter Description
profile_name Name of the AI profile
force If TRUE, then the procedure ignores errors if the AI profile does not exist.
The default value for this parameter is FALSE.
Example
BEGIN
DBMS_CLOUD_AI.DROP_PROFILE(profile_name => 'OPENAI');
END;
/
Usage Notes
Use force to drop a profile and ignore errors if the AI profile does not exist.
ENABLE_PROFILE Procedure
This procedure enables the AI profile that the user specifies. The procedure changes the
status of the AI profile to ENABLED.
Syntax
DBMS_CLOUD_AI.ENABLE_PROFILE(
profile_name IN VARCHAR2
);
Parameters
Parameter Description
profile_name Name for the AI profile to enable
This parameter is mandatory.
Example

BEGIN
DBMS_CLOUD_AI.ENABLE_PROFILE(
profile_name => 'OPENAI'
);
END;
/
DISABLE_PROFILE Procedure
This procedure disables the AI profile in the current database. The status of the AI
profile is changed to DISABLED by this procedure.
Syntax
DBMS_CLOUD_AI.DISABLE_PROFILE(
profile_name IN VARCHAR2
);
Parameters
Parameter Description
profile_name Name for the AI profile.
This parameter is mandatory.
Example
BEGIN
DBMS_CLOUD_AI.DISABLE_PROFILE(
profile_name => 'OPENAI'
);
END;
/
SET_ATTRIBUTE Procedure
This procedure enables you to set AI profile attributes.
Syntax
DBMS_CLOUD_AI.SET_ATTRIBUTE(
    profile_name    IN VARCHAR2,
    attribute_name  IN VARCHAR2,
    attribute_value IN CLOB
);
Parameters
Only the owner can set or modify the attributes of the AI profile. For a list of supported
attributes, see Profile Attributes.
Parameter Description
profile_name Name of the AI profile for which you want to set the attributes.
This parameter is mandatory.
attribute_name Name of the AI profile attribute
This parameter is mandatory.
attribute_value Value of the profile attribute.
The default value is NULL.
Example
BEGIN
DBMS_CLOUD_AI.SET_ATTRIBUTE(
profile_name => 'OPENAI',
attribute_name => 'credential_name',
attribute_value => 'OPENAI_CRED_NEW'
);
END;
/
SET_PROFILE Procedure
This procedure sets the AI profile for the current session.
After setting an AI profile for the database session, any SQL statement with the prefix SELECT
AI is considered a natural language prompt. Depending on the action specified with the AI
prefix, a response is generated using AI. To use the AI prefix, see Examples of Using Select
AI. Optionally, it is possible to override the profile attributes or modify attributes by specifying
them in JSON format. See SET_ATTRIBUTE Procedure for setting the attributes.
The AI profile can only be set for the current session if the owner of the AI profile is the
session user.
To set an AI profile for all sessions of a specific database user or all user sessions in the
database, consider using a database event trigger for AFTER LOGON event on the specific user
or the entire database. See CREATE TRIGGER Statement for more details.
Syntax
DBMS_CLOUD_AI.SET_PROFILE(
    profile_name IN VARCHAR2
);
Parameters
Parameter Description
profile_name A name for the AI profile in the current session.
This parameter is mandatory.
Example
BEGIN
DBMS_CLOUD_AI.SET_PROFILE(
profile_name => 'OPENAI'
);
END;
/
GENERATE Function
This function provides AI translation in a stateless manner. With your existing AI
profile, you can use this function to perform the supported actions such as showsql,
narrate, or chat. The default action is showsql.
Overriding some or all of the profile attributes is also possible using this function.
Syntax
DBMS_CLOUD_AI.GENERATE(
prompt IN CLOB,
profile_name IN VARCHAR2 DEFAULT NULL,
action IN VARCHAR2 DEFAULT NULL,
attributes IN CLOB DEFAULT NULL
) RETURN CLOB;
Parameters
Parameter Description
prompt Natural language prompt to translate using AI.
The prompt can include SELECT AI <action> as the prefix. The
action can also be supplied separately as an "action" parameter. The
action supplied in prompt overrides the "action" parameter. Default
action is showsql.
This parameter is mandatory.
Parameter Description
profile_name Name of the AI profile. This parameter is optional if an AI profile is
already set in the session using DBMS_CLOUD_AI.SET_PROFILE.
The default value is NULL.
The following conditions apply:
• If a profile is set in the current session, the user may omit
profile_name argument in the DBMS_CLOUD_AI.GENERATE
function.
• If the profile_name argument is supplied in the
DBMS_CLOUD_AI.GENERATE function, it overrides any value set in
the session using the DBMS_CLOUD_AI.SET_PROFILE procedure.
• If there is no profile set in the session using the
DBMS_CLOUD_AI.SET_PROFILE procedure, the profile_name
argument must be supplied in the DBMS_CLOUD_AI.GENERATE
function.
Note:
For Database Actions, you can either specify the profile_name argument in
DBMS_CLOUD_AI.GENERATE, or you can run two steps as a PL/SQL script:
DBMS_CLOUD_AI.SET_PROFILE followed by DBMS_CLOUD_AI.GENERATE. For example:

EXEC DBMS_CLOUD_AI.SET_PROFILE('OPENAI');

SELECT DBMS_CLOUD_AI.GENERATE(prompt => 'how many customers')
FROM dual;

Alternatively, supply the profile name in the call itself:

SELECT DBMS_CLOUD_AI.GENERATE(prompt       => 'how many customers',
                              profile_name => 'OPENAI')
FROM dual;

SELECT DBMS_CLOUD_AI.GENERATE(prompt       => 'what is oracle autonomous database',
                              profile_name => 'OPENAI')
FROM dual;
action Action for translating natural prompt using AI. The supported actions
include showsql (default), narrate, and chat. Descriptions of actions
are included in Use AI Keyword to Enter Prompts.
Note:
This function does not support the runsql action. If you supply the runsql
action, it returns an error.
Examples
The following examples illustrate showsql, narrate, and chat actions that can be used
with the DBMS_CLOUD_AI.GENERATE function.
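The following sketches are illustrative; the profile name OPENAI and the prompts are
assumptions, and the output depends on your AI provider and schema:

-- showsql (default): return the generated SQL statement
SELECT DBMS_CLOUD_AI.GENERATE(prompt       => 'how many customers exist',
                              profile_name => 'OPENAI',
                              action       => 'showsql')
FROM dual;

-- narrate: return a natural language narration of the result
SELECT DBMS_CLOUD_AI.GENERATE(prompt       => 'summarize sales by region',
                              profile_name => 'OPENAI',
                              action       => 'narrate')
FROM dual;

-- chat: return a general AI chat response
SELECT DBMS_CLOUD_AI.GENERATE(prompt       => 'what is oracle autonomous database',
                              profile_name => 'OPENAI',
                              action       => 'chat')
FROM dual;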
Profile Attributes
Attributes of an AI profile help to manage and configure the behavior of the AI profile. Some
attributes are optional and have a default value.
Attributes
Note:
Boolean values are not applicable in the DBMS_CLOUD_AI.SET_ATTRIBUTE
procedure when setting a single attribute, because the attribute_value
parameter is of CLOB datatype.
Note:
The model attribute is not used for Azure, as the model is determined when
you create your deployment in the Azure OpenAI Service portal.
For example, the following object_list value specifies the SALES table in the SH
schema and all objects in the TEST_USER schema:

[
    {"owner": "SH", "name": "SALES"},
    {"owner": "TEST_USER"}
]

External tables created using sync of OCI Data Catalog or AWS Glue can
also be used in the object list. This helps in managing metadata in central Data
Catalogs and using the metadata directly for translating natural language
prompts using AI.
oci_compartment_id Specifies the OCID of the compartment you are permitted to access when
calling the OCI Generative AI service. The compartment ID can contain
alphanumeric characters, hyphens and dots.
The default is the compartment ID of the PDB.
oci_endpoint_id This attribute indicates the endpoint OCID of the Oracle dedicated AI
hosting cluster. The endpoint ID can contain alphanumeric characters,
hyphens and dots. To find the endpoint OCID, see Getting an Endpoint's
Details in Generative AI.
When you want to use the Oracle dedicated AI cluster, you must provide the
endpoint OCID of the hosting cluster.
By default, the endpoint ID is empty and the model is on-demand on a
shared infrastructure.
oci_runtimetype This attribute indicates the runtime type of the provided model. This attribute
is required when the model attribute is specified.
All allowed values can be found in OCI Generative AI runtimeType. See
LlmInferenceRequest Reference.
The default value is COHERE as the default model for OCI Generative AI is
cohere.command. This option is applicable for all actions of Select AI.
Select LLAMA if you are using the meta.llama-2-70b-chat model. Note
that this model is suitable for the chat action of Select AI only.
provider AI provider for the AI profile.
Supported providers:
• openai
• cohere
• azure
• oci
This is a mandatory attribute.
Note:
The Oracle Generative AI cluster is only
available in the Chicago region.
The following example uses Cohere as the provider and shows custom profile
attributes:
BEGIN
DBMS_CLOUD_AI.CREATE_PROFILE(
profile_name => 'COHERE',
attributes =>
'{"provider": "cohere",
"credential_name": "COHERE_CRED",
"object_list": [{"owner": "ADMIN"}],
"max_tokens":512,
"stop_tokens": [";"],
"model": "command-nightly",
"temperature": 0.5,
"comments": true
}');
END;
/
The following example shows custom profile attributes using OCI Generative AI:
BEGIN
    DBMS_CLOUD_AI.CREATE_PROFILE(
        'GENAI',
        '{"provider": "oci",
          "credential_name": "GENAI_CRED",
          "object_list": [{"owner": "SH", "name": "customers"},
                          {"owner": "SH", "name": "countries"},
                          {"owner": "SH", "name": "supplementary_demographics"},
                          {"owner": "SH", "name": "profits"},
                          {"owner": "SH", "name": "promotions"},
                          {"owner": "SH", "name": "products"}],
          "oci_compartment_id": "ocid1.compartment.oc1...",
          "oci_endpoint_id": "ocid1.generativeaiendpoint.oc1.us-chicago-1....",
          "region": "us-chicago-1",
          "model": "cohere.command-light",
          "oci_runtimetype": "COHERE"
        }');
END;
/
DBMS_CLOUD_FUNCTION Package
The DBMS_CLOUD_FUNCTION package allows you to invoke OCI and AWS Lambda remote
functions in your Autonomous Database as SQL functions.
• Summary of DBMS_CLOUD_FUNCTION Subprograms
This table summarizes the subprograms included in the DBMS_CLOUD_FUNCTION package.
Subprogram Description
CREATE_CATALOG Procedure This procedure creates a catalog.
CREATE_FUNCTION Procedure This procedure creates functions in a catalog.
DROP_CATALOG Procedure This procedure drops a catalog and the functions created using the
catalog.
DROP_FUNCTION Procedure This procedure drops functions from a catalog.
LIST_FUNCTIONS Procedure This procedure lists all the functions in a catalog.
SYNC_FUNCTIONS Procedure This procedure creates a PL/SQL wrapper for adding new
functions to the catalog and removing wrappers for functions that
have been deleted from the catalog.
• CREATE_CATALOG Procedure
This procedure creates a catalog in the database. The
DBMS_CLOUD_FUNCTION.CREATE_CATALOG procedure creates a catalog. A catalog is a set of
functions that creates the required infrastructure to execute subroutines. This procedure
is overloaded.
• CREATE_FUNCTION Procedure
This procedure creates functions in a catalog. There are two overloaded
DBMS_CLOUD_FUNCTION.CREATE_FUNCTION procedures.
• DROP_CATALOG Procedure
The DBMS_CLOUD_FUNCTION.DROP_CATALOG procedure drops the catalog and
functions created using the catalog. This procedure is overloaded.
• DROP_FUNCTION Procedure
The DBMS_CLOUD_FUNCTION.DROP_FUNCTION procedure drops the function. This
procedure is overloaded.
• LIST_FUNCTIONS Procedure
This procedure lists all the functions in a catalog.
• SYNC_FUNCTIONS Procedure
This procedure creates a PL/SQL wrapper for adding new functions to the catalog
and removing wrappers for functions that have been deleted from the catalog.
CREATE_CATALOG Procedure
This procedure creates a catalog in the database. The
DBMS_CLOUD_FUNCTION.CREATE_CATALOG procedure creates a catalog. A catalog is a set
of functions that creates the required infrastructure to execute subroutines. This
procedure is overloaded.
Syntax
DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
credential_name IN VARCHAR2,
catalog_name IN VARCHAR2,
service_provider IN VARCHAR2,
cloud_params IN CLOB
);
DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
library_name IN VARCHAR2,
library_listener_url IN VARCHAR2,
library_wallet_dir_name IN VARCHAR2,
library_ssl_server_cert_dn IN VARCHAR2,
library_remote_path IN VARCHAR2
);
Parameters
Parameter Description
credential_name Specifies the name of the credential for authentication.
This parameter is mandatory.
service_provider Specifies the type of the service provider.
This parameter can have OCI, AWS, or AZURE as a value.
This parameter is mandatory.
catalog_name Specifies the catalog name.
This parameter is mandatory.
cloud_params Provides parameters for the provider, for example, the compartment OCID,
region, and the Azure subscription_id.
This parameter is mandatory.
Parameter Description
library_name Specifies the name of the library when creating a remote library.
This parameter is mandatory.
library_listener_url Specifies the remote location of the library.
The parameter accepts a String value in host_name:port_number format.
For example: EHRPMZ_DBDOMAIN.adb-us-phoenix1.com:16000
This parameter is mandatory.
library_remote_path Specifies the remote library path.
You must provide the full absolute path to the remote library.
For example: /u01/app/oracle/product/21.0.0.0/client_1/lib/libst_shape.so
This parameter is mandatory.
library_wallet_dir_name Specifies the directory where the self-signed wallet is stored.
This parameter is mandatory.
library_ssl_server_cert_dn Specifies the server certificate Distinguished Name (DN).
This parameter is mandatory.
Errors
Examples
BEGIN
    DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
        credential_name  => 'DEFAULT_CREDENTIAL',
        catalog_name     => 'OCI_DEMO_CATALOG',
        service_provider => 'OCI',
        cloud_params     => '{"region_id":"us-phoenix-1", "compartment_id":"compartment_id"}'
    );
END;
/
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
credential_name => 'AZURE$PA',
catalog_name => 'AZURE_DEMO_CATALOG',
service_provider => 'AZURE',
cloud_params => '{"subscription_id":"44495e6a-8ff1-4161-b387-0e14e675b878"}'
);
END;
/
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_CATALOG (
library_name => 'EXT_DEMOLIB',
library_listener_url => 'remote_extproc_hostname:16000',
library_wallet_dir_name => 'WALLET_DIR',
library_ssl_server_cert_dn => 'CN=VM Hostname',
library_remote_path => '/u01/app/oracle/extproc_libs/library name'
);
END;
/
Usage Note
• To create a catalog you must be logged in as the ADMIN user or have privileges on
the following:
– DBMS_CLOUD_OCI_FNC_FUNCTIONS_INVOKE
– DBMS_CLOUD_OCI_FNC_FUNCTIONS_INVOKE_INVOKE_FUNCTION_RESPONSE_T
– DBMS_CLOUD
– Read privilege on USER_CLOUD_FUNCTION
– Read privilege on USER_CLOUD_FUNCTION_CATALOG
CREATE_FUNCTION Procedure
This procedure creates functions in a catalog. There are two overloaded
DBMS_CLOUD_FUNCTION.CREATE_FUNCTION procedures.
CREATE_FUNCTION Syntax
DBMS_CLOUD_FUNCTION.CREATE_FUNCTION (
    credential_name  IN VARCHAR2,
    catalog_name     IN VARCHAR2,
    function_name    IN VARCHAR2,
    function_id      IN VARCHAR2,
    input_args       IN CLOB DEFAULT NULL,
    return_type      IN VARCHAR2 DEFAULT 'CLOB',
    response_handler IN VARCHAR2 DEFAULT NULL
);
The return type of the response handler is a user-defined type or a PL/SQL type. The
function response is a JSON document with the following fields:

'{
    "STATUS":"<RESPONSE STATUS>",
    "RESPONSE_BODY":"<FUNCTION RESPONSE>"
}'
CREATE_FUNCTION Parameters
Parameter Description
credential_name Specifies the name of the credential for authentication.
This parameter is mandatory.
catalog_name Specifies the catalog name.
This parameter is mandatory.
function_name Specifies the PL/SQL function name.
This parameter is mandatory.
function_id The function_id parameter value refers to the OCI function, AWS Lambda
or Azure function.
This parameter is mandatory.
input_args Specifies the key value JSON pair accepting input arguments and their
types.
return_type Defines the return type of the function.
The return type is of CLOB data type.
response_handler Specifies the user defined callback to handle response.
CREATE_FUNCTION Exceptions
CREATE_FUNCTION Example
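A minimal sketch, assuming the catalog OCI_DEMO_CATALOG and credential
DEFAULT_CREDENTIAL from the CREATE_CATALOG example; the function OCID and the
input_args value are placeholders:

BEGIN
    DBMS_CLOUD_FUNCTION.CREATE_FUNCTION (
        credential_name => 'DEFAULT_CREDENTIAL',
        catalog_name    => 'OCI_DEMO_CATALOG',
        function_name   => 'demo_function',
        function_id     => 'ocid1.fnfunc.oc1.us-phoenix-1....',  -- placeholder function OCID
        input_args      => '{"name": "VARCHAR2"}'                -- assumed argument name and type
    );
END;
/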
CREATE_FUNCTION Syntax
DBMS_CLOUD_FUNCTION.CREATE_FUNCTION (
library_name IN VARCHAR2,
function_name IN VARCHAR2,
function_id IN VARCHAR2 DEFAULT NULL,
plsql_params IN CLOB DEFAULT NULL,
external_params IN CLOB DEFAULT NULL,
api_type IN VARCHAR2 DEFAULT 'FUNCTION',
with_context IN BOOLEAN DEFAULT FALSE,
return_type IN VARCHAR2 DEFAULT NULL
);
CREATE_FUNCTION Parameters
Parameter Description
library_name Specifies the remote library name.
This parameter is mandatory.
function_name Specifies the PL/SQL function name.
This parameter is mandatory.
function_id The function_id parameter value refers to the external procedures
(extproc).
If the value for function_id is not provided, the value in the
function_name is used.
plsql_params Specifies the key value JSON pair accepting the parameters for the
PL/SQL wrapper.
The values must be provided in the "var_name":"modetype,
datatype" format.
• var_name : is the name of the variable. This parameter is
mandatory.
• modetype : specifies the variable mode. The variable mode can
be one of the following:
– IN
– OUT
– IN OUT
• datatype : specifies the variable datatype. This parameter is
mandatory.
The default value for plsql_params is NULL.
external_params Specifies the parameters that need to be provided to the external C
function.
If value is not provided for external_params, the PL/SQL parameters
are used.
api_type Specifies the type of API (function or procedure).
The default value for api_type is function.
with_context Specifies that a context pointer is passed to the external procedure.
This context is used by the external C library for connecting back to the
database.
The default value for with_context is FALSE.
return_type Specifies the return type of the function created.
CREATE_FUNCTION Example
DECLARE
    plsql_params clob := TO_CLOB('{"sal": "IN, FLOAT", "comm": "IN, FLOAT"}');
    external_params clob := TO_CLOB('sal FLOAT, sal INDICATOR SHORT, comm FLOAT, comm INDICATOR SHORT,
                                     RETURN INDICATOR SHORT, RETURN FLOAT');
BEGIN
DBMS_CLOUD_FUNCTION.CREATE_FUNCTION (
LIBRARY_NAME => 'demolib',
FUNCTION_NAME => '"PercentComm"',
PLSQL_PARAMS => plsql_params,
EXTERNAL_PARAMS => external_params,
API_TYPE => 'FUNCTION',
WITH_CONTEXT => FALSE,
RETURN_TYPE => 'FLOAT'
);
END;
/
DROP_CATALOG Procedure
The DBMS_CLOUD_FUNCTION.DROP_CATALOG procedure drops the catalog and functions
created using the catalog. This procedure is overloaded.
Syntax
DBMS_CLOUD_FUNCTION.DROP_CATALOG (
catalog_name IN VARCHAR2
);
DBMS_CLOUD_FUNCTION.DROP_CATALOG (
library_name IN VARCHAR2
);
Parameters
Parameter Description
catalog_name Specifies the catalog name.
This parameter is mandatory.
library_name Specifies the library name.
This parameter is mandatory.
Errors
Example:
BEGIN
DBMS_CLOUD_FUNCTION.DROP_CATALOG (
catalog_name => 'OCI_DEMO_CATALOG'
);
END;
/
Example:
BEGIN
DBMS_CLOUD_FUNCTION.DROP_CATALOG (
library_name => 'library_name'
);
END;
/
DROP_FUNCTION Procedure
The DBMS_CLOUD_FUNCTION.DROP_FUNCTION procedure drops the function. This procedure is
overloaded.
The DBMS_CLOUD_FUNCTION.DROP_FUNCTION procedure is only supported for cloud functions.
Syntax
DBMS_CLOUD_FUNCTION.DROP_FUNCTION (
catalog_name IN VARCHAR2,
function_name IN VARCHAR2
);
DBMS_CLOUD_FUNCTION.DROP_FUNCTION (
library_name IN VARCHAR2,
function_name IN VARCHAR2
);
Parameters
Parameter Description
catalog_name Specifies the catalog name.
This parameter is mandatory.
function_name Specifies the name of the function to be dropped.
This parameter is mandatory.
library_name Specifies the library name.
This parameter is mandatory.
Examples
BEGIN
DBMS_CLOUD_FUNCTION.DROP_FUNCTION (
catalog_name => 'OCI_DEMO_CATALOG',
function_name => 'demo_function');
END;
/
BEGIN
DBMS_CLOUD_FUNCTION.DROP_FUNCTION (
library_name => 'EXTPROC_DEMO_LIBRARY',
function_name => 'demo_function');
END;
/
LIST_FUNCTIONS Procedure
This procedure lists all the functions in a catalog.
Syntax
DBMS_CLOUD_FUNCTION.LIST_FUNCTIONS (
credential_name IN VARCHAR2,
catalog_name IN VARCHAR2,
function_list OUT VARCHAR2
);
Parameters
Parameter Description
credential_name Specifies the name of the credential for authentication.
This parameter is mandatory.
function_list Returns the list of functions in JSON format.
This parameter is mandatory.
catalog_name Specifies the catalog name.
This parameter is mandatory.
Errors
Example:
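A minimal sketch, reusing the credential and catalog names from the CREATE_CATALOG
example; the OUT parameter receives the function list as JSON:

DECLARE
    function_list VARCHAR2(32767);
BEGIN
    DBMS_CLOUD_FUNCTION.LIST_FUNCTIONS (
        credential_name => 'DEFAULT_CREDENTIAL',
        catalog_name    => 'OCI_DEMO_CATALOG',
        function_list   => function_list
    );
    DBMS_OUTPUT.PUT_LINE(function_list);  -- prints the JSON list of functions
END;
/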
SYNC_FUNCTIONS Procedure
This procedure creates a PL/SQL wrapper for adding new functions to the catalog and
removing wrappers for functions that have been deleted from the catalog.
Syntax
DBMS_CLOUD_FUNCTION.SYNC_FUNCTIONS (
catalog_name IN VARCHAR2,
refresh_rate IN VARCHAR2 DEFAULT 'DAILY'
);
Parameters
Parameter Description
catalog_name Specifies the catalog name.
This parameter is mandatory.
refresh_rate Specifies the refresh rate of the function.
refresh_rate can accept the following values:
• HOURLY
• DAILY
• WEEKLY
• MONTHLY
The default value for this parameter is DAILY.
Errors
Example:
BEGIN
DBMS_CLOUD_FUNCTION.SYNC_FUNCTIONS (
catalog_name => 'OCI_DEMO_CATALOG'
);
END;
/
DBMS_CLOUD_LINK Package
The DBMS_CLOUD_LINK package allows a user to register a table or a view as a data set for
read only access with Cloud Links.
• DBMS_CLOUD_LINK Overview
Describes the use of the DBMS_CLOUD_LINK package.
• Summary of DBMS_CLOUD_LINK Subprograms
Shows a table with a summary of the subprograms included in the
DBMS_CLOUD_LINK package.
DBMS_CLOUD_LINK Overview
Describes the use of the DBMS_CLOUD_LINK package.
The DBMS_CLOUD_LINK package provides the REGISTER procedure that allows you to
register a table or a view as a data set for use with Cloud Links. Before you can
register a data set, the ADMIN user must grant a user permission to register a data set
using the DBMS_CLOUD_LINK_ADMIN.GRANT_REGISTER procedure. After the ADMIN runs
GRANT_REGISTER, a user can register a table or a view they own as a registered data
set (or register an object in another schema if the user has READ WITH GRANT OPTION
privilege on the object). Registered data sets provide remote access to the registered
object with Cloud Links, subject to the scope specified with the REGISTER procedure.
Subprogram Description
DESCRIBE Function This function retrieves the description for a data set. The
description is provided when a data set is registered with
DBMS_CLOUD_LINK.REGISTER.
FIND Procedure Retrieves the namespace, name, and description for data
sets that match the search string. Matching data sets are
only shown if they are accessible to the user, based on
access restrictions.
GET_DATABASE_ID Function Returns a unique identifier for the Autonomous Database
instance. Repeated calls to
DBMS_CLOUD_LINK.GET_DATABASE_ID on the same
instance always return the same value.
GRANT_AUTHORIZATION Grants authorization to a specified database to access the
Procedure specified data set.
REGISTER Procedure Registers a table or view as a data set.
REVOKE_AUTHORIZATION Revokes authorization for a specified database to access the
Procedure specified data set.
UNREGISTER Procedure Removes a registered data set.
• DESCRIBE Function
• FIND Procedure
• GET_DATABASE_ID Function
• GRANT_AUTHORIZATION Procedure
• REGISTER Procedure
• REVOKE_AUTHORIZATION Procedure
• UNREGISTER Procedure
DESCRIBE Function
This function retrieves the description for a data set. The description is provided when a data
set is registered with DBMS_CLOUD_LINK.REGISTER.
Syntax
DBMS_CLOUD_LINK.DESCRIBE(
namespace IN VARCHAR2,
name IN VARCHAR2
) return CLOB;
Parameters
Parameter Description
namespace Specifies the namespace of a registered data set.
name Specifies the name of a registered data set.
Usage Note
You can use this function subject to the access restrictions imposed at registration time with
DBMS_CLOUD_LINK.REGISTER. If a data set is not accessible to a database, then its description
will not be retrieved.
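Example

A minimal sketch, assuming a data set registered under the hypothetical namespace
TRADING with the name TRADE_DATA:

DECLARE
    description CLOB;
BEGIN
    description := DBMS_CLOUD_LINK.DESCRIBE(
        namespace => 'TRADING',      -- hypothetical namespace
        name      => 'TRADE_DATA');  -- hypothetical data set name
    DBMS_OUTPUT.PUT_LINE(description);
END;
/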
FIND Procedure
This procedure retrieves the namespace, name, and description for data sets that match the
search string. Matching data sets are only shown if they are accessible to the user, based on
access restrictions.
Syntax
DBMS_CLOUD_LINK.FIND(
search_string IN VARCHAR2,
search_result OUT CLOB
);
Parameters
Parameter Description
search_string Specifies the search string. The search string is not case sensitive.
search_result A JSON document that includes the namespace, name, and description
values for the data set.
Usage Note
The search string is not case sensitive and the package leverages free text search
using Oracle Text.
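Example

A minimal sketch; the search string 'trade' is illustrative:

DECLARE
    search_result CLOB;
BEGIN
    DBMS_CLOUD_LINK.FIND(
        search_string => 'trade',
        search_result => search_result);
    DBMS_OUTPUT.PUT_LINE(search_result);  -- JSON with namespace, name, and description
END;
/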
GET_DATABASE_ID Function
The function returns a unique identifier for the Autonomous Database instance.
Repeated calls to DBMS_CLOUD_LINK.GET_DATABASE_ID on the same instance always
return the same value.
You can call this function on a database that accesses a registered data set remotely to
obtain that database's ID. You can then provide the database ID to the data set owner, who
can use it for more fine-grained data access control, for example with Virtual Private
Database (VPD), based on the database IDs of specific remote sites.
A database ID identifies each remote database that accesses a registered data set, to
track and audit access in the V$CLOUD_LINK_ACCESS_STATS and
GV$CLOUD_LINK_ACCESS_STATS Views on the database that owns a registered
data set.
Syntax
DBMS_CLOUD_LINK.GET_DATABASE_ID()
RETURN VARCHAR2;
Usage Notes
Cloud links use the unique identifier that DBMS_CLOUD_LINK.GET_DATABASE_ID returns
to identify individual databases that are accessing a data set remotely. The database
owning the registered data set tracks and audits the database ID as a record of the
origin for data set access in the V$CLOUD_LINK_ACCESS_STATS and
GV$CLOUD_LINK_ACCESS_STATS Views.
The DBMS_CLOUD_LINK.GET_DATABASE_ID identifier is available as a SYS_CONTEXT value,
so you can programmatically obtain this information about a connecting remote session
using SYS_CONTEXT, to further restrict and control what specific data individual
Autonomous Database instances can access remotely with Virtual Private Database (VPD).
Return Values
A unique identifier for the Autonomous Database instance, returned as a VARCHAR2 value.
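For example, the following query returns the identifier of the current instance:

SELECT DBMS_CLOUD_LINK.GET_DATABASE_ID() FROM DUAL;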
GRANT_AUTHORIZATION Procedure
This procedure grants authorization to a specified database to access the specified
data set.
Syntax
DBMS_CLOUD_LINK.GRANT_AUTHORIZATION(
    database_id IN VARCHAR2,
    namespace   IN VARCHAR2 DEFAULT,
    name        IN VARCHAR2
);
Parameters
Parameter Description
database_id Specifies the database ID for an Autonomous Database instance. Use
DBMS_CLOUD_LINK.GET_DATABASE_ID to obtain the database ID.
namespace Specifies the data set namespace to grant access authorization for the
specified database_id.
name Specifies the data set name to grant access authorization for the specified
database_id.
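Example

A minimal sketch; the database ID is a placeholder value of the form returned by
DBMS_CLOUD_LINK.GET_DATABASE_ID, and the namespace and name are the hypothetical
values used earlier:

BEGIN
    DBMS_CLOUD_LINK.GRANT_AUTHORIZATION(
        database_id => '34xxxx754755680',  -- placeholder database ID
        namespace   => 'TRADING',          -- hypothetical namespace
        name        => 'TRADE_DATA');      -- hypothetical data set name
END;
/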
REGISTER Procedure
The procedure registers a table or a view as a data set to allow remote read only access,
subject to restrictions imposed by the scope parameter.
Syntax
DBMS_CLOUD_LINK.REGISTER(
schema_name IN VARCHAR2,
schema_object IN VARCHAR2,
namespace IN VARCHAR2,
name IN VARCHAR2,
description IN CLOB,
scope IN CLOB,
auth_required IN BOOLEAN DEFAULT,
data_set_owner IN VARCHAR2 DEFAULT,
offload_targets IN CLOB DEFAULT
);
Parameters
Parameter Description
schema_name Specifies the owner of the table or view specified with the schema_object
parameter.
schema_object Specifies the name of a table or a view.
namespace Specifies the namespace for the data set.
A NULL value specifies a system-generated namespace value, unique to the
Autonomous Database instance.
name Specifies the name for the data set.
description Specifies text to describe the data.
Parameter Description
scope Describes who is allowed to access the data set. The value is a comma
separated list consisting of one or more of the following:
• Database OCID: Access to the data set is allowed for the specific
Autonomous Database instances identified by OCID.
• Compartment OCID: Access to the data set is allowed for databases in
the compartments identified by compartment OCID.
• Tenancy OCID: Access to the data set is allowed for databases in the
tenancies identified by tenancy OCID.
• Region name: Access to the data set is allowed for databases in the
region identified by the named region. In all cases, Cloud Links access
is limited to within a single region and is not cross-region.
• MY$COMPARTMENT: Access to the data set is allowed for databases in
the same compartment as the data set owner.
• MY$TENANCY: Access to the data set is allowed for databases in the
same tenancy as the data set owner.
• MY$REGION: Access to the data set is allowed for databases in the
same region as the data set owner.
The scope values, MY$REGION, MY$TENANCY, and MY$COMPARTMENT are
variables that act as convenience macros and resolve to OCIDs.
auth_required Specifies that an additional authorization is required for databases to read
from the data set. The following cases are possible:
• Databases that are within the SCOPE specified and have been
authorized with DBMS_CLOUD_LINK.GRANT_AUTHORIZATION can view
rows from the data set.
• Any databases that are within the specified SCOPE but have not been
authorized with DBMS_CLOUD_LINK.GRANT_AUTHORIZATION cannot
view data set rows. In this case, consumers without authorization see
the data set as empty.
• Databases that are not within the SCOPE specified see an error when
attempting to access the data set.
data_set_owner Specifies the data set owner. This indicates who the data set belongs to or
who is responsible for updating and maintaining the data set. For example,
you can set the data_set_owner to the email address of the person who
registered the data set.
Parameter Description
offload_targets Specifies one or more Autonomous Database OCIDs of refreshable clones
where access to data sets is offloaded, from the Autonomous Database
where the data set is registered.
The value is a JSON document that has one or more pairs of
cloud_link_database_id and offload_target values.
For each data set consumer, that is an Autonomous Database instance matching
the value of the specified cloud_link_database_id, access is offloaded to
the refreshable clone identified by the OCID specified in offload_target.
For example:
{
"OFFLOAD_TARGETS": [
{
"CLOUD_LINK_DATABASE_ID": "34xxxx754755680",
"OFFLOAD_TARGET":
"ocid1.autonomousdatabase.oc1..xxxxxaaba3pv6wkcr4jqae5f44
n2b2m2yt2j6rx32uzr4h25vqstifsfghi"
}
]
}
Usage Notes
• You can register an object in another schema if you have the READ WITH GRANT OPTION
privilege on the object.
• After you register an object, users may need to wait up to ten (10) minutes to access the
object with Cloud Links.
• To change the scope for an existing data set, you must first unregister the data set with
DBMS_CLOUD_LINK.UNREGISTER. Then you need to register the data set with the new
scope with DBMS_CLOUD_LINK.REGISTER. The total wait time for these two operations can
be up to 20 minutes, including 10 minutes for the unregister and another 10 minutes for
the register to be propagated and accessible through Cloud Links.
This delay can also impact the visibility of the data set in both the
DBA_CLOUD_LINK_REGISTRATIONS and DBA_CLOUD_LINK_ACCESS views.
• The scope you set when you register a data set is only honored when it matches or is
more restrictive than the value set with DBMS_CLOUD_LINK_ADMIN.GRANT_REGISTER. For
example, assume the ADMIN granted the scope 'MY$TENANCY' with GRANT_REGISTER, and
the user specifies 'MY$REGION' when they register a data set with
DBMS_CLOUD_LINK.REGISTER. In this case the user sees an error.
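Example

A minimal sketch of registering a view for tenancy-wide access; the schema, object,
namespace, and data set names are illustrative:

BEGIN
    DBMS_CLOUD_LINK.REGISTER(
        schema_name   => 'ADMIN',
        schema_object => 'SALES_VIEW',   -- hypothetical view
        namespace     => 'TRADING',      -- hypothetical namespace
        name          => 'TRADE_DATA',   -- hypothetical data set name
        description   => 'Sales data for remote read-only access',
        scope         => 'MY$TENANCY');
END;
/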
REVOKE_AUTHORIZATION Procedure
This procedure revokes authorization for a specified database to access the specified
data set.
Syntax
DBMS_CLOUD_LINK.REVOKE_AUTHORIZATION(
database_id IN VARCHAR2,
namespace IN VARCHAR2 DEFAULT,
name IN VARCHAR2
);
Parameters
Parameter Description
database_id Specifies the database ID for an Autonomous Database instance. Use
DBMS_CLOUD_LINK.GET_DATABASE_ID to obtain the database ID.
namespace Specifies the data set namespace to revoke access authorization for
the specified database_id.
name Specifies the data set name to revoke access authorization for the
specified database_id.
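Example

A minimal sketch that revokes the authorization shown for GRANT_AUTHORIZATION; the
values are the same placeholders:

BEGIN
    DBMS_CLOUD_LINK.REVOKE_AUTHORIZATION(
        database_id => '34xxxx754755680',  -- placeholder database ID
        namespace   => 'TRADING',
        name        => 'TRADE_DATA');
END;
/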
UNREGISTER Procedure
The procedure allows a user who previously registered a table or a view with the
REGISTER procedure to unregister the table or view so that it is no longer available for
remote access.
Syntax
DBMS_CLOUD_LINK.UNREGISTER(
namespace IN VARCHAR2,
name IN VARCHAR2
);
Parameters
Parameter Description
namespace Specifies the namespace of the data set.
name Specifies the name for the data set.
Usage Note
DBMS_CLOUD_LINK.UNREGISTER may also take up to ten (10) minutes to fully propagate, after
which the data can no longer be accessed remotely.
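Example

A minimal sketch, using the same hypothetical namespace and name as the REGISTER
example:

BEGIN
    DBMS_CLOUD_LINK.UNREGISTER(
        namespace => 'TRADING',
        name      => 'TRADE_DATA');
END;
/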
DBMS_CLOUD_LINK_ADMIN Package
The DBMS_CLOUD_LINK_ADMIN package allows the ADMIN user to enable a database user to
register data sets or to access registered data sets for a given Autonomous Database
instance, subject to the access restrictions as defined with the granted scope.
Privileges can also be disabled for a user that has the privileges set to register data sets or
access registered data sets.
• DBMS_CLOUD_LINK_ADMIN Overview
Describes use of the DBMS_CLOUD_LINK_ADMIN package.
• Summary of DBMS_CLOUD_LINK_ADMIN Subprograms
This table summarizes the subprograms included in the DBMS_CLOUD_LINK_ADMIN
package.
DBMS_CLOUD_LINK_ADMIN Overview
Describes use of the DBMS_CLOUD_LINK_ADMIN package.
Cloud Links provide a cloud-based method to remotely access read only data on an
Autonomous Database instance. The DBMS_CLOUD_LINK_ADMIN package leverages Oracle
Cloud Infrastructure access mechanisms to make data sets accessible within a specific
scope and, in addition, there is an optional authorization step.
Subprogram Description
GRANT_AUTHORIZE Procedure Allows the ADMIN user to grant a user permission to invoke
the DBMS_CLOUD_LINK.GRANT_AUTHORIZATION and
DBMS_CLOUD_LINK.REVOKE_AUTHORIZATION procedures.
GRANT_READ Procedure Allows a user to read registered data sets, subject to access
restrictions imposed on data sets at registration.
GRANT_REGISTER Procedure Allows a user to register a data set for remote access.
REVOKE_AUTHORIZE Procedure Allows the ADMIN user to revoke a user's permission to invoke
DBMS_CLOUD_LINK.GRANT_AUTHORIZATION and
DBMS_CLOUD_LINK.REVOKE_AUTHORIZATION procedures.
Subprogram Description
REVOKE_READ Procedure Disallows a user from accessing registered data sets of the
Autonomous Database instance.
REVOKE_REGISTER Procedure Disallows a user from registering data sets for remote access.
Data sets that were already registered by the user are unaffected.
• GRANT_AUTHORIZE Procedure
• GRANT_READ Procedure
• GRANT_REGISTER Procedure
• REVOKE_AUTHORIZE Procedure
• REVOKE_READ Procedure
• REVOKE_REGISTER Procedure
GRANT_AUTHORIZE Procedure
The procedure grants a user permission to invoke the
DBMS_CLOUD_LINK.GRANT_AUTHORIZATION and
DBMS_CLOUD_LINK.REVOKE_AUTHORIZATION procedures.
Syntax
DBMS_CLOUD_LINK_ADMIN.GRANT_AUTHORIZE(
username IN VARCHAR2
);
Parameters
Parameter Description
username Specifies a username.
Usage Notes
• To enable authorization for a data set with
DBMS_CLOUD_LINK.GRANT_AUTHORIZATION, you must have been granted the privilege
with DBMS_CLOUD_LINK_ADMIN.GRANT_AUTHORIZE. This is true for the ADMIN user as
well; however, the ADMIN user can grant this privilege to themselves.
GRANT_READ Procedure
The procedure allows a user to read registered data sets, subject to the access
restrictions imposed on data sets when a data set is registered using
DBMS_CLOUD_LINK.REGISTER.
Syntax
DBMS_CLOUD_LINK_ADMIN.GRANT_READ(
username IN VARCHAR2
);
Parameters
Parameter Description
username Specifies a username.
Usage Notes
• To read data sets, you must have been granted the privilege with
DBMS_CLOUD_LINK_ADMIN.GRANT_READ. This is true for the ADMIN user as well; however,
the ADMIN user can grant this privilege to themselves.
• A user can query SYS_CONTEXT('USERENV', 'CLOUD_LINK_READ_ENABLED') to check if
they are enabled for READ access to data sets. For example:

SELECT SYS_CONTEXT('USERENV', 'CLOUD_LINK_READ_ENABLED') FROM DUAL;
GRANT_REGISTER Procedure
The procedure allows a user to register a data set for remote access.
Syntax
DBMS_CLOUD_LINK_ADMIN.GRANT_REGISTER(
username IN VARCHAR2,
scope IN CLOB
);
Parameters
Parameter Description
username Specifies a user name.
scope Specifies the scope in which permissions to publish are to be granted to the
specified user.
Valid values are:
• 'MY$REGION'
• 'MY$TENANCY'
• 'MY$COMPARTMENT'
Usage Notes
• To register data sets, you must have been granted the privilege with
DBMS_CLOUD_LINK_ADMIN.GRANT_REGISTER. This is true for the ADMIN user as well;
however, the ADMIN user can grant this privilege to themselves.
• A user can query SYS_CONTEXT('USERENV', 'CLOUD_LINK_REGISTER_ENABLED') to check
if they are enabled for registering data sets.
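Example

A minimal sketch; the username DB_USER1 is illustrative:

BEGIN
    DBMS_CLOUD_LINK_ADMIN.GRANT_REGISTER(
        username => 'DB_USER1',    -- hypothetical database user
        scope    => 'MY$TENANCY');
END;
/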
REVOKE_AUTHORIZE Procedure
This procedure disallows a user from invoking
DBMS_CLOUD_LINK.GRANT_AUTHORIZATION and
DBMS_CLOUD_LINK.REVOKE_AUTHORIZATION procedures.
Syntax
DBMS_CLOUD_LINK_ADMIN.REVOKE_AUTHORIZE(
username IN VARCHAR2
);
Parameters
Parameter Description
username Specifies a username.
REVOKE_READ Procedure
This procedure disallows a user from accessing registered data sets on the
Autonomous Database instance.
Syntax
DBMS_CLOUD_LINK_ADMIN.REVOKE_READ(
username IN VARCHAR2
);
Parameters
Parameter Description
username Specifies a username.
Usage Note
• A user can query SYS_CONTEXT('USERENV', 'CLOUD_LINK_READ_ENABLED') to
check if they are enabled for READ access to data sets. For example:

SELECT SYS_CONTEXT('USERENV', 'CLOUD_LINK_READ_ENABLED') FROM DUAL;
REVOKE_REGISTER Procedure
The procedure disallows a user from registering data sets for remote access. Data sets that
were already registered by the user are unaffected.
Syntax
DBMS_CLOUD_LINK_ADMIN.REVOKE_REGISTER(
username IN VARCHAR2
);
Parameters
Parameter Description
username Specifies a user name.
Usage Note
• A user can query SYS_CONTEXT('USERENV', 'CLOUD_LINK_REGISTER_ENABLED') to check
if they are enabled for registering data sets. For example:

SELECT SYS_CONTEXT('USERENV', 'CLOUD_LINK_REGISTER_ENABLED') FROM DUAL;
DBMS_CLOUD_MACADM Package
This section covers the DBMS_CLOUD_MACADM subprograms provided with Autonomous
Database.
• CONFIGURE_DATABASE_VAULT Procedure
This procedure configures the initial two Oracle Database user accounts, which are
granted the DV_OWNER and DV_ACCTMGR roles, respectively, for Autonomous Database.
• DISABLE_DATABASE_VAULT Procedure
This procedure disables Oracle Database Vault on Autonomous Database. To use this
procedure you must have the DV_OWNER role.
• DISABLE_USERMGMT_DATABASE_VAULT Procedure
This procedure disallows user management related operations for specified components
on Autonomous Database with Oracle Database Vault enabled.
• ENABLE_DATABASE_VAULT Procedure
This procedure enables Oracle Database Vault on Autonomous Database. To use this
procedure you must have the DV_OWNER role.
• ENABLE_USERMGMT_DATABASE_VAULT Procedure
This procedure allows user management with Oracle Database Vault enabled for
specified components on Autonomous Database.
CONFIGURE_DATABASE_VAULT Procedure
This procedure configures the initial two Oracle Database user accounts, which are
granted the DV_OWNER and DV_ACCTMGR roles, respectively, for Autonomous Database.
Syntax
DBMS_CLOUD_MACADM.CONFIGURE_DATABASE_VAULT(
dvowner_uname IN VARCHAR2,
dvacctmgr_uname IN VARCHAR2);
Parameters
Parameter Description
dvowner_uname Name of the user who will be the Database Vault Owner. This user
will be granted the DV_OWNER role.
dvacctmgr_uname Name of the user who will be the Database Vault Account Manager.
This user will be granted the DV_ACCTMGR role. If you omit this
setting, the user specified by the dvowner_uname parameter is made
the Database Vault Account Manager and granted the DV_ACCTMGR
role.
Usage Notes
• Only the ADMIN user can run the DBMS_CLOUD_MACADM.CONFIGURE_DATABASE_VAULT
procedure.
• The DBMS_CLOUD_MACADM.CONFIGURE_DATABASE_VAULT procedure does not allow
the ADMIN user to be specified as an input for the dvowner_uname or
dvacctmgr_uname arguments.
Example
BEGIN
DBMS_CLOUD_MACADM.CONFIGURE_DATABASE_VAULT(
dvowner_uname => 'adb_dbv_owner',
dvacctmgr_uname => 'adb_dbv_acctmgr');
END;
/
DISABLE_DATABASE_VAULT Procedure
This procedure disables Oracle Database Vault on Autonomous Database. To use this
procedure you must have the DV_OWNER role.
Syntax
DBMS_CLOUD_MACADM.DISABLE_DATABASE_VAULT;
Usage Notes
After you run DBMS_CLOUD_MACADM.DISABLE_DATABASE_VAULT you must restart the
Autonomous Database instance.
To use this procedure you must have the DV_OWNER role.
Example
EXEC DBMS_CLOUD_MACADM.DISABLE_DATABASE_VAULT;
DISABLE_USERMGMT_DATABASE_VAULT Procedure
This procedure disallows user management related operations for specified components on
Autonomous Database with Oracle Database Vault enabled.
Syntax
DBMS_CLOUD_MACADM.DISABLE_USERMGMT_DATABASE_VAULT('component_name');
Usage Notes
If you enable Oracle Database Vault with Autonomous Database and you want to enforce
strict separation of duty to disallow user management related operations for the APEX
component, use the DBMS_CLOUD_MACADM.DISABLE_USERMGMT_DATABASE_VAULT procedure.
To use this procedure you must have the DV_ACCTMGR and DV_ADMIN roles.
Example
The following example disables user management for the APEX component:
EXEC DBMS_CLOUD_MACADM.DISABLE_USERMGMT_DATABASE_VAULT('APEX');
ENABLE_DATABASE_VAULT Procedure
This procedure enables Oracle Database Vault on Autonomous Database. To use this
procedure you must have the DV_OWNER role.
Syntax
DBMS_CLOUD_MACADM.ENABLE_DATABASE_VAULT;
Usage Notes
After you run DBMS_CLOUD_MACADM.ENABLE_DATABASE_VAULT you must restart the Autonomous
Database instance.
To use this procedure you must have the DV_OWNER role.
Example
The following example enables Oracle Database Vault:
BEGIN
DBMS_CLOUD_MACADM.ENABLE_DATABASE_VAULT;
END;
/
ENABLE_USERMGMT_DATABASE_VAULT Procedure
This procedure allows user management with Oracle Database Vault enabled for
specified components on Autonomous Database.
Syntax
DBMS_CLOUD_MACADM.ENABLE_USERMGMT_DATABASE_VAULT('component_name');
Usage Notes
To use this procedure you must have the DV_ACCTMGR and DV_ADMIN roles.
Example
The following example enables user management for the APEX component:
EXEC DBMS_CLOUD_MACADM.ENABLE_USERMGMT_DATABASE_VAULT('APEX');
DBMS_CLOUD_NOTIFICATION Package
The DBMS_CLOUD_NOTIFICATION package allows you to send messages or the output of
a SQL query to a provider.
• DBMS_CLOUD_NOTIFICATION Overview
• Summary of DBMS_CLOUD_NOTIFICATION Subprograms
This table summarizes the subprograms included in the package.
DBMS_CLOUD_NOTIFICATION Overview
This scenario describes setup and use of the DBMS_CLOUD_NOTIFICATION package.
Summary of DBMS_CLOUD_NOTIFICATION Subprograms
This table summarizes the subprograms included in the package.
Subprogram Description
SEND_DATA Procedure Send SQL query output to a provider.
SEND_MESSAGE Procedure Send a text message to a provider.
• SEND_DATA Procedure
The SEND_DATA procedure sends the results of the specified query to a provider.
• SEND_MESSAGE Procedure
The procedure sends a text message to a provider.
SEND_DATA Procedure
The SEND_DATA procedure sends the results of the specified query to a provider.
Syntax
DBMS_CLOUD_NOTIFICATION.SEND_DATA(
provider IN VARCHAR2,
credential_name IN VARCHAR2,
query IN CLOB,
params IN CLOB
);
Parameters
Parameter Description
provider Specifies the provider.
Valid values are: 'email', 'msteams', and 'slack'
This parameter is mandatory.
credential_name The name of the credential object to access the provider.
For the email provider, the credential is the name of the credential of the
SMTP approved sender that contains its username and password.
For the msteams provider, the credential's username must be the bot_id and the password must be the bot_secret.
For the Slack provider, the credential's username must be "SLACK_TOKEN"
and the password is the Slack Bot Token.
This parameter is mandatory.
query Specifies the SQL query to run. Results are sent to the provider.
This parameter is mandatory.
Parameter Description
params Specifies specific parameters for the provider in JSON format.
For the provider type email, the valid parameters are:
• sender. This specifies the Email ID of the approved sender in a String
value.
• smtp_host. This specifies the SMTP host name in a String value.
• subject. This specifies the subject of the email in a String value. The
maximum size is 100 characters.
• recipient. This specifies the email IDs of recipients in a String
value. Use a comma between email IDs when there are multiple
recipients.
• to_cc. This specifies the email IDs that are receiving a CC of the email.
It is a String value. Use a comma between email IDs when there are
multiple CC recipients.
• to_bcc. This specifies the email IDs that are receiving a BCC of the
email. It is a String value. Use a comma between email IDs when
there are multiple BCC recipients.
• message. This specifies the message text in a String value.
• type. This specifies the output format in a String value as either CSV
or JSON.
• title. This specifies the title of the attachment of SQL output in a
String value. Because the title is used to generate a file name, it must
contain only letters, digits, underscores, hyphens, or dots.
For the provider type msteams, the valid parameters are:
• tenant. This specifies the tenant ID in a String format.
• team. This specifies the team ID in a String format.
• channel. This specifies the channel ID in a String format.
• title. This specifies the title of the file in a String format. The
maximum size is 50 characters.
• type. This specifies the output format in a String value. Valid values
are CSV or JSON.
For the provider type slack, the valid parameters are:
• channel. This specifies the Channel ID in a String value. The
Channel ID is a unique ID for a channel and is different from the
channel name. In Slack, when you view channel details you can find the
Channel ID on the “About” tab.
• type. This specifies the output format in a String value. Valid values
are CSV or JSON.
This parameter is mandatory.
Usage Notes
Use the procedure DBMS_CLOUD.CREATE_CREDENTIAL(credential_name, username,
password) to create the credential object. See CREATE_CREDENTIAL Procedure for
more information.
• For the email provider, the user requires an SMTP connection endpoint for the
Email Delivery server to obtain smtp_host. The user also requires an SMTP
approved sender and its credentials to authenticate with the Email Delivery server
to obtain the credential_name. SMTP connection must be configured, and SMTP
credentials must be generated and approved.
• For the msteams provider, the user requires the Microsoft Teams App with a bot
configured in it. The app needs to be published to the org and installed after obtaining
approval from the admin in the admin center. The user also requires the
Files.ReadWrite.All and ChannelSettings.Read.All Graph API permissions from
their Azure Portal. To generate the required token, the user requires bot_id in the
username and bot_secret in the password. The maximum file size supported when
using DBMS_CLOUD_NOTIFICATION.SEND_DATA for Microsoft Teams is 4 MB.
• For the slack provider, the username value can be any valid string and the password is
the Slack bot token.
Example
Send SQL output with the email provider:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'EMAIL_CRED',
username => 'username',
password => 'password');
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_DATA(
provider => 'email',
credential_name => 'EMAIL_CRED',
query => 'SELECT tablespace_name FROM dba_tablespaces',
params => json_object(
'recipient' value 'mark@oracle.com, suresh@oracle.com',
'to_cc' value 'nicole@oracle.com, jordan@oracle.com',
'to_bcc' value 'manisha@oracle.com',
'subject' value 'Test subject',
'type' value 'json',
'title' value 'mytitle',
'message' value 'This is the message',
'smtp_host' value 'smtp.email.example.com',
'sender' value 'approver_sender@oracle.com')
);
END;
/
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(credential_name => 'TEAMS_CRED',
username => 'bot_id',
password => 'bot_secret');
END;
/
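Send SQL output with the msteams provider. The following is a minimal sketch; the
tenant, team, and channel ID values shown are placeholders that you replace with
your own values:
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_DATA(
provider => 'msteams',
credential_name => 'TEAMS_CRED',
query => 'SELECT tablespace_name FROM dba_tablespaces',
params => json_object('tenant' value '<tenant_id>',
'team' value '<team_id>',
'channel' value '<channel_id>',
'title' value 'mydata',
'type' value 'csv'));
END;
/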
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'SLACK_CRED',
username => 'SLACK_TOKEN',
password => 'password');
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_DATA(
provider => 'slack',
credential_name => 'SLACK_CRED',
query => 'SELECT username, account_status, expiry_date
FROM USER_USERS WHERE rownum < 5',
params => json_object('channel' value 'C0....08','type'
value 'csv'));
END;
/
SEND_MESSAGE Procedure
The procedure sends a text message to a provider.
Syntax
DBMS_CLOUD_NOTIFICATION.SEND_MESSAGE(
provider IN VARCHAR2,
credential_name IN VARCHAR2,
message IN CLOB,
params IN CLOB
);
Parameters
Parameter Description
provider Specifies the provider.
Valid values are: 'email', 'msteams', and 'slack'
This parameter is mandatory.
credential_name The name of the credential object to access the provider.
For the email provider, the credential is the name of the credential of the
SMTP approved sender that contains its username and password.
For the msteams provider, the credential must contain the bot_id in the
username and the bot_secret key in the password.
For the Slack provider, the credential's username must be "SLACK_TOKEN"
and the password is the Slack Bot Token.
This parameter is mandatory.
message Specifies the message text.
This parameter is mandatory.
params Specifies specific parameters for the provider in JSON format.
For the provider type email, the valid parameters are:
• sender. This specifies the Email ID of the approved sender in a String
value.
• smtp_host. This specifies the SMTP host name in a String value.
• subject. This specifies the subject of the email in a String value. The
maximum size is 100 characters.
• recipient. This specifies the email IDs of recipients in a String
value. Use a comma between email IDs when there are multiple
recipients.
• to_cc. This specifies the email IDs that are receiving a CC of the email.
Use a comma between email IDs when there are multiple CC recipients.
• to_bcc. This specifies the email IDs that are receiving a BCC of the
email. Use a comma between email IDs when there are multiple BCC
recipients.
For the provider type msteams, the valid parameter is:
• channel. This specifies the Channel ID in a String value.
For the provider type slack, the valid parameter is:
• channel. This specifies the Channel ID in a String value.
The Channel ID is a unique ID for a channel and is different from the
channel name. In Slack, when you view channel details you can find the
Channel ID on the “About” tab.
This parameter is mandatory.
Usage Notes
Use the procedure DBMS_CLOUD.CREATE_CREDENTIAL(credential_name, username, password)
to create the credential object. See CREATE_CREDENTIAL Procedure for more information.
• For the email provider, the user requires an SMTP connection endpoint for the Email
Delivery server to obtain smtp_host. The user also requires an SMTP approved sender
and its credentials to authenticate with the Email Delivery server to obtain the
credential_name. SMTP connection must be configured, and SMTP credentials must be
generated and approved.
• For the msteams provider, the user requires the Microsoft Teams App with a bot
configured in it. The app needs to be published to the org and installed after
obtaining approval from the admin in the admin center. The user also requires the
Files.ReadWrite.All and ChannelSettings.Read.All Graph API permissions from
their Azure Portal. To generate the required token, the user requires bot_id in the
username and bot_secret in the password.
• For the slack provider, the username value can be any valid string and the
password is the Slack bot token.
Examples
Send a text message with the email provider:
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'EMAIL_CRED',
username => 'username',
password => 'password');
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_MESSAGE(
provider => 'email',
credential_name => 'EMAIL_CRED',
message => 'Subject content',
params => json_object('recipient' value
'mark@oracle.com, suresh@oracle.com',
'to_cc' value
'nicole@oracle.com, jordan@oracle.com',
'to_bcc' value
'manisha@oracle.com',
'subject' value 'Test subject',
'smtp_host' value
'smtp.email.example.com',
'sender' value
'approver_sender@oracle.com' )
);
END;
/
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(credential_name => 'TEAMS_CRED',
username => 'bot_id',
password => 'bot_secret');
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_MESSAGE(
provider => 'msteams',
credential_name => 'TEAMS_CRED',
message => 'Send text from Autonomous Database.',
params => json_object('channel' value '<channel_id>'));
END;
/
BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'SLACK_CRED',
username => 'SLACK_TOKEN',
password => 'password');
END;
/
BEGIN
DBMS_CLOUD_NOTIFICATION.SEND_MESSAGE(
provider => 'slack',
credential_name => 'SLACK_CRED',
message => 'Send text from Autonomous Database.',
params => json_object('channel' value 'C0....08'));
END;
/
DBMS_DATA_ACCESS Package
The DBMS_DATA_ACCESS package provides routines to generate and manage Pre-
Authenticated Request (PAR) URLs for data sets.
• DBMS_DATA_ACCESS Overview
Describes the use of the DBMS_DATA_ACCESS package.
• DBMS_DATA_ACCESS Security Model
Security on this package can be controlled by granting EXECUTE on this package to
selected users or roles.
• Summary of DBMS_DATA_ACCESS Subprograms
This section covers the DBMS_DATA_ACCESS subprograms provided with Autonomous
Database.
DBMS_DATA_ACCESS Overview
Describes the use of the DBMS_DATA_ACCESS package.
Summary of DBMS_DATA_ACCESS Subprograms
This section covers the DBMS_DATA_ACCESS subprograms provided with Autonomous
Database.
Subprogram Description
GET_PREAUTHENTICATED_URL Procedure This procedure generates a PAR URL.
INVALIDATE_URL Procedure This procedure invalidates a PAR URL.
LIST_ACTIVE_URLS Function This function lists all the currently active PAR URLs.
• GET_PREAUTHENTICATED_URL Procedure
• INVALIDATE_URL Procedure
This procedure invalidates a PAR URL.
• LIST_ACTIVE_URLS Function
This function lists all the currently active PAR URLs.
GET_PREAUTHENTICATED_URL Procedure
This procedure generates a PAR URL.
There are two forms: one generates the PAR URL for a specific object (table or view);
the overloaded form, using the sql_statement parameter, generates a PAR URL for a
SQL statement.
Syntax
DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL(
schema_name IN VARCHAR2,
schema_object_name IN VARCHAR2,
application_user_id IN VARCHAR2,
expiration_minutes IN NUMBER,
expiration_count IN NUMBER,
result OUT CLOB);
DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL(
sql_statement IN CLOB,
application_user_id IN VARCHAR2,
expiration_minutes IN NUMBER,
expiration_count IN NUMBER,
result OUT CLOB);
Parameters
Parameter Description
schema_name Specifies the owner of the object.
schema_object_name Specifies the schema object (table or view).
sql_statement Specifies the SELECT statement query text. PAR URLs do not accept bind
variables.
application_user_id Specifies an application user ID value. When the PAR URL is accessed, the
value of application_user_id specified during PAR URL generation is
available through:
sys_context('DATA_ACCESS_CONTEXT$', 'USER_IDENTITY')
You can define VPD Policies that make use of this value in the Application
Context to restrict the rows visible to the application user.
expiration_minutes Duration in minutes of validity of PAR URL.
The maximum allowed expiration time is 90 days (129600 minutes). If the
value is set to greater than 129600, the value used is 129600 minutes (90
days).
If expiration_minutes is specified as a non-null value,
expiration_count must not be set to a non-null value. Both cannot be
non-null at the same time.
Default value: when expiration_minutes is not provided or when
expiration_minutes is provided as NULL, the value is set to 90 days
(129600 minutes).
expiration_count Number of accesses allowed on the PAR URL.
There is no default value.
If expiration_count is not specified and expiration_minutes is not
specified, expiration_minutes is set to 90 days (129600 minutes).
If expiration_count is specified as a non-null value,
expiration_minutes must not be set to a non-null value. Both cannot be
non-null at the same time.
result The generated PAR URL, returned as a CLOB.
Usage Note
• There is a limit of 128 active PAR URLs on an Autonomous Database instance.
Examples
DECLARE
status CLOB;
BEGIN
DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL(
schema_name => 'USER1',
schema_object_name => 'STUDENTS_VIEW',
expiration_minutes => 120,
result => status);
dbms_output.put_line(status);
END;
/
DECLARE
status CLOB;
par_url_app_string CLOB;
BEGIN
par_url_app_string := '1919292929';
DBMS_DATA_ACCESS.GET_PREAUTHENTICATED_URL(
sql_statement => 'SELECT student_id, student_name FROM STUDENTS_VIEW ORDER BY student_id',
application_user_id => par_url_app_string,
expiration_count => 25,
result => status);
END;
/
INVALIDATE_URL Procedure
This procedure invalidates a PAR URL.
Syntax
DBMS_DATA_ACCESS.INVALIDATE_URL(
id IN VARCHAR2,
kill_sessions IN BOOLEAN DEFAULT FALSE,
result OUT CLOB);
Parameters
Parameter Description
id Specifies the ID of the PAR URL to invalidate.
kill_sessions By default, existing sessions that may be in the middle of accessing
data using a PAR URL are not killed. When TRUE, this parameter
specifies that such existing sessions should be killed, so that the
invalidation does not leave any ongoing access to the data set.
Valid values: TRUE | FALSE.
result Provides JSON to indicate whether invalidation is a success or a failure
(CLOB).
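Example
A minimal sketch that invalidates a PAR URL and kills any sessions using it. The id
value is illustrative, reusing the sample ID from the LIST_ACTIVE_URLS output that
follows:
DECLARE
result CLOB;
BEGIN
DBMS_DATA_ACCESS.INVALIDATE_URL(
id => '89fa6081-ec6b-4179-9b06-a93af8fbd4b7',
kill_sessions => TRUE,
result => result);
DBMS_OUTPUT.PUT_LINE(result);
END;
/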
LIST_ACTIVE_URLS Function
This function lists all the currently active PAR URLs.
Syntax
DBMS_DATA_ACCESS.LIST_ACTIVE_URLS
RETURN CLOB;
Return Values
Return Description
RETURN The return value is a JSON array.
Example
DECLARE
result CLOB;
BEGIN
result := DBMS_DATA_ACCESS.LIST_ACTIVE_URLS;
DBMS_OUTPUT.PUT_LINE(result);
END;
/
The following is sample output:
[
{
"id": "89fa6081-ec6b-4179-9b06-a93af8fbd4b7",
"schema_name": "SCOTT",
"schema_object_name": "EMPLOYEE",
"created_by": "ADMIN",
"application_user_id": "AMIT",
"expiration_time": "2023-01-14T23:41:01.029Z",
"expiration_count": 100,
"access_count": 9,
"created": "2023-01-10T19:41:01.285Z"
},
{
"id": "263d2cd7-3bc0-41a7-8cb9-438a2d843481",
"sql_statement": "select name from v$pdbs",
"created_by": "ADMIN",
"application_user_id": "AMIT",
"expiration_time": "2023-01-15T00:04:30.578Z",
"expiration_count": 100,
"access_count": 0,
"created": "2023-01-10T20:04:30.607Z"
}
]
Usage Note
• The behavior of DBMS_DATA_ACCESS.LIST_ACTIVE_URLS is dependent on the invoker. If the
invoker is ADMIN or any user with PDB_DBA role, the function lists all active PAR URLs,
regardless of the user who generated the PAR URL. If the invoker is not the ADMIN user
and not a user with PDB_DBA role, the list includes only the active PAR URLs generated by
the invoker.
DBMS_PIPE Package
The DBMS_PIPE package lets two or more sessions in the same instance communicate.
See DBMS_PIPE for details about the core DBMS_PIPE functionality provided in Oracle
Database.
• DBMS_PIPE Overview for Singleton Pipes
Pipe functionality has several potential applications: external service interface,
debugging, independent transactions, and alerts.
• Summary of DBMS_PIPE Subprograms for Singleton Pipes
This table lists the DBMS_PIPE subprograms and briefly describes them.
• DBMS_PIPE Overview for Persistent Messaging Pipes
Pipe functionality has several potential applications: external service interface,
debugging, independent transactions, and alerts.
• Summary of DBMS_PIPE Subprograms for Persistent Messaging
This table lists the DBMS_PIPE subprograms and briefly describes them.
Summary of DBMS_PIPE Subprograms for Singleton Pipes
This table lists the DBMS_PIPE subprograms and briefly describes them.
Subprogram Description
CREATE_PIPE Function Creates a pipe (necessary for private pipes)
NEXT_ITEM_TYPE Function Returns datatype of next item in buffer
PACK_MESSAGE Procedures Builds message in local buffer
PURGE Procedure Purges contents of named pipe
RECEIVE_MESSAGE Function Copies message from named pipe into local buffer
RESET_BUFFER Procedure Purges contents of local buffer
REMOVE_PIPE Function Removes the named pipe
SEND_MESSAGE Function Sends message on named pipe: This implicitly creates a
public pipe if the named pipe does not exist
UNIQUE_SESSION_NAME Function Returns unique session name
UNPACK_MESSAGE Procedures Accesses next item in buffer
• CREATE_PIPE Function
This function explicitly creates a public or private pipe. If the private flag is TRUE, then
the pipe creator is assigned as the owner of the private pipe.
• RECEIVE_MESSAGE Function
This function copies the message into the local message buffer.
• SEND_MESSAGE Function
This function sends a message on the named pipe.
CREATE_PIPE Function
This function explicitly creates a public or private pipe. If the private flag is TRUE, then the
pipe creator is assigned as the owner of the private pipe.
Explicitly-created pipes can only be removed by calling REMOVE_PIPE, or by shutting down the
instance.
In order to create a Singleton Pipe, set the singleton parameter to TRUE. The following
arguments are applicable to Singleton Pipes:
• singleton: Indicates that the pipe should be created as a Singleton Pipe (default value:
FALSE).
• shelflife: Optionally specify a shelflife expiration (in seconds) of the cached message in
the Singleton Pipe. It can be used for implicit invalidation of the message in a Singleton Pipe.
The message shelflife in a Singleton Pipe can also be specified when you send a
message (see SEND_MESSAGE Function).
Syntax
DBMS_PIPE.CREATE_PIPE (
pipename IN VARCHAR2,
maxpipesize IN INTEGER DEFAULT 65536,
private IN BOOLEAN DEFAULT TRUE,
singleton IN BOOLEAN DEFAULT FALSE,
shelflife IN INTEGER DEFAULT 0)
RETURN INTEGER;
Parameters
Parameter Description
pipename Name of the pipe you are creating.
You must use this name when you call SEND_MESSAGE and
RECEIVE_MESSAGE. This name must be unique across the
instance.
Caution: Do not use pipe names beginning with ORA$. These are
reserved for use by procedures provided by Oracle. Pipename
should not be longer than 128 bytes, and is case insensitive. At
this time, the name cannot contain Globalization Support
characters.
maxpipesize The maximum size allowed for the pipe, in bytes.
The total size of all of the messages on the pipe cannot exceed
this amount. The message is blocked if it exceeds this maximum.
The default maxpipesize is 65536 bytes.
The maxpipesize for a pipe becomes a part of the
characteristics of the pipe and persists for the life of the pipe.
Callers of SEND_MESSAGE with larger values cause the
maxpipesize to be increased. Callers with a smaller value use
the existing, larger value.
The default maxpipesize of 65536 is applicable for all Pipes.
private Uses the default, TRUE, to create a private pipe.
Public pipes can be implicitly created when you call
SEND_MESSAGE.
singleton Use TRUE to create a Singleton Pipe.
Default value: FALSE
shelflife Expiration time in seconds of a message cached in Singleton
Pipe. After the specified shelflife time is exceeded, the
message is no longer accessible from the Pipe. The parameter
shelflife is only applicable to a Singleton Pipe.
Default value is 0, implying that the message never expires.
Return Values
Return Description
0 Successful.
If the pipe already exists and the user attempting to create it is
authorized to use it, then Oracle returns 0, indicating success,
and any data already in the pipe remains.
Return Description
6 Failed to convert existing pipe to singleton pipe.
• Implicit pipe with more than one existing message cannot be
converted to a Singleton Pipe.
• For an explicit pipe that is not Singleton,
DBMS_PIPE.SEND_MESSAGE cannot send a message with
singleton argument set to TRUE.
7 A non-zero value is given for the shelflife parameter and the
pipe is not a singleton pipe.
ORA-23322 Failure due to naming conflict.
If a pipe with the same name exists and was created by a
different user, then Oracle signals error ORA-23322, indicating
the naming conflict.
Exceptions
Exception Description
Null pipe name Permission error: Pipe with the same name already exists, and you are
not allowed to use it.
Example
Create a Singleton Pipe with shelflife of 1 hour.
DECLARE
l_status INTEGER;
BEGIN
l_status := DBMS_PIPE.create_pipe(pipename => 'MY_PIPE1',
private => TRUE,
singleton => TRUE,
shelflife => 3600);
END;
/
RECEIVE_MESSAGE Function
This function copies the message into the local message buffer.
Syntax
DBMS_PIPE.RECEIVE_MESSAGE (
pipename IN VARCHAR2,
timeout IN INTEGER DEFAULT maxwait,
cache_func IN VARCHAR2 DEFAULT NULL)
RETURN INTEGER;
Parameters
Parameter Description
pipename Name of the pipe on which you want to receive a message.
Names beginning with ORA$ are reserved for use by Oracle.
timeout Time to wait for a message, in seconds. A timeout of 0 lets you
read without blocking.
The timeout does not include the time spent in execution cache
function specified in the cache_func parameter.
Default value: the constant MAXWAIT, which is defined as
86400000 (1000 days).
cache_func Cache function name to automatically cache a message in a
Singleton Pipe.
The name of the function should be fully qualified with the owner
schema:
• OWNER.FUNCTION_NAME
• OWNER.PACKAGE.FUNCTION_NAME
Default value: NULL
Return Values
Return Description
0 Success
1 Timed out. If the pipe was implicitly-created and is empty, then it
is removed.
2 Record in the pipe is too large for the buffer.
3 An interrupt occurred.
8 Cache function can only be specified when using a Singleton
Pipe.
ORA-23322 User has insufficient privileges to read from the pipe.
Usage Notes
To receive a message from a pipe, first call RECEIVE_MESSAGE. When you receive a
message, it is removed from the pipe; hence, a message can only be received once.
For implicitly-created pipes, the pipe is removed after the last record is removed from
the pipe.
If the pipe that you specify when you call RECEIVE_MESSAGE does not already exist,
then Oracle implicitly creates the pipe and waits to receive the message. If the
message does not arrive within a designated timeout interval, then the call returns and
the pipe is removed.
After receiving the message, you must make one or more calls to UNPACK_MESSAGE to
access the individual items in the message. The UNPACK_MESSAGE procedure is
overloaded to unpack items of type DATE, NUMBER, VARCHAR2, and there are two additional
procedures to unpack RAW and ROWID items. If you do not know the type of data that you are
attempting to unpack, then call NEXT_ITEM_TYPE to determine the type of the next item in the
buffer.
Cache Function Parameter
Singleton Pipes support a cache function to automatically cache a message in the pipe in
either of the following two scenarios:
• Singleton Pipe is empty.
• Message in Singleton Pipe is invalid due to shelflife time elapsed.
The name of the function should be fully qualified with the owner schema:
• OWNER.FUNCTION_NAME
• OWNER.PACKAGE.FUNCTION_NAME
To use a cache function, the current session user that invokes DBMS_PIPE.RECEIVE_MESSAGE
must have the required privileges to execute the cache function.
Cache Function Syntax
FUNCTION cache_function_name RETURN INTEGER;
Return Description
0 Success
Non-zero Failure value returned from
DBMS_PIPE.RECEIVE_MESSAGE
Define a cache function to provide encapsulation and abstraction of complexity from the
reader sessions of a Singleton Pipe. The typical operations within a cache function are (see
the sketch after this list):
• Create a Singleton Pipe, for an Explicit Pipe, using DBMS_PIPE.CREATE_PIPE.
• Create the message to cache in the Singleton Pipe.
• Send Message to Singleton Pipe, optionally specifying a shelflife for the implicit
message.
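A minimal sketch of such a cache function, assuming the pipe name MY_PIPE1 and an
illustrative message value:
CREATE OR REPLACE FUNCTION my_cache_func RETURN INTEGER AS
l_status INTEGER;
BEGIN
-- Create the Singleton Pipe; returns 0 if it already exists and is accessible
l_status := DBMS_PIPE.CREATE_PIPE(pipename => 'MY_PIPE1',
private => TRUE,
singleton => TRUE);
IF l_status != 0 THEN
RETURN l_status;
END IF;
-- Create the message to cache
DBMS_PIPE.PACK_MESSAGE('Cached value');
-- Send it to the Singleton Pipe with a one-hour shelflife
RETURN DBMS_PIPE.SEND_MESSAGE(pipename => 'MY_PIPE1',
shelflife => 3600);
END;
/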
Exceptions
Exception Description
Null pipe name Permission error. Insufficient privilege to remove the record from the
pipe. The pipe is owned by someone else.
Example
DECLARE
l_status INTEGER;
BEGIN
l_status := DBMS_PIPE.receive_message(pipename => 'MY_PIPE1',
timeout => 1,
cache_func => 'MY_USER.MY_CACHE_FUNC');
END;
/
SEND_MESSAGE Function
This function sends a message on the named pipe.
The message is contained in the local message buffer, which was filled with calls to
PACK_MESSAGE. You can create a pipe explicitly using CREATE_PIPE; otherwise, it is
created implicitly.
To create an implicit Singleton pipe, set the singleton parameter to TRUE. The
following arguments are applicable to Singleton Pipes:
• singleton: Indicates that the pipe should be created as a Singleton Pipe (default
value: FALSE).
• shelflife: Optionally specify a shelflife expiration (in seconds) of the cached
message in the Singleton Pipe. It can be used for implicit invalidation of the
message in a Singleton Pipe.
This argument is applicable for implicit as well as explicit Singleton Pipes. A
shelflife value specified in the SEND_MESSAGE function overrides the shelflife
specified for an explicit Singleton Pipe in the CREATE_PIPE function and becomes
the default for any new messages cached in the Singleton Pipe.
Syntax
DBMS_PIPE.SEND_MESSAGE (
pipename IN VARCHAR2,
timeout IN INTEGER DEFAULT MAXWAIT,
maxpipesize IN INTEGER DEFAULT 65536,
singleton IN BOOLEAN DEFAULT FALSE,
shelflife IN INTEGER DEFAULT 0)
RETURN INTEGER;
Parameters
Parameter Description
pipename Name of the pipe on which you want to place the message.
If you are using an explicit pipe, then this is the name that you
specified when you called CREATE_PIPE.
Caution: Do not use pipe names beginning with 'ORA$'. These names
are reserved for use by procedures provided by Oracle. Pipename
should not be longer than 128 bytes, and is case-insensitive. At this
time, the name cannot contain Globalization Support characters.
timeout Time to wait while attempting to place a message on a pipe, in
seconds.
The default value is the constant MAXWAIT, which is defined as
86400000 (1000 days).
maxpipesize Maximum size allowed for the pipe, in bytes.
The total size of all the messages on the pipe cannot exceed this
amount. The message is blocked if it exceeds this maximum. The
default is 65536 bytes.
The maxpipesize for a pipe becomes a part of the characteristics of
the pipe and persists for the life of the pipe. Callers of SEND_MESSAGE
with larger values cause the maxpipesize to be increased. Callers
with a smaller value simply use the existing, larger value.
Specifying maxpipesize as part of the SEND_MESSAGE procedure
eliminates the need for a separate call to open the pipe. If you created
the pipe explicitly, then you can use the optional maxpipesize
parameter to override the creation pipe size specifications.
The default maxpipesize of 65536 is applicable for all Pipes.
singleton Use TRUE to create a Singleton Pipe.
Default value: FALSE
shelflife Expiration time in seconds of a message cached in Singleton Pipe.
After the specified shelflife time is exceeded, the message is no
longer accessible from the Pipe. The parameter shelflife is only
applicable to a Singleton Pipe.
Default value is 0, implying that the message never expires.
Return Values
Return Description
0 Success.
If the pipe already exists and the user attempting to create it is authorized
to use it, then Oracle returns 0, indicating success, and any data already
in the pipe remains.
If a user connected as SYSDBA/SYSOPER re-creates a pipe, then Oracle
returns status 0, but the ownership of the pipe remains unchanged.
Return Description
1 Timed out.
This procedure can timeout either because it cannot get a lock on the
pipe, or because the pipe remains too full to be used. If the pipe was
implicitly-created and is empty, then it is removed.
3 An interrupt occurred.
If the pipe was implicitly created and is empty, then it is removed.
6 Failed to convert existing pipe to singleton pipe.
• Implicit pipe with more than one existing message cannot be
converted to a Singleton Pipe.
• For an explicit pipe that is not Singleton, DBMS_PIPE.SEND_MESSAGE
cannot send a message with singleton argument set to TRUE.
7 A non-zero value is given for the shelflife parameter and the pipe is
not a singleton pipe.
ORA-23322 Insufficient privileges.
If a pipe with the same name exists and was created by a different user,
then Oracle signals error ORA-23322, indicating the naming conflict.
Exceptions
Exception Description
Null pipe name Permission error. Insufficient privilege to write to the pipe. The
pipe is private and owned by someone else.
DBMS_PIPE Overview for Persistent Messaging Pipes
A Persistent Messaging Pipe can be any one of the supported DBMS_PIPE types:
– Implicit Pipe: Automatically created when a message is sent with an unknown pipe
name using the DBMS_PIPE.SEND_MESSAGE function.
– Explicit Pipe: Created using the DBMS_PIPE.CREATE_PIPE function with a user
specified pipe name.
– Public Pipe: Accessible by any user with EXECUTE permission on DBMS_PIPE
package.
– Private Pipe: Accessible by sessions with the same user as the pipe creator.
Note:
When sending and receiving messages across different databases using persistent
messages, Oracle recommends that you call DBMS_PIPE.CREATE_PIPE before sending or
receiving messages. Creating an explicit pipe with DBMS_PIPE.CREATE_PIPE ensures
that the pipe is created with the access permissions you want, either public or private
(by setting the PRIVATE parameter to FALSE or using the default value TRUE).
Summary of DBMS_PIPE Subprograms for Persistent Messaging
This table lists the DBMS_PIPE subprograms and briefly describes them.
Subprogram Description
CREATE_PIPE Function Creates a pipe (necessary for private pipes).
GET_CREDENTIAL_NAME Function Returns the global credential_name variable value.
GET_LOCATION_URI Function Returns the global location_uri variable value that is used
as a default location URI for use when a message is stored in
Cloud Object Store.
NEXT_ITEM_TYPE Function Returns datatype of next item in buffer.
PACK_MESSAGE Procedures Builds message in local buffer.
RECEIVE_MESSAGE Function Copies message from named pipe into local buffer.
RESET_BUFFER Procedure Purges contents of local buffer.
REMOVE_PIPE Function Removes the named pipe.
SEND_MESSAGE Function Sends message on a named pipe: This implicitly creates a
public pipe if the named pipe does not exist.
SET_CREDENTIAL_NAME Procedure Sets the credential_name variable that is used as a default
credential for messages that are stored in Cloud Object Store.
SET_LOCATION_URI Procedure Sets the global location_uri variable that is used as a
default location URI for messages that are stored in Cloud
Object Store.
UNIQUE_SESSION_NAME Function Returns unique session name.
UNPACK_MESSAGE Procedures Accesses next item in buffer.
• CREATE_PIPE Function
This function explicitly creates a public or private pipe. If the private flag is TRUE,
then the pipe creator is assigned as the owner of the private pipe.
• GET_CREDENTIAL_NAME Function
This function returns the global credential_name variable value for use when
messages are stored to Cloud Object Store.
• GET_LOCATION_URI Function
This function returns the global location_uri variable value that can be used as a
default location URI when pipe messages are stored to Cloud Object Store.
• RECEIVE_MESSAGE Function
This function copies the message into the local message buffer.
• SEND_MESSAGE Function
This function sends a message on the named pipe.
• SET_CREDENTIAL_NAME Procedure
This procedure sets the credential_name variable that is used as a default
credential when pipe messages are stored in Cloud Object Store.
• SET_LOCATION_URI Procedure
This procedure sets the global location_uri variable.
CREATE_PIPE Function
This function explicitly creates a public or private pipe. If the private flag is TRUE, then
the pipe creator is assigned as the owner of the private pipe.
Explicitly-created pipes can only be removed by calling REMOVE_PIPE, or by shutting
down the instance.
Syntax
DBMS_PIPE.CREATE_PIPE (
pipename IN VARCHAR2,
maxpipesize IN INTEGER DEFAULT 65536,
private IN BOOLEAN DEFAULT TRUE)
RETURN INTEGER;
Parameters
Parameter Description
pipename Name of the pipe you are creating.
You must use this name when you call SEND_MESSAGE and
RECEIVE_MESSAGE. This name must be unique across the
instance.
Caution: Do not use pipe names beginning with ORA$. These are
reserved for use by procedures provided by Oracle. Pipename
should not be longer than 128 bytes, and is case insensitive. At
this time, the name cannot contain Globalization Support
characters.
Parameter Description
maxpipesize The maximum size allowed for the pipe, in bytes.
The total size of all of the messages on the pipe cannot exceed
this amount. The message is blocked if it exceeds this maximum.
The default maxpipesize is 65536 bytes.
The maxpipesize for a pipe becomes a part of the
characteristics of the pipe and persists for the life of the pipe.
Callers of SEND_MESSAGE with larger values cause the
maxpipesize to be increased. Callers with a smaller value use
the existing, larger value.
The default maxpipesize of 65536 is applicable for all Pipes.
private Uses the default, TRUE, to create a private pipe.
Public pipes can be implicitly created when you call
SEND_MESSAGE.
Return Values
Return Description
0 Successful.
If the pipe already exists and the user attempting to create it is
authorized to use it, then Oracle returns 0, indicating success, and any
data already in the pipe remains.
ORA-23322 Failure due to naming conflict.
If a pipe with the same name exists and was created by a different
user, then Oracle signals error ORA-23322, indicating the naming
conflict.
Exceptions
Exception Description
Null pipe name Permission error: Pipe with the same name already exists, and you are
not allowed to use it.
Example
Create an explicit private pipe named MY_PIPE1:
DECLARE
l_status INTEGER;
BEGIN
l_status := DBMS_PIPE.create_pipe(
pipename => 'MY_PIPE1',
private => TRUE);
END;
/
GET_CREDENTIAL_NAME Function
This function returns the global credential_name variable value for use when
messages are stored to Cloud Object Store.
Syntax
DBMS_PIPE.GET_CREDENTIAL_NAME
RETURN VARCHAR2;
Return Values
The value of the global credential_name variable (VARCHAR2).
Example
DECLARE
credential_name VARCHAR2(400);
BEGIN
credential_name := DBMS_PIPE.GET_CREDENTIAL_NAME;
END;
/
GET_LOCATION_URI Function
This function returns the global location_uri variable value that can be used as a
default location URI when pipe messages are stored to Cloud Object Store.
Syntax
DBMS_PIPE.GET_LOCATION_URI
RETURN VARCHAR2;
Return Value
The value of the global location_uri variable (VARCHAR2).
Example
DECLARE
location_uri VARCHAR2(400);
BEGIN
location_uri := DBMS_PIPE.GET_LOCATION_URI;
END;
/
RECEIVE_MESSAGE Function
This function copies the message into the local message buffer.
Syntax
DBMS_PIPE.RECEIVE_MESSAGE (
pipename IN VARCHAR2,
timeout IN INTEGER DEFAULT maxwait,
credential_name IN VARCHAR2 DEFAULT null,
location_uri IN VARCHAR2)
RETURN INTEGER;
Parameters
Parameter Description
pipename Name of the pipe on which you want to receive a message.
Names beginning with ORA$ are reserved for use by Oracle.
timeout Time to wait for a message, in seconds. A timeout of 0 lets you read
without blocking.
The timeout does not include the time spent running the cache
function specified with the cache_func parameter.
Default value: the constant MAXWAIT, which is defined as 86400000
(1000 days).
credential_name The credential name for the cloud store used to store messages.
The credential_name is a package-level variable that is by default
initialized as NULL.
You can set this value before calling DBMS_PIPE.RECEIVE_MESSAGE.
The passed parameter value takes precedence over the global
variable's value.
The user running DBMS_PIPE.RECEIVE_MESSAGE must have EXECUTE and
READ/WRITE privileges on the credential object.
The credential_name value can be an OCI resource principal, Azure
service principal, Amazon Resource Name (ARN), or a Google service
account. See Configure Policies and Roles to Access Resources for
more information on resource principal based authentication.
location_uri The location URI for the cloud store used to store messages.
The location_uri is a global variable that by default is initialized as
NULL.
You can set this value before calling DBMS_PIPE.RECEIVE_MESSAGE.
The passed parameter value takes precedence over the global
variable's value.
Return Values
Return Description
0 Success
1 Timed out. If the pipe was implicitly-created and is empty, then it
is removed.
2 Record in the pipe is too large for the buffer.
3 An interrupt occurred.
ORA-23322 User has insufficient privileges to read from the pipe.
Usage Notes
• To receive a message from a pipe, first call RECEIVE_MESSAGE. When you receive a
message, it is removed from the pipe; hence, a message can only be received
once. For implicitly-created pipes, the pipe is removed after the last record is
removed from the pipe.
• If the pipe that you specify when you call RECEIVE_MESSAGE does not already exist,
then Oracle implicitly creates the pipe and waits to receive the message. If the
message does not arrive within a designated timeout interval, then the call returns
and the pipe is removed.
• After receiving the message, you must make one or more calls to UNPACK_MESSAGE
to access the individual items in the message. The UNPACK_MESSAGE procedure is
overloaded to unpack items of type DATE, NUMBER, VARCHAR2, and there are two
additional procedures to unpack RAW and ROWID items. If you do not know the type
of data that you are attempting to unpack, then call NEXT_ITEM_TYPE to determine
the type of the next item in the buffer.
• Persistent messages are guaranteed to either be written or read by exactly one
process. This prevents message content inconsistency due to concurrent writes
and reads. Using a persistent messaging pipe, DBMS_PIPE allows only one
operation, sending a message or receiving a message, to be active at a given time.
However, if an operation is not possible due to an ongoing operation, the process
retries periodically until the timeout value is reached.
• If you use Oracle Cloud Infrastructure Object Storage to store messages, you can
use Oracle Cloud Infrastructure Native URIs or Swift URIs. However, the location
URI and the credential must match in type as follows:
– If you use a native URI format to access Oracle Cloud Infrastructure Object
Storage, you must use Native Oracle Cloud Infrastructure Signing Keys
authentication in the credential object.
– If you use Swift URI format to access Oracle Cloud Infrastructure Object
Storage, you must use an auth token authentication in the credential object.
Exceptions
Exception Description
Null pipe name Permission error. Insufficient privilege to remove the record from the
pipe. The pipe is owned by someone else.
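Example
A minimal sketch of receiving a persistent message, reusing the illustrative credential
name and location URI from the examples later in this section:
DECLARE
l_status INTEGER;
l_text VARCHAR2(4000);
BEGIN
l_status := DBMS_PIPE.RECEIVE_MESSAGE(
pipename => 'MY_PIPE1',
timeout => 60,
credential_name => 'my_cred1',
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/');
IF l_status = 0 THEN
DBMS_PIPE.UNPACK_MESSAGE(l_text);
DBMS_OUTPUT.PUT_LINE(l_text);
END IF;
END;
/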
SEND_MESSAGE Function
This function sends a message on the named pipe.
The message is contained in the local message buffer, which was filled with calls to
PACK_MESSAGE. You can create a pipe explicitly using CREATE_PIPE; otherwise, it is created
implicitly.
Syntax
DBMS_PIPE.SEND_MESSAGE (
pipename IN VARCHAR2,
timeout IN INTEGER DEFAULT MAXWAIT,
maxpipesize IN INTEGER DEFAULT 65536,
credential_name IN VARCHAR2 DEFAULT null,
location_uri IN VARCHAR2 )
RETURN INTEGER;
Parameters
Parameter Description
credential_name The credential name for the cloud store used to store messages.
The credential_name is a package-level variable that is by default
initialized as NULL.
You can set this value before calling DBMS_PIPE.SEND_MESSAGE. The
passed parameter value takes precedence over the global variable's
value.
The user running DBMS_PIPE.SEND_MESSAGE must have EXECUTE and
READ/WRITE privileges on the credential object.
The credential_name value can be an OCI resource principal, Azure
service principal, Amazon Resource Name (ARN), or a Google service
account. See Configure Policies and Roles to Access Resources for
more information on resource principal based authentication.
location_uri The location URI for the cloud store used to store messages.
The location_uri is a global variable that by default is initialized as
NULL.
You can set this value before calling DBMS_PIPE.SEND_MESSAGE. The
passed parameter value takes precedence over the global variable's
value.
Parameter Description
maxpipesize Maximum size allowed for the pipe, in bytes.
The total size of all the messages on the pipe cannot exceed this
amount. The message is blocked if it exceeds this maximum. The
default is 65536 bytes.
The maxpipesize for a pipe becomes a part of the characteristics of
the pipe and persists for the life of the pipe. Callers of SEND_MESSAGE
with larger values cause the maxpipesize to be increased. Callers
with a smaller value simply use the existing, larger value.
Specifying maxpipesize as part of the SEND_MESSAGE procedure
eliminates the need for a separate call to open the pipe. If you created
the pipe explicitly, then you can use the optional maxpipesize
parameter to override the creation pipe size specifications.
The default maxpipesize of 65536 is applicable for all Pipes.
pipename Name of the pipe on which you want to place the message.
If you are using an explicit pipe, then this is the name that you
specified when you called CREATE_PIPE.
Caution: Do not use pipe names beginning with 'ORA$'. These names
are reserved for use by procedures provided by Oracle. Pipename
should not be longer than 128 bytes, and is case-insensitive. At this
time, the name cannot contain Globalization Support characters.
timeout Time to wait while attempting to place a message on a pipe, in
seconds.
The default value is the constant MAXWAIT, which is defined as
86400000 (1000 days).
Return Values
Return Description
0 Success.
If the pipe already exists and the user attempting to create it is
authorized to use it, then Oracle returns 0, indicating success, and
any data already in the pipe remains.
If a user connected as SYSDBA/SYSOPER re-creates a pipe, then
Oracle returns status 0, but the ownership of the pipe remains
unchanged.
1 Timed out.
This procedure can timeout either because it cannot get a lock on
the pipe, or because the pipe remains too full to be used. If the pipe
was implicitly-created and is empty, then it is removed.
3 An interrupt occurred.
If the pipe was implicitly created and is empty, then it is removed.
Return Description
ORA-23322 Insufficient privileges.
If a pipe with the same name exists and was created by a different
user, then Oracle signals error ORA-23322, indicating the naming
conflict.
Usage Notes
• Persistent messages are guaranteed to either be written or read by exactly one process.
This prevents message content inconsistency due to concurrent writes and reads. Using
a persistent messaging pipe, DBMS_PIPE allows only one operation, sending a message
or receiving a message, to be active at a given time. However, if an operation is not
possible due to an ongoing operation, the process retries periodically until the timeout
value is reached.
• If you use Oracle Cloud Infrastructure Object Storage to store messages, you can use
Oracle Cloud Infrastructure Native URIs or Swift URIs. However, the location URI and the
credential must match in type as follows:
– If you use a native URI format to access Oracle Cloud Infrastructure Object Storage,
you must use Native Oracle Cloud Infrastructure Signing Keys authentication in the
credential object.
– If you use Swift URI format to access Oracle Cloud Infrastructure Object Storage, you
must use an auth token authentication in the credential object.
Exceptions
Exception Description
Null pipe name Permission error. Insufficient privilege to write to the pipe. The pipe is
private and owned by someone else.
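Example
A minimal sketch that packs a message and sends it using an object store location,
reusing the illustrative credential name and location URI from this section:
DECLARE
l_status INTEGER;
BEGIN
DBMS_PIPE.PACK_MESSAGE('This is a persistent message');
l_status := DBMS_PIPE.SEND_MESSAGE(
pipename => 'MY_PIPE1',
credential_name => 'my_cred1',
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace-string/b/bucketname1/');
END;
/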
SET_CREDENTIAL_NAME Procedure
This procedure sets the credential_name variable that is used as a default credential when
pipe messages are stored in Cloud Object Store.
Syntax
DBMS_PIPE.SET_CREDENTIAL_NAME (
credential_name IN VARCHAR2 );
Parameters
Parameter Description
credential_name The name of the credential to access the Cloud Object Storage.
The credential_name value can be an OCI resource principal,
Azure service principal, Amazon Resource Name (ARN), or a
Google service account. See Configure Policies and Roles to
Access Resources for more information on resource principal
based authentication.
Usage Note
If you use Oracle Cloud Infrastructure Object Storage to store messages, you can use
Oracle Cloud Infrastructure Native URIs or Swift URIs. However, the location URI and
the credential must match in type as follows:
• If you use a native URI format to access Oracle Cloud Infrastructure Object
Storage, you must use Native Oracle Cloud Infrastructure Signing Keys
authentication in the credential object.
• If you use Swift URI format to access Oracle Cloud Infrastructure Object Storage,
you must use an auth token authentication in the credential object.
Example
BEGIN
DBMS_PIPE.SET_CREDENTIAL_NAME(
credential_name => 'my_cred1');
END;
/
SET_LOCATION_URI Procedure
This procedure sets the global location_uri variable.
Syntax
DBMS_PIPE.SET_LOCATION_URI (
location_uri IN VARCHAR2 );
Parameter
Parameter Description
location_uri Object or file URI. The format of the URI depends on the Cloud
Object Storage service you are using, for details see
DBMS_CLOUD URI Formats.
Usage Note
If you use Oracle Cloud Infrastructure Object Storage to store messages, you can use
Oracle Cloud Infrastructure Native URIs or Swift URIs. However, the location URI and
the credential must match in type as follows:
• If you use a native URI format to access Oracle Cloud Infrastructure Object Storage, you
must use Native Oracle Cloud Infrastructure Signing Keys authentication in the credential
object.
• If you use Swift URI format to access Oracle Cloud Infrastructure Object Storage, you
must use an auth token authentication in the credential object.
Example
BEGIN
DBMS_PIPE.SET_LOCATION_URI(
location_uri => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/
namespace-string/b/bucketname1/');
END;
/
DBMS_CLOUD_PIPELINE Package
The DBMS_CLOUD_PIPELINE package allows you to create data pipelines for loading and
exporting data in the cloud. This package supports continuous incremental data load of files
in object store into the database. DBMS_CLOUD_PIPELINE also supports continuous incremental
export of table data or query results from the database to object store based on a timestamp
column.
• Summary of DBMS_CLOUD_PIPELINE Subprograms
This table summarizes the subprograms included in the DBMS_CLOUD_PIPELINE package.
• DBMS_CLOUD_PIPELINE Attributes
Attributes help to control and configure the behavior of a data pipeline.
• DBMS_CLOUD_PIPELINE Views
The DBMS_CLOUD_PIPELINE package uses the following views.
Summary of DBMS_CLOUD_PIPELINE Subprograms
This table summarizes the subprograms included in the DBMS_CLOUD_PIPELINE package.
Subprogram Description
CREATE_PIPELINE Procedure Creates a new data pipeline.
DROP_PIPELINE Procedure Drops an existing data pipeline.
RESET_PIPELINE Procedure Resets the tracking state of a data pipeline. Use reset pipeline to
restart the pipeline from the initial state of data load or export.
Optionally reset pipeline can purge data in the database or in
object store, depending on the type of pipeline.
RUN_PIPELINE_ONCE Procedure Performs an on-demand run of the pipeline in the current
foreground session, instead of a scheduled job.
SET_ATTRIBUTE Procedure Sets pipeline attributes. There are two overloaded procedures,
one to set a single attribute and another to set multiple attributes
using a JSON document of attribute name/value pairs.
START_PIPELINE Procedure Starts the data pipeline. When a pipeline is started, the pipeline
operation will continuously run in a scheduled job according to the
"interval" configured in pipeline attributes.
Subprogram Description
STOP_PIPELINE Procedure Stops the data pipeline. When a pipeline is stopped, no future jobs
are scheduled for the pipeline.
• CREATE_PIPELINE Procedure
The procedure creates a new data pipeline.
• DROP_PIPELINE Procedure
The procedure drops an existing data pipeline. If a pipeline has been started, then
it must be stopped before it can be dropped.
• RESET_PIPELINE Procedure
Resets the tracking state of a data pipeline. Use reset pipeline to restart the
pipeline from the initial state of data load or export. Optionally reset pipeline can
purge data in the database or in object store, depending on the type of pipeline. A
data pipeline must be in stopped state to reset it.
• RUN_PIPELINE_ONCE Procedure
This procedure performs an on-demand run of the pipeline in the current
foreground session, instead of running in a scheduled job. Use
DBMS_CLOUD_PIPELINE.RUN_PIPELINE_ONCE to test a pipeline before you start the
pipeline as a continuous job.
• SET_ATTRIBUTE Procedure
This procedure sets pipeline attributes. There are two overloaded procedures, one
to set a single attribute and another to set multiple attributes using a JSON
document of attribute name/value pairs.
• START_PIPELINE Procedure
• STOP_PIPELINE Procedure
The procedure stops the data pipeline. When a pipeline is stopped, no future jobs
are scheduled for the pipeline.
CREATE_PIPELINE Procedure
The procedure creates a new data pipeline.
Syntax
DBMS_CLOUD_PIPELINE.CREATE_PIPELINE(
pipeline_name IN VARCHAR2,
pipeline_type IN VARCHAR2,
attributes IN CLOB DEFAULT NULL,
description IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
pipeline_name Specifies a name for the pipeline. The pipeline name must follow the
naming rules of Oracle SQL identifiers. See Identifiers for more
information.
This parameter is mandatory.
Parameter Description
pipeline_type Specifies the pipeline type.
Valid values: LOAD, EXPORT
This parameter is mandatory.
attributes Pipeline attributes in JSON format.
Default value: NULL
See DBMS_CLOUD_PIPELINE Attributes for more information.
description Description for the pipeline.
Default value: NULL
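Example
A minimal sketch that creates a load pipeline (the pipeline name and description are
illustrative):
BEGIN
DBMS_CLOUD_PIPELINE.CREATE_PIPELINE(
pipeline_name => 'MY_LOAD_PIPE',
pipeline_type => 'LOAD',
description => 'Load files from object store into a table');
END;
/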
DROP_PIPELINE Procedure
The procedure drops an existing data pipeline. If a pipeline has been started, then it must be
stopped before it can be dropped.
Syntax
DBMS_CLOUD_PIPELINE.DROP_PIPELINE(
pipeline_name IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE
);
Parameters
Parameter Description
pipeline_name Specifies a pipeline name.
This parameter is mandatory.
force Forcibly drop a pipeline, even if it is in started state.
Valid values: TRUE, FALSE
Default value: FALSE
Usage Note
• In order to drop a pipeline that is in started state, set the force parameter to TRUE.
RESET_PIPELINE Procedure
Resets the tracking state of a data pipeline. Use reset pipeline to restart the pipeline from the
initial state of data load or export. Optionally reset pipeline can purge data in the database or
in object store, depending on the type of pipeline. A data pipeline must be in stopped state to
reset it.
Syntax
DBMS_CLOUD_PIPELINE.RESET_PIPELINE(
pipeline_name IN VARCHAR2,
purge_data IN BOOLEAN DEFAULT FALSE
);
Parameters
Parameter Description
pipeline_name Specifies a name for the pipeline.
This parameter is mandatory.
purge_data Purge data applies for either a load pipeline or an export pipeline:
• For a load pipeline, when TRUE, truncate the data in database
table.
• For an export pipeline, when TRUE, delete the files in the object
store location.
Valid values: TRUE, FALSE
Default value: FALSE
Usage Notes
• A data pipeline must be in stopped state to reset it. See STOP_PIPELINE
Procedure for more information.
• For a load pipeline, resetting the pipeline clears the record of the files being loaded
by the pipeline. When you call either START_PIPELINE or RUN_PIPELINE_ONCE after
resetting a load pipeline, the pipeline repeats the data load and includes all the
files present in the object store location.
When purge_data is set to TRUE, DBMS_CLOUD_PIPELINE.RESET_PIPELINE does the
following:
– Truncates the data in the pipeline's database table you specify with
table_name attribute.
– Drops the pipeline's status table, and the pipeline's bad file table and error
table (if they exist).
• For an export pipeline, resetting the pipeline clears the last tracked data in the
database table. When you call either START_PIPELINE or RUN_PIPELINE_ONCE after
resetting an export pipeline, the pipeline repeats exporting data from the table or
query.
When purge_data is set to TRUE, DBMS_CLOUD_PIPELINE.RESET_PIPELINE deletes
existing files in the object store location specified with the location attribute.
RUN_PIPELINE_ONCE Procedure
This procedure performs an on-demand run of the pipeline in the current foreground
session, instead of running in a scheduled job. Use
DBMS_CLOUD_PIPELINE.RUN_PIPELINE_ONCE to test a pipeline before you start the
pipeline as a continuous job.
Syntax
DBMS_CLOUD_PIPELINE.RUN_PIPELINE_ONCE(
pipeline_name IN VARCHAR2
);
Parameters
Parameter Description
pipeline_name Specifies a name for the pipeline to run.
This parameter is mandatory.
Usage Notes
• After you perform a test run of a pipeline you can reset the pipeline state using
DBMS_CLOUD_PIPELINE.RESET_PIPELINE. This allows you to reset the pipeline state before
you start the pipeline in a scheduled job.
• If a pipeline is in the started state, then it cannot be run in the foreground session.
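Example
A minimal sketch, assuming a pipeline named MY_PIPE exists and is not in the started state:
BEGIN
   DBMS_CLOUD_PIPELINE.RUN_PIPELINE_ONCE(
       pipeline_name => 'MY_PIPE'
   );
END;
/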
SET_ATTRIBUTE Procedure
This procedure sets pipeline attributes. There are two overloaded procedures, one to set a
single attribute and another to set multiple attributes using a JSON document of attribute
name/value pairs.
Syntax
PROCEDURE DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
pipeline_name IN VARCHAR2,
attribute_name IN VARCHAR2,
attribute_value IN CLOB
);
PROCEDURE DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
pipeline_name IN VARCHAR2,
attributes IN CLOB
);
Parameters
Parameter Description
pipeline_name Specifies the name of the pipeline on which to set attributes.
This parameter is mandatory.
attribute_name Specifies the attribute name for the attribute to be set.
See DBMS_CLOUD_PIPELINE Attributes for more information.
attribute_value Specifies the value for the pipeline attribute to set.
See DBMS_CLOUD_PIPELINE Attributes for more information.
attributes Specifies a JSON document containing attribute names and values.
See DBMS_CLOUD_PIPELINE Attributes for more information.
Usage Note
• When you use DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE to set multiple attributes with the
attributes parameter, all the existing attributes are deleted and overwritten with the
specified attributes from the JSON document.
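Example
A minimal sketch of both overloads, assuming a pipeline named MY_PIPE exists; the
attribute names and values shown (interval, credential_name, location, table_name) are
illustrative, see DBMS_CLOUD_PIPELINE Attributes:
BEGIN
   -- Overload 1: set a single attribute
   DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
       pipeline_name   => 'MY_PIPE',
       attribute_name  => 'interval',
       attribute_value => '20'
   );
   -- Overload 2: replace all attributes with a JSON document
   DBMS_CLOUD_PIPELINE.SET_ATTRIBUTE(
       pipeline_name => 'MY_PIPE',
       attributes    => '{"credential_name":"OBJ_CRED","location":"https://objectstorage.us-phoenix-1.oraclecloud.com/n/namespace/b/bucket/o/","table_name":"SALES","interval":20}'
   );
END;
/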
START_PIPELINE Procedure
The procedure starts the data pipeline. When a pipeline is started, the pipeline
operation runs continuously in a scheduled job according to the interval configured
with the pipeline attributes.
Syntax
DBMS_CLOUD_PIPELINE.START_PIPELINE(
pipeline_name IN VARCHAR2,
start_date IN TIMESTAMP WITH TIME ZONE DEFAULT NULL
);
Parameters
Parameter Description
pipeline_name Specifies a name for the pipeline.
This parameter is mandatory.
start_date Specifies the starting date for the pipeline job.
Default value: NULL
Usage Notes
• By default, a pipeline job begins immediately, as soon as the pipeline is started. To
start a pipeline job at a later time, specify a valid date or timestamp using the
start_date parameter.
• See DBMS_CLOUD_PIPELINE Attributes for information on the pipeline interval
and other pipeline attributes.
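Example
A minimal sketch, assuming a pipeline named MY_PIPE exists; the start_date value defers
the pipeline job by one hour:
BEGIN
   DBMS_CLOUD_PIPELINE.START_PIPELINE(
       pipeline_name => 'MY_PIPE',
       start_date    => SYSTIMESTAMP + INTERVAL '1' HOUR
   );
END;
/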
STOP_PIPELINE Procedure
The procedure stops the data pipeline. When a pipeline is stopped, no future jobs are
scheduled for the pipeline.
Syntax
DBMS_CLOUD_PIPELINE.STOP_PIPELINE(
pipeline_name IN VARCHAR2,
force IN BOOLEAN DEFAULT FALSE
);
Parameters
Parameter Description
pipeline_name Specifies a name for the pipeline.
This parameter is mandatory.
force If the force parameter is passed as TRUE, then any running jobs for the
pipeline are terminated.
Valid values: TRUE, FALSE
Default value: FALSE
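Example
A minimal sketch, assuming a started pipeline named MY_PIPE; force => TRUE also
terminates any running jobs:
BEGIN
   DBMS_CLOUD_PIPELINE.STOP_PIPELINE(
       pipeline_name => 'MY_PIPE',
       force         => TRUE
   );
END;
/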
DBMS_CLOUD_PIPELINE Attributes
Attributes help to control and configure the behavior of a data pipeline.
Attributes
Note:
As indicated in the Pipeline Type column, depending on the pipeline type LOAD or
EXPORT, a pipeline supports a different set of attributes.
DBMS_CLOUD_PIPELINE Views
The DBMS_CLOUD_PIPELINE package uses the following views.
• DBA_CLOUD_PIPELINES View
• USER_CLOUD_PIPELINES View
• DBA_CLOUD_PIPELINE_HISTORY View
• USER_CLOUD_PIPELINE_HISTORY View
• DBA_CLOUD_PIPELINE_ATTRIBUTES View
• USER_CLOUD_PIPELINE_ATTRIBUTES View
DBMS_CLOUD_REPO Package
The DBMS_CLOUD_REPO package provides for the use and management of cloud-hosted
code repositories from Oracle Database. Supported cloud code repositories include
GitHub, AWS CodeCommit, and Azure Repos.
• DBMS_CLOUD_REPO Overview
The DBMS_CLOUD_REPO package provides easy access to files in Cloud Code (Git)
Repositories, including: GitHub, AWS CodeCommit, and Azure Repos.
• DBMS_CLOUD_REPO Data Structures
The DBMS_CLOUD_REPO package defines record types and a generic JSON object
type repo.
• DBMS_CLOUD_REPO Subprogram Groups
The DBMS_CLOUD_REPO package subprograms can be grouped into four categories:
Initialization Operations, Repository Management Operations, File Operations, and
SQL Install Operations.
• Summary of DBMS_CLOUD_REPO Subprograms
This section covers the DBMS_CLOUD_REPO subprograms provided with Autonomous
Database.
DBMS_CLOUD_REPO Overview
The DBMS_CLOUD_REPO package provides easy access to files in Cloud Code (Git)
Repositories, including: GitHub, AWS CodeCommit, and Azure Repos.
This package is a single interface for access to Multicloud Code repositories and
allows you to upload SQL files to Git repositories or install SQL scripts directly from
Cloud Code Repositories. This package also allows you to use a Cloud Code
Repository to manage code versions for SQL scripts and to install or patch application
code from Git repositories.
Concepts
• Git Version Control System: Git is software for tracking changes in any set of files,
usually used for coordinating work among programmers collaboratively developing
source code during software development. Its goals include speed, data integrity,
and support for distributed, non-linear workflows.
• Git Repository: A Git repository is a virtual storage of your project. It allows you to
save versions of your code, which you can access when needed.
Architecture
DBMS_CLOUD_REPO package provides four feature areas:
• Repository Initialization with Generic Cloud Code Repository Handle
• Repository Management Operations
• File Operations
• SQL Install Operations
Subprogram Description
INIT_AWS_REPO Function This function initializes an AWS repository handle and returns an opaque type.
INIT_AZURE_REPO Function This function initializes an Azure repository handle and returns an opaque type.
INIT_GITHUB_REPO Function This function initializes a GitHub repository handle and returns an opaque type.
INIT_REPO Function This function initializes a Cloud Code Repository handle and returns an opaque
JSON object.
Subprogram Description
CREATE_REPOSITORY Procedure This procedure creates a Cloud Code Repository identified by the repo handle argument.
DELETE_REPOSITORY Procedure This procedure deletes the Cloud Code Repository identified by the repo handle argument.
LIST_REPOSITORIES Function This function lists all the Cloud Code Repositories identified by the repo handle argument.
UPDATE_REPOSITORY Procedure This procedure updates a Cloud Code repository identified by the repo handle argument. The procedure supports updating the name, description, or private visibility status, as supported by the Cloud Code repository.
Subprogram Description
CREATE_BRANCH Procedure This procedure creates a branch in a Cloud Code Repository identified by the
repo handle argument.
DELETE_BRANCH Procedure This procedure deletes a branch in a Cloud Code Repository identified by the
repo handle argument.
LIST_BRANCHES Function This function lists all the Cloud Code Repository branches identified by the
repo handle argument.
LIST_COMMITS Function This function lists all the commits in a Cloud Code Repository branch identified
by the repo handle argument.
MERGE_BRANCH Procedure This procedure merges a Cloud Code Repository branch into another specified
branch in a Cloud Code Repository identified by the repo handle argument.
Subprogram Description
DELETE_FILE Procedure This procedure deletes a file from the Cloud Code repository identified by the repo handle argument.
GET_FILE Procedure and Function The function downloads the contents of a file from the Cloud Code repository. The procedure allows you to download the contents of a file from the Cloud Code repository and save the file in a directory.
LIST_FILES Function This function lists files in the Cloud Code repository. Optionally, files can be listed from a specific branch, tag, or commit. By default, files are listed from the default repository branch.
PUT_FILE Procedure This procedure uploads a file to the Cloud Code repository identified by the repo handle argument. The procedure is overloaded to support either uploading a file from a directory object or uploading the contents from a BLOB to the repository file.
Subprogram Description
EXPORT_OBJECT Procedure This procedure uploads the DDL metadata of a database object to the Cloud
Code repository identified by the repo handle argument.
EXPORT_SCHEMA Procedure This procedure exports metadata of all objects in a schema to a Cloud Code
Repository branch identified by the repo handle argument.
INSTALL_FILE Procedure This procedure installs SQL statements from a file in the Cloud Code repository
identified by the repo handle argument.
INSTALL_SQL Procedure This procedure installs SQL statements from a buffer given as input.
• CREATE_REPOSITORY Procedure
This procedure creates a Cloud Code Repository identified by the repo handle
argument.
• DELETE_BRANCH Procedure
This procedure deletes a branch in the Cloud Code repository identified by the
repo handle argument.
• DELETE_FILE Procedure
This procedure deletes a file from the Cloud Code repository identified by the repo
handle argument.
• DELETE_REPOSITORY Procedure
This procedure deletes the Cloud Code Repository identified by the repo handle
argument.
• EXPORT_OBJECT Procedure
This procedure uploads the DDL metadata of a database object to the Cloud Code
repository identified by the repo handle argument. This procedure is an easy way
to upload the metadata definition of a database object in a single step.
• EXPORT_SCHEMA Procedure
This procedure exports metadata of all objects in a schema to the Cloud Code
Repository branch identified by the repo handle argument.
• GET_FILE Procedure and Function
The function downloads the contents of a file from the Cloud Code repository. The
procedure allows you to download the contents of a file from the Cloud Code
repository and save the file in a directory.
• INIT_AWS_REPO Function
This function initializes an AWS repository handle and returns an opaque type.
• INIT_AZURE_REPO Function
This function initializes an Azure repository handle and returns an opaque type.
This function is only supported for the Azure cloud provider.
• INIT_GITHUB_REPO Function
This function initializes a GitHub repository handle and returns an opaque type.
• INIT_REPO Function
This function initializes a Cloud Code Repository handle and returns an opaque
JSON object. This function is a generic interface that accepts a JSON document. It avoids
code changes: when moving a code repository from one Cloud Code repository to
another, you only need to change the JSON document.
• INSTALL_FILE Procedure
This procedure installs SQL statements from a file in the Cloud Code repository
identified by the repo handle argument.
• INSTALL_SQL Procedure
This procedure installs SQL statements from a buffer given as input.
• LIST_BRANCHES Function
This function lists branches in the Cloud Code Repository identified by the
repo handle argument.
• LIST_COMMITS Function
This function lists commits in the Cloud Code Repository branch identified by the
repo handle argument.
• LIST_FILES Function
This function lists files in the Cloud Code repository. Optionally, files can be listed
from a specific branch, tag, or commit. By default, files are listed from the default
repository branch. The results include the file names and additional metadata about
the files.
• LIST_REPOSITORIES Function
This function lists all the Cloud Code Repositories identified by the repo handle
argument. The results include the repository names and additional metadata about the
repositories.
• MERGE_BRANCH Procedure
This procedure merges a repository branch into another specified branch in the Cloud
Code Repository identified by the repo handle argument. The MERGE_BRANCH procedure is
currently not supported in Azure.
• PUT_FILE Procedure
This procedure uploads a file to the Cloud Code repository identified by the repo handle
argument. The procedure is overloaded to support either uploading a file from a directory
object or uploading the contents from a BLOB to the repository file.
• UPDATE_REPOSITORY Procedure
This procedure updates a Cloud Code repository identified by the repo handle argument.
UPDATE_REPOSITORY supports updating the name, description, or private visibility
status, as supported by the Cloud Code repository.
CREATE_BRANCH Procedure
This procedure creates a branch in the Cloud Code Repository identified by the repo handle
argument.
Syntax
PROCEDURE DBMS_CLOUD_REPO.CREATE_BRANCH(
repo IN CLOB,
branch_name IN VARCHAR2,
parent_branch_name IN VARCHAR2 DEFAULT NULL,
parent_commit_id IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is mandatory and supported for all cloud providers.
branch_name Specifies the repository branch name.
This parameter is mandatory and supported for all cloud providers.
parent_branch_name Creates the new branch using the head commit of the specified parent
branch.
This parameter is supported for all cloud providers.
If you do not supply a parent_branch_name value, the
parent_branch_name is set to main.
parent_commit_id Creates the new branch using the specified repository commit.
This parameter is supported for all cloud providers.
If you do not supply a parent_commit_id value, the
parent_commit_id is set to a NULL value.
Note:
To create a branch in a Cloud Code repository, you must specify the parent
branch or the parent commit id.
Example
BEGIN
DBMS_CLOUD_REPO.CREATE_BRANCH (
repo => l_repo,
branch_name => 'test_branch',
parent_branch_name => 'main'
);
END;
/
Usage Note
To run the DBMS_CLOUD_REPO.CREATE_BRANCH procedure, you must be logged in as the
ADMIN user or have EXECUTE privilege on DBMS_CLOUD_REPO.
CREATE_REPOSITORY Procedure
This procedure creates a Cloud Code Repository identified by the repo handle
argument.
Syntax
PROCEDURE DBMS_CLOUD_REPO.CREATE_REPOSITORY(
repo IN CLOB,
description IN CLOB DEFAULT NULL,
private IN BOOLEAN DEFAULT TRUE
);
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is supported for all cloud providers.
description A short text description for the repository.
This parameter is supported for the GITHUB and AWS cloud providers.
private Repository is private and only accessible with valid credentials.
This parameter is only supported for the GITHUB cloud provider.
Example
BEGIN
DBMS_CLOUD_REPO.CREATE_REPOSITORY(
repo => l_repo,
description => 'My test repo',
private => TRUE
);
END;
/
DELETE_BRANCH Procedure
This procedure deletes a branch in the Cloud Code repository identified by the repo handle
argument.
Syntax
PROCEDURE DBMS_CLOUD_REPO.DELETE_BRANCH (
repo IN CLOB,
branch_name IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is mandatory and supported for all cloud providers.
branch_name Specifies the name of the branch to delete.
This parameter is mandatory and supported for all cloud providers.
Example
BEGIN
DBMS_CLOUD_REPO.DELETE_BRANCH (
repo => l_repo,
branch_name => 'test_branch'
);
END;
/
Usage Note
To run the DBMS_CLOUD_REPO.DELETE_BRANCH procedure, you must be logged in as the ADMIN
user or have EXECUTE privilege on DBMS_CLOUD_REPO.
DELETE_FILE Procedure
This procedure deletes a file from the Cloud Code repository identified by the repo
handle argument.
Syntax
PROCEDURE DBMS_CLOUD_REPO.DELETE_FILE(
repo IN CLOB,
file_path IN VARCHAR2,
branch_name IN VARCHAR2 DEFAULT NULL,
commit_details IN CLOB DEFAULT NULL
);
Parameters
Parameter Description
repo Specifies the repository handle.
file_path File path to delete file in the repository.
branch_name Delete file from a specific branch.
commit_details Commit Details as a JSON document
{"message": "Commit message", "author": {"name":
"Committing user name", "email": "Email of
committing user" } }
Example
BEGIN
DBMS_CLOUD_REPO.DELETE_FILE(
repo => l_repo,
file_path => 'scripts/test3.sql',
branch_name => 'test_branch'
);
END;
/
DELETE_REPOSITORY Procedure
This procedure deletes the Cloud Code Repository identified by the repo handle
argument.
Syntax
PROCEDURE DBMS_CLOUD_REPO.DELETE_REPOSITORY(
repo IN CLOB
);
Parameters
Parameter Description
repo Specifies the repository handle.
Example
BEGIN
DBMS_CLOUD_REPO.DELETE_REPOSITORY(
repo => l_repo
);
END;
/
EXPORT_OBJECT Procedure
This procedure uploads the DDL metadata of a database object to the Cloud Code repository
identified by the repo handle argument. This procedure is an easy way to upload the
metadata definition of a database object in a single step.
Syntax
PROCEDURE DBMS_CLOUD_REPO.EXPORT_OBJECT(
repo IN CLOB,
file_path IN VARCHAR2,
object_type IN VARCHAR2,
object_name IN VARCHAR2 DEFAULT NULL,
object_schema IN VARCHAR2 DEFAULT NULL,
branch_name IN VARCHAR2 DEFAULT NULL,
commit_details IN CLOB DEFAULT NULL,
append IN BOOLEAN DEFAULT FALSE
);
Parameters
Parameter Description
repo Specifies the repository handle.
file_path File path to upload object metadata in the repository.
object_type Object type supported by DBMS_METADATA. See DBMS_METADATA:
Object Types table for details.
object_name Name of the database object to retrieve metadata.
object_schema Owning schema of the database object.
branch_name Put file to a specific branch.
commit_details Commit Details as a JSON document:{"message": "Commit
message", "author": {"name": "Committing user name",
"email": "Email of committing user" } }
append Append metadata DDL to existing file.
Usage Note
For customized control over the object DDL, you can use DBMS_METADATA.GET_DDL along
with DBMS_CLOUD_REPO.PUT_FILE. To get the metadata definition of an object, the
current user must be privileged to retrieve the object metadata. See
DBMS_METADATA for the security requirements of the package.
Example
BEGIN
DBMS_CLOUD_REPO.EXPORT_OBJECT(
repo => l_repo,
object_type => 'PACKAGE',
object_name => 'MYPACK',
file_path => 'mypack.sql'
);
END;
/
EXPORT_SCHEMA Procedure
This procedure exports metadata of all objects in a schema to the Cloud Code
Repository branch identified by the repo handle argument.
Syntax
PROCEDURE DBMS_CLOUD_REPO.EXPORT_SCHEMA(
repo IN CLOB,
file_path IN VARCHAR2,
schema_name IN VARCHAR2,
filter_list IN CLOB DEFAULT NULL,
branch_name IN VARCHAR2 DEFAULT NULL,
commit_details IN CLOB DEFAULT NULL
);
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is mandatory and supported for all cloud
providers.
file_path Specifies the name of the schema file to upload to the repo.
This parameter is mandatory and supported for all cloud
providers.
schema_name Specifies the name of the schema for which a DDL script is to be
uploaded to the Cloud Code Repository branch.
This parameter is mandatory and supported for all cloud
providers.
filter_list Specifies a CLOB containing a JSON array that defines the filter
conditions to include or exclude the objects whose metadata is
exported.
This parameter is supported for all cloud providers.
JSON parameters for filter_list are:
• match_type: Specifies the type of filter to be applied to the
object types or object names.
Valid match_type values are:
– in/not_in
– like/not_like
– equal/not_equal
• type: Specifies the type of object by which to filter.
• name: Specifies the name of the object by which to filter.
branch_name Specifies the repository branch name.
This parameter is supported for all cloud providers.
If you do not supply a branch_name value, the branch_name is
set to the default repository branch.
commit_details Commit Details as a JSON document
{"message": "Commit message", "author": {"name":
"Committing user name", "email": "Email of
committing user" } }
This parameter is supported for all cloud providers.
If you do not supply a commit_details value, the
commit_details is set to the default commit message that
includes the information about the current database session user
and database name performing the commit.
Example
BEGIN
   DBMS_CLOUD_REPO.EXPORT_SCHEMA(
       repo        => l_repo,
       schema_name => 'USER1',
       file_path   => 'myschema_ddl.sql',
       filter_list =>
           to_clob('[
               { "match_type":"equal",
                 "type":"table"
               },
               { "match_type":"not_equal",
                 "type":"view"
               },
               { "match_type":"in",
                 "type":"table",
                 "name": " ''EMPLOYEE_SALARY'',''EMPLOYEE_ADDRESS'' "
               },
               { "match_type":"equal",
                 "type":"sequence",
                 "name": "EMPLOYEE_RECORD_SEQ"
               },
               { "match_type":"like",
                 "type":"table",
                 "name": "%OFFICE%"
               }
           ]')
   );
END;
/
Usage Note
To run the DBMS_CLOUD_REPO.EXPORT_SCHEMA procedure, you must be logged in as the
ADMIN user or have EXECUTE privilege on DBMS_CLOUD_REPO.
GET_FILE Procedure and Function
The function downloads the contents of a file from the Cloud Code repository. The
procedure allows you to download the contents of a file from the Cloud Code repository
and save the file in a directory.
Syntax
FUNCTION DBMS_CLOUD_REPO.GET_FILE(
repo IN CLOB,
file_path IN VARCHAR2,
branch_name IN VARCHAR2 DEFAULT NULL,
tag_name IN VARCHAR2 DEFAULT NULL,
commit_name IN VARCHAR2 DEFAULT NULL
) RETURN CLOB;
PROCEDURE DBMS_CLOUD_REPO.GET_FILE(
repo IN CLOB,
file_path IN VARCHAR2,
directory_name IN VARCHAR2,
target_file_name IN VARCHAR2 DEFAULT NULL,
branch_name IN VARCHAR2 DEFAULT NULL,
tag_name IN VARCHAR2 DEFAULT NULL,
commit_name IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
repo Specifies the repository handle.
file_path File path in the repository.
directory_name Directory object name to save the file contents.
target_file_name Target file name to save contents in directory.
branch_name Get file from a specific branch.
tag_name Get file from a specific Tag.
commit_name Get file from a specific commit.
Example
BEGIN
DBMS_CLOUD_REPO.GET_FILE(
repo => l_repo,
file_path => 'test3.sql',
directory_name => 'DATA_PUMP_DIR',
target_file_name => 'test2.sql'
);
END;
/
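The function overload returns the file contents as a CLOB. A minimal sketch; the credential
and repository details are illustrative:
DECLARE
   l_repo    CLOB;
   l_content CLOB;
BEGIN
   -- Initialize a repository handle (assumes the GITHUB_CRED credential exists)
   l_repo := DBMS_CLOUD_REPO.INIT_GITHUB_REPO(
       credential_name => 'GITHUB_CRED',
       repo_name       => 'my_repo',
       owner           => 'foo');
   -- Fetch the file contents from the default branch
   l_content := DBMS_CLOUD_REPO.GET_FILE(
       repo      => l_repo,
       file_path => 'test3.sql');
   DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(l_content, 200, 1));
END;
/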
INIT_AWS_REPO Function
This function initializes an AWS repository handle and returns an opaque type.
Syntax
FUNCTION DBMS_CLOUD_REPO.INIT_AWS_REPO(
credential_name IN VARCHAR2,
repo_name IN VARCHAR2,
region IN VARCHAR2
) RETURN repo;
Parameters
Parameter Description
credential_name Credential object specifying the AWS CodeCommit access key and secret key.
repo_name Specifies the repository name.
region Specifies the AWS region for the CodeCommit repository.
Example
BEGIN
:repo := DBMS_CLOUD_REPO.INIT_AWS_REPO(
credential_name => 'AWS_CRED',
repo_name => 'my_repo',
region => 'us-east-1'
);
END;
/
INIT_AZURE_REPO Function
This function initializes an Azure repository handle and returns an opaque type. This function
is only supported for Azure cloud provider.
Syntax
FUNCTION DBMS_CLOUD_REPO.INIT_AZURE_REPO(
credential_name IN VARCHAR2,
repo_name IN VARCHAR2,
organization IN VARCHAR2,
project IN VARCHAR2
) RETURN repo;
Parameters
Parameter Description
credential_name Credential object for Azure, specifying a username and personal
access token (PAT).
repo_name Specifies the repository name.
organization Specifies the Azure DevOps Organization.
project Azure Team Project name.
Example
BEGIN
:repo := DBMS_CLOUD_REPO.INIT_AZURE_REPO(
credential_name => 'AZURE_CRED',
repo_name => 'my_repo',
organization => 'myorg',
project => 'myproj'
);
END;
/
INIT_GITHUB_REPO Function
This function initializes a GitHub repository handle and returns an opaque type.
Syntax
FUNCTION DBMS_CLOUD_REPO.INIT_GITHUB_REPO(
credential_name IN VARCHAR2 DEFAULT NULL,
repo_name IN VARCHAR2,
owner IN VARCHAR2)
RETURN repo;
Parameters
Parameter Description
credential_name Credential object for GitHub, specifying the user email and personal
access token (PAT).
repo_name Specifies the repository name.
owner Specifies the repository owner.
Example
BEGIN
:repo := DBMS_CLOUD_REPO.INIT_GITHUB_REPO(
credential_name => 'GITHUB_CRED',
repo_name => 'my_repo',
owner => 'foo'
);
END;
/
INIT_REPO Function
This function initializes a Cloud Code Repository handle and returns an opaque JSON object.
This function is a generic interface that accepts a JSON document. It avoids code changes:
when moving a code repository from one Cloud Code repository to another, you only need to
change the JSON document.
Syntax
FUNCTION DBMS_CLOUD_REPO.INIT_REPO(
params IN CLOB)
RETURN CLOB;
Parameters
Parameter Description
params Specifies a JSON document containing the repository parameters, such as the
provider, repository name, and credential name.
Example
A minimal sketch; the JSON keys shown (provider, repo_name, credential_name, owner) are
illustrative:
BEGIN
   :repo := DBMS_CLOUD_REPO.INIT_REPO(
       params => JSON_OBJECT('provider'        VALUE 'github',
                             'repo_name'       VALUE 'my_repo',
                             'credential_name' VALUE 'GITHUB_CRED',
                             'owner'           VALUE 'foo')
   );
END;
/
INSTALL_FILE Procedure
This procedure installs SQL statements from a file in the Cloud Code repository
identified by the repo handle argument.
Syntax
PROCEDURE DBMS_CLOUD_REPO.INSTALL_FILE(
repo IN CLOB,
file_path IN VARCHAR2,
branch_name IN VARCHAR2 DEFAULT NULL,
tag_name IN VARCHAR2 DEFAULT NULL,
commit_name IN VARCHAR2 DEFAULT NULL,
stop_on_error IN BOOLEAN DEFAULT TRUE
);
Parameters
Parameter Description
repo Specifies the repository handle.
file_path File path in the repository.
branch_name Install the file from a specific branch.
tag_name Install the file from a specific tag.
commit_name Install the file from a specific commit ID.
stop_on_error Stop executing the SQL statements on the first error.
Usage Notes
• You can install SQL statements containing nested SQL from a Cloud Code
repository file using the following:
– @: includes a SQL file with a relative path to the ROOT of the repository.
– @@: includes a SQL file with a path relative to the current file.
• The scripts are intended as schema install scripts and not as generic SQL scripts:
– Scripts cannot contain SQL*Plus client specific commands.
– Scripts cannot contain bind variables or parameterized scripts.
– SQL statements must be terminated with a slash on a new line (/).
– Scripts can contain DDL, DML, and PL/SQL statements, but direct SELECT
statements are not supported. Using SELECT within a PL/SQL block is
supported.
Any SQL statement that can be run using EXECUTE IMMEDIATE will work if it does
not contain bind variables or defines.
Example
BEGIN
DBMS_CLOUD_REPO.INSTALL_FILE(
repo => l_repo,
file_path => 'test3.sql',
stop_on_error => FALSE
);
END;
/
INSTALL_SQL Procedure
This procedure installs SQL statements from a buffer given as input.
Syntax
PROCEDURE DBMS_CLOUD_REPO.INSTALL_SQL(
content IN CLOB,
stop_on_error IN BOOLEAN DEFAULT TRUE
);
Parameters
Parameter Descriptions
content Is the CLOB containing the SQL statements to run.
stop_on_error Stop executing the SQL statements on first error.
Usage Notes
• The scripts are intended as schema install scripts and not as generic SQL scripts:
– Scripts cannot contain SQL*Plus client specific commands.
– Scripts cannot contain bind variables or parameterized scripts.
– SQL statements must be terminated with a slash on a new line (/).
– Scripts can contain DDL, DML, and PL/SQL statements, but direct SELECT statements are
not supported. Using SELECT within a PL/SQL block is supported.
Any SQL statement that can be run using EXECUTE IMMEDIATE will work if it does not
contain bind variables or defines.
Example
BEGIN
DBMS_CLOUD_REPO.INSTALL_SQL(
content => 'create table t1 (x varchar2(30))' || CHR(10) || '/',
stop_on_error => FALSE
);
END;
/
LIST_BRANCHES Function
This function lists branches in the Cloud Code Repository identified by the repo
handle argument.
Syntax
FUNCTION DBMS_CLOUD_REPO.LIST_BRANCHES(
repo IN CLOB
) RETURN list_branch_ret_tab PIPELINED PARALLEL_ENABLE;
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is mandatory and supported for all cloud
providers.
Example
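A possible query, assuming :repo holds a handle initialized with one of the INIT functions:
SELECT name FROM DBMS_CLOUD_REPO.LIST_BRANCHES(repo => :repo);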
Usage Notes
• This is a pipelined table function with the return type list_branch_ret_tab.
• DBMS_CLOUD_REPO.LIST_BRANCHES returns the column: name, which indicates the
name of the Cloud Code Repository branch.
LIST_COMMITS Function
This function lists commits in the Cloud Code Repository branch identified by the repo
handle argument.
Syntax
FUNCTION DBMS_CLOUD_REPO.LIST_COMMITS(
repo IN CLOB,
branch_name IN VARCHAR2 DEFAULT NULL,
file_path IN VARCHAR2 DEFAULT NULL,
commit_id IN VARCHAR2 DEFAULT NULL
) RETURN list_commit_ret_tab PIPELINED PARALLEL_ENABLE;
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is mandatory and supported for all cloud
providers.
branch_name List commits from a specific branch.
This parameter is supported for all cloud providers.
If you do not supply a branch_name value, the branch_name is
set to main.
file_path List commits under the specified file path in the repository.
This parameter is only supported for Git and Azure cloud
providers.
If you do not supply a file_path value, the file_path is set to
a NULL value.
commit_id List commits starting from the specified commit SHA/ID.
This parameter is supported for all cloud providers.
If you do not supply a commit_id value, the commit_id is set to
a NULL value.
Example
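A possible query, assuming :repo holds an initialized repository handle:
SELECT commit_id FROM DBMS_CLOUD_REPO.LIST_COMMITS(repo => :repo);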
Usage Notes
• This is a pipelined table function with the return type list_commit_ret_tab.
• DBMS_CLOUD_REPO.LIST_COMMITS returns the column: commit_id.
LIST_FILES Function
This function lists files in the Cloud Code repository. Optionally, files can be listed from a
specific branch, tag, or commit. By default, files are listed from the default repository
branch. The results include the file names and additional metadata about the files.
Syntax
FUNCTION DBMS_CLOUD_REPO.LIST_FILES(
repo IN CLOB,
path IN VARCHAR2 DEFAULT NULL,
branch_name IN VARCHAR2 DEFAULT NULL,
tag_name IN VARCHAR2 DEFAULT NULL,
commit_id IN VARCHAR2 DEFAULT NULL
) RETURN list_file_ret_tab PIPELINED PARALLEL_ENABLE;
Parameters
Parameter Description
repo Specifies the repository handle.
path List files under the specified subfolder path in the repository.
branch_name List files from a specific branch.
tag_name List files from a specific Tag.
commit_id List files from a specific commit.
Usage Notes
• This is a pipelined table function with the return type list_file_ret_tab.
• DBMS_CLOUD_REPO.LIST_FILES returns the columns: id, name, url, and bytes.
Example
A possible invocation, assuming :repo holds an initialized repository handle, and its output:
SELECT name FROM DBMS_CLOUD_REPO.LIST_FILES(repo => :repo);

NAME
-------------------------
test3.sql
LIST_REPOSITORIES Function
This function lists all the Cloud Code Repositories identified by the repo handle
argument. The results include the repository names and additional metadata about the
repositories.
Syntax
FUNCTION DBMS_CLOUD_REPO.LIST_REPOSITORIES(
repo IN CLOB
) RETURN list_repo_ret_tab PIPELINED PARALLEL_ENABLE;
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is supported by all cloud providers.
Usage Notes
• This is a pipelined table function with the return type list_repo_ret_tab.
Example
A possible invocation, assuming :repo holds an initialized repository handle, and its output:
SELECT name, description FROM DBMS_CLOUD_REPO.LIST_REPOSITORIES(repo => :repo);

NAME DESCRIPTION
--------------------- ---------------
TestRepo1 My test repo
MERGE_BRANCH Procedure
This procedure merges a repository branch into another specified branch in the Cloud Code
Repository identified by the repo handle argument. The MERGE_BRANCH procedure is currently
not supported in Azure.
Syntax
PROCEDURE DBMS_CLOUD_REPO.MERGE_BRANCH (
repo IN CLOB,
branch_name IN VARCHAR2,
target_branch_name IN VARCHAR2 DEFAULT NULL,
commit_details IN CLOB DEFAULT NULL
);
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is mandatory and supported for GITHUB and AWS
cloud providers.
branch_name Specifies the Git branch name to merge.
This parameter is mandatory and supported for all cloud providers.
target_branch_name Specifies the target branch name to merge into.
This parameter is mandatory and supported for all cloud providers.
commit_details Commit Details as a JSON document
{"message": "Commit message", "author": {"name":
"Committing user name", "email": "Email of committing
user" } }
If you do not supply a commit_details value, the commit_details is
set to the default commit message that includes the information about
current database session user and database name performing the
commit.
Example
BEGIN
   DBMS_CLOUD_REPO.MERGE_BRANCH (
       repo               => l_repo,
       branch_name        => 'test_branch',
       target_branch_name => 'main'
   );
END;
/
Usage Note
To run the DBMS_CLOUD_REPO.MERGE_BRANCH procedure, you must be logged in as the
ADMIN user or have EXECUTE privilege on DBMS_CLOUD_REPO.
PUT_FILE Procedure
This procedure uploads a file to the Cloud Code repository identified by the repo
handle argument. The procedure is overloaded to support either uploading a file from
a directory object or uploading the contents from a BLOB to the repository file.
Syntax
PROCEDURE DBMS_CLOUD_REPO.PUT_FILE(
repo IN CLOB,
file_path IN VARCHAR2,
contents IN BLOB,
branch_name IN VARCHAR2 DEFAULT NULL,
commit_details IN CLOB DEFAULT NULL
);
PROCEDURE DBMS_CLOUD_REPO.PUT_FILE(
repo IN CLOB,
file_path IN VARCHAR2,
directory_name IN VARCHAR2,
source_file_name IN VARCHAR2 DEFAULT NULL,
branch_name IN VARCHAR2 DEFAULT NULL,
commit_details IN CLOB DEFAULT NULL
);
Parameters
Parameter Description
repo Specifies the repository handle.
file_path File path to upload file in the repository.
contents BLOB containing the file contents.
directory_name Directory object name containing the file name.
source_file_name Source file name to upload to repository.
branch_name Put file to a specific branch.
commit_details Commit Details as a JSON document:
{"message": "Commit message", "author": {"name":
"Committing user name", "email": "Email of
committing user" } }
Example
For example, to upload a file from a directory object (the file and directory names are
illustrative):
BEGIN
   DBMS_CLOUD_REPO.PUT_FILE(
       repo             => l_repo,
       file_path        => 'test1.sql',
       directory_name   => 'DATA_PUMP_DIR',
       source_file_name => 'test1.sql'
   );
END;
/
UPDATE_REPOSITORY Procedure
This procedure updates a Cloud Code repository identified by the repo handle argument.
UPDATE_REPOSITORY supports updating the name, description, or private visibility status,
as supported by the Cloud Code repository.
Syntax
PROCEDURE DBMS_CLOUD_REPO.UPDATE_REPOSITORY(
repo IN OUT CLOB,
new_name IN VARCHAR2 DEFAULT NULL,
description IN CLOB DEFAULT NULL,
private IN BOOLEAN DEFAULT NULL
);
Parameters
Parameter Description
repo Specifies the repository handle.
This parameter is supported for all cloud providers.
new_name New name for repository.
This parameter is supported for all cloud providers.
description A short text description for the repository.
This parameter is supported for GITHUB and AWS cloud providers.
private Repository is private and only accessible with valid credentials.
This parameter is supported for the GITHUB cloud provider.
Example
BEGIN
DBMS_CLOUD_REPO.UPDATE_REPOSITORY(
repo => l_repo,
new_name => 'repo2'
);
END;
/
DBMS_DCAT Package
The DBMS_DCAT package provides functions and procedures to help Autonomous
Database users leverage the data discovery and centralized metadata management
system of OCI Data Catalog.
Data Catalog harvests metadata from a data lake's object storage assets. The
harvesting process creates logical entities, which can be thought of as tables with
columns and associated data types. DBMS_DCAT procedures and functions connect
Autonomous Database to Data Catalog and then synchronize the assets with the
database, creating protected schemas and external tables. You can then query the object
store using those external tables, easily joining external data with data stored in
Autonomous Database. This dramatically simplifies the management process; there is
a single, centrally managed metadata store that is shared across multiple OCI services
(including Autonomous Databases). There are also Autonomous Database dictionary
views that allow you to inspect the contents of Data Catalog using SQL, and show you
how these Data Catalog entities map to your Autonomous Database schemas and
tables.
• Data Catalog Users and Roles
The DBMS_DCAT package supports synced users/schemas, dcat_admin users and
local users. Users must have the dcat_sync role to be able to use this package.
• Required Credentials and IAM Policies
This topic describes the Oracle Cloud Infrastructure Identity and Access
Management (IAM) user credentials and policies required to give Autonomous
Database users permission to manage a data catalog and to read from object
storage.
• Summary of Connection Management Subprograms
This table lists the DBMS_DCAT package procedures used to create, query and drop
Data Catalog connections.
• Summary of Synchronization Subprograms
Running a synchronization, creating and dropping a synchronization job, and
dropping synchronized schemas can be performed with the procedures listed in
this table.
• Summary of Data Catalog Views
Data Catalog integration with Autonomous Database provides numerous tables
and views.
• Synced users/schemas
Schemas created by the synchronization are created with the PROTECTED
clause, so that they cannot be altered by local users (not even the PDB admin) and can
only be modified through the synchronization.
• User dcat_admin
User dcat_admin is a local database user that can run a sync and grant READ privilege
on synced tables to other users or roles. The user is created as a no authentication user
without the CREATE SESSION privilege.
• Local users
Database users querying the external tables must be explicitly granted READ privileges
on the synced external tables by users dcat_admin or ADMIN. By default, after the sync
is completed, only users dcat_admin and ADMIN have access to the synced external
tables.
The private_key parameter specifies the generated private key in PEM format.
Private keys created with a passphrase are not supported, so you must generate a key
without a passphrase. See How to Generate an API Signing Key for more details on how to
create a private key with no passphrase. Also, the private key that you provide for this
parameter must contain only the key itself, without any header or footer (for example,
'-----BEGIN RSA PRIVATE KEY-----', '-----END RSA PRIVATE KEY-----').
The fingerprint parameter specifies the fingerprint that is obtained either after
uploading the public key to the console, or using the OpenSSL commands. See How
to Upload the Public Key and How to Get the Key's Fingerprint for further details on
obtaining the fingerprint.
Once all the necessary information is gathered and the private key is generated, you are
ready to run the following CREATE_CREDENTIAL procedure:
BEGIN
   DBMS_CLOUD.CREATE_CREDENTIAL (
       credential_name => 'OCI_NATIVE_CRED',
       user_ocid       => 'ocid1.user.oc1..aaaaaaaatfn77fe3fxux3o5lego7glqjejrzjsqsrs64f4jsjrhbsk5qzndq',
       tenancy_ocid    => 'ocid1.tenancy.oc1..aaaaaaaapwkfqz3upqklvmelbm3j77nn3y7uqmlsod75rea5zmtmbl574ve6a',
       private_key     => 'MIIEogIBAAKCAQEA...t9SH7Zx7a5iV7QZJS5WeFLMUEv+YbYAjnXK+dOnPQtkhOblQwCEY3Hsblj7Xz7o=',
       fingerprint     => '4f:0c:d6:b7:f2:43:3c:08:df:62:e3:b2:27:2e:3c:7a');
END;
/
You can verify that the credential was created, for example by querying DBA_CREDENTIALS:
SELECT owner, credential_name FROM dba_credentials
WHERE credential_name = 'OCI_NATIVE_CRED';

OWNER CREDENTIAL_NAME
----- ---------------
ADMIN OCI_NATIVE_CRED
1. Create a dynamic group, for example adb-grp-1, with a matching rule that includes the
Autonomous Database instance, such as:
resource.id = 'ocid1.autonomousdatabase.oc1.iad.abuwcljr...fjkfe'
2. Define a policy granting the adb-grp-1 dynamic group full access to the Data Catalog
instances, in the mycompartment compartment.
3. Define a policy that allows the adb-grp-1 dynamic group to read any bucket in the
compartment named mycompartment.
1. Allow users that are members of adb-admins to manage all data catalogs within
mycompartment.
2. Allow users that are members of adb-admins to read any object in any bucket
within mycompartment.
Subprogram Description
SET_DATA_CATALOG_CONN Procedure Create a connection to the given data catalog
SET_DATA_CATALOG_CREDENTIAL Procedure Set the data catalog access credential used by a specific connection to the data catalog
SET_OBJECT_STORE_CREDENTIAL Procedure Set the credential used by the given unique connection identifier for accessing the Object Store
UNSET_DATA_CATALOG_CONN Procedure Remove an existing Data Catalog connection
• SET_DATA_CATALOG_CREDENTIAL Procedure
This procedure sets the Data Catalog access credential used by a specific
connection to the Data Catalog.
• SET_OBJECT_STORE_CREDENTIAL Procedure
This procedure sets the credential that is used by the given unique connection
identifier for accessing the Object Store. Changing the Object Store access
credential alters all existing synced tables to use the new credential.
• SET_DATA_CATALOG_CONN Procedure
This procedure creates a connection to the given Data Catalog. The connection is
required to synchronize metadata with Data Catalog. An Autonomous Database
instance can connect to multiple Data Catalog instances and supports connecting
to OCI Data Catalogs and AWS Glue Data Catalogs.
• UNSET_DATA_CATALOG_CONN Procedure
This procedure removes an existing Data Catalog connection.
SET_DATA_CATALOG_CREDENTIAL Procedure
This procedure sets the Data Catalog access credential used by a specific connection
to the Data Catalog.
Syntax
PROCEDURE DBMS_DCAT.SET_DATA_CATALOG_CREDENTIAL(
    credential_name VARCHAR2(128) DEFAULT NULL,
    dcat_con_id     IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
credential_name (Optional) The credential used for accessing the Data Catalog.
dcat_con_id The unique Data Catalog connection identifier. The credential is used for
the connection identified by dcat_con_id. The default is NULL.
Usage
This credential must have Manage Data Catalog permissions; see Data Catalog Policies. The
default is the resource principal; see Access Cloud Resources by Configuring Policies and
Roles.
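Example
A minimal sketch, reusing the OCI_NATIVE_CRED credential created earlier:
BEGIN
   DBMS_DCAT.SET_DATA_CATALOG_CREDENTIAL(
       credential_name => 'OCI_NATIVE_CRED'
   );
END;
/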
SET_OBJECT_STORE_CREDENTIAL Procedure
This procedure sets the credential that is used by the given unique connection identifier for
accessing the Object Store. Changing the Object Store access credential alters all existing
synced tables to use the new credential.
Syntax
PROCEDURE DBMS_DCAT.SET_OBJECT_STORE_CREDENTIAL(
credential_name VARCHAR2(128),
dcat_con_id IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
credential_name The credential used by the external tables for accessing the Object Store.
dcat_con_id The unique Data Catalog connection identifier. The default is NULL.
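Example
A minimal sketch, reusing the OCI_NATIVE_CRED credential created earlier; existing synced
tables are altered to use the new credential:
BEGIN
   DBMS_DCAT.SET_OBJECT_STORE_CREDENTIAL(
       credential_name => 'OCI_NATIVE_CRED'
   );
END;
/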
SET_DATA_CATALOG_CONN Procedure
This procedure creates a connection to the given Data Catalog. The connection is required to
synchronize metadata with Data Catalog. An Autonomous Database instance can connect to
multiple Data Catalog instances and supports connecting to OCI Data Catalogs and AWS
Glue Data Catalogs.
Syntax
PROCEDURE DBMS_DCAT.SET_DATA_CATALOG_CONN (
region VARCHAR2 DEFAULT NULL,
endpoint VARCHAR2 DEFAULT NULL,
catalog_id VARCHAR2 DEFAULT NULL,
dcat_con_id VARCHAR2 DEFAULT NULL,
catalog_type VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
region The Data Catalog region. If the endpoint is specified, region is
optional. If both endpoint and region are given, then the
endpoint takes precedence. Default is NULL.
endpoint The Data Catalog endpoint. If the region is specified, endpoint is
optional. If both endpoint and region are given, then the
endpoint takes precedence. Default is NULL.
catalog_id The unique Oracle Cloud Identifier (OCID) for the Data Catalog
instance. When connecting to AWS Glue Data Catalogs,
catalog_id is optional.
dcat_con_id A unique Data Catalog connection identifier. This identifier is
required when connecting to multiple Data Catalogs and is optional
when connecting to only one. It is used to refer to the Data Catalog
connection in subsequent calls or when querying views. If no
identifier is specified this procedure generates a NULL connection
identifier. The following restrictions apply for dcat_con_id:
• It must be unique within the Autonomous Database instance.
• It must start with a letter.
• It may contain alphanumeric characters, underscores (_), dollar
signs ($), and pound signs (#).
• It must be at least 16 characters long.
catalog_type The type of data catalog to connect. Allowed values:
• OCI_DCAT - OCI Data Catalog
• AWS_GLUE - AWS Glue Data Catalog
• NULL - The catalog type is automatically detected from the
provided region or endpoint.
Usage
You only need to call this procedure once to set the connection. As part of the
connection process, Autonomous Database adds custom properties to Data Catalog.
These custom properties are accessible to Data Catalog users and allow you to
override default names (for schemas, tables and columns) and column data types.
Before creating a connection, credentials must be created and set. For a description of
the connection process, see Typical Workflow with Data Catalog for OCI Data
Catalogs and User Workflow for Querying with AWS Glue Data Catalog for AWS Glue
Data Catalogs.
BEGIN
DBMS_DCAT.SET_DATA_CATALOG_CONN(
region=>'uk-london-1',
catalog_id=>'ocid1.datacatalog.oc1.uk-london-1...');
END;
/
BEGIN
   DBMS_DCAT.SET_DATA_CATALOG_CONN(
       region=>'uk-london-1',
       catalog_type=>'AWS_GLUE');
END;
/
UNSET_DATA_CATALOG_CONN Procedure
This procedure removes an existing Data Catalog connection.
Note:
Invoking this procedure drops all of the protected schemas and external tables that
were created as part of previous synchronizations. It does not impact the metadata
in Data Catalog.
Syntax
PROCEDURE DBMS_DCAT.UNSET_DATA_CATALOG_CONN (
dcat_con_id IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
dcat_con_id The unique Data Catalog connection identifier. Default is NULL.
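Example
A minimal sketch, assuming a single connection created without an explicit dcat_con_id:
BEGIN
   DBMS_DCAT.UNSET_DATA_CATALOG_CONN();
END;
/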
Note:
On April 4, 2022, the sync_option and grant_read parameters were added
to the DBMS_DCAT.RUN_SYNC procedure. To ensure correct performance of
scheduled sync jobs created prior to that date, you need to drop and recreate
the scheduled sync jobs. See DBMS_DCAT.DROP_SYNC_JOB Procedure
and DBMS_DCAT.CREATE_SYNC_JOB Procedure.
Subprogram Description
CREATE_SYNC_JOB Procedure Create a scheduler job to invoke RUN_SYNC periodically
DROP_SYNC_JOB Procedure Drop an existing sync job for the given unique connection identifier
DROP_SYNCED_SCHEMAS Procedure Drop all previously synchronized schemas for the given unique connection identifier
RUN_SYNC Procedure Run a synchronization operation
• RUN_SYNC Procedure
This procedure runs a synchronization operation and is the entry point to the
synchronization. As an input, it takes lists of selected data catalog assets, folders
and entities and materializes them by creating, dropping, and altering external
tables.
• CREATE_SYNC_JOB Procedure
This procedure creates a scheduler job to invoke RUN_SYNC periodically.
• DROP_SYNC_JOB Procedure
This procedure drops an existing sync job for the given unique connection
identifier.
• DROP_SYNCED_SCHEMAS Procedure
This procedure drops all previously synchronized schemas for the given unique
connection identifier.
RUN_SYNC Procedure
This procedure runs a synchronization operation and is the entry point to the
synchronization. As an input, it takes lists of selected data catalog assets, folders and
entities and materializes them by creating, dropping, and altering external tables.
The sync_option parameter specifies which operation the RUN_SYNC procedure
performs: SYNC, DELETE or REPLACE. The operation is performed over entities within the
scope of the synced_objects parameter.
Every call to the RUN_SYNC procedure returns a unique operation_id that can be used
to query the USER_LOAD_OPERATIONS view to obtain information about the status of the
sync and the corresponding log_table. The DBMS_DCAT$SYNC_LOG view can be queried
for easy access to the log_table for the last sync operation executed by the current
user. For further details, see DBMS_DCAT$SYNC_LOG View, and Monitoring and
Troubleshooting Loads.
Note:
On April 4, 2022, the sync_option and grant_read parameters were added to the
RUN_SYNC procedure. To ensure correct performance of scheduled sync jobs created
prior to that date, you need to drop and recreate the scheduled sync jobs. See
DBMS_DCAT.DROP_SYNC_JOB Procedure and
DBMS_DCAT.CREATE_SYNC_JOB Procedure.
• Example 1. Logical entities based on harvested objects that follow the Hive style
partitioning format.
Consider the following objects:
Bucket: MYBUCKET
cluster1/db1.db/sales/country=USA/year=2020/month=01/sales1.csv
cluster1/db1.db/sales/country=USA/year=2020/month=01/sales2.csv
cluster1/db1.db/sales/country=USA/year=2020/month=02/sales1.csv
Harvesting the bucket using a filename pattern with a starting folder prefix of
cluster1/db1.db generates a logical entity named SALES with three partition
attributes: country, year, and month. The type for partitioned attributes is Partition
while the type for non-partitioned attributes is Primitive.
• Example 2. Logical entities based on harvested objects that follow the non-Hive style
partitioning format with prefix-based filename patterns.
Consider the following objects:
Bucket: MYBUCKET
cluster2/db2.db/sales/USA/2020/01/sales1.csv
cluster2/db2.db/sales/USA/2020/01/sales2.csv
cluster2/db2.db/sales/USA/2020/02/sales1.csv
Harvesting the bucket using a filename pattern with a starting folder prefix of
cluster2/db2.db generates a logical entity named SALES with three partition
attributes: name0, name1, and name2. The only difference between the generated
logical entity compared to Example 1, is that the names of partitioned attributes are
auto generated, while in Example 1 they are extracted from the URL (country,
year, and month respectively).
For a complete end to end example of synchronizing partitioned logical entities, see
Example: A Partitioned Data Scenario.
Syntax
PROCEDURE DBMS_DCAT.RUN_SYNC (
synced_objects IN CLOB,
sync_option IN VARCHAR2 DEFAULT 'SYNC',
error_semantics IN VARCHAR2 DEFAULT 'SKIP_ERRORS',
log_level IN VARCHAR2 DEFAULT 'INFO',
grant_read IN VARCHAR2 DEFAULT NULL,
dcat_con_id IN VARCHAR2 DEFAULT NULL
);
PROCEDURE DBMS_DCAT.RUN_SYNC (
synced_objects IN CLOB,
sync_option IN VARCHAR2 DEFAULT 'SYNC',
error_semantics IN VARCHAR2 DEFAULT 'SKIP_ERRORS',
log_level IN VARCHAR2 DEFAULT 'INFO',
grant_read IN VARCHAR2 DEFAULT NULL,
operation_id OUT NOCOPY NUMBER,
dcat_con_id IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
synced_objects This parameter is a JSON document that specifies the data catalog
objects to synchronize.
For OCI Data Catalogs, the JSON document specifies a set of entities in
multiple granularity: data assets, folders (Object Store buckets) or logical
entities. It contains an asset_list that is either an array of asset objects
or an array containing a single "*" string that stands for 'sync all (object
store) data assets in the catalog'.
For AWS Glue Data Catalogs, the JSON document specifies a list of
tables in multiple granularity: databases, tables. The document specifies
a list of databases. Users can restrict the set of tables to be synced by
specifying individual tables within a database.
sync_option (Optional) There are three options:
• SYNC (Default) - This option ensures that what is in the data catalog,
over the synced_objects scope, is represented in the Autonomous
Database. If a logical entity or Glue table was deleted from the data
catalog, since the last sync operation, then it is deleted in the
Autonomous Database. The following operations are performed over
the synced_objects scope:
– Adds tables for new data catalog entities
– Removes tables for deleted data catalog entities
– Updates properties (such as name, columns and data types) for
existing tables
• DELETE - Deletes tables within the synced_objects scope.
• REPLACE - Replaces all currently synced objects with the objects
within the synced_objects scope.
error_semantics (Optional) This parameter specifies the error behavior. If set to
SKIP_ERRORS, the sync attempts to continue despite errors encountered
for individual entities. If set to STOP_ON_ERROR, the procedure fails on the
first encountered error. The default is SKIP_ERRORS.
log_level (Optional) This parameter specifies the following values in increasing
level of logging detail: (OFF, FATAL, ERROR, WARN, INFO, DEBUG, TRACE,
ALL). The default is INFO.
grant_read (Optional) This parameter is a list of users/roles that are automatically
granted READ privileges on all external tables processed by this
invocation of RUN_SYNC. All users/roles in the grant_read list are given
READ privileges on all new or already existing external tables that
correspond to entities specified by the synced_objects parameter. The
RUN_SYNC procedure preserves already granted privileges on synced
external tables.
operation_id (Optional) This parameter is used to find the corresponding entry in
USER_LOAD_OPERATIONS for the sync and determine the name of the log
table.
Note: A version of RUN_SYNC that does not return an operation_id is
available so users can query USER_LOAD_OPERATIONS for the latest
sync.
dcat_con_id This parameter is the unique data catalog connection identifier that was
specified when the connection to the data catalog was created. See
DBMS_DCAT SET_DATA_CATALOG_CONN Procedure. This parameter
identifies which connection is used for synchronization and becomes a
part of the derived schema name. See Synchronization Mapping for a
description of how the schema name is derived. The parameter default is
NULL.
Example: synced_objects Parameter for Synchronizing All OCI Data Catalog Data
Assets
The following is an example synced_objects parameter for synchronizing all (Object
Storage) data assets in the Data Catalog.
{"asset_list" : ["*"]}
For example, to run a sync over this scope:
EXEC DBMS_DCAT.RUN_SYNC(synced_objects=>'{"asset_list":["*"]}');
{"asset_list": [
{
"asset_id":"0b320de9-8411-4448-91fb-9e2e7f78fd5f"
},
{
"asset_id":"0b320de9-8411-4448-91fb-9e2e7f74523"
}
]}
{"asset_list": [
{
"asset_id":"0b320de9-8411-4448-91fb-9e2e7f78fd5f",
"folder_list":[
"f1",
"f2"
]
6-336
Chapter 6
Autonomous Database Supplied Package Reference
}
]}
{"asset_list":[
{
"asset_id":"0b320de9-8411-4448-91fb-9e2e7f78fd5f",
"entity_list": [
"entity1",
"entity2"
],
"folder_list": [
"f1",
"f2"
]
}
]}
Example: synced_objects Parameter for Synchronizing All AWS Glue Data Catalog
Databases
The following shows an example synced_objects parameter for synchronizing all databases
in the AWS Glue Data Catalog.
{"database_list":["*"]}
Example: synced_objects Parameter for Synchronizing Two AWS Glue Data Catalog
Databases
The following shows an example synced_objects parameter for synchronizing two AWS Glue
Data Catalog databases.
{"database_list":[
{"database":"tpcdscsv"},
{"database":"tpcdsparquet"} ]}
Example: synced_objects Parameter for Synchronizing Three Tables from an AWS Glue
Data Catalog Database
The following shows an example synced_objects parameter for synchronizing three tables
from an AWS Glue Data Catalog database.
{"database_list":[
{"database":"tpcdsparquet",
"table_list": [ "tpcdsparquet_customer",
"tpcdsparquet_item",
"tpcdsparquet_web_sales" ] } ]}
CREATE_SYNC_JOB Procedure
This procedure creates a scheduler job to invoke RUN_SYNC periodically.
It takes as input the set of objects to be synced, the error semantics, the log level, and
a repeat interval. See DBMS_DCAT RUN_SYNC Procedure for further details on how
synchronization works.
There can only be a single sync job. The CREATE_SYNC_JOB procedure fails if another
job is already specified, unless the force parameter is set to TRUE. If force is set to TRUE
the previous job is dropped.
If a scheduler job attempts to run while another sync is in progress, the scheduler job
fails.
Note:
On April 4, 2022, the sync_option and grant_read parameters were added
to the RUN_SYNC procedure. To ensure correct performance of scheduled
sync jobs created prior to that date, you need to drop and recreate the
scheduled sync jobs. See DBMS_DCAT.DROP_SYNC_JOB Procedure and
DBMS_DCAT.CREATE_SYNC_JOB Procedure.
Syntax
PROCEDURE DBMS_DCAT.CREATE_SYNC_JOB (
synced_objects IN CLOB,
error_semantics IN VARCHAR2 DEFAULT 'SKIP_ERRORS',
log_level IN VARCHAR2 DEFAULT 'INFO',
repeat_interval IN VARCHAR2,
force IN VARCHAR2 DEFAULT 'FALSE',
grant_read IN VARCHAR2 DEFAULT NULL,
sync_option IN VARCHAR2 DEFAULT 'SYNC',
dcat_con_id IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
synced_objects A JSON object specifying the objects to be synced, as described in
the RUN_SYNC procedure.
error_semantics (Optional) Error behavior, as specified for RUN_SYNC. Default is
SKIP_ERRORS.
log_level (Optional) Logging level, as specified for RUN_SYNC. Default is
INFO.
repeat_interval Repeat interval for the job, with the same semantics as the repeat
interval parameter of the DBMS_SCHEDULER.CREATE_JOB
procedure. For details on the repeat_interval, see Overview of
Creating Jobs.
force (Optional) If TRUE, existing sync jobs are deleted first. If FALSE, the
CREATE_SYNC_JOB procedure fails if a sync job already exists.
Default is FALSE.
grant_read (Optional) List of users/roles to be granted READ on the synced
external tables, as described for procedure RUN_SYNC. See
DBMS_DCAT.RUN_SYNC Procedure.
sync_option (Optional) Behavior with respect to entities that have already been
synced through a previous RUN_SYNC operation, as described for
procedure RUN_SYNC. See DBMS_DCAT.RUN_SYNC Procedure.
dcat_con_id This parameter is the unique Data Catalog connection identifier that
was specified when the connection to Data Catalog was created.
See DBMS_DCAT SET_DATA_CATALOG_CONN Procedure. This
parameter identifies which connection is used for synchronization
and becomes a part of the derived schema name. See
Synchronization Mapping for a description of how the schema name
is derived. The parameter default is NULL.
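Example
As an illustration, the following sketch creates a scheduled sync job that synchronizes all
data assets every four hours; the synced_objects value and repeat interval shown are
placeholder choices, not required values.
BEGIN
  DBMS_DCAT.CREATE_SYNC_JOB(
    synced_objects  => '{"asset_list":["*"]}',
    repeat_interval => 'FREQ=HOURLY;INTERVAL=4');
END;
/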
DROP_SYNC_JOB Procedure
This procedure drops an existing sync job for the given unique connection identifier.
Syntax
PROCEDURE DBMS_DCAT.DROP_SYNC_JOB (
dcat_con_id IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
dcat_con_id The unique Data Catalog connection identifier. The default is NULL.
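Example
A minimal sketch that drops the sync job for the default connection:
EXEC DBMS_DCAT.DROP_SYNC_JOB;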
DROP_SYNCED_SCHEMAS Procedure
This procedure drops all previously synchronized schemas for the given unique connection
identifier.
Syntax
PROCEDURE DBMS_DCAT.DROP_SYNCED_SCHEMAS (
dcat_con_id IN VARCHAR2 DEFAULT NULL
);
Parameters
Parameter Description
dcat_con_id The unique Data Catalog connection identifier. The default is NULL.
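Example
A minimal sketch that drops all schemas previously synchronized over the default
connection:
EXEC DBMS_DCAT.DROP_SYNCED_SCHEMAS;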
View Description
ALL_CLOUD_CATALOG_DATABASES View Displays information about OCI Data Catalog data assets and AWS Glue Data Catalog databases
ALL_CLOUD_CATALOG_TABLES View Displays information about data entities for OCI Data Catalogs and tables for AWS Glue Data Catalogs
ALL_DCAT_ASSETS View Lists the data catalog assets that this database is authorized to access
ALL_DCAT_ATTRIBUTES View Lists the data catalog attributes this database is authorized to access
ALL_DCAT_CONNECTIONS View Contains information about the data catalog(s) connected to this instance
ALL_DCAT_ENTITIES View Lists the logical entities this database is authorized to access
ALL_DCAT_FOLDERS View Lists metadata for the Object Storage buckets containing the data files for the Logical Entities
ALL_DCAT_GLOBAL_ACCESSIBLE_CATALOGS View Lists all accessible catalogs across all regions, along with the level of access privileges for each catalog
ALL_DCAT_LOCAL_ACCESSIBLE_CATALOGS View Lists all accessible catalogs in the current region, along with the level of access privileges for each catalog
ALL_GLUE_DATABASES View Lists the AWS Glue Data Catalog databases that the data catalog credential is authorized to access
ALL_GLUE_TABLES View Shows all AWS Glue Data Catalog tables that the data catalog credential is authorized to access
DCAT_ATTRIBUTES View Lists the mapping of logical entity attributes to external table columns
DCAT_ENTITIES View Describes the mapping of logical entities to external tables
DBMS_DCAT$SYNC_LOG View Provides easy access to the log table for the last sync operation executed by the current user
• ALL_CLOUD_CATALOG_DATABASES View
Use view ALL_CLOUD_CATALOG_DATABASES to display information about OCI Data
Catalog data assets and AWS Glue Data Catalog databases.
• ALL_CLOUD_CATALOG_TABLES View
View ALL_CLOUD_CATALOG_TABLES is used to display information about data entities for
OCI Data Catalogs and tables for AWS Glue Data Catalogs.
• ALL_DCAT_ASSETS View
The Data Catalog assets that this database is authorized to access.
• ALL_DCAT_ATTRIBUTES View
The Data Catalog attributes this database is authorized to access.
• ALL_DCAT_CONNECTIONS View
A view that contains information about the data catalog(s) connected to this instance.
• ALL_DCAT_ENTITIES View
The Data Catalog logical entities this database is authorized to access.
• ALL_DCAT_FOLDERS View
Metadata for the Object Storage buckets containing the data files for the Logical Entities.
• ALL_DCAT_GLOBAL_ACCESSIBLE_CATALOGS View
This view lists all accessible catalogs across all regions, along with the level of access
privileges for each catalog.
• ALL_DCAT_LOCAL_ACCESSIBLE_CATALOGS View
This view lists all accessible catalogs in the current region, along with the level of access
privileges for each catalog.
• ALL_GLUE_DATABASES View
The AWS Glue Data Catalog databases that the data catalog credential is authorized to
access.
• ALL_GLUE_TABLES View
This view shows all AWS Glue Data Catalog tables that the data catalog credential is
authorized to access.
• DCAT_ATTRIBUTES View
Lists the mapping of logical entity attributes to external table columns.
• DCAT_ENTITIES View
Describes the mapping of logical entities to external tables.
• DBMS_DCAT$SYNC_LOG View
The DBMS_DCAT$SYNC_LOG view provides easy access to the log table for the last sync
operation executed by the current user.
ALL_CLOUD_CATALOG_DATABASES View
Use view ALL_CLOUD_CATALOG_DATABASES to display information about OCI Data Catalog data
assets and AWS Glue Data Catalog databases.
Column Description
DCAT_CON_ID Unique identifier of the data catalog connection.
Example: CON1
CATALOG_ID Data catalog unique identifier.
OCI Data Catalog example:
ocid1.datacatalog.oc1.ap-mumbai-1.….y35a
AWS Glue Data Catalog example:
579294766787
NAME Name of the data asset (OCI)/ database (AWS Glue).
OCI Data Catalog example:
OBJECT_STORE_AT_ASHBURN
AWS Glue Data Catalog example:
OBJECT_STORE_AT_N_CALIFORNIA
DESCRIPTION Description of the data asset (OCI) / database (AWS Glue).
TIME_CREATED The date and time the data asset (OCI) / database (AWS Glue) was created in
the data catalog.
OCI Data Catalog example:
26-SEP-22 10.56.01.395000 PM +00:00
AWS Glue Data Catalog example:
2022-06-15T09:45:35+01:00
DETAILS JSON document with metadata about each data asset (OCI) / database (AWS
Glue).
OCI Data Catalog example:
{
"catalog-id": "ocid1.datacatalog.oc1.ap-
mumbai-1.amaaa...",
"description": null,
"display-name": "OBJECT_STORE_AT_ASHBURN",
"external-key": "https://swiftobjectstorage.us-
ashburn-1....",
"key": "bc95181c-3ac3-4959-9e5f-4e460d3fb82a",
"lifecycle-state": "ACTIVE",
"time-created": "2022-09-26T22:56:01.395000+00:00",
"type-key": "3ea65bc5-f60d-477a-a591-f063665339f9",
"uri": "/dcat/20190325/dataAssets/
bc95181c-3ac3-4959-9e5f-4e460d3fb82a"
}
AWS Glue Data Catalog example:
{
"Name": "dbmsdcatpoc",
"Parameters": {
"somekey": "somevalue"
},
"CreateTime": "2022-06-15T09:45:35+01:00",
"CreateTableDefaultPermissions": [
{
"Principal": {
"DataLakePrincipalIdentifier":
"IAM_ALLOWED_PRINCIPALS"
},
"Permissions": [
"ALL"
]
}
],
"CatalogId": "579294766787"
}
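Example
A minimal sketch of querying the view; the column list is taken from the table above:
SELECT dcat_con_id, catalog_id, name, time_created
FROM all_cloud_catalog_databases;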
ALL_CLOUD_CATALOG_TABLES View
View ALL_CLOUD_CATALOG_TABLES is used to display information about data entities for OCI
Data Catalogs and tables for AWS Glue Data Catalogs.
Column Description
DCAT_CON_ID Unique identifier of the data catalog connection.
OCI Data Catalog example: CON1
AWS Glue Data Catalog example: CON1
CATALOG_ID Data catalog unique identifier.
OCI Data Catalog example: ocid1.datacatalog.oc1.ap-mumbai-1.….y35a
AWS Glue Data Catalog example: 579294766787
DATABASE_NAME Name of the data asset (OCI)/ database (AWS Glue).
OCI Data Catalog example: OBJECT_STORE_AT_ASHBURN
AWS Glue Data Catalog example: OBJECT_STORE_AT_N_CALIFORNIA
NAME Name of the data entity (OCI) / table (AWS Glue).
OCI Data Catalog example: BIKES_TRIPS
AWS Glue Data Catalog example: BIKES_TRIPS
DESCRIPTION Description of the data entity (OCI) / table (AWS Glue).
OCI Data Catalog example: Table storing bike trips
AWS Glue Data Catalog example: Table storing bike trips
TIME_CREATED The date and time the data entity (OCI) / table (AWS Glue) was created in
the data catalog.
OCI Data Catalog example: 26-SEP-22 10.56.01.395000 PM +00:00
AWS Glue Data Catalog example: 2022-06-15T09:45:35+01:00
TIME_UPDATED Last time a change was made to the data entity (OCI) / table (AWS Glue).
OCI Data Catalog example: 26-SEP-22 10.56.01.395000 PM +00:00
AWS Glue Data Catalog example: 2022-06-15T09:45:35+01:00
DETAILS JSON document with metadata about each data entity (OCI) / table (AWS
Glue).
OCI Data Catalog example:
{
"business-name": null,
"data-asset-key": "bc95181c-3ac3-4959-9e5f-...",
"description": null,
"display-name": "bikes_trips",
"external-key": "LE: https://swiftobjectstorage.us-
ashburn-1.oraclecloud.com/v1/..._trips",
"folder-key": "9c4b542d-d6eb-4b83-bf59-...",
"folder-name": "hive",
"is-logical": true,
"is-partition": false,
"key": "fde30a69-a07c-478a-ab62-...",
"lifecycle-state": "ACTIVE",
"object-storage-url": "https://objectstorage.us-
ashburn-1.oraclecloud.com/n/...",
"path": "OBJECT_STORE_AT_ASHBURN/hive/hive",
"pattern-key": "db21b3f1-1508-4045-aa80-...",
"properties": {
"default": {
"CONTENT-LENGTH": "4310321",
"LAST-MODIFIED": "Fri, 9 Oct 2020 20:16:52 UTC",
"archivedPECount": "0",
"dataEntityExpression": "{logicalEntity:[^/]+}.db/
{logicalEntity:[^/]+}/.*",
"harvestedFile": "bikes.db/trips/
p_start_month=2019-09/000000_0",
"patternName": "bikes_trips"
},
"harvestProps": {
"characterset": "UTF8",
"compression": "none",
"type": "PARQUET"
}
},
"realized-expression": "bikes.db/trips/.*",
"time-created": "2022-09-26T22:56:35.063000+00:00",
"time-updated": "2022-09-26T22:56:35.063000+00:00",
"type-key": "6753c3af-7f88-44b9-be52-1d57bef462fb",
"updated-by-id": "ocid1.user.oc1..r5l3tov7a",
"uri": "/dcat/20190325/dataAssets/
bc95181c-3ac3-4959-9e5f-..."
}
AWS Glue Data Catalog example:
{
"Name": "bikes_trips",
"DatabaseName": "dbmsdcatpoc",
"Owner": "owner",
"CreateTime": "2022-06-23T13:24:20+01:00",
"UpdateTime": "2022-06-23T13:24:20+01:00",
"LastAccessTime": "2022-06-23T13:24:20+01:00",
"Retention": 0,
"StorageDescriptor": {
"Columns": [
{
"Name": "trip_duration",
"Type": "int"
},
{
"Name": "start_month",
"Type": "string"
}, ...
],
"Location": "s3://dbmsdcatpoc/hive/bikes.db/
trips/",
"InputFormat":
"org.apache.hadoop.hive.ql.io.parquet.MapredParquetInput
Format",
"OutputFormat":
"org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutpu
tFormat",
"Compressed": false,
"NumberOfBuckets": -1,
"SerdeInfo":
{ "SerializationLibrary":
"org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveS
erDe",
"Parameters": {
"serialization.format": "1"
}
},
"BucketColumns": [],
"SortColumns": [],
"Parameters": {
"CrawlerSchemaDeserializerVersion": "1.0",
"CrawlerSchemaSerializerVersion": "1.0",
"UPDATED_BY_CRAWLER": "crawler-bikes",
"averageRecordSize": "86",
"classification": "parquet",
"compressionType": "none",
"objectCount": "12",
"recordCount": "404947",
"sizeKey": "35312159",
"typeOfData": "file"
},
"StoredAsSubDirectories": false
},
"PartitionKeys": [
{
"Name": "p_start_month",
"Type": "string"
}
],
"TableType": "EXTERNAL_TABLE",
"Parameters": {
"CrawlerSchemaDeserializerVersion": "1.0",
"CrawlerSchemaSerializerVersion": "1.0",
"UPDATED_BY_CRAWLER": "crawler-bikes",
"averageRecordSize": "86",
"classification": "parquet",
"compressionType": "none",
"objectCount": "12",
"recordCount": "404947",
"sizeKey": "35312159",
"typeOfData": "file"
},
"CreatedBy": "arn:aws:sts::579294766787:assumed-
role/AWSGlueServiceRole-dbmsdcat/AWS-Crawler",
"IsRegisteredWithLakeFormation": false,
"CatalogId": "579294766787",
"VersionId": "0"
}
Example
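A minimal sketch of querying the view; the column list is taken from the table above:
SELECT dcat_con_id, database_name, name, time_updated
FROM all_cloud_catalog_tables;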
ALL_DCAT_ASSETS View
The Data Catalog assets that this database is authorized to access.
ALL_DCAT_ATTRIBUTES View
The Data Catalog attributes this database is authorized to access.
ALL_DCAT_CONNECTIONS View
A view that contains information about the data catalog(s) connected to this instance.
ALL_DCAT_ENTITIES View
The Data Catalog logical entities this database is authorized to access.
ALL_DCAT_FOLDERS View
Metadata for the Object Storage buckets containing the data files for the Logical
Entities.
ALL_DCAT_GLOBAL_ACCESSIBLE_CATALOGS View
This view lists all accessible catalogs across all regions, along with the level of access
privileges for each catalog.
ALL_DCAT_LOCAL_ACCESSIBLE_CATALOGS View
This view lists all accessible catalogs in the current region, along with the level of access
privileges for each catalog.
ALL_GLUE_DATABASES View
The AWS Glue Data Catalog databases that the data catalog credential is authorized
to access.
ALL_GLUE_TABLES View
This view shows all AWS Glue Data Catalog tables that the data catalog credential is
authorized to access.
DCAT_ATTRIBUTES View
Lists the mapping of logical entity attributes to external table columns.
DCAT_ENTITIES View
Describes the mapping of logical entities to external tables.
DBMS_DCAT$SYNC_LOG View
The DBMS_DCAT$SYNC_LOG view provides easy access to the log table for the last sync
operation executed by the current user.
Every call to the RUN_SYNC procedure is logged to a new log table, pointed to by the
LOGFILE_TABLE field of USER_LOAD_OPERATIONS. The log tables are automatically
dropped after 2 days, and users can clear all sync logs using the
DBMS_CLOUD.DELETE_ALL_OPERATIONS procedure where type is DCAT_SYNC.
The DBMS_DCAT$SYNC_LOG view automatically identifies the latest log table. The schema
for the DBMS_DCAT$SYNC_LOG view is described below and the access permissions are
identical to those of the individual log tables. By default READ is granted to the
dbms_dcat role and to the ADMIN user.
DBMS_MAX_STRING_SIZE Package
The DBMS_MAX_STRING_SIZE package provides an interface for checking and changing
the value of the DBMS_MAX_STRING_SIZE initialization parameter.
• CHECK_MAX_STRING_SIZE Function
This function checks if the MAX_STRING_SIZE parameter can be updated to a given
value and returns a list of violations that would prevent the parameter from being
updated.
• MODIFY_MAX_STRING_SIZE Procedure
This procedure updates the value of the MAX_STRING_SIZE parameter to a given
value.
CHECK_MAX_STRING_SIZE Function
This function checks if the MAX_STRING_SIZE parameter can be updated to a given value and
returns a list of violations that would prevent the parameter from being updated.
Syntax
DBMS_MAX_STRING_SIZE.CHECK_MAX_STRING_SIZE(
new_value IN VARCHAR2)
RETURN DBMS_MAX_STRING_SIZE_TBL;
Parameters
Parameter Description
new_value Specifies the new MAX_STRING_SIZE parameter value to be set. The only
valid value is 'STANDARD'.
Usage Notes
If the return list is empty, then there are no violations and the MAX_STRING_SIZE update can
be performed.
Example
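A minimal sketch that lists any violations blocking the change, assuming the returned
collection can be queried as a table; an empty result means the update can proceed:
SELECT * FROM TABLE(DBMS_MAX_STRING_SIZE.CHECK_MAX_STRING_SIZE('STANDARD'));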
MODIFY_MAX_STRING_SIZE Procedure
This procedure updates the value of the MAX_STRING_SIZE parameter to a given value.
Syntax
DBMS_MAX_STRING_SIZE.MODIFY_MAX_STRING_SIZE(
new_value IN VARCHAR2);
Parameters
Parameter Description
new_value Specifies the new MAX_STRING_SIZE parameter value to be set. The
only valid value is: 'STANDARD'.
Usage Notes
• Using DBMS_MAX_STRING_SIZE.MODIFY_MAX_STRING_SIZE is a one-way change that
cannot be reverted. After a database is switched back to the STANDARD style of
supporting a maximum length of 4000 bytes for the VARCHAR2, NVARCHAR2, and RAW
data types, you cannot re-enable EXTENDED data types.
• The ADMIN user is granted EXECUTE privilege, WITH GRANT OPTION, on
DBMS_MAX_STRING_SIZE. Oracle recommends that you do not GRANT EXECUTE on
this package to other users.
• The error ORA-20000 is raised if any object exists that would prevent
MAX_STRING_SIZE from being updated.
Example
The following example changes MAX_STRING_SIZE from EXTENDED to STANDARD.
Before the change, the parameter shows the EXTENDED value:
NAME VALUE
max_string_size EXTENDED
BEGIN
DBMS_MAX_STRING_SIZE.MODIFY_MAX_STRING_SIZE('STANDARD');
END;
/
After the procedure completes, the parameter shows the STANDARD value:
NAME VALUE
max_string_size STANDARD
DBMS_AUTO_PARTITION Package
The DBMS_AUTO_PARTITION package provides administrative routines for managing
automatic partitioning of schemas and tables.
• CONFIGURE Procedure
This procedure configures settings for automatic partitioning in Autonomous
Database.
• VALIDATE_CANDIDATE_TABLE Function
This function checks if the given table is a valid candidate for automatic partitioning in
Autonomous Database.
• RECOMMEND_PARTITION_METHOD Function
This function returns a recommendation ID that can be used with APPLY_RECOMMENDATION
procedure to apply the recommendation, or can be used with
DBA_AUTO_PARTITION_RECOMMENDATIONS view to retrieve details of the recommendations
for automatic partitioning in Autonomous Database.
• APPLY_RECOMMENDATION Procedure
This procedure applies the given recommendation in an Autonomous Database.
• REPORT_ACTIVITY Function
This function returns a report of the automatic partitioning operations executed during a
specific period in an Autonomous Database.
• REPORT_LAST_ACTIVITY Function
This function returns a report of the most recent automatic partitioning operation
executed in an Autonomous Database.
CONFIGURE Procedure
This procedure configures settings for automatic partitioning in Autonomous Database.
Syntax
DBMS_AUTO_PARTITION.CONFIGURE (
PARAMETER_NAME IN VARCHAR2,
PARAMETER_VALUE IN VARCHAR2,
ALLOW IN BOOLEAN DEFAULT TRUE);
Parameters
Parameter Description
PARAMETER_NAME Name of the automatic partitioning configuration parameter to
update. It can have one of the following values:
• AUTO_PARTITION_MODE
• AUTO_PARTITION_SCHEMA
• AUTO_PARTITION_TABLE
• AUTO_PARTITION_REPORT_RETENTION
AUTO_PARTITION_MODE sets the mode of automatic partitioning
operation, and has one of the following values:
• IMPLEMENT: In this mode, automatic partitioning generates a
report and modifies the existing table using the recommended
partition method.
• REPORT ONLY: In this mode, automatic partitioning generates a
report but existing tables are not modified. This is the default
value.
• OFF: In this mode, automatic partitioning is prevented from
generating, considering, or applying recommendations. It does
not disable existing automatically partitioned tables.
AUTO_PARTITION_SCHEMA sets schemas to include or exclude from
using automatic partitioning. Its behavior is controlled by the allow
parameter. The automatic partitioning process manages two
schema lists.
1. Inclusion list is the list of schemas, case-sensitive, that can use
automatic partitioning.
2. Exclusion list is the list of schemas, case-sensitive, that cannot
use automatic partitioning.
Initially, both lists are empty, and all schemas in the database can
use automatic partitioning. If the inclusion list contains one or more
schemas, then only the schemas listed in the inclusion list can use
automatic partitioning. If the inclusion list is empty and the exclusion
list contains one or more schemas, then all schemas use automatic
partitioning except the schemas listed in the exclusion list. If both
lists contain one or more schemas, then all schemas use automatic
partitioning except the schemas listed in the exclusion list.
AUTO_PARTITION_TABLE sets tables to include or exclude from
using auto partitioning. The parameter value is
<schema_name>.<table_name>. The automatic partitioning
process manages two table lists.
1. Inclusion list is the list of tables, case-sensitive, that can use
automatic partitioning.
2. Exclusion list is the list of tables, case-sensitive, that cannot
use automatic partitioning.
Initially, both lists are empty, and all tables in the database can use
automatic partitioning. If the inclusion list contains one or more
tables, then only the tables listed in the inclusion list can use
automatic partitioning. If the inclusion list is empty and the exclusion
list contains one or more tables, then all tables use automatic
partitioning except the tables listed in the exclusion list. If both lists
contain one or more tables, then all tables use automatic
partitioning except the tables listed in the exclusion list. If a table is
not on either list, the schema inclusion and exclusion lists decide if a
table is a candidate table for automatic partitioning. If there is a
conflict between the schema level lists and the table level lists, the
table level lists take precedence.
To remove all tables from inclusion and exclusion lists run:
DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_TABLE', NULL);
Usage Notes
• You can check the current setting for automatic partitioning configuration using the
following SQL:
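A sketch of such a query, assuming the DBA_AUTO_PARTITION_CONFIG dictionary view is
available in your database:
SELECT parameter_name, parameter_value
FROM dba_auto_partition_config;
For example, to generate recommendations and automatically apply them, set the mode to
IMPLEMENT:
EXEC DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'IMPLEMENT');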
VALIDATE_CANDIDATE_TABLE Function
This function checks if the given table is a valid candidate for automatic partitioning in
Autonomous Database.
Valid Candidate
To be a valid candidate, the following tests must pass:
• Table passes inclusion and exclusion tests specified by AUTO_PARTITION_SCHEMA and
AUTO_PARTITION_TABLE configuration parameters.
• Table exists and has up-to-date statistics.
Syntax
DBMS_AUTO_PARTITION.VALIDATE_CANDIDATE_TABLE
( SQLSET_OWNER IN VARCHAR2 DEFAULT 'SYS',
SQLSET_NAME IN VARCHAR2 DEFAULT 'SYS_AUTO_STS',
TABLE_OWNER IN VARCHAR2,
TABLE_NAME IN VARCHAR2)
RETURN VARCHAR2;
Parameters
Parameter Description
SQLSET_OWNER, Name of SQL tuning set representing the workload to be evaluated.
SQLSET_NAME
TABLE_OWNER, Name of a table to validate as a candidate for automatic partitioning.
TABLE_NAME
Usage Notes
• As an example, you can check the validity of a sample table, LINEORDER in schema
TEST, with the following SQL:
SELECT DBMS_AUTO_PARTITION.VALIDATE_CANDIDATE_TABLE
( TABLE_OWNER => 'TEST',
TABLE_NAME => 'LINEORDER')
FROM DUAL;
RECOMMEND_PARTITION_METHOD Function
This function returns a recommendation ID that can be used with the
APPLY_RECOMMENDATION procedure to apply the recommendation, or with the
DBA_AUTO_PARTITION_RECOMMENDATIONS view to retrieve details of the
recommendations for automatic partitioning in Autonomous Database.
Syntax
DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD
( SQLSET_OWNER IN VARCHAR2 DEFAULT 'SYS',
SQLSET_NAME IN VARCHAR2 DEFAULT 'SYS_AUTO_STS',
TABLE_OWNER IN VARCHAR2 DEFAULT NULL,
TABLE_NAME IN VARCHAR2 DEFAULT NULL,
TIME_LIMIT IN INTERVAL DAY TO SECOND DEFAULT INTERVAL '1' DAY,
REPORT_TYPE IN VARCHAR2 DEFAULT 'TEXT',
REPORT_SECTION IN VARCHAR2 DEFAULT 'SUMMARY',
REPORT_LEVEL IN VARCHAR2 DEFAULT 'TYPICAL')
RETURN RAW;
Parameters
Parameter Description
SQLSET_OWNER, Name of SQL tuning set representing the workload to be evaluated.
SQLSET_NAME
TABLE_OWNER, Name of a table to evaluate for automatic partitioning. If NULL, the function
TABLE_NAME chooses candidate tables from the workload.
TIME_LIMIT When the function chooses the tables for which to generate
recommendations (that is, TABLE_OWNER and TABLE_NAME are NULL), this
parameter limits how long the function runs before it stops looking for new
candidate tables to partition. Once the function has started processing a
table, it does not stop mid-table, so the function may run longer than this
parameter specifies. If this parameter is NULL there is no time limit. The
default is 1 day.
REPORT_TYPE Used to generate report for recommended partition method. See
REPORT_ACTIVITY Function for details.
REPORT_SECTION Used to generate persistent report for recommended partition method.
See REPORT_ACTIVITY Function for details.
REPORT_LEVEL Used to generate report for recommended partition method. See
REPORT_ACTIVITY Function for details.
Usage Notes
• The AUTO_PARTITION_MODE controls the actions taken by this function:
– IMPLEMENT: In this mode, automatic partitioning generates a report and modifies the
existing table using the recommended partition method.
– REPORT ONLY: In this mode, automatic partitioning generates a report but existing
tables are not modified. This is the default value.
– OFF: In this mode, automatic partitioning is prevented from producing, considering, or
applying new recommendations. It does not disable existing automatically partitioned
tables.
• Unlike automatic indexing, automatic partitioning does not run periodically as a
background task. Automatic partitioning only runs when you invoke it using the
DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD function.
Return Values
This function returns a recommendation ID that can be used with
DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION to apply the recommendation, or with
the DBA_AUTO_PARTITION_RECOMMENDATIONS view to retrieve details of the
recommendation.
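Example
A minimal sketch that generates a recommendation for the sample TEST.LINEORDER table
used earlier; the ten-minute time limit is an illustrative choice, and SERVEROUTPUT is
assumed to be enabled:
DECLARE
  recommendation_id RAW(16);
BEGIN
  recommendation_id := DBMS_AUTO_PARTITION.RECOMMEND_PARTITION_METHOD(
    table_owner => 'TEST',
    table_name  => 'LINEORDER',
    time_limit  => INTERVAL '10' MINUTE);
  DBMS_OUTPUT.PUT_LINE(RAWTOHEX(recommendation_id));
END;
/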
APPLY_RECOMMENDATION Procedure
This procedure applies the given recommendation in an Autonomous Database.
Syntax
DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION
( RECOMMENDATION_ID IN RAW,
TABLE_OWNER IN VARCHAR2 DEFAULT NULL,
TABLE_NAME IN VARCHAR2 DEFAULT NULL);
Parameters
Parameter Description
RECOMMENDATION_ID Recommendation ID returned from
RECOMMEND_PARTITION_METHOD function or queried from
DBA_AUTO_PARTITION_RECOMMENDATIONS view.
TABLE_OWNER, When a single recommendation ID has recommendations for
TABLE_NAME multiple tables, this optional parameter allows you to control which
tables are partitioned.
• If parameters are NULL, partition all tables recommended in
the given recommendation ID.
• If a table name is given, partition only the named table.
• If either TABLE_OWNER or TABLE_NAME is NOT NULL, they must
both be NOT NULL.
Usage Note:
Regardless of AUTO_PARTITION_MODE, this procedure raises an ORA-20000:
recommendation_id was not found if either there are no accepted recommendations
associated with the RECOMMENDATION_ID, or all accepted recommendations associated
with the RECOMMENDATION_ID have already been applied. The first case applies if
RECOMMENDATION_ID was generated with AUTO_PARTITION_MODE = OFF. The second
case applies if RECOMMENDATION_ID was generated with AUTO_PARTITION_MODE =
IMPLEMENT.
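Example
A minimal sketch; the recommendation ID shown is a placeholder for a value returned by
the RECOMMEND_PARTITION_METHOD function:
BEGIN
  DBMS_AUTO_PARTITION.APPLY_RECOMMENDATION('D28FC3CF09DF1E1DE053D010000AF8F8');
END;
/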
REPORT_ACTIVITY Function
This function returns a report of the automatic partitioning operations executed during a
specific period in an Autonomous Database.
Syntax
DBMS_AUTO_PARTITION.REPORT_ACTIVITY
( ACTIVITY_START IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
ACTIVITY_END IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
TYPE IN VARCHAR2 DEFAULT 'TEXT',
SECTION IN VARCHAR2 DEFAULT 'ALL',
LEVEL IN VARCHAR2 DEFAULT 'TYPICAL')
RETURN CLOB;
Parameters
Parameter Description
ACTIVITY_START Starting time of the automatic partitioning operations included in the report.
If no value is specified, or NULL is specified, the report is generated for the
last automatic partitioning operation that was executed.
ACTIVITY_END Ending time of the automatic partitioning operations included in the report. If
no value is specified, or NULL is specified, the report is generated for the
last automatic partitioning operation that was executed.
TYPE Format of the report. It has one of the following values:
• TEXT (default)
• HTML
• XML
SECTION Sections to include in the report. It has one of the following values:
• SUMMARY - Include only the workload summary in the report.
• ALL - Include all the sections in the report (default).
LEVEL Level of information to include in the report. It has one of the following
values:
• TYPICAL - Include typical automatic partitioning information in the
report (default).
• CHANGED - Include only SQL with changed performance in the report.
• IMPROVED - Include only SQL with improved performance in the
report.
• REGRESSED - Include only SQL with regressed performance in the
report.
• UNCHANGED - Include only SQL with unchanged performance in the
report.
• ALL - Include all automatic partitioning information in the report.
Usage Notes
Returns: A performance analysis report for the workload executed on the database after the
recommendation is applied. This report is not stored persistently with the recommendation.
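Example
A minimal sketch that retrieves the default text report for the last automatic partitioning
operation:
SELECT DBMS_AUTO_PARTITION.REPORT_ACTIVITY() FROM DUAL;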
REPORT_LAST_ACTIVITY Function
This function returns a report of the most recent automatic partitioning operation
executed in an Autonomous Database.
Syntax
DBMS_AUTO_PARTITION.REPORT_LAST_ACTIVITY
( TYPE IN VARCHAR2 DEFAULT 'TEXT',
SECTION IN VARCHAR2 DEFAULT 'ALL',
LEVEL IN VARCHAR2 DEFAULT 'TYPICAL')
RETURN CLOB;
Parameters
Parameter Description
TYPE The output format of the report, see REPORT_ACTIVITY Function
for information.
SECTION The sections included in the report, see REPORT_ACTIVITY
Function for information.
LEVEL The level of information included in the report, see
REPORT_ACTIVITY Function for information.
Usage Notes
Returns: A performance analysis report for the workload executed on the database after the
latest recommendation is applied. This report is not stored persistently with the
recommendation.
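Example
A minimal sketch that retrieves the default text report for the most recent operation:
SELECT DBMS_AUTO_PARTITION.REPORT_LAST_ACTIVITY() FROM DUAL;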
CS_RESOURCE_MANAGER Package
The CS_RESOURCE_MANAGER package provides an interface to list and update consumer
group parameters, and to revert parameters to default values.
• LIST_CURRENT_RULES Function
This function lists the parameter values for each consumer group.
• LIST_DEFAULT_RULES Function
This function returns the default values for all consumer groups.
• REVERT_TO_DEFAULT_VALUES Procedure
This procedure reverts the specified resource manager's plan properties to default
values.
• UPDATE_PLAN_DIRECTIVE Procedure
Use this procedure to update the resource plan for a specified consumer group.
LIST_CURRENT_RULES Function
This function lists the parameter values for each consumer group.
Syntax
CS_RESOURCE_MANAGER.LIST_CURRENT_RULES
RETURN TABLE;
Example
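A minimal sketch; because the function returns a table, one way to query it is with a TABLE
expression:
SELECT * FROM TABLE(CS_RESOURCE_MANAGER.LIST_CURRENT_RULES());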
LIST_DEFAULT_RULES Function
This function returns the default values for all consumer groups.
Syntax
CS_RESOURCE_MANAGER.LIST_DEFAULT_RULES
RETURN TABLE;
Usage Note
• By default the parallel degree policy value is MANUAL for the TPURGENT consumer group.
The CS_RESOURCE_MANAGER.LIST_DEFAULT_RULES function therefore shows no
default value for DEGREE_OF_PARALLELISM for the TPURGENT consumer group.
Example
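A minimal sketch; as with LIST_CURRENT_RULES, the result can be queried with a TABLE
expression:
SELECT * FROM TABLE(CS_RESOURCE_MANAGER.LIST_DEFAULT_RULES());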
REVERT_TO_DEFAULT_VALUES Procedure
This procedure reverts the specified resource manager's plan properties to default
values.
Syntax
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(
consumer_group IN VARCHAR2,
shares IN BOOLEAN DEFAULT FALSE,
concurrency_limit IN BOOLEAN DEFAULT FALSE);
Parameters
Parameter Description
consumer_group Specifies the consumer group to revert.
Depending on the workload, valid values are: HIGH, MEDIUM, LOW,
TP, or TPURGENT.
shares When the value is TRUE, revert shares for the service to the
default value.
concurrency_limit When the value is TRUE, revert the concurrency_limit for the
service to the default value. When you revert the
concurrency_limit, both the concurrency_limit and the
degree_of_parallelism values are set to their default values.
Usage Note
• When the workload type is Data Warehouse, the valid values for consumer_group
are HIGH, MEDIUM, or LOW.
Examples
BEGIN
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(
consumer_group => 'MEDIUM',
concurrency_limit => TRUE);
END;
/
BEGIN
CS_RESOURCE_MANAGER.REVERT_TO_DEFAULT_VALUES(
consumer_group => 'HIGH',
shares => TRUE);
END;
/
UPDATE_PLAN_DIRECTIVE Procedure
Use this procedure to update the resource plan for a specified consumer group.
Syntax
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
consumer_group IN VARCHAR2,
io_megabytes_limit IN NUMBER DEFAULT NULL,
elapsed_time_limit IN NUMBER DEFAULT NULL,
shares IN NUMBER DEFAULT NULL,
concurrency_limit IN NUMBER DEFAULT NULL);
Parameters
Parameter Description
consumer_group Specifies the consumer group to update.
Depending on the workload, valid values are: HIGH, MEDIUM, LOW, TP,
or TPURGENT.
io_megabytes_limit Specifies the maximum megabytes of I/O that a SQL operation can
issue.
Specify a NULL value to clear the limit.
elapsed_time_limit Specifies the maximum time in seconds that a SQL operation can run.
Specify a NULL value to clear the limit.
shares Specifies the shares value. A higher number of shares, relative to other
consumer groups, increases the consumer group's CPU and I/O
priority.
concurrency_limit Specifies the maximum number of concurrent SQL statements that can
be executed.
This parameter is only valid with the MEDIUM consumer group.
Usage Notes
• When a SQL statement in the specified service runs more than the specified runtime limit
(elapsed_time_limit) or does more I/O than the specified amount
(io_megabytes_limit), then the SQL statement will be terminated.
• When the workload type is Data Warehouse, the valid values for consumer_group are
HIGH, MEDIUM, or LOW.
• When the concurrency_limit parameter is specified, the only valid value for
consumer_group is MEDIUM.
Examples
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
consumer_group => 'HIGH',
shares => 8);
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
consumer_group => 'MEDIUM',
shares => 2);
END;
/
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
consumer_group => 'HIGH',
io_megabytes_limit => null,
elapsed_time_limit => null);
END;
/
BEGIN
CS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
consumer_group => 'MEDIUM',
concurrency_limit => 2);
END;
/
CS_SESSION Package
The CS_SESSION package provides an interface to switch the database service and
consumer group of the existing session.
When a connection is established with an Autonomous Database, that session is
assigned a consumer group. For example, a session could be created using a
connection to the LOW service of an Autonomous Database. You might want to switch
the consumer group, for example from LOW to HIGH. The CS_SESSION package
provides an API for switching.
• See Database Service Names for Autonomous Data Warehouse for more
information.
• See Database Service Names for Autonomous Transaction Processing and
Autonomous JSON Database for more information.
The consumer groups determine concurrency and degree of parallelism (DOP). For
example, statements on a connection established to the LOW database service run
serially. Statements on a connection established to the HIGH database service run in
parallel. If you have a workload that requires serial statement processing with
switching to a HIGH consumer group for a few statements, the CS_SESSION package
enables you to switch.
• SWITCH_SERVICE Procedure
This procedure switches the database service and consumer group of the current
session.
SWITCH_SERVICE Procedure
This procedure switches the database service and consumer group of the current
session.
Syntax
CS_SESSION.SWITCH_SERVICE(service_name IN varchar2);
Parameters
Parameter Description
service_name Specifies the consumer group to update.
Depending on the workload, valid values are:
HIGH, MEDIUM, LOW, TP, or TPURGENT.
Usage Notes
When called, the procedure switches the session to the specified service and the related
consumer group. If the specified service does not exist in that database, an error message is
provided. For example, if you specify 'TP' as the service name on a data warehouse
workload, the error indicates it's not a valid service name. No error is reported if the current
service and the specified service are identical.
The procedure does not reset session attributes. Anything the user set for their session
before calling this procedure will continue as-is. For example, if a session parameter was
modified and then later the session switched to a different service, the parameter value will
stay the same.
Example
BEGIN
CS_SESSION.SWITCH_SERVICE('HIGH');
END;
/
Error Messages
The following table describes exceptions for CS_SESSION.
DBMS_SHARE Package
The DBMS_SHARE subprograms and views are used to share local data with external
systems.
• Summary of Share Producer Subprograms
This table lists the DBMS_SHARE package procedures and functions used to produce
shares for recipients.
• Summary of Share Consumer Subprograms
This table lists the DBMS_SHARE package procedures and functions used to
consume shares.
• Summary of Share Producer Views
This table lists the share producer views for the DBMS_SHARE package.
• Summary of Share Consumer Views
This table lists the DBMS_SHARE package views.
• DBMS_SHARE Constants
These constants are used by the DBMS_SHARE package.
Subprogram Description
ADD_TO_SHARE Procedure Add a table or view to a share.
ASSERT_SHAREABLE_OBJECT Procedure Return without error, if the object exists and can be shared.
ASSERT_SHARING_ID Procedure Run basic validation checks against a sharing id and return one in canonical form.
CAN_CREATE_SHARE Function This function checks to see if the current schema can create shares.
CAN_CREATE_SHARE_RECIPIENT Function This function checks to see if the current schema can create share recipients.
CLEAR_RECIPIENT_EVENTS Procedure Clear events from the share recipient event log.
CLEAR_SHARE_EVENTS Procedure Clear events from the share event log.
CREATE_BEARER_TOKEN_CREDENTIAL Procedure Create a credential suitable for use with delta share providers.
CREATE_CLOUD_STORAGE_LINK Procedure Create a named cloud storage URI link.
CREATE_OR_REPLACE_CLOUD_STORAGE_LINK Procedure Create or replace a named cloud storage URI link.
CREATE_OR_REPLACE_SHARE_RECIPIENT Procedure Create or replace a share recipient.
CREATE_SHARE Procedure Create a named share object.
CREATE_SHARE_RECIPIENT Procedure Create a new share recipient.
DROP_CLOUD_STORAGE_LINK Procedure Drop a cloud storage link.
DROP_RECIPIENT Procedure Drop a recipient.
DROP_SHARE Procedure Drop a share and all of its contents.
DROP_SHARE_LINK_VIEW Procedure Drop a view that was created by the CREATE_SHARE_LINK_VIEW procedure.
DROP_SHARE_VERSION Procedure Drop a single share version.
DROP_SHARE_VERSIONS Procedure Drop a range of share versions.
DROP_UNUSED_SHARE_VERSIONS Procedure Drop any share version that is not currently in use.
ENABLE_SCHEMA Procedure Enable or disable a schema for sharing.
GET_ACTIVATION_LINK Function Generate the link that gets put into emails to the authorized recipient.
GET_PUBLISHED_IDENTITY Function Get data about the current user that was set by SET_PUBLISHED_IDENTITY.
GET_RECIPIENT_PROPERTY Function Return the value of a property for a recipient.
GET_SHARE_PROPERTY Function Get the property value of an existing share.
GET_SHARE_TABLE_PROPERTY Function Get the property value of an existing share table.
GRANT_TO_RECIPIENT Procedure Grant access on a share to a specific recipient.
POPULATE_SHARE_PROFILE Procedure Generate a delta profile for a recipient.
PUBLISH_SHARE Procedure Publish a share and return immediately.
PUBLISH_SHARE_WAIT Procedure Publish a share and wait until the background job is complete.
PURGE_DETACHED_FILES Procedure Delete or forget parquet files that have become detached from their shares.
REMOVE_FROM_SHARE Procedure Remove a table or view from a share.
RENAME_RECIPIENT Procedure Rename a recipient.
RENAME_SHARE Procedure Rename a share.
RENAME_SHARE_LINK Procedure Rename a registered share link.
RENAME_SHARE_SCHEMA Procedure Rename a share schema.
RENAME_SHARE_TABLE Procedure Rename a share table.
REVOKE_FROM_RECIPIENT Procedure Revoke access on a share from a specific recipient.
SET_CURRENT_SHARE_VERSION Procedure Change the current version of a share.
SET_PUBLISHED_IDENTITY Procedure Set data about the current user that will be supplied to recipients of published ORACLE shares.
SET_RECIPIENT_LOG_LEVEL Procedure Change the log level for an existing share recipient.
SET_SHARE_LOG_LEVEL Procedure Change the log level for an existing share.
SET_STORAGE_CREDENTIAL Procedure Set the access credential name for the given storage.
STOP_JOB Procedure Stop a running share job.
UNPUBLISH_SHARE Procedure Unpublish a share.
UPDATE_DEFAULT_RECIPIENT_PROPERTY Procedure Update the default recipient property values.
UPDATE_DEFAULT_SHARE_PROPERTY Procedure Update the default share property values.
UPDATE_RECIPIENT_PROPERTY Procedure Update a property of an existing recipient.
UPDATE_SHARE_JOB_PROPERTY Procedure Modify properties of a running share job.
UPDATE_SHARE_PROPERTY Procedure Update a property of an existing share.
UPDATE_SHARE_TABLE_PROPERTY Procedure Update the property value of an existing share table.
VALIDATE_CREDENTIAL Function Validate a credential name, converting it to canonical form first if required.
VALIDATE_SHARE_STORAGE Procedure Check to see if the given storage is suitable for versioned shares.
WAIT_FOR_JOB Procedure This procedure waits until the specified share job is complete.
• ADD_TO_SHARE Procedure
Add a table or view to a share. The object becomes visible to any external user who has
been granted access to the share.
• ASSERT_SHAREABLE_OBJECT Procedure
Return without error, if the object exists and can be shared.
• ASSERT_SHARING_ID Procedure
Run basic validation checks against a sharing id and return one in canonical form. An
exception is raised if the id is obviously invalid.
• CAN_CREATE_SHARE Function
This function checks to see if the current schema can create shares. If shares can be
created, 1 is returned, and 0 otherwise.
• CAN_CREATE_SHARE_RECIPIENT Function
This function checks to see if the current schema can create share recipients. If shares
can be created a 1 is returned, and 0 otherwise.
• CLEAR_RECIPIENT_EVENTS Procedure
Clear events from the share recipient event log.
• CLEAR_SHARE_EVENTS Procedure
Clear events from the share event log.
• CREATE_BEARER_TOKEN_CREDENTIAL Procedure
Create a credential suitable for use with delta share providers.
• CREATE_CLOUD_STORAGE_LINK Procedure
Create a named cloud storage URI link. A cloud storage link is a named association
between an OCI bucket URI, and a local credential name.
• CREATE_OR_REPLACE_CLOUD_STORAGE_LINK Procedure
Create or replace a named cloud storage URI. A cloud storage link is a named
association between an OCI bucket URI, and a local credential name.
• CREATE_OR_REPLACE_SHARE_RECIPIENT Procedure
Create or replace a share recipient. You must provide at least an email address or
sharing id.
• CREATE_SHARE Procedure
Create a named share object.
• CREATE_SHARE_RECIPIENT Procedure
Create a new share recipient.
• DROP_CLOUD_STORAGE_LINK Procedure
Drop a cloud storage link.
• DROP_RECIPIENT Procedure
Drop a recipient. All access to the recipient will be revoked.
• DROP_SHARE Procedure
Drop a share and all of its contents. Future access to the share by consumers will end.
• DROP_SHARE_LINK_VIEW Procedure
Drop a view that was created by the CREATE_SHARE_LINK_VIEW procedure.
• DROP_SHARE_VERSION Procedure
Drop a single share version. Note that you cannot drop the current version.
• DROP_SHARE_VERSIONS Procedure
Drop a range of share versions. Note that you cannot drop the current version
using this procedure.
• DROP_UNUSED_SHARE_VERSIONS Procedure
Drop any share version that is not currently in use.
• ENABLE_SCHEMA Procedure
Enable or disable a schema for sharing. This procedure must be run by the
ADMIN user.
• GET_ACTIVATION_LINK Function
Generate the link that gets put into emails to the authorized recipient. This
activation link leads to the download page, where the recipient clicks a button to
get the delta profile.
• GET_PUBLISHED_IDENTITY Function
Get data about the current user that was set by SET_PUBLISHED_IDENTITY.
• GET_RECIPIENT_PROPERTY Function
Return the value of a property for a recipient.
• GET_SHARE_PROPERTY Function
Get the property value of an existing share.
• GET_SHARE_TABLE_PROPERTY Function
Get the property value of an existing share table.
• GRANT_TO_RECIPIENT Procedure
Grant access on a share to a specific recipient. The share and recipient must both
belong to the same schema.
• POPULATE_SHARE_PROFILE Procedure
Generate a delta profile for a recipient. You could print this to the screen or upload
it somewhere. For example, to an object bucket using DBMS_CLOUD.EXPORT_DATA.
• PUBLISH_SHARE Procedure
Publish a share and return immediately.
• PUBLISH_SHARE_WAIT Procedure
Publish a share and wait until the background job is complete. The publication
continues even if the call is interrupted.
• PURGE_DETACHED_FILES Procedure
Delete or forget parquet files that have become detached from their shares.
• REMOVE_FROM_SHARE Procedure
Remove a table or view from a share.
• RENAME_RECIPIENT Procedure
Rename a recipient. This procedure only changes the local name of the recipient.
The external definition of the recipient, for example the name of the OAUTH user
or sharing id, is not changed.
• RENAME_SHARE Procedure
Rename a share. Take care with this procedure since the change affects any existing
consumers whose access is based on the previous name. Consumers are not notified
directly or updated.
• RENAME_SHARE_LINK Procedure
Rename a registered share link.
• RENAME_SHARE_SCHEMA Procedure
Rename a share schema.
• RENAME_SHARE_TABLE Procedure
Rename a share table.
• REVOKE_FROM_RECIPIENT Procedure
Revoke access on a share from a specific recipient.
• SET_CURRENT_SHARE_VERSION Procedure
Change the current version of a share.
• SET_PUBLISHED_IDENTITY Procedure
Set data about the current user that will be supplied to recipients of published ORACLE
shares.
• SET_RECIPIENT_LOG_LEVEL Procedure
Change the log level for an existing share recipient.
• SET_SHARE_LOG_LEVEL Procedure
Change the log level for an existing share.
• SET_STORAGE_CREDENTIAL Procedure
Set the credential name used by the current user when it attempts to access the given
storage.
• STOP_JOB Procedure
Attempt to stop a running share job. The procedure should return quickly, but it may take
some time for the associated job to stop.
• UNPUBLISH_SHARE Procedure
Unpublish a share.
• UPDATE_DEFAULT_RECIPIENT_PROPERTY Procedure
Update the default recipient property values. This procedure requires the user to have
admin privileges.
• UPDATE_DEFAULT_SHARE_PROPERTY Procedure
Update the default share property values.
• UPDATE_RECIPIENT_PROPERTY Procedure
Update a property of an existing recipient.
• UPDATE_SHARE_JOB_PROPERTY Procedure
Modify properties of a running share job. The procedure should return quickly, but it may
take some time for the changes to take effect.
• UPDATE_SHARE_PROPERTY Procedure
Update a property of an existing share.
• UPDATE_SHARE_TABLE_PROPERTY Procedure
Update the property value of an existing share table.
• VALIDATE_CREDENTIAL Function
Validate a credential name, converting it to canonical form first if required.
• VALIDATE_SHARE_STORAGE Procedure
Check to see if the given storage is suitable for versioned shares.
• WAIT_FOR_JOB Procedure
This procedure waits until the specified share job is complete.
ADD_TO_SHARE Procedure
Add a table or view to a share. The object becomes visible to any external user who
has been granted access to the share.
Syntax
PROCEDURE ADD_TO_SHARE
(
share_name IN VARCHAR2,
table_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
share_table_name IN VARCHAR2 := NULL,
share_schema_name IN VARCHAR2 := NULL,
object_metadata IN SYS.JSON_OBJECT_T := NULL,
replace_existing IN BOOLEAN := FALSE,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of an existing share to which the object is granted.
table_name The name of the entity to share (for example, a table or view name).
owner The owner of the entity to share. The default is to current schema.
share_table_name The externally visible name of the table. By default, this is the
uppercased table_name.
share_schema_name The externally visible schema where the table will be placed. By
default this is the uppercased table owner. The schema is created
automatically if it does not already exist.
object_metadata Optional metadata to associate with the shared entity.
replace_existing If TRUE, and this share_table_name already exists, the existing
table_name is dropped from the share and replaced with this
table_name.
If FALSE, and this share_table_name already exists, an exception
is raised indicating the share table is already used.
share_owner The owner of the share.
auto_commit If TRUE, this procedure call commits changes that are not visible
externally until the commit takes place. The default value is FALSE,
which means that the user must COMMIT after running this call in
order to make the change visible.
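Example
A minimal sketch, where MY_SHARE and MY_TABLE are placeholder names for an existing
share and a local table:
BEGIN
  DBMS_SHARE.ADD_TO_SHARE(
    share_name  => 'MY_SHARE',
    table_name  => 'MY_TABLE',
    auto_commit => TRUE);
END;
/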
ASSERT_SHAREABLE_OBJECT Procedure
Return without error, if the object exists and can be shared.
Syntax
PROCEDURE ASSERT_SHAREABLE_OBJECT
(
object_name IN VARCHAR2,
object_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
object_name The name of the object.
object_owner The owner of the object. Defaults to current schema.
ASSERT_SHARING_ID Procedure
Run basic validation checks against a sharing id and return one in canonical form. An
exception is raised if the id is obviously invalid.
Syntax
PROCEDURE ASSERT_SHARING_ID
(
sharing_id IN OUT NOCOPY VARCHAR2,
out_sharing_id_type IN OUT NOCOPY VARCHAR2
);
Parameters
Parameter Description
sharing_id The id to check.
out_sharing_id_type The type of the id, if valid. For example, TENANCY or DATABASE.
CAN_CREATE_SHARE Function
This function checks to see if the current schema can create shares. If shares can be
created, 1 is returned, and 0 otherwise.
Syntax
FUNCTION CAN_CREATE_SHARE
RETURN NUMBER;
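Example
A minimal sketch that queries the function from SQL:
SELECT DBMS_SHARE.CAN_CREATE_SHARE FROM DUAL;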
CAN_CREATE_SHARE_RECIPIENT Function
This function checks to see if the current schema can create share recipients. If share
recipients can be created, 1 is returned, and 0 otherwise.
Syntax
FUNCTION CAN_CREATE_SHARE_RECIPIENT
RETURN NUMBER;
CLEAR_RECIPIENT_EVENTS Procedure
Clear events from the share recipient event log.
Syntax
PROCEDURE CLEAR_RECIPIENT_EVENTS
(
recipient_name IN VARCHAR2,
from_time IN TIMESTAMP WITH TIME ZONE := NULL,
to_time IN TIMESTAMP WITH TIME ZONE := NULL,
recipient_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
recipient_name The local name of the share recipient.
from_time Earliest time for events that should be cleared or NULL.
to_time Latest time for events that should be cleared or NULL.
recipient_owner The schema that owns the recipient.
CLEAR_SHARE_EVENTS Procedure
Clear events from the share event log.
Syntax
PROCEDURE CLEAR_SHARE_EVENTS
(
share_name IN VARCHAR2,
from_time IN TIMESTAMP WITH TIME ZONE := NULL,
to_time IN TIMESTAMP WITH TIME ZONE := NULL,
share_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_name The name of the share.
from_time Earliest time for events that should be cleared or NULL.
to_time Latest time for events that should be cleared or NULL.
share_owner The schema that owns the share.
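Example
A minimal sketch that clears all logged events for a share named MY_SHARE (a placeholder
name):
BEGIN
  DBMS_SHARE.CLEAR_SHARE_EVENTS(share_name => 'MY_SHARE');
END;
/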
CREATE_BEARER_TOKEN_CREDENTIAL Procedure
Create a credential suitable for use with delta share providers.
This is similar to the CREATE_CREDENTIALS call, but it takes explicit values instead of a delta
sharing profile. See CREATE_CREDENTIALS Procedure and Function for further
information.
Syntax
PROCEDURE CREATE_BEARER_TOKEN_CREDENTIAL
(
credential_name IN VARCHAR2,
bearer_token IN VARCHAR2 := NULL,
token_endpoint IN VARCHAR2 := NULL,
client_id IN VARCHAR2 := NULL,
client_secret IN VARCHAR2 := NULL,
token_refresh_rate IN PLS_INTEGER := 3600
);
Parameters
Parameter Description
credential_name The name of the new credential.
bearer_token The bearer token, if known.
token_endpoint The endpoint to call to get a new token.
client_id The username to send to the token_endpoint.
client_secret The password to send to the token_endpoint.
token_refresh_rate An optional refresh time, in seconds.
Examples
SQL> exec
dbms_share.create_bearer_token_credential('MY_FIXED_CREDENTIAL',
'FF42322D27D4C2DEE05392644664351E')
PL/SQL procedure successfully completed.
SQL> select username from user_credentials where credential_name =
'MY_FIXED_CREDENTIAL';
USERNAME
-----------------------------------------------------------------------
-------------------------------------------------
BEARER_TOKEN
SQL> BEGIN
2 dbms_share.create_bearer_token_credential(
3 credential_name=>'MY_RENEWABLE_CREDENTIAL',
4 token_endpoint=>'https://myserver/ords/share_provider/oauth/
token',
5 client_id=>'VXGQ_44s6qJ-K4WHUNM2yQ..',
6 client_secret=>'y9ddppgwEmZl7adDHFQndw..');
7 END;
8 /
PL/SQL procedure successfully completed.
SQL> select credential_name, username from user_credentials where
credential_name LIKE 'MY_RENEWABLE_CREDENTIAL%';
CREDENTIAL_NAME USERNAME
------------------------------------------
-------------------------------------
MY_RENEWABLE_CREDENTIAL BEARER_TOKEN
MY_RENEWABLE_CREDENTIAL$TOKEN_REFRESH_CRED VXGQ_44s6qJ-K4WHUNM2yQ..
CREATE_CLOUD_STORAGE_LINK Procedure
Create a named cloud storage URI link. A cloud storage link is a named association
between an OCI bucket URI, and a local credential name.
Note:
Use the SET_STORAGE_CREDENTIAL Procedure to add a credential to
the storage link.
Syntax
PROCEDURE CREATE_CLOUD_STORAGE_LINK
(
storage_link_name IN VARCHAR2,
uri IN VARCHAR2,
owner IN VARCHAR2 := NULL,
metadata IN SYS.JSON_OBJECT_T := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
storage_link_name The name of the cloud storage link. The name of the link should follow
standard Oracle naming conventions.
uri The URI of the storage bucket. The URI should be of the form https://
objectstorage.region.oraclecloud.com/n/namespace-
string/b/bucket/o
owner The owner of the storage link. Leave as NULL for the current user.
metadata (Optional) A JSON metadata document containing additional information.
auto_commit The default is TRUE. If TRUE, this transaction is committed. If FALSE,
the user must commit the transaction. Changes are not visible until the
commit takes place.
Example
In this example, a cloud storage link named MY_SHARE_STORAGE is created on the given URL.
SQL> BEGIN
2 dbms_share.create_cloud_storage_link(
3 'MY_SHARE_STORAGE',
4 'https://objectstorage.../n/abcdef/b/my_bucket/o' );
5 END;
6 /
Querying the storage links then shows the new link:
STORAGE_LINK_NAME
--------------------------------------------------------------------------------
MY_SHARE_STORAGE
CREATE_OR_REPLACE_CLOUD_STORAGE_LINK Procedure
Create or replace a named cloud storage URI link. A cloud storage link is a named
association between an OCI bucket URI and a local credential name.
Note:
Use the SET_STORAGE_CREDENTIAL Procedure to add a credential to
the storage link.
Syntax
PROCEDURE CREATE_OR_REPLACE_CLOUD_STORAGE_LINK
(
storage_link_name IN VARCHAR2,
uri IN VARCHAR2,
owner IN VARCHAR2 := NULL,
metadata IN SYS.JSON_OBJECT_T := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
storage_link_name The name of the cloud storage link. The name of the link should
follow standard Oracle naming conventions. See Database Object
Names and Qualifiers
uri The URI of the storage bucket. The URI should be of the form
https://objectstorage.region.oraclecloud.com/n/
namespace-string/b/bucket/o
owner The owner of the storage link. Defaults to the current schema.
metadata An optional JSON metadata document containing additional
information.
auto_commit If TRUE, the changes automatically commit after creating the link.
The default is TRUE.
CREATE_OR_REPLACE_SHARE_RECIPIENT Procedure
Create or replace a share recipient. You must provide at least an email address or
sharing id.
Syntax
PROCEDURE CREATE_OR_REPLACE_SHARE_RECIPIENT
(
recipient_name IN VARCHAR2,
description IN VARCHAR2 := NULL,
recipient_owner IN VARCHAR2 := NULL,
email IN VARCHAR2 := NULL,
sharing_id IN VARCHAR2 := NULL
);
Parameters
Parameter Description
recipient_name The local name of the share recipient. Some names are not allowed (e.g.
MY_TENANCY).
description A description of the recipient.
recipient_owner The schema that owns the recipient.
email An email that will be registered for the OAUTH user.
sharing_id The sharing id of the recipient, from GET_SHARING_ID Function.
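Example
The following is a minimal sketch; the recipient name and email address are illustrative:
BEGIN
  dbms_share.create_or_replace_share_recipient(
    recipient_name => 'NEW_RECIPIENT',
    description    => 'Example delta share recipient',
    email          => 'recipient@example.com');  -- illustrative address
END;
/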
CREATE_SHARE Procedure
Create a named share object.
Syntax
PROCEDURE CREATE_SHARE
(
share_name IN VARCHAR2,
share_type IN VARCHAR2 := SHARE_TYPE_VERSIONED,
storage_link_name IN VARCHAR2 := NULL,
storage_link_owner IN VARCHAR2 := NULL,
description IN VARCHAR2 := NULL,
public_description IN VARCHAR2 := NULL,
configuration IN SYS.JSON_OBJECT_T := NULL,
force_create IN BOOLEAN := FALSE,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := FALSE,
log_level IN PLS_INTEGER := LOG_LEVEL_BASIC,
run_storage_tests IN BOOLEAN := TRUE
);
Parameters
Parameter Description
share_name The name of the share. This name is uppercased since delta shares are
case insensitive. The name follows standard Oracle conventions, so it
must be 128 characters or fewer and must be double quoted if it is not a
simple identifier. The only difference is that it will be uppercased even if it
is double quoted.
share_type The type of share. For information on constants used for this parameter,
see descriptions for Share Types in DBMS_SHARE Constants.
storage_link_name The name of the cloud storage link where the objects are created. The
user must have read/write access to this storage and have the ability to
create pre-authenticated requests on the storage. The parameter is
required for versioned shares, and optional for current shares.
storage_link_owner The owner of the cloud storage link where objects are created.
description A local description for the share.
Parameter Description
public_description A public description for the share.
configuration A configuration document that defines how objects are created.
force_create Set force_create to TRUE to redefine the share if it exists.
share_owner The owner of the share.
auto_commit If TRUE, this procedure call commits changes that are not visible
externally until the commit takes place. The default value is FALSE, which
means that the user must COMMIT after running this call in order to
make the change visible.
log_level Event logging level. This controls the amount of detail recorded in the
ALL_SHARE_EVENTS View. For information on constants used for this
parameter, see descriptions for Log Level in DBMS_SHARE Constants.
run_storage_tests If this parameter is TRUE then DBMS_SHARE runs tests to verify that the
specified share storage link has the correct privileges.
If this parameter is FALSE, then the procedure does not run any checks
at creation time. This may lead to errors later, during publication or
consumption of the share.
Oracle recommends that you specify TRUE for this parameter.
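Example
As an illustrative sketch, the following block creates a versioned share over the
MY_SHARE_STORAGE cloud storage link created earlier, then commits because auto_commit
defaults to FALSE; the share name is an example:
BEGIN
  dbms_share.create_share(
    share_name        => 'MY_SHARE',          -- illustrative share name
    storage_link_name => 'MY_SHARE_STORAGE');
  COMMIT;                                     -- required since auto_commit is FALSE
END;
/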
CREATE_SHARE_RECIPIENT Procedure
Create a new share recipient.
Syntax
PROCEDURE CREATE_SHARE_RECIPIENT
(
recipient_name IN VARCHAR2,
description IN VARCHAR2 := NULL,
recipient_owner IN VARCHAR2 := NULL,
email IN VARCHAR2 := NULL,
sharing_id IN VARCHAR2 := NULL
);
Parameters
Parameter Description
recipient_name The local name of the share recipient. Some names are not
allowed, for example: MY_TENANCY.
description A description of the recipient.
recipient_owner The schema that owns the recipient.
email An email that will be registered for the OAUTH user. You must
provide at least one of email or sharing id.
sharing_id The sharing id of the recipient from GET_SHARING_ID Function.
You must provide at least one of email or sharing id.
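Example
A minimal sketch that creates the NEW_RECIPIENT recipient used in the
GET_ACTIVATION_LINK example later in this section; the email address is illustrative:
BEGIN
  dbms_share.create_share_recipient(
    recipient_name => 'NEW_RECIPIENT',
    email          => 'recipient@example.com');  -- illustrative address
END;
/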
DROP_CLOUD_STORAGE_LINK Procedure
Drop a cloud storage link.
Syntax
PROCEDURE DROP_CLOUD_STORAGE_LINK
(
storage_link_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
storage_link_name The name of the cloud storage link to drop.
owner The owner of the cloud storage link. Leave as NULL for the current user.
auto_commit If TRUE, the code automatically commits after dropping the link. The
default is TRUE.
DROP_RECIPIENT Procedure
Drop a recipient. All access to the recipient will be revoked.
Syntax
PROCEDURE DROP_RECIPIENT
(
recipient_name IN VARCHAR2,
owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
recipient_name The name of the share recipient.
owner The schema that defines the recipient.
DROP_SHARE Procedure
Drop a share and all of its contents. Future access to the share by consumers will end.
Syntax
PROCEDURE DROP_SHARE
(
share_name IN VARCHAR2,
share_owner IN VARCHAR2 := NULL,
destroy_objects IN BOOLEAN := TRUE
);
Parameters
Parameter Description
share_name The name of the share to drop.
share_owner The owner of the share to drop.
destroy_objects If TRUE, delete all objects created on behalf of the share. The
default is TRUE.
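Example
Assuming a share named MY_SHARE exists, the following sketch drops it together with the
objects created on its behalf:
BEGIN
  dbms_share.drop_share(
    share_name      => 'MY_SHARE',   -- illustrative share name
    destroy_objects => TRUE);
END;
/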
DROP_SHARE_LINK_VIEW Procedure
Drop a view that was created by the CREATE_SHARE_LINK_VIEW procedure.
Syntax
PROCEDURE DROP_SHARE_LINK_VIEW
(
view_name IN VARCHAR2,
view_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
view_name The name of the new view.
view_owner The view owner. The default is the current schema.
DROP_SHARE_VERSION Procedure
Drop a single share version. Note that you cannot drop the current version.
Syntax
PROCEDURE DROP_SHARE_VERSION
(
share_name IN VARCHAR2,
share_version IN NUMBER,
destroy_objects IN BOOLEAN := TRUE,
force_drop IN BOOLEAN := FALSE,
share_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_name The name of the share.
share_version The version to drop. You cannot drop the current version.
destroy_objects Destroy any associated object in cloud storage, if applicable.
force_drop Drop the share version even if there is an outstanding PAR file on
the version.
share_owner The owner of the share.
DROP_SHARE_VERSIONS Procedure
Drop a range of share versions. Note that you cannot drop the current version using this
procedure.
Syntax
PROCEDURE DROP_SHARE_VERSIONS
(
share_name IN VARCHAR2,
from_share_version IN NUMBER,
to_share_version IN NUMBER,
destroy_objects IN BOOLEAN := TRUE,
force_drop IN BOOLEAN := FALSE,
share_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_name The name of the share.
from_share_version The lowest version to drop.
to_share_version The highest version to drop.
destroy_objects Destroy any associated object in cloud storage, if applicable.
force_drop Drop the share version even if there is an outstanding PAR file on the
version.
share_owner The owner of the share.
DROP_UNUSED_SHARE_VERSIONS Procedure
Drop any share version that is not currently in use.
Syntax
PROCEDURE DROP_UNUSED_SHARE_VERSIONS
(
share_name IN VARCHAR2,
destroy_objects IN BOOLEAN := TRUE,
share_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_name The name of the share.
destroy_objects Destroy any associated object in cloud storage, if applicable.
share_owner The owner of the share.
ENABLE_SCHEMA Procedure
Enable or disable a schema for sharing. This procedure must be run by the ADMIN
user.
Users can consume delta shares without being enabled with this procedure, but they
cannot create or publish shares. Sharing is disabled by default for all schemas,
including ADMIN.
Syntax
PROCEDURE ENABLE_SCHEMA
(
schema_name IN VARCHAR2,
enabled IN BOOLEAN := TRUE,
privileges IN PLS_INTEGER := NULL
);
Parameters
Parameter Description
schema_name The schema to enable or disable.
enabled TRUE to enable, FALSE to disable.
privileges The privileges argument is a bitmap. If you leave the argument as
NULL, it defaults to PRIV_CREATE_SHARE + PRIV_CREATE_RECIPIENT +
PRIV_CONSUME_ORACLE_SHARE. Bitmap values are as follows:
• PRIV_CREATE_SHARE
Allow users to create and publish shares in their own schema.
PRIV_CREATE_SHARE CONSTANT PLS_INTEGER := 1;
• PRIV_CREATE_RECIPIENT
Allow users to create share recipients in their own schema.
PRIV_CREATE_RECIPIENT CONSTANT PLS_INTEGER := 2;
• PRIV_CONSUME_ORACLE_SHARE
Allow users to consume Oracle-to-Oracle live shares.
PRIV_CONSUME_ORACLE_SHARE CONSTANT PLS_INTEGER := 4;
• PRIV_ORDS_ACL
Grant an ACL to the user on the local ORDS endpoint. This is required for the
user to generate bearer tokens on locally created shares.
PRIV_ORDS_ACL CONSTANT PLS_INTEGER := 8;
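Example
Run as ADMIN, the following sketch enables an illustrative schema with the privileges
needed to create and publish shares, create recipients, and generate bearer tokens; the
constants are summed to build the bitmap:
BEGIN
  dbms_share.enable_schema(
    schema_name => 'SHARE_USER',    -- illustrative schema name
    enabled     => TRUE,
    privileges  => dbms_share.priv_create_share +
                   dbms_share.priv_create_recipient +
                   dbms_share.priv_ords_acl);
END;
/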
GET_ACTIVATION_LINK Function
Generate the activation link that is sent by email to an authorized recipient. This activation
link leads to the download page, where the recipient clicks a button to get the delta profile.
Syntax
FUNCTION GET_ACTIVATION_LINK
(
recipient_name IN VARCHAR2,
recipient_owner IN VARCHAR2 := NULL,
expiration IN PLS_INTEGER := 259200,
invalidate_previous IN BOOLEAN := TRUE
)
RETURN VARCHAR2;
Parameters
Parameter Description
recipient_name The local name of the recipient.
recipient_owner The schema that owns the recipient.
expiration Number of seconds before the activation link expires.
invalidate_previous If TRUE, which is the default, a previously generated activation link is
made invalid. If FALSE, a previously generated activation link remains
valid.
Example
SQL> exec dbms_output.put_line(dbms_share.get_activation_link('NEW_RECIPIENT'))
http://.../ords/_adpshr/delta-sharing/download?key=43BA....YXJlX3Rlc3Q=
GET_PUBLISHED_IDENTITY Function
Get data about the current user that was set by SET_PUBLISHED_IDENTITY.
Syntax
FUNCTION GET_PUBLISHED_IDENTITY
RETURN CLOB;
Example
SQL> declare
2 id_json json_object_t := json_object_t();
3 begin
4 id_json.put('name', 'Demo Publisher');
5 id_json.put('description', 'Documentation Share Provider');
6 id_json.put('contact', 'null@example.com');
7 dbms_share.set_published_identity(id_json);
8 end;
9 /
Published Identity
--------------------------------------------------------------------------------
{
"name" : "Demo Publisher",
"description" : "Documentation Share Provider",
"contact" : "null@example.com"
}
GET_RECIPIENT_PROPERTY Function
Return the value of a property for a recipient.
Syntax
FUNCTION GET_RECIPIENT_PROPERTY
(
recipient_name IN VARCHAR2,
recipient_property IN VARCHAR2,
recipient_owner IN VARCHAR2 := NULL
)RETURN VARCHAR2;
Parameters
Parameter Description
recipient_name The name of the recipient.
recipient_property The property to get. These properties include:
• PROP_SHARE_DESC
• PROP_SHARE_PUBLIC_DESC
• PROP_SHARE_VERSION_ACCESS
• PROP_SHARE_JOB_NAME
• PROP_SHARE_SPLIT_SIZE
• PROP_SHARE_LOG_LEVEL
• PROP_SHARE_JOB_DOP
• PROP_SHARE_JOB_CLASS
• PROP_SHARE_JOB_PRIORITY
For information on constants used for this parameter, see descriptions for
Share Properties in DBMS_SHARE Constants.
recipient_owner The owner of the recipient. Defaults to current user.
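Example
A sketch that reads the description property of the illustrative NEW_RECIPIENT
recipient, assuming the PROP_SHARE_DESC constant from the list above:
DECLARE
  l_value VARCHAR2(4000);
BEGIN
  l_value := dbms_share.get_recipient_property(
    recipient_name     => 'NEW_RECIPIENT',           -- illustrative name
    recipient_property => dbms_share.prop_share_desc);
  dbms_output.put_line('Description: ' || l_value);
END;
/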
GET_SHARE_PROPERTY Function
Get the property value of an existing share.
Syntax
FUNCTION GET_SHARE_PROPERTY
(
share_name IN VARCHAR2,
share_property IN VARCHAR2,
share_owner IN VARCHAR2 := NULL
)
RETURN VARCHAR2
Parameters
Parameter Description
share_name The name of the share.
share_property The property value to get. For information on constants used for this
parameter, see descriptions for Share Properties in
DBMS_SHARE Constants.
share_owner The owner of the share. Defaults to current user.
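Example
A minimal sketch that reads the description of an illustrative share, assuming the
PROP_SHARE_DESC constant from DBMS_SHARE Constants:
DECLARE
  l_value VARCHAR2(4000);
BEGIN
  l_value := dbms_share.get_share_property(
    share_name     => 'MY_SHARE',                    -- illustrative share name
    share_property => dbms_share.prop_share_desc);
  dbms_output.put_line('Share description: ' || l_value);
END;
/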
GET_SHARE_TABLE_PROPERTY Function
Get the property value of an existing share table.
Syntax
FUNCTION GET_SHARE_TABLE_PROPERTY
(
share_name IN VARCHAR2,
share_table_name IN VARCHAR2,
share_table_property IN VARCHAR2,
share_schema_name IN VARCHAR2 := NULL,
share_owner IN VARCHAR2 := NULL
)RETURN VARCHAR2;
Parameters
Parameter Description
share_name The name of the share.
share_table_name The name of the share table.
share_table_property The share table property to update. For information on constants
used for this parameter, see descriptions for Share Table Properties in
DBMS_SHARE Constants.
share_schema_name The name of the share schema. Defaults to uppercase of current
user.
share_owner The owner of the share.
GRANT_TO_RECIPIENT Procedure
Grant access on a share to a specific recipient. The share and recipient must both belong to
the same schema.
Syntax
PROCEDURE GRANT_TO_RECIPIENT
(
share_name IN VARCHAR2,
recipient_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of the share to grant.
recipient_name The name of the recipient.
owner The owner of both the share and recipient.
auto_commit The auto_commit parameter is ignored. This procedure will always
commit.
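Example
A minimal sketch granting an illustrative share to an illustrative recipient in the
current schema:
BEGIN
  dbms_share.grant_to_recipient(
    share_name     => 'MY_SHARE',        -- illustrative share name
    recipient_name => 'NEW_RECIPIENT');  -- illustrative recipient name
END;
/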
POPULATE_SHARE_PROFILE Procedure
Generate a delta profile for a recipient. You can print it to the screen or upload it
somewhere, for example to an object storage bucket using DBMS_CLOUD.EXPORT_DATA.
Syntax
PROCEDURE POPULATE_SHARE_PROFILE
(
recipient_name IN VARCHAR2,
share_profile IN OUT NOCOPY SYS.JSON_OBJECT_T,
recipient_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
recipient_name The local name of the recipient.
share_profile The share profile, without bearer token.
recipient_owner The schema that owns the recipient.
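Example
A sketch that populates a JSON_OBJECT_T profile for the illustrative NEW_RECIPIENT
recipient and prints it:
DECLARE
  l_profile SYS.JSON_OBJECT_T := SYS.JSON_OBJECT_T();
BEGIN
  dbms_share.populate_share_profile(
    recipient_name => 'NEW_RECIPIENT',  -- illustrative name
    share_profile  => l_profile);
  -- Print the profile; it could also be uploaded with DBMS_CLOUD.EXPORT_DATA
  dbms_output.put_line(l_profile.to_string);
END;
/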
PUBLISH_SHARE Procedure
Publish a share and return immediately.
The publication will continue in the background. You can check on the status of the job
by querying the USER_SHARE_JOBS view. See USER_SHARE_JOBS View for further
information.
Syntax
PROCEDURE PUBLISH_SHARE
(
share_name IN VARCHAR2,
share_owner IN VARCHAR2 := NULL,
drop_prior_versions IN BOOLEAN := FALSE,
share_job_dop IN NUMBER := NULL,
share_job_priority IN NUMBER := NULL,
job_class IN VARCHAR2 := 'DEFAULT_JOB_CLASS',
force_job_class IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of the share to publish.
share_owner The share owner, which must be the current user or NULL.
drop_prior_versions Set to TRUE if you want to drop all prior versions of the share. Note
that versions are only dropped if there are no outstanding Pre-
Authenticated Requests (PARs).
share_job_dop Specify the maximum number of dbms_scheduler jobs that will be
used to publish the share. Leave as NULL to use the system default
number.
share_job_priority Specify a relative priority of this share publication compared to
others. If two shares are being published at the same time, then the
one with the highest priority is processed first even if it was started
later.
job_class The scheduler job class, from all_scheduler_job_classes, that is
used to publish the share.
force_job_class Use the specified job_class even if the admin has defined a
different default job class.
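Example
A minimal sketch that starts publication of an illustrative share and then checks on the
background job:
BEGIN
  dbms_share.publish_share(share_name => 'MY_SHARE');  -- illustrative share name
END;
/

-- The publication continues in the background; monitor it here
SELECT * FROM user_share_jobs;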
PUBLISH_SHARE_WAIT Procedure
Publish a share and wait until the background job is complete. The publication continues
even if the call is interrupted.
Syntax
PROCEDURE PUBLISH_SHARE_WAIT
(
share_name IN VARCHAR2,
share_owner IN VARCHAR2 := NULL,
drop_prior_versions IN BOOLEAN := FALSE,
share_job_dop IN NUMBER := NULL,
share_job_priority IN NUMBER := NULL,
job_class IN VARCHAR2 := 'DEFAULT_JOB_CLASS',
force_job_class IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of the share to publish.
share_owner The share owner, which must be the current user or NULL.
drop_prior_versions Set to TRUE if you want to drop all prior versions of the share. Note that
versions are only dropped if there are no outstanding Pre-Authenticated
Requests (PARs).
share_job_dop Specify the maximum number of dbms_scheduler jobs that will be used
to publish the share. Leave as NULL to use the system default number.
share_job_priority Specify a relative priority of this share publication compared to others. If
two shares are being published at the same time, then the one with the
highest priority is processed first even if it was started later.
job_class The scheduler job class, from all_scheduler_job_classes, that is used
to publish the share.
force_job_class Use the specified job_class even if the admin has defined a different
default job class.
PURGE_DETACHED_FILES Procedure
Delete or forget parquet files that have become detached from their shares.
Syntax
PROCEDURE PURGE_DETACHED_FILES
(
file_pattern IN VARCHAR2 := '%',
credential_name IN VARCHAR2 := NULL,
purge_mode IN PLS_INTEGER := PURGE_DROP,
owner_id IN NUMBER := SYS_CONTEXT('USERENV', 'CURRENT_USERID')
);
PROCEDURE PURGE_DETACHED_FILES
(
purge_job_id IN OUT NOCOPY NUMBER,
file_pattern IN VARCHAR2 := '%',
credential_name IN VARCHAR2 := NULL,
purge_mode IN PLS_INTEGER := PURGE_DROP,
owner_id IN NUMBER := SYS_CONTEXT('USERENV', 'CURRENT_USERID')
);
Parameters
Parameter Description
purge_job_id The purge job ID.
file_pattern An optional LIKE pattern for the files that are to be purged.
credential_name An optional credential to be used to delete the files.
purge_mode Specifies how the files are purged. Purge mode values include:
• PURGE_DROP
Attempt to drop the files, using the specified credential. If the
file cannot be dropped, then the file will continue to be listed in
the *_SHARE_DETACHED_FILES views.
REMOVE_FROM_SHARE Procedure
Remove a table or view from a share.
Syntax
PROCEDURE REMOVE_FROM_SHARE
(
share_name IN VARCHAR2,
share_table_name IN VARCHAR2,
share_schema_name IN VARCHAR2 := NULL,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of an existing share from which the object is removed.
share_table_name The name of the share table to remove. This must match the externally
visible name, so the input is uppercased.
share_schema_name The name of the share schema. Defaults to uppercase of current user.
share_owner The owner of the share.
auto_commit If TRUE, this procedure call commits changes that are not visible
externally until the commit takes place. The default value is FALSE, which
means that the user must COMMIT after running this call in order to
make the change visible.
RENAME_RECIPIENT Procedure
Rename a recipient. This procedure changes only the local name of the recipient. The
external definition of the recipient, for example the name of the OAUTH user or the
sharing id, is not changed.
Syntax
PROCEDURE RENAME_RECIPIENT
(
old_recipient_name IN VARCHAR2,
new_recipient_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
old_recipient_name The current name of the share recipient.
new_recipient_name The new name of the share recipient.
owner The schema that defines the recipient.
auto_commit If TRUE, the changes are automatically committed. Changes are
not visible externally until the commit takes place. The default is
TRUE.
RENAME_SHARE Procedure
Rename a share. Use this procedure with care: the change affects any existing
consumers whose access is based on the previous name. Consumers are not notified
directly or updated.
Syntax
PROCEDURE RENAME_SHARE
(
old_share_name IN VARCHAR2,
new_share_name IN VARCHAR2,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
old_share_name The current name of the share.
new_share_name The new name of the share.
share_owner The owner of the share.
auto_commit If TRUE, the changes are automatically committed. Changes are not
visible externally until the commit takes place. The default is TRUE.
RENAME_SHARE_LINK Procedure
Rename a registered share link.
Syntax
PROCEDURE RENAME_SHARE_LINK
(
old_name IN VARCHAR2,
new_name IN VARCHAR2,
link_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
old_name The current name of the share link.
new_name The new name of the link.
link_owner The owner of the share link. Defaults to current schema.
RENAME_SHARE_SCHEMA Procedure
Rename a share schema.
Syntax
PROCEDURE RENAME_SHARE_SCHEMA
(
share_name IN VARCHAR2,
old_schema_name IN VARCHAR2,
new_schema_name IN VARCHAR2,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of the share.
old_schema_name The old name of the schema.
new_schema_name The new name of the schema.
share_owner The owner of the share. Defaults to the current schema.
auto_commit If TRUE, the changes are automatically committed. Changes are not
visible externally until the commit takes place. The default is FALSE.
RENAME_SHARE_TABLE Procedure
Rename a share table.
Syntax
PROCEDURE RENAME_SHARE_TABLE
(
share_name IN VARCHAR2,
share_schema_name IN VARCHAR2,
old_share_table_name IN VARCHAR2,
new_share_table_name IN VARCHAR2,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of the share.
share_schema_name The name of the share schema.
old_share_table_name The old name of the share table.
new_share_table_name The new name of the share table.
share_owner The owner of the share.
auto_commit If TRUE, the changes are automatically committed. Changes are
not visible externally until the commit takes place. The default is
FALSE.
REVOKE_FROM_RECIPIENT Procedure
Revoke access on a share from a specific recipient.
Syntax
PROCEDURE REVOKE_FROM_RECIPIENT
(
share_name IN VARCHAR2,
recipient_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of the share to revoke.
recipient_name The name of the recipient.
owner The owner of the share and recipient.
auto_commit If TRUE, the changes are automatically committed. Changes are not
visible externally until the commit takes place. The default is FALSE.
SET_CURRENT_SHARE_VERSION Procedure
Change the current version of a share.
Syntax
PROCEDURE SET_CURRENT_SHARE_VERSION
(
share_name IN VARCHAR2,
share_version IN NUMBER,
share_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_name The name of the share.
share_version The new version or NULL. The version must exist and must be valid. If
the share_version is NULL, then no version will be marked as
CURRENT and the share will be "unpublished".
share_owner The owner of the share. Defaults to the current schema.
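Example
Two minimal sketches against an illustrative share: the first makes version 1 current
again, and the second unpublishes the share by clearing the current version:
-- Make version 1 current again
BEGIN
  dbms_share.set_current_share_version('MY_SHARE', 1);
END;
/

-- Unpublish the share by clearing the current version
BEGIN
  dbms_share.set_current_share_version('MY_SHARE', NULL);
END;
/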
SET_PUBLISHED_IDENTITY Procedure
Set data about the current user that will be supplied to recipients of published ORACLE
shares.
Syntax
PROCEDURE SET_PUBLISHED_IDENTITY
(
metadata IN SYS.JSON_OBJECT_T := NULL
);
Parameters
Parameter Description
metadata If the provider identity has already been set, then only the items that
the caller wants to update need to be included in the JSON.
Supplying a NULL value for an item causes that item to be removed
from the stored identity. However, "name", "description" and
"contact" cannot be removed in this way.
If the metadata argument is NULL, the existing provider identity is
deleted. This can only happen if the current user has no published
shares.
If the provider identity has not yet been set, the JSON must contain
at least the following values:
• name (<=128 bytes)
• description (<=4000 bytes)
• contact (<=4000 bytes)
Additional items may be included at the caller's discretion.
{
"name" : "A sample share provider",
"description" : "Test of share provider metadata",
"contact" : "provider1@example.com".
"schedule" : "New data provided on alternate rainy Fridays"
}
{
"description" : "The Share Provider You Can Trust!(tm)",
"schedule" : null
}
SET_RECIPIENT_LOG_LEVEL Procedure
Change the log level for an existing share recipient.
Syntax
PROCEDURE SET_RECIPIENT_LOG_LEVEL
(
recipient_name IN VARCHAR2,
log_level IN PLS_INTEGER,
recipient_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
recipient_name The local name of the share recipient.
log_level Event logging level. For information on constants used for this parameter,
see descriptions for Log Level in DBMS_SHARE Constants.
recipient_owner The schema that owns the recipient.
SET_SHARE_LOG_LEVEL Procedure
Change the log level for an existing share.
Syntax
PROCEDURE SET_SHARE_LOG_LEVEL
(
share_name IN VARCHAR2,
log_level IN PLS_INTEGER,
share_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_name The name of the share.
log_level Event logging level. For information on constants used for this parameter,
see descriptions for Log Level in DBMS_SHARE Constants.
share_owner The owner of the share.
SET_STORAGE_CREDENTIAL Procedure
Set the credential name used by the current user when it attempts to access the given
storage.
Syntax
PROCEDURE SET_STORAGE_CREDENTIAL
(
storage_link_name IN VARCHAR2,
credential_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
check_if_exists IN BOOLEAN := TRUE,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
storage_link_name The name of a cloud storage link previously created using
CREATE_CLOUD_STORAGE_LINK Procedure.
credential_name The name of a local credential that gives access to the storage.
The credentials used for share storage must be able to read, write,
delete, and manage pre-authenticated requests. See Using Pre-
Authenticated Requests
owner The owner of the cloud storage link. Leave as NULL for the current
user.
check_if_exists If check_if_exists is TRUE, then the function will also confirm
that the credential exists for the current user.
auto_commit The default is TRUE. If TRUE, this transaction is committed. If
FALSE, the user must commit the transaction. Changes are not
visible until the commit takes place.
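Example
A minimal sketch that attaches an illustrative credential to the MY_SHARE_STORAGE link
created earlier:
BEGIN
  dbms_share.set_storage_credential(
    storage_link_name => 'MY_SHARE_STORAGE',
    credential_name   => 'MY_STORAGE_CRED');  -- illustrative credential name
END;
/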
STOP_JOB Procedure
Attempt to stop a running share job. The procedure should return quickly, but it may
take some time for the associated job to stop.
Syntax
PROCEDURE STOP_JOB
(
share_job_id IN NUMBER,
share_job_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_job_id The ID of the share job.
share_job_owner The owner of the job. Defaults to the current schema.
UNPUBLISH_SHARE Procedure
Unpublish a share.
Syntax
PROCEDURE UNPUBLISH_SHARE
(
share_name IN VARCHAR2,
out_share_job_id IN OUT NOCOPY NUMBER,
share_owner IN VARCHAR2 := NULL,
drop_all_versions IN BOOLEAN := FALSE,
restart_versions IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of the share.
out_share_job_id The ID of any share job that needs to be run to process this command.
share_owner The owner of the share. Defaults to the current schema.
drop_all_versions If set to TRUE, all existing versions along with all associated storage are
dropped.
If drop_all_versions is FALSE, then all existing versions continue to
exist, but the share is not visible to recipients. A subsequent call to
PUBLISH_SHARE or SET_CURRENT_SHARE_VERSION makes them visible
again.
restart_versions Restart version numbering. If FALSE, then the next published version
number will be the same as it would have been if no versions had been
dropped.
If TRUE, then the next published version will be set to one more than the
highest existing version. When used in conjunction with
drop_all_versions, this has the effect of resetting the share to its
original state. Note that this may confuse any existing delta clients, so it
should be used with care.
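Example
A minimal sketch that unpublishes an illustrative share while keeping its versions, and
prints the ID of the background job, if any:
DECLARE
  l_job_id NUMBER;
BEGIN
  dbms_share.unpublish_share(
    share_name       => 'MY_SHARE',  -- illustrative share name
    out_share_job_id => l_job_id);
  dbms_output.put_line('Share job ID: ' || l_job_id);
END;
/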
UPDATE_DEFAULT_RECIPIENT_PROPERTY Procedure
Update the default recipient property values. This procedure requires the user to have admin
privileges.
Syntax
PROCEDURE UPDATE_DEFAULT_RECIPIENT_PROPERTY
(
recipient_property IN VARCHAR2,
new_value_vc IN VARCHAR2
);
Parameters
Parameter Description
recipient_property The property to update. For information on constants used for this
parameter, see descriptions for Share Recipient Properties in
DBMS_SHARE Constants.
new_value_vc The new property value.
UPDATE_DEFAULT_SHARE_PROPERTY Procedure
Update the default share property values.
Syntax
PROCEDURE UPDATE_DEFAULT_SHARE_PROPERTY
(
share_property IN VARCHAR2,
new_value IN VARCHAR2
);
Parameters
Parameter Description
share_property The property to update. For information on constants used for this
parameter, see descriptions for Share Properties in
DBMS_SHARE Constants.
These properties can be read using the
ALL_SHARE_DEFAULT_SETTINGS View.
new_value The new property value.
UPDATE_RECIPIENT_PROPERTY Procedure
Update a property of an existing recipient.
Syntax
PROCEDURE UPDATE_RECIPIENT_PROPERTY
(
recipient_name IN VARCHAR2,
recipient_property IN VARCHAR2,
new_value IN VARCHAR2,
recipient_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
recipient_name The name of the recipient.
recipient_property The property to update. These properties include:
• PROP_RECIPIENT_EMAIL
• PROP_RECIPIENT_DESCRIPTION
• PROP_RECIPIENT_LOG_LEVEL
• PROP_RECIPIENT_SHARING_ID
• PROP_RECIPIENT_PROFILE_URL
• PROP_RECIPIENT_PAR_TYPE
• PROP_RECIPIENT_MIN_PAR_LIFETIME
• PROP_RECIPIENT_PAR_LIFETIME
• PROP_RECIPIENT_TOKEN_LIFETIME
For information on constants used for this parameter, see
descriptions for Share Recipient Properties in
DBMS_SHARE Constants.
new_value The new property value.
recipient_owner The owner of the recipient. Defaults to the current schema.
UPDATE_SHARE_JOB_PROPERTY Procedure
Modify properties of a running share job. The procedure should return quickly, but it may take
some time for the changes to take effect.
Syntax
PROCEDURE UPDATE_SHARE_JOB_PROPERTY
(
share_job_id IN NUMBER,
share_property IN VARCHAR2,
new_value IN VARCHAR2,
share_job_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_job_id The ID of the share job.
share_property The property to update. For information on constants used for this
parameter, see descriptions for Share Job Properties in DBMS_SHARE
Constants.
new_value The new property value.
share_job_owner The owner of the job. Defaults to the current schema.
UPDATE_SHARE_PROPERTY Procedure
Update a property of an existing share.
Syntax
PROCEDURE UPDATE_SHARE_PROPERTY
(
share_name IN VARCHAR2,
share_property IN VARCHAR2,
new_value IN VARCHAR2,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
share_name The name of the share.
share_property The property to update. For information on constants used for this
parameter, see descriptions for Share Properties in DBMS_SHARE
Constants.
new_value The new property value.
share_owner The owner of the share. Defaults to the current schema.
auto_commit If TRUE (the default), the changes are automatically committed. If
FALSE, the user must commit the changes. Changes are visible
externally only after the commit takes place.
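Example
A minimal sketch that updates the public description of an illustrative share, assuming
the PROP_SHARE_PUBLIC_DESC constant from DBMS_SHARE Constants:
BEGIN
  dbms_share.update_share_property(
    share_name     => 'MY_SHARE',                           -- illustrative name
    share_property => dbms_share.prop_share_public_desc,
    new_value      => 'Weekly sales data snapshots');       -- illustrative text
END;
/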
UPDATE_SHARE_TABLE_PROPERTY Procedure
Update the property value of an existing share table.
Syntax
PROCEDURE UPDATE_SHARE_TABLE_PROPERTY
(
share_name IN VARCHAR2,
share_table_name IN VARCHAR2,
share_table_property IN VARCHAR2,
new_value IN VARCHAR2,
share_schema_name IN VARCHAR2 := NULL,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
share_name The name of the share.
share_table_name The name of the share table.
share_table_property The property to update. These properties include:
• PROP_SHARE_TABLE_SPLIT_METHOD
• PROP_SHARE_TABLE_SPLIT_COLUMNS
• PROP_SHARE_TABLE_ORDER_COLUMNS
• PROP_SHARE_TABLE_SHARE_COLUMNS
• PROP_SHARE_TABLE_SPLIT_SIZE
• PROP_SHARE_TABLE_GATHER_STATS
• PROP_SHARE_TABLE_SPLIT_ROWS
• PROP_SHARE_TABLE_FLASHBACK
new_value The new property value.
share_schema_name The name of the share schema (defaults to uppercase of current
user).
share_owner The owner of the share. Defaults to the current schema.
auto_commit If TRUE (the default), the changes are automatically committed. If
FALSE, the user must commit the changes. Changes are visible
externally only after the commit takes place.
VALIDATE_CREDENTIAL Function
Validate a credential name, converting it to canonical form first if required.
The function raises an exception if the name is not a valid Oracle identifier. The
credential name is returned without double quotes, as it would appear in the
CREDENTIAL_NAME column of the USER_CREDENTIALS view.
Syntax
FUNCTION VALIDATE_CREDENTIAL
(
credential_name IN VARCHAR2,
check_if_exists IN BOOLEAN := TRUE
)
RETURN VARCHAR2;
Parameters
Parameter Description
credential_name The name of the credential to validate in standard database form, with
double quotes if the name is not a simple identifier.
check_if_exists If TRUE, then the function also confirms that the credential exists for the
current user.
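Example
A minimal sketch, assuming the canonical form of an unquoted name is its uppercase form:
DECLARE
  l_name VARCHAR2(128);
BEGIN
  l_name := dbms_share.validate_credential('my_fixed_credential');
  dbms_output.put_line(l_name);  -- expected: MY_FIXED_CREDENTIAL
END;
/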
VALIDATE_SHARE_STORAGE Procedure
Check to see if the given storage is suitable for versioned shares.
Syntax
The first form of the procedure includes the validation_results output parameter.
PROCEDURE VALIDATE_SHARE_STORAGE
(
storage_link_name IN VARCHAR2,
validation_results IN OUT NOCOPY VARCHAR2,
run_storage_tests IN BOOLEAN := TRUE,
storage_link_owner IN VARCHAR2 := NULL
);
PROCEDURE VALIDATE_SHARE_STORAGE
(
storage_link_name IN VARCHAR2,
run_storage_tests IN BOOLEAN := TRUE,
storage_link_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
storage_link_name The cloud storage link name.
validation_results (Optional input and output) Validation result details returned in
JSON form.
The validation_results include the results for each separate
test.
run_storage_tests Run tests to validate the storage. If TRUE (the default), the
procedure tests READ, WRITE, DELETE, and
PREAUTHENTICATED REQUESTS.
storage_link_owner The cloud storage link owner.
Example
The validation results are returned in JSON form, with one entry per test:
{
"READ":"PASSED",
"WRITE":"PASSED",
"CREATE_PAR":"PASSED",
"DELETE_PAR":"PASSED",
"DELETE":"PASSED"
}
WAIT_FOR_JOB Procedure
This procedure waits until the specified share job is complete.
Syntax
PROCEDURE WAIT_FOR_JOB
(
share_job_id IN NUMBER,
completed IN OUT NOCOPY BOOLEAN,
maximum_wait IN NUMBER := NULL
);
Parameters
Parameter Description
share_job_id The ID of the share job.
completed Job completion indicator.
maximum_wait The maximum wait period, in seconds. A NULL value implies no
limit.
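Example
A minimal sketch that waits up to ten minutes for an illustrative job ID, for example one
obtained from the USER_SHARE_JOBS view:
DECLARE
  l_completed BOOLEAN;
BEGIN
  dbms_share.wait_for_job(
    share_job_id => 1234,          -- illustrative ID, e.g. from USER_SHARE_JOBS
    completed    => l_completed,
    maximum_wait => 600);          -- wait at most 10 minutes
  IF l_completed THEN
    dbms_output.put_line('Job complete');
  ELSE
    dbms_output.put_line('Job still running');
  END IF;
END;
/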
Subprogram Description
ASSERT_SHARING_ID Procedure: Run basic validation checks against a sharing id and return one in canonical form.
CREATE_CREDENTIALS Procedure and Function: Create a credential containing the bearer token from a delta profile.
CREATE_OR_REPLACE_SHARE_LINK Procedure: Subscribe to a share from a registered share provider.
CREATE_OR_REPLACE_ORACLE_SHARE_PROVIDER Procedure: Subscribe to an Oracle share provider, with a local name.
CREATE_ORACLE_SHARE_PROVIDER Procedure: Subscribe to an Oracle share provider, with a local name.
CREATE_SHARE_LINK Procedure: Subscribe to a share from a registered share provider.
CREATE_SHARE_LINK_VIEW Procedure: Create or replace a named view that gives access to a remote shared table.
CREATE_SHARE_PROVIDER Procedure: Subscribe to a delta share provider.
DISCOVER_AVAILABLE_SHARES Function: Return one SHARE_AVAILABLE_SHARES_ROW for each available share from the subscribed share providers.
DISCOVER_AVAILABLE_TABLES Function: Return one SHARE_AVAILABLE_TABLES_ROW for each available table from the subscribed share providers or from an explicit delta endpoint.
DROP_SHARE_LINK Procedure: Drop a share link created by the CREATE_SHARE_LINK procedure.
DROP_SHARE_PROVIDER Procedure: Drop a subscription to a share provider.
ENABLE_DELTA_ENDPOINT Procedure: Create necessary ACLs that allow the specified user to connect to a delta endpoint.
FLUSH_SHARE_LINK_CACHE Procedure: Flush the cache of shares for a given share link.
FLUSH_SHARE_PROVIDER_CACHE Procedure: Flush the cache of shares for a given share provider.
GENERATE_SHARE_LINK_SELECT Procedure and Function: Generate a SELECT statement that returns data from a shared table.
GET_ORACLE_SHARE_LINK_INFO Function: Retrieve the cloud link name and namespace for an Oracle-to-Oracle share.
GET_SHARE_LINK_INFO Procedure: Get the endpoint(s), share type and share name along with any additional JSON metadata for a share link.
GET_SHARE_PROVIDER_CREDENTIAL Procedure: Get the credential name to be used by the current user when it attempts to access the given delta share provider.
GET_SHARE_PROVIDER_INFO Procedure: Get the endpoint string(s) and share type along with any additional JSON metadata for a share provider.
GET_SHARING_ID Function: Return an identifier that can be used as the sharing_id in the CREATE_SHARE_RECIPIENT procedure.
OPEN_SHARE_LINK_CURSOR Procedure: Open a cursor that returns data from a shared table.
REFRESH_BEARER_TOKEN_CREDENTIAL Procedure: Update one or more credentials created by CREATE_BEARER_TOKEN_CREDENTIAL or CREATE_CREDENTIALS.
RENAME_CLOUD_STORAGE_LINK Procedure: Rename a registered cloud storage link.
RENAME_SHARE_LINK Procedure: Rename a registered share link.
RENAME_SHARE_PROVIDER Procedure: Rename a registered share provider.
REMOVE_SHARE_SCHEMA Procedure: Remove a schema and all its contents from a share.
SET_SHARE_LINK_METADATA Procedure: Set the additional JSON metadata for the share link.
SET_SHARE_PROVIDER_CREDENTIAL Procedure: Set the credential name to access the given share provider.
SET_SHARE_PROVIDER_METADATA Procedure: Set additional JSON metadata for the share provider.
UPDATE_BEARER_TOKEN_CREDENTIAL Procedure: Modify an attribute of a credential created by CREATE_CREDENTIALS or CREATE_BEARER_TOKEN_CREDENTIAL.
• ASSERT_SHARING_ID Procedure
Run basic validation checks against a sharing id and return one in canonical form.
An exception is raised if the id is obviously invalid.
• CREATE_CREDENTIALS Procedure and Function
Create a credential containing the bearer token from a delta sharing profile. The
standard type, version 1, specifies an endpoint and a single long term bearer
token.
• CREATE_OR_REPLACE_SHARE_LINK Procedure
Subscribe to a share from a registered share provider.
• CREATE_OR_REPLACE_ORACLE_SHARE_PROVIDER Procedure
Subscribe to an Oracle share provider, with a local name.
• CREATE_ORACLE_SHARE_PROVIDER Procedure
Subscribe to an Oracle share provider, with a local name.
• CREATE_SHARE_LINK Procedure
Subscribe to a share from a registered share provider. The available share names
can be found by calling DISCOVER_AVAILABLE_SHARES.
• CREATE_SHARE_LINK_VIEW Procedure
Create or replace a named view that gives access to a remote shared table.
• CREATE_SHARE_PROVIDER Procedure
Subscribe to a delta share provider.
• DISCOVER_AVAILABLE_SHARES Function
Return one SHARE_AVAILABLE_SHARES_ROW for each available share from the
subscribed share providers.
• DISCOVER_AVAILABLE_TABLES Function
Return one SHARE_AVAILABLE_TABLES_ROW for each available table from the
subscribed share providers.
• DROP_SHARE_LINK Procedure
Drop a share link created by the CREATE_SHARE_LINK procedure.
• DROP_SHARE_PROVIDER Procedure
Drop a subscription to a share provider.
• ENABLE_DELTA_ENDPOINT Procedure
Create necessary ACLs that allow the specified user to connect to a delta endpoint.
Admin privileges are required for this procedure.
• FLUSH_SHARE_LINK_CACHE Procedure
Flush the cache of shares for a given share link. The list of shares for the remote
endpoints is fetched instead of relying on cached values.
• FLUSH_SHARE_PROVIDER_CACHE Procedure
Flush the cache of shares for a given share provider. The list of shares for the remote
endpoints is fetched instead of relying on cached values.
• GENERATE_SHARE_LINK_SELECT Procedure and Function
Generate a SELECT statement that returns data from a shared table.
• GET_ORACLE_SHARE_LINK_INFO Function
Retrieve the cloud link name and namespace for an Oracle-to-Oracle share.
• GET_SHARE_LINK_INFO Procedure
Get the endpoint(s), share type and share name along with any additional JSON
metadata for a share link.
• GET_SHARE_PROVIDER_CREDENTIAL Procedure
Get the credential name to be used by the current user when it attempts to access the
given delta share provider.
• GET_SHARE_PROVIDER_INFO Procedure
Get the endpoint string(s) and share type along with any additional JSON metadata for a
share provider. For ORACLE share providers, the Oracle provider ID is returned in the
endpoint argument.
• GET_SHARING_ID Function
Return an identifier that can be used as the sharing_id in the CREATE_SHARE_RECIPIENT
procedure. This function can be used to share data between two users, the "provider"
and the "recipient", in different databases.
• OPEN_SHARE_LINK_CURSOR Procedure
Open a cursor that returns data from a shared table.
• REFRESH_BEARER_TOKEN_CREDENTIAL Procedure
Update one or more credentials created by CREATE_BEARER_TOKEN_CREDENTIAL or
CREATE_CREDENTIALS by calling the registered token endpoints and fetching new bearer
tokens. Note this procedure is called automatically by a scheduler job,
ADP$BEARER_REFRESH_JOB, that runs every 50 minutes.
• RENAME_CLOUD_STORAGE_LINK Procedure
Rename a registered cloud storage link.
• RENAME_SHARE_PROVIDER Procedure
Rename a registered share provider.
• REMOVE_SHARE_SCHEMA Procedure
Remove a schema and all its contents from a share.
• SET_SHARE_LINK_METADATA Procedure
Set the additional JSON metadata for the share link.
• SET_SHARE_PROVIDER_CREDENTIAL Procedure
Set the credential name to be used by the current user when it attempts to access
the given share provider.
• SET_SHARE_PROVIDER_METADATA Procedure
Set additional JSON metadata for the share provider.
• UPDATE_BEARER_TOKEN_CREDENTIAL Procedure
Modify an attribute of a credential created by CREATE_CREDENTIALS or
CREATE_BEARER_TOKEN_CREDENTIAL.
ASSERT_SHARING_ID Procedure
Run basic validation checks against a sharing id and return one in canonical form. An
exception is raised if the id is obviously invalid.
Syntax
PROCEDURE ASSERT_SHARING_ID
(
sharing_id IN OUT NOCOPY VARCHAR2,
out_sharing_id_type IN OUT NOCOPY VARCHAR2
);
Parameters
Parameter Description
sharing_id The id to check.
out_sharing_id_type The type of the id, if valid. For example, TENANCY or DATABASE.
CREATE_CREDENTIALS Procedure and Function
Create a credential containing the bearer token from a delta sharing profile. The standard
type, version 1, specifies an endpoint and a single long term bearer token.
Syntax
PROCEDURE CREATE_CREDENTIALS
(
credential_base_name IN VARCHAR2,
delta_profile IN CLOB,
out_credential_name IN OUT NOCOPY VARCHAR2
);
The functional form of CREATE_CREDENTIALS returns the name of the new credential in
JSON form.
FUNCTION CREATE_CREDENTIALS
(
credential_base_name IN VARCHAR2,
delta_profile IN CLOB
)
RETURN CLOB;
Parameters
Parameter Description
credential_base_name The base name of the credential(s) to create.
delta_profile The delta sharing profile, in JSON format, obtained from the share
provider.
{
"shareCredentialsVersion": 1,
"endpoint": "https://<endpoint>",
"bearerToken": "<token>",
"expirationTime": "...",
}
{
"shareCredentialsVersion": 1,
"endpoint": "https://<endpoint>",
"bearerToken": "<token>",
"expirationTime": "...",
"tokenEndpoint": "https://<token endpoint>",
"clientID": "<client id>",
"clientSecret": "<client secret>"
}
See Profile File Format and Bearer Token for further information.
out_credential_name The name of the newly created bearer token credential.
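Example
A minimal sketch using the function form with a hypothetical version 1 profile; the
endpoint is illustrative and the token reuses the sample value from earlier in this section:
DECLARE
  l_profile CLOB := '{"shareCredentialsVersion": 1,' ||
                    ' "endpoint": "https://myserver/delta-sharing",' ||
                    ' "bearerToken": "FF42322D27D4C2DEE05392644664351E"}';
  l_names   CLOB;
BEGIN
  l_names := dbms_share.create_credentials(
    credential_base_name => 'MY_DELTA_CRED',  -- illustrative base name
    delta_profile        => l_profile);
  dbms_output.put_line(l_names);  -- names of the new credentials, in JSON form
END;
/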
CREATE_OR_REPLACE_SHARE_LINK Procedure
Subscribe to a share from a registered share provider.
Syntax
PROCEDURE CREATE_OR_REPLACE_SHARE_LINK
(
share_link_name IN VARCHAR2,
share_provider IN VARCHAR2,
share_name IN VARCHAR2,
provider_owner IN VARCHAR2 := NULL,
link_owner IN VARCHAR2 := NULL,
use_default_credential IN BOOLEAN := TRUE,
metadata IN SYS.JSON_OBJECT_T := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
share_link_name The name of the share link that will be created. The available share
names can be found by calling DISCOVER_AVAILABLE_SHARES
Function.
share_provider The name of the registered share provider.
share_name The name of the share from the share provider.
provider_owner The owner of the share provider. Defaults to the current schema.
link_owner The owner of the share link. Defaults to the current schema.
use_default_credential If TRUE, use the same credentials as the share provider.
metadata Optional metadata to associate with the share link.
auto_commit If TRUE (the default), this procedure call commits changes that are
not visible externally until the commit takes place. If FALSE, the
user must COMMIT after running this call in order to make the
change visible.
CREATE_OR_REPLACE_ORACLE_SHARE_PROVIDER Procedure
Subscribe to an Oracle share provider, with a local name.
It will then appear in ALL_SHARE_PROVIDERS View with RECIPIENT_TYPE =
'ORACLE'.
Syntax
The syntax and parameters match those of the CREATE_ORACLE_SHARE_PROVIDER Procedure,
described next; if a provider with the given local name already exists, it is replaced.
CREATE_ORACLE_SHARE_PROVIDER Procedure
Subscribe to an Oracle share provider, with a local name.
It will then appear in ALL_SHARE_PROVIDERS View with RECIPIENT_TYPE = 'ORACLE'.
Syntax
PROCEDURE CREATE_ORACLE_SHARE_PROVIDER
(
oracle_provider_id IN VARCHAR2,
provider_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
metadata IN SYS.JSON_OBJECT_T := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
oracle_provider_id The provider ID obtained from the
ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS view.
See ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS View.
provider_name A local name for the provider
owner The owner of the new share provider. Leave as NULL for the current user.
metadata Optional JSON metadata to associate with the provider.
auto_commit If TRUE, the changes automatically commit after creating the link. The
default is TRUE.
CREATE_SHARE_LINK Procedure
Subscribe to a share from a registered share provider. The available share names can be
found by calling DISCOVER_AVAILABLE_SHARES.
Syntax
PROCEDURE CREATE_SHARE_LINK
(
share_link_name IN VARCHAR2,
share_provider IN VARCHAR2,
share_name IN VARCHAR2,
provider_owner IN VARCHAR2 := NULL,
link_owner IN VARCHAR2 := NULL,
use_default_credential IN BOOLEAN := TRUE,
metadata IN SYS.JSON_OBJECT_T := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
share_link_name The name of the share link to create. This should follow standard
Oracle naming conventions and does not need to be the same as
the share_name parameter.
share_provider The name of the share provider, listed in ALL_SHARE_PROVIDERS,
that shared the data.
share_name The name of the share, as defined by the share provider.
provider_owner The schema that subscribed the share provider, as listed in
ALL_SHARE_PROVIDERS. Leave as NULL to default to the current user,
which is typical.
link_owner The owner of the new share link. Leave as NULL to default to the
current user, which is typical.
use_default_credential Set to TRUE to use the same credential for both the share link and
the share provider.
metadata Optional JSON metadata to associate with the share link.
auto_commit If TRUE, the changes automatically commit after creating the link.
The default is TRUE.
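Example
A minimal sketch linking the EGYPT_EXPEDITION_2022 share shown in the
DISCOVER_AVAILABLE_SHARES example later in this section; the link and provider names
are illustrative:
BEGIN
  dbms_share.create_share_link(
    share_link_name => 'MY_SHARE_LINK',            -- illustrative link name
    share_provider  => 'MY_PROVIDER',              -- illustrative provider name
    share_name      => 'EGYPT_EXPEDITION_2022');
END;
/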
CREATE_SHARE_LINK_VIEW Procedure
Create or replace a named view that gives access to a remote shared table.
Syntax
PROCEDURE CREATE_SHARE_LINK_VIEW
(
view_name IN VARCHAR2,
share_link_name IN VARCHAR2,
share_schema_name IN VARCHAR2,
share_table_name IN VARCHAR2,
view_owner IN VARCHAR2 := NULL,
share_link_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
view_name The name of the new view.
share_link_name The name of the share link.
share_schema_name The name of the shared schema.
share_table_name The name of the shared table.
view_owner The view owner. Defaults to the current schema.
share_link_owner The link owner. Defaults to the current schema.
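Example
A minimal sketch; the view, link, schema, and table names are illustrative:
BEGIN
  dbms_share.create_share_link_view(
    view_name         => 'SHARED_SALES_V',   -- illustrative view name
    share_link_name   => 'MY_SHARE_LINK',
    share_schema_name => 'SHARE_SCHEMA',
    share_table_name  => 'SALES');
END;
/

-- The remote shared table can then be queried through the local view
SELECT count(*) FROM shared_sales_v;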
CREATE_SHARE_PROVIDER Procedure
Subscribe to a delta share provider.
Syntax
PROCEDURE CREATE_SHARE_PROVIDER
(
provider_name IN VARCHAR2,
endpoint IN VARCHAR2,
token_endpoint IN VARCHAR2 := NULL,
share_type IN VARCHAR2 := 'DELTA',
owner IN VARCHAR2 := NULL,
metadata IN SYS.JSON_OBJECT_T := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
provider_name The local name of this share provider.
endpoint The delta endpoint, from the delta sharing profile.
token_endpoint This parameter is ignored.
share_type The type of share provider. Leave this as DELTA.
owner The owner of the share provider. Defaults to the current schema.
metadata Optional JSON metadata to associate with the share provider.
auto_commit If TRUE (the default), this procedure call commits changes that are not
visible externally until the commit takes place. If FALSE, the user must
COMMIT after running this call in order to make the change visible.
DISCOVER_AVAILABLE_SHARES Function
Return one SHARE_AVAILABLE_SHARES_ROW for each available share from the subscribed share
providers.
Syntax
FUNCTION DISCOVER_AVAILABLE_SHARES
(
share_provider IN VARCHAR2,
owner IN VARCHAR2 := NULL
) RETURN share_available_shares_tbl PIPELINED;
Parameters
Parameter Description
share_provider The name of the share provider.
owner The owner of the share provider. Defaults to the current schema.
Example
AVAILABLE_SHARE_NAME
--------------------------------------------------------------------------------
BURLINGTON_EXPEDITION_2022
EGYPT_EXPEDITION_2022
DISCOVER_AVAILABLE_TABLES Function
Return one SHARE_AVAILABLE_TABLES_ROW for each available table from the subscribed
share providers.
Syntax
FUNCTION DISCOVER_AVAILABLE_TABLES
(
share_provider IN VARCHAR2 := NULL,
share_name IN VARCHAR2 := NULL,
owner IN VARCHAR2 := NULL,
endpoint IN VARCHAR2 := NULL,
credential_name IN VARCHAR2 := NULL
) RETURN share_available_tables_tbl PIPELINED;
Parameters
Parameter Description
share_provider An optional share provider name. If NULL, search all subscribed
share providers.
share_name An optional share name. If NULL, search all discovered shares.
owner The owner of the share provider. Defaults to the current schema.
endpoint An optional delta endpoint.
credential_name An optional bearer token credential to access the endpoint.
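Example
Because the function is pipelined, it can be queried directly; the provider name is
illustrative:
SELECT *
  FROM TABLE(dbms_share.discover_available_tables(
               share_provider => 'MY_PROVIDER'));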
DROP_SHARE_LINK Procedure
Drop a share link created by the CREATE_SHARE_LINK procedure.
Syntax
PROCEDURE DROP_SHARE_LINK
(
link_name IN VARCHAR2,
link_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
link_name The name of the share link to drop.
link_owner The owner of the share link to drop. Leave as null for the current
schema.
DROP_SHARE_PROVIDER Procedure
Drop a subscription to a share provider.
Syntax
PROCEDURE DROP_SHARE_PROVIDER
(
provider_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
drop_credentials IN BOOLEAN := FALSE
);
Parameters
Parameter Description
provider_name The name of the share provider to drop.
owner The owner of the share provider to drop. Defaults to the current
schema.
drop_credentials If TRUE, any credentials associated with the provider are dropped.
If FALSE, credentials are not dropped.
ENABLE_DELTA_ENDPOINT Procedure
Create necessary ACLs that allow the specified user to connect to a delta endpoint.
Admin privileges are required for this procedure.
Syntax
PROCEDURE ENABLE_DELTA_ENDPOINT
(
schema_name IN VARCHAR2,
delta_profile IN CLOB,
enabled IN BOOLEAN := TRUE
);
Parameters
Parameter Description
schema_name The schema to enable or disable.
delta_profile The delta profile. Only the endpoint and tokenEndpoint are required.
enabled TRUE to enable and FALSE to disable.
FLUSH_SHARE_LINK_CACHE Procedure
Flush the cache of shares for a given share link. The list of shares for the remote endpoints is
fetched instead of relying on cached values.
Syntax
PROCEDURE FLUSH_SHARE_LINK_CACHE
(
link_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
link_name The name of the share link.
owner The owner of the share link. Defaults to the current schema.
auto_commit If TRUE, the changes are automatically committed. The default is TRUE.
FLUSH_SHARE_PROVIDER_CACHE Procedure
Flush the cache of shares for a given share provider. The list of shares for the remote
endpoints is fetched instead of relying on cached values.
Syntax
PROCEDURE FLUSH_SHARE_PROVIDER_CACHE
(
provider_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
provider_name The name of the share provider.
owner The owner of the share provider. Defaults to the current schema.
auto_commit If TRUE, the changes are automatically committed. The default is TRUE.
GENERATE_SHARE_LINK_SELECT Procedure and Function
Generate a SELECT statement that returns data from a shared table.
Syntax
Procedure version of GENERATE_SHARE_LINK_SELECT, which returns the generated
statement in the stmt output parameter.
PROCEDURE GENERATE_SHARE_LINK_SELECT
(
share_link_name IN VARCHAR2,
share_schema_name IN VARCHAR2,
share_table_name IN VARCHAR2,
stmt IN OUT NOCOPY CLOB,
share_link_owner IN VARCHAR2 := NULL
);
FUNCTION GENERATE_SHARE_LINK_SELECT
(
share_link_name IN VARCHAR2,
share_schema_name IN VARCHAR2,
share_table_name IN VARCHAR2,
share_link_owner IN VARCHAR2 := NULL
)
RETURN CLOB;
Parameters
Parameter Description
share_link_name The name of the share link.
share_schema_name The name of the shared schema.
share_table_name The name of the shared table.
stmt The generated select statement.
share_link_owner The link owner. Defaults to the current schema.
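As an illustration, the function form can generate and print the SELECT statement for a hypothetical link, schema, and table (DBMS_LOB.SUBSTR is used only to fit the CLOB into DBMS_OUTPUT):
DECLARE
  stmt CLOB;
BEGIN
  stmt := DBMS_SHARE.GENERATE_SHARE_LINK_SELECT(
            share_link_name   => 'MY_SHARE_LINK',
            share_schema_name => 'SHARE_SCHEMA',
            share_table_name  => 'CUSTOMER');
  DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(stmt, 4000, 1));
END;
/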
GET_ORACLE_SHARE_LINK_INFO Function
Retrieve the cloud link name and namespace for an Oracle-to-Oracle share.
Syntax
FUNCTION GET_ORACLE_SHARE_LINK_INFO
(
oracle_provider_id IN VARCHAR2,
share_name IN VARCHAR2,
share_schema_name IN VARCHAR2,
share_table_name IN VARCHAR2
)
RETURN CLOB;
Parameters
Parameter Description
oracle_provider_id The Oracle Provider ID from
ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS.
share_name The name of a share from
ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS.SHARES.
share_schema_name The name of a share schema from
ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS.SHARES.
share_table_name The name of a table from
ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS.SHARES.
Return
The return value is a JSON object containing three properties: schema, table, and dblink. The
caller can use these three properties to fetch data using a query of the following form:
SELECT *
FROM <schema>.<table>@<dblink>
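A minimal sketch, assuming the provider ID and share, schema, and table names have already been looked up in ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS (the literal values below are placeholders), that parses the returned JSON and assembles the query target:
DECLARE
  info JSON_OBJECT_T;
BEGIN
  info := JSON_OBJECT_T.PARSE(
            DBMS_SHARE.GET_ORACLE_SHARE_LINK_INFO(
              oracle_provider_id => 'MY_PROVIDER_ID',
              share_name         => 'MY_SHARE',
              share_schema_name  => 'SHARE_SCHEMA',
              share_table_name   => 'CUSTOMER'));
  DBMS_OUTPUT.PUT_LINE('SELECT * FROM '
    || info.get_string('schema') || '.'
    || info.get_string('table')  || '@'
    || info.get_string('dblink'));
END;
/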
GET_SHARE_LINK_INFO Procedure
Get the endpoint(s), share type and share name along with any additional JSON metadata for
a share link.
Syntax
PROCEDURE GET_SHARE_LINK_INFO
(
link_name IN VARCHAR2,
endpoint IN OUT NOCOPY VARCHAR2,
share_type IN OUT NOCOPY VARCHAR2,
share_name IN OUT NOCOPY VARCHAR2,
token_endpoint IN OUT NOCOPY VARCHAR2,
metadata IN OUT NOCOPY BLOB,
link_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
link_name The name of a share link (previously created by CREATE_SHARE_LINK).
endpoint OUT: The endpoint, if any, associated with the share link.
share_type OUT: The type of share link (for example, DELTA).
share_name OUT: The name of the linked share.
token_endpoint This parameter is no longer used.
metadata OUT: Optional metadata associated with the share link.
link_owner The owner of the share link. Defaults to the current schema.
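For example, the following sketch reads back the details of a hypothetical link MY_SHARE_LINK:
DECLARE
  l_endpoint       VARCHAR2(4000);
  l_share_type     VARCHAR2(128);
  l_share_name     VARCHAR2(128);
  l_token_endpoint VARCHAR2(4000); -- no longer used
  l_metadata       BLOB;
BEGIN
  DBMS_SHARE.GET_SHARE_LINK_INFO(
    link_name      => 'MY_SHARE_LINK',
    endpoint       => l_endpoint,
    share_type     => l_share_type,
    share_name     => l_share_name,
    token_endpoint => l_token_endpoint,
    metadata       => l_metadata);
  DBMS_OUTPUT.PUT_LINE(l_share_type || ' share ' || l_share_name || ' at ' || l_endpoint);
END;
/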
GET_SHARE_PROVIDER_CREDENTIAL Procedure
Get the credential name to be used by the current user when it attempts to access the
given delta share provider.
Syntax
PROCEDURE GET_SHARE_PROVIDER_CREDENTIAL
(
provider_name IN VARCHAR2,
share_credential IN OUT NOCOPY VARCHAR2,
get_token_credential IN OUT NOCOPY VARCHAR2,
owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
provider_name The name of the share provider.
share_credential OUT: The name of the credential associated with the provider. The
credential name is returned without double quotes, as it would
appear in the CREDENTIAL_NAME column of the
USER_CREDENTIALS view.
See ALL_CREDENTIALS View.
get_token_credential This parameter is not used.
owner The owner is the name of the schema where the share provider was
registered, not the owner of the credential. Defaults to the current
schema.
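For example, to print the credential name registered for a hypothetical provider MY_PROVIDER:
DECLARE
  l_cred  VARCHAR2(4000);
  l_token VARCHAR2(4000); -- not used
BEGIN
  DBMS_SHARE.GET_SHARE_PROVIDER_CREDENTIAL(
    provider_name        => 'MY_PROVIDER',
    share_credential     => l_cred,
    get_token_credential => l_token);
  DBMS_OUTPUT.PUT_LINE(l_cred);
END;
/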
GET_SHARE_PROVIDER_INFO Procedure
Get the endpoint string(s) and share type along with any additional JSON metadata for
a share provider. For ORACLE share providers, the Oracle provider ID is returned in
the endpoint argument.
Syntax
PROCEDURE GET_SHARE_PROVIDER_INFO
(
provider_name IN VARCHAR2,
endpoint IN OUT NOCOPY VARCHAR2,
share_type IN OUT NOCOPY VARCHAR2,
token_endpoint IN OUT NOCOPY VARCHAR2,
metadata IN OUT NOCOPY BLOB,
owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
provider_name The name of the share provider.
endpoint The delta endpoint.
share_type The share type: DELTA or ORACLE.
token_endpoint This parameter is not used.
metadata The optional metadata that was associated with the share provider.
owner The owner of the share provider. Defaults to the current schema.
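For example (the provider name is hypothetical):
DECLARE
  l_endpoint       VARCHAR2(4000);
  l_share_type     VARCHAR2(128);
  l_token_endpoint VARCHAR2(4000); -- not used
  l_metadata       BLOB;
BEGIN
  DBMS_SHARE.GET_SHARE_PROVIDER_INFO(
    provider_name  => 'MY_PROVIDER',
    endpoint       => l_endpoint,
    share_type     => l_share_type,
    token_endpoint => l_token_endpoint,
    metadata       => l_metadata);
  DBMS_OUTPUT.PUT_LINE(l_share_type || ' provider at ' || l_endpoint);
END;
/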
GET_SHARING_ID Function
Return an identifier that can be used as the sharing_id in the CREATE_SHARE_RECIPIENT
procedure. This function can be used to share data between two users, the "provider" and the
"recipient", in different databases.
See CREATE_SHARE_RECIPIENT Procedure for further information.
Syntax
FUNCTION GET_SHARING_ID
(
sharing_id_type IN VARCHAR2 := SHARING_ID_TYPE_DATABASE
)
RETURN VARCHAR2;
Parameters
Parameter Description
sharing_id_type The type of sharing ID.
Usage
The flow is as follows:
1. The recipient calls DBMS_SHARE.GET_SHARING_ID to get a unique identifier.
2. The recipient sends this identifier (e.g. via email) to the provider.
3. The provider calls DBMS_SHARE.CREATE_SHARE_RECIPIENT, passing in the identifier as
sharing_id.
4. The provider calls DBMS_SHARE.GRANT_TO_RECIPIENT to give the recipient access to
shared data.
The sharing_id_type parameter specifies which database users can access the share after
following the above sequence; a short sketch of the flow appears after the list below.
• DATABASE The share will be visible to any admin user in the database where
GET_SHARING_ID was called.
• COMPARTMENT The share will be visible to any admin user in any database in the
same compartment where GET_SHARING_ID was called.
• TENANCY The share will be visible to any admin user in any database in the
same tenancy where GET_SHARING_ID was called.
• REGION The share will be visible to any admin user in any database in the same
region where GET_SHARING_ID was called.
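A minimal sketch of the flow. It assumes a SHARING_ID_TYPE_TENANCY package constant parallel to the documented SHARING_ID_TYPE_DATABASE default, and the recipient and share names (and the recipient_name parameter of CREATE_SHARE_RECIPIENT and GRANT_TO_RECIPIENT) are hypothetical:
-- On the recipient database: obtain an identifier scoped to the tenancy.
DECLARE
  id VARCHAR2(4000);
BEGIN
  id := DBMS_SHARE.GET_SHARING_ID(DBMS_SHARE.SHARING_ID_TYPE_TENANCY);
  DBMS_OUTPUT.PUT_LINE(id); -- send this value to the provider, e.g. via email
END;
/
-- On the provider database: register the recipient with that identifier, then grant access.
BEGIN
  DBMS_SHARE.CREATE_SHARE_RECIPIENT(
    recipient_name => 'MY_RECIPIENT',
    sharing_id     => '...'); -- the identifier sent by the recipient
  DBMS_SHARE.GRANT_TO_RECIPIENT(
    share_name     => 'MY_SHARE',
    recipient_name => 'MY_RECIPIENT');
END;
/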
OPEN_SHARE_LINK_CURSOR Procedure
Open a cursor that returns data from a shared table.
Syntax
PROCEDURE OPEN_SHARE_LINK_CURSOR
(
share_link_name IN VARCHAR2,
share_schema_name IN VARCHAR2,
share_table_name IN VARCHAR2,
table_cursor IN OUT NOCOPY SYS_REFCURSOR,
share_link_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
share_link_name The name of the share link.
share_schema_name The name of the shared schema.
share_table_name The name of the shared table.
table_cursor The cursor.
share_link_owner The link owner. Defaults to current schema.
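For example, the following sketch opens a cursor over a hypothetical shared table and fetches from it, assuming the table exposes a single NAME column:
DECLARE
  c      SYS_REFCURSOR;
  l_name VARCHAR2(4000);
BEGIN
  DBMS_SHARE.OPEN_SHARE_LINK_CURSOR(
    share_link_name   => 'MY_SHARE_LINK',
    share_schema_name => 'SHARE_SCHEMA',
    share_table_name  => 'CUSTOMER',
    table_cursor      => c);
  LOOP
    FETCH c INTO l_name;
    EXIT WHEN c%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(l_name);
  END LOOP;
  CLOSE c;
END;
/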
REFRESH_BEARER_TOKEN_CREDENTIAL Procedure
Update one or more credentials created by CREATE_BEARER_TOKEN_CREDENTIAL or
CREATE_CREDENTIALS by calling the registered token endpoints and fetching new
bearer tokens. Note this procedure is called automatically by a scheduler job,
ADP$BEARER_REFRESH_JOB, that runs every 50 minutes.
Syntax
PROCEDURE REFRESH_BEARER_TOKEN_CREDENTIAL
(
credential_name IN VARCHAR2 := NULL
);
Parameters
Parameter Description
credential_name The name of the credential to refresh.
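For example, to refresh a single credential immediately rather than waiting for the scheduler job (the credential name is hypothetical):
BEGIN
  DBMS_SHARE.REFRESH_BEARER_TOKEN_CREDENTIAL(
    credential_name => 'MY_RENEWABLE_CREDENTIAL');
END;
/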
RENAME_CLOUD_STORAGE_LINK Procedure
Rename a registered cloud storage link.
Syntax
PROCEDURE RENAME_CLOUD_STORAGE_LINK
(
old_name IN VARCHAR2,
new_name IN VARCHAR2,
owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := TRUE
);
Parameters
Parameter Description
old_name The current name of the cloud storage link.
new_name The new name of the cloud storage link.
owner The owner of the cloud storage link. Defaults to the current schema.
auto_commit If TRUE, the changes are automatically committed. The default is TRUE.
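For example (both names hypothetical):
BEGIN
  DBMS_SHARE.RENAME_CLOUD_STORAGE_LINK(
    old_name => 'MY_STORAGE_LINK',
    new_name => 'MY_STORAGE_LINK_V2');
END;
/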
RENAME_SHARE_PROVIDER Procedure
Rename a registered share provider.
Syntax
PROCEDURE RENAME_SHARE_PROVIDER
(
old_name IN VARCHAR2,
new_name IN VARCHAR2,
owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
old_name The current name of the share provider.
new_name The new name of the share provider.
owner The owner of the share provider. Defaults to the current schema.
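For example (both names hypothetical):
BEGIN
  DBMS_SHARE.RENAME_SHARE_PROVIDER(
    old_name => 'MY_PROVIDER',
    new_name => 'MY_PROVIDER_V2');
END;
/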
REMOVE_SHARE_SCHEMA Procedure
Remove a schema and all its contents from a share.
Syntax
PROCEDURE REMOVE_SHARE_SCHEMA
(
share_name IN VARCHAR2,
schema_name IN VARCHAR2,
share_owner IN VARCHAR2 := NULL,
auto_commit IN BOOLEAN := FALSE
);
Parameters
Parameter Description
share_name The name of the share.
schema_name The name of the schema to remove.
share_owner The owner of the share.
auto_commit If TRUE, the changes are automatically committed. The default is
FALSE.
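For example, to remove a hypothetical schema from a share and commit explicitly (note that auto_commit defaults to FALSE for this procedure, unlike most others in this package):
BEGIN
  DBMS_SHARE.REMOVE_SHARE_SCHEMA(
    share_name  => 'MY_SHARE',
    schema_name => 'SHARE_SCHEMA');
  COMMIT;
END;
/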
SET_SHARE_LINK_METADATA Procedure
Set the additional JSON metadata for the share link.
Syntax
PROCEDURE SET_SHARE_LINK_METADATA
(
link_name IN VARCHAR2,
metadata IN SYS.JSON_OBJECT_T,
replace_existing IN BOOLEAN := FALSE,
link_owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
link_name The name of the share link.
metadata The new metadata.
replace_existing If TRUE, then all existing metadata is replaced by the new version. If
FALSE, then the new value is merged with any existing metadata.
link_owner The owner of the share link. Defaults to the current schema.
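For example, the following sketch merges one extra property into the metadata of a hypothetical link; replace_existing is left at its FALSE default, so existing keys are preserved:
DECLARE
  md SYS.JSON_OBJECT_T := SYS.JSON_OBJECT_T();
BEGIN
  md.put('department', 'SALES');
  DBMS_SHARE.SET_SHARE_LINK_METADATA(
    link_name => 'MY_SHARE_LINK',
    metadata  => md);
END;
/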
SET_SHARE_PROVIDER_CREDENTIAL Procedure
Set the credential name to be used by the current user when it attempts to access the given
share provider.
Syntax
PROCEDURE SET_SHARE_PROVIDER_CREDENTIAL
(
provider_name IN VARCHAR2,
share_credential IN VARCHAR2,
get_token_credential IN VARCHAR2 := NULL,
owner IN VARCHAR2 := NULL,
check_if_exists IN BOOLEAN := TRUE
);
Parameters
Parameter Description
provider_name The name of the share provider.
share_credential The bearer token credential.
get_token_credential This argument is ignored.
owner The owner of the share provider. Defaults to the current schema.
check_if_exists If TRUE (default), then validate that the credential exists.
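For example, to associate a previously created bearer token credential with a hypothetical provider:
BEGIN
  DBMS_SHARE.SET_SHARE_PROVIDER_CREDENTIAL(
    provider_name    => 'MY_PROVIDER',
    share_credential => 'MY_RENEWABLE_CREDENTIAL');
END;
/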
SET_SHARE_PROVIDER_METADATA Procedure
Set additional JSON metadata for the share provider.
Syntax
PROCEDURE SET_SHARE_PROVIDER_METADATA
(
provider_name IN VARCHAR2,
metadata IN SYS.JSON_OBJECT_T,
replace_existing IN BOOLEAN := FALSE,
owner IN VARCHAR2 := NULL
);
Parameters
Parameter Description
provider_name The name of the share provider.
metadata The new metadata.
replace_existing If TRUE, then all existing metadata is replaced by the new version. If
FALSE, the new value is merged with any existing metadata.
owner The owner of the share provider. Defaults to the current schema.
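The call parallels SET_SHARE_LINK_METADATA; for example (the provider name and property are hypothetical):
DECLARE
  md SYS.JSON_OBJECT_T := SYS.JSON_OBJECT_T();
BEGIN
  md.put('contact', 'share.admin@example.com');
  DBMS_SHARE.SET_SHARE_PROVIDER_METADATA(
    provider_name => 'MY_PROVIDER',
    metadata      => md);
END;
/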
UPDATE_BEARER_TOKEN_CREDENTIAL Procedure
Modify an attribute of a credential created by CREATE_CREDENTIALS or
CREATE_BEARER_TOKEN_CREDENTIAL.
Syntax
PROCEDURE UPDATE_BEARER_TOKEN_CREDENTIAL
(
credential_name IN VARCHAR2,
attribute IN VARCHAR2,
new_value IN VARCHAR2
);
Parameters
Parameter Description
credential_name The name of the credential to update.
attribute The attribute to update. One of 'BEARER_TOKEN', 'CLIENT_ID',
'CLIENT_SECRET', 'TOKEN_REFRESH_RATE'. The token endpoint can
not be changed.
new_value The new value.
Example
SQL> BEGIN
2 dbms_share.create_bearer_token_credential(
3 credential_name=>'MY_RENEWABLE_CREDENTIAL',
4 token_endpoint=>'https://myserver/ords/share_provider/oauth/token',
5 client_id=>'VXGQ_44s6qJ-K4WHUNM2yQ..',
6 client_secret=>'y9ddppgwEmZl7adDHFQndw..');
7 END;
8 /
PL/SQL procedure successfully completed.
SQL> select credential_name, username from user_credentials where credential_name LIKE 'MY_RENEWABLE_CREDENTIAL%';
CREDENTIAL_NAME
------------------------------------------
USERNAME
-------------------------------------
MY_RENEWABLE_CREDENTIAL
BEARER_TOKEN
MY_RENEWABLE_CREDENTIAL$TOKEN_REFRESH_CRED
ABCDEF
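An attribute of the credential created above can then be changed with a call of the following form; the refresh-rate value here is a placeholder:
BEGIN
  DBMS_SHARE.UPDATE_BEARER_TOKEN_CREDENTIAL(
    credential_name => 'MY_RENEWABLE_CREDENTIAL',
    attribute       => 'TOKEN_REFRESH_RATE',
    new_value       => '3600');
END;
/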
View Description
ALL_OBJECT_SHAREABILITY View This view gives information about the shareability, or
otherwise, of tables and views visible to the user.
ALL_SHAREABLE_OBJECTS View This view lists all of the tables or views that can be
added to a share.
ALL_SHARES View This view lists all shares created using CREATE_SHARE.
ALL_SHARE_DEFAULT_SETTINGS View A view containing the default settings for shares and
recipients.
ALL_SHARE_EVENTS View This view lists all log events associated with shares.
ALL_SHARE_FILES View A view that contains a list of all the parquet files
generated by shares.
ALL_SHARE_RECIPIENTS View A view containing all share recipients created using
CREATE_SHARE_RECIPIENT.
ALL_SHARE_RECIPIENT_EVENTS View This view contains records of endpoint requests by
delta share consumers.
ALL_SHARE_RECIPIENT_GRANTS View This view contains one row for each share/recipient
combination as specified by calls to
GRANT_TO_RECIPIENT.
ALL_SHARE_SCHEMAS View This view lists all the share schemas that were added to
shares using ADD_TO_SHARE.
ALL_SHARE_TABLES View This view lists all tables or views that were added to
shares using ADD_TO_SHARE.
ALL_SHARE_VERSIONS View A view listing the different versions of shares.
DBA_SHARE_DETACHED_FILES View This view contains a list of parquet files that were
generated by DBMS_SHARE but that have become
detached in some way.
DBA_SHARE_EVENTS View This view lists all log events associated with
shares.
DBA_SHARE_FILES View A view that contains a list of all the parquet files
generated by shares.
DBA_SHARE_JOB_SEGMENTS View List segments (tables, table partitions, or table
sub-partitions) that are being actively processed
by running share jobs.
DBA_SHARE_JOBS View List share jobs associated with the current user.
DBA_SHARE_RECIPIENT_EVENTS View This view contains records of endpoint requests by
delta share consumers.
DBA_SHARE_RECIPIENT_GRANTS View This view contains one row for each share/recipient
combination as specified by calls to
GRANT_TO_RECIPIENT.
DBA_SHARE_RECIPIENTS View A view containing all share recipients created using
CREATE_SHARE_RECIPIENT.
DBA_SHARE_SCHEMAS View This view lists all the share schemas that were added to
shares using ADD_TO_SHARE.
DBA_SHARE_TABLES View This view lists all tables or views that were added to
shares using ADD_TO_SHARE.
DBA_SHARE_VERSIONS View A view listing the different versions of shares.
DBA_SHARES View This view lists all shares created using
DBMS_SHARE.CREATE_SHARE.
USER_SHARE_DETACHED_FILES View This view contains a list of parquet files that were
generated by DBMS_SHARE but that have become
detached in some way.
USER_SHARE_EVENTS View This view lists all log events associated with
shares.
USER_SHARE_FILES View A view that contains a list of all the parquet files
generated by shares.
USER_SHARE_JOBS View List share jobs associated with the current user.
USER_SHARE_JOB_SEGMENTS View List segments (tables, table partitions, or table sub-
partitions) that are being actively processed by running
share jobs.
USER_SHARE_RECIPIENT_EVENTS View This view contains records of endpoint requests by
delta share consumers.
USER_SHARE_RECIPIENT_GRANTS View This view contains one row for each share/recipient
combination as specified by calls to
GRANT_TO_RECIPIENT.
USER_SHARE_RECIPIENTS View A view containing all share recipients created using
CREATE_SHARE_RECIPIENT.
USER_SHARE_SCHEMAS View This view lists all the share schemas that were added to
shares using ADD_TO_SHARE.
USER_SHARE_TABLES View This view lists all tables or views that were added to
shares using ADD_TO_SHARE.
USER_SHARE_VERSIONS View A view listing the different versions of shares.
USER_SHAREABLE_OBJECTS View This view lists all of the tables or views that can be
added to a share.
USER_SHARES View This view lists all shares created using
DBMS_SHARE.CREATE_SHARE.
• ALL_OBJECT_SHAREABILITY View
This view gives information about the shareability, or otherwise, of tables and views
visible to the user.
• ALL_SHAREABLE_OBJECTS View
This view lists all of the tables or views that can be added to a share. Objects that
are not shareable may be listed in ALL_OBJECT_SHAREABILITY along with a reason
explaining why they are excluded.
• ALL_SHARES View
This view lists all shares created using DBMS_SHARE.CREATE_SHARE.
• ALL_SHARE_DEFAULT_SETTINGS View
A view containing the default settings for shares and recipients.
• ALL_SHARE_EVENTS View
This view lists all log events associated with shares.
• ALL_SHARE_FILES View
A view that contains a list of all the parquet files generated by shares.
• ALL_SHARE_RECIPIENTS View
A view containing all share recipients created using CREATE_SHARE_RECIPIENT.
• ALL_SHARE_RECIPIENT_EVENTS View
This view contains records of endpoint requests by delta share consumers.
• ALL_SHARE_RECIPIENT_GRANTS View
This view contains one row for each share/recipient combination as specified by calls to
GRANT_TO_RECIPIENT.
• ALL_SHARE_SCHEMAS View
This view lists all the share schemas that were added to shares using ADD_TO_SHARE.
• ALL_SHARE_TABLES View
This view lists all tables or views that were added to shares using ADD_TO_SHARE.
• ALL_SHARE_VERSIONS View
A view listing the different versions of shares. For VERSIONED shares, each row
represents a particular set of parquet files. For LIVE shares, each row represents a
different list of shared objects.
• DBA_SHARE_DETACHED_FILES View
This view contains a list of parquet files that were generated by DBMS_SHARE but that have
become detached in some way.
• DBA_SHARE_EVENTS View
This view lists all log events associated with shares. Its columns are the same as those in
the ALL_SHARE_EVENTS view.
• DBA_SHARE_FILES View
A view that contains a list of all the parquet files generated by shares. Its columns are the
same as those in the ALL_SHARE_FILES view.
• DBA_SHARE_JOB_SEGMENTS View
List segments (tables, table partitions, or table sub-partitions) that are being actively
processed by running share jobs. Its columns are the same as those in the
USER_SHARE_JOB_SEGMENTS view.
• DBA_SHARE_JOBS View
List share jobs associated with the current user. Its columns are the same as those in the
USER_SHARE_JOBS view with the addition of an OWNER column.
• DBA_SHARE_RECIPIENT_EVENTS View
This view contains records of endpoint requests by delta share consumers. Its columns
are the same as those in the ALL_SHARE_RECIPIENT_EVENTS view.
• DBA_SHARE_RECIPIENT_GRANTS View
This view contains one row for each share/recipient combination as specified by calls to
GRANT_TO_RECIPIENT. Its columns are the same as those in the
ALL_SHARE_RECIPIENT_GRANTS view.
• DBA_SHARE_RECIPIENTS View
A view containing all share recipients created using CREATE_SHARE_RECIPIENT. Its
columns are the same as those in the ALL_SHARE_RECIPIENTS view.
• DBA_SHARE_SCHEMAS View
This view lists all the share schemas that were added to shares using ADD_TO_SHARE.
Its columns are the same as those in the ALL_SHARE_SCHEMAS view.
• DBA_SHARE_TABLES View
This view lists all tables or views that were added to shares using ADD_TO_SHARE. Its
columns are the same as those in the ALL_SHARE_TABLES view.
• DBA_SHARE_VERSIONS View
A view listing the different versions of shares. For VERSIONED shares, each row
represents a particular set of parquet files. For LIVE shares, each row represents
a different list of shared objects. Its columns are the same as those in the
ALL_SHARE_VERSIONS view.
• DBA_SHARES View
This view lists all shares created using DBMS_SHARE.CREATE_SHARE. Its columns are
the same as those in the ALL_SHARES view.
• USER_SHARE_DETACHED_FILES View
This view contains a list of parquet files that were generated by DBMS_SHARE but
that have become detached in some way.
• USER_SHARE_EVENTS View
This view lists all log events associated with shares. Its columns are the same as
those in the ALL_SHARE_EVENTS view.
• USER_SHARE_FILES View
A view that contains a list of all the parquet files generated by shares. Its columns
are the same as those in the ALL_SHARE_FILES view.
• USER_SHARE_JOBS View
List share jobs associated with the current user.
• USER_SHARE_JOB_SEGMENTS View
List segments (tables, table partitions, or table sub-partitions) that are being
actively processed by running share jobs.
• USER_SHARE_RECIPIENT_EVENTS View
This view contains records of endpoint requests by delta share consumers. Its
columns are the same as those in the ALL_SHARE_RECIPIENT_EVENTS view.
• USER_SHARE_RECIPIENT_GRANTS View
This view contains one row for each share/recipient combination as specified by
calls to GRANT_TO_RECIPIENT. Its columns are the same as those in the
ALL_SHARE_RECIPIENT_GRANTS view.
• USER_SHARE_RECIPIENTS View
A view containing all share recipients created using CREATE_SHARE_RECIPIENT. Its
columns are the same as those in the ALL_SHARE_RECIPIENTS view.
• USER_SHARE_SCHEMAS View
This view lists all the share schemas that were added to shares using
ADD_TO_SHARE. Its columns are the same as those in the ALL_SHARE_SCHEMAS view.
• USER_SHARE_TABLES View
This view lists all tables or views that were added to shares using ADD_TO_SHARE. Its
columns are the same as those in the ALL_SHARE_TABLES view.
• USER_SHARE_VERSIONS View
A view listing the different versions of shares. For VERSIONED shares, each row
represents a particular set of parquet files. For LIVE shares, each row represents
a different list of shared objects. Its columns are the same as those in the
ALL_SHARE_VERSIONS view.
• USER_SHAREABLE_OBJECTS View
This view lists all of the tables or views that can be added to a share. Objects that
are not shareable may be listed in ALL_OBJECT_SHAREABILITY along with a reason
explaining why they are excluded. Its columns are the same as those in the
ALL_SHAREABLE_OBJECTS view.
• USER_SHARES View
This view lists all shares created using DBMS_SHARE.CREATE_SHARE.
ALL_OBJECT_SHAREABILITY View
This view gives information about the shareability, or otherwise, of tables and views
visible to the user.
ALL_SHAREABLE_OBJECTS View
This view lists all of the tables or views that can be added to a share. Objects that are not
shareable may be listed in ALL_OBJECT_SHAREABILITY along with a reason explaining why
they are excluded.
ALL_SHARES View
This view lists all shares created using DBMS_SHARE.CREATE_SHARE.
ALL_SHARE_DEFAULT_SETTINGS View
A view containing the default settings for shares and recipients.
These can be modified (by ADMIN) using UPDATE_DEFAULT_SHARE_PROPERTY and
UPDATE_DEFAULT_RECIPIENT_PROPERTY. Note that there is no DBA or USER version of this
view.
ALL_SHARE_EVENTS View
This view lists all log events associated with shares.
Entries can be removed by deleting the associated share or by calling the
CLEAR_SHARE_EVENTS Procedure.
ALL_SHARE_FILES View
A view that contains a list of all the parquet files generated by shares.
ALL_SHARE_RECIPIENTS View
A view containing all share recipients created using CREATE_SHARE_RECIPIENT.
ALL_SHARE_RECIPIENT_EVENTS View
This view contains records of endpoint requests by delta share consumers.
ALL_SHARE_RECIPIENT_GRANTS View
This view contains one row for each share/recipient combination as specified by calls
to GRANT_TO_RECIPIENT.
ALL_SHARE_SCHEMAS View
This view lists all the share schemas that were added to shares using ADD_TO_SHARE.
ALL_SHARE_TABLES View
This view lists all tables or views that were added to shares using ADD_TO_SHARE.
ALL_SHARE_VERSIONS View
A view listing the different versions of shares. For VERSIONED shares, each row represents
a particular set of parquet files. For LIVE shares, each row represents a different list of
shared objects.
DBA_SHARE_DETACHED_FILES View
This view contains a list of parquet files that were generated by DBMS_SHARE but that
have become detached in some way.
This could happen, for example, if a share or share version is dropped with the
destroy_objects argument set to FALSE. It can also happen if a user with shares is
dropped.
See Also:
"USER_SHARE_DETACHED_FILES View"
DBA_SHARE_EVENTS View
This view lists all log events associated with shares. Its columns are the same as
those in the ALL_SHARE_EVENTS view.
See Also:
"ALL_SHARE_EVENTS View"
DBA_SHARE_FILES View
A view that contains a list of all the parquet files generated by shares. Its columns are
the same as those in the ALL_SHARE_FILES view.
See Also:
"ALL_SHARE_FILES View"
DBA_SHARE_JOB_SEGMENTS View
List segments (tables, table partitions, or table sub-partitions) that are being actively
processed by running share jobs. Its columns are the same as those in the
USER_SHARE_JOB_SEGMENTS view.
See Also:
"USER_SHARE_JOB_SEGMENTS View"
DBA_SHARE_JOBS View
List share jobs associated with the current user. Its columns are the same as those in the
USER_SHARE_JOBS view with the addition of an OWNER column.
See Also:
"USER_SHARE_JOBS View"
DBA_SHARE_RECIPIENT_EVENTS View
This view contains records of endpoint requests by delta share consumers. Its columns are the
same as those in the ALL_SHARE_RECIPIENT_EVENTS view.
See Also:
"ALL_SHARE_RECIPIENT_EVENTS View"
DBA_SHARE_RECIPIENT_GRANTS View
This view contains one row for each share/recipient combination as specified by calls to
GRANT_TO_RECIPIENT. Its columns are the same as those in the
ALL_SHARE_RECIPIENT_GRANTS view.
See Also:
"ALL_SHARE_RECIPIENT_GRANTS View"
DBA_SHARE_RECIPIENTS View
A view containing all share recipients created using CREATE_SHARE_RECIPIENT. Its columns
are the same as those in the ALL_SHARE_RECIPIENTS view.
See Also:
" ALL_SHARE_RECIPIENTS View"
DBA_SHARE_SCHEMAS View
This view lists all the share schemas that were added to shares using ADD_TO_SHARE. Its
columns are the same as those in the ALL_SHARE_SCHEMAS view.
See Also:
"ALL_SHARE_SCHEMAS View"
DBA_SHARE_TABLES View
This view lists all tables or views that were added to shares using ADD_TO_SHARE. Its
columns are the same as those in the ALL_SHARE_TABLES view.
See Also:
"ALL_SHARE_TABLES View"
DBA_SHARE_VERSIONS View
A view listing the different versions of shares. For VERSIONED shares, each row
represents a particular set of parquet files. For LIVE shares, each row represents a
different list of shared objects. Its columns are the same as those in the
ALL_SHARE_VERSIONS view.
See Also:
"ALL_SHARE_VERSIONS View"
DBA_SHARES View
This view lists all shares created using DBMS_SHARE.CREATE_SHARE. Its columns are the
same as those in the ALL_SHARES view.
See Also:
"ALL_SHARES View"
USER_SHARE_DETACHED_FILES View
This view contains a list of parquet files that were generated by DBMS_SHARE but that
have become detached in some way.
This could happen, for example, if a share or share version is dropped with the
destroy_objects argument set to FALSE. It can also happen if a user with shares is
dropped. In that case the files would show up in DBA_SHARE_DETACHED_FILES View.
See Also:
"DBA_SHARE_DETACHED_FILES View"
USER_SHARE_EVENTS View
This view lists all log events associated with shares. Its columns are the same as
those in the ALL_SHARE_EVENTS view.
See Also:
"ALL_SHARE_EVENTS View"
USER_SHARE_FILES View
A view that contains a list of all the parquet files generated by shares. Its columns are the
same as those in the ALL_SHARE_FILES view.
See Also:
"ALL_SHARE_FILES View"
USER_SHARE_JOBS View
List share jobs associated with the current user.
Share jobs are created whenever the database needs to run share processes in the
background. The PUBLISH_SHARE Procedure initiates a share job.
USER_SHARE_JOB_SEGMENTS View
List segments (tables, table partitions, or table sub-partitions) that are being actively
processed by running share jobs.
See Also:
"DBA_SHARE_JOB_SEGMENTS View"
USER_SHARE_RECIPIENT_EVENTS View
This view contains records of endpoint requests by delta share consumers. Its columns
are the same as those in the ALL_SHARE_RECIPIENT_EVENTS view.
See Also:
"ALL_SHARE_RECIPIENT_EVENTS View"
USER_SHARE_RECIPIENT_GRANTS View
This view contains one row for each share/recipient combination as specified by calls
to GRANT_TO_RECIPIENT. Its columns are the same as those in the
ALL_SHARE_RECIPIENT_GRANTS view.
See Also:
"ALL_SHARE_RECIPIENT_GRANTS View"
USER_SHARE_RECIPIENTS View
A view containing all share recipients created using CREATE_SHARE_RECIPIENT. Its columns
are the same as those in the ALL_SHARE_RECIPIENTS view.
See Also:
" ALL_SHARE_RECIPIENTS View"
USER_SHARE_SCHEMAS View
This view lists all the share schemas that were added to shares using ADD_TO_SHARE. Its
columns are the same as those in the ALL_SHARE_SCHEMAS view.
See Also:
"ALL_SHARE_SCHEMAS View"
USER_SHARE_TABLES View
This view lists all tables or views that were added to shares using ADD_TO_SHARE. Its columns
are the same as those in the ALL_SHARE_TABLES view.
See Also:
"ALL_SHARE_TABLES View"
USER_SHARE_VERSIONS View
A view listing the different versions of shares. For VERSIONED shares, each row represents
a particular set of parquet files. For LIVE shares, each row represents a different list of
shared objects. Its columns are the same as those in the ALL_SHARE_VERSIONS view.
See Also:
"ALL_SHARE_VERSIONS View"
USER_SHAREABLE_OBJECTS View
This view lists all of the tables or views that can be added to a share. Objects that are not
shareable may be listed in ALL_OBJECT_SHAREABILITY along with a reason explaining why
they are excluded. Its columns are the same as those in the ALL_SHAREABLE_OBJECTS view.
See Also:
"ALL_SHAREABLE_OBJECTS View"
USER_SHARES View
This view lists all shares created using DBMS_SHARE.CREATE_SHARE.
See Also:
"ALL_SHARES View"
View Description
ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS View This view lists all available Oracle share
providers.
ALL_SHARE_CACHE_JOB_INFO View This view contains information on the
most recent Oracle Live share cache
population job in the current POD.
ALL_SHARE_LINKS View A view containing all share links that were
created using CREATE_SHARE_LINK.
ALL_SHARE_LINK_VIEWS View A view that contains a list of all the views
created to access the shared data using
CREATE_SHARE_LINK_VIEW.
ALL_SHARE_PROVIDERS View A view that contains one row for each
share provider subscription, created by
CREATE_SHARE_PROVIDER.
DBA_SHARE_LINK_VIEWS View A view that contains a list of all the views
created to access the shared data using
CREATE_SHARE_LINK_VIEW.
DBA_SHARE_LINKS View A view containing all share links that were
created using CREATE_SHARE_LINK.
USER_SHARE_LINK_VIEWS View A view that contains a list of all the views
created to access the shared data using
CREATE_SHARE_LINK_VIEW.
USER_SHARE_LINKS View A view containing all share links that were
created using CREATE_SHARE_LINK.
USER_SHARE_PROVIDERS View A view that contains one row for each
share provider subscription, created by
CREATE_SHARE_PROVIDER.
• ALL_AVAILABLE_ORACLE_SHARE_PROVIDERS View