F54691-08
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S.
Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed, or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software," "commercial computer software documentation," or "limited rights
data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation
of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated
software, any programs embedded, installed, or activated on delivered hardware, and modifications of such
programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and
limitations specified in the license contained in the applicable contract. The terms governing the U.S.
Government's use of Oracle cloud services are defined by the applicable contract for such services. No other
rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle®, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
June 2020 1-27
May 2020 1-27
April 2020 1-28
March 2020 1-28
February 2020 1-29
September 2019 1-29
May 2019 1-30
Cloud Concepts 1-30
Features of Oracle NoSQL Database Cloud Service 1-31
Key Features 1-32
Responsibility Model for Oracle NoSQL Database 1-33
Always Free Service 1-35
Functional difference between the NoSQL Cloud Service and On-premise database 1-36
Oracle NoSQL Database Cloud Service Subscription 1-37
Service Limits 1-37
Service Quotas 1-38
Service Events 1-39
Service Metrics 1-41
Data Regions and Associated Service Endpoints 1-42
Plan 1-44
Plan your service 1-44
Developer Overview 1-44
Oracle NoSQL Database Cloud Service Limits 1-46
Estimating Capacity 1-48
Estimating Your Monthly Cost 1-54
Configure 1-55
Configuration tasks for Analytics Integrator 1-55
Accessing Oracle Cloud Object Storage 1-55
Accessing the Oracle Cloud Autonomous Data Warehouse 1-59
Enabling a Compute Instance for Oracle NoSQL Database Cloud Service and ADW and (optionally) Enabling the ADW Database for Object Storage 1-69
Devops 1-74
Deploying Oracle NoSQL Database Cloud Service Table Using Terraform and OCI Resource Manager 1-74
Prerequisites 1-75
Step 1: Create Terraform configuration files for NDCS Table or Index 1-75
Step 2: Where to Store Your Terraform Configurations 1-80
Step 3: Create a Stack from a File 1-81
Step 4: Generate an Execution Plan 1-87
Step 5: Run an Apply Job 1-92
Updating Oracle NoSQL Database Cloud Service Table Using Terraform and OCI Resource Manager 1-99
Step 1: Create Terraform Override files for NoSQL Database Table 1-100
Step 2: Update the Execution Plan 1-101
Step 3: Generate an Execution Plan 1-102
Step 4: Run an Apply Job 1-103
Develop 1-111
Install Analytics Integrator 1-111
Creating a table in the Oracle NoSQL Database Cloud Service 1-112
Install Oracle NoSQL Database Analytics Integrator 1-120
Running the Oracle NoSQL Database Analytics Integrator 1-122
Verifying Data in Oracle Analytics tool 1-134
Using console to create tables 1-141
Using Console to Create Tables in Oracle NoSQL Database Cloud Service 1-141
Inserting Data Into Tables 1-156
Using APIs to create tables 1-158
About Oracle NoSQL Database SDK drivers 1-159
Obtaining a NoSQL Handle 1-161
About Compartments 1-167
Creating Tables and Indexes 1-170
Adding Data 1-183
Using Plugins 1-192
Using IntelliJ Plugin for Development 1-192
Using Eclipse Plugin for Development 1-200
About Oracle NoSQL Database Visual Studio Code Extension 1-201
Designing a Table in Oracle NoSQL Database Cloud Service 1-218
Table Fields 1-218
Primary Keys and Shard Keys 1-220
Time to Live 1-221
Table States and Life Cycles 1-222
Developing in Oracle NoSQL Database Cloud Simulator 1-223
Downloading the Oracle NoSQL Database Cloud Simulator 1-223
Oracle NoSQL Database Cloud Simulator Compared With Oracle NoSQL Database Cloud Service 1-224
Using Oracle NoSQL Database Migrator 1-225
Overview 1-225
Workflow for Oracle NoSQL Database Migrator 1-230
Use Case Demonstrations 1-237
Oracle NoSQL Database Migrator Reference 1-264
Source Configuration Templates 1-264
Sink Configuration Templates 1-299
Transformation Configuration Templates 1-331
Mapping of DynamoDB types to Oracle NoSQL types 1-335
Oracle NoSQL to Parquet Data Type Mapping 1-336
Mapping of DynamoDB table to Oracle NoSQL table 1-337
Troubleshooting the Oracle NoSQL Database Migrator 1-338
Manage 1-341
Using APIs to manage tables 1-341
Reading Data 1-341
Using Queries 1-346
Modifying Tables 1-355
Deleting Data 1-360
Dropping Tables and Indexes 1-366
Using console to manage tables 1-369
Modifying Table Data Using Console 1-369
Managing Table Data Using Console 1-370
Managing Tables and Indexes Using Console 1-371
Monitor 1-380
Monitoring Oracle NoSQL Database Cloud Service 1-380
Oracle NoSQL Database Cloud Service Metrics 1-381
Viewing or Listing Oracle NoSQL Database Cloud Service Metrics 1-390
How to Collect Oracle NoSQL Database Cloud Service Metrics? 1-392
Secure 1-393
About Oracle NoSQL Database Cloud Service Security Model 1-393
Authorization to access OCI resources 1-395
Setting Up Users, Groups, and Policies Using Identity and Access Management 1-396
Setting Up Users, Groups, and Policies Using Identity Domains 1-400
Managing Access to Oracle NoSQL Database Cloud Service Tables 1-402
Accessing NoSQL Tables Across Tenancies 1-402
Giving Another User Permission to Manage NoSQL Tables 1-404
Typical Policy Statements to Manage Tables 1-404
Reference 1-405
References for Analytics Integrator 1-406
Known issues with Oracle NoSQL Database Analytics Integrator 1-406
Failure handling in Oracle NoSQL Database Analytics Integrator 1-409
Reference on NoSQL Database Cloud Service 1-409
Oracle NoSQL Database Cloud Service Reference 1-410
Oracle NoSQL Database Cloud Service Policies Reference 1-418
Known Issues for Oracle NoSQL Database Cloud Service 1-423
Index
1 Oracle NoSQL Database Cloud Service
• Get Started
• Overview
• Configure
• Plan
• Devops
• Develop
• Manage
• Monitor
• Secure
• Reference
Get Started
• Getting Started with Oracle NoSQL Database Cloud Service
• Getting started with Oracle NoSQL Database Analytics Integrator
To view help for the current page, click the help icon at the top of the page.
Creating a Compartment
When you sign up for Oracle Cloud Infrastructure, Oracle creates your tenancy with a
root compartment that holds all your cloud resources. You then create additional
compartments within the tenancy (root compartment) and corresponding policies to
control access to the resources in each compartment. Before you create an Oracle
NoSQL Database Cloud Service table, Oracle recommends that you set up the
compartment where you want the table to belong.
You create compartments in Oracle Cloud Infrastructure Identity and Access Management
(IAM). See Setting Up Your Tenancy and Managing Compartments in Oracle Cloud
Infrastructure Documentation.
Tip:
Make a note of the location of your private key, optional pass phrase, and
fingerprint of the public key while generating and uploading the API Signing
Key.
After performing the tasks discussed above, collect the credential information and provide it to your application.
Providing the Credentials to your Application:
The Oracle NoSQL Database SDKs allow you to provide the credentials to an application in multiple ways. The SDKs support a configuration file as well as one or more interfaces that allow direct specification of the information. See the documentation for the programming-language driver that you are using for details on the specific credentials interfaces.
If you are using a configuration file, the default location is ~/.oci/config, although the SDKs allow the file to reside in alternative locations. Its content looks like this:
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaas...7ap
fingerprint=d1:b2:32:53:d3:5f:cf:68:2d:6f:8b:5f:77:8f:07:13
key_file=~/.oci/oci_api_key_private.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaap...keq
pass_phrase=mysecretphrase
The [DEFAULT] line indicates that the lines that follow specify the DEFAULT profile. A
configuration file can include multiple profiles, prefixed with [PROFILE_NAME]. For
example:
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaas...7us
fingerprint=d1:b2:32:53:d3:5f:cf:68:2d:6f:8b:5f:77:8f:07:15
key_file=~/.oci/oci_api_key_private.pem
tenancy=ocid1.tenancy.oc1..aaaaabbap...keq
pass_phrase=mysecretphrase
[MYPROFILE]
user=ocid1.user.oc1..aaaaaaaas...7ap
fingerprint=d1:b2:32:53:d3:5f:cf:68:2d:6f:8b:5f:77:8f:07:13
key_file=~/.oci/oci_api_key_private.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaap...keq
pass_phrase=mysecretphrase
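The profile structure shown above can be illustrated with a small stand-alone parser. This is a hedged sketch using only the JDK, not the SDK's actual configuration reader; it assumes only the simple [PROFILE] and key=value layout shown in the examples:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OciConfigParser {

    // Parses OCI-style config text into profile -> (key -> value) maps.
    public static Map<String, Map<String, String>> parse(String text) {
        Map<String, Map<String, String>> profiles = new LinkedHashMap<>();
        String current = null;
        for (String line : text.split("\\R")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            if (line.startsWith("[") && line.endsWith("]")) {
                // A [PROFILE_NAME] line starts a new profile section.
                current = line.substring(1, line.length() - 1);
                profiles.put(current, new LinkedHashMap<>());
            } else if (current != null) {
                int eq = line.indexOf('=');
                if (eq > 0) {
                    profiles.get(current)
                            .put(line.substring(0, eq).trim(),
                                 line.substring(eq + 1).trim());
                }
            }
        }
        return profiles;
    }

    public static void main(String[] args) {
        String cfg = "[DEFAULT]\n"
            + "user=ocid1.user.oc1..aaaaaaaas...7us\n"
            + "tenancy=ocid1.tenancy.oc1..aaaaabbap...keq\n"
            + "[MYPROFILE]\n"
            + "user=ocid1.user.oc1..aaaaaaaas...7ap\n";
        Map<String, Map<String, String>> p = parse(cfg);
        System.out.println(p.keySet());                     // [DEFAULT, MYPROFILE]
        System.out.println(p.get("MYPROFILE").get("user")); // ocid1.user.oc1..aaaaaaaas...7ap
    }
}
```

The point of the sketch is the lookup model: the SDK selects one profile (DEFAULT unless told otherwise) and reads its keys; other profiles in the same file are ignored.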
An instance authenticates by using certificates that are added to the instance. These certificates are automatically created, assigned to instances, and rotated.
Using instance principals authentication, you can authorize an instance to make API calls on
Oracle Cloud Infrastructure services. After you set up the required resources and policies, an
application running on an instance can call Oracle Cloud Infrastructure public services,
removing the need to configure user credentials or a configuration file. Instance principal
authentication can be used from an instance where you don't want to store a configuration
file.
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
You can connect to NDCS using one of the following methods:
• Directly providing credentials in the code:
// Connect using an instance principal:
SignatureProvider authProvider =
    SignatureProvider.createWithInstancePrincipal();

// Or connect using a resource principal:
SignatureProvider authProvider =
    SignatureProvider.createWithResourcePrincipal();
Creating a handle:
You create a handle to access the cloud service in the us-ashburn-1 region.
At this point, your handle is set up to perform data operations. See SignatureProvider for
more details on the Java classes used.
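The handle creation described above can be sketched as follows. This is a hedged example, not the documentation's own listing: it assumes the Java SDK's oracle.nosql.driver classes and the no-argument SignatureProvider constructor, which reads the DEFAULT profile from ~/.oci/config.

```java
import oracle.nosql.driver.NoSQLHandle;
import oracle.nosql.driver.NoSQLHandleConfig;
import oracle.nosql.driver.NoSQLHandleFactory;
import oracle.nosql.driver.iam.SignatureProvider;

public class ConnectExample {
    public static void main(String[] args) throws Exception {
        // Credentials from the DEFAULT profile in ~/.oci/config.
        SignatureProvider authProvider = new SignatureProvider();

        // The region identifier doubles as the endpoint.
        NoSQLHandleConfig config = new NoSQLHandleConfig("us-ashburn-1");
        config.setAuthorizationProvider(authProvider);

        NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);
        // ... perform data operations with the handle ...
        handle.close();
    }
}
```

Close the handle when the application no longer needs it; it holds network resources.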
Python
You can connect to NDCS using one of the following methods:
• Directly providing credentials in the code:
# Use the default configuration file and profile:
at_provider = SignatureProvider()

# Or connect using an instance principal (my_region is a region
# constant such as Regions.US_ASHBURN_1):
at_provider = SignatureProvider.create_with_instance_principal(region=my_region)

# Or connect using a resource principal:
at_provider = SignatureProvider.create_with_resource_principal()
Creating a handle:
The first step in any Oracle NoSQL Database Cloud Service application is to create a handle used to send requests to the service. The handle is configured using your credentials and other authentication information, as well as the endpoint to which the application will connect. For example, you can specify the region constant Regions.US_ASHBURN_1 as the endpoint.
Go
You can connect your application to NDCS using any of the following methods:
• Directly providing credentials in the code:
privateKeyFile := "/path/to/privateKeyFile"
passphrase := "examplepassphrase"
sp, err := iam.NewRawSignatureProvider("ocid1.tenancy.oc1..tenancy",
    "ocid1.user.oc1..user",
    "us-ashburn-1",
    "fingerprint",
    "compartmentID",
    privateKeyFile,
    &passphrase)
if err != nil {
    return
}
cfg := nosqldb.Config{
    AuthorizationProvider: sp,
    // This is only required if the "region" property is not
    // specified in the config file.
    Region: "us-ashburn-1",
}
client, err := nosqldb.NewClient(cfg)

// Alternatively, supply the credentials in ~/.oci/config and
// configure only the region:
cfg := nosqldb.Config{
    // This is only required if the "region" property is not
    // specified in ~/.oci/config.
    // This takes precedence over the "region" property when both
    // are specified.
    Region: "us-ashburn-1",
}
client, err := nosqldb.NewClient(cfg)
sp, err := iam.NewSignatureProviderWithInstancePrincipal("compartment_id")
if err != nil {
    return
}
cfg := nosqldb.Config{
    AuthorizationProvider: sp,
    Region: "us-ashburn-1",
}
client, err := nosqldb.NewClient(cfg)
sp, err := iam.NewSignatureProviderWithResourcePrincipal("compartment_id")
if err != nil {
return
}
cfg := nosqldb.Config{
AuthorizationProvider: sp,
Region: "us-ashburn-1",
}
client, err := nosqldb.NewClient(cfg)
Node.js
You can connect to NDCS using one of the following methods:
• Directly providing credentials in the code:
You may specify the credentials directly as part of the auth.iam property in the initial configuration when creating the NoSQLClient instance. Alternatively, you can store the credentials in an OCI configuration file; the default location is ~/.oci/config:
[DEFAULT]
tenancy=<your-tenancy-ocid>
user=<your-user-ocid>
fingerprint=<fingerprint-of-your-public-key>
key_file=<path-to-your-private-key-file>
pass_phrase=<your-private-key-passphrase>
region=<your-region-identifier>
Note that you may also specify your region identifier together with the credentials in the OCI configuration file. The driver looks in the location above by default, and if a region is provided together with the credentials, you do not need to provide an initial configuration and can use the no-argument constructor of NoSQLClient.
You may also use a JSON config file with the same configuration as described above. Note that when using Instance Principal you must specify the compartment id (OCID) as the compartment property. This is required even if you wish to use the default compartment. Note that you must use the compartment id, not the compartment name or path. In addition, when using Instance Principal, you may not prefix the table name with the compartment name or path when calling NoSQLClient APIs.
• Connecting using a Resource Principal:
Resource Principal is an IAM service feature that enables resources to be authorized actors (or principals) that perform actions on service resources.
Once set up, create NoSQLClient instance as follows:
You may also use a JSON config file with the same configuration as described above. Note that when using Resource Principal you must specify the compartment id (OCID) as the compartment property. This is required even if you wish to use the default compartment. Note that you must use the compartment id, not the compartment name or path. In addition, when using Resource Principal, you may not prefix the table name with the compartment name or path when calling NoSQLClient APIs.
C#
You can connect to NDCS using one of the following methods:
• Directly providing credentials in the code:
You may specify credentials directly as IAMCredentials when creating
IAMAuthorizationProvider. Create NoSQLClient as follows:
You can store the credentials in an Oracle Cloud Infrastructure configuration file. The default path for the configuration file is ~/.oci/config, where ~ stands for the user's home directory. On Windows, ~ is the value of the USERPROFILE environment variable. The file may contain multiple profiles. By default, the SDK uses the profile named DEFAULT to store the credentials.
To use these default values, create a file named config in the ~/.oci directory with the following contents:
[DEFAULT]
tenancy=<your-tenancy-ocid>
user=<your-user-ocid>
fingerprint=<fingerprint-of-your-public-key>
key_file=<path-to-your-private-key-file>
pass_phrase=<your-private-key-passphrase>
region=<your-region-identifier>
Note that you may also specify your region identifier together with the credentials in the OCI configuration file. By default, the driver looks for the credentials and the region in the OCI configuration file at the default path and in the default profile. Thus, if you provide the region together with the credentials as shown above, you can create a NoSQLClient instance without passing any configuration.
Alternatively, you may specify the region (as well as other properties) in NoSQLConfig.
An instance has its own identity, and it authenticates using the certificates that are added to it.
Once set up, create the NoSQLClient instance as follows:
var client = new NoSQLClient(new NoSQLConfig
{
    Compartment = "ocid1.compartment.oc1.............................",
    AuthorizationProvider =
        IAMAuthorizationProvider.CreateWithInstancePrincipal()
});
{
    "Region": "<your-service-region>",
    "AuthorizationProvider":
    {
        "AuthorizationType": "IAM",
        "UseInstancePrincipal": true
    },
    "Compartment": "ocid1.compartment.oc1............................."
}
Note that when using Instance Principal you must specify the compartment id (OCID) as the compartment property. This is required even if you wish to use the default compartment. Note that you must use the compartment id, not the compartment name or path. In addition, when using Instance Principal, you may not prefix the table name with the compartment name or path when calling NoSQLClient APIs.
• Connecting using a Resource Principal:
Resource Principal is an IAM service feature that enables resources to be authorized actors (or principals) that perform actions on service resources. Once set up, create the NoSQLClient instance as follows:
{
    "Region": "<your-service-region>",
    "AuthorizationProvider":
    {
        "AuthorizationType": "IAM",
        "UseResourcePrincipal": true
    },
    "Compartment": "ocid1.compartment.oc1............................."
}
Note that when using Resource Principal you must specify the compartment id (OCID) as the compartment property. This is required even if you wish to use the default compartment. Note that you must use the compartment id, not the compartment name or path. In addition, when using Resource Principal, you may not prefix the table name with the compartment name or path when calling NoSQLClient APIs.
Spring Data
You can use one of these methods to connect to the Oracle NoSQL Database Cloud Service.
1. Pass a SignatureProvider instance to the NosqlDbConfig constructor to configure the Spring Data Framework to connect and authenticate with the Oracle NoSQL Database Cloud Service. See SignatureProvider in the Java SDK API Reference.
import oracle.nosql.driver.iam.SignatureProvider;

new SignatureProvider(
    <tenantID>,     // The Oracle Cloud Identifier (OCID) of the tenancy.
    <userID>,       // The Oracle Cloud Identifier (OCID) of a user in the tenancy.
    <fingerprint>,  // The fingerprint of the key pair used for signing.
    new File(<privateKeyFile>), // Full path to the key file.
    passphrase      // Optional. A char[] passphrase for the key, if it is encrypted.
)
2. Use the SignatureProvider with the Instance principal authentication to connect to the
Oracle NoSQL Database Cloud Service. This requires a one-time setup. For more
details, see Instance principal authentication.
SignatureProvider.createWithInstancePrincipal()
3. Use the Cloud Simulator, which requires either an AuthorizationProvider instance from
the NosqlDbConfig class or a helper method such as
NosqlDbConfig.createCloudSimConfig().
com.oracle.nosql.spring.data.NosqlDbFactory.CloudSimProvider.getProvider()
To expose the connection and security parameters to the Oracle NoSQL Database SDK for
Spring Data, you need to create a class that extends the AbstractNosqlConfiguration
class. This provides a NosqlDbConfig Spring bean that describes how to connect to the
Oracle NoSQL Database Cloud Service.
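A minimal configuration class can be sketched as follows. This is a hedged example, not the documentation's own listing: it assumes the com.oracle.nosql.spring.data.config package names, a NosqlDbConfig constructor that accepts a NoSQLHandleConfig, and the no-argument SignatureProvider constructor that reads the DEFAULT profile from ~/.oci/config.

```java
import com.oracle.nosql.spring.data.config.AbstractNosqlConfiguration;
import com.oracle.nosql.spring.data.config.NosqlDbConfig;
import oracle.nosql.driver.NoSQLHandleConfig;
import oracle.nosql.driver.iam.SignatureProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig extends AbstractNosqlConfiguration {

    @Bean
    public NosqlDbConfig nosqlDbConfig() throws java.io.IOException {
        // Credentials from the DEFAULT profile in ~/.oci/config.
        SignatureProvider provider = new SignatureProvider();

        // The region identifier serves as the endpoint.
        NoSQLHandleConfig config = new NoSQLHandleConfig("us-ashburn-1");
        config.setAuthorizationProvider(provider);
        return new NosqlDbConfig(config);
    }
}
```

Spring then injects this NosqlDbConfig bean wherever the repository infrastructure needs a connection.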
Typical Workflow
Typical sequence of tasks to work with Oracle NoSQL Database Cloud Service.
If you're developing applications using Oracle NoSQL Database Cloud Service for the
first time, follow these tasks as a guide.
If you're setting up Oracle NoSQL Database Cloud Service for the first time, see
Setting up Your Service.
Download the Oracle NoSQL Database Analytics Integrator from the Oracle Technology
Network and install it in the desired compute environment. Once installed, you then have all
the classes needed to copy data from the Oracle NoSQL Database Cloud Service to a
database in the Oracle Autonomous Data Warehouse.
Overview
• What's New in Oracle NoSQL Database Cloud Service
• Cloud Concepts
• Features of Oracle NoSQL Database Cloud Service
• Oracle NoSQL Database Cloud Service Subscription
Topics
• May 2023
• December 2022
May 2023
Changes in Terraform script for updating NoSQL table definition: While using Terraform scripts, the table schema can be updated based on a new version of the CREATE TABLE DDL statement instead of an ALTER TABLE statement. That is, to update the definition of a table, you use a new CREATE TABLE statement as the ddl_statement; internally, the compiler parses the DDL, compares it with the existing table definition, generates an equivalent ALTER TABLE statement, and applies it to the table.
Additional features available in the IntelliJ plugin: The IntelliJ plugin for Oracle NoSQL Database offers these new features:
• Add new columns using form-based entry or supply DDL statements
• Drop Columns
• Create Indexes
• Drop Indexes
• Execute DML statements to update, insert, and delete data from a table
Additional features available in the Visual Studio Code extension: The Oracle NoSQL Database Visual Studio (VS) Code extension for Oracle NoSQL Database offers these new features:
• Add new columns using form-based entry or supply DDL statements
• Drop Columns
• Create Indexes
• Drop Indexes
• Execute DML statements to update, insert, and delete data from a table
• Download the Query Result after running the SELECT query into a JSON file
• Download each row of the result obtained after running the SELECT query into a JSON file
December 2022
New Data Region location available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Chicago in North America data region.
See Data Regions and Associated Service URLs.
Migrator utility updates: Enhanced the migrator to support importing CSV files that conform to the RFC 4180 standard. Users can create a NoSQL table that corresponds to CSV file fields either manually or through the migrator. The migrator now supports table creation with on-demand capacity and import/export of child tables in NDCS. Additionally, it provides an option to specify the OCI Object Storage service namespace for valid sources and sinks.
New functionality in the OCI console: The following new functionality has been added in the OCI console:
• Bulk upload of table rows: The Upload Data button on the Table Details page allows bulk uploading of data from a local file into the table, via the browser. The bulk upload feature is intended for loading fewer than a few thousand rows.
• Query execution plan: You can now access the query execution plan for your SQL queries from the OCI console. On the Table Details page, you have a button to view the query execution plan.
September 2022
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in two new regions:
• Italy Northwest (Milan) in EMEA data region.
• Spain Central (Madrid) in EMEA data region.
See Data Regions and Associated Service URLs.
August 2022
New Data Region location available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Mexico Central (Queretaro) in LAD data region.
See Data Regions and Associated Service URLs.
Availability of Child Tables: Table hierarchies (child tables) are available in the cloud. With the availability of table hierarchies, developers have additional flexibility when choosing the best data model to meet their business and application workload requirements. With child tables comes the ability to perform left outer join (nested table) queries.
Migrator utility updates: Enhanced the migrator to support importing files from DynamoDB. The process is simple: export your DynamoDB tables as JSON files to AWS S3, then grab those files and import them into Oracle NoSQL.
June 2022
New Data Region location available: Oracle NoSQL Database Cloud Service is now available in a new region:
• France Central (Paris) in EMEA data region.
See Data Regions and Associated Service URLs.
Format change for JSON output: Added pretty-printed JSON in the query section of the console.
New query driver in the console: Removed the REST-based query driver from the console and replaced it with a JavaScript driver. This adds significantly more functionality to the console when it comes to querying your data.
February 2022
New Data Region location available: Oracle NoSQL Database Cloud Service is now available in a new region:
• South Africa Central (Johannesburg) in EMEA data region.
See Data Regions and Associated Service URLs.
Oracle NoSQL Database Migrator: With this release, the NoSQL Database Migrator supports the functionality listed below:
• Sink for Parquet: export Oracle NoSQL Database table data as Parquet files.
• Sink for Parquet in OCI Object Storage: export Oracle NoSQL Database table data as Parquet files to OCI Object Storage.
• TTL support: export and import of row TTL data.
• New transformation: includeFields.
For more details, see Overview of Oracle NoSQL Database Migrator in Using Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Visual Studio (VS) Code Extension: You can use the new Oracle NoSQL Database Visual Studio (VS) Code extension to browse tables and execute queries on your Oracle NoSQL Database Cloud Service instance or simulator. See About Oracle NoSQL Database Visual Studio Code Extension.
On-demand pricing model: Oracle NoSQL Database Cloud Service added an on-demand pricing model. With this model, the service automatically scales the read and write capacities to meet dynamic workload needs. Customers don't need to provision the read or write capacities for each table/collection. The monthly billing captures the application's actual read and write capacities and charges accordingly.
December 2021
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in two new regions:
• Sweden Central (Stockholm) in EMEA data region.
• UAE Central (Abu Dhabi) in EMEA data region.
See Data Regions and Associated Service URLs.
November 2021
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in two new regions:
• Marseille (France South) in EMEA data region.
• Singapore in APAC data region.
See Data Regions and Associated Service URLs.
New OCI IAM Identity Domains service: The new OCI IAM service introduces identity domains. Identity domains are the next generation of IDCS instances (stripes). Each OCI IAM identity domain represents a stand-alone identity and access management solution.
October 2021
New Data Region location available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Jerusalem (Israel) in EMEA data region.
See Data Regions and Associated Service URLs.
Manage tables and table data from your .NET application: You can now use the .NET SDK, which enables your .NET application to create, update, and drop tables as well as add, read, and delete data in the tables.
May 2021
New Data Region location available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Brazil South East (Vinhedo) in LAD data region.
See Data Regions and Associated Service URLs.
February 2021
SQL new features:
• SQL string functions regex_like(any, string, string) and regex_like(any, string)
• IN operator, DISTINCT operator
• Untyped JSON index
• SQL ORDER BY and GROUP BY clauses
Spring Data Driver: The Oracle NoSQL Database Cloud Service SDK for Spring Data provides POJO (Plain Old Java Object) centric modeling and integration between the Oracle NoSQL Database Cloud Service and the Spring Data Framework. The following features are currently supported by the Oracle NoSQL Database Cloud Service SDK for Spring Data:
• Generic CRUD operations on a repository using methods in the CrudRepository interface.
• Pagination and sorting operations using methods in the PagingAndSortingRepository interface.
• Derived queries.
• Native queries.
For more information, see Oracle NoSQL Database SDK for Spring Data.
December 2020
New Data Region location available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Chile Central (Santiago) in LAD data region.
See Data Regions and Associated Service URLs.
November 2020
New Data Region location available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Cardiff (UK) in EMEA data region.
See Data Regions and Associated Service URLs.
Always Free NoSQL Database Service: As part of the Oracle Cloud Free Tier, the Oracle NoSQL Database Cloud Service participates as an Always Free service.
• You may have up to three Always Free NoSQL tables in your tenancy.
• You can have both Always Free and regular tables in the same tenancy.
• The Always Free NoSQL tables are displayed in the console with an “Always Free” label next to the table name.
• An Always Free NoSQL table cannot be changed to a regular table or vice versa.
October 2020
Summary of October 2020 new features available in Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Migrator: You can now use Oracle NoSQL Database Migrator to migrate NoSQL tables from one data source to another. This tool can operate on tables in Oracle NoSQL Database Cloud Service and Oracle NoSQL Database on-premise, and it can handle JSON and MongoDB-formatted JSON input files. With this release, NoSQL Database Migrator supports the migration options listed below:
• Oracle NoSQL Database on-premise to Oracle NoSQL Database Cloud Service and vice versa
• Between two Oracle NoSQL on-premise databases
• Between two Oracle NoSQL Database Cloud Service tables
• JSON file to Oracle NoSQL Database on-premise and vice versa
• JSON file to Oracle NoSQL Database Cloud Service and vice versa
• MongoDB-formatted JSON file to an Oracle NoSQL Database table on-premise or cloud
For more details, see Overview of Oracle NoSQL Database Migrator in Using Oracle NoSQL Database Cloud Service.
September 2020
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Dubai in the EMEA data region.
See Data Regions and Associated Service URLs.
August 2020
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in a new region:
• San Jose in the North America data region.
See Data Regions and Associated Service URLs.
July 2020
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Jeddah in the EMEA data region.
See Data Regions and Associated Service URLs.
June 2020
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Chuncheon in the APAC data region.
See Data Regions and Associated Service URLs.
May 2020
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in five new regions:
1. North America data region
a. Montreal
2. APAC data region
a. Osaka
b. Melbourne
c. Hyderabad
3. LAD data region
a. Sao Paulo
See Data Regions and Associated Service URLs.
April 2020
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in five new regions:
1. North America data region
a. Toronto
2. APAC data region
a. Tokyo
b. Seoul
3. EMEA data region
a. Amsterdam
b. London
See Data Regions and Associated Service URLs.
March 2020
Manage tables and table data from your Node.js applications: You can now use the Node.js SDK, which enables your Node.js applications to create, update, and drop tables, as well as add, read, and delete data in the tables.
Manage tables and table data from your Go applications: You can now use the Go SDK, which enables your Go applications to create, update, and drop tables, as well as add, read, and delete data in the tables.
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in five new regions:
1. North America data region
a. Ashburn
2. APAC data region
a. Mumbai
b. Sydney
3. EMEA data region
a. Frankfurt
b. Zurich
See Data Regions and Associated Service URLs.
February 2020
Integration with Oracle Cloud Infrastructure: Oracle NoSQL Database Cloud Service is completely integrated with Oracle Cloud Infrastructure. As a result, the following features are integrated into the NoSQL Database Cloud Service:
• Oracle Cloud Infrastructure Identity and Access Management
replaces Oracle Identity Cloud Service for implementing Identity,
Permissions, and Compartments.
• Oracle Cloud Infrastructure console to create, manage, and
monitor NoSQL tables and data.
• Oracle Cloud Infrastructure tags
• Oracle Cloud Infrastructure auditing
• Oracle Cloud Infrastructure search
• Oracle Cloud Infrastructure limits and quotas
• Oracle Cloud Infrastructure monitoring
Naming tables in your application code changes because of Oracle Cloud Infrastructure Identity and Access Management integration: Table naming has changed to fit in with Oracle Cloud Infrastructure compartments. For details, see About Compartments.
The Oracle NoSQL Database Cloud Service query language is updated with new features:
• Enhanced query support for sorted and aggregated queries.
• Support for geo_near in NoSQL Database queries.
• Support for Identity columns.
September 2019
Use the IntelliJ plug-in to quickly build and run queries: You can now use the IntelliJ plug-in to browse tables and execute queries on your Oracle NoSQL Database Cloud Service instance or simulator.
Universal Credit accounts do not use the My Services Dashboard: After signing in to Oracle Cloud, you use the Oracle Cloud Infrastructure Console to access your service. Previously, you were required to access the service from the My Services Dashboard.
New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in four new regions:
• Canada Southeast (Toronto)
• UK South (London)
• South Korea Central (Seoul)
• Japan East (Tokyo)
See Data Regions and Associated Service URLs.
May 2019
Manage tables and table data from your Python application: You can now use the Python SDK, which enables your Python applications to create, update, and drop tables, as well as add, read, and delete data in the tables.
Cloud Concepts
Learn the Oracle NoSQL Database Cloud Service concepts.
• Table: A Table is a collection of rows where each row holds a data record from
your application.
Each table row consists of key and data fields, which are defined when the table is created. In addition, a table has a specified storage capacity, supports a defined maximum read and write throughput, and has a maximum size. The storage capacity is specified at table creation time and can be changed later.
– High-Level Data Types: Oracle NoSQL Database Cloud Service supports all
three types of Big Data. You can create NoSQL tables to store structured,
unstructured, or semi-structured data.
* Structured: This type of data can be organized and stored in tables with a predefined structure or schema. For example, the data stored in regular relational database tables comes under this category; it adheres to a fixed schema and is simple to manage and analyze. Data generated from credit card and e-commerce transactions is an example of structured data.
* Semi-Structured: Data that cannot fit into a relational database but can be organized into rows and columns after a certain level of processing is called semi-structured data. Oracle NoSQL Database Cloud Service can store and process semi-structured data by storing key-value pairs in NoSQL tables. XML data is an example of semi-structured data.
* Unstructured: Data that cannot be organized or stored in tables with a fixed schema or structure is called unstructured data. Videos, images, and media are a few examples of unstructured data. Oracle NoSQL Database Cloud Service lets you define tables with rows of the JSON data type to store unstructured data.
– Data Types: A table is created using DDL (Data Definition Language) which
defines the data types and primary keys used for the table.
Oracle NoSQL Database Cloud Service supports several data types, including
several numeric types, string, binary, timestamp, maps, arrays, records, and a
special JSON data type which can hold any valid JSON data. Applications can
use unstructured tables where a row uses the JSON data type to store the
data, or use structured tables where all row types are defined and enforced.
See Supported Data Types to view the list of data types supported in Oracle
NoSQL Database Cloud Service.
Unstructured tables are flexible, but typed data is safer from an enforcement and storage-efficiency point of view. A table schema can be modified, but the table structure is less flexible to change.
– Indexes: Applications can create an index on any data field which has a data type
that permits indexing, including JSON data fields. JSON indexes are created using a
path expression into the JSON data.
– Capacity: When you create a table, you can choose between Provisioned Capacity
and On-Demand Capacity.
* By choosing Provisioned Capacity, you also specify throughput and storage
resources available for the table. The read and write operations to the table are
limited by the read and write throughput capacity that you define. The amount of
space that the table can use is limited by the storage capacity.
* By choosing On-Demand Capacity, the read and write operations to the table are
automatically managed by Oracle. The amount of space that the table can use is
limited by the storage capacity.
See Estimating Capacity to learn how to estimate capacity for your application
workload.
• Distribution and Sharding: Although not visible to the user, Oracle NoSQL Database
Cloud Service tables are sharded and replicated for availability and performance.
Therefore, you should consider this during schema design.
– Primary and Shard keys: An important consideration for a table is the designation of
the primary key, and the shard key. When you create a table in Oracle NoSQL
Database Cloud Service, the data in the table is automatically sharded based on a
portion of the table primary key, called the shard key. See Primary Keys and Shard
Keys for considerations on how to designate the primary and shard keys.
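The effect of the shard key can be sketched in a few lines of Python. This is purely illustrative: the hash scheme, shard count, and key names below are assumptions, not the service's actual placement algorithm. The point is that all rows sharing a shard key land on the same shard, so queries scoped to one shard-key value touch only one shard.

```python
import hashlib

NUM_SHARDS = 3  # illustrative shard count; the real topology is managed by the service

def shard_for(shard_key: str) -> int:
    """Map a shard key to a shard ID with a stable hash (illustrative only)."""
    digest = hashlib.md5(shard_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Suppose the primary key is (userId, orderId) and the shard key is userId.
# All orders for one user hash to the same shard, so a query over a single
# user's orders stays within one shard.
rows = [("user1", "orderA"), ("user1", "orderB"), ("user2", "orderC")]
placement = {pk: shard_for(pk[0]) for pk in rows}
assert placement[("user1", "orderA")] == placement[("user1", "orderB")]
```

Choosing a shard key that groups related rows while still spreading distinct keys across shards is the core design trade-off.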
– Read Consistency: Read consistency specifies different levels of flexibility in terms
of which copy of the data is used to fulfill a read operation. Oracle NoSQL Database
Cloud Service provides two levels of consistency: EVENTUAL and ABSOLUTE.
Applications can specify ABSOLUTE consistency, which guarantees that all read
operations return the most recently written value for a designated key. Or,
applications capable of tolerating inconsistent data can specify EVENTUAL consistency,
allowing the database to return a value more quickly even if it is not up-to-date.
ABSOLUTE consistency results in a higher cost, consuming twice the number of read
units for the same data relative to EVENTUAL consistency, and should only be used
when required. Consistency can be set for a NoSQL handle, or as an optional
argument for all read operations.
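The cost difference can be sketched as follows. The one-read-unit-per-KB sizing is the commonly documented figure and an assumption of this sketch; the doubling for ABSOLUTE follows the text above.

```python
import math

def read_units(record_size_bytes: int, absolute: bool = False) -> int:
    """Read units consumed by one read: one unit per KB read (rounded up)
    for an eventually consistent read, doubled for absolute consistency."""
    units = max(1, math.ceil(record_size_bytes / 1024))
    return units * 2 if absolute else units

assert read_units(500) == 1                 # sub-1KB eventual read
assert read_units(500, absolute=True) == 2  # absolute costs double
assert read_units(2048) == 2                # 2 KB eventual read
```

This is why ABSOLUTE should be requested only where the application truly needs the most recently written value.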
• Identity and Access Management: Oracle NoSQL Database Cloud Service uses Oracle Cloud Infrastructure Identity and Access Management to provide secure access to Oracle Cloud. Oracle Cloud Infrastructure Identity and Access Management enables you
to create user accounts and give users permission to inspect, read, use, or manage
Oracle NoSQL Database Cloud Service tables. See Overview of Oracle Cloud
Infrastructure Identity and Access Management in Oracle Cloud Infrastructure
Documentation.
Key Features
Learn the key features of Oracle NoSQL Database Cloud Service.
• Fully Managed with Zero Administration: Developers do not need to administer
data servers or the underlying infrastructure and security. Oracle maintains the
hardware and software which allows developers to focus on building applications.
• Faster Development Life Cycle: After purchasing access to the service,
developers write their applications, and then connect to the service using their
credentials. Reading and writing data can begin immediately. Oracle handles database management, storage management, high availability, and scalability, which helps developers concentrate on delivering high-performance applications.
• High Performance and Predictability: Oracle NoSQL Database Cloud Service takes advantage of the latest component technologies in Oracle Cloud Infrastructure to provide high performance at scale. Developers know that their applications return data with predictable latencies, even as their throughput and storage requirements increase.
• On-Demand Throughput and Storage Provisioning: Oracle NoSQL Database
Cloud Service scales to meet application throughput performance requirements
with low and predictable latency. As workloads increase with periodic business
fluctuations, applications can increase their provisioned throughput to maintain a
consistent user experience. As workloads decrease, the same applications can
reduce their provisioned throughput, resulting in lower operating expenses. The
same holds true for storage requirements. Those can be adjusted based on
business fluctuations. You can increase or decrease the storage using the Oracle
Cloud Infrastructure Console or the TableRequest API.
You can choose between an on-demand capacity allocation or provisioned-based
capacity allocation:
– With on-demand capacity, you don't need to provision the read or write
capacities for each table. You only pay for the read and write units that are
actually consumed. Oracle NoSQL Database Cloud Service automatically
manages the read and write capacities to meet the needs of dynamic
workloads.
– With provisioned capacity, you can increase or decrease the throughput using
the Oracle Cloud Infrastructure Console or the TableRequest API.
You can also modify the capacity mode from Provisioned Capacity to On-Demand
Capacity and vice-versa.
• Simple APIs: Oracle NoSQL Database Cloud Service provides easy-to-use
CRUD (Create Read Update Delete) APIs that allow developers to easily create
tables and maintain data in them.
• Data Modeling: Oracle NoSQL Database Cloud Service supports both schema-
based and schema-less (JSON) modeling.
• Data Safety in Redundancy: Oracle NoSQL Database Cloud Service stores data across multiple Availability Domains (ADs), or across Fault Domains (FDs) in single-AD regions. If an AD or FD becomes unavailable, user data is still accessible from another AD or FD.
• Data Security: Data is encrypted at rest (on disk) with Advanced Encryption Standard
(AES 256). Data is encrypted in motion (transferring data between the application and
Oracle NoSQL Database Cloud Service) with HTTPS.
• ACID-Compliant Transactions: ACID (Atomicity, Consistency, Isolation, Durability)
transactions are fully supported for the data you store in Oracle NoSQL Database Cloud
Service. If required, consistency can be relaxed in favor of lower latency.
• JSON Data Support: Oracle NoSQL Database Cloud Service allows developers to query
schema-less JSON data by using the familiar SQL syntax.
• Partial JSON Updates: Oracle NoSQL Database Cloud Service allows developers to update (change, add, and remove) parts of a JSON document. Because these updates occur on the server, the need for a read-modify-write cycle, which would consume throughput capacity, is eliminated.
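The semantics can be sketched locally in Python. The dotted-path helper below is a hypothetical stand-in for the server-side update, not the service API; it shows how a single field can change while the rest of the document is preserved.

```python
def apply_partial_update(doc: dict, path: str, value) -> dict:
    """Set a nested field identified by a dotted path
    (illustrative of a server-side partial update; not the service API)."""
    target = doc
    *parents, leaf = path.split(".")
    for key in parents:
        target = target.setdefault(key, {})  # descend, creating maps as needed
    target[leaf] = value
    return doc

profile = {"name": "jane", "address": {"city": "Lyon"}}
apply_partial_update(profile, "address.zip", "69001")
assert profile["address"] == {"city": "Lyon", "zip": "69001"}
assert profile["name"] == "jane"  # untouched fields are preserved
```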
• Time-To-Live: Oracle NoSQL Database Cloud Service lets developers set a time frame
on table rows, after which the rows expire automatically, and are no longer available. This
feature is a critical requirement when capturing sensor data for Internet Of Things (IoT)
services.
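The expiry semantics can be sketched as follows; the hour-based TTL and function name here are illustrative, not the SDK API.

```python
from datetime import datetime, timedelta, timezone

def is_expired(write_time: datetime, ttl_hours: int, now: datetime) -> bool:
    """A row written at write_time with a TTL of ttl_hours stops being
    visible once now reaches the expiration time (illustrative semantics)."""
    return now >= write_time + timedelta(hours=ttl_hours)

written = datetime(2024, 1, 1, tzinfo=timezone.utc)
assert not is_expired(written, ttl_hours=24, now=written + timedelta(hours=12))
assert is_expired(written, ttl_hours=24, now=written + timedelta(hours=25))
```

For sensor data, each new reading can simply be written with a TTL, and old readings disappear without any application-side cleanup job.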
• SQL Queries: Oracle NoSQL Database Cloud Service lets developers access data with
SQL queries.
• Secondary Indexes: Secondary indexes allow a developer to create an index on any
field of a supported data type, thus improving performance over multiple paths for queries
using the index.
• NoSQL Table Hierarchy: Oracle NoSQL Database Cloud Service supports Table
hierarchies that offer high scalability while still providing the benefits of data
normalization. A NoSQL table hierarchy is an ideal data model for applications that need
some data normalization, but also require predictable, low latency at scale. A table
hierarchy links distinct tables and therefore enables left outer joins, combining rows from
two or more tables based on related columns between them. Such joins execute
efficiently as rows from the parent-child tables are co-located in the same database
shard.
Service Limits
Oracle NoSQL Database Cloud Service has various default limits. Whenever you create an Oracle NoSQL Database Cloud Service table, the system ensures that your request is within the bounds of the specified limits. When you create On-Demand Capacity tables, the On-Demand Capacity maximum limits are used during validation.
Oracle Cloud tenancies are typically active in more than one region. Although you can view this as a single large tenancy, Oracle NoSQL Database Cloud Service uses the combination of tenancy OCID and region location to establish some of the limits (region-level limits). Additionally, there are limits at the table level. For a detailed list of service limits, see Oracle NoSQL Database Cloud Service Limits.
You can view the existing limits for read units, write units, and table size for your region from the Limits, Quotas, and Usage page in the Oracle Cloud Infrastructure Console. For each limit (for example, in the Ashburn region), you see the service limit, the current usage, and the current availability. Note that availability can be affected by quota policies on either this compartment or its parent compartment.
You can increase your service limits by submitting a request either from the Limits, Quotas, and Usage page in the Oracle Cloud Infrastructure Console or by using the TableRequest API; for example, a service limit update request can increase the read units from 100,000 to 110,000 in the Ashburn region.
See About Service Limits and Usage in Oracle Cloud Infrastructure Documentation.
Service Quotas
You can use quotas to control how other users allocate Oracle NoSQL Database Cloud Service resources across compartments in Oracle Cloud Infrastructure. A
compartment is a collection of related resources (such as instances, virtual cloud
networks, block volumes) that can be accessed only by certain groups that have been
given permission by an administrator. Whenever you create an Oracle NoSQL
Database Cloud Service table or scale up the provisioned throughput or storage, the
system ensures that your requests are within the bounds of the quota for that
compartment.
This table lists the Oracle NoSQL Database Cloud Service quotas that you can
reference.
You can set quotas using the Console or API. You can execute quota statements from
the Quota Policies page under the Governance option in Oracle Cloud Infrastructure
Console.
If you do not specify a region, the quota applies to the entire tenancy, which means it applies to all regions. However, you can restrict a quota to one region by applying a filter condition in the set clause that specifies the name of that region, as in the following example.
Limit the number of Oracle NoSQL Database Cloud Service read units that users
can allocate to tables they create in the region us-phoenix-1 to 10,000.
In this example, only the Phoenix region has a read unit quota of 10,000.
• Limit the number of Oracle NoSQL Database Cloud Service write units that users
can allocate to tables they create in my_compartment to 5,000.
• Limit the maximum storage space of Oracle NoSQL Database Cloud Service that
users can allocate to tables they create in my_compartment to 1,000 GB.
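The three examples above might be expressed as quota statements of roughly the following shape. The family name (nosql) and quota names (read-unit-count, write-unit-count, table-size-gb) are illustrative assumptions; confirm the exact names against the quota reference in the Console before using them.

```
set nosql quota read-unit-count to 10000 in tenancy
    where request.region = 'us-phoenix-1'
set nosql quota write-unit-count to 5000 in compartment my_compartment
set nosql quota table-size-gb to 1000 in compartment my_compartment
```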
Service Events
Actions that you perform on Oracle NoSQL Database Cloud Service tables emit events.
You can define rules that trigger a specific action when an event occurs. For example, you
might define a rule that sends a notification to administrators when someone drops a table.
See Overview of Events and Get Started with Events in Oracle Cloud Infrastructure
Documentation.
This table lists the Oracle NoSQL Database Cloud Service events that you can reference.
Example
This example shows information associated with the event Create Table Begin:
{
    "cloudEventsVersion": "0.1",
    "contentType": "application/json",
    "source": "nosql",
    "eventID": "<unique_ID>",
    "eventType": "com.oraclecloud.nosql.createtable.begin",
    "eventTypeVersion": "<version>",
    "eventTime": "2019-12-30T00:52:01.343Z",
    "data": {
        "additionalDetails": {},
        "availabilityDomain": "<availability_domain>",
        "compartmentId": "ocid1.compartment.oc1..<unique_ID>",
        "compartmentName": "my_compartment",
        "freeformTags": {
            "key": "value"
        },
        "resourceId": "ocid1.nosqltable.oc1..<unique_ID>",
        "resourceName": "my_nosql_table"
    },
    "extensions": {
        "compartmentId": "ocid1.compartment.oc1..<unique_ID>"
    }
}
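A rule's action (for example, a function or a notification handler) receives JSON of this shape. A minimal sketch of extracting the fields a handler typically needs; the summarize helper is illustrative, and only field paths that appear in the example above are used.

```python
import json

# A trimmed event payload following the structure of the example above.
event_json = """
{"eventType": "com.oraclecloud.nosql.createtable.begin",
 "source": "nosql",
 "data": {"compartmentName": "my_compartment",
          "resourceName": "my_nosql_table"}}
"""

def summarize(event: dict) -> str:
    """Build a one-line message from a NoSQL event payload."""
    data = event.get("data", {})
    return (f"{event['eventType']} on table "
            f"{data.get('resourceName', '?')} in "
            f"{data.get('compartmentName', '?')}")

msg = summarize(json.loads(event_json))
assert "createtable.begin" in msg
assert "my_nosql_table" in msg
```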
Service Metrics
Learn about the metrics emitted by the metric namespace oci_nosql (Oracle NoSQL
Database Cloud Service).
Metrics for Oracle NoSQL Database Cloud Service include the following dimensions:
• RESOURCEID
The OCID of the NoSQL Table in the Oracle NoSQL Database Cloud Service.
Note:
OCID is an Oracle-assigned unique ID that is included as part of the resource's
information in both the console and API.
• TABLENAME
The name of the NoSQL table in the Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Cloud Service sends metrics to the Oracle Cloud Infrastructure
Monitoring Service. You can view or create alarms on these metrics using the Oracle Cloud
Infrastructure Console, SDKs, or CLI. See OCI SDKs and CLI in Oracle Cloud Infrastructure
Documentation.
Available Metrics
Data Regions
To get started with Oracle NoSQL Database Cloud Service, you must create an account (either for a free trial or to purchase provisioning). Along with other details, the account application requires you to choose a default data region.
If your application is running under your tenancy on an OCI host in the same region,
you should configure your VCN to route all NDCS traffic through the Service Gateway.
See Access to Oracle Services: Service Gateway for more details.
This table lists the service endpoints for all the data regions which are or will be
supported by Oracle NoSQL Database Cloud Service. See Service Availability for the
latest information about the regions that support Oracle NoSQL Database Cloud
Service.
Plan
• Plan your service
Developer Overview
Get a high-level overview of the service architecture and select an SDK/Driver that will
meet your application development needs.
NDCS Developer tasks
Oracle NoSQL Database Cloud Service (NDCS) is a fully managed, highly available (HA) service. It is designed for highly demanding applications that require low-latency response times, a flexible data model, and elastic scaling for dynamic workloads. As a fully managed service, Oracle handles all the administrative tasks, such as software upgrades, security patching, and hardware failures.
NoSQL Database SDKs/Drivers – These SDKs are licensed under the Universal Permissive License (UPL) and can be used with either the NoSQL Cloud Service or the on-premise database. They are full-featured SDKs that offer a rich set of functionality. These drivers can also be used in applications executing against Oracle NoSQL clusters running in other vendors' clouds.
1. NoSQL SDK for Java
2. NoSQL JavaScript SDK
3. NoSQL Python SDK
4. NoSQL .NET SDK
5. NoSQL Go SDK
6. NoSQL SDK for Spring Data
OCI Console – Offers the ability to quickly create, modify, and delete tables; load data; create and delete indexes; run basic queries; alter table capacities; and view metrics.
OCI SDKs/Drivers – Oracle Cloud Infrastructure provides a number of Software Development Kits (SDKs) to facilitate the development of custom solutions. These are typically licensed under the UPL and offer functionality similar to the OCI Console through a programmatic interface.
1. REST API
2. SDK for Java
3. SDK for Python
4. SDK for JavaScript
5. SDK for .NET
6. SDK for Go
7. SDK for Ruby
Estimating Capacity
Learn how to estimate throughput and storage capacities for your Oracle NoSQL
Database Cloud Service.
Note:
When you use On-Demand Capacity, Oracle NoSQL Database Cloud Service automatically manages the read and write capacities to meet the needs of dynamic workloads. It is recommended that you validate that your capacity needs do not exceed the On-Demand Capacity limits. See Oracle NoSQL Database Cloud Service Limits for more details.
{
    "additionalFeatures": "Front Facing 1.3MP Camera",
    "os": "Macintosh OS X 10.7",
    "battery": {
        "type": "Lithium Ion (Li-Ion) (7000 mAH)",
        "standbytime": "24 hours"
    },
    "camera": {
        "features": ["Flash", "Video"],
Assume that the application has 100,000 such records and that the primary key is about 20 bytes in size. Also assume that there are queries that read records using a secondary index; for example, to find all the records that have a screen size of 13 inches, an index is created on the screenSize field.
The information is summarized as follows:
Tables: 1
Rows per Table: 100,000
Columns per Table: 2
Key Size in Bytes: 20
Value Size in Bytes (sum of all columns): 1 KB
Indexes: 1
Index Key Size in Bytes: 20
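Using these figures, a rough storage estimate can be computed. This is a simplified sketch: it ignores per-row metadata overhead and assumes each index entry stores the index key plus the primary key.

```python
rows = 100_000
key_bytes = 20          # primary key size
value_bytes = 1024      # 1 KB value per row
index_key_bytes = 20    # screenSize index key size

# Row storage: key + value per row.
row_storage = rows * (key_bytes + value_bytes)
# Index storage: index key + primary key per entry (simplifying assumption).
index_storage = rows * (index_key_bytes + key_bytes)

total_gb = (row_storage + index_storage) / (1024 ** 3)
assert row_storage == 104_400_000
assert index_storage == 4_000_000
print(f"approximate storage: {total_gb:.3f} GB")  # roughly 0.1 GB
```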
2. Identify the list of operations (typically CRUD operations and index reads) on the table and the rate (per second) at which they are expected.
Using steps 2 and 3, determine read and write units for the application workload.
Note:
The preceding calculations assume eventually consistent read requests. For an absolute consistency read request, the operation consumes double the capacity units; therefore, the read capacity would be 4844 Read Units.
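The arithmetic behind the note can be sketched as follows. The 2,422-unit eventual-consistency figure is inferred from the 4,844 figure in the note, and the one-unit-per-KB sizing is an assumption of this sketch.

```python
import math

def read_units_per_op(record_kb: float, absolute: bool) -> int:
    """Read units for one read: one unit per KB read (rounded up),
    doubled when absolute consistency is requested."""
    units = math.ceil(record_kb)
    return units * 2 if absolute else units

# Per-operation doubling:
assert read_units_per_op(1, absolute=True) == 2 * read_units_per_op(1, absolute=False)

# Workload-level doubling, matching the note:
eventual_units = 2422          # read units with eventually consistent reads
absolute_units = eventual_units * 2
assert absolute_units == 4844  # absolute consistency doubles the requirement
```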
Management. Scroll through to locate Oracle NoSQL Database Cloud. Click Add to add
an entry for Oracle NoSQL Database Cloud under the Configuration Options.
4. Expand Database - NoSQL to find the different Utilization and Configuration options. You have two options under Configuration: you can start with an Always Free option, or you can provision your instance with your desired configuration.
• Step 4a: If you want the Always Free option, under Configuration expand Oracle NoSQL Database Cloud - Read, Oracle NoSQL Database Cloud Service - Storage, and Oracle NoSQL Database Cloud Service - Write, and change the Read, Storage, and Write capacities to 0. Your total cost estimate is then shown as 0, and you can proceed with the Always Free option.
5. Alternatively, if you want to provision higher read, write, and storage capacity than what is available in Always Free, enter the configuration values under Database - NoSQL.
• Step 5a: Under Utilization, do not modify the default values, because Oracle NoSQL Database Cloud Service does not use any of these values.
• Step 5b: Under Configuration, add the number of Read Units, Write Units, and Storage Capacity that you estimated in the previous step. The cost is estimated based on your input values and shown on the page.
Note:
If you are using the auto-scale feature, an invoice is generated at the end of the month for the read and write units actually consumed. You may therefore wish to collect your own audit logs in the application to verify the end-of-month billing. It is recommended to log the consumed read and write units that are returned by the NoSQL Database Cloud Service with every API call. You can use this data to correlate with the end-of-month invoicing data from the Oracle Cloud metering and billing system.
For a detailed understanding of the different pricing models available, see NoSQL Database
Cloud Service Pricing.
Configure
• Configuration tasks for Analytics Integrator
There is no need to create any other files. Once the bucket is created, all you need to do is specify the name of the bucket in the configuration, along with the bucket's compartment, and the utility takes it from there, creating objects with names derived from the table being copied.
For example, suppose the name of the bucket you created is nosql-to-adw, the name of the table you wish to copy to ADW is myTable, and you direct the utility to use the Oracle NoSQL Migrator. The utility then retrieves data from the NoSQL table named myTable, converts it to Parquet format, and copies the Parquet data to the nosql-to-adw bucket as objects with names of the form myTable_2021_07_22/Data/000000.parquet, myTable_2021_07_22/Data/000001.parquet, and so on.
• If you, rather than the system administrator, generate the AUTH_TOKEN, copy it to a file for safekeeping. Whether generated by you or the system administrator, the AUTH_TOKEN must then be stored in the ADW database. For details on how to do this, see Enable the OCI Resource Principal Credential or Store/Enable the User's Object Storage AUTH_TOKEN in the ADW Database.
After obtaining the wallet zip file, make a note of the password and store the wallet in any environment from which you will be connecting to the database. Additionally, to
use the Oracle NoSQL Database Analytics Integrator, the extracted contents of the
wallet zip file must be installed in the environment where you will be executing the
utility. For example, if you are executing the utility from an Oracle Cloud Compute
Instance, you should extract the contents of the zip file in any directory on that
instance. Then use the path to that directory as the value of the parameter
databaseWallet in the database section of the utility’s configuration file.
Enable the Resource Principal Credential or Store/Enable the User's Object Storage
AUTH_TOKEN in the ADW Database
After retrieving data from the desired NoSQL Cloud Service table and writing that data
to Parquet files in Object Storage, the Oracle NoSQL Database Analytics Integrator
uses subprograms from the Oracle PL/SQL DBMS_CLOUD package to retrieve the
Parquet files from Object Storage. It then loads the data contained in those files to a
table in the database you created in the Oracle Cloud Autonomous Data Warehouse.
Before the Oracle NoSQL Database Analytics Integrator can do this, you must provide
a way for the ADW database to authenticate with Object Storage for access to those
Parquet files. The ADW database can authenticate with the Object Storage service in
one of two ways: using the OCI Resource Principal or a user-specific AUTH_TOKEN
that either you or the system administrator generates. The authentication mechanism
you decide to use is enabled by executing the following steps from the Oracle Cloud
Console.
• Select Oracle Database from the menu on the left side of the display.
• Select Autonomous Data Warehouse.
• Select Development from the menu on the left side of the display.
• From the Worksheet window, if you wish to authenticate the ADW database with Object Storage using the Resource Principal, execute the following procedure:
EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL (
    credential_name => 'NOSQLADWDB_OBJ_STORE_CREDENTIAL',
    username => '<your-Oracle-Cloud-username>',
    password => '<cut-and-paste-the-AUTH_TOKEN>'
  );
END;
Note:
When the ADW database uses the OCI Resource Principal to authenticate
with Object Storage, the name of the credential is OCI$RESOURCE_PRINCIPAL.
Alternatively, when using the AUTH_TOKEN to authenticate with Object
Storage, the name of the credential is the value you specify for the
credential_name parameter in the DBMS_CLOUD.CREATE_CREDENTIAL
procedure. Note, however, that the value shown above (NOSQLADWDB_OBJ_STORE_CREDENTIAL) is only an example; you can use any name you wish. Thus, the dbmsCredentialName parameter in the configuration file should contain either the value OCI$RESOURCE_PRINCIPAL or the name you specify here for the credential_name parameter, depending on the authentication mechanism you choose to employ for authenticating the ADW database with Object Storage.
Enabling a Compute Instance for Oracle NoSQL Database Cloud Service and ADW
and (optionally) Enabling the ADW Database for Object Storage
Steps to authorize your compute instance to perform actions on the NoSQL Service, Object Storage, and ADW.
Create a Dynamic Group for the Compute Instance and the ADW Database
Although you can execute the Oracle NoSQL Database Analytics Integrator using your own
credentials exclusively, it is recommended that you execute the utility from an Oracle Cloud
Compute Instance authorized to perform actions on the Oracle NoSQL Cloud Service, Object
Storage, and the Autonomous Data Warehouse. Similarly, although you can use an Object
Storage AUTH_TOKEN to allow the ADW database to access Object Storage, it is
recommended that you use the OCI Resource Principal to authenticate the ADW database
with Object Storage. Note, however, that because the database you create in
ADW requires authentication using the database's username and password, your user
credentials must still be supplied to the utility to access that resource.
To authorize your compute instance to perform actions on the NoSQL Service,
Object Storage, and ADW, a dynamic group must be created and a set of matching rules must
be added for your instance. To allow the ADW database to use the OCI Resource Principal to
access Object Storage, a dynamic group with the appropriate set of rules must also be
created. If you wish, the same dynamic group you create for your compute instance can also
be used for the ADW database. This is shown in the example below.
• Select Identity & Security from the menu on the left of the display.
• Under Identity, select Dynamic Groups.
• Click Create.
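The matching rules entered on the Create Dynamic Group page are not reproduced here. As an illustrative sketch (the compartment OCID is a placeholder), rules like the following would match all compute instances and all Autonomous Databases in a given compartment, covering both the compute instance and the ADW database with a single dynamic group:

```
Any {instance.compartment.id = 'ocid1.compartment.oc1..<your-compartment-ocid>'}
Any {resource.type = 'autonomousdatabase', resource.compartment.id = 'ocid1.compartment.oc1..<your-compartment-ocid>'}
```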
An example set of policies that allow the compute instance from the dynamic group to access
the NoSQL Cloud Service, Object Storage, and ADW is given below.
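As an illustrative sketch (the dynamic-group name NoSQLAnalyticsGroup and the compartment name are placeholders), such policies might read:

```
Allow dynamic-group NoSQLAnalyticsGroup to manage nosql-family in compartment <your-compartment>
Allow dynamic-group NoSQLAnalyticsGroup to manage object-family in compartment <your-compartment>
Allow dynamic-group NoSQLAnalyticsGroup to use autonomous-database-family in compartment <your-compartment>
```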
After this configuration, you should be able to execute the utility from a compute
instance using Instance Principal authentication.
Devops
• Deploying Oracle NoSQL Database Cloud Service Table Using Terraform and OCI
Resource Manager
• Updating Oracle NoSQL Database Cloud Service Table Using Terraform and OCI
Resource Manager
Note:
We’re going to be working with the Oracle Cloud Infrastructure (OCI) Resource
Manager Command Line Interface (CLI) and executing these commands in the
Cloud Shell from the Console. This means you will need some information about your
cloud tenancy and other items such as public or private key pairs handy. If you want
to configure the OCI CLI on your local machine, refer to this documentation.
Prerequisites
• Basic understanding of Terraform. Read the brief introduction here.
• An Oracle Cloud account and a subscription to the Oracle NoSQL Database Cloud
Service. If you do not already have an Oracle Cloud account you can start here.
• OCI Terraform provider installed and configured.
You might have to configure additional arguments with authentication credentials
for an OCI account, depending on the authentication method. The OCI Terraform provider
supports three authentication methods:
• API Key Authentication
• Instance Principal Authorization
• Security Token Authentication
The region argument specifies the geographical region in which your provider resources are
created. To target multiple regions in a single configuration, you simply create a provider
definition for each region and then differentiate by using a provider alias, as shown in the
following example. Notice that only one provider, named "oci", is defined, and yet the oci
provider definition is entered twice, once for the us-phoenix-1 region (with the alias
"phx"), and once for the region us-ashburn-1 (with the alias "iad").
provider "oci" {
region = "us-phoenix-1"
alias = "phx"
}
provider "oci" {
region = "us-ashburn-1"
alias = "iad"
}
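To direct a resource at one of the aliased providers, reference the alias through the provider meta-argument. A minimal sketch, in which the table name and DDL are illustrative rather than part of this tutorial:

```hcl
# This table is created in us-ashburn-1 because it selects the "iad" alias.
resource "oci_nosql_table" "demo_iad" {
  provider       = oci.iad
  compartment_id = var.compartment_ocid
  name           = "demo"
  ddl_statement  = "CREATE TABLE if not exists demo (id INTEGER, value JSON, PRIMARY KEY (id))"
}
```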
The following example shows the arguments required for API Key Authentication.
variable "tenancy_ocid" {
}
variable "user_ocid" {
}
variable "fingerprint" {
}
variable "private_key_path" {
}
variable "region" {
}
provider "oci" {
region = var.region
tenancy_ocid = var.tenancy_ocid
user_ocid = var.user_ocid
fingerprint = var.fingerprint
private_key_path = var.private_key_path
}
Note:
Instance principal authorization applies only to instances that are running in Oracle
Cloud Infrastructure.
In the example below, a region argument is required for the OCI Terraform provider, and an
auth argument is required for Instance Principal Authorization. You can provide the value for
the region argument as an Environment Variable or within Terraform configuration variables (as
mentioned in Substep 1.3: Loading Terraform Configuration Variables).
variable "region" {
}
provider "oci" {
auth = "InstancePrincipal"
region = var.region
}
Note:
This token expires after one hour. Avoid using this authentication method when
provisioning of resources takes longer than one hour. See Refreshing a Token for
more information.
In the example below, a region argument is required for the OCI Terraform provider. The
auth and config_file_profile arguments are required for Security Token authentication.
You can provide the values for the API Key Authentication keys (tenancy_ocid, user_ocid,
private_key_path, and fingerprint) as Environment Variables or in the OCI config file
(~/.oci/config). You can also provide the values for region and config_file_profile as
Environment Variables or within Terraform configuration variables (as mentioned in Substep
1.3: Loading Terraform Configuration Variables).
variable "region" {
}
variable "config_file_profile" {
}
provider "oci" {
auth = "SecurityToken"
config_file_profile = var.config_file_profile
region = var.region
}
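Security Token authentication assumes a valid session token in the profile named by config_file_profile. One way to obtain such a token is an interactive browser login through the OCI CLI; a sketch, where the region and profile name are placeholders:

```shell
oci session authenticate --region us-phoenix-1 --profile-name PROFILE
```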
Create a new file named "nosql.tf" that contains the NoSQL terraform configuration
resources for creating NoSQL Database Cloud Service tables or indexes. For more
information about the NoSQL Database resources and data sources, see
oci_nosql_table.
In the example below, we create two NoSQL tables. The compartment_ocid argument
is required for NoSQL Database resources such as tables and indexes. You can
provide the value for compartment_ocid as an Environment Variable or within Terraform
configuration variables (as mentioned in Substep 1.3: Loading Terraform
Configuration Variables).
variable "compartment_ocid" {
}
resource "oci_nosql_table" "nosql_demoKeyVal" {
    compartment_id = var.compartment_ocid
    ddl_statement = "CREATE TABLE if not exists demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), value JSON, PRIMARY KEY (key))"
    name = "demoKeyVal"
    table_limits {
        max_read_units = var.table_table_limits_max_read_units
        max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
        max_write_units = var.table_table_limits_max_write_units
    }
}
tenancy_ocid = "ocid1.tenancy.oc1..aaaaaaaaqljdu37xcfoqvyj47pf5dqutpxu4twoqc7hukwgpbavpdwkqxc6q"
user_ocid = "ocid1.user.oc1..aaaaaaaafxz473ypsc6oqiespihan6yi6obse3o4e4t5zmpm6rdln6fnkurq"
fingerprint = "2c:9b:ed:12:81:8d:e6:18:fe:1f:0d:c7:66:cc:03:3c"
private_key_path = "~/NoSQLLabPrivateKey.pem"
compartment_ocid = "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya"
region = "us-phoenix-1"
For example:
region = "us-phoenix-1"
compartment_ocid = "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya"
For example:
region = "us-phoenix-1"
compartment_ocid = "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya"
config_file_profile = "PROFILE"
Declare the input variables (in variables.tf) with the default table limits
available for NoSQL Database. In the example, the default values of the read, write,
and storage units for a NoSQL table are set to 10, 10, and 1 respectively.
variable "table_table_limits_max_read_units" {
default = 10
}
variable "table_table_limits_max_write_units" {
default = 10
}
variable "table_table_limits_max_storage_in_gbs" {
default = 1
}
Using Cloud Shell from the Console, we have created all the required terraform
configuration files for provider definition, NoSQL database resources, authentication
values, and input variables.
When creating a stack with Resource Manager, you can select your Terraform
configuration from the following sources.
• Local .zip file
• Local folder
• Object Storage bucket
The most recent contents of the bucket are automatically used by any job running
on the associated stack.
• Source code control systems, such as GitHub and GitLab
The latest version of your configuration is automatically used by any job running
on the associated stack.
• Template (pre-built Terraform configuration from Oracle or a private template)
• Existing compartment (Resource Discovery)
Substep 2.1: Create Configuration Source Providers for Remote Terraform Configurations
You need to create a configuration source provider if you want to use remote
Terraform configurations hosted on a source control system, such as GitHub or
GitLab.
For more information on how to create configuration source providers for remote Terraform
configurations, see Managing Configuration Source Providers.
Use the command related to your file location. For Terraform configuration sources supported
with Resource Manager, see Where to Store Your Terraform Configurations.
For this tutorial, we are going to create a stack using the Instance Principal
authentication method from a local Terraform configuration zip file, terraform.zip.
The terraform.zip file contains the following files:
• provider.tf
• nosql.tf
• terraform.tfvars
• variables.tf
Note:
In this tutorial, we are using OCI Resource Manager CLI commands to create a
stack. You can perform the same task using OCI Resource Manager Console.
where:
• --compartment-id is the OCID of the compartment where you want to create the
stack.
• --config-source is the name of a .zip file that contains one or more Terraform
configuration files.
• (Optional) --variables is the path to the file specifying input variables for your
resources.
The Oracle Cloud Infrastructure Terraform provider requires additional parameters
when running Terraform locally (unless you are using instance principals). For
more information on using variables in Terraform, see Input Variables. See also
Input Variable Configuration.
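Putting these options together, a stack-create invocation might look roughly like the following sketch (the compartment OCID and variables file are placeholders):

```shell
oci resource-manager stack create \
    --compartment-id ocid1.compartment.oc1..<your-compartment-ocid> \
    --config-source terraform.zip \
    --variables file://variables.json
```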
Example response:
{
  "data": {
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
    "config-source": {
      "config-source-type": "ZIP_UPLOAD",
      "working-directory": null
    },
    "defined-tags": {},
    "description": null,
    "display-name": "ormstack20220117104810",
    "freeform-tags": {},
    "id": "ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq",
    "lifecycle-state": "ACTIVE",
    "stack-drift-status": "NOT_CHECKED",
    "terraform-version": "1.0.x",
    "time-created": "2022-01-17T10:48:10.878000+00:00",
    "time-drift-last-checked": null,
    "variables": {}
  },
  "etag": "dd62ace0b9e9d825d825c05d4588b73fede061e55b75de6436b84fb2bb794185"
}
We have created a stack from the terraform configuration file(s) and generated a stack id. In
the next step, this stack id is used to generate an execution plan for the deployment of
NoSQL tables.
"id": "ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq"
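Generating the execution plan is a single Resource Manager CLI call against that stack id; a sketch, using the stack OCID returned above:

```shell
oci resource-manager job create-plan-job \
    --stack-id ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq \
    --display-name "Plan NoSQL Tables"
```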
For example:
Example response:
...
{
"level": "INFO",
"message": " # data.oci_nosql_table.nosql_demo will be read
during apply",
"timestamp": "2022-01-24T12:23:36.445000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " <= data \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-24T12:23:36.445000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ ddl_statement = \"CREATE TABLE IF NOT
EXISTS demo(ticketNo INTEGER, contactPhone STRING, confNo STRING,
gender STRING, bagInfo JSON, PRIMARY KEY(SHARD(ticketNo)))\" -> (known
after apply)",
"timestamp": "2022-01-24T12:23:36.445000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtl
xxsrvrc4zxr6lo4a\" -> (known after apply)",
"timestamp": "2022-01-24T12:23:36.445000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ is_auto_reclaimable = true -> (known after
apply)",
"timestamp": "2022-01-24T12:23:36.445000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " - \"orcl-cloud.free-tier-retained\" = \"true\"",
"timestamp": "2022-01-24T12:23:36.446000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # data.oci_nosql_table.nosql_demoKeyVal will be read
during apply",
"timestamp": "2022-01-24T12:23:36.446000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " <= data \"oci_nosql_table\" \"nosql_demoKeyVal\" {",
"timestamp": "2022-01-24T12:23:36.446000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ ddl_statement = \"CREATE TABLE IF NOT EXISTS
demoKeyVal(key INTEGER, value JSON, shortName STRING, PRIMARY
KEY(SHARD(key)))\" -> (known after apply)",
"timestamp": "2022-01-24T12:23:36.446000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhwlqrlbd
54l3wdo7hq\" -> (known after apply)",
"timestamp": "2022-01-24T12:23:36.447000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ is_auto_reclaimable = true -> (known after apply)",
"timestamp": "2022-01-24T12:23:36.447000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demo will be updated in-place",
"timestamp": "2022-01-24T12:23:36.447000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ resource \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-24T12:23:36.447000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ ddl_statement = \"CREATE TABLE IF NOT
EXISTS demo(ticketNo INTEGER, contactPhone STRING, confNo STRING,
gender STRING, bagInfo JSON, fullName STRING, PRIMARY
KEY(SHARD(ticketNo)))\" -> \"ALTER TABLE demo (DROP fullName)\"",
"timestamp": "2022-01-24T12:23:36.447000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demoKeyVal will be updated
in-place",
"timestamp": "2022-01-24T12:23:36.447000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ resource \"oci_nosql_table\"
\"nosql_demoKeyVal\" {",
"timestamp": "2022-01-24T12:23:36.447000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ ddl_statement = \"CREATE TABLE IF NOT
EXISTS demoKeyVal(key INTEGER, value JSON, PRIMARY KEY(SHARD(key)))\" -
> \"ALTER TABLE demoKeyVal (ADD shortName STRING)\"",
"timestamp": "2022-01-24T12:23:36.448000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ nosql_kv_table_ddl_statement = \"CREATE TABLE IF
NOT EXISTS demoKeyVal(key INTEGER, value JSON, shortName STRING,
PRIMARY KEY(SHARD(key)))\" -> (known after apply)",
"timestamp": "2022-01-24T12:23:36.448000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ nosql_table_ddl_statement = \"CREATE TABLE IF
NOT EXISTS demo(ticketNo INTEGER, contactPhone STRING, confNo STRING,
gender STRING, bagInfo JSON, PRIMARY KEY(SHARD(ticketNo)))\" -> (known
after apply)",
"timestamp": "2022-01-24T12:23:36.448000+00:00",
"type": "TERRAFORM_CONSOLE"
},
...
This step is important because it validates whether the updated stack code contains any
syntax errors and shows exactly how many OCI resources will be added, updated, or
destroyed. In this tutorial, we are updating the schema of two NoSQL tables, demo and
demoKeyVal, by adding and dropping columns.
{
...
"message": "Plan: 0 to add, 2 to change, 0 to destroy.",
...
}
Note:
In this tutorial, we are using OCI Resource Manager CLI commands to generate an
execution plan. You can perform the same task using OCI Resource Manager
Console.
For example:
Example response:
{
  "data": {
    "apply-job-plan-resolution": null,
    "cancellation-details": {
      "is-forced": false
    },
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
    "config-source": {
      "config-source-record-type": "ZIP_UPLOAD"
    },
    "defined-tags": {},
    "display-name": "ormjob20220117104856",
    "failure-details": null,
    "freeform-tags": {},
    "id": "ocid1.ormjob.oc1.phx.aaaaaaaacrylnpglae4yvwo4q2r2tk5z5x5v6bwjsoxgn26moyg3eqwnt2aq",
    "job-operation-details": {
      "operation": "PLAN",
      "terraform-advanced-options": {
        "detailed-log-level": null,
        "is-refresh-required": true,
        "parallelism": 10
      }
    },
    "lifecycle-state": "ACCEPTED",
    "operation": "PLAN",
    "resolved-plan-job-id": null,
    "stack-id": "ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq",
    "time-created": "2022-01-17T10:48:56.324000+00:00",
    "time-finished": null,
    "variables": {},
    "working-directory": null
  },
  "etag": "a6f75ec1e205cd9105705fd7c8d65bf262159a7e733b27148049e70ce6fc14fe"
}
We have generated an execution plan from a stack. The Resource Manager creates a
job with a unique id corresponding to this execution plan. This plan job id can be used
later to review the execution plan details before running the apply operation to deploy
the NoSQL database resources on the OCI cloud.
"id": "ocid1.ormjob.oc1.phx.aaaaaaaacrylnpglae4yvwo4q2r2tk5z5x5v6bwjsoxgn26moyg3eqwnt2aq",
"job-operation-details": {
    "operation": "PLAN"
    ...
}
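To review the plan job before applying it, the plan job id can be plugged into the Resource Manager job commands; a sketch:

```shell
# Check the plan job's lifecycle state (for example ACCEPTED, IN_PROGRESS, SUCCEEDED)
oci resource-manager job get \
    --job-id ocid1.ormjob.oc1.phx.aaaaaaaacrylnpglae4yvwo4q2r2tk5z5x5v6bwjsoxgn26moyg3eqwnt2aq

# Retrieve the Terraform console output recorded for the job
oci resource-manager job get-job-logs \
    --job-id ocid1.ormjob.oc1.phx.aaaaaaaacrylnpglae4yvwo4q2r2tk5z5x5v6bwjsoxgn26moyg3eqwnt2aq
```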
For example:
Example response:
...
{
"level": "INFO",
"message": "Terraform used the selected providers to generate the
following execution",
"timestamp": "2022-01-17T10:49:21.634000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "plan. Resource actions are indicated with the following
symbols:",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + create",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Terraform will perform the following actions:",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demo will be created",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + resource \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3dr
jho3f7nf5ca3ya\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + ddl_statement = \"CREATE TABLE if not
exists demo (ticketNo INTEGER, fullName STRING, contactPhone STRING,
confNo STRING, gender STRING, bagInfo JSON, PRIMARY KEY (ticketNo))\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + is_auto_reclaimable = true",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + name = \"demo\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_limits {",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_read_units = 10",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_storage_in_gbs = 1",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_write_units = 10",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demoKeyVal will be
created",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + resource \"oci_nosql_table\" \"nosql_demoKeyVal\" {",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7
nf5ca3ya\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + ddl_statement = \"CREATE TABLE if not exists
demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT
BY 1 NO CYCLE), value JSON, PRIMARY KEY (key))\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + is_auto_reclaimable = true",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + name = \"demoKeyVal\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_limits {",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_read_units = 10",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_storage_in_gbs = 1",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_write_units = 10",
"timestamp": "2022-01-17T10:49:21.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Plan: 2 to add, 0 to change, 0 to destroy.",
"timestamp": "2022-01-17T10:49:21.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
...
This step is important because it validates whether the stack code contains any syntax
errors and shows exactly how many OCI resources will be added, updated, or destroyed.
In this tutorial, we are deploying two NoSQL tables: demo and demoKeyVal.
{
...
"message": "Plan: 2 to add, 0 to change, 0 to destroy.",
...
}
• To automatically approve the apply job (no plan job specified), use AUTO_APPROVED:
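With AUTO_APPROVED, the apply job is created directly from the stack without naming a plan job; a sketch, using the stack OCID from earlier in this tutorial:

```shell
oci resource-manager job create-apply-job \
    --stack-id ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq \
    --execution-plan-strategy AUTO_APPROVED \
    --display-name "Create NoSQL Tables Using Terraform"
```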
For example:
Example response:
{
  "data": {
    "apply-job-plan-resolution": {
      "is-auto-approved": true,
      "is-use-latest-job-id": null,
      "plan-job-id": null
    },
    "cancellation-details": {
      "is-forced": false
    },
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
    "config-source": {
      "config-source-record-type": "ZIP_UPLOAD"
    },
    "defined-tags": {},
    "display-name": "Create NoSQL Tables Using Terraform",
    "failure-details": null,
    "freeform-tags": {},
    "id": "ocid1.ormjob.oc1.phx.aaaaaaaaqn4nsnfgi3th4rxolwqn3kftzzdpsw52pnfeyphi5dsxd6fhescq",
    "job-operation-details": {
      "execution-plan-job-id": null,
      "execution-plan-strategy": "AUTO_APPROVED",
      "operation": "APPLY",
      "terraform-advanced-options": {
        "detailed-log-level": null,
        "is-refresh-required": true,
        "parallelism": 10
      }
    },
    "lifecycle-state": "ACCEPTED",
    "operation": "APPLY",
    "resolved-plan-job-id": null,
    "stack-id": "ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq",
    "time-created": "2022-01-17T10:54:46.346000+00:00",
    "time-finished": null,
    "variables": {},
    "working-directory": null
  },
  "etag": "4042a300e8f678dd6da0f49ffeccefed66902b51331ebfbb559da8077a728126"
}
We have run the apply operation on the execution plan from a stack. The Resource
Manager creates a job with a unique id to run the apply operation. This apply job id
can later be used to review the logs generated as part of the NoSQL Database table
deployment on OCI.
"id": "ocid1.ormjob.oc1.phx.aaaaaaaaqn4nsnfgi3th4rxolwqn3kftzzdpsw52pnfeyphi5dsxd6fhescq",
"job-operation-details": {
    "operation": "APPLY"
    ...
}
For example:
Example response:
{
  "data": {
    "apply-job-plan-resolution": {
      "is-auto-approved": true,
      "is-use-latest-job-id": null,
      "plan-job-id": null
    },
    "cancellation-details": {
      "is-forced": false
    },
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
    "config-source": {
      "config-source-record-type": "ZIP_UPLOAD"
    },
    "defined-tags": {},
    "display-name": "Create NoSQL Tables Using Terraform",
    "failure-details": null,
    "freeform-tags": {},
    "id": "ocid1.ormjob.oc1.phx.aaaaaaaaqn4nsnfgi3th4rxolwqn3kftzzdpsw52pnfeyphi5dsxd6fhescq",
    "job-operation-details": {
      "execution-plan-job-id": null,
      "execution-plan-strategy": "AUTO_APPROVED",
      "operation": "APPLY",
      "terraform-advanced-options": {
        "detailed-log-level": null,
        "is-refresh-required": true,
        "parallelism": 10
      }
    },
    "lifecycle-state": "SUCCEEDED",
    "operation": "APPLY",
    "resolved-plan-job-id": null,
    "stack-id": "ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq",
    "time-created": "2022-01-17T10:54:46.346000+00:00",
    "time-finished": "2022-01-17T10:55:28.853000+00:00",
    "variables": {},
    "working-directory": null
  },
  "etag": "9e9f524b87e3c47b3f3ea3bbb4c1f956172a48e4c2311a44840c8b96e318bcaf--gzip"
}
You can check the status of your apply job to verify whether the job SUCCEEDED or FAILED.
{
...
"lifecycle-state": "SUCCEEDED",
...
}
For example:
Example response:
...
{
"level": "INFO",
"message": "Terraform will perform the following actions:",
"timestamp": "2022-01-17T10:55:05.580000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "",
"timestamp": "2022-01-17T10:55:05.580000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demo will be created",
"timestamp": "2022-01-17T10:55:05.580000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + resource \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3dr
jho3f7nf5ca3ya\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + ddl_statement = \"CREATE TABLE if not
exists demo (ticketNo INTEGER, fullName STRING, contactPhone STRING,
confNo STRING, gender STRING, bagInfo JSON, PRIMARY KEY (ticketNo))\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + is_auto_reclaimable = true",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + name = \"demo\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_limits {",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_read_units = 10",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_storage_in_gbs = 1",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_write_units = 10",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demoKeyVal will be created",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + resource \"oci_nosql_table\" \"nosql_demoKeyVal\" {",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7
nf5ca3ya\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + ddl_statement = \"CREATE TABLE if not exists
demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT
BY 1 NO CYCLE), value JSON, PRIMARY KEY (key))\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + is_auto_reclaimable = true",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + name = \"demoKeyVal\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_limits {",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_read_units = 10",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_storage_in_gbs = 1",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_write_units = 10",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Plan: 2 to add, 0 to change, 0 to destroy.",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demoKeyVal: Creating...",
"timestamp": "2022-01-17T10:55:06.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Creating...",
"timestamp": "2022-01-17T10:55:06.582000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demoKeyVal: Creation complete
after 6s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhwlqrl
bd54l3wdo7hq]",
"timestamp": "2022-01-17T10:55:12.582000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Creation complete after 9s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsr
vrc4zxr6lo4a]",
"timestamp": "2022-01-17T10:55:15.583000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Apply complete! Resources: 2 added, 0 changed, 0
destroyed.",
"timestamp": "2022-01-17T10:55:15.583000+00:00",
"type": "TERRAFORM_CONSOLE"
},
...
This step is important because it confirms exactly how many OCI resources were added,
updated, or destroyed. In this tutorial, we have successfully deployed two NoSQL tables:
demo and demoKeyVal.
{
...
"message": "Apply complete! Resources: 2 added, 0 changed, 0
destroyed.",
...
}
We have covered a lot of detail in this tutorial. We created the Terraform configuration
files required for deploying NoSQL Database tables on OCI, configured the source
location for these files, and then used the OCI Resource Manager CLI to create a stack,
generate an execution plan, and run an apply job on the execution plan.
For example:
If you have a Terraform configuration nosql.tf with the following contents:
variable "compartment_ocid" {
}
resource "oci_nosql_table" "nosql_demo" {
    compartment_id = var.compartment_ocid
    ddl_statement = "CREATE TABLE if not exists demo (ticketNo INTEGER, fullName STRING, contactPhone STRING, confNo STRING, gender STRING, bagInfo JSON, PRIMARY KEY (ticketNo))"
    name = "demo"
    table_limits {
        max_read_units = var.table_table_limits_max_read_units
        max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
        max_write_units = var.table_table_limits_max_write_units
    }
}
resource "oci_nosql_table" "nosql_demoKeyVal" {
    compartment_id = var.compartment_ocid
    ddl_statement = "CREATE TABLE if not exists demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), value JSON, PRIMARY KEY (key))"
    name = "demoKeyVal"
    table_limits {
        max_read_units = var.table_table_limits_max_read_units
        max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
        max_write_units = var.table_table_limits_max_write_units
    }
}
You now want to modify the demo table to drop an existing column, named fullName,
and modify the demoKeyVal table to add a new column, named shortName. To do so,
create a file nosql_override.tf (or override.tf) containing the following content:
variable "compartment_ocid" {
}
resource "oci_nosql_table" "nosql_demo" {
    compartment_id = var.compartment_ocid
    ddl_statement = "CREATE TABLE if not exists demo (ticketNo INTEGER, contactPhone STRING, confNo STRING, gender STRING, bagInfo JSON, PRIMARY KEY (ticketNo))"
    name = "demo"
    table_limits {
        max_read_units = var.table_table_limits_max_read_units
        max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
        max_write_units = var.table_table_limits_max_write_units
    }
}
resource "oci_nosql_table" "nosql_demoKeyVal" {
    compartment_id = var.compartment_ocid
    ddl_statement = "CREATE TABLE if not exists demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO CYCLE), value JSON, shortName STRING, PRIMARY KEY (key))"
    name = "demoKeyVal"
    table_limits {
        max_read_units = var.table_table_limits_max_read_units
        max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
        max_write_units = var.table_table_limits_max_write_units
    }
}
When Terraform processes this file (nosql_override.tf), it internally parses the DDL
statement (the CREATE TABLE statement), compares it with the existing table definition,
generates an equivalent ALTER TABLE statement, and applies it.
Note:
These instructions don't apply to configurations stored in source code control
systems. If you are using a source code control system such as GitHub or GitLab to
maintain your Terraform configuration files, you can skip this step and move directly to
Step 3. The latest version of your configuration is automatically used by any job
running on the associated stack.
For this tutorial, we update the stack with the updated Terraform configuration archive,
terraform.zip. The updated terraform.zip contains the following files:
• provider.tf
• nosql.tf
• nosql_override.tf or override.tf
• terraform.tfvars
• variables.tf
For example:
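The exact command is not shown in this excerpt; a sketch of the OCI Resource Manager CLI invocation to upload the new archive might look like the following (the stack OCID is a placeholder):

```shell
# Update the existing stack with the revised Terraform configuration archive.
# <stack_OCID> is a placeholder for the stack created earlier in the tutorial.
oci resource-manager stack update \
    --stack-id <stack_OCID> \
    --config-source terraform.zip
```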
Example response:
{
"data": {
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drj
ho3f7nf5ca3ya",
"config-source": {
"config-source-type": "ZIP_UPLOAD",
"working-directory": null
},
"defined-tags": {},
"description": null,
"display-name": "ormstack20220117104810",
"freeform-tags": {},
"id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxio
hgcpkscmr57bq",
"lifecycle-state": "ACTIVE",
"stack-drift-status": "NOT_CHECKED",
"terraform-version": "1.0.x",
"time-created": "2022-01-17T10:48:10.878000+00:00",
"time-drift-last-checked": null,
"variables": {}
},
"etag":
"068e7b962aa43c7b3e7bf5c24b2d7f937db0901a784a9dce8715d76d78ad30f3"
}
We have updated the existing stack with the new zip file containing the override
Terraform configuration file(s).
For example:
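The command itself is not reproduced in this excerpt; a sketch of the plan-job creation (the stack OCID and display name are placeholders) might look like:

```shell
# Generate an execution plan (a plan job) for the updated stack.
oci resource-manager job create-plan-job \
    --stack-id <stack_OCID> \
    --display-name "ormjob20220124122310"
```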
Example response:
{
"data": {
"apply-job-plan-resolution": null,
"cancellation-details": {
"is-forced": false
},
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drj
ho3f7nf5ca3ya",
"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "ormjob20220124122310",
"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaagke5ajwwchvxkql2c56qoohhvc2dxu5fnqswnpw4hsombrf
ijnia",
"job-operation-details": {
"operation": "PLAN",
"terraform-advanced-options": {
"detailed-log-level": null,
"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "ACCEPTED",
"operation": "PLAN",
"resolved-plan-job-id": null,
"stack-id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpks
cmr57bq",
"time-created": "2022-01-24T12:23:10.366000+00:00",
"time-finished": null,
"variables": {},
"working-directory": null
},
"etag": "b77d497287af3dd2d166871457d880ffee9952ee2c9a44e8f9dfa3e02b974c95"
}
We have generated an execution plan from the stack. Resource Manager creates a job
with a unique ID corresponding to this execution plan. You can use this plan job ID later to
review the execution plan details before running the apply operation that deploys the NoSQL
Database resources on OCI.
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaagke5ajwwchvxkql2c56qoohhvc2dxu5fnqswnpw4hsombrf
ijnia",
"job-operation-details": {
"operation": "PLAN"
...
}
• To run the apply job against a specific execution plan (plan job), specify the plan job OCID:
--execution-plan-job-id <plan_job_OCID>
--display-name "Example Apply Job"
• To automatically approve the apply job (no plan job specified), use AUTO_APPROVED:
For example:
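The full command is not shown in this excerpt; an auto-approved apply job could be sketched as follows (the stack OCID is a placeholder, and the display name matches the example response below):

```shell
# Run an apply job that is automatically approved (no plan job specified).
oci resource-manager job create-apply-job \
    --stack-id <stack_OCID> \
    --execution-plan-strategy AUTO_APPROVED \
    --display-name "Update NoSQL Tables Using Terraform"
```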
Example response:
{
"data": {
"apply-job-plan-resolution": {
"is-auto-approved": true,
"is-use-latest-job-id": null,
"plan-job-id": null
},
"cancellation-details": {
"is-forced": false
},
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca
3ya",
"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "Update NoSQL Tables Using Terraform",
"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaacmnanu2qd34x7l5uicgpdfpjbsgh5swddmtslb3qmbzg3dmuc3b
q",
"job-operation-details": {
"execution-plan-job-id": null,
"execution-plan-strategy": "AUTO_APPROVED",
"operation": "APPLY",
"terraform-advanced-options": {
"detailed-log-level": null,
"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "ACCEPTED",
"operation": "APPLY",
"resolved-plan-job-id": null,
"stack-id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq",
"time-created": "2022-01-24T12:36:52.911000+00:00",
"time-finished": null,
"variables": {},
"working-directory": null
},
"etag": "b2af026af48897c7839c347e06a8c40ec3ce1cac08a3da2f0c6ee74fb07078ab"
}
For example:
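The command is omitted in this excerpt; retrieving the job to check its state could be sketched as (the job OCID is a placeholder):

```shell
# Retrieve the apply job to check its lifecycle state.
oci resource-manager job get --job-id <apply_job_OCID>
```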
Example response:
{
"data": {
"apply-job-plan-resolution": {
"is-auto-approved": true,
"is-use-latest-job-id": null,
"plan-job-id": null
},
"cancellation-details": {
"is-forced": false
},
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7n
f5ca3ya",
"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "ALTER NoSQL Table Schema",
"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaacmnanu2qd34x7l5uicgpdfpjbsgh5swddmtslb3qmbzg3dm
uc3bq",
"job-operation-details": {
"execution-plan-job-id": null,
"execution-plan-strategy": "AUTO_APPROVED",
"operation": "APPLY",
"terraform-advanced-options": {
"detailed-log-level": null,
"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "SUCCEEDED",
"operation": "APPLY",
"resolved-plan-job-id": null,
"stack-id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxio
hgcpkscmr57bq",
"time-created": "2022-01-20T11:14:13.916000+00:00",
"time-finished": "2022-01-20T11:14:51.921000+00:00",
"variables": {},
"working-directory": null
},
"etag":
"13b1253bd5e6ca78778b4cf6aad38d262b1476aae06e6f36b40b5f914016b899--
gzip"
}
You can check the status of your apply job to verify whether the job is SUCCEEDED or
FAILED.
{
...
"lifecycle-state": "SUCCEEDED",
...
}
For example:
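The command is not shown here; fetching the Terraform console logs for the job could be sketched as (the job OCID is a placeholder):

```shell
# Fetch the Terraform console log entries for the apply job.
oci resource-manager job get-job-logs --job-id <apply_job_OCID>
```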
Example response:
...
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demoKeyVal: Refreshing
state...
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudx
hwlqrlbd54l3wdo7hq]",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Refreshing state...
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsr
vrc4zxr6lo4a]",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "plan. Resource actions are indicated with the following
symbols:",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ update in-place",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Terraform will perform the following actions:",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # data.oci_nosql_table.nosql_demo will be read during
apply",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " <= data \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7
nf5ca3ya\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_name_or_id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvr
c4zxr6lo4a\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " <= data \"oci_nosql_table\" \"nosql_demoKeyVal\"
{",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3dr
jho3f7nf5ca3ya\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_name_or_id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhw
lqrlbd54l3wdo7hq\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demo will be updated in-
place",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ resource \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ ddl_statement = \"CREATE TABLE IF NOT
EXISTS demo(ticketNo INTEGER, contactPhone STRING, confNo STRING,
gender STRING, bagInfo JSON, fullName STRING, PRIMARY
KEY(SHARD(ticketNo)))\" -> \"ALTER TABLE demo (DROP fullName)\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtl
xxsrvrc4zxr6lo4a\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " name = \"demo\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demoKeyVal will be updated in-
place",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ resource \"oci_nosql_table\" \"nosql_demoKeyVal\" {",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ ddl_statement = \"CREATE TABLE if not exists
demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT
BY 1 NO CYCLE), value JSON, PRIMARY KEY (key))\" -> \"ALTER TABLE demoKeyVal
(ADD shortName STRING)\"",
"timestamp": "2022-01-20T11:14:26.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhwlqrlbd
54l3wdo7hq\"",
"timestamp": "2022-01-20T11:14:26.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " name = \"demoKeyVal\"",
"timestamp": "2022-01-20T11:14:26.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Plan: 0 to add, 2 to change, 0 to destroy.",
"timestamp": "2022-01-20T11:14:26.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demoKeyVal: Modifying...
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhwlqrl
bd54l3wdo7hq]",
"timestamp": "2022-01-20T11:14:27.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Modifying...
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bq
tlxxsrvrc4zxr6lo4a]",
"timestamp": "2022-01-20T11:14:27.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Modifications complete
after 9s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bq
tlxxsrvrc4zxr6lo4a]",
"timestamp": "2022-01-20T11:14:35.634000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "data.oci_nosql_table.nosql_demo: Reading...",
"timestamp": "2022-01-20T11:14:35.634000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "data.oci_nosql_table.nosql_demo: Read complete after
0s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bq
tlxxsrvrc4zxr6lo4a]",
"timestamp": "2022-01-20T11:14:35.634000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "data.oci_nosql_table.nosql_demoKeyVal: Reading...",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "data.oci_nosql_table.nosql_demoKeyVal: Read complete
after 0s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudx
hwlqrlbd54l3wdo7hq]",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Apply complete! Resources: 0 added, 2 changed, 0
destroyed.",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "nosql_kv_table_ddl_statement = \"CREATE TABLE IF NOT
EXISTS demoKeyVal(key INTEGER, value JSON, shortName STRING, PRIMARY
KEY(SHARD(key)))\"",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "nosql_table_ddl_statement = \"CREATE TABLE IF NOT EXISTS
demo(ticketNo INTEGER, contactPhone STRING, confNo STRING, gender STRING,
bagInfo JSON, PRIMARY KEY(SHARD(ticketNo)))\"",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
...
This step is very important as it confirms exactly how many OCI resources were added,
updated, or destroyed. In the tutorial, we have successfully updated the schema of the two
NoSQL tables: demo and demoKeyVal.
{
...
"message": "Apply complete! Resources: 0 added, 2 changed, 0
destroyed.",
...
}
We have covered a lot of details in this tutorial. We created the override Terraform
configuration files required to update the schema of NoSQL Database tables on OCI, and then
used the OCI Resource Manager CLI to update the existing stack, generate an execution
plan, and run an apply job on the execution plan.
Develop
• Install Analytics Integrator
• Using console to create tables
• Using APIs to create tables
• Using Plugins
• Designing a Table in Oracle NoSQL Database Cloud Service
• Developing in Oracle NoSQL Database Cloud Simulator
• Using Oracle NoSQL Database Migrator
• Installation
• Verify the data in the Oracle Autonomous Database
• Verify the data in Oracle Analytics
package nosql.cloud.table;
import java.io.File;
import java.math.BigDecimal;
import java.security.SecureRandom;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import oracle.nosql.driver.NoSQLHandle;
import oracle.nosql.driver.NoSQLHandleConfig;
import oracle.nosql.driver.NoSQLHandleFactory;
import oracle.nosql.driver.Region;
import oracle.nosql.driver.iam.SignatureProvider;
import oracle.nosql.driver.ops.DeleteRequest;
import oracle.nosql.driver.ops.DeleteResult;
import oracle.nosql.driver.ops.GetRequest;
import oracle.nosql.driver.ops.GetResult;
import oracle.nosql.driver.ops.PutRequest;
import oracle.nosql.driver.ops.PutResult;
import oracle.nosql.driver.ops.QueryRequest;
import oracle.nosql.driver.ops.QueryResult;
import oracle.nosql.driver.ops.TableLimits;
import oracle.nosql.driver.ops.TableRequest;
import oracle.nosql.driver.ops.TableResult;
import oracle.nosql.driver.util.TimestampUtil;
import oracle.nosql.driver.values.ArrayValue;
import oracle.nosql.driver.values.LongValue;
import oracle.nosql.driver.values.MapValue;
import oracle.nosql.driver.values.StringValue;
if (nArgs == 0) {
usage(null);
}
while (argc < nArgs) {
final String thisArg = argv[argc++];
if ("-tenant".equals(thisArg)) {
if (argc < nArgs) {
tenantOcid = argv[argc++];
} else {
usage("-tenant argument requires an argument");
}
} else if ("-user".equals(thisArg)) {
if (argc < nArgs) {
userOcid = argv[argc++];
} else {
usage("-user requires an argument");
}
} else if ("-fp".equals(thisArg)) {
if (argc < nArgs) {
fingerprint = argv[argc++];
} else {
usage("-fp requires an argument");
}
} else if ("-pem".equals(thisArg)) {
if (argc < nArgs) {
privateKeyFilename = argv[argc++];
privateKeyFile = new File(privateKeyFilename);
} else {
usage("-pem requires an argument");
}
} else if ("-compartment".equals(thisArg)) {
if (argc < nArgs) {
compartment = argv[argc++];
} else {
usage("-compartment requires an argument");
}
} else if ("-table".equals(thisArg)) {
if (argc < nArgs) {
tableName = argv[argc++];
} else {
usage("-table requires an argument");
}
} else if ("-n".equals(thisArg)) {
if (argc < nArgs) {
nOps = Long.parseLong(argv[argc++]);
} else {
usage("-n requires an argument");
}
} else if ("-phrase".equals(thisArg)) {
passStr = argv[argc++];
passPhrase = passStr.toCharArray();
} else if ("-delete".equals(thisArg)) {
deleteExisting = true;
} else {
usage("Unknown argument: " + thisArg);
}
}
nRowsAdded = nOps;
System.out.println("COMPARTMENT: " + compartment);
System.out.println("TABLE: " + tableName);
final SignatureProvider auth =
new SignatureProvider(tenantOcid, userOcid, fingerprint,
privateKeyFile, passPhrase);
final NoSQLHandleConfig config =
new NoSQLHandleConfig(Region.US_ASHBURN_1, auth);
ociNoSqlHndl = NoSQLHandleFactory.createNoSQLHandle(config);
createTable();
}
private void usage(final String message) {
if (message != null) {
System.out.println("\n" + message + "\n");
}
System.out.println("usage: " + getClass().getName());
System.out.println
("\t-tenant <tenant ocid>\n" +
"\t-user <user ocid>\n" +
"\t-fp <fingerprint>\n" +
"\t-pem <private key file>\n" +
"\t-compartment <compartment name>\n" +
"\t-table <table name>\n" +
"\t-n <total records to create>\n" +
"\t[-phrase <pass phrase>]\n" +
"\t-delete (default: false) [delete all " +
"pre-existing data]\n");
System.exit(1);
}
private void run() {
if (deleteExisting) {
deleteExistingData();
}
doLoad();
}
private void createTable() {
final int readUnits = 10;
final int writeUnits = 10;
final int storageGb = 1;
final int ttlDays = 1;
/* Wait no more than 2 minutes for table create. */
final int waitMs = 2 * 60 * 1000;
/* Check for table existence every 2 seconds. */
final int delayMs = 2 * 1000;
/* Table creation statement. */
final String statement =
"CREATE TABLE IF NOT EXISTS " + tableName +
" (" +
"ID INTEGER," +
"AINT INTEGER," +
"ALON LONG," +
"ADOU DOUBLE," +
"ANUM NUMBER," +
"AUUID STRING," +
"ATIM_P0 TIMESTAMP(0)," +
"ATIM_P3 TIMESTAMP(3)," +
"ATIM_P6 TIMESTAMP(6)," +
"ATIM_P9 TIMESTAMP(9)," +
"AENU ENUM(S,M,L,XL,XXL,XXXL)," +
"ABOO BOOLEAN," +
"ABIN BINARY," +
"AFBIN BINARY(16)," +
"ARRY ARRAY (INTEGER)," +
"AMAP MAP (DOUBLE)," +
"AREC RECORD(" +
"BLON LONG," +
"BTIM_P6 TIMESTAMP(6)," +
"BNUM NUMBER," +
"BSTR STRING," +
"BRRY ARRAY(DOUBLE))," +
"AJSON JSON," +
"PRIMARY KEY (SHARD(AINT), ALON, ADOU, ID)" +
")" +
" USING TTL " + ttlDays + " days";
System.out.println(statement);
final TableRequest tblRqst = new TableRequest();
tblRqst.setCompartment(compartment).setStatement(statement);
final TableLimits tblLimits =
new TableLimits(readUnits, writeUnits, storageGb);
tblRqst.setTableLimits(tblLimits);
final TableResult tblResult =
ociNoSqlHndl.tableRequest(tblRqst);
tblResult.waitForCompletion(ociNoSqlHndl, waitMs, delayMs);
if (tblResult.getTableState() != TableResult.State.ACTIVE) {
final String msg =
"TIMEOUT: Failed to create table in OCI NoSQL " +
"[table=" + tableName + "]";
throw new RuntimeException(msg);
}
}
private void doLoad() {
final List<MapValue> rows = generateData(nOps);
for (MapValue row : rows) {
addRow(row);
}
displayRow();
final long nRowsTotal = nRowsInTable();
if (nOps > nRowsAdded) {
System.out.println(
nOps + " records requested, " +
nRowsAdded + " unique records actually added " +
"[" + (nOps - nRowsAdded) + " duplicates], " +
nRowsTotal + " records total in table");
} else {
System.out.println(
nOps + " records requested, " +
nRowsAdded + " unique records added, " +
nRowsTotal + " records total in table");
}
}
private void addRow(final MapValue row) {
final PutRequest putRqst = new PutRequest();
putRqst.setCompartment(compartment).setTableName(tableName);
putRqst.setValue(row);
final PutResult putRslt = ociNoSqlHndl.put(putRqst);
if (putRslt.getVersion() == null) {
final String msg =
"PUT: Failed to insert row [table=" + tableName +
", row = " + row + "]";
throw new RuntimeException(msg);
}
}
/* Retrieves and deletes each row from the table. */
private void deleteExistingData() {
final String selectAll = "SELECT * FROM " + tableName;
final QueryRequest queryRqst = new QueryRequest();
queryRqst.setCompartment(compartment).setStatement(selectAll);
long cnt = 0;
do {
QueryResult queryRslt = ociNoSqlHndl.query(queryRqst);
final List<MapValue> rowMap = queryRslt.getResults();
for (MapValue row : rowMap) {
final DeleteRequest delRqst = new DeleteRequest();
delRqst.setCompartment(compartment)
.setTableName(tableName);
delRqst.setKey(row);
final DeleteResult delRslt =
ociNoSqlHndl.delete(delRqst);
if (delRslt.getSuccess()) {
cnt++;
}
}
} while (!queryRqst.isDone());
System.out.println(cnt + " records deleted");
}
/* Counts the number of rows in the table. */
private long nRowsInTable() {
final String selectAll = "SELECT * FROM " + tableName;
final QueryRequest queryRqst = new QueryRequest();
queryRqst.setCompartment(compartment).setStatement(selectAll);
long cnt = 0;
do {
QueryResult queryRslt = ociNoSqlHndl.query(queryRqst);
final List<MapValue> rowMap = queryRslt.getResults();
for (MapValue row : rowMap) {
cnt++;
}
} while (!queryRqst.isDone());
return cnt;
}
/* Convenience method for displaying output when debugging. */
private void displayRow() {
final String selectAll = "SELECT * FROM " + tableName;
final QueryRequest queryRqst = new QueryRequest();
queryRqst.setCompartment(compartment).setStatement(selectAll);
do {
QueryResult queryRslt = ociNoSqlHndl.query(queryRqst);
final List<MapValue> rowMap = queryRslt.getResults();
for (MapValue row : rowMap) {
System.out.println(row);
}
} while (!queryRqst.isDone());
}
/* Generates randomized data with which to populate the table. */
private List<MapValue> generateData(final long count) {
List<MapValue> rows = new ArrayList<>();
final BigDecimal[] numberArray = {
new BigDecimal("3E+8"),
new BigDecimal("-1.7976931348623157E+2"),
new BigDecimal("12345.76455"),
new BigDecimal("12345620.789"),
new BigDecimal("1234562078912345678988765446777475657"),
new BigDecimal("1.7976931348623157E+305"),
new BigDecimal("-1.7976931348623157E+304")
};
final Timestamp[] timeArray_p0 = {
TimestampUtil.parseString("2010-05-05T10:45:00"),
TimestampUtil.parseString("2011-05-05T10:45:01"),
Timestamp.from(Instant.parse("2021-07-15T11:31:21Z"))
};
final Timestamp[] timeArray_p3 = {
TimestampUtil.parseString("2011-05-05T10:45:01.123"),
Timestamp.from(
Instant.parse("2021-07-15T11:31:47.549Z")),
Timestamp.from(
Instant.parse("2021-07-15T11:32:12.836Z"))
};
final Timestamp[] timeArray_p6 = {
TimestampUtil.parseString(
"2014-05-05T10:45:01.789456Z"),
TimestampUtil.parseString(
"2013-08-20T12:34:56.123456Z"),
Timestamp.from(Instant.parse(
"2021-07-15T11:31:47.549213Z")),
Timestamp.from(Instant.parse(
"2021-07-15T11:32:12.567836Z"))
};
final Timestamp[] timeArray_p9 = {
Timestamp.from(Instant.parse(
"2021-07-15T12:46:35.574639954Z")),
Timestamp.from(Instant.parse(
"2021-07-15T12:47:32.883922660Z")),
Timestamp.from(Instant.parse(
"2021-07-15T12:48:11.321131987Z"))
};
final String[] enumArray =
{"S", "M", "L", "XL", "XXL", "XXXL"};
for (int i = 1; i <= count; ++i) {
final MapValue row = new MapValue();
byte[] byteArray = new byte[16];
generator.nextBytes(byteArray);
row.put("ID", i);
row.put("AINT", generator.nextInt());
row.put("ALON", generator.nextLong());
row.put("ADOU", generator.nextDouble());
row.put("ANUM",
numberArray[generator.nextInt(
numberArray.length)]);
row.put("AUUID", UUID.randomUUID().toString());
/* TIMESTAMP */
row.put("ATIM_P0",
timeArray_p0[generator.nextInt(
timeArray_p0.length)]);
row.put("ATIM_P3",
timeArray_p3[generator.nextInt(
timeArray_p3.length)]);
row.put("ATIM_P6",
timeArray_p6[generator.nextInt(
timeArray_p6.length)]);
row.put("ATIM_P9",
timeArray_p9[generator.nextInt(
timeArray_p9.length)]);
/* ENUM */
row.put("AENU", enumArray[i % enumArray.length]);
/* BOOLEAN */
row.put("ABOO", generator.nextBoolean());
/* BINARY & FIXED_BINARY stored as strings */
row.put("ABIN", byteArray);
row.put("AFBIN", byteArray);
/* ARRAY of INTEGER */
ArrayValue integerArr = new ArrayValue();
for (int j = 0; j < 3; ++j) {
integerArr.add(generator.nextInt());
}
row.put("ARRY", integerArr);
/* MAP of DOUBLE */
MapValue map = new MapValue(true,3);
map.put("d1", generator.nextDouble());
map.put("d2", generator.nextDouble());
row.put("AMAP", map);
/*
* RECORD of: LONG, TIMESTAMP, NUMBER,
* STRING, ARRAY of DOUBLE
*/
MapValue record = new MapValue(true,5);
/* LONG element */
record.put("BLON", generator.nextLong());
/* TIMESTAMP element */
record.put("BTIM_P6",
timeArray_p6[generator.nextInt(
timeArray_p6.length)]);
/* NUMBER element */
record.put("BNUM",
numberArray[generator.nextInt(
numberArray.length)]);
/* STRING element */
record.put("BSTR", Double.toString(
generator.nextDouble()));
/* ARRAY of DOUBLE element */
ArrayValue doubleArr = new ArrayValue();
for (int j = 0; j < 3; ++j) {
doubleArr.add(generator.nextDouble());
}
record.put("BRRY", doubleArr);
row.put("AREC", record);
/* JSON */
MapValue json = new MapValue(true,5);
json.put("id", i);
json.put("name", "name_" + i);
json.put("age", i + 10);
row.put("AJSON", json);
rows.add(row);
}
return rows;
}
}
In the example above, the arguments related to your user credentials (-tenant, -user,
-fp, and -pem) are required, as are -compartment and -table, which specify the
compartment and the name of the table to create and load with data. The remaining
arguments (-n, -phrase, and -delete) are optional.
If the -n argument is specified, its value is the number of new rows to generate and
write to the table. If the argument is not specified, then 10 rows are written to the
table by default.
If the -delete argument is specified, then all existing rows written to the table by
previous executions of the application will first be deleted from the table before adding
any new rows.
After the application finishes executing, you can verify that the table exists and is
populated with data by logging in to the Oracle Cloud Console, navigating to the tables
section of the Oracle NoSQL Database service, and querying the table whose name you
specified for the -table argument.
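As a hypothetical sketch of compiling and running the example (the main class name LoadTable and the driver jar name nosqldriver.jar are assumptions; the original excerpt does not show them), the invocation might look like:

```shell
# Compile the example against the Oracle NoSQL Java SDK driver jar (name assumed).
javac -cp nosqldriver.jar nosql/cloud/table/*.java

# Run it with your own tenancy, user, key fingerprint, and private key file.
# The class name LoadTable is hypothetical; substitute the actual class name.
java -cp .:nosqldriver.jar nosql.cloud.table.LoadTable \
    -tenant ocid1.tenancy.oc1..<tenancy> \
    -user ocid1.user.oc1..<user> \
    -fp <key_fingerprint> \
    -pem ~/.oci/oci_api_key.pem \
    -compartment <compartment_name> \
    -table exampleTable \
    -n 50 -delete
```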
Prerequisites
In order to use the Oracle NoSQL Database Analytics Integrator, you must complete the
following:
• Install Java 11 or higher.
• Sign up for an account on the Oracle Cloud Infrastructure. See Oracle Cloud
Infrastructure - Signup for more details.
• Create a Compute Instance from which the Oracle NoSQL Database Analytics Integrator
can be installed and executed. See Compute Instance for more information.
• Create one or more tables in the Oracle NoSQL Database Cloud Service and populate
those tables with data. See Create and populate a NoSQL table for more details.
• Create a bucket in OCI Object Storage. See Create a bucket in Object Storage for more
details.
• Create a database in the Oracle Autonomous Data Warehouse (ADW). See Create a
database in the Autonomous Data Warehouse for more details.
• Download and install the client credentials (wallet) needed to establish a secure
connection to the ADW database. See Install credentials needed for a secure database
connection for more details.
• If you wish to employ user-to-service based authentication instead of service-to-service
based authentication (via the OCI Resource Principal), then generate an authorization
token to facilitate authentication of the ADW database with Object Storage. See Generate
an authorization token for Object Storage for more details.
• Enable/store the credential the ADW database should use to access the objects in Object
Storage - that is, either enable the OCI Resource Principal Credential or Store/Enable the
User's Object Storage AUTH_TOKEN in the ADW Database. See Enable the OCI
Resource Principal Credential or Store/Enable the User's Object Storage AUTH_TOKEN
in the ADW Database for more details.
• Create a Dynamic group for the Compute Instance and (optionally) the ADW database.
See Create a Dynamic Group for the Compute Instance and optionally the ADW
Database for more details.
• Create a Policy with Appropriate Permissions for the Dynamic Group. See Create a
Policy with appropriate permissions for the dynamic group for more details.
After you have satisfied all of the prerequisites for using the Oracle NoSQL Database
Analytics Integrator, you can then install and configure the utility. You can then execute the
utility to copy the contents of your tables in the NoSQL Cloud Service to the Autonomous
Data Warehouse so that you can analyze the data using Oracle Analytics.
Installation
You can download the Oracle NoSQL Database Analytics Integrator from the Oracle Technology
Network. You can install it in the desired compute environment, which can be an Oracle Cloud
Compute Instance or your own local environment outside of the Oracle Cloud. The utility's
installation package is provided as either a compressed tar file or a zip file; for example,
nosqlanalytics-<version>.tar.gz or nosqlanalytics-<version>.zip. If you decide to
install the utility on an Oracle Cloud Compute Instance, then after downloading the desired
installation package, copy that package to the compute instance.
For example, suppose you download the zip file for version 1.0.1 to the ~/Downloads
directory of your local environment, then you would do the following:
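The copy-and-unpack commands were lost in this excerpt; assuming the default opc user and a compute instance reachable at a placeholder <instance_ip>, the steps would look something like:

```shell
# Copy the downloaded package to the compute instance ...
scp ~/Downloads/nosqlanalytics-1.0.1.zip opc@<instance_ip>:~

# ... then log in to the instance and unpack it under /home/opc.
ssh opc@<instance_ip>
unzip nosqlanalytics-1.0.1.zip
```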
This installs the utility under the home directory for the user named opc on the
compute instance; that is, /home/opc/nosqlanalytics-1.0.1.
Note:
If you install the utility on an Oracle Cloud Compute instance, then the utility
can be executed using either your own security credentials or an Oracle
Cloud Instance Principal. But if you install the utility outside of the Oracle
Cloud Infrastructure for testing purposes, then you must use your own Oracle
Cloud security credentials to run the utility. You should execute the utility
from your local environment only when the NoSQL tables that you want to
copy are small in size.
Example 1: You are executing the utility from an Oracle Cloud Compute Instance and
prefer to authenticate using the Instance Principal.
{
"nosqlstore": {
"type" : "nosqldb_cloud",
"endpoint" : "us-ashburn-1",
"useInstancePrincipal" : true,
"compartment" : <ocid.of.compartment.containing.nosql.tables>,
"table" : <tableName1,tableName2,tableName3>,
"readUnitsPercent" : "90,90,90",
"requestTimeoutMs" : "5000"
},
"objectstore" : {
"type" : "object_storage_oci",
"endpoint" : "us-ashburn-1",
"useInstancePrincipal" : true,
"compartment" : <ocid.of.compartment.containing.bucket>,
"bucket" : <bucket-name-objectstorage>,
"compression" : "snappy"
},
"database": {
"type" : "database_cloud",
"endpoint" : "us-ashburn-1",
"credentials" : "/home/opc/.oci/config",
"credentialsProfile" : <profile-for-adw-auth>,
"databaseName" : <database-name>,
"databaseUser" : "ADMIN",
"databaseWallet" : <path-where-wallet-unzipped>
}
}
Example 2: You prefer to authenticate using your own user credentials, or you are executing
from outside of the Oracle Cloud and thus Instance Principal authentication is not available.
{
"nosqlstore": {
"type" : "nosqldb_cloud",
"endpoint" : "us-ashburn-1",
"credentials" : "/home/opc/.oci/config",
"credentialsProfile" : <nosqldb-user-credentials>,
"table" : <tableName1,tableName2,tableName3>,
"readUnitsPercent" : "90,90,90",
"requestTimeoutMs" : "5000"
},
"objectstore" : {
"type" : "object_storage_oci",
"endpoint" : "us-ashburn-1",
"credentials" : "/home/opc/.oci/config",
"credentialsProfile" : <objectstorage-user-credentials>,
"bucket" : <bucket-name-objectstorage>,
"compression" : "snappy"
},
"database": {
"type" : "database_cloud",
"endpoint" : "us-ashburn-1",
"credentials" : "/home/opc/.oci/config",
"credentialsProfile" : <adw-user-credentials>,
"databaseName" : <database-name>,
"databaseUser" : "ADMIN",
"databaseWallet" : <path-where-wallet-unzipped>
},
"abortOnError" : false
}
The configuration is divided into three sections (nosqlstore, objectstore, and database),
whose entries specify how the utility interacts with each respective cloud service:
the NoSQL Cloud Service, OCI Object Storage, and the Oracle Autonomous Data Warehouse.
Some parameters are common to all three sections.
Note:
User credentials must be specified in the database section because the Autonomous Database hosted in ADW requires it.
Note:
If the compression entry is not specified, then snappy compression will be performed.
Note:
Each entry in the configuration file can be overridden on the command line by setting a system property whose name has the form section.entry; for example, -Dnosqlstore.table=tableName1,tableName3. If an entry is not located within a section, then the property name is simply the name of the entry itself; for example, -DabortOnError=false. This feature may be useful when testing or when writing scripts that run the utility at regular intervals.
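The section.entry override pattern described in the note can be sketched with plain system-property lookups. This is only an illustration of the lookup mechanism, not the utility's actual implementation; the default values shown are placeholders.

```java
public class OverrideDemo {
    public static void main(String[] args) {
        // -Dnosqlstore.table=... overrides the "table" entry of the "nosqlstore" section;
        // if the property is absent, the value from the configuration file applies.
        String table = System.getProperty("nosqlstore.table",
                                          "tableName1,tableName2,tableName3");
        // Entries outside any section are looked up by their bare name.
        boolean abortOnError =
            Boolean.parseBoolean(System.getProperty("abortOnError", "false"));
        System.out.println(table);
        System.out.println(abortOnError);
    }
}
```

Running with `-Dnosqlstore.table=tableName1,tableName3` would make the first lookup return the overridden list.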
[DEFAULT]
user=<ocid.of.default.user>
fingerprint=<fingerprint.of.default.user>
key_file=<path.to.default.user.oci.api.private.key.file.pem>
tenancy=<ocid.of.default.user.tenancy>
region=us-ashburn-1
compartment=<ocid.of.default.compartment>
[nosqldb-user-credentials]
user=<ocid.of.nosqldb.user>
fingerprint=<fingerprint.of.nosqldb.user>
key_file=<path.to.nosqldb.user.oci.api.private.key.file.pem>
tenancy=<ocid.of.nosqldb.user.tenancy>
region=us-ashburn-1
compartment=<ocid.of.nosqldb.compartment>
[objectstorage-user-credentials]
user=<ocid.of.objectstorage.user>
fingerprint=<fingerprint.of.objectstorage.user>
key_file=<path.to.objectstorage.user.oci.api.private.key.file.pem>
tenancy=<ocid.of.objectstorage.user.tenancy>
region=us-ashburn-1
compartment=<ocid.of.objectstorage.compartment>
[adw-user-credentials]
user=<ocid.of.adw.user>
fingerprint=<fingerprint.of.adw.user>
key_file=<path.to.adw.user.oci.api.private.key.file.pem>
tenancy=<ocid.of.adw.user.tenancy>
region=us-ashburn-1
compartment=<ocid.of.adw.compartment>
dbmsOcid=<ocid.of.autonomous.database.in.adw>
dbmsCredentialName=<OCI$RESOURCE_PRINCIPAL or NOSQLADWDB_OBJ_STORE_CREDENTIAL>
Note:
In the above configuration file, there are three separate profiles (nosqldb-user-credentials, objectstorage-user-credentials, and adw-user-credentials) in addition to the DEFAULT profile. This is not mandatory; a configuration file can contain only the DEFAULT profile. However, keeping separate profiles is better practice than combining all parameters in the DEFAULT profile.
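The OCI configuration file uses an INI-like syntax in which each bracketed line starts a profile and each key=value line belongs to the most recent profile. The following minimal parser is only an illustration of that structure (it is not the SDK's own parser, and the sample keys are placeholders):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ProfileDemo {
    // Group key=value entries under the profile header that precedes them.
    static Map<String, Map<String, String>> parse(String text) {
        Map<String, Map<String, String>> profiles = new LinkedHashMap<>();
        String current = null;
        for (String raw : text.split("\n")) {
            String line = raw.trim();
            if (line.isEmpty()) continue;
            if (line.startsWith("[") && line.endsWith("]")) {
                current = line.substring(1, line.length() - 1);
                profiles.put(current, new LinkedHashMap<>());
            } else if (current != null && line.contains("=")) {
                int i = line.indexOf('=');
                profiles.get(current).put(line.substring(0, i).trim(),
                                          line.substring(i + 1).trim());
            }
        }
        return profiles;
    }

    public static void main(String[] args) {
        String sample = "[DEFAULT]\nregion=us-ashburn-1\n"
                      + "[adw-user-credentials]\nregion=us-ashburn-1\n";
        Map<String, Map<String, String>> p = parse(sample);
        System.out.println(p.keySet());
        System.out.println(p.get("DEFAULT").get("region"));
    }
}
```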
cd /home/opc/nosqlanalytics-1.0.1/nosqlanalytics
• Invoke the utility using the following command. The configuration file oci-nosqlanalytics-config.json is present under the .oci directory inside the home directory.
java -Djava.util.logging.config.file=./src/main/resources/logging/java-util-logging.properties \
     -Dlog4j.configurationFile=file:./src/main/resources/logging/log4j2-analytics.properties \
     -jar ./lib/nosqlanalytics-1.0.1.jar \
     -config ~/.oci/oci-nosqlanalytics-config.json
Note:
The system properties that configure the loggers used during execution are
optional. If those system properties are not specified, then the utility will
produce no logging output.
Logging
The Oracle NoSQL Database Analytics Integrator executes software from multiple
third-party libraries, where each library defines its own set of loggers with different
namespaces. For convenience, the Oracle NoSQL Database Analytics Integrator
provides two logging configuration files as part of the release; one to configure logging
mechanisms based on java.util.logging, and one for loggers based on Log4j2.
Note:
By default, the logger configuration files provided with the utility are designed
to produce minimal output as the utility executes. But if you wish to see
verbose output from the various components that are employed by the utility,
then you should increase the logging levels of the specific loggers whose
behavior you wish to analyze.
• Select Development from the menu on the left side of the display.
From the Navigator tab of the window on the left of the display, first verify that the table you created appears in the list of tables contained in the database. If it does, then execute the SQL query from the Worksheet window.
Verify that the expected contents of the table are displayed in the Query Result window at the bottom center of the display.
• Click on Save.
Note:
For Client Credentials, when you enter the path to the wallet zip file, the tool
will extract the file cwallet.sso and replace what you entered with that file’s
name. Finally, once you enter the Connection Name, the tool will
automatically enter a value for Service Name that is based on what was
entered for Connection Name.
• Click ADMIN.
• Scroll down the list of tables and select the table you copied from NoSQL
Database Cloud Service.
• Click Add to Data Set. Verify the data displayed is what you expect.
Your data is now ready to be analyzed using all of the facilities provided by Oracle Analytics.
Creating a Compartment
When you sign up for Oracle Cloud Infrastructure, Oracle creates your tenancy with a root
compartment that holds all your cloud resources. You then create additional compartments
within the tenancy (root compartment) and corresponding policies to control access to the
resources in each compartment. Before you create an Oracle NoSQL Database Cloud
Service table, Oracle recommends that you set up the compartment where you want the table
to belong.
You create compartments in Oracle Cloud Infrastructure Identity and Access Management
(IAM). See Setting Up Your Tenancy and Managing Compartments in Oracle Cloud
Infrastructure Documentation.
Creating Tables
You can create a new Oracle NoSQL Database Cloud Service table from the NoSQL console.
The NoSQL console lets you create the Oracle NoSQL Database Cloud Service tables in two
modes:
1. Simple Input Mode: You can use this mode to create the NoSQL Database Cloud
Service table declaratively, that is, without writing a DDL statement.
2. Advanced DDL Input Mode: You can use this mode to create the NoSQL
Database Cloud Service table using a DDL statement.
If you want to create a regular table, then disable the toggle button. You will be able
to enter the appropriate capacity values for the table.
– Read Capacity (ReadUnits): Enter the number of read units. See Estimating
Capacity to learn about read units.
– Write Capacity (WriteUnits): Enter the number of write units. See Estimating
Capacity to learn about write units.
– Disk Storage (GB): Specify the disk space in gigabytes (GB) to be used by the
table. See Estimating Capacity to learn about storage capacity.
• Capacity mode
You can specify the option for Capacity mode as Provisioned Capacity or On
Demand Capacity. Provisioned Capacity and On Demand Capacity modes are
mutually exclusive options. If you enable On Demand Capacity for a table, you don't
need to specify the read/write capacity of the table. You are charged for the actual
read and write units usage, not the provisioned usage.
Enabling On Demand Capacity for a table is a good option if any of the following are
true:
a. You create new tables with unknown workloads.
b. You have unpredictable application traffic.
c. You prefer the ease of paying for only what you use.
Limitations of enabling On Demand Capacity for a table:
a. On Demand Capacity limits the capacity of the table to 5,000 writes and 10,000
reads.
5. In the Name field, enter a table name that is unique within your tenancy.
Table names must conform to Oracle NoSQL Database Cloud Service naming conventions. See Oracle NoSQL Database Cloud Service Limits.
6. In the Primary Key Columns section, enter primary key details:
• Column Name: Enter a column name for the primary key in your table. See Oracle
NoSQL Database Cloud Service Limits to learn about column naming requirements.
• Type: Select the data type for your primary key column.
• Precision: This is applicable to TIMESTAMP typed columns only. Timestamp values have precision in fractional seconds that ranges from 0 to 9. For example, a precision of 0 means that no fractional seconds are stored, 3 means that the timestamp stores milliseconds, and 9 means a precision of nanoseconds. 0 is the minimum precision, and 9 is the maximum.
• Set as Shard Key: Click this option to set this primary key column as the shard key. The shard key distributes data across the Oracle NoSQL Database Cloud Service cluster for increased efficiency, and positions records that share the shard key locally for easy reference and access. Records that share the shard key are stored in the same physical location and can be accessed atomically and efficiently.
• + Another Primary Key Column: Click this button to add more columns while
creating a composite (multi-column) primary key.
• Use the up and down arrows to change the sequence of columns while creating a
composite primary key.
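The precision values described above (0 for whole seconds, 3 for milliseconds, 9 for nanoseconds) map directly onto fractional-second truncation, which can be illustrated with the standard java.time API. The sample timestamp below is arbitrary:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class PrecisionDemo {
    public static void main(String[] args) {
        Instant t = Instant.parse("2024-01-15T10:30:45.123456789Z");
        // Precision 0: no fractional seconds are kept.
        System.out.println(t.truncatedTo(ChronoUnit.SECONDS)); // 2024-01-15T10:30:45Z
        // Precision 3: milliseconds are kept.
        System.out.println(t.truncatedTo(ChronoUnit.MILLIS));  // 2024-01-15T10:30:45.123Z
        // Precision 9: the full nanosecond value is kept.
        System.out.println(t);                                 // 2024-01-15T10:30:45.123456789Z
    }
}
```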
• Column Name: Enter the column name. Ensure that you conform to the column naming requirements described in Oracle NoSQL Database Cloud Service Limits.
• Type: Select the data type for your column.
• Precision: This is applicable to TIMESTAMP typed columns only. Timestamp values have precision in fractional seconds that ranges from 0 to 9. For example, a precision of 0 means that no fractional seconds are stored, 3 means that the timestamp stores milliseconds, and 9 means a precision of nanoseconds. 0 is the minimum precision, and 9 is the maximum.
• Size: This is applicable for BINARY typed columns only. Specify the size in
bytes to make the binary a fixed binary.
• Default Value: (optional) Supply a default value for the column.
Note:
Default values cannot be specified for binary and JSON data type columns.
• Value is Not Null: Click this option to specify that a column must always have
a value.
• + Another Column: Click this button to add more columns.
• Click the delete icon to delete a column.
8. (Optional) To specify advanced options, click Show Advanced Options and enter
advanced details:
• Table Time to Live (Days): (optional) Specify the expiration duration, in days, for the rows in the table. After that number of days, rows expire automatically and are no longer available. The default value is zero, indicating no expiration time.
Note:
Updating Table Time to Live (TTL) will not change the TTL value of any
existing data in the table. The new TTL value will only apply to those rows
that are added to the table after this value is modified and to the rows for
which no overriding row-specific value has been supplied.
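The basic expiration rule (a TTL in days, with zero meaning no expiration) is a simple time comparison. The following self-contained sketch illustrates that rule only; the method name and the idea of tracking a per-row write time are hypothetical, not the service's internal implementation:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class TtlDemo {
    // A row written at writeTime expires ttlDays later; ttlDays == 0 means never.
    static boolean isExpired(Instant writeTime, int ttlDays, Instant now) {
        if (ttlDays == 0) return false;
        return now.isAfter(writeTime.plus(ttlDays, ChronoUnit.DAYS));
    }

    public static void main(String[] args) {
        Instant written = Instant.parse("2024-01-01T00:00:00Z");
        // 5-day TTL, checked 6 days later: expired.
        System.out.println(isExpired(written, 5, Instant.parse("2024-01-07T00:00:00Z"))); // true
        // TTL of zero: never expires.
        System.out.println(isExpired(written, 0, Instant.parse("2030-01-01T00:00:00Z"))); // false
    }
}
```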
3. In the Create Table window, select Advanced DDL Input for Table Creation
Mode.
4. Under Reserved Capacity, you have the following options.
• Always Free Configuration:
Enable the toggle button to create an Always Free NoSQL table. Disabling the
toggle button creates a regular NoSQL table. You can create up to three
Always Free NoSQL tables in the tenancy. If you have three Always Free
NoSQL tables in the tenancy, the toggle button to create an Always Free NoSQL table is disabled.
If you enable the toggle button to create an Always Free NoSQL table, the
Read capacity, Write capacity, and Disk storage fields are assigned default
values. The Capacity mode becomes Provisioned Capacity. These values
cannot be changed.
If you want to create a regular table, then disable the toggle button. You will be
able to enter the appropriate capacity values for the table.
– Read Capacity (ReadUnits): Enter the number of read units. See
Estimating Capacity to learn about read units.
– Write Capacity (WriteUnits): Enter the number of write units. See
Estimating Capacity to learn about write units.
– Disk Storage (GB): Specify the disk space in gigabytes (GB) to be used
by the table. See Estimating Capacity to learn about storage capacity.
• Capacity mode
You can specify the option for Capacity mode as Provisioned Capacity or On
Demand Capacity. Provisioned Capacity and On Demand Capacity modes are
mutually exclusive options. If you enable On Demand Capacity for a table, you don't
need to specify the read/write capacity of the table. You are charged for the actual
read and write units usage, not the provisioned usage.
Enabling On Demand Capacity for a table is a good option if any of the following are
true:
a. You create new tables with unknown workloads.
b. You have unpredictable application traffic.
c. You prefer the ease of paying for only what you use.
Limitations of enabling On Demand Capacity for a table:
a. On Demand Capacity limits the capacity of the table to 5,000 writes and 10,000
reads.
b. The number of tables with On Demand Capacity per tenant is limited to 3.
c. You pay more per unit for On Demand Capacity table units than provisioned table
units.
5. In the DDL input section, enter the CREATE TABLE DDL statement in the Query field. You may get an error if your statement is incomplete or faulty. See Debugging SQL statement errors in the OCI console to learn about possible errors in the OCI console and how to fix them. See the Developers Guide for examples of the CREATE TABLE statement.
6. (Optional) To specify advanced options, click Show Advanced Options and enter
advanced details:
• Tag Namespace: Select a tag namespace from the select list. A tag namespace is
like a container for your tag keys. It is case insensitive and must be unique across the
tenancy.
• Tag Key: Enter the name to use to refer to the tag. A tag key is case insensitive and
must be unique within a namespace.
• Value: Enter the value to give your tag.
• + Additional Tag: Click to add more tags.
Table Hierarchies
You can use the create table statement to create a table as a child table of another table,
which then becomes the parent of the new table. This is done by using a composite name (a
name_path) for the child table. A composite name consists of a number N (N > 1) of
identifiers separated by dots. The last identifier is the local name of the child table and the
first N-1 identifiers are the name of the parent.
          A
         / \
      A.B   A.G
       /
    A.B.C
      /
 A.B.C.D
The top-most parent table is A. The child table B gets the composite name A.B. The next level of child table C gets the composite name A.B.C, and so on.
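The composite-name rule above (the last identifier is the child's local name, and everything before it names the parent) can be illustrated with plain string handling; the table names come from the example hierarchy:

```java
public class NamePathDemo {
    public static void main(String[] args) {
        String namePath = "A.B.C.D"; // composite name of the child table
        int lastDot = namePath.lastIndexOf('.');
        String parentName = namePath.substring(0, lastDot);  // "A.B.C"
        String localName = namePath.substring(lastDot + 1);  // "D"
        System.out.println(parentName + " / " + localName);  // A.B.C / D
    }
}
```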
Without child tables, achieving ACID transactions across multiple objects is a tedious procedure. You would have to do the following:
• Find the shard key values for all the objects that you want to include in a transaction.
• Make sure that the shard keys for all the objects are equal.
• Use the writeMultiple API to add every object to a collection.
Use child tables to easily achieve ACID transactions across multiple objects.
For example, if you want to insert data into the child table myTable.child1, which you don't
own, then you must have the INSERT privilege on the child table and READ and/or INSERT
privileges on myTable. Granting privileges to child tables is independent of granting privileges
to the parent table. That means you can give specific privileges to the child table without
giving the same privilege to its parent table. Any parent/child join queries require the relevant
privileges on all tables used in the query. See Using Left Outer joins with parent-child tables
for more details.
• The list of child tables for the parent table is displayed. To create a child table, click Create Child Table.
• You can choose Simple input method or Advanced DDL input method to create the child
table.
• Specify a name for the child table. This is automatically prefixed with the name of
the parent table followed by a dot. Specify the list of columns and primary key
columns.
• The Set as shard key checkbox is not shown while creating a child table, as the
child tables inherit their shard key from their top-level parent table.
Note:
The Read Capacity, Write Capacity, and Disk Storage fields are not specified
because a child table inherits these limits from the top-level table. The limits
set for the top-level table are automatically applied to the child table.
Creating Indexes
Learn how to create indexes in Oracle NoSQL Database Cloud Service tables from the
NoSQL console.
To create indexes:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. In the Table Details page, select the Indexes tab under Resources.
You will see a list of all the indexes added to the table.
4. Click Add Index.
5. In the Create Index window, enter a name for the index that is unique within the table.
See Oracle NoSQL Database Cloud Service Limits to learn about the naming restrictions
for indexes.
6. In the Index Columns section, enter index details:
• Index Column Name: Select the column that you would like included in the index.
• + Another Index Column: Click this button to include another column in the index.
• Use the up and down arrows to change the sequence of the columns in the index being created.
• Click the delete icon next to any column to remove it from the index being created.
7. Click Create Index.
The index is created.
To view help for the current page, click the help link at the top of the page.
Note:
Entering a value is mandatory for all non-nullable columns of the table.
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. Click Insert Row.
4. In the Insert Record window, select Advanced JSON Input for Entry Mode.
5. Paste or upload the Record Definition in JSON format.
6. Click Insert Row.
The record is inserted into the table.
To view help for the current page, click the help link at the top of the page.
The following example illustrates using the array format for the file content.
[
{
"id": 0,
"val": "0"
},
{
"id": 1,
"val": "2"
}, ...
]
The following example illustrates using simple objects for the file content.
{
"id": 0,
"val": "0"
}
{
"id": 1,
"val": "2"
}
...
• If a column value is not required by the table's schema, then the corresponding
JSON property may be left out.
• If a column value is GENERATED ALWAYS, then the corresponding JSON
property must be left out.
• If a JSON object contains properties with names that do not match any column
names, those properties are ignored.
To use the upload feature, click the Upload Data button and select the file to be
uploaded. The upload begins immediately, and progress will be shown on the page.
Upon successful completion, the total number of rows inserted will be shown. You can
interrupt the upload by clicking the Stop Uploading button. The number of rows that
were successfully committed to the database will be shown.
If an error in the input file is detected, then uploading will stop and an error message
with an approximate line number will be shown. Input errors might be caused by
incorrect JSON syntax or schema nonconformance. Errors can also occur during
requests for the service. Such errors also stop the uploading and display a message.
If the upload is stopped in the middle for any reason, you can do one of the following:
• If there are no columns with generated key values (that is, if the keys are entirely
dictated by the JSON file), then you can simply start over with the same file. The
already-written rows will be written again.
• If there are generated key values, then starting over would write new records
instead of overwriting existing records. The easiest path would be to drop the table
and create it again.
• Alternatively, you could remove all records from the table by executing the
statement DELETE FROM tablename in the Explore data form.
If the provisioned write limit is exceeded during the upload process, a transient
message indicating so will be displayed, and the uploading will be slowed down to
avoid exceeding the limit again.
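Slowing down after hitting a provisioned limit, as described above, is typically done with a capped exponential backoff. The following self-contained sketch shows only the delay calculation; the method and parameter names are hypothetical and do not come from the console's implementation:

```java
public class BackoffDemo {
    // Exponential backoff: delay doubles with each attempt, capped at capMs.
    static int delayMs(int attempt, int baseMs, int capMs) {
        return Math.min(capMs, baseMs * (1 << attempt));
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 5; attempt++) {
            System.out.println(delayMs(attempt, 100, 1000)); // 100, 200, 400, 800, 1000
        }
    }
}
```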
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
The Oracle NoSQL Database SDK for Java is available in Maven Central repository, details
available here. The main location of the project is in GitHub.
You can get all the required files for running the SDK with the following POM file
dependencies.
Note:
The version changes with each release.
<dependency>
<groupId>com.oracle.nosql.sdk</groupId>
<artifactId>nosqldriver</artifactId>
<version>5.2.31</version>
</dependency>
The Oracle NoSQL Database SDK for Java provides you with all the Java classes, methods,
interfaces and examples. Documentation is available as javadoc in GitHub or from Java API
Reference Guide.
Python
You can install the Python SDK through the Python Package Index with the command given
below.
The Oracle NoSQL SDK for Python provides you with all the Python classes, methods,
interfaces and examples. Documentation is available in Python API Reference Guide.
Go
Open the Go Downloads page in a browser and click the download tab corresponding
to your operating system. Save the file to your home folder.
Install Go in your operating system.
• On Windows systems, open the MSI file you downloaded and follow the prompts to install Go.
• On Linux systems, extract the archive you downloaded into /usr/local, creating a Go tree in /usr/local/go. Add /usr/local/go/bin to the PATH environment variable.
Access the online godoc for information on using the SDK and to reference Go driver
packages, types, and methods.
Node.js
Download and install Node.js 12.0.0 or higher version from Node.js Downloads.
Ensure that Node Package Manager (npm) is installed along with Node.js. Install the
node SDK for Oracle NoSQL Database as shown below.
Access the Node.js API Reference Guide to reference Node.js classes, events, and
global objects.
C#
You can install the SDK from NuGet Package Manager either by adding it as a
reference to your project or independently.
• Add the SDK as a Project Reference: You may add the SDK NuGet Package as a
reference to your project by using .Net CLI.
cd <your-project-directory>
dotnet add package Oracle.NoSQL.SDK
Alternatively, you may perform the same using NuGet Package Manager in Visual
Studio.
• Independent Install: You may install the SDK independently into a directory of your
choice by using nuget.exe CLI.
Spring Data
The Oracle NoSQL Database SDK for Spring Data is available in the Maven Central
repository, details are available here. The main development location is the oracle-
spring-sdk project on GitHub.
You can get all the required files for running the Spring Data Framework with the following
POM file dependencies.
Note:
The version changes with each release.
<dependency>
<groupId>com.oracle.nosql.sdk</groupId>
<artifactId>spring-data-oracle-nosql</artifactId>
<version>1.4.1</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
<version>2.7.0</version>
</dependency>
The Oracle NoSQL Database SDK for Spring Data provides you with all the Spring Data
classes, methods, interfaces, and examples. Documentation is available as nosql-spring-sdk
in GitHub or from SDK for Spring Data API Reference.
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
To create a connection represented by a NoSQLHandle, obtain a handle using the
NoSQLHandleFactory.createNoSQLHandle method and the NoSQLHandleConfig class.
The NoSQLHandleConfig class allows an application to specify the handle
configuration. See the Java API Reference Guide to learn more.
Use the following code to obtain a NoSQL handle:
/* Sketch of handle creation; SignatureProvider() here assumes
 * credentials in the default OCI configuration file. */
NoSQLHandleConfig config = new NoSQLHandleConfig(Region.US_ASHBURN_1);
config.setAuthorizationProvider(new SignatureProvider());
/* Sets a default compartment for all requests from this handle. This
 * may be overridden in individual requests or by using a
 * compartment-name prefixed table name.
 */
config.setDefaultCompartment("mycompartment");
NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);
A handle has memory and network resources associated with it. Use the
NoSQLHandle.close method to free up the resources when your application is done
using the handle.
To minimize network activity and resource allocation and deallocation overheads, it's
best to avoid creating and closing handles repeatedly. For example, creating and
closing a handle around each operation would result in poor application performance.
A handle permits concurrent operations, so a single handle is sufficient to access
tables in a multi-threaded application. The creation of multiple handles incurs
additional resource overheads without providing any performance benefit.
Python
A handle is created by first creating a borneo.NoSQLHandleConfig instance to configure the communication endpoint, authorization information, and default values for handle configuration. borneo.NoSQLHandle represents a connection to the service. Once created, it must be closed using the method borneo.NoSQLHandle.close() in order to clean up resources. Handles are thread-safe and intended to be shared.
An example of acquiring a NoSQL Handle for the Oracle NoSQL Cloud Service:
Note:
To reduce resource usage and overhead of handle creation it is best to avoid
excessive creation and closing of borneo.NoSQLHandle instances.
Go
The first step in any Oracle NoSQL Database Cloud Service Go application is to create a nosqldb.Client handle used to send requests to the service. Instances of the Client handle are safe for concurrent use by multiple goroutines and are intended to be shared in a multi-goroutine application. The handle is configured using your credentials and other authentication information.
Node.js
Class NoSQLClient represents the main access point to the service. To create an instance of NoSQLClient, you need to provide appropriate configuration information. This information is represented by a plain JavaScript object and may be provided to the constructor of NoSQLClient as an object literal. Alternatively, you may choose to store this information in a JSON configuration file and use the constructor of NoSQLClient with the path (absolute or relative to the application's current directory) to that file.
The first example below creates an instance of NoSQLClient for the Cloud Service using a configuration object literal. It also adds a default compartment and overrides some default timeout values in the configuration object.
// The opening lines are reconstructed; they assume the oracle-nosqldb
// package's NoSQLClient and Region exports.
const { NoSQLClient, Region } = require('oracle-nosqldb');
const client = new NoSQLClient({
    region: Region.US_ASHBURN_1,
    timeout: 20000,
    ddlTimeout: 40000,
    compartment: 'mycompartment',
    auth: {
        iam: {
            configFile: '~/myapp/.oci/config',
            profileName: 'Jane'
        }
    }
});
The second example stores the same configuration in a JSON file config.json and
uses it to create NoSQLClient instance.
{
"region": "US_ASHBURN_1",
"timeout": 20000,
"ddlTimeout": 40000,
"compartment": "mycompartment",
"auth": {
"iam": {
"configFile": "~/myapp/.oci/config",
"profileName": "Jane"
}
}
}
Application code:
C#
Class NoSQLClient represents the main access point to the service. To create an instance of NoSQLClient, you need to provide appropriate configuration information. This information is represented by the NoSQLConfig class, an instance of which can be provided to the constructor of NoSQLClient. Alternatively, you may choose to store the configuration information in a JSON configuration file and use the constructor of NoSQLClient that takes the path (absolute or relative to the current directory) to that file.
The first example below creates an instance of NoSQLClient for the Cloud Service using NoSQLConfig. It also adds a default compartment and overrides some default timeout values in NoSQLConfig.
// Opening lines reconstructed to match the JSON example below
// (Oracle.NoSQL.SDK types assumed).
var client = new NoSQLClient(new NoSQLConfig
{
    Region = Region.US_ASHBURN_1,
    Timeout = TimeSpan.FromSeconds(20),
    TableDDLTimeout = TimeSpan.FromSeconds(40),
    Compartment = "mycompartment",
    AuthorizationProvider = new IAMAuthorizationProvider(
        "~/myapp/.oci/config", "Jane")
});
The second example stores the same configuration in a JSON file config.json and uses it to
create NoSQLClient instance.
config.json
{
"Region": "us-ashburn-1",
"Timeout": 20000,
"TableDDLTimeout": 40000,
"compartment": "mycompartment",
"AuthorizationProvider":
{
"AuthorizationType": "IAM",
"ConfigFile": "~/myapp/.oci/config",
"ProfileName": "Jane"
}
}
Application code:
Spring Data
Obtaining a NoSQL connection
In a Spring Data application, you must set up the AppConfig class that provides a
NosqlDbConfig Spring bean. The NosqlDbConfig Spring bean describes how to connect to
the Oracle NoSQL Database Cloud Service.
Create the AppConfig class that extends the AbstractNosqlConfiguration class. This
exposes the connection and security parameters to the Oracle NoSQL Database SDK for
Spring Data.
Return a NosqlDbConfig instance object with the connection details to the Oracle NoSQL
Database Cloud Service. Provide the @Configuration and @EnableNosqlRepositories annotations to this configuration class. The @Configuration annotation informs the Spring Data Framework that the AppConfig class is a configuration class that should be loaded before running the program. The @EnableNosqlRepositories annotation informs the Spring Data Framework that it needs to load the program and look up the repositories that extend
the NosqlRepository interface. The @Bean annotation is required for the repositories to be
instantiated.
Create a nosqlDbConfig @Bean annotated method to return an instance of the NosqlDbConfig class. The NosqlDbConfig instance object will be used by the Spring Data Framework to authenticate with the Oracle NoSQL Database Cloud Service.
You can use different methods to connect to the Oracle NoSQL Database Cloud Service. For
more details, see Connecting your Application to NDCS.
In the following example, you authenticate using the SignatureProvider method. You require the tenancy ID, user ID, and fingerprint information, which can be found on the user profile page of the cloud account under the User Information tab, in View Configuration File. You can also add the passphrase to your private key. For more details, see Authentication to connect to Oracle NoSQL Database.
import com.oracle.nosql.spring.data.config.AbstractNosqlConfiguration;
import com.oracle.nosql.spring.data.config.NosqlDbConfig;
import
com.oracle.nosql.spring.data.repository.config.EnableNosqlRepositories;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import oracle.nosql.driver.kv.StoreAccessTokenProvider;
import oracle.nosql.driver.Region;
import oracle.nosql.driver.iam.SignatureProvider;
import java.io.File;
@Configuration
@EnableNosqlRepositories
public class AppConfig extends AbstractNosqlConfiguration
{
/* Annotation to tell the Spring Data Framework that the returned
object should be registered as a bean in the Spring application.*/
@Bean
public NosqlDbConfig nosqlDbConfig()
{
SignatureProvider signatureProvider;
        char passphrase[] = < Pass phrase >; // Optional. A passphrase for the key, if it is encrypted.
About Compartments
Learn how to specify the compartment while creating and working with Oracle NoSQL
Database Cloud Service tables using Oracle NoSQL Database Drivers.
Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped
to that compartment. When you are authenticated as a specific user, your tables are managed
in the root compartment of your tenancy unless otherwise specified. Organizing tables into
different compartments helps with both organization and security.
If you have been authenticated using an instance principal (accessing the service from an
OCI compute instance), you must specify a compartment using its id (OCID), as there is no
default in this case. See Calling Service From an Instance in Oracle Cloud Infrastructure
Documentation.
• Java
• Python
• Go
• Node.js
• C#
Java
There are several ways to specify a compartment in your application code:
1. Use a default compartment in NoSQLHandleConfig so that it applies to all the operations
using the handle. See Obtaining a NoSQL Handle for an example.
2. Use the compartment name or id (OCID) in each request in addition to the table name.
This overrides any default compartment.
For example:
3. Use the compartment name as a prefix on the table name. This overrides any default
compartment as well as a compartment specified using API.
For example:
When using a named compartment, the name can be the simple name of a top-level
compartment or a path to a nested compartment. In the latter case, the path is a "." (dot)
separated path.
Note:
While specifying the path to a nested compartment, do not include the top-
level compartment's name in the path as that is inferred from the tenancy.
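The note above can be illustrated with a short, language-neutral Python sketch. The helper name and the compartment names are hypothetical, not part of any Oracle SDK; the sketch assumes the input is a full path that starts at the tenancy-level (top) compartment:

```python
def nested_compartment_path(full_path):
    """Return the path to a nested compartment in the form the service
    expects: the top-level compartment is dropped (it is inferred from
    the tenancy) and the remaining names stay as a "."-separated path."""
    parts = full_path.split('.')
    return '.'.join(parts[1:]) if len(parts) > 1 else full_path

# The full path rootCompartment.compartmentA.compartmentB is passed as:
print(nested_compartment_path('rootCompartment.compartmentA.compartmentB'))
# compartmentA.compartmentB
```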
Python
There are several ways to specify a compartment in your application code:
• Use borneo.NoSQLHandleConfig.set_compartment() to specify a default compartment
for all requests made through the handle. This overrides the user's default
compartment.
• In addition, it is possible to specify a compartment in each Request instance.
Note:
If a compartment path is used to reference a nested compartment, the path
is a dot-separated path that excludes the top-level compartment of the path,
for example, compartmentA.compartmentB.
...
request = PutRequest().set_table_name('mycompartment:mytable')
...
create_statement = 'create table mycompartment:mytable(...)'
...
request = GetRequest().set_table_name('compartmentA.compartmentB')
Go
There are several ways to specify a compartment in your application code:
• You can set a desired compartment name or id.
• Set it to an empty string to use the default compartment, that is, the root
compartment of the tenancy.
• If using a nested compartment, specify the full compartment path relative to the
root compartment as compartmentID. For example, if using
rootCompartment.compartmentA.compartmentB, the compartmentID should be
set to compartmentA.compartmentB.
• You can also use the compartment OCID as the string value.
compartmentID := "<optional-compartment-name-or-ID>"
iam.NewRawSignatureProvider(tenancy, user, region, fingerprint,
    compartmentID,
    privateKey, &privateKeyPassphrase)
Node.js
The default compartment for tables is the root compartment of the user's tenancy. A default
compartment for all operations can be specified by setting the Compartment property of
NoSQLConfig. For example:
C#
The default compartment for tables is the root compartment of the user's tenancy. A default
compartment for all operations can be specified by setting the Compartment property of
NoSQLConfig. For example:
The string value may be either a compartment OCID or a compartment name or path. If it is a
simple name, it must specify a top-level compartment. If it is a path to a nested compartment,
the top-level compartment must be excluded, as it is inferred from the tenancy.
In addition, all operation options classes have a Compartment property, such as
TableDDLOptions.Compartment, GetOptions.Compartment, PutOptions.Compartment, and so
on. Thus you may also specify the compartment separately for any operation. This value, if
set, overrides the compartment value in NoSQLConfig, if any.
If a compartment is not supplied, the tenancy OCID is used as the default. Note that this
applies only if you are authorizing with a user's identity. When using an instance principal or
resource principal, the compartment id must be specified.
/* Create a new table called users and set the TTL value to 4 days */
CREATE TABLE IF NOT EXISTS users(id INTEGER,
name STRING,
PRIMARY KEY(id))
USING TTL 4 days
/* Create a new index called nameIdx on the name field in the users
table */
CREATE INDEX IF NOT EXISTS nameIdx ON users(name)
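Because every driver ultimately submits these DDL statements as plain strings, they can also be assembled programmatically. The following Python helper is an illustrative sketch only (create_table_ddl is not part of any Oracle SDK); it rebuilds the CREATE TABLE statement shown above:

```python
def create_table_ddl(table, columns, primary_key, ttl_days=None):
    """Build a CREATE TABLE statement like the ones above.
    columns maps each column name to its NoSQL type."""
    cols = ', '.join(f'{name} {typ}' for name, typ in columns.items())
    stmt = (f'CREATE TABLE IF NOT EXISTS {table}({cols}, '
            f'PRIMARY KEY({primary_key}))')
    if ttl_days is not None:
        stmt += f' USING TTL {ttl_days} days'
    return stmt

print(create_table_ddl('users', {'id': 'INTEGER', 'name': 'STRING'},
                       'id', ttl_days=4))
# CREATE TABLE IF NOT EXISTS users(id INTEGER, name STRING, PRIMARY KEY(id)) USING TTL 4 days
```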
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
The following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL
Handle. To explore other options for specifying a compartment for NoSQL tables,
see About Compartments.
Create a table and index using the TableRequest and its methods. The TableRequest class
lets you pass a DDL statement to the TableRequest.setStatement method.
/* Create a simple table with an integer key and a single json data
* field and set your desired table capacity.
* Set the table TTL value to 3 days.
*/
String createTableDDL = "CREATE TABLE IF NOT EXISTS users " +
"(id INTEGER, name STRING, " +
"PRIMARY KEY(id)) USING TTL 3 days";
// Create an index called nameIdx on the name field in the users table.
treq = new TableRequest().setStatement(
    "CREATE INDEX IF NOT EXISTS nameIdx ON users(name)");
Creating a child table: You use the API class and methods to execute a DDL statement that
creates a child table. When creating a child table, you need not set table limits explicitly,
because a child table inherits the limits of its parent table.
/* Create a simple child table with a string key and an integer field. */
String childtableName = "users.userDetails";
String createchildTableDDL = "CREATE TABLE IF NOT EXISTS " + childtableName +
    "(address STRING, salary INTEGER, " +
    "PRIMARY KEY(address))";
TableRequest treq = new TableRequest().setStatement(createchildTableDDL);
System.out.println("Creating child table " + childtableName);
TableResult tres = handle.tableRequest(treq);
/* The request is async,
 * so wait for the table to become active.
 */
System.out.println("Waiting for " + childtableName + " to become active");
tres.waitForCompletion(handle, 60000, /* wait 60 sec */
    1000); /* delay ms for poll */
System.out.println("Table " + childtableName + " is active");
Python
DDL statements are executed using the borneo.TableRequest class. All calls to
borneo.NoSQLHandle.table_request() are asynchronous so it is necessary to check
the result and call borneo.TableResult.wait_for_completion() to wait for the
operation to complete.
# assume that a handle has been created, as handle, make the request
# wait for 60 seconds, polling every 1 second
result = handle.do_table_request(request, 60000, 1000)
# the above call to do_table_request is equivalent to
# result = handle.table_request(request)
result.wait_for_completion(handle, 60000, 1000)
#Create an index called nameIdx on the name field in the users table.
request = TableRequest().set_statement(
    'CREATE INDEX IF NOT EXISTS nameIdx ON users(name)')
# assume that a handle has been created, as handle, make the request
# wait for 60 seconds, polling every 1 second
result = handle.do_table_request(request, 60000, 1000)
# the above call to do_table_request is equivalent to
# result = handle.table_request(request)
result.wait_for_completion(handle, 60000, 1000)
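The equivalence shown in the comments, do_table_request versus table_request followed by wait_for_completion, is a general launch-then-poll pattern. The sketch below shows its shape in plain Python with no borneo dependency; do_request, launch, and is_done are hypothetical names:

```python
import time

def do_request(launch, is_done, timeout_ms=60000, poll_ms=1000):
    """Start an asynchronous operation, then poll until it reports
    completion or the timeout elapses -- the same contract as
    do_table_request(request, timeout_ms, poll_ms)."""
    result = launch()             # like handle.table_request(request)
    deadline = time.monotonic() + timeout_ms / 1000.0
    while not is_done(result):    # like the polling in wait_for_completion
        if time.monotonic() >= deadline:
            raise TimeoutError('table operation did not complete in time')
        time.sleep(poll_ms / 1000.0)
    return result
```

In the real API, launch corresponds to handle.table_request(request), and is_done to a check that the table state has become ACTIVE.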
Creating a child table: You use the API class and methods to execute a DDL statement that
creates a child table. When creating a child table, you need not set table limits explicitly,
because a child table inherits the limits of its parent table.
ltr = ListTablesRequest()
tables = handle.list_tables(ltr).get_tables()
if len(tables) == 0:
    print('No tables available')
else:
    print('The tables available are: ' + str(tables))

request = GetTableRequest().set_table_name(table_name)
result = handle.get_table(request)
print('The schema details for the table are: ' + str(result.get_schema()))
Go
The following example creates a simple table with an integer key and a single STRING
field. The create table request is asynchronous. You wait for the table creation to
complete.
// Create an index called nameIdx on the name field in the users table
stmt_ind := fmt.Sprintf("CREATE INDEX IF NOT EXISTS nameIdx ON users(name)")
tableReq := &nosqldb.TableRequest{Statement: stmt_ind}
tableRes, err := client.DoTableRequest(tableReq)
if err != nil {
fmt.Printf("cannot initiate CREATE INDEX request: %v\n", err)
return
}
_, err = tableRes.WaitForCompletion(client, 60*time.Second, time.Second)
if err != nil {
fmt.Printf("Error finishing CREATE INDEX request: %v\n", err)
return
}
fmt.Println("Created index nameIdx ")
Creating a child table: You use the API class and methods to execute a DDL statement that
creates a child table. When creating a child table, you need not set table limits explicitly,
because a child table inherits the limits of its parent table.
// Creates a simple child table with a string key and a single integer field.
childtableName := "users.userDetails"
stmt1 := fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s ("+
"address STRING, "+
"salary INTEGER, "+
"PRIMARY KEY(address))",
childtableName)
tableReq1 := &nosqldb.TableRequest{Statement: stmt1}
tableRes1, err := client.DoTableRequest(tableReq1)
if err != nil {
fmt.Printf("cannot initiate CREATE TABLE request: %v\n", err)
return
}
// The create table request is asynchronous, wait for table creation to complete.
_, err = tableRes1.WaitForCompletion(client, 60*time.Second, time.Second)
if err != nil {
fmt.Printf("Error finishing CREATE TABLE request: %v\n", err)
return
}
fmt.Println("Created table ", childtableName)
req := &nosqldb.GetTableRequest{
    TableName: table_name, Timeout: 3 * time.Second,
}
res, err := client.GetTable(req)
fmt.Printf("The schema details for the table: state=%s, limits=%v\n",
    res.State, res.Limits)
Node.js
Table DDL statements are executed by the tableDDL method. Like most other methods of
the NoSQLClient class, this method is asynchronous and returns a Promise of
TableResult. TableResult is a plain JavaScript object that contains the status of the DDL
operation, such as its TableState, name, schema, and TableLimits.
The tableDDL method takes an opt object as an optional second argument. When you are
creating a table, you must specify its TableLimits as part of the opt argument.
TableLimits specifies the maximum throughput and storage capacity for the table as the
number of read units and write units, and gigabytes of storage.
Note that the tableDDL method only launches the specified DDL operation in the
underlying store and does not wait for its completion. The resulting TableResult will
most likely have one of the intermediate table states, such as TableState.CREATING,
TableState.DROPPING, or TableState.UPDATING. The UPDATING state occurs when the
table is being altered by an ALTER TABLE statement, its table limits are being
changed, or one of its indexes is being created or dropped.
When the underlying operation completes, the table state changes to
TableState.ACTIVE or TableState.DROPPED (the latter if the DDL operation was
DROP TABLE).
After the above call returns, result will reflect the final state of the operation. Alternatively, to
use the complete option, substitute the code in the try-catch block above with the following:
You need not specify TableLimits for any DDL operation other than CREATE TABLE. You
may also change the limits of a table after it has been created by calling the setTableLimits
method. This may also require waiting for the completion of the operation, in the same way
as waiting for completion of operations initiated by tableDDL.
// Create an index called nameIdx on the name field in the users table.
try {
    const statement = 'CREATE INDEX IF NOT EXISTS nameIdx ON users(name)';
    let result = await client.tableDDL(statement);
    result = await client.forCompletion(result);
    console.log('Index nameIdx created');
} catch(error) {
    // handle errors
}
Creating a child table: You use the API class and methods to execute a DDL statement that
creates a child table. When creating a child table, you need not set table limits explicitly,
because a child table inherits the limits of its parent table.
/**
 * This function creates the child table userDetails with two columns:
 * a string column, address, which is the primary key, and an integer
 * column, salary.
 */
C#
To create tables and execute other Data Definition Language (DDL) statements, such
as creating, modifying, and dropping tables, as well as creating and dropping indexes,
use the ExecuteTableDDLAsync and ExecuteTableDDLWithCompletionAsync methods.
Both methods return Task<TableResult>. The TableResult instance contains the status
of the DDL operation, such as its TableState and table schema. Each of these methods
comes with several overloads. In particular, you may pass options for the DDL
operation as TableDDLOptions.
When creating a table, you must specify its TableLimits. Table limits specify maximum
throughput and storage capacity for the table as the amount of read units, write units
and Gigabytes of storage. You may use an overload that takes tableLimits parameter
or pass table limits as TableLimits property of TableDDLOptions.
Note that these are potentially long running operations. The method
ExecuteTableDDLAsync only launches the specified DDL operation by the service and
does not wait for its completion. You may asynchronously wait for table DDL operation
completion by calling WaitForCompletionAsync on the returned TableResult instance.
// 1) Provisioned Capacity
// new TableLimits(50, 50, 25);
// 2) On-demand Capacity - only set storage limit
// new TableLimits( 25 );
// In this example, we will use Provisioned Capacity
var result = await client.ExecuteTableDDLAsync(statement, new
TableLimits(50, 50, 25));
await result.WaitForCompletionAsync();
Console.WriteLine("Table users created.");
} catch(Exception ex) {
// handle exceptions
}
Note that WaitForCompletionAsync will change the calling TableResult instance to reflect the
operation completion.
Alternatively you may use ExecuteTableDDLWithCompletionAsync. Substitute the statements
in the try-catch block with the following:
You need not specify TableLimits for any DDL operation other than CREATE TABLE. You
may also change table limits of an existing table by calling SetTableLimitsAsync or
SetTableLimitsWithCompletionAsync methods.
Creating a child table: You use the API class and methods to execute a DDL statement that
creates a child table. When creating a child table, you need not set table limits explicitly,
because a child table inherits the limits of its parent table.
Spring Data
In Spring Data applications, tables are automatically created at application startup
when the entities are initialized, unless @NosqlTable.autoCreateTable is set to
false.
Create a Users entity class to persist. This entity class represents a table in the Oracle
NoSQL Database and an instance of this entity corresponds to a row in that table.
You can set the default TableLimits in the @NosqlDbConfig instance using
NosqlDbConfig.getDefaultCapacityMode(),
NosqlDbConfig.getDefaultStorageGB(), NosqlDbConfig.getDefaultReadUnits(),
and NosqlDbConfig.getDefaultWriteUnits() methods. TableLimits can also be
specified per table if @NosqlTable annotation is used, through capacityMode,
readUnits, writeUnits, and storageGB fields.
Provide the @NosqlId annotation to indicate the ID field. The generated=true attribute
specifies that the ID will be auto-generated. You can set the table level TTL by
providing the ttl() and ttlUnit() parameters in the @NosqlTable annotation of the
entity class. For details on all the Spring Data classes, methods, interfaces, and
examples see SDK for Spring Data API Reference.
If the ID field type is a String, a UUID will be used. If the ID field type is int or long, a
"GENERATED ALWAYS as IDENTITY (NO CYCLE)" sequence is used.
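The id-generation rule can be mimicked in a few lines of Python. This is a sketch only: the real integer sequence is the IDENTITY column maintained by the database, and generate_id is a hypothetical name:

```python
import itertools
import uuid

# Stand-in for the "GENERATED ALWAYS as IDENTITY (NO CYCLE)" sequence.
_sequence = itertools.count(1)

def generate_id(id_type):
    """String ids get a random UUID; integer ids come from a sequence."""
    if id_type is str:
        return str(uuid.uuid4())
    if id_type is int:
        return next(_sequence)
    raise TypeError('id field must be a String, int, or long')
```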
import com.oracle.nosql.spring.data.core.mapping.NosqlId;
import com.oracle.nosql.spring.data.core.mapping.NosqlTable;
@Override
public String toString()
{
return "Users{" +
"id=" + id + ", " +
"firstName=" + firstName + ", " +
"lastName=" + lastName +
'}';
}
}
Create the following UsersRepository interface. This interface extends the NosqlRepository
interface and provides the entity class and the data type of the primary key in that class as
parameterized types to the NosqlRepository interface. This NosqlRepository interface
provides methods that are used to store or retrieve data from the database.
import com.oracle.nosql.spring.data.repository.NosqlRepository;
/* The Users is the entity class and Long is the data type of the primary
key in the Users class.
This interface provides methods that return iterable instances of the
Users class. */
You can use Spring's CommandLineRunner interface in your application code,
implementing the run method in the class that contains the main method.
Note:
You can code the functionality as per your requirements by implementing any of the
various interfaces that the Spring Data Framework provides. For more information
on setting up a Spring boot application, see Spring Boot.
import com.oracle.nosql.spring.data.core.NosqlTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
@Override
public void run(String... args) throws Exception {
}
}
When a table is created through the Spring Data application, a default schema is
created automatically. It includes two columns: the primary key column (of type
String, int, long, or timestamp) and a JSON column called kv_json_.
Note:
If a table exists already, it must comply with the generated schema.
In this example, you create an index on the lastName field in the Users table.
import com.oracle.nosql.spring.data.core.NosqlTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
try {
AppConfig config = new AppConfig();
NosqlTemplate idx = NosqlTemplate.create(config.nosqlDbConfig());
idx.runTableRequest("CREATE INDEX IF NOT EXISTS nameIdx ON
Users(kv_json_.lastName AS STRING)");
System.out.println("Index created successfully");
} catch (Exception e) {
    System.out.println("Exception creating index: " + e);
}
For more details on table creation, see Example: Accessing Oracle NoSQL Database Using
Spring Data Framework in the Spring Data SDK Developers Guide.
Related Topics
• About Time to Live
Adding Data
Add rows to your table. When you store data in table rows, your application can easily
retrieve, add to, or delete information from a table.
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
The PutRequest class represents the input to a
NoSQLHandle.put(oracle.nosql.driver.ops.PutRequest) operation. This request can be
used to perform unconditional and conditional puts to:
• Overwrite any existing row. Overwrite is the default functionality.
• Succeed only if the row does not exist. Use the PutRequest.Option.IfAbsent method in
this case.
• Succeed only if the row exists. Use the PutRequest.Option.IfPresent method in this
case.
• Succeed only if the row exists and the version matches a specific version. Use the
PutRequest.Option.IfVersion method for this case and the
setMatchVersion(oracle.nosql.driver.Version) method to specify the version to
match.
Note:
First, connect your client driver to Oracle NoSQL Database Cloud Service to
get a handle and then complete other steps. This topic omits the steps for
connecting your client driver and creating a table.
If you do not yet have a table, see Creating Tables and Indexes .
/* use the MapValue class and input the contents of a new row */
MapValue value = new MapValue().put("id", 1).put("name", "myname");
/* create the PutRequest, setting the required value and table name */
PutRequest putRequest = new PutRequest().setValue(value)
.setTableName("users");
You can perform a sequence of PutRequest operations on a table that share the same
shard key using the WriteMultipleRequest class. If the operation is successful, the
WriteMultipleResult.getSuccess() method returns true.
See the Java API Reference Guide for more information about the APIs.
You can also add JSON data to your table. You can either convert JSON data into a
record for a fixed-schema table, or you can insert JSON data into a column whose data
type is JSON.
The PutRequest class also provides the setValueFromJson method which takes a
JSON string and uses that to populate a row to insert into the table. The JSON string
should specify field names that correspond to the table field names.
To add JSON data to your table:
* "ipaddr": "10.0.00.xxx",
* "audience_segment": {
* "sports_lover": "2018-11-30",
* "book_reader": "2018-12-01"
* }
* }
* }
*/
MapValue segments = new MapValue()
.put("sports_lover", new TimestampValue("2018-11-30"))
.put("book_reader", new TimestampValue("2018-12-01"));
MapValue value = new MapValue()
.put("cookie_id", 123) // fill in cookie_id field
.put("ipaddr", "10.0.00.xxx")
.put("audience_segment", segments);
PutRequest putRequest = new PutRequest()
.setValue(value)
.setTableName(tableName);
PutResult putRes = handle.put(putRequest);
The same row can be inserted into the table as a JSON string:
Python
The borneo.PutRequest class represents input to the borneo.NoSQLHandle.put() method
used to insert single rows. This method can be used for unconditional and conditional puts to:
• Overwrite any existing row. This is the default.
• Succeed only if the row does not exist. Use borneo.PutOption.IF_ABSENT for this case.
• Succeed only if the row exists. Use borneo.PutOption.IF_PRESENT for this case.
• Succeed only if the row exists and its borneo.Version matches a specific borneo.Version.
Use borneo.PutOption.IF_VERSION for this case and
borneo.PutRequest.set_match_version() to specify the version to match.
When adding data, the values supplied must accurately correspond to the schema for
the table. If they do not, an IllegalArgumentException is raised. Columns with default or
nullable values can be left out without error, but it is recommended that values be
provided for all columns to avoid unexpected defaults. By default, unexpected columns
are ignored silently, and the value is put using the expected columns.
If you have multiple rows that share the same shard key they can be put in a single
request using borneo.WriteMultipleRequest which can be created using a number of
PutRequest or DeleteRequest objects. You can also add JSON data to your table. In
the case of a fixed-schema table the JSON is converted to the target schema. JSON
data can be directly inserted into a column of type JSON. The use of the JSON data
type allows you to create table data without a fixed schema, allowing more flexible use
of the data.
The data value provided for a row or key is a Python dict. It can be supplied to the
relevant requests (GetRequest, PutRequest, DeleteRequest) in multiple ways:
• as a Python dict directly:
request.set_value({'id': 1})
request.set_key({'id': 1 })
• as a JSON string:
In both cases, the keys and values provided must accurately correspond to the schema
of the table. If not, a borneo.IllegalArgumentException is raised. If the data is
provided as JSON and the JSON cannot be parsed, a ValueError is raised.
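The JSON-string case behaves like ordinary JSON parsing, which is why a malformed string surfaces as a ValueError. The following is a minimal sketch using only the standard json module (the helper name row_from_json is hypothetical, not part of borneo):

```python
import json

def row_from_json(json_str):
    """Parse a JSON string into the dict form that set_value() and
    set_key() accept. json.loads raises ValueError (json.JSONDecodeError)
    on bad input, matching the behavior described above."""
    value = json.loads(json_str)
    if not isinstance(value, dict):
        raise ValueError('a row value must be a JSON object')
    return value

print(row_from_json('{"id": 1, "name": "myname"}'))
# {'id': 1, 'name': 'myname'}
```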
Go
The nosqldb.PutRequest represents an input to the nosqldb.Put() function used to
insert single rows. This function can be used for unconditional and conditional puts to:
• Overwrite any existing row. This is the default.
• Succeed only if the row does not exist. Specify types.PutIfAbsent for the
PutRequest.PutOption field for this case.
• Succeed only if the row exists. Specify types.PutIfPresent for the
PutRequest.PutOption field for this case.
• Succeed only if the row exists and its version matches a specific version. Specify
types.PutIfVersion for the PutRequest.PutOption field and a desired version for
the PutRequest.MatchVersion field for this case.
The data value provided for a row (in PutRequest) or key (in GetRequest and
DeleteRequest) is a *types.MapValue. The key portion of each entry in the MapValue
must match a column name of the target table, and the value portion must be a valid
value for that column. There are several ways to create a MapValue for the row to put
into a table:
1. Create an empty MapValue and put values for each column.
value:=&types.MapValue{}
value.Put("id", 1).Put("name", "Jack")
req:=&nosqldb.PutRequest{
TableName: "users",
Value: value,
}
res, err:=client.Put(req)
2. Create a MapValue from an existing Go map using types.NewMapValue. For example:
m:=map[string]interface{}{
"id": 1,
"name": "Jack",
}
value:=types.NewMapValue(m)
req:=&nosqldb.PutRequest{
TableName: "users",
Value: value,
}
res, err:=client.Put(req)
3. Create a MapValue from JSON. This is convenient for setting values for a row in the case
of a fixed-schema table where the JSON is converted to the target schema. For example:
JSON data can also be directly inserted into a column of type JSON. The use of the JSON
data type allows you to create table data without a fixed schema, allowing more flexible use
of the data.
Node.js
The put method is used to insert a single row into the table. It takes the table name, the
row value as a plain JavaScript object, and an opt object as an optional third argument.
This method can be used for unconditional and conditional puts to:
• Overwrite existing row with the same primary key if present. This is the default.
• Succeed only if the row with the same primary key does not exist. Specify ifAbsent in
the opt argument for this case: { ifAbsent: true }. Alternatively, you may use
putIfAbsent method.
• Succeed only if the row with the same primary key exists. Specify ifPresent in the opt
argument for this case: { ifPresent: true }. Alternatively, you may use putIfPresent
method.
• Succeed only if the row with the same primary key exists and its Version matches a
specific Version value. Set matchVersion in the opt argument for this case to the specific
version: { matchVersion: my_version }. Alternatively, you may use putIfVersion
method and specify the version value as the 3rd argument (after table name and row).
Each put method returns a Promise of PutResult, which is a plain JavaScript object
containing information such as the success status and the resulting row Version. Note
that the property names in the provided row object should be the same as the
underlying table column names.
To add rows to your table:
        // Will fail since the row with the same primary key exists
        result = await client.putIfAbsent(tableName, { id: 1, name: 'Jane' });
        // Expected output: putIfAbsent failed
        console.log('putIfAbsent ' + (result.success ? 'succeeded' : 'failed'));
        // Will succeed because the row with the same primary key exists
        result = await client.putIfPresent(tableName, { id: 1, name: 'Jane' });
        // Expected output: putIfPresent succeeded
        console.log('putIfPresent ' + (result.success ? 'succeeded' : 'failed'));
        // Will fail because the previous put has changed the row version, so
        // the old version no longer matches.
        result = await client.putIfVersion(tableName, { id: 1, name: 'June' },
            version);
        // Expected output: putIfVersion failed
        console.log('putIfVersion ' + (result.success ? 'succeeded' : 'failed'));
    } catch(error) {
        // handle errors
    }
}
Note that success is false only if a conditional put operation fails because its condition is
not satisfied (e.g. the row exists for putIfAbsent, the row doesn't exist for putIfPresent, or
the version doesn't match for putIfVersion). If the put operation fails for any other reason,
the resulting Promise rejects with an error (which you can catch in an async function). For
example, this may happen if a column value supplied is of the wrong type, in which case the
put results in NoSQLArgumentError.
You can perform a sequence of put operations on a table that share the same shard key
using the putMany method. This sequence is executed within the scope of a single
transaction, making the operation atomic. The result of this operation is a Promise of
WriteMultipleResult. You can also use writeMany if the sequence includes both puts and
deletes.
Using columns of type JSON allows more flexibility because the data in a JSON column
does not have a predefined schema. To put data into a JSON column, provide either a
plain JavaScript object or a JSON string as the column value. Note that the data in a plain
JavaScript object must be of supported JSON types.
C#
Method PutAsync and related methods PutIfAbsentAsync, PutIfPresentAsync and
PutIfVersionAsync are used to insert a single row into the table or update a single row.
try {
    // Unconditional put, should succeed.
    var result = await client.PutAsync(tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "John"
        });

    // This Put will fail because the row with the same primary
    // key already exists.
    result = await client.PutIfAbsentAsync(tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "Jane"
        });

    // This Put will succeed because the row with the same primary
    // key exists.
    result = await client.PutIfPresentAsync(tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "Jane"
        });

    // This Put will fail because the previous Put has changed
    // the row version, so the old version no longer matches.
    result = await client.PutIfVersionAsync(
        tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "June"
        },
        rowVersion);
} catch(Exception ex) {
    // handle exceptions
}
Note that the Success property of the result only indicates successful completion as related
to conditional Put operations and is always true for unconditional Puts. If the Put operation
fails for any other reason, an exception is thrown.
You can perform a sequence of put operations on a table that share the same shard key
using the PutManyAsync method. This sequence is executed within the scope of a single
transaction, making the operation atomic. You can also call WriteManyAsync to perform a
sequence that includes both Put and Delete operations.
Using fields of data type JSON allows more flexibility because the data in a JSON field does
not have a predefined schema. To put a value into a JSON field, supply a MapValue instance
as its field value as part of the row value. You may also create its value from a JSON string
via FieldValue.FromJsonString.
Spring Data
Use one of these methods to add rows to the table: NosqlRepository
save(entity_object), saveAll(Iterable<T> iterable), or NosqlTemplate
insert(entity). For details, see SDK for Spring Data API Reference.
In this section, you use the repository.save(entity_object) method to add the rows.
Note:
First, create the AppConfig class that extends AbstractNosqlConfiguration class
to provide the connection details of the Oracle NoSQL Database. For more details,
see Obtaining a NoSQL connection.
To add rows to your table, you can include the following code in your application.
@Override
public void run(String...args) throws Exception {
u1.lastName = "Doe";
/* Create a second User instance and load values into it. Save the
instance.*/
Users u2 = new Users();
u2.firstName = "Angela";
u2.lastName = "Willard";
repo.save(u2);
}
This creates and saves two user entities. For each entity, the Spring Data SDK creates
two columns:
1. Primary key column
2. JSON data type column
Here, the primary key is auto-generated. The @NosqlId annotation in the Users class
specifies that the id field will act as the ID and be the primary key of the underlying
storage table.
The generated=true attribute specifies that this ID will be auto-generated by a
sequence. The rest of the entity fields, that is, the firstName and lastName fields, are
stored in the JSON column.
Using Plugins
• Using IntelliJ Plugin for Development
• Using Eclipse Plugin for Development
• About Oracle NoSQL Database Visual Studio Code Extension
1-192
Chapter 1
Develop
• Drop Columns.
• Create Indexes.
• Drop Indexes.
• Execute SELECT SQL queries on a table and view query results in tabular format.
• Execute DML statements to update, insert, and delete data from a table.
This article has the following topics:
Tip:
Don't extract the downloaded plugin zip file. Select the plugin in the zip format
while installing it from disk.
After you successfully set up your IntelliJ plugin, create a NoSQL project, and connect it to
your Oracle NoSQL Database Cloud Service instance or simulator.
2. Click the icon in the Schema Explorer window to open the Settings dialog for
the plugin.
3. Expand Tools > Oracle NoSQL in the Settings Explorer, and click Connections.
4. Select Cloud from the drop-down menu for the connection type.
5. Enter values for the following connection parameters, and click OK.
6. The IntelliJ plugin connects your project to the Oracle NoSQL Database Cloud Service
and displays its schema in the Schema Explorer window.
7. If required, you can change your service endpoint or compartment from the Schema
Explorer window itself. To do this, click the icon in the Schema Explorer window.
A dialog window appears where you can provide the new values for Endpoint and
Compartment. Enter the values that you want to modify, and click OK.
You can provide values for:
• Both Endpoint and Compartment, or
• Endpoint alone. In this case, the Compartment defaults to the Root compartment in
that region.
After you successfully connect your project to your Oracle NoSQL Database Cloud Service,
you can manage the tables and data in your schema.
2. Click the icon in the Schema Explorer window to open the Settings dialog for the
plugin.
3. Expand Tools > Oracle NoSQL in the Settings Explorer, and click Connections.
4. Select Cloudsim from the drop-down menu for the connection type.
5. Enter values for the following connection parameters, and click OK.
6. The IntelliJ plugin connects your project to the Oracle NoSQL Database Cloud
Simulator and displays its schema in the Schema Explorer window.
Note:
Before connecting your project to the Oracle NoSQL Database Cloud
Simulator, the simulator must be started and running. Otherwise, your
connection request will fail in IntelliJ.
After you successfully connect your project to your Oracle NoSQL Database Cloud
Simulator, you can manage the tables and data in your schema.
4. To execute this program, click Run > Run 'BasicExampleTable' or press Shift + F10.
5. Verify the logs in the terminal to confirm that the code executed successfully. You can see
display messages that indicate table creation, row insertion, and so on.
Tip:
As the BasicExampleTable program deletes the inserted rows and drops the
audienceData table, you can't view this table in the Schema Explorer. If you
want to see the table in the Schema Explorer, comment out the code that
deletes the inserted rows and drops the table, and rerun the program.
a. Locate the Schema Explorer, and click the icon to reload the schema.
b. Locate the audienceData table under your tenant identifier, and expand it to view its
columns, primary key, and shard key details.
c. Double-click the table name to view its data. Alternatively, you can right-click the
table and select Browse Table.
d. A record viewer window appears in the main editor. Click Execute to run the query
and display table data.
e. To view individual cell data separately, double-click the cell.
CREATE TABLE
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right click the connection name and choose Create Table.
• In the prompt, enter the details for your new table. You can create the Oracle
NoSQL Database table in two modes:
– Simple DDL Input: You can use this mode to create the table declaratively,
that is, without writing a DDL statement.
– Advanced DDL Input: You can use this mode to create the table using a
DDL statement.
• You have the option to view the DDL statement before creating the table. Click Show
DDL to view the DDL statement formed based on the values entered in the fields in the
Simple DDL Input mode. This DDL statement is executed when you click Create.
• Click Create to create the table.
DROP TABLE
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table that you want to drop. Choose Drop Table.
• A confirmation window appears. Click Ok to confirm the drop action.
CREATE INDEX
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table where the index needs to be created. Choose Create Index.
• In the Create Index panel, enter the details for creating an index without writing
any DDL statement. Specify the name of the index and the columns to be part of
the index.
• Click Add Index.
DROP INDEX
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Click the target table to see the listed columns, primary keys, indexes, and
shard keys.
• Locate the target index to be dropped and right-click it. Click Drop Index.
• A confirmation window appears. Click Ok to confirm the drop action.
ADD COLUMN
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table where the column needs to be added. Choose Add Column.
• You can add new columns in two modes:
– Simple DDL Input: You can use this mode to add new columns without writing
a DDL statement.
– Advanced DDL Input: You can use this mode to add new columns into the table by
supplying a valid DDL statement.
• In both modes, specify the name of the column and define the column with its
properties: data type, default value, and whether it is nullable.
• Click Add Column.
DROP COLUMN
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Click the target table to see the listed columns, primary keys, indexes, and shard
keys.
• Locate the target column to be dropped and right-click it. Click Drop Column.
• A confirmation window appears. Click Ok to confirm the drop action.
Insert data
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table where a row needs to be inserted. Choose Insert Row.
• In the Insert Row panel, enter the details for inserting a new row. You can insert a new
row in two modes:
– Simple Input: You can use this mode to insert the new row without writing a DML
statement. A form-based entry for the row fields is loaded, where you can enter the
value of every field in the row.
– Advanced JSON Input: You can use this mode to insert a new row into the table by
supplying a JSON object containing the column names and their corresponding values
as key-value pairs.
• Click Insert Row.
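In the Advanced JSON Input mode described above, the JSON object simply pairs each column name with its value. A hypothetical row for a table with columns id, name, and age (the column names here are illustrative, not from a specific table in this section) could look like:

```json
{
  "id": 1,
  "name": "Angela",
  "age": 45
}
```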
Note:
In any row, PRIMARY KEY and GENERATED ALWAYS AS
IDENTITY columns cannot be updated.
Query tables
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table and choose Browse Table.
• In the textbox on the left, enter the SELECT statement to fetch data from your table.
• Click Execute to run the query. The corresponding data is retrieved from the table.
• Right-click any row and click View JSON to view the entire row object in JSON format.
• Click Show Query Plan to view the execution plan of the query.
• Retrieve columns, indexes, primary keys, and shard keys for each table.
• Build and test your SQL queries on a table and obtain results in a tabular format.
To use the Eclipse plugin:
1. Download the Eclipse plugin from Oracle Technology Network.
2. Follow the instructions given in the README file and install the plugin.
3. After installing the Eclipse plugin, you can connect to your Oracle NoSQL Database
Cloud Service or Oracle NoSQL Database Cloud Simulator and execute the code to
read/write the tables. For more details, you can access the help content embedded with
Eclipse. To access the help content:
a. Click Help Contents from the Help menu.
b. Locate and expand the Oracle NoSQL Plugin Help Contents section.
c. This lists all the help topics available for Oracle NoSQL Plugin.
d. Refer to the help topic as per your requirement.
Before you can install the Oracle NoSQL Database Visual Studio (VS) Code
extension, you must install Visual Studio Code, which you can download from the
Visual Studio Code website.
4. Browse to the location where the *.vsix file is stored and click Install.
Connecting to Oracle NoSQL Database Cloud Service from Visual Studio Code
Oracle NoSQL Database Visual Studio (VS) Code extension provides two methods to
connect to Oracle NoSQL Database Cloud Service or Oracle NoSQL Database Cloud
Simulator.
You can either provide a config file with the connection information or fill in the connection
information in the specific fields. If you are using a Node.js driver and already have
connection details saved in a file, use the Connect via Config File option to connect to the
Oracle NoSQL Database Cloud Service. Otherwise, if you are creating a new connection, use
the Fill in Individual Fields option.
2. Open the Oracle NoSQL DB Show Connection Settings page from the
Command Palette or the Oracle NoSQL DB view in the Activity Bar.
• Open from Command Palette
a. Open the Command Palette by pressing:
– (Windows and Linux) Control + Shift + P
– (macOS) Command + Shift + P
b. From the Command Palette, select OracleNoSQL: Show Connections
Settings.
Tip:
Enter oraclenosql in the Command Palette to display all of
the Oracle NoSQL DB commands you can use.
3. In the Show Connection Settings page, click Cloud or CloudSim to connect to Oracle
NoSQL Database Cloud Service or Oracle NoSQL Database Cloud Simulator.
5. Click Connect.
6. Click Reset to clear the saved connection details from the workspace.
Oracle NoSQL Database Cloud Service

{
    "region": "<region-id-of-nosql-cloud-service-endpoint>",
    "compartment": "<oci-compartment-name-or-id>",
    "auth":
    {
        "iam":
        {
            "tenantId": "<tenancy-ocid>",
            "userId": "<user-ocid>",
            "fingerprint": "<fingerprint-for-the-signing-key>",
            "privateKeyFile": "<path-to-the-private-key>",
            "passphrase": "<passphrase-of-the-signing-key>"
        }
    }
}
Tip:
Enter oraclenosql in the Command Palette to display all of the Oracle
NoSQL DB commands you can use.
4. Browse to the location where the *.config file is stored and click Select.
• You can refresh the schema or table at any time to re-query your deployment and
display the most up-to-date data from Oracle NoSQL Database Cloud Service.
– In the TABLE EXPLORER, locate the connection and click the Refresh icon to
reload the schema. Alternatively, you can right-click the connection and select
Refresh Schema.
– In the TABLE EXPLORER, locate the table name and click the Refresh icon to
reload the table. Alternatively, you can right-click the table name and select
Refresh Table.
• DROP INDEX
• ADD COLUMN
• DROP COLUMN
CREATE TABLE
You can create the Oracle NoSQL Database table in two modes:
• Simple DDL Input: You can use this mode to create the Oracle NoSQL Database table
declaratively, that is, without writing a DDL statement.
• Advanced DDL Input: You can use this mode to create the Oracle NoSQL Database
table using a DDL statement.
1. Hover over the Oracle NoSQL Database connection to add the new table.
2. Click the Plus icon that appears.
3. In the Create Table page, select Simple DDL Input.
Field Description
Table Name Specify a unique table name.
Column Name Specify a column name for the primary key in your table.
Column Type Select the data type for your primary key column.
Set as Shard Key Select this option to set this primary key column as the shard key. The
shard key distributes data across the Oracle NoSQL Database cluster for increased
efficiency, and positions records that share the shard key locally for easy reference and
access. Records that share the shard key are stored in the same physical location and can
be accessed atomically and efficiently.
Remove Click this button to delete an existing column.
+ Add Primary Key Column Click this button to add more columns while creating a
composite (multi-column) primary key.
Column Name Specify the column name.
Column Type Select the data type for your column.
Default Value (optional) Specify a default value for the column.
Note:
Default values cannot be specified for binary and JSON data type columns.
Note:
Updating Table Time to Live (TTL)
does not change the TTL value of
any existing data in the table. The
new TTL value applies only to those
rows that are added to the table after
this value is modified and to the rows
for which no overriding row-specific
value has been supplied.
4. Click Show DDL to view the DDL statement formed based on the values entered in the
fields in the Simple DDL input mode. This DDL statement gets executed when you click
Create.
5. Click Create.
DROP TABLE
1. Right-click the target table.
2. Click Drop Table.
3. Click Yes to drop the table.
CREATE INDEX
• Locate the Table Explorer, and click Refresh Schema to reload the schema.
• Right-click the table where the index needs to be created. Choose Create Index.
• Specify the name of the index and the columns to be part of the index.
• Click Add Index.
DROP INDEX
• Locate the Table Explorer, and click Refresh Schema to reload the schema.
• Click the table where the index needs to be removed. The list of indexes is displayed
below the column names.
• Right-click the index to be dropped. Click Drop Index.
• A confirmation window appears. Click Ok to confirm the drop action.
ADD COLUMN
• Locate the Table Explorer, and click Refresh Schema to reload the schema.
• Right-click the table where the column needs to be added. Click Add columns.
• Specify the name of the column and define the column with its properties: data type,
default value, and whether it is nullable.
• Click Add New Columns.
DROP COLUMN
• Locate the Table Explorer, and click Refresh Schema to reload the schema.
• Expand the table where the column needs to be removed.
• Right-click the column to be removed and choose Drop Column.
• A confirmation window appears. Click Ok to confirm the drop action.
Insert Data
• Locate the Table Explorer, and click the Refresh Schema to reload the schema.
• Right-click the table where a row needs to be inserted. Choose Insert Row.
• In the Insert Row panel, enter the details for inserting a new row. You can insert
a new row in two modes:
– Simple Input: You can use this mode to insert the new row without writing a
DML statement. A form-based entry for the row fields is loaded, where you can
enter the value of every field in the row.
– Advanced JSON Input: You can use this mode to insert a new row into the
table by supplying a JSON object containing the column names and their
corresponding values as key-value pairs.
• Click Insert Row.
Note:
In any row, PRIMARY KEY and GENERATED ALWAYS AS
IDENTITY columns cannot be updated.
Removing a Connection
Oracle NoSQL Database Connector provides two methods to remove a connection from
Visual Studio (VS) Code.
You can:
• Remove a connection with the Command Palette, or
• Remove a connection from the Oracle NoSQL DB view in the Activity Bar.
Note:
Removing a connection from Visual Studio Code deletes the persisted connection
details from the current workspace.
Tip:
Enter oraclenosql in the Command Palette to display all of the Oracle
NoSQL DB commands you can use.
Table Fields
Learn how to design and configure data using table fields.
An application may choose to use schemaless tables, where a row consists of key
fields and a single JSON data field. A schemaless table offers flexibility in what can be
stored in a row.
Alternatively, the application can choose to use fixed schema tables, where all of the
table fields are defined as specific types.
Fixed schema tables with typed data are safer to use from an enforcement and
storage efficiency standpoint. Even though the schema of fixed schema tables can be
modified, their table structure cannot easily be changed. A schemaless table is flexible
and the table structure can be easily modified.
Finally, an application can also use a hybrid data model approach where a table can
have typed data and JSON data fields.
The following examples demonstrate how to design and configure data for all three
approaches.
In this case, the audience_info table can hold a JSON object such as:
{
    "cookie_id": "",
    "audience_data": {
        "ipaddr" : "10.0.00.xxx",
        "audience_segment" : {
            "sports_lover" : "2018-11-30",
            "book_reader" : "2018-12-01"
        }
    }
}
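A table definition matching this case could be sketched as follows; the column types are assumptions based on the JSON object above, not DDL shown in this section:

```sql
CREATE TABLE audience_info (
    cookie_id LONG,
    audience_data JSON,
    PRIMARY KEY(cookie_id))
```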
Your application will have a key field and a data field for this table. You have flexibility in what
you choose to store as information in your audience_data field. Therefore, you can easily
change the types of information available.
In this example, your table has a key field and two data fields. Your data is more compact,
and you are able to ensure that all data fields are accurate.
CREATE TABLE audience_info (
    cookie_id LONG,
    ipaddr STRING,
    audience_segment JSON,
    PRIMARY KEY(cookie_id))
Primary Keys
You must designate one or more primary key columns when you create your table. A
primary key uniquely identifies every row in the table. For simple CRUD operations,
Oracle NoSQL Database Cloud Service uses the primary key to retrieve a specific row
to read or modify. For example, consider a table with the following fields:
• productName
• productType
• productLine
From experience, you know that the product name is important as well as unique to
each row, so you set productName as the primary key. Then, you retrieve rows of
interest based on the productName. In such a case, use a statement like the following to
define the table.
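Such a statement might look like this sketch; the table name products is illustrative, and only the field names come from the list above:

```sql
CREATE TABLE products (
    productName STRING,
    productType STRING,
    productLine STRING,
    PRIMARY KEY(productName))
```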
Shard Keys
The main purpose of shard keys is to distribute data across the Oracle NoSQL
Database Cloud Service cluster for increased efficiency, and to position records that
share the shard key locally for easy reference and access. Records that share the
shard key are stored in the same physical location and can be accessed atomically
and efficiently.
Your primary key and shard key design has implications for scaling and achieving
provisioned throughput. For example, when records share shard keys, you can delete
multiple table rows in an atomic operation, or retrieve a subset of rows in your table in
a single atomic operation. In addition to enabling scalability, well-designed shard keys
can improve performance by requiring fewer cycles to put data to, or get data from, a
single shard.
For example, suppose that you designate three primary key fields: productName,
productType, and productLine.
Because you know that your application frequently makes queries using the productName and
productType columns, specifying those fields as shard keys is appropriate. The shard key
designation guarantees that all rows for these two columns are stored on the same shard. If
these two fields are not shard keys, the most frequently queried columns could be stored on
any shard. Then, locating all rows for both fields requires scanning all data storage, rather
than one shard.
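Under that design, the shard key designation can be sketched in DDL with the SHARD clause as follows; the table name products is illustrative:

```sql
CREATE TABLE products (
    productName STRING,
    productType STRING,
    productLine STRING,
    PRIMARY KEY(SHARD(productName, productType), productLine))
```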
Shard keys designate storage on the same shard to facilitate efficient queries for key values.
However, because you want your data to be distributed across the shards for best
performance, you must avoid shard keys that have few unique values.
Note:
If you do not designate shard keys when creating a table, Oracle NoSQL Database
Cloud Service uses the primary keys for shard organization.
Time to Live
Learn how to specify expiration times for tables and rows using the Time-to-Live (TTL)
feature.
Many applications handle data that has a limited useful lifetime. Time-to-Live (TTL) is a
mechanism that allows you to set a time frame on table rows, after which the rows expire
automatically, and are no longer available. It is the amount of time data is allowed to remain
in the Oracle NoSQL Database Cloud Service. Data that reaches expiration time can no
longer be retrieved, and does not appear in any storage statistics.
By default, every table that you create has a TTL value of zero, indicating no expiration
time. You can declare a TTL value when you create a table, specifying the TTL with a
number, followed by either HOURS or DAYS. Table rows inherit the TTL value of the table
in which they reside, unless you explicitly set a TTL value for table rows. Setting a
row's TTL value overrides the table's TTL value. If you change the table's TTL value
after the row has a TTL value, the row's TTL value persists.
You can update the TTL value for a table row at any time before the row reaches the
expiration time. Expired data can no longer be accessed. Therefore, using TTL values
is more efficient than manually deleting rows, because the overhead of writing a
database log entry for the data deletion is avoided. Expired data is purged from the
disk after the expiration date.
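As a sketch, a table-level TTL can be declared when the table is created; the table and column names below are illustrative:

```sql
CREATE TABLE sensor_data (
    sensor_id LONG,
    reading JSON,
    PRIMARY KEY(sensor_id)) USING TTL 5 DAYS
```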
Note:
Once dropped, a table with the same name can
be created again.
Topics
1. Downloading the Oracle NoSQL Database Cloud Simulator
2. Oracle NoSQL Database Cloud Simulator Compared With Oracle NoSQL Database
Cloud Service
Note:
Your local system should meet the following requirements to run the Oracle NoSQL
Database Cloud Simulator:
• Java JDK version 10 or higher installed on your machine.
• A minimum of 5 GB of available disk space where you plan to install the Oracle
NoSQL Database Cloud Simulator.
The output displays all directories and files that are part of the package. All the
Oracle NoSQL Database Cloud Simulator related .jar files are placed in the
cloudsim/lib directory.
After extracting the package, read the oracle-nosql-cloud-simulator-
<version_number>/README.txt file for instructions on how to start and stop the
simulator.
In order to use the Oracle NoSQL Database Cloud Simulator you must download one
of the supported Oracle NoSQL language SDKs. The SDKs have instructions and
example code to connect to either the Oracle NoSQL Database Cloud Simulator or the
Oracle NoSQL Database Cloud Service.
Simulator data is located). Oracle NoSQL Database Cloud Simulator assumes exclusive
control over the data storage directory.
• The Oracle NoSQL Database Cloud Simulator does not support or require security-
relevant configurations.
• No hard limit is enforced on the number of tables, size of tables, number of indexes, or
maximum throughput specified for tables (except for the amount of storage on the local
disk drive).
• Data Definition Language (DDL) operations, such as creating or dropping a table, and
creating or dropping an index, are not throttled.
• Operational history is not maintained.
Overview
Oracle NoSQL Database Migrator lets you move Oracle NoSQL tables from one data source
to another, such as Oracle NoSQL Database on-premises or cloud or even a simple JSON
file.
There can be many situations that require you to migrate NoSQL tables from or to an Oracle
NoSQL Database. For instance, a team of developers enhancing a NoSQL Database
application may want to test their updated code in the local Oracle NoSQL Database Cloud
Service (NDCS) instance using cloudsim. To verify all the possible test cases, they must set
up the test data similar to the actual data. To do this, they must copy the NoSQL tables from
the production environment to their local NDCS instance, the cloudsim environment. In
another situation, NoSQL developers may need to move their application data from
on-premises to the cloud and vice versa, either for development or testing.
In all such cases and many more, you can use Oracle NoSQL Database Migrator to move
your NoSQL tables from one data source to another, such as Oracle NoSQL Database
on-premises or cloud, or even a simple JSON file. You can also copy NoSQL tables from a
MongoDB-formatted JSON input file, a DynamoDB-formatted JSON input file (either stored in
an AWS S3 source or in files), or a CSV file into your NoSQL Database on-premises or cloud.
As depicted in the following figure, the NoSQL Database Migrator utility acts as a connector
or pipe between the data source and the target (referred to as the sink). In essence, this
utility exports data from the selected source and imports that data into the sink. This tool is
table-oriented, that is, you can move the data only at the table level. A single migration task
operates on a single table and supports migration of table data from source to sink in various
data formats.
Oracle NoSQL Database Migrator is designed such that it can support additional
sources and sinks in the future. For a list of sources and sinks supported by Oracle
NoSQL Database Migrator as of the current release, see Supported Sources and
Sinks.
The accompanying figure shows the migration pipe: NoSQL table data flows from the
Source, through the migration pipe where optional transformations can be applied, and
into the Sink.
{
    "source": {
        "type" : <source type>,
        //source-configuration for type. See Source Configuration Templates.
    },
    "sink": {
Note:
Because JSON is case-sensitive, all the parameters defined in the
configuration file are case-sensitive unless specified otherwise.
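To make the overall shape concrete, the following sketch shows a configuration that migrates a JSON file into Oracle NoSQL Database Cloud Service. The individual parameter names under source and sink are assumptions for illustration; refer to the Source and Sink Configuration Templates for the authoritative parameter lists:

```json
{
  "source": {
    "type": "file",
    "format": "json",
    "dataPath": "/home/user/sample.json"
  },
  "sink": {
    "type": "nosqldb_cloud",
    "endpoint": "us-ashburn-1",
    "table": "sampleTable",
    "compartment": "mycompartment",
    "credentials": "/home/user/.oci/config"
  },
  "abortOnError": true
}
```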
Data Source/Sink Data Format Valid Source Valid Sink
Oracle NoSQL Database (nosqldb) NA Y Y
Oracle NoSQL Database Cloud Service (nosqldb_cloud) NA Y Y
File system (file) JSON (json) Y Y
File system (file) MongoDB JSON (mongodb_json) Y N
File system (file) DynamoDB JSON (dynamodb_json) Y N
File system (file) Parquet (parquet) N Y
File system (file) CSV (csv) Y N
OCI Object Storage (object_storage_oci) JSON (json) Y Y
OCI Object Storage (object_storage_oci) MongoDB JSON (mongodb_json) Y N
OCI Object Storage (object_storage_oci) Parquet (parquet) N Y
OCI Object Storage (object_storage_oci) CSV (csv) Y N
AWS S3 DynamoDB JSON (dynamodb_json) Y N
Note:
Many configuration parameters are common across the source and sink
configurations. For ease of reference, the description for such parameters is
repeated for each source and sink in the documentation sections that explain the
configuration file formats for the various types of sources and sinks. In all cases, the
syntax and semantics of the parameters with the same name are identical.
All sources and sinks that use services in the Oracle Cloud Infrastructure (OCI) can
use certain parameters for providing optional security information. This information can
be provided using an OCI configuration file or Instance Principal.
Oracle NoSQL Database sources and sinks require mandatory security information if
the installation is secure and uses Oracle Wallet-based authentication. This
information can be provided by adding a jar file to the <MIGRATOR_HOME>/lib
directory.
Wallet-based Authentication
If an Oracle NoSQL Database installation uses Oracle Wallet-based authentication,
you need an additional jar file that is part of the EE installation. For more information,
see Oracle Wallet.
Without this jar file, you will get the following error message:
Could not find kvstore-ee.jar in lib directory. Copy kvstore-
ee.jar to lib directory.
To prevent the exception shown above, you must copy the kvstore-ee.jar file from
your EE server package to the <MIGRATOR_HOME>/lib directory.
<MIGRATOR_HOME> is the nosql-migrator-M.N.O/ directory created by
extracting the Oracle NoSQL Database Migrator package, where M.N.O represents
the release.major.minor version numbers. For example, nosql-migrator-1.1.0/lib.
Note:
The wallet-based authentication is supported ONLY in the Enterprise Edition
(EE) of Oracle NoSQL Database.
The workflow begins by generating a configuration JSON file. You can then either
proceed to the migration with the generated configuration JSON file, or save the
configuration JSON file for a future migration. You can reuse the configuration JSON
file multiple times by running runMigrator and passing the configuration JSON file as
a parameter.
Note:
The Oracle NoSQL Database Migrator utility requires Java 11 or higher to run.
referring to Supported Sources and Sinks. This is also an appropriate phase to decide
the schema for your NoSQL table in the target or sink, and to create it.
• Identify Sink Table Schema: If the sink is Oracle NoSQL Database on-premises or
cloud, you must identify the schema for the sink table and ensure that the source
data matches the target schema. If required, use transformations to map the
source data to the sink table.
– Default Schema: NoSQL Database Migrator provides an option to create a
table with the default schema without the need to predefine the schema for the
table. This is useful primarily when loading JSON source files into Oracle
NoSQL Database.
If the source is a MongoDB-formatted JSON file, the default schema for the
table will be as follows:
Where:
— tablename = value provided for the table attribute in the configuration.
— ID = _id value from each document of the mongoDB exported JSON source
file.
— DOCUMENT = For each document in the mongoDB exported file, the
contents excluding the _id field are aggregated into the DOCUMENT column.
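Reading the definitions above together, the MongoDB default schema presumably takes a shape like this sketch; the exact DDL emitted by the tool is not reproduced in this excerpt:

```sql
CREATE TABLE IF NOT EXISTS <tablename> (
    ID STRING,
    DOCUMENT JSON,
    PRIMARY KEY(SHARD(ID)))
```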
If the source is a DynamoDB-formatted JSON file, the default schema for the
table will be as follows:
Where:
— TABLE_NAME = value provided for the sink table in the configuration
— DDBPartitionKey_name = value provided for the partition key in the
configuration
— DDBPartitionKey_type = value provided for the data type of the partition
key in the configuration
— DDBSortKey_name = value provided for the sort key in the configuration if
any
— DDBSortKey_type = value provided for the data type of the sort key in the
configuration if any
— DOCUMENT = All attributes except the partition and sort key of a Dynamo
DB table item aggregated into a NoSQL JSON column
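Combining the definitions above, the DynamoDB default schema can be sketched as follows; the sort key line and its appearance in the primary key apply only when a sort key is configured, and the exact DDL emitted by the tool is not reproduced in this excerpt:

```sql
CREATE TABLE IF NOT EXISTS <TABLE_NAME> (
    <DDBPartitionKey_name> <DDBPartitionKey_type>,
    <DDBSortKey_name> <DDBSortKey_type>,
    DOCUMENT JSON,
    PRIMARY KEY(SHARD(<DDBPartitionKey_name>), <DDBSortKey_name>))
```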
If the source format is a CSV file, a default schema is not supported for the
target table. You can create a schema file with a table definition containing the
same number of columns and data types as the source CSV file. For more
details on the Schema file creation, see Providing Table Schema.
For all the other sources, the default schema will be as follows:
Where:
— tablename = value provided for the table attribute in the configuration.
— ID = An auto-generated LONG value.
— DOCUMENT = The JSON record provided by the source is aggregated into the
DOCUMENT column.
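By the same reading, the default schema for these sources can be sketched as follows; the exact DDL emitted by the tool is not reproduced in this excerpt:

```sql
CREATE TABLE IF NOT EXISTS <tablename> (
    ID LONG GENERATED ALWAYS AS IDENTITY,
    DOCUMENT JSON,
    PRIMARY KEY(ID))
```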
Note:
If the _id value is not provided as a string in the MongoDB-formatted JSON
file, NoSQL Database Migrator converts it into a string before inserting it
into the default schema.
• Providing Table Schema: NoSQL Database Migrator allows the source to provide
schema definitions for the table data using the schemaInfo attribute. The schemaInfo
attribute is available in all the data sources that do not have an implicit schema already
defined. Sink data stores can choose any one of the following options.
– Use the default schema defined by the NoSQL Database Migrator.
– Use the source-provided schema.
– Override the source-provided schema by defining its own schema. For example, if
you want to transform the data from the source schema to another schema, you need
to override the source-provided schema and use the transformation capability of the
NoSQL Database Migrator tool.
The table schema file, for example, mytable_schema.ddl can include table DDL
statements. The NoSQL Database Migrator tool executes this table schema file before
starting the migration. The migrator tool supports no more than one DDL
statement per line in the schema file. For example,
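A hypothetical mytable_schema.ddl honoring the one-statement-per-line rule might contain (table, column, and index names are illustrative):

```sql
CREATE TABLE IF NOT EXISTS myTable (id INTEGER, name STRING, PRIMARY KEY(id))
CREATE INDEX IF NOT EXISTS nameIdx ON myTable (name)
```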
Note:
Migration will fail if the table is present at the sink and the DDL in the
schemaPath differs from the table's schema.
• Create Sink Table: Once you identify the sink table schema, create the sink table
either through the Admin CLI or using the schemaInfo attribute of the sink
configuration file. See Sink Configuration Templates .
Note:
If the source is a CSV file, create a file with the DDL commands for the
schema of the target table. Provide the file path in the
schemaInfo.schemaPath parameter of the sink configuration file.
Note:
The support for migrating TTL metadata for table rows is only available for
Oracle NoSQL Database and Oracle NoSQL Database Cloud Service.
//Row 1
{
"id" : 1,
"name" : "xyz",
"age" : 45,
"_metadata" : {
"expiration" : 1629709200000 //Row Expiration time in milliseconds
}
}
//Row 2
{
"id" : 2,
"name" : "abc",
"age" : 52,
"_metadata" : {
"expiration" : 1629709400000 //Row Expiration time in milliseconds
}
}
{
"id" : 8,
"name" : "xyz",
"_metadata" : {
"expiration" : 1629709200000 //Monday, August 23, 2021 9:00:00 AM UTC
}
}
{
"id" : 8,
"name" : "xyz",
"_metadata" : {
"ttl" : 1629712800000 //Monday, August 23, 2021 10:00:00 AM UTC
}
}
[~]$ ./runMigrator
configuration file is not provided. Do you want to generate
configuration?
(y/n)
[n]: y
...
...
manually before running the runMigrator command with the -c or --config option. For
any help with the source and sink configuration parameters, see Oracle NoSQL
Database Migrator Reference.
Example:
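A typical invocation with a prepared configuration file looks like this (the config file path is a placeholder):

```shell
[~/nosqlMigrator/nosql-migrator-1.0.0]$ ./runMigrator --config <complete/path/to/the/JSON/config/file>
```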
• Log File:
You can specify the name of the log file using the --log-file or -f parameter. If --log-file is passed as a runtime parameter to the runMigrator command, the NoSQL Database Migrator writes all the log messages to that file; otherwise, they are written to the standard output.
Example:
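A sketch of an invocation that redirects log messages to a file (both paths are placeholders):

```shell
[~/nosqlMigrator/nosql-migrator-1.0.0]$ ./runMigrator --config <complete/path/to/the/JSON/config/file> --log-file <complete/path/to/the/log/file>
```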
Topics:
• Migrate from Oracle NoSQL Database Cloud Service to a JSON file
• Migrate from Oracle NoSQL Database On-Premise to Oracle NoSQL Database
Cloud Service
• Migrate from JSON file source to Oracle NoSQL Database Cloud Service
• Migrate from MongoDB JSON file to an Oracle NoSQL Database Cloud
Service
• Migrate from DynamoDB JSON file in AWS S3 to an Oracle NoSQL Database
Cloud Service
• Migrate from DynamoDB JSON file to Oracle NoSQL Database
• Migrate from CSV file to Oracle NoSQL Database
Use Case
An organization decides to train a model using the Oracle NoSQL Database Cloud
Service (NDCS) data to predict future behaviors and provide personalized
recommendations. They can take a periodic copy of the NDCS tables' data to a JSON
file and apply it to the analytic engine to analyze and train the model. Doing this helps
them separate the analytical queries from the low-latency critical paths.
Example
For the demonstration, let us look at how to migrate the data and schema definition of
a NoSQL table called myTable from NDCS to a JSON file.
Prerequisites
• Identify the source and sink for the migration.
– Source: Oracle NoSQL Database Cloud Service
– Sink: JSON file
• Identify your OCI cloud credentials and capture them in the OCI config file. Save
the config file in ~/.oci/config. See Acquiring Credentials in Using Oracle
NoSQL Database Cloud Service.
[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>
• Identify the region endpoint and compartment name for your Oracle NoSQL
Database Cloud Service.
– endpoint: us-phoenix-1
– compartment: developers
Procedure
To migrate the data and schema definition of myTable from Oracle NoSQL Database Cloud
Service to a JSON file:
1. Open the command prompt and navigate to the directory where you extracted the NoSQL
Database Migrator utility.
2. To generate the configuration file using the NoSQL Database Migrator, run the
runMigrator command without any runtime parameters.
[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator
3. As you did not provide the configuration file as a runtime parameter, the utility prompts if
you want to generate the configuration now. Type y.
4. Based on the prompts from the utility, choose your options for the Source configuration.
5. Based on the prompts from the utility, choose your options for the Sink configuration.
Would you like to migrate the table schema also? (y/n) [y]: y
Enter path to a file to store table schema: /home/apothula/nosqlMigrator/myTableSchema
6. Based on the prompts from the utility, choose your options for the source data
transformations. The default value is n.
7. Enter your choice to determine whether to proceed with the migration in case any
record fails to migrate.
9. Finally, the utility prompts for your choice to decide whether to proceed with the
migration with the generated configuration file or not. The default option is y.
Note:
If you select n, you can use the generated configuration file to run the
migration using the ./runMigrator -c or the ./runMigrator --config
option.
10. The NoSQL Database Migrator migrates your data and schema from NDCS to the JSON
file.
Validation
To validate the migration, you can open the JSON Sink files and view the schema and data.
[~/nosqlMigrator]$cat myTableJSON
{
"id" : 10,
"document" : {
"course" : "Computer Science",
"name" : "Neena",
"studentid" : 105
}
}
{
"id" : 3,
"document" : {
"course" : "Computer Science",
"name" : "John",
"studentid" : 107
}
}
{
"id" : 4,
"document" : {
"course" : "Computer Science",
"name" : "Ruby",
"studentid" : 100
}
}
{
"id" : 6,
"document" : {
"course" : "Bio-Technology",
"name" : "Rekha",
"studentid" : 104
}
}
{
"id" : 7,
"document" : {
"course" : "Computer Science",
"name" : "Ruby",
"studentid" : 100
}
}
{
"id" : 5,
"document" : {
"course" : "Journalism",
"name" : "Rani",
"studentid" : 106
}
}
{
"id" : 8,
"document" : {
"course" : "Computer Science",
"name" : "Tom",
"studentid" : 103
}
}
{
"id" : 9,
"document" : {
"course" : "Computer Science",
"name" : "Peter",
"studentid" : 109
}
}
{
"id" : 1,
"document" : {
"course" : "Journalism",
"name" : "Tracy",
"studentid" : 110
}
}
{
"id" : 2,
"document" : {
"course" : "Bio-Technology",
"name" : "Raja",
"studentid" : 108
}
}
[~/nosqlMigrator]$cat myTableSchema
CREATE TABLE IF NOT EXISTS myTable (id INTEGER, document JSON, PRIMARY KEY(SHARD(id)))
Migrate from Oracle NoSQL Database On-Premise to Oracle NoSQL Database Cloud Service
This example shows how to use the Oracle NoSQL Database Migrator to copy data and the
schema definition of a NoSQL table from Oracle NoSQL Database to Oracle NoSQL
Database Cloud Service (NDCS).
Use Case
As a developer, you are exploring options to avoid the overhead of managing the resources,
clusters, and garbage collection for your existing NoSQL Database KVStore workloads. As a
solution, you decide to migrate your existing on-premise KVStore workloads to Oracle
NoSQL Database Cloud Service because NDCS manages them automatically.
Example
For the demonstration, let us look at how to migrate the data and schema definition of a
NoSQL table called myTable from the NoSQL Database KVStore to NDCS. We will also use
this use case to show how to run the runMigrator utility by passing a precreated
configuration file.
Prerequisites
• Identify the source and sink for the migration.
– Source: Oracle NoSQL Database
– Sink: Oracle NoSQL Database Cloud Service
• Identify your OCI cloud credentials and capture them in the OCI config file. Save the
config file in ~/.oci/config. See Acquiring Credentials in Using Oracle NoSQL
Database Cloud Service.
[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>
• Identify the region endpoint and compartment name for your Oracle NoSQL Database
Cloud Service.
– endpoint: us-phoenix-1
– compartment: developers
• Identify the following details for the on-premise KVStore:
– storeName: kvstore
– helperHosts: <hostname>:5000
– table: myTable
Procedure
To migrate the data and schema definition of myTable from NoSQL Database KVStore to
NDCS:
1. Prepare the configuration file (in JSON format) with the identified Source and Sink
details. See Source Configuration Templates and Sink Configuration Templates .
{
"source" : {
"type" : "nosqldb",
"storeName" : "kvstore",
"helperHosts" : ["<hostname>:5000"],
"table" : "myTable",
"requestTimeoutMs" : 5000
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "us-phoenix-1",
"table" : "myTable",
"compartment" : "developers",
"schemaInfo" : {
"schemaPath" : "<complete/path/to/the/JSON/file/with/DDL/commands/for/the/schema/definition>",
"readUnits" : 100,
"writeUnits" : 100,
"storageSize" : 1
},
"credentials" : "<complete/path/to/oci/config/file>",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}
2. Open the command prompt and navigate to the directory where you extracted the
NoSQL Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --
config or -c option.
[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator --config
<complete/path/to/the/JSON/config/file>
Validation
To validate the migration, you can log in to your NDCS console and verify that myTable is created with the source data.
Migrate from JSON file source to Oracle NoSQL Database Cloud Service
This example shows the usage of Oracle NoSQL Database Migrator to copy data from a
JSON file source to Oracle NoSQL Database Cloud Service.
After evaluating multiple options, an organization finalizes Oracle NoSQL Database Cloud
Service as its NoSQL Database platform. As its source contents are in JSON file format, they
are looking for a way to migrate them to Oracle NoSQL Database Cloud Service.
In this example, you will learn to migrate the data from a JSON file called SampleData.json.
You run the runMigrator utility by passing a pre-created configuration file. If the configuration
file is not provided as a run time parameter, the runMigrator utility prompts you to generate
the configuration through an interactive procedure.
Prerequisites
• Identify the source and sink for the migration.
– Source: JSON source file.
SampleData.json is the source file. It contains multiple JSON documents with one
document per line, delimited by a new line character.
{"id":6,"val_json":{"array":["q","r","s"],"date":"2023-02-04T02:38:57.520Z","nestarray":[[1,2,3],[10,20,30]],"nested":{"arrayofobjects":[{"datefield":"2023-03-04T02:38:57.520Z","numfield":30,"strfield":"foo54"},{"datefield":"2023-02-04T02:38:57.520Z","numfield":56,"strfield":"bar23"}],"nestNum":10,"nestString":"bar"},"num":1,"string":"foo"}}
{"id":3,"val_json":{"array":["g","h","i"],"date":"2023-02-02T02:38:57.520Z","nestarray":[[1,2,3],[10,20,30]],"nested":{"arrayofobjects":[{"datefield":"2023-02-02T02:38:57.520Z","numfield":28,"strfield":"foo3"},{"datefield":"2023-02-02T02:38:57.520Z","numfield":38,"strfield":"bar"}],"nestNum":10,"nestString":"bar"},"num":1,"string":"foo"}}
{"id":7,"val_json":{"array":["a","b","c"],"date":"2023-02-20T02:38:57.520Z","nestarray":[[1,2,3],[10,20,30]],"nested":{"arrayofobjects":[{"datefield":"2023-01-20T02:38:57.520Z","numfield":28,"strfield":"foo"},{"datefield":"2023-01-22T02:38:57.520Z","numfield":38,"strfield":"bar"}],"nestNum":10,"nestString":"bar"},"num":1,"string":"foo"}}
{"id":4,"val_json":{"array":["j","k","l"],"date":"2023-02-03T02:38:57.520Z","nestarray":[[1,2,3],[10,20,30]],"nested":{"arrayofobjects":[{"datefield":"2023-02-03T02:38:57.520Z","numfield":28,"strfield":"foo"},{"datefield":"2023-02-03T02:38:57.520Z","numfield":38,"strfield":"bar"}],"nestNum":10,"nestString":"bar"},"num":1,"string":"foo"}}
• Identify your OCI cloud credentials and capture them in the configuration file. Save
the config file in /home/user/.oci/config. For more details, see Acquiring
Credentials in Using Oracle NoSQL Database Cloud Service.
[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
region=us-ashburn-1
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>
• Identify the region endpoint and compartment name for your Oracle NoSQL
Database Cloud Service.
– endpoint: us-ashburn-1
– compartment: Training-NoSQL
• Identify the following details for the JSON source file:
– schemaPath: <absolute path to the schema definition file containing
DDL statements for the NoSQL table at the sink>.
In this example, the DDL file is schema_json.ddl.
Procedure
To migrate the data from the JSON source file to Oracle NoSQL Database Cloud Service:
1. Prepare the configuration file (in JSON format) with the identified Source and Sink details. See Source Configuration Templates and Sink Configuration Templates.
{
"source" : {
"type" : "file",
"format" : "json",
"schemaInfo" : {
"schemaPath" : "[~/nosql-migrator-1.5.0]/schema_json.ddl"
},
"dataPath" : "[~/nosql-migrator-1.5.0]/SampleData.json"
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "us-ashburn-1",
"table" : "Migrate_JSON",
"compartment" : "Training-NoSQL",
"includeTTL" : false,
"schemaInfo" : {
"readUnits" : 100,
"writeUnits" : 60,
"storageSize" : 1,
"useSourceSchema" : true
},
"credentials" : "/home/user/.oci/config",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"overwrite" : true,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.5.0"
}
2. Open the command prompt and navigate to the directory where you extracted the Oracle
NoSQL Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --config or -
c option.
4. The utility proceeds with the data migration. The Migrate_JSON table is created at the sink with the schema provided in the schemaPath.
Validation
To validate the migration, you can log in to your Oracle NoSQL Database Cloud Service
console and verify that the Migrate_JSON table is created with the source data. For the
procedure to access the console, see Accessing the Service from the Infrastructure Console
article in the Oracle NoSQL Database Cloud Service document.
Figure 1-2 Oracle NoSQL Database Cloud Service Console Table Data
Migrate from MongoDB JSON file to an Oracle NoSQL Database Cloud Service
This example shows how to use the Oracle NoSQL Database Migrator to copy MongoDB-formatted data to the Oracle NoSQL Database Cloud Service (NDCS).
Use Case
After evaluating multiple options, an organization finalizes Oracle NoSQL Database
Cloud Service as its NoSQL Database platform. As its NoSQL tables and data are in
MongoDB, they are looking for a way to migrate those tables and data to Oracle
NDCS.
You can copy a file or directory containing the MongoDB exported JSON data for
migration by specifying the file or directory in the source configuration template.
A sample MongoDB-formatted JSON File is as follows:
{"_id":0,"name":"Aimee Zank","scores":[{"score":1.463179736705023,"type":"exam"},{"score":11.78273309957772,"type":"quiz"},{"score":35.8740349954354,"type":"homework"}]}
{"_id":1,"name":"Aurelia Menendez","scores":[{"score":60.06045071030959,"type":"exam"},{"score":52.79790691903873,"type":"quiz"},{"score":71.76133439165544,"type":"homework"}]}
{"_id":2,"name":"Corliss Zuk","scores":[{"score":67.03077096065002,"type":"exam"},{"score":6.301851677835235,"type":"quiz"},{"score":66.28344683278382,"type":"homework"}]}
{"_id":3,"name":"Bao Ziglar","scores":[{"score":71.64343899778332,"type":"exam"},{"score":24.80221293650313,"type":"quiz"},{"score":42.26147058804812,"type":"homework"}]}
{"_id":4,"name":"Zachary Langlais","scores":[{"score":78.68385091304332,"type":"exam"},{"score":90.2963101368042,"type":"quiz"},{"score":34.41620148042529,"type":"homework"}]}
MongoDB supports two types of extensions to the JSON format of files, Canonical mode and
Relaxed mode. You can supply the MongoDB-formatted JSON file that is generated using the
mongoexport tool in either Canonical or Relaxed mode. Both modes are supported by the NoSQL Database Migrator for migration.
For more information on the MongoDB Extended JSON (v2) format, see mongoexport_formats. For more information on generating a MongoDB-formatted JSON file, see mongoexport.
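For instance, a sketch of exporting a collection in Relaxed mode with mongoexport (the database and collection names are assumptions for illustration):

```shell
mongoexport --db=school --collection=grades --jsonFormat=relaxed --out=grades.json
```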
Example
For the demonstration, let us look at how to migrate a MongoDB-formatted JSON file to
NDCS. We will use a manually created configuration file for this example.
Prerequisites
• Identify the source and sink for the migration.
– Source: MongoDB-Formatted JSON File
– Sink: Oracle NoSQL Database Cloud Service
• Extract the data from MongoDB using the mongoexport utility. See mongoexport for more information.
• Create a NoSQL table in the sink with a table schema that matches the data in the MongoDB-formatted JSON file. As an alternative, you can instruct the NoSQL Database Migrator to create a table with the default schema structure by setting the defaultSchema attribute to true.
Note:
For a MongoDB-formatted JSON source, the default schema for the table will be as follows:
Where:
– tablename = value of the table config.
– ID = _id value from the MongoDB exported JSON source file.
– DOCUMENT = The entire contents of the MongoDB exported JSON source file are aggregated into the DOCUMENT column, excluding the _id field.
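Based on the descriptions above, the default schema can be sketched as follows (an illustrative reconstruction; the exact generated DDL may differ):

```sql
CREATE TABLE IF NOT EXISTS <tablename>(ID STRING, DOCUMENT JSON, PRIMARY KEY(ID))
```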
• Identify your OCI cloud credentials and capture them in the OCI config file. Save
the config file in ~/.oci/config. See Acquiring Credentials in Using Oracle
NoSQL Database Cloud Service.
[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>
• Identify the region endpoint and compartment name for your Oracle NoSQL
Database Cloud Service.
– endpoint: us-phoenix-1
– compartment: developers
Procedure
To migrate the MongoDB-formatted JSON data to the Oracle NoSQL Database Cloud
Service:
1. Prepare the configuration file (in JSON format) with the identified Source and Sink
details. See Source Configuration Templates and Sink Configuration Templates .
{
"source" : {
"type" : "file",
"format" : "mongodb_json",
"dataPath" : "<complete/path/to/the/MongoDB/Formatted/JSON/file>"
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "us-phoenix-1",
"table" : "mongoImport",
"compartment" : "developers",
"schemaInfo" : {
"defaultSchema" : true,
"readUnits" : 100,
"writeUnits" : 60,
"storageSize" : 1
},
"credentials" : "<complete/path/to/the/oci/config/file>",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}
2. Open the command prompt and navigate to the directory where you extracted the NoSQL
Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --config or -
c option.
Validation
To validate the migration, you can log in to your NDCS console and verify that the mongoImport table is created with the source data.
Migrate from DynamoDB JSON file in AWS S3 to an Oracle NoSQL Database Cloud
Service
This example shows how to use the Oracle NoSQL Database Migrator to copy a DynamoDB JSON file stored in an AWS S3 store to the Oracle NoSQL Database Cloud Service (NDCS).
Use Case:
After evaluating multiple options, an organization finalizes Oracle NoSQL Database Cloud Service over DynamoDB. The organization wants to migrate its tables and data from DynamoDB to Oracle NoSQL Database Cloud Service.
See Mapping of DynamoDB table to Oracle NoSQL table for more details.
You can migrate a file containing the DynamoDB exported JSON data from the AWS S3
storage by specifying the path in the source configuration template.
A sample DynamoDB-formatted JSON File is as follows:
{"Item":{"Id":{"N":"101"},"Phones":{"L":[{"L":[{"S":"555-222"},{"S":"123-567"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":{"N":"570004"},"Street":{"S":"21 main"},"DoorNum":{"N":"201"},"City":{"S":"London"}}},"FirstName":{"S":"Fred"},"FavNumbers":{"NS":["10"]},"LastName":{"S":"Smith"},"FavColors":{"SS":["Red","Green"]},"Age":{"N":"22"}}}
{"Item":{"Id":{"N":"102"},"Phones":{"L":[{"L":[{"S":"222-222"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":{"N":"560014"},"Street":{"S":"32 main"},"DoorNum":{"N":"1024"},"City":{"S":"Wales"}}},"FirstName":{"S":"John"},"FavNumbers":{"NS":["10"]},"LastName":{"S":"White"},"FavColors":{"SS":["Blue"]},"Age":{"N":"48"}}}
Example:
For this demonstration, you will learn how to migrate a DynamoDB JSON file in an
AWS S3 source to NDCS. You will use a manually created configuration file for this
example.
Prerequisites
• Identify the source and sink for the migration.
– Source: DynamoDB JSON File in AWS S3
– Sink: Oracle NoSQL Database Cloud Service
• Identify the table in AWS DynamoDB that needs to be migrated to NDCS. Log in to your AWS console using your credentials. Go to DynamoDB. Under Tables, choose the table to be migrated.
• Create an object bucket and export the table to S3. From your AWS console, go to
S3. Under buckets, create a new object bucket. Go back to DynamoDB and click
Exports to S3. Provide the source table and the destination S3 bucket and click
Export.
Refer to steps provided in Exporting DynamoDB table data to Amazon S3 to
export your table. While exporting, you select the format as DynamoDB JSON.
The exported data contains DynamoDB table data in multiple gzip files as shown
below.
/ 01639372501551-bb4dd8c3
|-- 01639372501551-bb4dd8c3 ==> exported data prefix
|----data
|------sxz3hjr3re2dzn2ymgd2gi4iku.json.gz ==>table data
|----manifest-files.json
|----manifest-files.md5
|----manifest-summary.json
|----manifest-summary.md5
|----_started
• You need AWS credentials (including the access key ID and secret access key) and config files (credentials and optionally config) to access AWS S3 from the migrator.
See Set and view configuration settings for more details on the configuration files.
See Creating a key pair for more details on creating access keys.
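A minimal AWS credentials file has the following shape (the values are placeholders):

```ini
[default]
aws_access_key_id=<your access key ID>
aws_secret_access_key=<your secret access key>
```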
• Identify your OCI cloud credentials and capture them in the OCI config file. Save the
config file in a directory .oci under your home directory (~/.oci/config). See Acquiring
Credentials for more details.
[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>
• Identify the region endpoint and compartment name for your Oracle NoSQL Database.
For example,
– endpoint: us-phoenix-1
– compartment: developers
Procedure
To migrate the DynamoDB JSON data to the Oracle NoSQL Database:
1. Prepare the configuration file (in JSON format) with the identified source and sink details.
See Source Configuration Templates and Sink Configuration Templates .
You can choose one of the following two options.
• Option 1: Importing a DynamoDB table as a JSON document using the default schema config.
Here the defaultSchema is TRUE, so the migrator creates the default schema at the sink. You must specify the DDBPartitionKey and the corresponding NoSQL column type; otherwise, an error is thrown.
{
"source" : {
"type" : "aws_s3",
"format" : "dynamodb_json",
"s3URL" : "<https://<bucket-name>.<s3_endpoint>/export_path>",
"credentials" : "</path/to/aws/credentials/file>",
"credentialsProfile" : "<profile name in aws credentials file>"
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "<region_name>",
"table" : "<table_name>",
"compartment" : "<compartment_name>",
"schemaInfo" : {
"defaultSchema" : true,
"readUnits" : 100,
"writeUnits" : 60,
"DDBPartitionKey" : "<PrimaryKey:Datatype>",
"storageSize" : 1
},
"credentials" : "<complete/path/to/the/oci/config/file>",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}
For a DynamoDB JSON source, the default schema for the table will be as shown below:
Where:
– TABLE_NAME = value provided for the sink 'table' in the configuration
– DDBPartitionKey_name = value provided for the partition key in the configuration
– DDBPartitionKey_type = value provided for the data type of the partition key in the configuration
– DDBSortKey_name = value provided for the sort key in the configuration, if any
– DDBSortKey_type = value provided for the data type of the sort key in the configuration, if any
– DOCUMENT = All attributes except the partition and sort key of a DynamoDB table item, aggregated into a NoSQL JSON column
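Putting these values together, the default schema can be sketched as follows (an illustrative reconstruction; the sort-key column appears only when a sort key is configured):

```sql
CREATE TABLE IF NOT EXISTS <TABLE_NAME> (<DDBPartitionKey_name> <DDBPartitionKey_type>, <DDBSortKey_name> <DDBSortKey_type>, DOCUMENT JSON, PRIMARY KEY(SHARD(<DDBPartitionKey_name>), <DDBSortKey_name>))
```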
• Option 2: Importing DynamoDB table as fixed columns using a user-supplied
schema file.
Here the defaultSchema is FALSE and you specify the schemaPath as a file
containing your DDL statement. See Mapping of DynamoDB types to Oracle
NoSQL types for more details.
Note:
If the DynamoDB table has a data type that is not supported in NoSQL, the migration fails.
The schema file is used to create the table at the sink as part of the migration. As long as the primary key data is provided, the input JSON record is inserted; otherwise, an error is thrown.
Note:
If the input data does not contain a value for a particular column (other than the primary key), the column's default value is used. The default value should be part of the column definition when creating the table, for example: id INTEGER NOT NULL DEFAULT 0. If the column does not have a default definition, SQL NULL is inserted when no value is provided for the column.
{
"source" : {
"type" : "aws_s3",
"format" : "dynamodb_json",
"s3URL" : "<https://<bucket-name>.<s3_endpoint>/export_path>",
"credentials" : "</path/to/aws/credentials/file>",
"credentialsProfile" : "<profile name in aws credentials file>"
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "<region_name>",
"table" : "<table_name>",
"compartment" : "<compartment_name>",
"schemaInfo" : {
"defaultSchema" : false,
"readUnits" : 100,
"writeUnits" : 60,
"schemaPath" : "<full path of the schema file with the DDL statement>",
"storageSize" : 1
},
"credentials" : "<complete/path/to/the/oci/config/file>",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}
2. Open the command prompt and navigate to the directory where you extracted the NoSQL
Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --config or -
c option.
[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator
--config <complete/path/to/the/JSON/config/file>
Records skipped=0.
Elapsed time: 0 min 2sec 50ms
Migration completed.
Validation
You can log in to your NDCS console and verify that the new table is created with the source data.
Migrate from DynamoDB JSON file to Oracle NoSQL Database
This example shows how to use the Oracle NoSQL Database Migrator to copy a DynamoDB JSON file to Oracle NoSQL Database (On-premises). A sample DynamoDB-formatted JSON file is as follows:
{"Item":{"Id":{"N":"101"},"Phones":{"L":[{"L":[{"S":"555-222"},{"S":"123-567"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":{"N":"570004"},"Street":{"S":"21 main"},"DoorNum":{"N":"201"},"City":{"S":"London"}}},"FirstName":{"S":"Fred"},"FavNumbers":{"NS":["10"]},"LastName":{"S":"Smith"},"FavColors":{"SS":["Red","Green"]},"Age":{"N":"22"}}}
{"Item":{"Id":{"N":"102"},"Phones":{"L":[{"L":[{"S":"222-222"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":{"N":"560014"},"Street":{"S":"32 main"},"DoorNum":{"N":"1024"},"City":{"S":"Wales"}}},"FirstName":{"S":"John"},"FavNumbers":{"NS":["10"]},"LastName":{"S":"White"},"FavColors":{"SS":["Blue"]},"Age":{"N":"48"}}}
You copy the exported DynamoDB table data from AWS S3 storage to a local mounted
file system.
Example:
For this demonstration, you will learn how to migrate a DynamoDB JSON file to Oracle
NoSQL Database(On-premises). You will use a manually created configuration file for
this example.
Prerequisites
• Identify the source and sink for the migration.
– Source: DynamoDB JSON File
– Sink: Oracle NoSQL Database (On-premises)
• In order to import DynamoDB table data to Oracle NoSQL Database, you must first
export the DynamoDB table to S3. Refer to steps provided in Exporting DynamoDB table
data to Amazon S3 to export your table. While exporting, you select the format as
DynamoDB JSON. The exported data contains DynamoDB table data in multiple gzip
files as shown below.
/ 01639372501551-bb4dd8c3
|-- 01639372501551-bb4dd8c3 ==> exported data prefix
|----data
|------sxz3hjr3re2dzn2ymgd2gi4iku.json.gz ==>table data
|----manifest-files.json
|----manifest-files.md5
|----manifest-summary.json
|----manifest-summary.md5
|----_started
• You must download the files from AWS S3. The structure of the files after the download will be as shown below.
download-dir/01639372501551-bb4dd8c3
|----data
|------sxz3hjr3re2dzn2ymgd2gi4iku.json.gz ==>table data
|----manifest-files.json
|----manifest-files.md5
|----manifest-summary.json
|----manifest-summary.md5
|----_started
Procedure
To migrate the DynamoDB JSON data to the Oracle NoSQL Database:
1. Prepare the configuration file (in JSON format) with the identified source and sink
details.See Source Configuration Templates and Sink Configuration Templates .
You can choose one of the following two options.
• Option 1: Importing a DynamoDB table as a JSON document using the default schema config.
Here the defaultSchema is TRUE, so the migrator creates the default schema at the sink. You must specify the DDBPartitionKey and the corresponding NoSQL column type; otherwise, an error is thrown.
{
"source" : {
"type" : "file",
"format" : "dynamodb_json",
"dataPath" : "<complete/path/to/the/DynamoDB/Formatted/JSON/file>"
},
"sink" : {
"type" : "nosqldb",
"table" : "<table_name>",
"storeName" : "kvstore",
"helperHosts" : ["<hostname>:5000"],
"schemaInfo" : {
"defaultSchema" : true,
"DDBPartitionKey" : "<PrimaryKey:Datatype>"
},
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}
For a DynamoDB JSON source, the default schema for the table will be as shown below:
Where:
– TABLE_NAME = value provided for the sink 'table' in the configuration
– DDBPartitionKey_name = value provided for the partition key in the configuration
– DDBPartitionKey_type = value provided for the data type of the partition key in the configuration
– DDBSortKey_name = value provided for the sort key in the configuration, if any
– DDBSortKey_type = value provided for the data type of the sort key in the configuration, if any
– DOCUMENT = All attributes except the partition and sort key of a DynamoDB table item, aggregated into a NoSQL JSON column
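As in the AWS S3 example, the default schema can be sketched as follows (an illustrative reconstruction; the sort-key column appears only when a sort key is configured):

```sql
CREATE TABLE IF NOT EXISTS <TABLE_NAME> (<DDBPartitionKey_name> <DDBPartitionKey_type>, <DDBSortKey_name> <DDBSortKey_type>, DOCUMENT JSON, PRIMARY KEY(SHARD(<DDBPartitionKey_name>), <DDBSortKey_name>))
```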
• Option 2: Importing DynamoDB table as fixed columns using a user-supplied
schema file.
Here the defaultSchema is FALSE and you specify the schemaPath as a file
containing your DDL statement. See Mapping of DynamoDB types to Oracle
NoSQL types for more details.
Note:
If the DynamoDB table has a data type that is not supported in NoSQL, the migration fails.
The schema file is used to create the table at the sink as part of the migration. As long as the primary key data is provided, the input JSON record is inserted; otherwise, an error is thrown.
Note:
If the input data does not contain a value for a particular column (other than the primary key), the column's default value is used. The default value should be part of the column definition when creating the table, for example: id INTEGER NOT NULL DEFAULT 0. If the column does not have a default definition, SQL NULL is inserted when no value is provided for the column.
{
"source" : {
"type" : "file",
"format" : "dynamodb_json",
"dataPath" : "<complete/path/to/the/DynamoDB/Formatted/JSON/file>"
},
"sink" : {
"type" : "nosqldb",
"table" : "<table_name>",
"schemaInfo" : {
"defaultSchema" : false,
"readUnits" : 100,
"writeUnits" : 60,
"schemaPath" : "<full path of the schema file with the DDL statement>",
"storageSize" : 1
},
"storeName" : "kvstore",
"helperHosts" : ["<hostname>:5000"]
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}
2. Open the command prompt and navigate to the directory where you extracted the NoSQL
Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the
--config or -c option.
[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator
--config <complete/path/to/the/JSON/config/file>
Validation
Verify that the new table is created with the source data:
desc <table_name>
SELECT * from <table_name>
Example
After evaluating multiple options, an organization finalizes Oracle NoSQL Database as
its NoSQL Database platform. Because its source content is in CSV file format, the
organization needs a way to migrate it to Oracle NoSQL Database.
In this example, you will learn to migrate the data from a CSV file called course.csv,
which contains information about various courses offered by a university. You generate
the configuration file from the runMigrator utility.
You can also prepare the configuration file with the identified source and sink details.
See Oracle NoSQL Database Migrator Reference.
Prerequisites
• Identify the source and sink for the migration.
– Source: CSV file
In this example, the source file is course.csv
cat [~/nosql-migrator-1.5.0]/course.csv
1,"Computer Science", "San Francisco", "2500"
2,"Bio-Technology", "Los Angeles", "1200"
3,"Journalism", "Las Vegas", "1500"
4,"Telecommunication", "San Francisco", "2500"
cat [~/nosql-migrator-1.5.0]/mytable_schema.ddl
create table course (id INTEGER, name STRING, location STRING, fees
INTEGER, PRIMARY KEY(id));
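As an alternative to the interactive prompts in the procedure below, a manually prepared configuration for this migration might look like the following sketch. The file paths, store name, and helper host are assumptions for illustration; adjust them to your environment.

```json
{
  "source" : {
    "type" : "file",
    "format" : "csv",
    "dataPath" : "/home/user/nosql-migrator-1.5.0/course.csv"
  },
  "sink" : {
    "type" : "nosqldb",
    "table" : "course",
    "schemaInfo" : {
      "defaultSchema" : false,
      "schemaPath" : "/home/user/nosql-migrator-1.5.0/mytable_schema.ddl"
    },
    "storeName" : "kvstore",
    "helperHosts" : ["localhost:5000"]
  },
  "abortOnError" : true
}
```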
Procedure
To migrate the CSV file data from course.csv to Oracle NoSQL Database Service, perform
the following steps:
1. Open the command prompt and navigate to the directory where you extracted the Oracle
NoSQL Database Migrator utility.
2. To generate the configuration file using Oracle NoSQL Database Migrator, execute the
runMigrator command without any runtime parameters.
[~/nosql-migrator-1.5.0]$./runMigrator
3. As you did not provide the configuration file as a runtime parameter, the utility
prompts you to confirm whether you want to generate the configuration now. Type y.
You can choose a location for the configuration file or retain the default location by
pressing the Enter key.
4. Based on the prompts from the utility, choose your options for the Source configuration.
5. Provide the path to the source CSV file. Further, based on the prompts from the
utility, you can choose to reorder the column names, select the encoding method, and
trim the trailing spaces in the target table.
UTF-8,UTF-16,US-ASCII,ISO-8859-1. [UTF-8]:
Do you want to trim the tailing spaces? (y/n) [n]: n
6. Based on the prompts from the utility, choose your options for the Sink
configuration.
7. Based on the prompts from the utility, provide the name of the target table.
8. Enter your choice to set the TTL value. The default value is n.
Include TTL data? If you select 'yes' TTL value provided by the
source will be set on imported rows. (y/n) [n]: n
9. Based on the prompts from the utility, specify whether or not the target table must
be created through the Oracle NoSQL Database Migrator tool. If the table is
already created, enter n. If the table is not created, the utility
requests the path to the file containing the DDL commands for the schema of
the target table.
10. Enter your choice to determine whether to proceed with the migration in case any
record fails to migrate.
migrated?
(y/n) [n]: n
12. Finally, the utility prompts you to specify whether or not to proceed with the migration
using the generated configuration file. The default option is y.
Note: If you select n, you can perform the migration later with the generated
configuration file by running ./runMigrator with the -c or --config option.
13. The NoSQL Database Migrator copies your data from the CSV file to Oracle NoSQL
Database.
Validation
Start the SQL prompt in your KVStore and verify that the new table is created with
the source data. The query returns four rows.
Topics
• JSON as the File Source
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a JSON file as a source to a valid sink.
• JSON File in OCI Object Storage Bucket
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a JSON file in the OCI Object Storage bucket as a source to a valid sink.
• MongoDB-Formatted JSON File
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a MongoDB-Formatted JSON file as a source to a valid sink.
• MongoDB-Formatted JSON File in OCI Object Storage bucket
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a MongoDB-Formatted JSON file in the OCI Object Storage bucket as a source
to a valid sink.
• DynamoDB-Formatted JSON File stored in AWS S3
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a DynamoDB-Formatted JSON file in the AWS S3 storage as a source to a
valid sink.
• DynamoDB-Formatted JSON File
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a DynamoDB-Formatted JSON file as a source to a valid sink.
• Oracle NoSQL Database
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from Oracle NoSQL Database tables as a source to a valid sink.
• Oracle NoSQL Database Cloud Service
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from Oracle NoSQL Database Cloud Service tables as a source to a valid sink.
• CSV as the File Source
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a CSV file as a source to a valid sink.
• CSV file in OCI Object Storage Bucket
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a CSV file stored in an OCI Object Storage bucket as a source to a valid sink.
{"Item":{"PK":{"S":"ACCT#82691500"},"SK":
{"S":"ACCT#82691500"},"AccountIndexId":{"S":"ACCT#82691500"},"Emailid":
{"S":"alejandro.rosalez11@example.org"},"AccountId":
{"N":"82691500"},"PlasticCardNumber":{"S":"9610432116466295"},"FirstName":
{"S":"Alejandro"},"Addresses":{"M":{"RESIDENCE":{"M":{"city":{"S":"Any
Town"},"country":{"S":"USA"},"street":{"S":"123 Any Street"}}},"BUSINESS":
{"M":{"city":{"S":"Anytown"},"country":{"S":"country"},"street":{"S":"221
Main Street"}}}}},"LastName":{"S":"Rosalez"}}}
{"Item":{"PK":{"S":"ACCT#76584123"},"SK":
{"S":"ACCT#76584123"},"AccountIndexId":{"S":"ACCT#76584123"},"Emailid":
{"S":"zhang.wei@example.com"},"AccountId":
{"N":"76584123"},"PlasticCardNumber":
{"S":"4235400034568756"},"FirstName":{"S":"Zhang"},"Addresses":{"M":
{"RESIDENCE":{"M":{"city":{"S":"Any Town"},"country":
{"S":"USA"},"street":{"S":"135 Any Street"}}},"BUSINESS":{"M":{"city":
{"S":"AnyTown"},"country":{"S":"country"},"street":{"S":"100 Main
Street"}}}}},"LastName":{"S":"Wei"},"AuthUsers":{"M":{"AUTHUSER-2":
{"M":{"Name":{"S":"Mateo Jackson"},"PlasticCardNumber":
{"S":"4036516984267960"}}},"AUTHUSER-1":{"M":{"Name":{"S":"Paulo
Santos"},"PlasticCardNumber":{"S":"4036546984262340"}}}}}}}
"source" : {
"type" : "file",
"format" : "json",
"dataPath": "</path/to/a/json/file>",
"schemaInfo": {
"schemaPath": "</path/to/schema/file>"
}
}
Source Parameters
• type
• format
• dataPath
• schemaInfo
• schemaInfo.schemaPath
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"
format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "json"
dataPath
• Purpose: Specifies the absolute path to a file or directory containing the JSON
data for migration.
You must ensure that this data matches the NoSQL table schema defined at the
sink. If you specify a directory, the NoSQL Database Migrator identifies all the files with
the .json extension in that directory for the migration. Sub-directories are not supported.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Specifying a JSON file
"dataPath" : "/home/user/sample.json"
– Specifying a directory
"dataPath" : "/home/user"
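The file-discovery rule above can be sketched in a few lines. This is not the migrator's actual implementation, just an illustration of the documented behavior (a file path is used as-is; a directory yields only its directly contained .json files):

```python
from pathlib import Path

def discover_json_files(data_path):
    """Sketch of the documented rule: if data_path is a file, use it;
    if it is a directory, pick up only the *.json files directly inside
    it (sub-directories are not scanned)."""
    p = Path(data_path)
    if p.is_file():
        return [p]
    return sorted(p.glob("*.json"))  # glob is non-recursive, matching the rule
```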
schemaInfo
• Purpose: Specifies the schema of the source data being migrated. This schema is
passed to the NoSQL sink.
• Data Type: Object
• Mandatory (Y/N): N
schemaInfo.schemaPath
• Purpose: Specifies the absolute path to the schema definition file containing DDL
statements for the NoSQL table being migrated.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
"schemaInfo" : {
"schemaPath" : "/home/user/mytable/Schema/schema.ddl"
}
{"Item":{"PK":{"S":"ACCT#82691500"},"SK":
{"S":"ACCT#82691500"},"AccountIndexId":{"S":"ACCT#82691500"},"Emailid":
{"S":"alejandro.rosalez11@example.org"},"AccountId":
{"N":"82691500"},"PlasticCardNumber":{"S":"9610432116466295"},"FirstName":
{"S":"Alejandro"},"Addresses":{"M":{"RESIDENCE":{"M":{"city":{"S":"Any
Town"},"country":{"S":"USA"},"street":{"S":"123 Any Street"}}},"BUSINESS":
{"M":{"city":{"S":"Anytown"},"country":{"S":"country"},"street":{"S":"221
Main Street"}}}}},"LastName":{"S":"Rosalez"}}}
{"Item":{"PK":{"S":"ACCT#76584123"},"SK":
{"S":"ACCT#76584123"},"AccountIndexId":{"S":"ACCT#76584123"},"Emailid":
{"S":"zhang.wei@example.com"},"AccountId":
{"N":"76584123"},"PlasticCardNumber":
{"S":"4235400034568756"},"FirstName":{"S":"Zhang"},"Addresses":{"M":
{"RESIDENCE":{"M":{"city":{"S":"Any Town"},"country":
{"S":"USA"},"street":{"S":"135 Any Street"}}},"BUSINESS":{"M":{"city":
{"S":"AnyTown"},"country":{"S":"country"},"street":{"S":"100 Main
Street"}}}}},"LastName":{"S":"Wei"},"AuthUsers":{"M":{"AUTHUSER-2":
{"M":{"Name":{"S":"Mateo Jackson"},"PlasticCardNumber":
{"S":"4036516984267960"}}},"AUTHUSER-1":{"M":{"Name":{"S":"Paulo
Santos"},"PlasticCardNumber":{"S":"4036546984262340"}}}}}}}
Note:
The valid sink types for OCI Object Storage source type are nosqldb and
nosqldb_cloud.
"source" : {
"type" : "object_storage_oci",
"format" : "json",
"endpoint" : "<OCI Object Storage service endpoint URL or region
ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"schemaInfo" : {
"schemaObject" : "<object name>"
},
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>
}
Source Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• schemaInfo
• schemaInfo.schemaObject
• credentials
• credentialsProfile
• useInstancePrincipal
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"
format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "json"
endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data Regions and
Associated Service URLs in Using Oracle NoSQL Database Cloud Service for the list of
data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"
namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an optional
parameter. If you don't specify this parameter, the default namespace of the tenancy is
used.
• Data Type: string
• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"
bucket
• Purpose: Specifies the name of the bucket, which contains the source JSON files.
Ensure that the required bucket already exists in the OCI Object Storage instance and
has read permissions.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"
prefix
• Purpose: Used for filtering the objects that are being migrated from the bucket. All
the objects with the given prefix present in the bucket are migrated. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If you do not provide any value, no filter is applied and all the objects present in
the bucket are migrated.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "my_table/Data/000000.json" (migrates only 000000.json)
2. "prefix" : "my_table/Data" (migrates all the objects with prefix my_table/
Data)
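The prefix is a plain object-name prefix, not a directory path. A minimal sketch of the documented filtering rule (not the migrator's actual code; the object names are illustrative):

```python
def filter_by_prefix(object_names, prefix=""):
    """Sketch of the documented rule: an empty prefix matches every
    object; otherwise only objects whose full names start with the
    prefix are selected."""
    return [name for name in object_names if name.startswith(prefix)]

objects = ["my_table/Data/000000.json",
           "my_table/Data/000001.json",
           "my_table/Schema/schema.ddl"]
filter_by_prefix(objects, "my_table/Data")   # both Data objects
filter_by_prefix(objects)                    # no prefix: all three objects
```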
schemaInfo
• Purpose: Specifies the schema of the source data being migrated. This schema is
passed to the NoSQL sink.
• Data Type: Object
• Mandatory (Y/N): N
schemaInfo.schemaObject
• Purpose: Specifies the name of the object in the bucket where NoSQL table
schema definitions for the data being migrated are stored.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
"schemaInfo" : {
"schemaObject" : "mytable/Schema/schema.ddl"
}
credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config
See Example Configuration for an example of the credentials file.
Note:
Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at the
same time.
credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle NoSQL
Database Cloud Service. User account credentials are referred to as a 'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.
Note:
This parameter is valid ONLY if the credentials parameter is specified.
useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance principal
authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on Instance Principal authentication method, see Source and Sink Security .
If not specified, it defaults to false.
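For instance, a source block that authenticates with an instance principal instead of a credentials file might look like this sketch (the region, namespace, bucket, and prefix values are illustrative):

```json
"source" : {
  "type" : "object_storage_oci",
  "format" : "json",
  "endpoint" : "us-ashburn-1",
  "namespace" : "my-namespace",
  "bucket" : "staging_bucket",
  "prefix" : "my_table/Data",
  "useInstancePrincipal" : true
}
```

Note that credentials and credentialsProfile are omitted here because useInstancePrincipal is mutually exclusive with the credentials parameter.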
{"_id":0,"name":"Aimee Zank","scores":
[{"score":1.463179736705023,"type":"exam"},
{"score":11.78273309957772,"type":"quiz"},
{"score":35.8740349954354,"type":"homework"}]}
{"_id":1,"name":"Aurelia Menendez","scores":
[{"score":60.06045071030959,"type":"exam"},
{"score":52.79790691903873,"type":"quiz"},
{"score":71.76133439165544,"type":"homework"}]}
{"_id":2,"name":"Corliss Zuk","scores":
[{"score":67.03077096065002,"type":"exam"},
{"score":6.301851677835235,"type":"quiz"},
{"score":66.28344683278382,"type":"homework"}]}
{"_id":3,"name":"Bao Ziglar","scores":
[{"score":71.64343899778332,"type":"exam"},
{"score":24.80221293650313,"type":"quiz"},
{"score":42.26147058804812,"type":"homework"}]}
{"_id":4,"name":"Zachary Langlais","scores":
[{"score":78.68385091304332,"type":"exam"},
{"score":90.2963101368042,"type":"quiz"},
{"score":34.41620148042529,"type":"homework"}]}
MongoDB supports two extensions to the JSON format of files, Canonical
mode and Relaxed mode. You can supply the MongoDB-formatted JSON file
generated by the mongoexport tool in either mode; the NoSQL Database
Migrator supports both for migration.
For more information on the MongoDB Extended JSON (v2) file format, see
mongoexport_formats.
For more information on generating MongoDB-formatted JSON files, see
mongoexport.
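As an illustration of the two modes (these literals are hand-written examples, not actual mongoexport output), the same field can appear either with an explicit BSON type wrapper or as a plain JSON value, and both parse with an ordinary JSON parser:

```python
import json

# Canonical mode keeps BSON type information in "$"-prefixed wrappers.
canonical = '{"_id": {"$numberInt": "0"}, "score": {"$numberDouble": "1.46"}}'
# Relaxed mode uses plain JSON values where it can.
relaxed = '{"_id": 0, "score": 1.46}'

doc_c = json.loads(canonical)
doc_r = json.loads(relaxed)
# In canonical mode the numeric value is nested under its type wrapper:
int(doc_c["_id"]["$numberInt"]) == doc_r["_id"]  # True
```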
"source" : {
"type" : "file",
"format" : "mongodb_json",
"dataPath": "</path/to/a/json/file>",
"schemaInfo": {
"schemaPath": "</path/to/schema/file>"
}
}
Source Parameters
• type
• format
• dataPath
• schemaInfo
• schemaInfo.schemaPath
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"
format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "mongodb_json"
dataPath
• Purpose: Specifies the absolute path to a file or directory containing the MongoDB
exported JSON data for migration.
You must have generated these files using the mongoexport tool. See mongoexport for
more information.
You can supply the MongoDB-formatted JSON file generated by the mongoexport tool in
either canonical or relaxed mode; both modes are supported by the NoSQL Database
Migrator for migration.
For more information on the MongoDB Extended JSON (v2) file format, see
mongoexport_formats.
If you specify a directory, the NoSQL Database Migrator identifies all the files with
the .json extension in that directory for the migration. Sub-directories are not supported.
You must ensure that this data matches the NoSQL table schema defined at the
sink.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Specifying a MongoDB formatted JSON file
"dataPath" : "/home/user/sample.json"
– Specifying a directory
"dataPath" : "/home/user"
schemaInfo
• Purpose: Specifies the schema of the source data being migrated. This schema is
passed to the NoSQL sink.
• Data Type: Object
• Mandatory (Y/N): N
schemaInfo.schemaPath
• Purpose: Specifies the absolute path to the schema definition file containing DDL
statements for the NoSQL table being migrated.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
"schemaInfo" : {
"schemaPath" : "/home/user/mytable/Schema/schema.ddl"
}
{"_id":0,"name":"Aimee Zank","scores":
[{"score":1.463179736705023,"type":"exam"},
{"score":11.78273309957772,"type":"quiz"},
{"score":35.8740349954354,"type":"homework"}]}
{"_id":1,"name":"Aurelia Menendez","scores":
[{"score":60.06045071030959,"type":"exam"},
{"score":52.79790691903873,"type":"quiz"},
{"score":71.76133439165544,"type":"homework"}]}
{"_id":2,"name":"Corliss Zuk","scores":
[{"score":67.03077096065002,"type":"exam"},
{"score":6.301851677835235,"type":"quiz"},
{"score":66.28344683278382,"type":"homework"}]}
{"_id":3,"name":"Bao Ziglar","scores":
[{"score":71.64343899778332,"type":"exam"},
{"score":24.80221293650313,"type":"quiz"},
{"score":42.26147058804812,"type":"homework"}]}
{"_id":4,"name":"Zachary Langlais","scores":
[{"score":78.68385091304332,"type":"exam"},
{"score":90.2963101368042,"type":"quiz"},
{"score":34.41620148042529,"type":"homework"}]}
Extract the data from MongoDB using the mongoexport utility and upload it to the OCI
Object Storage bucket. See mongoexport for more information.
Note:
The valid sink types for OCI Object Storage source type are nosqldb and
nosqldb_cloud.
"source" : {
"type" : "object_storage_oci",
"format" : "mongodb_json",
"endpoint" : "<OCI Object Storage service endpoint URL or region ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"schemaInfo" : {
"schemaObject" : "<object name>"
},
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>
}
Source Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• schemaInfo
• schemaInfo.schemaObject
• credentials
• credentialsProfile
• useInstancePrincipal
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"
format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "mongodb_json"
endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data
Regions and Associated Service URLs in Using Oracle NoSQL Database Cloud
Service for the list of data regions supported for Oracle NoSQL Database Cloud
Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"
namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an
optional parameter. If you don't specify this parameter, the default namespace of
the tenancy is used.
• Data Type: string
• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"
bucket
• Purpose: Specifies the name of the bucket, which contains the source MongoDB-
Formatted JSON files. Ensure that the required bucket already exists in the OCI
Object Storage instance and has read permissions.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"
prefix
• Purpose: Used for filtering the objects that are being migrated from the bucket. All
the objects with the given prefix present in the bucket are migrated. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If you do not provide any value, no filter is applied and all the MongoDB-formatted
JSON objects present in the bucket are migrated. Extract the data from
MongoDB using the mongoexport utility and upload it to the OCI Object Storage
bucket. See mongoexport for more information.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "mongo_export/Data/table.json" (migrates only table.json)
2. "prefix" : "mongo_export/Data" (migrates all the objects with prefix
mongo_export/Data)
schemaInfo
• Purpose: Specifies the schema of the source data being migrated. This schema is
passed to the NoSQL sink.
• Data Type: Object
• Mandatory (Y/N): N
schemaInfo.schemaObject
• Purpose: Specifies the name of the object in the bucket where NoSQL table schema
definitions for the data being migrated are stored.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
"schemaInfo" : {
"schemaObject" : "mytable/Schema/schema.ddl"
}
credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config
See Example Configuration for an example of the credentials file.
Note:
Even though the credentials and useInstancePrincipal parameters
are not mandatory individually, one of these parameters MUST be specified.
Additionally, these two parameters are mutually exclusive. Specify ONLY one of
these parameters, but not both at the same time.
credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle
NoSQL Database Cloud Service. User account credentials are referred to as a
'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.
Note:
This parameter is valid ONLY if the credentials parameter is
specified.
useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance
principal authentication to connect to Oracle NoSQL Database Cloud Service. For
more information on Instance Principal authentication method, see Source and
Sink Security .
If not specified, it defaults to false.
You can migrate a file containing the DynamoDB exported JSON data from the AWS S3
storage by specifying the path in the source configuration template.
A sample DynamoDB-formatted JSON File is as follows:
{"Item":{"Id":{"N":"101"},"Phones":{"L":[{"L":[{"S":"555-222"},
{"S":"123-567"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":
{"N":"570004"},"Street":{"S":"21 main"},"DoorNum":{"N":"201"},"City":
{"S":"London"}}},"FirstName":{"S":"Fred"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"Smith"},"FavColors":{"SS":["Red","Green"]},"Age":
{"N":"22"}}}
{"Item":{"Id":{"N":"102"},"Phones":{"L":[{"L":
[{"S":"222-222"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":
{"N":"560014"},"Street":{"S":"32 main"},"DoorNum":{"N":"1024"},"City":
{"S":"Wales"}}},"FirstName":{"S":"John"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"White"},"FavColors":{"SS":["Blue"]},"Age":
{"N":"48"}}}
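In this format every value is wrapped in a type descriptor (S for string, N for number, BOOL, M for map, L for list, SS/NS for string/number sets). A minimal sketch of unwrapping such an item into plain values follows; this is not the migrator's actual conversion logic, just an illustration of the format:

```python
def unwrap(value):
    """Convert one DynamoDB-JSON typed value into a plain Python value."""
    (kind, inner), = value.items()  # each value has exactly one type descriptor
    if kind == "S":
        return inner
    if kind == "N":
        return float(inner) if "." in inner else int(inner)
    if kind == "BOOL":
        return inner
    if kind == "M":
        return {k: unwrap(v) for k, v in inner.items()}
    if kind == "L":
        return [unwrap(v) for v in inner]
    if kind == "SS":
        return inner
    if kind == "NS":
        return [int(n) for n in inner]  # sketch: assumes integer members
    raise ValueError(f"unsupported DynamoDB type descriptor: {kind}")

item = {"Id": {"N": "101"}, "FirstName": {"S": "Fred"},
        "PremierCustomer": {"BOOL": False},
        "FavColors": {"SS": ["Red", "Green"]}}
plain = {k: unwrap(v) for k, v in item.items()}
# plain == {"Id": 101, "FirstName": "Fred", "PremierCustomer": False,
#           "FavColors": ["Red", "Green"]}
```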
You must export the DynamoDB table to AWS S3 storage as specified in Exporting
DynamoDB table data to Amazon S3.
The valid sink types for DynamoDB-formatted JSON stored in AWS S3 are nosqldb and
nosqldb_cloud.
"source" : {
"type" : "aws_s3",
"format" : "dynamodb_json",
"s3URL" : "<S3 object url>",
"credentials" : "</path/to/aws/credentials/file>",
"credentialsProfile" : "<profile name in aws credentials file>"
}
Source Parameters:
• type
• format
• s3URL
• credentials
• credentialsProfile
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "aws_s3"
format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "dynamodb_json"
Note:
If the value of the "type" is aws_s3, then format must be dynamodb_json.
s3URL
• Purpose: Specifies the URL of an exported DynamoDB table stored in AWS S3.
You can obtain this URL from the AWS S3 console. The valid URL format is https://
<bucket-name>.<s3_endpoint>/<prefix>. The migrator looks for .json.gz files
under the prefix for import.
Note:
You must export the DynamoDB table as specified in Exporting DynamoDB
table data to Amazon S3.
• Data Type: string
• Mandatory (Y/N): Y
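A hypothetical s3URL for an export stored in a bucket named my-ddb-exports in the us-east-1 region might look as follows (the bucket name, region, and export prefix are illustrative):

```json
"s3URL" : "https://my-ddb-exports.s3.us-east-1.amazonaws.com/AWSDynamoDB/01639372501551-bb4dd8c3/"
```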
credentials
• Purpose: Specifies the absolute path to a file containing the AWS credentials. If
not specified, it defaults to $HOME/.aws/credentials. See Configuration
and credential file settings for more details on the credentials file.
• Data Type: string
• Mandatory (Y/N): N
• Example:
"credentials" : "/home/user/.aws/credentials"
"credentials" : "/home/user/security/credentials"
Note:
The Migrator does not log any credentials information. You should protect the
credentials file from unauthorized access.
credentialsProfile
• Purpose: Name of the profile in the AWS credentials file to be used to connect to
AWS S3. User account credentials are referred to as a profile. If you do not specify
this value, it defaults to the default profile. See Configuration and
credential file settings for more details on the credentials file.
• Data Type: string
• Mandatory (Y/N): N
• Example:
"credentialsProfile" : "default"
"credentialsProfile": "test"
{"Item":{"Id":{"N":"101"},"Phones":{"L":[{"L":[{"S":"555-222"},
{"S":"123-567"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":
{"N":"570004"},"Street":{"S":"21 main"},"DoorNum":{"N":"201"},"City":
{"S":"London"}}},"FirstName":{"S":"Fred"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"Smith"},"FavColors":{"SS":["Red","Green"]},"Age":
{"N":"22"}}}
{"Item":{"Id":{"N":"102"},"Phones":{"L":[{"L":
[{"S":"222-222"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":
{"N":"560014"},"Street":{"S":"32 main"},"DoorNum":{"N":"1024"},"City":
{"S":"Wales"}}},"FirstName":{"S":"John"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"White"},"FavColors":{"SS":["Blue"]},"Age":
{"N":"48"}}}
You must copy the exported DynamoDB table data from AWS S3 storage to a locally
mounted file system.
The valid sink types for DynamoDB JSON file are nosqldb and nosqldb_cloud.
"source" : {
"type" : "file",
"format" : "dynamodb_json",
"dataPath" : "<path to a file or directory containing exported DDB table
data>"
}
Source Parameters:
• type
• format
• dataPath
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"
format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "dynamodb_json"
dataPath
• Purpose: Specifies the absolute path to a file or directory containing the exported
DynamoDB table data. You must copy the exported DynamoDB table data from AWS
S3 to a locally mounted file system. You must ensure that this data matches the
NoSQL table schema defined at the sink. If you specify a directory, the NoSQL
Database Migrator identifies all the files with the .json.gz extension in that
directory and in the data sub-directory.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Specifying a file
"dataPath" : "/home/user/AWSDynamoDB/01639372501551-bb4dd8c3/
data/zclclwucjy6v5mkefvckxzhfvq.json.gz"
– Specifying a directory
"dataPath" : "/home/user/AWSDynamoDB/01639372501551-bb4dd8c3"
{"id":20,"firstName":"Jane","lastName":"Smith","otherNames":
[{"first":"Jane","last":"teacher"}],"age":25,"income":55000,"address":
{"city":"San Jose","number":201,"phones":
[{"area":608,"kind":"work","number":6538955},
{"area":931,"kind":"home","number":9533341},
{"area":931,"kind":"mobile","number":9533382}],"state":"CA","street":"A
tlantic Ave","zip":95005},"connections":[40,75,63],"expenses":null}
{"id":10,"firstName":"John","lastName":"Smith","otherNames":
[{"first":"Johny","last":"chef"}],"age":22,"income":45000,"address":
{"city":"Santa Cruz","number":101,"phones":
[{"area":408,"kind":"work","number":4538955},
{"area":831,"kind":"home","number":7533341},
{"area":831,"kind":"mobile","number":7533382}],"state":"CA","street":"Pacific
Ave","zip":95008},"connections":[30,55,43],"expenses":null}
{"id":30,"firstName":"Adam","lastName":"Smith","otherNames":
[{"first":"Adam","last":"handyman"}],"age":45,"income":75000,"address":
{"city":"Houston","number":301,"phones":
[{"area":618,"kind":"work","number":6618955},
{"area":951,"kind":"home","number":9613341},
{"area":981,"kind":"mobile","number":9613382}],"state":"TX","street":"Indian
Ave","zip":95075},"connections":[60,45,73],"expenses":null}
"source" : {
"type": "nosqldb",
"table" : "<fully qualified table name>",
"storeName" : "<store name>",
"helperHosts" : ["hostname1:port1","hostname2:port2,..."],
"security" : "</path/to/store/security/file>",
"requestTimeoutMs" : 5000,
"includeTTL": <true|false>
}
Source Parameters
• type
• table
• storeName
• helperHosts
• security
• requestTimeoutMs
• includeTTL
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "nosqldb"
table
• Purpose: Fully qualified table name from which to migrate the data.
Format: [namespace_name:]<table_name>
If the table is in the DEFAULT namespace, you can omit the namespace_name. The table
must exist in the store.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– With the DEFAULT namespace "table" :"mytable"
– With a non-default namespace "table" : "mynamespace:mytable"
– To specify a child table "table" : "mytable.child"
storeName
• Purpose: Name of the Oracle NoSQL Database store.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "storeName" : "kvstore"
helperHosts
• Purpose: A list of host and registry port pairs in the hostname:port format. Delimit
each item in the list using a comma. You must specify at least one helper host.
• Data Type: array of strings
• Mandatory (Y/N): Y
• Example: "helperHosts" : ["localhost:5000","localhost:6000"]
security
• Purpose:
If your store is a secure store, provide the absolute path to the security login file
that contains your store credentials. See Configuring Security with Remote Access
in the Administrator's Guide for more information about the security login file.
You can use either password-file-based authentication or wallet-based
authentication. Wallet-based authentication is supported only in the
Enterprise Edition (EE) of Oracle NoSQL Database. For more information on
wallet-based authentication, see Source and Sink Security.
The Community Edition (CE) supports password-file-based authentication only.
• Data Type: string
• Mandatory (Y/N): Y for a secure store
• Example:
"security" : "/home/user/client.credentials"
Example security file content for password file based authentication:
oracle.kv.password.noPrompt=true
oracle.kv.auth.username=admin
oracle.kv.auth.pwdfile.file=/home/nosql/login.passwd
oracle.kv.transport=ssl
oracle.kv.ssl.trustStore=/home/nosql/client.trust
oracle.kv.ssl.protocols=TLSv1.2
oracle.kv.ssl.hostnameVerifier=dnmatch(CN\=NoSQL)
Example security file content for wallet-based authentication:
oracle.kv.password.noPrompt=true
oracle.kv.auth.username=admin
oracle.kv.auth.wallet.dir=/home/nosql/login.wallet
oracle.kv.transport=ssl
oracle.kv.ssl.trustStore=/home/nosql/client.trust
oracle.kv.ssl.protocols=TLSv1.2
oracle.kv.ssl.hostnameVerifier=dnmatch(CN\=NoSQL)
requestTimeoutMs
• Purpose: Specifies the time to wait for each read operation from the store to complete.
This is provided in milliseconds. The default value is 5000. The value can be any positive
integer.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "requestTimeoutMs" : 5000
includeTTL
• Purpose: Specifies whether or not to include TTL metadata for table rows when
exporting Oracle NoSQL Database tables. If set to true, the TTL data for rows also gets
included in the data provided by the source. TTL is present in the _metadata JSON
object associated with each row. The expiration time for each row gets exported as the
number of milliseconds since the UNIX epoch (Jan 1st, 1970).
If you do not specify this parameter, it defaults to false.
Only the rows having a positive expiration value for TTL get included as part of the
exported rows. If a row does not expire, which means TTL=0, then its TTL metadata is
not included explicitly. For example, if ROW1 expires at 2021-10-19 00:00:00 and ROW2
does not expire, the exported data looks as follows:
//ROW1
{
"id" : 1,
"name" : "abc",
"_metadata" : {
"expiration" : 1634601600000
}
}
//ROW2
{
"id" : 2,
"name" : "xyz"
}
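The expiration value shown for ROW1 is ordinary epoch arithmetic. A minimal sketch, assuming the expiration timestamp is UTC:

```python
from datetime import datetime, timezone

def expiration_epoch_ms(dt):
    """Convert an expiration timestamp to milliseconds since the UNIX epoch."""
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1000)

# 2021-10-19 00:00:00 UTC, as in the ROW1 example above:
print(expiration_epoch_ms(datetime(2021, 10, 19)))  # 1634601600000
```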
{"id":20,"firstName":"Jane","lastName":"Smith","otherNames":[{"first":"Jane","last":"teacher"}],"age":25,"income":55000,"address":{"city":"San Jose","number":201,"phones":[{"area":608,"kind":"work","number":6538955},{"area":931,"kind":"home","number":9533341},{"area":931,"kind":"mobile","number":9533382}],"state":"CA","street":"Atlantic Ave","zip":95005},"connections":[40,75,63],"expenses":null}
{"id":10,"firstName":"John","lastName":"Smith","otherNames":[{"first":"Johny","last":"chef"}],"age":22,"income":45000,"address":{"city":"Santa Cruz","number":101,"phones":[{"area":408,"kind":"work","number":4538955},{"area":831,"kind":"home","number":7533341},{"area":831,"kind":"mobile","number":7533382}],"state":"CA","street":"Pacific Ave","zip":95008},"connections":[30,55,43],"expenses":null}
{"id":30,"firstName":"Adam","lastName":"Smith","otherNames":[{"first":"Adam","last":"handyman"}],"age":45,"income":75000,"address":{"city":"Houston","number":301,"phones":[{"area":618,"kind":"work","number":6618955},{"area":951,"kind":"home","number":9613341},{"area":981,"kind":"mobile","number":9613382}],"state":"TX","street":"Indian Ave","zip":95075},"connections":[60,45,73],"expenses":null}
The configuration file format for Oracle NoSQL Database Cloud Service as a source of
NoSQL Database Migrator is shown below.
"source" : {
"type" : "nosqldb_cloud",
"endpoint" : "<Oracle NoSQL Cloud Service Endpoint. You can either
specify the complete URL or the Region ID alone>",
"table" : "<table name>",
"compartment" : "<OCI compartment name or id>",
"credentials" : "</path/to/oci/credential/file>",
"credentialsProfile" : "<oci credentials profile name>",
"readUnitsPercent" : <table readunits percent>,
"requestTimeoutMs" : <timeout in milli seconds>,
"useInstancePrincipal" : <true|false>,
"includeTTL": <true|false>
}
Source Parameters
• type
• endpoint
• table
• compartment
• credentials
• credentialsProfile
• readUnitsPercent
• requestTimeoutMs
• useInstancePrincipal
• includeTTL
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "nosqldb_cloud"
endpoint
• Purpose: Specifies the Service Endpoint of the Oracle NoSQL Database Cloud Service.
You can either specify the complete URL or the Region ID alone. See Data Regions and
Associated Service URLs in Using Oracle NoSQL Database Cloud Service for the list of
data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://nosql.us-ashburn-1.oci.oraclecloud.com/"
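Either form identifies the same service. A small sketch of how the two example values relate (the URL pattern is taken from the example above; treat the helper itself as illustrative):

```python
def nosql_endpoint(endpoint):
    """Return a full service URL for either a region ID or a complete URL."""
    if endpoint.startswith(("http://", "https://")):
        return endpoint
    return f"https://nosql.{endpoint}.oci.oraclecloud.com/"

print(nosql_endpoint("us-ashburn-1"))
# https://nosql.us-ashburn-1.oci.oraclecloud.com/
```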
table
• Purpose: Name of the table from which to migrate the data.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– To specify a table "table" : "myTable"
– To specify a child table "table" : "mytable.child"
compartment
• Purpose: Specifies the name or OCID of the compartment in which the table resides.
If you do not provide any value, it defaults to the root compartment.
You can find your compartment's OCID from the Compartment Explorer window under
Governance in the OCI Cloud Console.
• Data Type: string
• Mandatory (Y/N): Yes, if the table is not in the root compartment of the tenancy
OR when the useInstancePrincipal parameter is set to true.
Note:
If the useInstancePrincipal parameter is set to true, the compartment
parameter must specify the compartment OCID, not the compartment name.
• Example:
– Compartment name
"compartment" : "mycompartment"
– Compartment name qualified with its parent compartment
"compartment" : "parent.childcompartment"
– No value provided. Defaults to the root compartment.
"compartment": ""
– Compartment OCID
"compartment" : "ocid1.tenancy.oc1...4ksd"
credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config
See Example Configuration for an example of the credentials file.
Note:
Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at the
same time.
credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle
NoSQL Database Cloud Service. User account credentials are referred to as a
'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.
Note:
This parameter is valid ONLY if the credentials parameter is specified.
readUnitsPercent
• Purpose: Percentage of table read units to be used while migrating the NoSQL table.
The default value is 90. The valid range is any integer between 1 and 100.
The amount of time required to migrate data is directly proportional to this attribute. It is
better to increase the read throughput of the table for the migration activity; you can
reduce the read throughput after the migration process completes. To learn the daily
limits on throughput changes, see Cloud Limits in Using Oracle NoSQL Database Cloud
Service.
See Troubleshooting the Oracle NoSQL Database Migrator to learn how to use this
attribute to improve the data migration speed.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "readUnitsPercent" : 90
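As a back-of-the-envelope sketch, the read units available to the migrator follow directly from the table's provisioned read units and this percentage (the 1000-unit table below is hypothetical):

```python
def effective_read_units(table_read_units, read_units_percent=90):
    """Read units the migrator may consume under readUnitsPercent."""
    return table_read_units * read_units_percent // 100

print(effective_read_units(1000))      # 900, with the default of 90
print(effective_read_units(1000, 50))  # 500
```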
requestTimeoutMs
• Purpose: Specifies the time to wait for each read operation from the source to complete.
This is provided in milliseconds. The default value is 5000. The value can be any positive
integer.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "requestTimeoutMs" : 5000
useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance principal
authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on the Instance Principal authentication method, see Source and Sink Security.
If not specified, it defaults to false.
includeTTL
• Purpose: Specifies whether or not to include TTL metadata for table rows when
exporting Oracle NoSQL Database Cloud Service tables. If set to true, the TTL
data for rows also gets included in the data provided by the source. TTL is present
in the _metadata JSON object associated with each row. The expiration time for
each row gets exported as the number of milliseconds since the UNIX epoch (Jan
1st, 1970).
If you do not specify this parameter, it defaults to false.
Only the rows having a positive expiration value for TTL get included as part of the
exported rows. If a row does not expire, which means TTL=0, then its TTL
metadata is not included explicitly. For example, if ROW1 expires at 2021-10-19
00:00:00 and ROW2 does not expire, the exported data looks as follows:
//ROW1
{
"id" : 1,
"name" : "abc",
"_metadata" : {
"expiration" : 1634601600000
}
}
//ROW2
{
"id" : 2,
"name" : "xyz"
}
You can migrate a CSV file or a directory containing the CSV data by specifying the file name
or directory in the source configuration template.
The source configuration template is as follows:
"source" : {
"type" : "file",
"format" : "csv",
"dataPath": "</path/to/a/csv/file-or-directory>",
"hasHeader" : <true | false>,
"columns" : ["column1", "column2", ....],
"csvOptions" : {
"trim" : <true | false>,
"encoding" : "<character set encoding>"
}
}
Source Parameters
• type
• format
• dataPath
• hasHeader
• columns
• csvOptions
• csvOptions.trim
• csvOptions.encoding
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"
format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "csv"
dataPath
• Purpose: Specifies the absolute path to a file or directory containing the CSV data
for migration. If you specify a directory, NoSQL Database Migrator imports all the
files with the .csv or .CSV extension in that directory. All the CSV files are copied
into a single table, but not in any particular order.
CSV files must conform to the RFC4180 standard. You must ensure that the data in
each CSV file matches the NoSQL Database table schema defined in the sink table.
Sub-directories are not supported.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Specifying a CSV file
"dataPath" : "/home/user/sample.csv"
– Specifying a directory
"dataPath" : "/home/user"
Note:
The CSV files must contain only scalar values. Importing CSV files
containing complex types such as MAP, RECORD, ARRAY, and JSON is not
supported. The NoSQL Database Migrator tool does not check the
correctness of the data in the input CSV file. It supports importing CSV data
that conforms to the RFC4180 format; files that do not conform to that
standard may be copied incorrectly or cause an error. If the input data is
corrupted, the tool does not parse the CSV records. If any errors are
encountered during migration, the tool logs information about the failed
input records for debugging purposes. For more details, see Logging
Migrator Progress in Using Oracle NoSQL Data Migrator.
hasHeader
• Purpose: Specifies if the CSV file has a header or not. If this is set to true, the
first line is ignored. If it is set to false, the first line is considered a CSV record.
The default value is false.
• Data Type: Boolean
• Mandatory (Y/N): N
• Example: "hasHeader" : false
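The effect of hasHeader can be sketched in a few lines (illustrative only; this is not the Migrator's own parser):

```python
import csv
import io

def csv_records(text, has_header=False):
    """Return CSV records, skipping the first line when hasHeader is true."""
    rows = list(csv.reader(io.StringIO(text)))
    return rows[1:] if has_header else rows

print(csv_records("id,name\n1,abc\n", has_header=True))   # [['1', 'abc']]
print(csv_records("id,name\n1,abc\n", has_header=False))  # header kept as a record
```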
columns
• Purpose: Specifies the list of NoSQL Database table column names. The order of the
column names indicates the mapping of the CSV file fields with corresponding NoSQL
Database table columns. If the order of the input CSV file columns does not match the
existing or newly created NoSQL Database table columns, you can map the ordering
using this parameter. Also, when importing into a table that has an Identity Column, you
can skip the Identity column name in the columns configuration.
Note:
– If the NoSQL Database table has additional columns that are not available
in the CSV file, the values of the missing columns are updated with the
default value as defined in the NoSQL Database table. If a default value is
not provided, a Null value is inserted during migration. For more information
on default values, see Data Type Definitions section in the SQL Reference
Guide.
– If the CSV file has additional columns that are not defined in the NoSQL
Database table, the additional column information is ignored.
– If any value in the CSV record is empty, it is set to the default value of the
corresponding column in the NoSQL Database table. If a default value is
not provided, a Null value is inserted during migration.
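The mapping and default-value rules above can be sketched as follows. The table, its columns, and the defaults are hypothetical; None stands in for a SQL NULL:

```python
import csv
import io

def map_csv_rows(csv_text, columns, defaults):
    """Map CSV fields to table columns by position.

    `columns` lists the table column for each CSV field, in order.
    Table columns missing from the CSV, and empty CSV values, fall
    back to the table default (None models a NULL).
    """
    rows = []
    for fields in csv.reader(io.StringIO(csv_text)):
        row = dict(defaults)  # every table column starts at its default
        for name, value in zip(columns, fields):
            if value != "":
                row[name] = value
        rows.append(row)
    return rows
```

For example, mapping the record 1,abc with columns ["id", "name"] into a table that also has an age column defaulting to 18 yields {"id": "1", "name": "abc", "age": 18}.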
csvOptions
• Purpose: Specifies the formatting options for a CSV file. Provide the character set
encoding format of the CSV file and choose whether or not to trim the blank spaces.
• Data Type: Object
• Mandatory (Y/N): N
csvOptions.trim
• Purpose: Specifies if the leading and trailing blanks of a CSV field value must be
trimmed. The default value is false.
• Data Type: Boolean
• Mandatory (Y/N): N
• Example: "trim" : true
csvOptions.encoding
• Purpose: Specifies the character set to decode the CSV file. The default value is UTF-8.
The supported character sets are US-ASCII, ISO-8859-1, UTF-8, and UTF-16.
• Data Type: string
• Mandatory (Y/N): N
• Example: "encoding" : "UTF-8"
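A sketch of what the two csvOptions settings do to the raw bytes of a CSV file (illustrative; the Migrator applies these internally):

```python
import csv
import io

def decode_csv(data, encoding="UTF-8", trim=False):
    """Decode CSV bytes per csvOptions.encoding, optionally trimming
    leading and trailing blanks from each field (csvOptions.trim)."""
    records = []
    for fields in csv.reader(io.StringIO(data.decode(encoding))):
        records.append([f.strip() for f in fields] if trim else fields)
    return records

print(decode_csv(b"1, abc \n", trim=True))  # [['1', 'abc']]
```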
You can migrate a CSV file in the OCI Object Storage bucket by specifying the name
of the bucket in the source configuration template.
The source configuration template is as follows:
Note:
The valid sink types for the OCI Object Storage source type are nosqldb and
nosqldb_cloud.
"source" : {
"type" : "object_storage_oci",
"format" : "csv",
"endpoint" : "<OCI Object Storage service endpoint URL or region
ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>,
"hasHeader" : <true | false>,
"columns" : ["column1", "column2", ....],
"csvOptions" : {
"trim" : <true | false>,
"encoding" : "<character set encoding>"
}
}
Source Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• credentials
• credentialsProfile
• useInstancePrincipal
• hasHeader
• columns
• csvOptions
• csvOptions.trim
• csvOptions.encoding
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"
format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "csv"
endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data Regions and
Associated Service URLs in Using Oracle NoSQL Database Cloud Service for the list of
data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"
namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an optional
parameter. If you don't specify this parameter, the default namespace of the tenancy is
used.
bucket
• Purpose: Specifies the name of the bucket that contains the source CSV files.
The NoSQL Database Migrator imports all the files with the .csv or .CSV extension
from the bucket and copies them into a single table, in the same order.
Ensure that the required bucket already exists in the OCI Object Storage instance
and has read permissions.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"
Note:
The CSV files must contain only scalar values. Importing CSV files
containing complex types such as MAP, RECORD, ARRAY, and JSON is
not supported. The NoSQL Database Migrator tool does not check the
correctness of the data in the input CSV file. It supports importing CSV
data that conforms to the RFC4180 format; files that do not conform to
that standard may be copied incorrectly or cause an error. If the input
data is corrupted, the tool does not parse the CSV records. If any errors
are encountered during migration, the tool logs information about the
failed input records for debugging purposes. For more details, see
Logging Migrator Progress in Using Oracle NoSQL Data Migrator.
prefix
• Purpose: Used for filtering the objects that are being migrated from the bucket. All
the objects with the given prefix present in the bucket are migrated. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If you do not provide any value, no filter is applied and all the objects present in
the bucket are migrated.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "my_table/Data/000000.json" (migrates only 000000.json)
2. "prefix" : "my_table/Data" (migrates all the objects with the prefix my_table/Data)
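Prefix filtering is a plain string-prefix match on object names, which can be sketched as:

```python
def filter_objects(object_names, prefix=""):
    """Keep only the bucket objects whose name starts with the prefix."""
    return [name for name in object_names if name.startswith(prefix)]

names = ["my_table/Data/000000.json", "my_table/Data/000001.json", "logs/a.csv"]
print(filter_objects(names, "my_table/Data"))  # the two Data objects
print(filter_objects(names))                   # empty prefix keeps everything
```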
credentials
• Purpose: Absolute path to a file containing OCI credentials.
Note:
You must specify either credentials or useInstancePrincipal parameters in
the configuration template.
credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle NoSQL
Database Cloud Service. User account credentials are referred to as a profile.
If you do not specify this value, it defaults to the DEFAULT profile.
Note:
This parameter is valid only if the credentials parameter is specified.
useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Database Migrator tool uses instance
principal authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on the Instance Principal authentication method, see Source and Sink Security.
The default value is false.
hasHeader
• Purpose: Specifies if the CSV file has a header or not. If this is set to true, the
first line is ignored. If it is set to false, the first line is considered a CSV record.
The default value is false.
• Data Type: Boolean
• Mandatory (Y/N): N
• Example: "hasHeader" : false
columns
• Purpose: Specifies the list of NoSQL Database table column names. The order of
the column names indicates the mapping of the CSV file fields with corresponding
NoSQL Database table columns. If the order of the input CSV file columns does
not match the existing or newly created NoSQL Database table columns, you can
map the ordering using this parameter. Also, when importing into a table that has
an Identity Column, you can skip the Identity column name in the columns
configuration.
Note:
– If the NoSQL Database table has additional columns that are not
available in the CSV file, the values of the missing columns are
updated with the default value as defined in the NoSQL Database
table. If a default value is not provided, a Null value is inserted during
migration. For more information on default values, see Data Type
Definitions section in the SQL Reference Guide.
– If the CSV file has additional columns that are not defined in the
NoSQL Database table, the additional column information is ignored.
– If any value in the CSV record is empty, it is set to the default value
of the corresponding column in the NoSQL Database table. If a
default value is not provided, a Null value is inserted during
migration.
csvOptions
• Purpose: Specifies the formatting options for a CSV file. Provide the character set
encoding format of the CSV file and choose whether or not to trim the blank spaces.
• Data Type: Object
• Mandatory (Y/N): N
csvOptions.trim
• Purpose: Specifies if the leading and trailing blanks of a CSV field value must be
trimmed. The default value is false.
• Data Type: Boolean
• Mandatory (Y/N): N
• Example: "trim" : true
csvOptions.encoding
• Purpose: Specifies the character set to decode the CSV file. The default value is UTF-8.
The supported character sets are US-ASCII, ISO-8859-1, UTF-8, and UTF-16.
• Data Type: String
• Mandatory (Y/N): N
• Example: "encoding" : "UTF-8"
Topics
• JSON as the File Sink
The sink configuration template for the Oracle NoSQL Database Migrator to copy the
data from a valid source to a JSON file as the sink.
• Parquet File
The sink configuration template for the Oracle NoSQL Database Migrator to copy the
data from a valid source to a Parquet file as the sink.
• JSON File in OCI Object Storage Bucket
The sink configuration template for the Oracle NoSQL Database Migrator to copy the
data from a valid source to a JSON file in the OCI Object Storage bucket as the sink.
• Parquet File in OCI Object Storage Bucket
The sink configuration template for the Oracle NoSQL Database Migrator to copy
the data from a valid source to a Parquet file in the OCI Object Storage bucket as
the sink.
• Oracle NoSQL Database
The sink configuration template for the Oracle NoSQL Database Migrator to copy
the data from a valid source to Oracle NoSQL Database tables as the sink.
• Oracle NoSQL Database Cloud Service
The sink configuration template for the Oracle NoSQL Database Migrator to copy
the data from a valid source to Oracle NoSQL Database Cloud Service tables as
the sink.
JSON as the File Sink
The configuration file format for the JSON file as a sink of NoSQL Database Migrator is
shown below.
"sink" : {
"type" : "file",
"format" : "json",
"dataPath": "</path/to/a/file>",
"schemaPath" : "<path/to/a/file>",
"pretty" : <true|false>,
"useMultiFiles" : <true|false>,
"chunkSize" : <size in MB>
}
Sink Parameters
• type
• format
• dataPath
• schemaPath
• pretty
• useMultiFiles
• chunkSize
type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"
format
• Purpose: Specifies the sink format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "json"
dataPath
• Purpose: Specifies the absolute path to a file where the source data will be copied in the
JSON format.
If the file does not exist in the specified data path, the NoSQL Database Migrator creates
it. If it exists already, the NoSQL Database Migrator will overwrite its contents with the
source data.
You must ensure that the parent directory for the file specified in the data path is valid.
Note:
If the useMultiFiles parameter is set to true, specify the path to a directory;
otherwise, specify the path to a file.
schemaPath
• Purpose: Specifies the absolute path to write schema information provided by the
source.
If this value is not defined, the source schema information will not be migrated to the sink.
If this value is specified, the migrator utility writes the schema of the source table into the
file specified here.
The schema information is written as one DDL command per line in this file. If the file
does not exist in the specified path, NoSQL Database Migrator creates it. If it exists
already, NoSQL Database Migrator overwrites its contents with the schema information.
You must ensure that the parent directory for the file specified in the path is valid.
• Data Type: string
• Mandatory (Y/N): N
• Example: "schemaPath" : "/home/user/schema_file"
pretty
• Purpose: Specifies whether to beautify the JSON output to increase readability or not.
If not specified, it defaults to false.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "pretty" : true
useMultiFiles
• Purpose: Specifies whether or not to split the NoSQL table data into multiple files
when migrating source data to a file.
If not specified, it defaults to false.
If set to true, when migrating source data to a file, the NoSQL table data is split
into multiple smaller files. For example, <chunk>.json, where
chunk=000000,000001,000002, and so forth.
dataPath
|--000000.json
|--000001.json
chunkSize
• Purpose: Specifies the maximum size of a "chunk" of table data to be stored at
the sink. During migration, a table is split into chunkSize chunks and each chunk is
written as a separate file to the sink. When the source data being migrated
exceeds this size, a new file is created.
If not specified, defaults to 32MB. The valid value is an integer between 1 and 1024.
Note:
This parameter is applicable ONLY when the useMultiFiles
parameter is set to true.
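The relationship between table size, chunkSize, and the generated file names can be sketched as follows (sizes in MB; the naming pattern follows the <chunk>.json example above):

```python
import math

def chunk_file_names(total_mb, chunk_size_mb=32):
    """File names produced when useMultiFiles splits table data into chunks."""
    count = max(1, math.ceil(total_mb / chunk_size_mb))
    return [f"{i:06d}.json" for i in range(count)]

print(chunk_file_names(70))  # ['000000.json', '000001.json', '000002.json']
```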
Parquet File
The configuration file format for Parquet File as a sink of NoSQL Database Migrator is
shown below.
"sink" : {
"type" : "file",
"format" : "parquet",
"dataPath": "</path/to/a/dir>",
"chunkSize" : <size in MB>,
"compression": "<SNAPPY|GZIP|NONE>",
"parquetOptions": {
"useLogicalJson": <true|false>,
"useLogicalEnum": <true|false>,
"useLogicalUUID": <true|false>,
"truncateDoubleSpecials": <true|false>
}
}
Sink Parameters
• type
• format
• dataPath
• chunkSize
• compression
• parquetOptions
• parquetOptions.useLogicalJson
• parquetOptions.useLogicalEnum
• parquetOptions.useLogicalUUID
• parquetOptions.truncateDoubleSpecials
type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"
format
• Purpose: Specifies the sink format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "parquet"
dataPath
• Purpose: Specifies the path to a directory to use for storing the migrated NoSQL table
data. Ensure that the directory already exists and has read and write permissions.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "dataPath" : "/home/user/migrator/my_table"
chunkSize
• Purpose: Specifies the maximum size of a "chunk" of table data to be stored at the sink.
During migration, a table is split into chunkSize chunks and each chunk is written as a
separate file to the sink. When the source data being migrated exceeds this size, a
new file is created.
If not specified, defaults to 32MB. The valid value is an integer between 1 and 1024.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "chunkSize" : 40
compression
• Purpose: Specifies the compression type to use to compress the Parquet data.
Valid values are SNAPPY, GZIP, and NONE.
If not specified, it defaults to SNAPPY.
• Data Type: string
• Mandatory (Y/N): N
• Example: "compression" : "GZIP"
parquetOptions
• Purpose: Specifies the options to select Parquet logical types for NoSQL ENUM,
JSON, and UUID columns.
If you do not specify this parameter, the NoSQL Database Migrator writes the data
of ENUM, JSON, and UUID columns as String.
• Data Type: object
• Mandatory (Y/N): N
parquetOptions.useLogicalJson
• Purpose: Specifies whether or not to write NoSQL JSON column data as Parquet
logical JSON type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL JSON
column data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalJson" : true
parquetOptions.useLogicalEnum
• Purpose: Specifies whether or not to write NoSQL ENUM column data as Parquet
logical ENUM type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL ENUM
column data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalEnum" : true
parquetOptions.useLogicalUUID
• Purpose: Specifies whether or not to write NoSQL UUID column data as Parquet logical
UUID type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL UUID column
data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalUUID" : true
parquetOptions.truncateDoubleSpecials
• Purpose: Specifies whether or not to truncate the double +Infinity, -Infinity, and NaN
values.
By default, it is set to false. If set to true,
– +Infinity is truncated to Double.MAX_VALUE.
– -Infinity is truncated to -Double.MAX_VALUE.
– NaN is truncated to 9.9999999999999990E307.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "truncateDoubleSpecials" : true
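The truncation rules can be written out directly. A sketch using Python floats, with sys.float_info.max standing in for Java's Double.MAX_VALUE:

```python
import math
import sys

def truncate_double_specials(x):
    """Apply the documented truncation of double special values."""
    if math.isnan(x):
        return 9.9999999999999990e307
    if x == math.inf:
        return sys.float_info.max    # Double.MAX_VALUE
    if x == -math.inf:
        return -sys.float_info.max   # -Double.MAX_VALUE
    return x

print(truncate_double_specials(float("-inf")) == -sys.float_info.max)  # True
```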
JSON File in OCI Object Storage Bucket
The configuration file format for the JSON file in the OCI Object Storage bucket as a sink of
NoSQL Database Migrator is shown below.
Note:
The valid source types for the OCI Object Storage sink type are nosqldb and
nosqldb_cloud.
"sink" : {
"type" : "object_storage_oci",
"format" : "json",
"endpoint" : "<OCI Object Storage service endpoint URL or region ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"chunkSize" : <size in MB>,
"pretty" : <true|false>,
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>
}
Sink Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• chunkSize
• pretty
• credentials
• credentialsProfile
• useInstancePrincipal
type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"
format
• Purpose: Specifies the sink format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "json"
endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data
Regions and Associated Service URLs in Using Oracle NoSQL Database Cloud
Service for the list of data regions supported for Oracle NoSQL Database Cloud
Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"
namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an optional
parameter. If you don't specify this parameter, the default namespace of the tenancy is
used.
• Data Type: string
• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"
bucket
• Purpose: Specifies the bucket name to use for storing the migrated data. Ensure that the
required bucket already exists in the OCI Object Storage instance and has write
permissions.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"
prefix
• Purpose: Specifies the prefix that is added to the object name when objects are created
in the bucket. The prefix acts as a logical container or directory for storing data. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If not specified, the table name from the source is used as the prefix. If any object with
the same name already exists in the bucket, it is overwritten.
Schema is migrated to the <prefix>/Schema/schema.ddl file and source data is
migrated to the <prefix>/Data/<chunk>.json file(s), where chunk=000000.json,
000001.json, and so forth.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "my_export"
2. "prefix" : "my_export/2021-04-05/"
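The resulting object layout under a prefix can be sketched as:

```python
def sink_object_names(prefix, num_chunks):
    """Object names created in the bucket for the schema and data chunks."""
    names = [f"{prefix}/Schema/schema.ddl"]
    names += [f"{prefix}/Data/{i:06d}.json" for i in range(num_chunks)]
    return names

print(sink_object_names("my_export", 2))
# ['my_export/Schema/schema.ddl', 'my_export/Data/000000.json',
#  'my_export/Data/000001.json']
```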
chunkSize
• Purpose: Specifies the maximum size of a "chunk" of table data to be stored at the sink.
During migration, a table is split into chunkSize chunks and each chunk is written as a
separate file to the sink. When the source data being migrated exceeds this size, a new
file is created.
If not specified, defaults to 32MB. The valid value is an integer between 1 and 1024.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "chunkSize" : 40
pretty
• Purpose: Specifies whether to beautify the JSON output to increase readability or
not.
If not specified, it defaults to false.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "pretty" : true
credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config
See Example Configuration for an example of the credentials file.
Note:
Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at the
same time.
credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle
NoSQL Database Cloud Service. User account credentials are referred to as a
'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.
Note:
This parameter is valid ONLY if the credentials parameter is
specified.
• Example: "credentialsProfile" : "ADMIN_USER"
useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance principal
authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on Instance Principal authentication method, see Source and Sink Security .
If not specified, it defaults to false.
Parquet File in OCI Object Storage Bucket
The configuration file format for the Parquet file in the OCI Object Storage bucket as a sink of
NoSQL Database Migrator is shown below.
Note:
The valid source types for the OCI Object Storage sink type are nosqldb and
nosqldb_cloud.
"sink" : {
"type" : "object_storage_oci",
"format" : "parquet",
"endpoint" : "<OCI Object Storage service endpoint URL or region ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"chunkSize" : <size in MB>,
"compression": "<SNAPPY|GZIP|NONE>",
"parquetOptions": {
"useLogicalJson": <true|false>,
"useLogicalEnum": <true|false>,
"useLogicalUUID": <true|false>,
"truncateDoubleSpecials": <true|false>
},
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>
}
Sink Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• chunkSize
• compression
• parquetOptions
• parquetOptions.useLogicalJson
• parquetOptions.useLogicalEnum
• parquetOptions.useLogicalUUID
• parquetOptions.truncateDoubleSpecials
• credentials
• credentialsProfile
• useInstancePrincipal
type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"
format
• Purpose: Specifies the sink format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "parquet"
endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data
Regions and Associated Service URLs in Using Oracle NoSQL Database Cloud
Service for the list of data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-ashburn-1.oraclecloud.com"
namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an optional
parameter. If you don't specify this parameter, the default namespace of the tenancy is
used.
• Data Type: string
• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"
bucket
• Purpose: Specifies the bucket name to use for storing the migrated data. Ensure that the
required bucket already exists in the OCI Object Storage instance and has write
permissions.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"
prefix
• Purpose: Specifies the prefix that is added to the object name when objects are created
in the bucket. The prefix acts as a logical container or directory for storing data. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If not specified, the table name from the source is used as the prefix. If any object with
the same name already exists in the bucket, it is overwritten.
Source data is migrated to the <prefix>/Data/<chunk>.parquet file(s), where
chunk=000000.parquet, 000001.parquet, and so forth.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "my_export"
2. "prefix" : "my_export/2021-04-05/"
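As an illustration of the object layout described above, the following sketch builds the chunk object names for a given prefix. The helper function is hypothetical (the Migrator performs this naming internally); the six-digit zero padding matches the 000000.parquet, 000001.parquet naming shown.

```python
def chunk_object_name(prefix: str, chunk_index: int) -> str:
    # Build the object name for a migrated chunk, following the
    # <prefix>/Data/<chunk>.parquet layout described above. The
    # helper itself is illustrative, not part of the Migrator.
    return f"{prefix.rstrip('/')}/Data/{chunk_index:06d}.parquet"

# First two chunks written under the prefix "my_export":
names = [chunk_object_name("my_export", i) for i in range(2)]
```

With the prefix "my_export", the first chunk lands at my_export/Data/000000.parquet.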
chunkSize
• Purpose: Specifies the maximum size of a "chunk" of table data to be stored at the sink.
During migration, the table data is split into chunks of chunkSize, and each chunk is written as a
separate file to the sink. When the source data being migrated exceeds this size, a new
file is created.
If not specified, it defaults to 32MB. The valid value is an integer between 1 and 1024.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "chunkSize" : 40
compression
• Purpose: Specifies the compression type to use to compress the Parquet data.
Valid values are SNAPPY, GZIP, and NONE.
If not specified, it defaults to SNAPPY.
• Data Type: string
• Mandatory (Y/N): N
• Example: "compression" : "GZIP"
parquetOptions
• Purpose: Specifies the options to select Parquet logical types for NoSQL ENUM,
JSON, and UUID columns.
If you do not specify this parameter, the NoSQL Database Migrator writes the data
of ENUM, JSON, and UUID columns as String.
• Data Type: object
• Mandatory (Y/N): N
parquetOptions.useLogicalJson
• Purpose: Specifies whether or not to write NoSQL JSON column data as Parquet
logical JSON type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL JSON
column data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalJson" : true
parquetOptions.useLogicalEnum
• Purpose: Specifies whether or not to write NoSQL ENUM column data as Parquet
logical ENUM type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL ENUM
column data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalEnum" : true
parquetOptions.useLogicalUUID
• Purpose: Specifies whether or not to write NoSQL UUID column data as Parquet
logical UUID type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL UUID column
data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalUUID" : true
parquetOptions.truncateDoubleSpecials
• Purpose: Specifies whether or not to truncate the double +Infinity, -Infinity, and NaN
values.
By default, it is set to false. If set to true,
– +Infinity is truncated to Double.MAX_VALUE.
– -Infinity is truncated to -Double.MAX_VALUE.
– NaN is truncated to 9.9999999999999990E307.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "truncateDoubleSpecials" : true
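To make the truncation rules concrete, here is a small sketch of the mapping in Python. The function name is an assumption for illustration; the actual conversion happens inside the NoSQL Database Migrator when it writes Parquet data.

```python
import math
import sys

def truncate_double_special(value: float) -> float:
    # Illustrative mapping of the truncation rules listed above.
    if value == math.inf:
        return sys.float_info.max          # +Infinity -> Double.MAX_VALUE
    if value == -math.inf:
        return -sys.float_info.max         # -Infinity -> -Double.MAX_VALUE
    if math.isnan(value):
        return 9.9999999999999990e307      # NaN -> documented replacement
    return value                           # ordinary doubles pass through
```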
credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config.
See Example Configuration for an example of the credentials file.
Note:
Even though the credentials and useInstancePrincipal parameters
are not mandatory individually, one of these parameters MUST be specified.
Additionally, these two parameters are mutually exclusive. Specify ONLY one of
these parameters, but not both at the same time.
credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle NoSQL
Database Cloud Service. User account credentials are referred to as a 'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.
Note:
This parameter is valid ONLY if the credentials parameter is
specified.
useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance
principal authentication to connect to Oracle NoSQL Database Cloud Service. For
more information on the Instance Principal authentication method, see Source and
Sink Security.
If not specified, it defaults to false.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true
"sink" : {
"type": "nosqldb",
"table" : "<fully qualified table name>",
"schemaInfo" : {
"schemaPath" : "</path/to/a/schema/file>",
"defaultSchema" : <true|false>,
"useSourceSchema" : <true|false>,
"DDBPartitionKey" : <"name:type">,
"DDBSortKey" : "<name:type>"
},
"overwrite" : <true|false>,
"storeName" : "<store name>",
"helperHosts" : ["hostname1:port1","hostname2:port2",...],
"security" : "</path/to/store/credentials/file>",
"requestTimeoutMs" : <timeout in milli seconds>,
"includeTTL": <true|false>,
"ttlRelativeDate": "<date-to-use in UTC>"
}
Sink Parameters
• type
• table
• schemaInfo
• schemaInfo.schemaPath
• schemaInfo.defaultSchema
• schemaInfo.useSourceSchema
• schemaInfo.DDBPartitionKey
• schemaInfo.DDBSortKey
• overwrite
• storeName
• helperHosts
• security
• requestTimeoutMs
• includeTTL
• ttlRelativeDate
type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "nosqldb"
table
• Purpose: Fully qualified name of the table to which to migrate the data.
Format: [namespace_name:]<table_name>
If the table is in the DEFAULT namespace, you can omit the namespace_name. The table
must exist in the store during the migration, and its schema must match with the source
data.
If the table is not available in the sink, you can use the schemaInfo parameter to
instruct the NoSQL Database Migrator to create the table in the sink.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– With the DEFAULT namespace "table" :"mytable"
– With a non-default namespace "table" : "mynamespace:mytable"
– To specify a child table "table" : "mytable.child"
Note:
You can migrate the child tables from a valid data source to Oracle
NoSQL Database. The NoSQL Database Migrator copies only a
single table in each execution. Ensure that the parent table is
migrated before the child table.
schemaInfo
• Purpose: Specifies the schema for the data being migrated. If this is not specified,
the NoSQL Database Migrator assumes that the table already exists in the sink's store.
• Data Type: Object
• Mandatory (Y/N): N
schemaInfo.schemaPath
• Purpose: Specifies the absolute path to a file containing DDL statements for the
NoSQL table.
The NoSQL Database Migrator executes the DDL commands listed in this file
before migrating the data.
The NoSQL Database Migrator does not support more than one DDL statement
per line in the schemaPath file.
• Data Type: string
• Mandatory: Y, only when the schemaInfo.defaultSchema parameter is set to false.
schemaInfo.defaultSchema
• Purpose: Setting this parameter to true instructs the NoSQL Database Migrator to
create a table with default schema. The default schema is defined by the migrator
itself. For more information about default schema definitions, see Default Schema
in Using Oracle NoSQL Data Migrator .
• Data Type: boolean
• Mandatory: N
Note:
defaultSchema and schemaPath are mutually exclusive
• Example:
– With Default Schema:
"schemaInfo" : {
"defaultSchema" : true
}
– With schemaPath:
"schemaInfo" : {
"schemaPath" : "<complete/path/to/the/schema/definition/file>"
}
schemaInfo.useSourceSchema
• Purpose: Specifies whether or not the sink uses the table schema definition provided by
the source when migrating NoSQL tables.
• Data Type: boolean
• Mandatory (Y/N): N
Note:
defaultSchema, schemaPath, and useSourceSchema parameters are
mutually exclusive. Specify ONLY one of these parameters.
• Example:
– With Default Schema:
"schemaInfo" : {
"defaultSchema" : true
}
– With schemaPath:
"schemaInfo" : {
"schemaPath" : "<complete/path/to/the/schema/definition/file>"
}
– With useSourceSchema:
"schemaInfo" : {
"useSourceSchema" : true
}
schemaInfo.DDBPartitionKey
• Purpose: Specifies the DynamoDB partition key and the corresponding Oracle
NoSQL Database type to be used in the sink Oracle NoSQL Database table. This
key will be used as a NoSQL DB table shard key. This is applicable only when
defaultSchema is set to true and the source format is dynamodb_json. See
Mapping of DynamoDB types to Oracle NoSQL types for more details.
• Mandatory: Yes if defaultSchema is true and the source is dynamodb_json.
• Example: "DDBPartitionKey" : "PersonID:INTEGER"
Note:
If the partition key contains a dash (-) or a dot (.), the Migrator replaces it with
an underscore (_), because NoSQL column names do not support dots and dashes.
schemaInfo.DDBSortKey
• Purpose: Specifies the DynamoDB sort key and its corresponding Oracle NoSQL
Database type to be used in the target Oracle NoSQL Database table. If the DynamoDB
table being imported does not have a sort key, do not set this parameter. This
key will be used as the non-shard portion of the primary key in the NoSQL DB table.
This is applicable only when defaultSchema is set to true and the source is
dynamodb_json. See Mapping of DynamoDB types to Oracle NoSQL types for
more details.
• Mandatory: No
• Example: "DDBSortKey" : "Skey:STRING"
Note:
If the sort key contains a dash (-) or a dot (.), the Migrator replaces it with
an underscore (_), because NoSQL column names do not support dots and dashes.
overwrite
• Purpose: Indicates the behavior of NoSQL Database Migrator when the record
being migrated from the source is already present in the sink.
If the value is set to false, when migrating tables the NoSQL Database Migrator
skips those records for which the same primary key already exists in the sink.
If the value is set to true, when migrating tables the NoSQL Database Migrator
overwrites those records for which the same primary key already exists in the sink.
If not specified, it defaults to true.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "overwrite" : false
storeName
• Purpose: Name of the Oracle NoSQL Database store.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "storeName" : "kvstore"
helperHosts
• Purpose: A list of host and registry port pairs in the hostname:port format. Delimit each
item in the list using a comma. You must specify at least one helper host.
• Data Type: array of strings
• Mandatory (Y/N): Y
• Example: "helperHosts" : ["localhost:5000","localhost:6000"]
security
• Purpose:
If your store is a secure store, provide the absolute path to the security login file that
contains your store credentials. See Configuring Security with Remote Access in
Administrator's Guide to know more about the security login file.
You can use either password file based authentication or wallet based authentication.
However, the wallet based authentication is supported only in the Enterprise Edition (EE)
of Oracle NoSQL Database. For more information on wallet-based authentication, see
Source and Sink Security .
• Data Type: string
• Mandatory (Y/N): Y for a secure store
• Example:
"security" : "/home/user/client.credentials"
Example security file content for password file based authentication:
oracle.kv.password.noPrompt=true
oracle.kv.auth.username=admin
oracle.kv.auth.pwdfile.file=/home/nosql/login.passwd
oracle.kv.transport=ssl
oracle.kv.ssl.trustStore=/home/nosql/client.trust
oracle.kv.ssl.protocols=TLSv1.2
oracle.kv.ssl.hostnameVerifier=dnmatch(CN\=NoSQL)
Example security file content for wallet based authentication:
oracle.kv.password.noPrompt=true
oracle.kv.auth.username=admin
oracle.kv.auth.wallet.dir=/home/nosql/login.wallet
oracle.kv.transport=ssl
oracle.kv.ssl.trustStore=/home/nosql/client.trust
oracle.kv.ssl.protocols=TLSv1.2
oracle.kv.ssl.hostnameVerifier=dnmatch(CN\=NoSQL)
requestTimeoutMs
• Purpose: Specifies the time to wait for each write operation in the sink to
complete. This is provided in milliseconds. The default value is 5000. The value
can be any positive integer.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "requestTimeoutMs" : 5000
includeTTL
• Purpose: Specifies whether or not to include TTL metadata for table rows
provided by the source when importing Oracle NoSQL Database tables.
If you do not specify this parameter, it defaults to false. In that case, the NoSQL
Database Migrator does not include TTL metadata for table rows provided by the
source when importing Oracle NoSQL Database tables.
If set to true, the NoSQL Database Migrator tool performs the following checks on
the TTL metadata when importing a table row:
– If you import a row that does not have _metadata definition, the NoSQL
Database Migrator tool sets the TTL to 0, which means the row never expires.
– If you import a row that has _metadata definition, the NoSQL Database
Migrator tool compares the TTL value against a Reference Time when a row
gets imported. If the row has already expired relative to the Reference Time,
then it is skipped. If the row has not expired, then it is imported along with the
TTL value. By default, the Reference Time of import operation is the current
time in milliseconds, obtained from System.currentTimeMillis(), of the machine
where the NoSQL Database Migrator tool is running. But you can also set a
custom Reference Time using the ttlRelativeDate configuration parameter if
you want to extend the expiration time and import rows that would otherwise
expire immediately.
The formula to calculate the expiration time of a row is as follows:
Note:
Since Oracle NoSQL TTL boundaries are in hours and days, in some
cases, the TTL of the imported row might get adjusted to the nearest
hour or day. For example, consider a row that has expiration value of
1629709200000 (2021-08-23 09:00:00) and Reference Time value
is 1629707962582 (2021-08-23 08:39:22). Here, even though the
row is not expired relative to the Reference Time when this data gets
imported, the new TTL for the row is 1629712800000 (2021-08-23
10:00:00).
• Mandatory (Y/N): N
• Example: "includeTTL" : true
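The checks above can be sketched as follows. This is a simplified illustration with assumed names, not the Migrator's internal code: it rounds the remaining lifetime up to whole hours only, while the Migrator may also use day boundaries.

```python
import math

HOUR_MS = 60 * 60 * 1000

def import_ttl_hours(expiration_ms, reference_ms):
    # Row without a _metadata definition: TTL 0, never expires.
    if expiration_ms is None:
        return 0
    # Already expired relative to the Reference Time: skip the row.
    if expiration_ms <= reference_ms:
        return None
    # Otherwise keep the row, rounding the remaining lifetime up to
    # the next hour boundary (NoSQL TTL boundaries are hours or days).
    return math.ceil((expiration_ms - reference_ms) / HOUR_MS)

# Values from the note above: expiration 2021-08-23 09:00:00 against a
# Reference Time of 2021-08-23 08:39:22 yields a one-hour TTL.
ttl = import_ttl_hours(1629709200000, 1629707962582)
```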
ttlRelativeDate
• Purpose: Specifies a UTC date in the YYYY-MM-DD hh:mm:ss format that is used to set the
TTL expiry of table rows when they are imported into the Oracle NoSQL Database.
If a table row in the data you are exporting has expired, you can set the
ttlRelativeDate parameter to a date before the expiration time of the table row in the
exported data.
If you do not specify this parameter, it defaults to the current time in milliseconds,
obtained from System.currentTimeMillis(), of the machine where the NoSQL Database
Migrator tool is running.
• Data Type: date
• Mandatory (Y/N): N
• Example: "ttlRelativeDate" : "2021-01-03 04:31:17"
Let us consider a scenario where table rows expire seven days after 1-Jan-2021.
After exporting this table, on 7-Jan-2021 you run into an issue with your table and decide
to import the exported data. The table rows are going to expire in one day (the data
expiration date minus the default value of the ttlRelativeDate configuration parameter,
which is the current date). If you want to extend the expiration of the table rows to five
days instead of one day, use the ttlRelativeDate parameter and choose an earlier date.
In this scenario, to extend the expiration time of the table rows to five days, set the
value of the ttlRelativeDate configuration parameter to 3-Jan-2021, which is used as the
Reference Time when the table rows are imported.
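The arithmetic in this scenario can be checked with a short sketch (the dates and the helper function are just the scenario's assumptions, not part of the Migrator):

```python
from datetime import datetime, timedelta

# Rows written on 1-Jan-2021 expire after seven days, i.e. on 8-Jan-2021.
expiration = datetime(2021, 1, 1) + timedelta(days=7)

def remaining_days(reference_time: datetime) -> int:
    # Days of life left when rows are imported, relative to the
    # Reference Time used by the import.
    return (expiration - reference_time).days

# Default Reference Time on 7-Jan-2021 leaves one day of TTL;
# setting ttlRelativeDate back to 3-Jan-2021 leaves five days.
default_ttl = remaining_days(datetime(2021, 1, 7))
extended_ttl = remaining_days(datetime(2021, 1, 3))
```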
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "<Oracle NoSQL Cloud Service Endpoint>",
"table" : "<table name>",
"compartment" : "<OCI compartment name or id>",
"schemaInfo" : {
"schemaPath" : "</path/to/a/schema/file>",
"defaultSchema" : <true|false>,
"useSourceSchema" : <true|false>,
"DDBPartitionKey" : <"name:type">,
"DDBSortKey" : "<name:type>",
"onDemandThroughput" : <true|false>,
"readUnits" : <table read units>,
"writeUnits" : <table write units>,
"storageSize" : <storage size in GB>
},
"credentials" : "</path/to/oci/credential/file>",
"credentialsProfile" : "<oci credentials profile name>",
"writeUnitsPercent" : <table writeunits percent>,
"requestTimeoutMs" : <timeout in milli seconds>,
"useInstancePrincipal" : <true|false>,
"overwrite" : <true|false>,
"includeTTL": <true|false>,
"ttlRelativeDate" : "<date-to-use in UTC>"
}
Sink Parameters
• type
• endpoint
• table
• compartment
• schemaInfo
• schemaInfo.schemaPath
• schemaInfo.defaultSchema
• schemaInfo.useSourceSchema
• schemaInfo.DDBPartitionKey
• schemaInfo.DDBSortKey
• schemaInfo.onDemandThroughput
• schemaInfo.readUnits
• schemaInfo.writeUnits
• schemaInfo.storageSize
• credentials
• credentialsProfile
• writeUnitsPercent
• requestTimeoutMs
• useInstancePrincipal
• overwrite
• includeTTL
• ttlRelativeDate
type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "nosqldb_cloud"
endpoint
• Purpose: Specifies the Service Endpoint of the Oracle NoSQL Database Cloud
Service.
You can either specify the complete URL or the Region ID alone. See Data Regions and
Associated Service URLs in Using Oracle NoSQL Database Cloud Service for the list of
data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://nosql.us-ashburn-1.oci.oraclecloud.com/"
table
• Purpose: Name of the table to which to migrate the data.
You must ensure that this table exists in your Oracle NoSQL Database Cloud Service.
Otherwise, you have to use the schemaInfo object in the sink configuration to instruct the
NoSQL Database Migrator to create the table.
The schema of this table must match the source data.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– To specify a table "table" : "mytable"
– To specify a child table "table" : "mytable.child"
Note:
You can migrate the child tables from a valid data source to Oracle NoSQL
Database Cloud Service. The NoSQL Database Migrator copies only a
single table in each execution. Ensure that the parent table is migrated
before the child table.
compartment
• Purpose: Specifies the name or OCID of the compartment in which the table resides.
If you do not provide any value, it defaults to the root compartment.
You can find your compartment's OCID from the Compartment Explorer window under
Governance in the OCI Cloud Console.
• Data Type: string
• Mandatory (Y/N): Y if the table is not in the root compartment of the tenancy OR when
the useInstancePrincipal parameter is set to true.
Note:
If the useInstancePrincipal parameter is set to true, the compartment
must specify the compartment OCID and not the name.
• Example:
– Compartment name
"compartment" : "mycompartment"
– Compartment name qualified with its parent compartment
"compartment" : "parent.childcompartment"
– No value provided. Defaults to the root compartment.
"compartment": ""
– Compartment OCID
"compartment" : "ocid1.tenancy.oc1...4ksd"
schemaInfo
• Purpose: Specifies the schema for the data being migrated.
If you do not specify this parameter, the NoSQL Database Migrator assumes that
the table already exists in your Oracle NoSQL Database Cloud Service.
If this parameter is not specified and the table does not exist in the sink, the
migration fails.
• Data Type: Object
• Mandatory (Y/N): N
schemaInfo.schemaPath
• Purpose: Specifies the absolute path to a file containing DDL statements for the
NoSQL table.
The NoSQL Database Migrator executes the DDL commands listed in this file
before migrating the data.
The NoSQL Database Migrator does not support more than one DDL statement
per line in the schemaPath file.
• Data Type: string
• Mandatory: Y, only when the schemaInfo.defaultSchema parameter is set to false.
schemaInfo.defaultSchema
• Purpose: Setting this parameter to true instructs the NoSQL Database Migrator to
create a table with default schema. The default schema is defined by the migrator
itself. For more information about default schema definitions, see Default Schema
in Using Oracle NoSQL Data Migrator .
• Data Type: boolean
• Mandatory: N
Note:
defaultSchema and schemaPath are mutually exclusive
schemaInfo.useSourceSchema
• Purpose: Specifies whether or not the sink uses the table schema definition provided by
the source when migrating NoSQL tables.
• Data Type: boolean
• Mandatory (Y/N): N
Note:
defaultSchema, schemaPath, and useSourceSchema parameters are
mutually exclusive. Specify ONLY one of these parameters.
• Example:
– With Default Schema:
"schemaInfo" : {
"defaultSchema" : true,
"readUnits" : 100,
"writeUnits" : 60,
"storageSize" : 1
}
– With schemaPath:
"schemaInfo" : {
"schemaPath" : "<complete/path/to/the/schema/definition/file>",
"readUnits" : 100,
"writeUnits" : 100,
"storageSize" : 1
}
– With useSourceSchema:
"schemaInfo" : {
"useSourceSchema" : true,
"readUnits" : 100,
"writeUnits" : 60,
"storageSize" : 1
}
schemaInfo.DDBPartitionKey
• Purpose: Specifies the DynamoDB partition key and the corresponding Oracle NoSQL
Database type to be used in the sink Oracle NoSQL Database table. This key will be
used as a NoSQL DB table shard key. This is applicable only when defaultSchema is set
to true and the source format is dynamodb_json. See Mapping of DynamoDB types to
Oracle NoSQL types for more details.
• Mandatory: Yes if defaultSchema is true and the source is dynamodb_json.
• Example: "DDBPartitionKey" : "PersonID:INTEGER"
Note:
If the partition key contains a dash (-) or a dot (.), the Migrator replaces it with
an underscore (_), because NoSQL column names do not support dots and dashes.
schemaInfo.DDBSortKey
• Purpose: Specifies the DynamoDB sort key and its corresponding Oracle NoSQL
Database type to be used in the target Oracle NoSQL Database table. If the DynamoDB
table being imported does not have a sort key, do not set this parameter. This
key will be used as the non-shard portion of the primary key in the NoSQL DB table.
This is applicable only when defaultSchema is set to true and the source is
dynamodb_json. See Mapping of DynamoDB types to Oracle NoSQL types for
more details.
• Mandatory: No
• Example: "DDBSortKey" : "Skey:STRING"
Note:
If the sort key contains a dash (-) or a dot (.), the Migrator replaces it with
an underscore (_), because NoSQL column names do not support dots and dashes.
schemaInfo.onDemandThroughput
• Purpose: Specifies whether to create the table with on-demand read and write throughput.
If this parameter is not set, the table is created with provisioned capacity.
The default value is false.
Note:
This parameter is not applicable for child tables as they share the
throughput of the top-level parent table.
schemaInfo.readUnits
• Purpose: Specifies the read throughput of the new table.
schemaInfo.writeUnits
• Purpose: Specifies the write throughput of the new table.
schemaInfo.storageSize
• Purpose: Specifies the storage size of the new table in GB.
Note:
This parameter is not applicable for child tables as they share the storage size
of the top-level parent table.
• Example:
– With schemaPath:
"schemaInfo" : {
"schemaPath" : "</path/to/a/schema/file>",
"readUnits" : 500,
"writeUnits" : 1000,
"storageSize" : 5
}
– With defaultSchema
"schemaInfo" : {
"defaultSchema" : true,
"readUnits" : 500,
"writeUnits" : 1000,
"storageSize" : 5
}
credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config.
See Example Configuration for an example of the credentials file.
Note:
Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at the
same time.
credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle
NoSQL Database Cloud Service.
If you do not specify this value, it defaults to the DEFAULT profile.
Note:
This parameter is valid ONLY if the credentials parameter is
specified.
• Example: "credentialsProfile" : "ADMIN_USER"
writeUnitsPercent
• Purpose: Specifies the Percentage of table write units to be used during the migration
activity.
The default value is 90. The valid range is any integer between 1 and 100.
Note:
The data migration speed is directly proportional to the writeUnitsPercent
value: the higher the percentage, the faster the migration.
See Troubleshooting the Oracle NoSQL Database Migrator to learn how to use this
attribute to improve the data migration speed.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "writeUnitsPercent" : 90
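As a rough sketch of what the percentage means (the helper function and its integer rounding are assumptions for illustration, not the Migrator's documented internals): with a table provisioned at 1000 write units and the default value of 90, the migration can consume up to about 900 write units.

```python
def effective_write_units(table_write_units: int, write_units_percent: int) -> int:
    # Portion of the sink table's provisioned write units that the
    # migration is allowed to consume during the migration activity.
    if not 1 <= write_units_percent <= 100:
        raise ValueError("writeUnitsPercent must be between 1 and 100")
    return table_write_units * write_units_percent // 100

limit = effective_write_units(1000, 90)
```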
requestTimeoutMs
• Purpose: Specifies the time to wait for each write operation in the sink to complete. This
is provided in milliseconds. The default value is 5000. The value can be any positive
integer.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "requestTimeoutMs" : 5000
useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance principal
authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on the Instance Principal authentication method, see Source and Sink Security.
If not specified, it defaults to false.
• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true
overwrite
• Purpose: Indicates the behavior of NoSQL Database Migrator when the record
being migrated from the source is already present in the sink.
If the value is set to false, when migrating tables the NoSQL Database Migrator
skips those records for which the same primary key already exists in the sink.
If the value is set to true, when migrating tables the NoSQL Database Migrator
overwrites those records for which the same primary key already exists in the sink.
If not specified, it defaults to true.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "overwrite" : false
includeTTL
• Purpose: Specifies whether or not to include TTL metadata for table rows
provided by the source when importing Oracle NoSQL Database tables.
If you do not specify this parameter, it defaults to false. In that case, the NoSQL
Database Migrator does not include TTL metadata for table rows provided by the
source when importing Oracle NoSQL Database tables.
If set to true, the NoSQL Database Migrator tool performs the following checks on
the TTL metadata when importing a table row:
– If you import a row that does not have _metadata definition, the NoSQL
Database Migrator tool sets the TTL to 0, which means the row never expires.
– If you import a row that has _metadata definition, the NoSQL Database
Migrator tool compares the TTL value against a Reference Time when a row
gets imported. If the row has already expired relative to the Reference Time,
then it is skipped. If the row has not expired, then it is imported along with the
TTL value. By default, the Reference Time of import operation is the current
time in milliseconds, obtained from System.currentTimeMillis(), of the machine
where the NoSQL Database Migrator tool is running. But you can also set a
custom Reference Time using the ttlRelativeDate configuration parameter if
you want to extend the expiration time and import rows that would otherwise
expire immediately.
The formula to calculate the expiration time of a row is as follows:
Note:
Since Oracle NoSQL TTL boundaries are in hours and days, in some
cases, the TTL of the imported row might get adjusted to the nearest hour
or day. For example, consider a row that has expiration value of
1629709200000 (2021-08-23 09:00:00) and Reference Time value is
1629707962582 (2021-08-23 08:39:22). Here, even though the row is not
expired relative to the Reference Time when this data gets imported, the
new TTL for the row is 1629712800000 (2021-08-23 10:00:00).
ttlRelativeDate
• Purpose: Specifies a UTC date in the YYYY-MM-DD hh:mm:ss format that is used to set the
TTL expiry of table rows when they are imported into the Oracle NoSQL Database.
If a table row in the data you are exporting has expired, you can set the
ttlRelativeDate parameter to a date before the expiration time of the table row in the
exported data.
If you do not specify this parameter, it defaults to the current time in milliseconds,
obtained from System.currentTimeMillis(), of the machine where the NoSQL Database
Migrator tool is running.
• Data Type: date
• Mandatory (Y/N): N
• Example: "ttlRelativeDate" : "2021-01-03 04:31:17"
Let us consider a scenario where table rows expire seven days after 1-Jan-2021.
After exporting this table, on 7-Jan-2021 you run into an issue with your table and decide
to import the exported data. The table rows are going to expire in one day (the data
expiration date minus the default value of the ttlRelativeDate configuration parameter,
which is the current date). If you want to extend the expiration of the table rows to five
days instead of one day, use the ttlRelativeDate parameter and choose an earlier date.
In this scenario, to extend the expiration time of the table rows to five days, set the
value of the ttlRelativeDate configuration parameter to 3-Jan-2021, which is used as the
Reference Time when the table rows are imported.
You can find the configuration template for each supported transformation below.
ignoreFields
The configuration file format for the ignoreFields transformation is shown below.
"transforms" : {
"ignoreFields" : ["<field1>","<field2>",...]
}
Transformation Parameter
ignoreFields
• Purpose: An array of the column names to be ignored from the source records.
Note:
You can supply only top-level fields. Transformations cannot be applied
to the data in nested fields.
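The effect of ignoreFields on a record can be sketched as follows (the helper function is illustrative; the Migrator applies the transformation internally):

```python
def ignore_fields(record: dict, fields: list) -> dict:
    # Drop the listed top-level columns from a source record; nested
    # fields are left untouched, matching the restriction noted above.
    return {key: value for key, value in record.items() if key not in fields}

row = {"id": 100, "name": "john", "address": "USA"}
trimmed = ignore_fields(row, ["address"])  # keeps only id and name
```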
includeFields
The configuration file format for the includeFields transformation is shown below.
"transforms" : {
"includeFields" : ["<field1>","<field2>",...]
}
Transformation Parameter
includeFields
• Purpose: An array of the column names to be included from the source records. Only the
fields specified in the array are included; the rest of the fields are ignored.
Note:
The NoSQL Database Migrator tool throws an error if you specify an empty
array. Additionally, you can specify only the top-level fields. The NoSQL
Database Migrator tool does not apply transformations to the data in the nested
fields.
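This behavior, including the empty-array error, can be sketched in plain Python (a simulation, not Migrator code):

```python
def include_fields(record, fields):
    # Keep only the listed top-level fields; an empty list is an error.
    if not fields:
        raise ValueError("includeFields must not be an empty array")
    return {k: v for k, v in record.items() if k in fields}

row = {"id": 1, "name": "john", "address": "USA"}
assert include_fields(row, ["id", "name"]) == {"id": 1, "name": "john"}
```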
renameFields
The configuration file format for the renameFields transformation is shown below.
"transforms" : {
"renameFields" : {
"<old_name>" : "<new_name>",
"<old_name>" : "<new_name>,"
.....
}
}
Transformation Parameter
renameFields
• Purpose: Key-Value pairs of the old and new names of the columns to be renamed.
Note:
You can supply only top-level fields. Transformations cannot be applied to the data in
nested fields.
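The renaming of top-level columns can be sketched in plain Python (a simulation, not Migrator code):

```python
def rename_fields(record, mapping):
    # Rename top-level fields according to old-name -> new-name pairs;
    # fields not mentioned in the mapping keep their names.
    return {mapping.get(k, k): v for k, v in record.items()}

row = {"id": 1, "name": "john"}
assert rename_fields(row, {"name": "full_name"}) == {"id": 1, "full_name": "john"}
```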
aggregateFields
The configuration file format for the aggregateFields transformation is shown below.
"transforms" : {
"aggregateFields" : {
"fieldName" : "name of the new aggregate field",
"skipFields" : ["<field1>","<field2">,...]
}
}
Transformation Parameter
aggregateFields
• Purpose: Name of the aggregated field in the sink.
• Data Type: string
• Mandatory (Y/N): Y
• Example: If the given record is:
{
  "id" : 100,
  "name" : "john",
  "address" : "USA",
  "age" : 20
}
and the transformation configuration is:
"aggregateFields" : {
  "fieldName" : "document",
  "skipFields" : ["id"]
}
then the aggregated record in the sink is:
{
  "id": 100,
  "document": {
    "name": "john",
    "address": "USA",
    "age": 20
  }
}
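The aggregation shown in the example can be sketched in plain Python (a simulation of the transformation, not Migrator code):

```python
def aggregate_fields(record, field_name, skip_fields):
    # Keep the skipped top-level fields as-is and move everything else
    # under a single aggregate field.
    out = {k: v for k, v in record.items() if k in skip_fields}
    out[field_name] = {k: v for k, v in record.items() if k not in skip_fields}
    return out

row = {"id": 100, "name": "john", "address": "USA", "age": 20}
assert aggregate_fields(row, "document", ["id"]) == {
    "id": 100,
    "document": {"name": "john", "address": "USA", "age": 20},
}
```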
A few additional points to consider when mapping DynamoDB types to Oracle NoSQL types:
• DynamoDB supports only one data type for numbers, which can have up to 38 digits of
precision. In contrast, Oracle NoSQL supports several numeric types to choose from, based
on the range and precision of the data. Select the Number type that fits the range of your
input data; if you are not sure of the nature of the data, use the NoSQL NUMBER type.
• Partition key in DynamoDB has a limit of 2048 bytes but Oracle NoSQL Cloud
Service has a limit of 64 bytes for the Primary key/Shard key.
• Sort key in DynamoDB has a limit of 1024 bytes but Oracle NoSQL Cloud Service
has a limit of 64 bytes for the Primary key.
• Attribute names in DynamoDB can be up to 64 KB long, but Oracle NoSQL Cloud Service
column names have a limit of 64 characters.
The NoSQL JSON type maps to Parquet BINARY (STRING), or to BINARY (JSON) if logical JSON
is configured.
Note:
When the NoSQL NUMBER type is converted to the Parquet DOUBLE type, there may be some
loss of precision if the value cannot be represented exactly as a Double. If the number is
too big to represent as a Double, it is converted to Double.NEGATIVE_INFINITY or
Double.POSITIVE_INFINITY.
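This behavior can be illustrated with IEEE-754 doubles, which Python floats also use (an illustration of the note, not Migrator code):

```python
import math

# A value with more significant digits than a 64-bit double can hold
# silently loses precision: 2**53 + 1 rounds back to 2**53.
assert float(2**53 + 1) == float(2**53)

# A value too large for a double becomes infinity
# (the analogue of Double.POSITIVE_INFINITY).
assert math.isinf(float("1e400"))
```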
Note:
The Migrator provides a user-friendly configuration defaultSchema to
automatically create a schema-less DDL table which also aggregates
attributes into a JSON column.
Note:
We highly recommend using schema-less tables when migrating data from DynamoDB to Oracle
NoSQL Database, because DynamoDB tables are themselves schema-less. This is especially
true for large tables where the content of each record may not be uniform across the
table.
I have a long running migration involving huge datasets. How can I track the progress
of the migration?
You can enable additional logging to track the progress of a long-running migration. To
control the logging behavior of the Oracle NoSQL Database Migrator, set the desired level
of logging in the logging.properties file. This file is provided with the NoSQL Database
Migrator package and is available in the directory where the Oracle NoSQL Database
Migrator was unpacked. The logging levels, in order of increasing verbosity, are OFF,
SEVERE, WARNING, INFO, FINE, and ALL. Setting the log level to OFF turns off all logging
information, whereas setting it to ALL provides the full log information. The default log
level is WARNING. All logging output goes to the console by default. See the comments in
the logging.properties file for details about each log level.
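As an illustration, a minimal logging.properties that raises the console verbosity to FINE might look like the following. The property names follow standard java.util.logging conventions; the file shipped with the Migrator may organize these entries differently, so treat this as a sketch rather than the exact shipped file:

```properties
# Send all log records to the console
handlers=java.util.logging.ConsoleHandler
# Root logger level (OFF, SEVERE, WARNING, INFO, FINE, ALL)
.level=FINE
# The handler must also allow FINE records through
java.util.logging.ConsoleHandler.level=FINE
```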
Manage
• Using APIs to manage tables
• Using console to manage tables
Reading Data
Learn how to read data from your table.
You can read data from your application by using the different API methods for the language-
specific drivers. You can retrieve a record based on a single primary key value, or by using
queries.
Note:
First, connect your client driver to Oracle NoSQL Database Cloud Service to get a
connection and then complete other steps. This topic omits the steps for connecting
your client driver and creating a table.
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
The GetRequest class provides a simple and powerful way to read data, while queries can be
used for more complex read requests. To read data from a table, specify the target table
and target key using the GetRequest class and use NoSQLHandle.get() to execute your
request. The result of the operation is available in GetResult. The following example
assumes that the default compartment is specified in NoSQLHandleConfig while obtaining
the NoSQL handle. See Obtaining a NoSQL Handle. To explore other options for specifying a
compartment for NoSQL tables, see About Compartments.
To read data from your table:
Note:
By default, all read operations are eventually consistent. You can change the
default Consistency for a NoSQLHandle instance by using the
NoSQLHandleConfig.setConsistency(oracle.nosql.driver.Consistency)
and GetRequest.setConsistency() methods.
See the Java API Reference Guide for more information about the GET APIs.
Python
Learn how to read data from your table. You can read single rows using the
borneo.NoSQLHandle.get() method. This method allows you to retrieve a record
based on its primary key value. The borneo.GetRequest class is used for simple get
operations. It contains the primary key value for the target row and returns an instance
of borneo.GetResult.
Go
You can read single rows using the Client.Get function. This function allows you to retrieve
a record based on its primary key value. The nosqldb.GetRequest is used for simple get
operations. It contains the primary key value for the target row and returns an instance of
nosqldb.GetResult. If the get operation succeeds, a non-nil GetResult.Version is returned.
key := &types.MapValue{}
key.Put("id", 1)
req := &nosqldb.GetRequest{
    TableName: "users",
    Key:       key,
}
res, err := client.Get(req)
By default, all read operations are eventually consistent, using types.Eventual. This type
of read is less costly than one using absolute consistency, types.Absolute. The default
can be changed in nosqldb.RequestConfig using RequestConfig.Consistency before creating
the client, or for a single request using the GetRequest.Consistency field.
1. Change default consistency for all read operations.
cfg := nosqldb.Config{
    RequestConfig: nosqldb.RequestConfig{
        Consistency: types.Absolute,
        ...
    },
    ...
}
client, err := nosqldb.NewClient(cfg)
2. Change consistency for a single read operation.
req := &nosqldb.GetRequest{
    TableName:   "users",
    Key:         key,
    Consistency: types.Absolute,
}
Node.js
You can read single rows using the get method. This method allows you to retrieve a
record based on its primary key value. You can set the consistency of the read operation
using the Consistency option. The get method returns a Promise of GetResult, which is a
plain JavaScript object containing the resulting row and its Version. If the provided
primary key does not exist in the table, the value of the row property is null. Note that
the property names in the provided primary key object must match the underlying table
column names.
C#
You can read a single row using the GetAsync method. This method allows you to
retrieve a row based on its primary key value. This method takes the primary key value
as MapValue. The field names should be the same as the table primary key column
names. You may also pass options as GetOptions.
try
{
    var result = await client.GetAsync(tableName,
        new MapValue
        {
            ["id"] = 1
        });
// Continuing from the Put example, the expected output will be:
// { "id": 1, "name": "Kim" }
Console.WriteLine("Got row: {0}", result.row);
// Use absolute consistency.
    result = await client.GetAsync(tableName,
        new MapValue
        {
            ["id"] = 2
        },
        new GetOptions
        {
            Consistency = Consistency.Absolute
        });
// The expected output will be:
// { "id": 2, "name": "Jack" }
Console.WriteLine("Got row with absolute consistency: {0}",
result.row);
// Continuing from the Put example, the expiration time should be
// 30 days from now.
    Console.WriteLine("Expiration time: {0}", result.ExpirationTime);
}
catch(Exception ex){
// handle exceptions
}
Spring Data
Use one of these methods to read the data from the table - NosqlRepository findById(),
findAllById(), findAll() or using NosqlTemplate find(), findAll(), findAllById(). For
details, see SDK for Spring Data API Reference.
Note:
First, create the AppConfig class that extends AbstractNosqlConfiguration class
to provide the connection details of the Oracle NoSQL Database. For more details,
see Obtaining a NoSQL connection.
Create the UsersRepository interface. This interface extends the NosqlRepository interface
and provides the entity class and the data type of the primary key in that class as
parameterized types to the NosqlRepository interface. This NosqlRepository interface
provides methods that are used to retrieve data from the database.
import com.oracle.nosql.spring.data.repository.NosqlRepository;

/* The Users is the entity class and Long is the data type of the primary
   key in the Users class. This interface provides methods that return
   iterable instances of the Users class. */
public interface UsersRepository extends NosqlRepository<Users, Long> {
}
In the application, you select all the rows from the Users table and provide them to an
iterable instance. Print the values to the output from the iterable object.
@Autowired
private UsersRepository repo;
/* Select all the rows in the Users table and provides them into an
iterable instance.*/
System.out.println("\nfindAll:");
Iterable<Users> allusers = repo.findAll();
findAll:
Using Queries
Learn about using queries in your application with Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Cloud Service provides a rich query language to read and update
data. See the Developers Guide for a full description of the query language.
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
To execute your query, you use the NoSQLHandle.query() API. See the Java API Reference
Guide for more information about this API.
Note:
The following examples assume that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL Handle. To
explore other options for specifying a compartment for NoSQL tables, see About
Compartments.
/* QUERY a table named "users", using the primary key field "name".
* The table name is inferred from the query statement.
*/
QueryRequest queryRequest = new QueryRequest().
    setStatement("SELECT * FROM users WHERE name = \"Taylor\"");
do {
    QueryResult queryResult = handle.query(queryRequest);
    /* process the results in queryResult.getResults() */
} while (!queryRequest.isDone());
Python
To execute a query, use the borneo.NoSQLHandle.query() method, for example to execute a
SELECT query that reads data from your table. A borneo.QueryResult contains a list of
results. If borneo.QueryRequest.is_done() returns False, there may be more results, so
queries should generally be run in a loop. It is possible for a single request to return
no results while the query is still not done, indicating that the query loop should
continue. For example:
Go
To execute a query use the Client.Query function. For example, to execute a SELECT query
to read data from your table:
prepReq := &nosqldb.PrepareRequest{
    Statement: "select * from users",
}
prepRes, err := client.Prepare(prepReq)
if err != nil {
    fmt.Printf("Prepare failed: %v\n", err)
    return
}
queryReq := &nosqldb.QueryRequest{
    PreparedStatement: &prepRes.PreparedStatement,
}
var results []*types.MapValue
for {
    queryRes, err := client.Query(queryReq)
    if err != nil {
        fmt.Printf("Query failed: %v\n", err)
        return
    }
    res, err := queryRes.GetResults()
    if err != nil {
        fmt.Printf("GetResults() failed: %v\n", err)
        return
    }
    results = append(results, res...)
    if queryReq.IsDone() {
        break
    }
}
Node.js
To execute a query, use the query method. This method returns a Promise of QueryResult,
which is a plain JavaScript object containing an Array of resulting rows as well as a
continuation key. The amount of data returned by the query is limited by the system
default and can be further limited by setting the maxReadKB property in the opt argument
of query, so one invocation of the query method may not return all available results.
This situation is handled with the continuationKey property: a non-null continuation key
means that more query results may be available, so queries should generally run in a
loop until the continuation key becomes null. Note that it is possible for rows to be
empty while continuationKey is non-null, in which case the query loop should continue. To
receive all the results, call query in a loop. At each iteration, if a non-null
continuation key is received in QueryResult, set the continuationKey property in the opt
argument for the next iteration:
opt);
for(let row of result.rows) {
console.log(row);
}
opt.continuationKey = result.continuationKey;
} while(opt.continuationKey);
} catch(error) {
//handle errors
}
}
C#
To execute a query, you may call QueryAsync method or call
GetQueryAsyncEnumerable method and iterate over the resulting async enumerable.
You may pass options to each of these methods as QueryOptions. QueryAsync method
return Task<QueryResult<RecordValue>>. QueryResult contains query results as a list
of RecordValue instances, as well as other information. When your query specifies a
complete primary key (or you are executing an INSERT statement), it is sufficient to
call QueryAsync once.
The amount of data returned by the query is limited by the system. It could also be
further limited by setting MaxReadKB property of QueryOptions. This means that one
invocation of QueryAsync may not return all available results. This situation is dealt
with by using continuation key. Non-null ContinuationKey in QueryResult means that
more more query results may be available. This means that queries should run in a
loop, looping until the continuation key becomes null.
Note that it is possible for query to return now rows (QueryResult.Rows is empty) yet
have not-null continuation key, which means that the query should continue looping. To
continue the query, set ContinuationKey in the QueryOptions for the next call to
QueryAsync and loop until the continuation key becomes null. The following example
executes the query and prints query results:
Console.WriteLine(row);
}
options.ContinuationKey = result.ContinuationKey;
}
while(options.ContinuationKey != null);
}
catch(Exception ex){
// handle exceptions
}
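The continuation-key loop used by the Node.js and C# examples above can be sketched generically. Here fetch_page is a stand-in for one query/QueryAsync invocation (a simulation of the pattern, not an SDK call):

```python
def run_query(fetch_page):
    # fetch_page(continuation_key) -> (rows, next_continuation_key)
    rows, key = [], None
    while True:
        page, key = fetch_page(key)
        rows.extend(page)  # a page may be empty while key is still non-null
        if key is None:
            return rows

# Simulated paging: the second page is empty but the query is not done yet.
pages = {None: ([1, 2], "k1"), "k1": ([], "k2"), "k2": ([3], None)}
assert run_query(lambda k: pages[k]) == [1, 2, 3]
```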
Oracle NoSQL Database provides the ability to prepare queries for execution and reuse. It
is recommended that you use prepared queries when you run the same query multiple times.
When you use prepared queries, the execution is much more efficient than starting with a
SQL statement every time. The query language and API support query variables to assist
with query reuse.
Use PrepareAsync to prepare the query. This method returns Task<PreparedStatement>.
PreparedStatement allows you to set query variables. The query methods QueryAsync and
GetQueryAsyncEnumerable have overloads that execute prepared queries by taking a
PreparedStatement as a parameter instead of the SQL statement. For example:
Spring Data
Use one of these methods to run your query - The NosqlRepository derived queries,
native queries, or using NosqlTemplate runQuery(), runQueryJavaParams(),
runQueryNosqlParams(). For details, see SDK for Spring Data API Reference.
Note:
First, create the AppConfig class that extends AbstractNosqlConfiguration
class to provide the connection details of the Oracle NoSQL Database. For
more details, see Obtaining a NoSQL connection.
In this section, you use the derived queries. For more details on the derived queries,
see Derived Queries.
Create the UsersRepository interface. This interface extends the NosqlRepository
interface and provides the entity class and the data type of the primary key in that
class as parameterized types to the NosqlRepository interface. The NosqlRepository
interface provides methods that are used to retrieve data from the database.
import com.oracle.nosql.spring.data.repository.NosqlRepository;

/* The Users is the entity class and Long is the data type of the primary
   key in the Users class. This interface provides methods that return
   iterable instances of the Users class. */
public interface UsersRepository extends NosqlRepository<Users, Long> {
}
In the application, you select the row from the Users table with the last name as
required and print the values to the output from the object.
@Autowired
private UsersRepository repo;
System.out.println("\nfindByLastName: Willard");
/* Use queries to find by the last name. Search the Users table by the last
   name and return an iterable instance of the Users class. */
allusers = repo.findByLastName("Willard");
findByLastName: Willard
Modifying Tables
Learn how to modify tables.
You modify a table to:
• Add new fields to an existing table
• Delete currently existing fields from a table
• Change the default TTL value
• Modify table limits
Examples of DDL statements are:
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
The following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL Handle. To
explore other options for specifying a compartment for NoSQL tables, see About
Compartments.
When altering a table with provisioned capacity, you may also use the
TableRequests.setTableLimits method to modify table limits.
You can also use the Oracle NoSQL Database Java SDK to modify a table and
change the capacity model to an on-demand capacity configuration. You can also
choose to change the storage capacity.
You can change the definition of the table. The TTL value is changed below.
Python
If you are using Oracle NoSQL Database Cloud Service, table limits can be modified using
borneo.TableRequest.set_table_limits(). If the table is configured with provisioned
capacity, the limits can be set as shown in the example below.
You can also use the Oracle NoSQL Database Python SDK to modify a table and change the
capacity model to an on-demand capacity configuration, or to change the storage capacity.
Go
Specify the DDL statement and other information in a TableRequest, and execute the request
using the nosqldb.DoTableRequest() or nosqldb.DoTableRequestAndWait() function.
req := &nosqldb.TableRequest{
    Statement: "ALTER TABLE users (ADD age INTEGER)",
}
res, err := client.DoTableRequestAndWait(req, 5*time.Second, time.Second)
The Oracle NoSQL Database Cloud Service table limits can be modified using
TableRequest.TableLimits. If the table is configured with provisioned capacity, the
limits can be set as shown in the example below.
req := &nosqldb.TableRequest{
    TableName: "users",
    TableLimits: &nosqldb.TableLimits{
        ReadUnits:  100,
        WriteUnits: 100,
        StorageGB:  5,
    },
}
res, err := client.DoTableRequestAndWait(req, 5*time.Second, time.Second)
You can also use the Oracle NoSQL Database Go SDK to modify a table and change the
capacity model to an on-demand capacity configuration, or to change the storage capacity.
Node.js
Use NoSQLClient#tableDDL to modify a table by issuing a DDL statement against this
table. Table limits can be modified using setTableLimits method. It takes table name
and new TableLimits as arguments and returns Promise of TableResult. If the table
is configured with provisioned capacity, the limits can be set as shown in the example
below.
You can also use the Oracle NoSQL Database Node.js SDK to modify a table and change the
capacity model to an on-demand capacity configuration, or to change the storage capacity.
C#
Use ExecuteTableDDLAsync or ExecuteTableDDLWithCompletionAsync to modify a table by
issuing a DDL statement against this table.
Table limits can be modified using SetTableLimitsAsync or
SetTableLimitsWithCompletionAsync methods. They take table name and new TableLimits
as parameters and return Task<TableResult>. If the table is configured with provisioned
capacity, the limits can be set as shown in the example below.
You can also use the Oracle NoSQL Database .NET SDK to modify a table and change the
capacity model to an on-demand capacity configuration, or to change the storage capacity.
Spring Data
To modify a table, you can use the NosqlTemplate.runTableRequest() method. For
details, see SDK for Spring Data API Reference.
Note:
While the Spring Data SDK provides an option to modify the tables, it is not
recommended to alter the schemas as the Spring Data SDK expects tables
to comply with the default schema (two columns - the primary key column of
types String, int, long, or timestamp and a JSON column called kv_json_).
Deleting Data
Learn how to delete rows from your table.
After you insert or load data into a table, you can delete the table rows when they are
no longer required.
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
The following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL Handle. To
explore other options for specifying a compartment for NoSQL tables, see About
Compartments.
To delete a row from a table:
See the Java API Reference Guide for more information about the APIs.
Python
Single rows are deleted using borneo.DeleteRequest using a primary key value as shown
below.
Go
Single rows are deleted using nosqldb.DeleteRequest using a primary key value:
key := &types.MapValue{}
key.Put("id", 1)
req := &nosqldb.DeleteRequest{
TableName: "users",
Key: key,
}
res, err := client.Delete(req)
Node.js
To delete a row, use the delete method. Pass to it the table name and the primary key of
the row to delete. In addition, you can make the delete operation conditional by
specifying a Version of the row that was previously returned by get or put. Pass it as
the matchVersion property of the opt argument: { matchVersion: my_version }.
Alternatively, you may use the deleteIfVersion method.
// Will fail because the last put has changed the row version, so
// the old version no longer matches. The result will also contain the
// existing row and its version because we specified returnExisting in
// the opt argument.
result = await client.deleteIfVersion(tableName, { id: 1 }, version,
    { returnExisting: true });
// Expected output: deleteIfVersion failed
console.log('deleteIfVersion ' + (result.success ? 'succeeded' : 'failed'));
// Expected output: { id: 1, name: 'John' }
console.log(result.existingRow);
} catch(error) {
//handle errors
}
}
Note that, similar to put operations, success results in a false value only when you try
to delete a row with a non-existent primary key, or when a matching version was specified
and the row version does not match. Failure for any other reason results in an error. You
can delete multiple rows having the same shard key in a single atomic operation using the
deleteRange method. This method deletes a set of rows based on a partial primary key
(which must be a shard key or its superset) and an optional FieldRange, which specifies a
range of values for one of the primary key fields not included in the partial key.
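The semantics of such a range delete can be sketched with plain dictionaries. Here delete_range is a hypothetical helper that simulates the behavior; it is not the SDK method:

```python
def delete_range(table, partial_key, field_range=None):
    # Delete every row whose primary key starts with partial_key and, if a
    # field_range (inclusive low/high bounds) is given, whose next key field
    # falls inside that range. Returns the surviving rows.
    def matches(pk):
        head, rest = pk[:len(partial_key)], pk[len(partial_key):]
        if head != partial_key:
            return False
        if field_range and rest:
            low, high = field_range
            return low <= rest[0] <= high
        return True
    return {pk: row for pk, row in table.items() if not matches(pk)}

rows = {(1, "a"): "x", (1, "b"): "y", (2, "a"): "z"}
# Delete all rows in shard 1:
assert delete_range(rows, (1,)) == {(2, "a"): "z"}
# Delete only shard-1 rows whose second key field is in the range ["a", "a"]:
assert delete_range(rows, (1,), ("a", "a")) == {(1, "b"): "y", (2, "a"): "z"}
```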
C#
To delete a row, use the DeleteAsync method. Pass to it the table name and the primary
key of the row to delete. This method takes the primary key value as MapValue; the field
names should be the same as the table primary key column names. You may also pass options
as DeleteOptions. In addition, you can make the delete operation conditional by
specifying a RowVersion of the row that was previously returned by GetAsync or PutAsync.
Use the DeleteIfVersionAsync method, which takes the row version to match, or
alternatively use the DeleteAsync method and pass the version as the MatchVersion
property of DeleteOptions.
1-363
Chapter 1
Manage
Note that the Success property of the result only indicates whether the row to delete was
found and, for a conditional Delete, whether the provided version matched. If the Delete
operation fails for any other reason, an exception is thrown. You can delete multiple
rows having the same shard key in a single atomic operation using the DeleteRangeAsync
method. This method deletes a set of rows based on a partial primary key (which must
include the shard key) and an optional FieldRange, which specifies a range of values for
one of the primary key fields not included in the partial key.
Spring Data
Use one of these methods to delete the rows from the tables - NosqlRepository
deleteById(), delete(), deleteAll(Iterable<? extends T> entities), deleteAll() or
using NosqlTemplate delete(), deleteAll(), deleteById(), deleteInShard(). For details,
see SDK for Spring Data API Reference.
Note:
First, create the AppConfig class that extends AbstractNosqlConfiguration class
to provide the connection details of the Oracle NoSQL Database. For more details,
see Obtaining a NoSQL connection.
In this section, you use the NosqlRepository deleteAll() method to delete the rows from
your table.
Create the UsersRepository interface. This interface extends the NosqlRepository interface
and provides the entity class and the data type of the primary key in that class as
parameterized types to the NosqlRepository interface. The NosqlRepository interface
provides methods that are used to retrieve data from the database.
import com.oracle.nosql.spring.data.repository.NosqlRepository;

/* The Users is the entity class and Long is the data type of the primary
   key in the Users class. This interface provides methods that return
   iterable instances of the Users class. */
public interface UsersRepository extends NosqlRepository<Users, Long> {
}
In the application, you use the deleteAll() method to delete the existing rows from the table.
@Autowired
private UsersRepository repo;
/* Delete all the existing rows if any, from the Users table.*/
repo.deleteAll();
• Java
• Python
• Go
• Node.js
• C#
• Spring Data
Java
The following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL Handle. To
explore other options for specifying a compartment for NoSQL tables, see About
Compartments.
To drop a table using the TableRequests.setStatement method:
Python
The following example drops the table users.
Go
The following example drops the given table.
Node.js
The following example drops the given table and index.
} catch(error) {
//handle errors
}
}
C#
To drop tables, use ExecuteTableDDLAsync and
ExecuteTableDDLWithCompletionAsync.
Spring Data
To drop tables and indexes, use the NosqlTemplate.runTableRequest() or
NosqlTemplate.dropTableIfExists() methods. For details, see SDK for Spring Data API
Reference.
Create the AppConfig class that extends AbstractNosqlConfiguration class to
provide the connection details of the database. For more details, see Obtaining a
NoSQL connection.
In the application, you instantiate the NosqlTemplate class by providing the
NosqlTemplate create(NosqlDbConfig nosqlDBConfig) method with the instance of
the AppConfig class. You then drop the table using the
NosqlTemplate.dropTableIfExists() method. The
NosqlTemplate.dropTableIfExists() method drops the table and returns true if the
result indicates a change of the table's state to DROPPED or DROPPING.
import com.oracle.nosql.spring.data.core.NosqlTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
try {
    AppConfig config = new AppConfig();
    NosqlTemplate tabledrop = NosqlTemplate.create(config.nosqlDbConfig());
    boolean result = tabledrop.dropTableIfExists("Users");
    if (result) {
        System.out.println("Table dropped successfully");
    } else {
        System.out.println("Failed to drop table");
    }
} catch (Exception e) {
    System.out.println("Exception while dropping the table: " + e);
}
5. Click the action menu corresponding to the row you wish to update, and select
Update Row.
6. Modify the values in Simple Input or Advanced JSON Input update mode.
7. Click Update Row.
To view help for the current page, click the help link at the top of the page.
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. In the Table Details page, select the Explore Data tab under Resources.
4. By default, the query text is populated with a SQL query that retrieves all the
records from the table. You can modify this query with any valid SQL for Oracle NoSQL
statement. If you get an error that your statement is incomplete or faulty, see Debugging
SQL statement errors in the OCI console to learn about possible errors in the OCI console
and how to fix them. See the Developers Guide for SQL query examples.
5. Click Execute.
The table data is displayed in the Records section.
6. To view the query execution plan of the SQL query that was executed, click Show query
execution plan. The detailed query execution plan is displayed in a new window.
Viewing Tables
You can view Oracle NoSQL Database Cloud Service tables from the NoSQL console.
To view tables:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. You can view all the tables in your tenancy from the NoSQL console.
Viewing Indexes
You can view the list of indexes created for an Oracle NoSQL Database Cloud Service table
from the NoSQL console.
To view indexes:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. In the Table Details page, select the Indexes tab under Resources.
You will see a list of all the indexes added to the table.
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. From the Table Details page, you can view all table columns, indexes, rows, and metrics.
4. A column in the list (Child tables) shows how many child tables are owned by the
specified table.
5. The list of child tables for a given parent table can be viewed by clicking the "Child tables"
link under Resources on the parent table's details page.
Editing Tables
You can update reserved capacity (if the table is not an Always Free NoSQL table) and
Time to Live (TTL) values for your Oracle NoSQL Database Cloud Service tables from
the NoSQL console.
To edit tables:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. The value of Time to Live (TTL) can be updated.
• To update the value of Time to Live (TTL), click the Edit link next to the Time
to live (Days) field.
• You can also update the value of Time to Live (TTL) by clicking the action menu
corresponding to the table name you wish to change and selecting Edit default time to
live.
• If the table is a child table, only the Time to live (TTL) value can be updated. To
update the value of Time to Live (TTL), click the Edit link next to the Time to live
(Days) field.
Note:
You cannot edit the reserved capacity of a child table directly. Only
the corresponding values of the parent table can be edited.
• Table Time to Live (Days): (optional) Specify the default expiration time for
the rows in the table. After this time, the rows expire automatically, and are no
longer available. The default value is zero, indicating no expiration time.
Note:
Updating Table Time to Live (TTL) will not change the TTL value of
any existing data in the table. The new TTL value will only apply to
those rows that are added to the table after this value is modified
and to the rows for which no overriding row-specific value has been
supplied.
4. If your table is not an Always Free NoSQL table, then the reserved capacity and
the usage model can be modified.
• Under More Actions, click Edit reserved capacity.
• You can also update the Reserved Capacity by clicking the action menu
corresponding to the table name you wish to change and select Edit reserved
capacity.
You can also change the Capacity mode from Provisioned Capacity to On Demand
Capacity, or vice versa. If you have provisioned more units than On Demand capacity
can offer and you then switch from Provisioned capacity to On Demand capacity, the
capacity of the table will be reduced. Take this reduction in capacity into account
before making the switch in this scenario.
5. (Optional) To dismiss the changes, click Cancel.
To view help for the current page, click the help link at the top of the page.
Altering Tables
Learn how to alter Oracle NoSQL Database Cloud Service tables by adding or deleting
columns, in either simple or advanced DDL input mode, using the NoSQL console.
The NoSQL console lets you alter the Oracle NoSQL Database Cloud Service tables
in two modes:
1. Simple Input Mode: You can use this mode to alter the NoSQL Database Cloud
Service table declaratively, that is, without writing a DDL statement.
2. Advanced DDL Input Mode: You can use this mode to alter the NoSQL Database
Cloud Service table using a DDL statement.
Moving Tables
Learn how to move an Oracle NoSQL Database Cloud Service table to a different
compartment from the NoSQL console.
To move a table:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. In the Table Details page, click Move Table.
4. Alternatively, click the action menu corresponding to the table name and select
Move table.
5. In the Move Resource to a Different Compartment window, modify the following
values for the table:
• Choose New Compartment: Select the new compartment from the select list.
6. Click Move table.
7. (Optional) To dismiss the changes, click the Cancel link on the top right corner.
To view help for the current page, click the help link at the top of the page.
Note:
You cannot move a child table to another compartment. If the parent table is
moved to a new compartment, all the descendant tables within the hierarchy
will be automatically moved to the target compartment in a single operation.
Deleting Tables
Learn how to delete Oracle NoSQL Database Cloud Service tables from the NoSQL console.
To delete tables:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To delete the table, do either of the
following:
• Click the table name. In the Table Details page, click the Delete button, or
• Click the action menu corresponding to the table name you wish to delete and select
Delete.
• If a table has child tables, the child tables must be deleted before the parent table
can be deleted.
The Delete Table confirmation dialog opens.
3. Click Delete.
The table is deleted.
Deleting Indexes
Learn how to delete Oracle NoSQL Database Cloud Service indexes from the NoSQL
console.
To delete indexes:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. In the Table Details page, select the Indexes tab under Resources.
You will see a list of all the indexes added to the table.
4. Click the action menu corresponding to the index you wish to delete, and select
Delete.
The Delete Index confirmation dialog opens.
5. Click Delete.
The index is deleted.
Monitor
• Monitoring Oracle NoSQL Database Cloud Service
Metric and alarm data is accessible via the Console, CLI, and API. For more information
about OCI monitoring service concepts, see Monitoring Concepts.
This article has the following topics:
Metrics for Oracle NoSQL Database Cloud Service include the following dimensions:
• RESOURCEID
The OCID of the NoSQL Table in the Oracle NoSQL Database Cloud Service.
Note:
OCID is an Oracle-assigned unique ID that is included as part of the resource's
information in both the console and API.
• TABLENAME
The name of the NoSQL table in the Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Cloud Service sends metrics to the Oracle Cloud Infrastructure
Monitoring Service. You can view or create alarms on these metrics using the Oracle Cloud
Infrastructure Console SDKs or CLI.
Additionally, you can publish custom metrics as per your requirement. For example,
you can set up metrics to capture application transaction latency (time spent per
completed transaction) and then post that data to the Monitoring service.
The Read Units metric chart for a table is shown below. The metric is taken every minute
and the metric charts are plotted for an interval of 5 minutes by default.
Write Units:
The number of write units consumed during this period. One write unit is the throughput for
up to 1 KB of data per second for a write operation. Write operations are triggered during
insert, update, and delete operations. If your data is greater than 1 KB, it will require
multiple write units to write it. The Write Units metric chart for a table is shown below. The
metric is taken every minute and the metric charts are plotted for an interval of 5 minutes by
default.
StorageGB:
The maximum amount of storage consumed by the table. The Storage metric chart for a table
is shown below. The metric is taken every minute and the metric charts are plotted for an
interval of 5 minutes by default.
Note:
It takes one hour after table creation to seed the beginning of storage size
tracking. After the initial hour, storage statistics are updated every 5 minutes.
Note:
The StorageGB metric is truncated. Therefore, storage usage of less than 1
GB is displayed as 0. The chart begins to display storage when usage
is greater than 1 GB.
ReadThrottleCount:
This gives a count of the number of read throttling exceptions on the given table in the
time period. A throttling exception usually indicates that the provisioned read
throughput has been exceeded. If you get these frequently, then you should consider
increasing the Read Units on your table. The Read throttle count metric chart for a
table is shown below. The metric is taken every minute and the metric charts are
plotted for an interval of 5 minutes by default.
WriteThrottleCount:
This gives a count of the number of write throttling exceptions on the given table in the time
period. A throttling exception usually indicates that the provisioned write throughput has been
exceeded. If you get these frequently, then you should consider increasing the Write Units on
your table. The Write throttle count metric chart for a table is shown below. The metric is
taken every minute and the metric charts are plotted for an interval of 5 minutes by default.
StorageThrottleCount:
This gives a count of the number of storage throttling exceptions on the given table in the
time period. A throttling exception usually indicates that the provisioned storage capacity has
been exceeded. If you get these frequently, then you should consider increasing the storage
capacity of your table. The Storage throttle count metric chart for a table is shown below. The
metric is taken every minute and the metric charts are plotted for an interval of 5 minutes by
default.
MaxShardSizeUsagePercent
The highest usage of space in a shard for a specific table, as a percentage of space
used in that shard.
Note:
Oracle NoSQL Database Cloud Service hashes keys to shards to provide
distribution over a collection of storage nodes that provide storage for the
tables. Although not directly visible to you, Oracle NoSQL Database Cloud
Service tables are sharded and replicated for availability and performance. A
shard key either matches the primary key exactly or is a subset of the primary
key. All records sharing a shard key are co-located to achieve data locality.
In addition to viewing the chart for a metric, you have the following options.
You can get the table view to check the value of a metric at a given point in time.
That is, you should trigger an alarm when the metric reaches a particular value, for
example 90 percent.
OCI alarms use the OCI Notifications service to send notifications. Usually, an alarm is
configured to send notifications to a configured email address. When MaxShardSizeUsagePercent
reaches 90 percent, an email notification is sent.
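As an illustrative sketch, such an alarm condition can be expressed in Monitoring Query Language (the one-minute interval and the 90 percent threshold are examples, not requirements):

```
MaxShardSizeUsagePercent[1m].max() > 90
```

When this condition evaluates to true, the alarm fires and the configured notification topic delivers the email.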
• Limit the number of child tables to avoid a potential shard storage imbalance
situation.
For example:
Example response:
{
    "data": [
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "ReadThrottleCount",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "ReadUnits",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "StorageGB",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "StorageThrottleCount",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "WriteThrottleCount",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "WriteUnits",
            "namespace": "oci_nosql",
            "resource-group": null
        }
    ]
}
metric[interval]{dimensionname=dimensionvalue}.groupingfunction.statistic

metric[interval]{dimensionname=dimensionvalue}.groupingfunction.statistic alarmoperator alarmvalue
For supported parameter values, see Monitoring Query Language (MQL) Reference.
Example Queries
Simple metric query
Sum of Storage Throttle counts for all the tables in a compartment at a one-minute
interval.
The number of lines displayed in the metric chart (Console): 1 per table.
StorageThrottleCount[1m].sum()
StorageThrottleCount[1m]{tableName = "demoKeyVal"}.sum()
ReadUnits[60m]{compartmentId="ocid1.compartment.oc1.phx..exampleuniqueID"}.grouping().mean()
ReadThrottleCount[60m]{tableName = "demoKeyVal"}.groupBy(ReadUnits).mean()
Secure
• About Oracle NoSQL Database Cloud Service Security Model
• Authorization to access OCI resources
• Managing Access to Oracle NoSQL Database Cloud Service Tables
Policies
Oracle NoSQL Database Cloud Service uses the Oracle Cloud Infrastructure Identity and
Access Management security model, which is built on policies. A policy is a document that
specifies who can access which Oracle Cloud Infrastructure resources, including your
company's NoSQL tables, and how they can access those resources. A policy allows a
group to work in certain ways with specific types of resources, such as NoSQL Tables, in a
particular compartment.
To govern the control of your tables, your company will have at least one policy. Each policy
consists of one or more policy statements that follow this basic syntax:
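As a sketch, the basic syntax follows this pattern (the angle-bracketed placeholders stand for your own group, verb, resource-type, and compartment names):

```
Allow group <group_name> to <verb> <resource-type> in compartment <compartment_name>
```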
To learn how policies work, see Overview of Policies in Oracle Cloud Infrastructure
Documentation.
Groups
In Oracle Cloud Infrastructure Identity and Access Management, you organize users
into groups that usually share the same type of access to a particular set of NoSQL
tables or compartments.
You can grant access to the NoSQL Tables at the group and compartment level, by
writing a policy that gives a group a specific type of access within a particular
compartment, or to the tenancy itself. If you give a group access to the tenancy, the
group automatically gets the same type of access to all the compartments inside the
tenancy. For example, after you create a table in the compartment ProjectA, you must
write a policy granting access to the group(s) that should manage or use the
tables. Otherwise, the tables are not even visible to groups that don't have access.
For example, to allow the Developer group to manage all the NoSQL resources, you
can create the following policy:
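A statement along the following lines would grant that access (using the aggregate nosql-family resource type, which covers tables, rows, and indexes; the group name is taken from the text above):

```
Allow group Developer to manage nosql-family in tenancy
```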
Verbs
A verb specifies the type of access being granted by the policy. For example, inspect
nosql-tables lets you list the NoSQL tables. Inspect, read, use, and manage are the
verbs supported by Oracle NoSQL Database Cloud Service. See Verbs in Oracle
Cloud Infrastructure Documentation.
Resource-types
Resources are the cloud objects that your company's employees create and use when
interacting with the Oracle Cloud Infrastructure (OCI). Oracle defines resource-types
you can use in policies. nosql-tables, nosql-rows, and nosql-indexes are three
individual resource-types supported by NoSQL Database Cloud Service.
By specifying a resource-type in a policy, you give access permissions against that
resource type alone. For example, to grant read permissions on the rows of all NoSQL
tables in the tenancy, to the viewers group, you can create a policy as:
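For example, a statement of the following form would grant that access (the group name viewers is taken from the text above):

```
Allow group viewers to read nosql-rows in tenancy
```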
Compartments
A compartment is the fundamental component of Oracle Cloud Infrastructure. You can
organize the Oracle NoSQL Database Cloud Service resources within compartments.
Compartments are used to separate tables for measuring usage and billing, defining
access, and isolating the resources between different projects or business units.
Note:
Tenancy is the root compartment that contains all of your organization's Oracle
Cloud Infrastructure resources.
All Oracle Cloud Infrastructure Identity and Access Management resources (users,
groups, compartments, and policies) are global and available across all regions, but the master
set of definitions resides in a single region, the home region. All changes to your IAM
resources must be made in your home region. To learn more about the IAM components, see
Overview of Oracle Cloud Infrastructure Identity and Access Management. The following note
provides information regarding which version of the documentation you should read.
Note:
The way you manage users and groups for Oracle NoSQL Database Cloud Service
depends on whether or not your cloud account or tenancy is in the OCI region that
has been updated to use identity domains. Some OCI regions have been updated
to use identity domains. If you have a cloud account or tenancy in one of these OCI
regions, you can use the identity domains to manage the users who perform tasks
in Oracle Cloud Infrastructure. For more information on how to set up users and
groups for Oracle NoSQL Database Cloud Service, see About Setting Up Users,
Groups, and Policies .
Tip:
It's easy to determine whether or not your OCI region has been updated to use
Identity and Access Management (IAM) Identity Domains. For more information,
see Do You Have Access to Identity Domains?
• Enter details about the group. For example, if you're creating a policy that gives users
permissions to fully manage Oracle NoSQL Database Cloud Service tables you might
name the group nosql_service_admin (or similar) and include a short description
such as "Users with permissions to set up and manage Oracle NoSQL Database
Cloud Service tables on Oracle Cloud Infrastructure" (or similar).
4. Create a policy that gives users belonging to an OCI group specific access permissions
to Oracle NoSQL Database Cloud Service tables or compartments.
• Open the navigation menu and click Identity & Security. Under Identity, click
Policies.
• Select a compartment, and click Create Policy.
For details and examples, see Policies Reference and Typical Policy Statements to
Manage Tables .
If you're unfamiliar with how policies work, see How Policies Work.
5. To manage and use NoSQL tables via Oracle NoSQL Database Cloud Service SDKs, the
user must set up the API keys. See Authentication to connect to Oracle NoSQL
Database.
Note:
Federated users can also manage and use Oracle NoSQL Database Cloud
Service tables. This requires the service administrator to set up the federation
in Oracle Cloud Infrastructure Identity and Access Management. See
Federating with Identity Providers.
Users belonging to any groups mentioned in the policy statement get their new
permission when they next sign in to the Console.
– Click Create Dynamic Group and enter a Name, a Description, and a rule, or
use the Rule Builder to add a rule.
– Click Create.
Resources that meet the rule criteria are members of the dynamic group.
When you define a rule for a dynamic group, consider what resource is going
to be given access to other resources. Some examples of creating rules:
1. A matching rule for functions:
This rule implies that any resource type called fnfunc in the given
compartment (with the ID specified above) is a member of the dynamic
group.
Note:
See Resource Types for more information on different resource
types.
ALL { instance.compartment.id = 'ocid1.compartment.oc1..aaaaaaaa4mlehopmvdluv2wjcdp4tnh2ypjz3nhhpahb4ss7yvxaa3be3diq'}
This rule implies that any instance with the compartment id specified above is a
member of the dynamic group.
3. A rule when using API Gateway with functions:
'ocid1.compartment.oc1..aaaaaaaafml3tca3zcxyifmdff3aadp5uojimgx3cdnirgup6rhptxwnandq'}
This rule implies that any resource type called ApiGateway in the given
compartment (with the id specified above) is a member of the dynamic group.
4. A rule when using Container Instances:
This rule implies that any resource type called computecontainerinstance in the
given compartment (with the id specified above) is a member of the dynamic
group.
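The matching rules described above follow a common pattern; as an illustrative sketch for the functions case (the compartment OCID is a placeholder), substituting ApiGateway or computecontainerinstance for the resource type as needed:

```
ALL {resource.type = 'fnfunc', resource.compartment.id = 'ocid1.compartment.oc1..<unique_ID>'}
```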
Note:
Inheritance does not apply to Dynamic groups. While using IAM Access
policies, the policy of a parent compartment automatically applies to all child
compartments. This is not the case when you use Dynamic groups. You need to
list each compartment in the Dynamic group separately for the compartment to
qualify.
Example: A matching rule for functions for parent-child tables:
• Write policy statements for the dynamic group to enable access to Oracle Cloud
Infrastructure resources.
– In the Oracle Cloud Infrastructure console, click Identity and Security and click
Policies.
– To write policies for a dynamic group, click Create Policy, and enter a Name and a
Description.
– Use the Policy Builder to create a policy. The general syntax of defining a policy is
shown below:
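As a sketch, a policy for a dynamic group follows this general syntax (the angle-bracketed placeholders stand for your own names):

```
Allow dynamic-group <dynamic_group_name> to <verb> <resource-type> in compartment <compartment_name>
```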
b. Select the identity domain you want to work in and click Users.
c. Click Create User.
d. Enter details about the user, and click Create.
3. In Oracle Cloud Infrastructure Console, create an OCI group.
a. Open the navigation menu and click Identity & Security. Under Identity, click
Domains.
b. Select the identity domain you want to work in and click Groups.
c. Click Create Group.
d. Enter details about the group.
For example, if you're creating a policy that gives users permissions to fully manage
Oracle NoSQL Database Cloud Service tables you might name the group
nosql_service_admin (or similar) and include a short description such as "Users with
permissions to set up and manage Oracle NoSQL Database Cloud Service tables on
Oracle Cloud Infrastructure" (or similar).
4. Create a policy that gives users belonging to an OCI group specific access permissions
to Oracle NoSQL Database Cloud Service tables or compartments.
a. Open the navigation menu and click Identity & Security. Under Identity, click
Policies.
b. Select a compartment, and click Create Policy.
For details and examples, see Policies Reference and Typical Policy
Statements to Manage Tables.
If you're unfamiliar with how policies work, see How Policies Work.
5. To manage and use NoSQL tables via Oracle NoSQL Database Cloud Service
SDKs, the user must set up the API keys. See Acquiring Credentials.
Note:
Federated users can also manage and use Oracle NoSQL Database
Cloud Service tables. This requires the service administrator to set up
the federation in Oracle Cloud Infrastructure Identity and Access
Management. See Federating with Identity Providers.
Users belonging to any groups mentioned in the policy statement get their new
permission when they next sign in to the Console.
Cross-Tenancy Policies
Your organization might want to share resources with another organization that has its
own tenancy. It could be another business unit in your company, a customer of your
company, a company that provides services to your company, and so on. In cases like
these, you need cross-tenancy policies in addition to the required user and service
policies described previously.
To access and share resources, the administrators of both tenancies need to create
special policy statements that explicitly state the resources that can be accessed and
shared. These special statements use the words Define, Endorse ,and Admit.
Endorse, Admit, and Define Statements
Here's an overview of the special verbs used in cross-tenancy statements:
Endorse: States the general set of abilities that a group in your own tenancy can
perform in other tenancies. The Endorse statement always belongs in the tenancy with
the group of users crossing the boundaries into the other tenancy to work with that
tenancy's resources. In the examples, you refer to this tenancy as the source.
Admit: States the kind of ability in your own tenancy that you want to grant a group from the
other tenancy. The Admit statement belongs in the tenancy that is granting admittance. It
identifies the group of users that requires resource access from the source tenancy, as
identified by a corresponding Endorse statement. In the examples, you refer to this tenancy
as the destination.
Define: Assigns an alias to a tenancy OCID for Endorse and Admit policy statements. A
Define statement is also required in the destination tenancy to assign an alias to the source
IAM group OCID for Admit statements.
Define statements must be included in the same policy entity as the endorse or the admit
statement. The Endorse and Admit statements work together, but they reside in separate
policies, one in each tenancy. Without a corresponding statement that specifies access, a
particular Endorse or Admit statement grants no access. You need an agreement from
both tenancies.
Note:
In addition to policy statements, you must also be subscribed to a region to share
resources across regions.
Note:
The cross-tenancy policies can also be written with other policy subjects. For more
details on policy subjects, see Policy Syntax in Oracle Cloud Infrastructure
Documentation.
Here is an example of a broad policy statement that endorses the IAM group NoSQLAdmins
to do anything with all NoSQL Tables in any tenancy:
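As an illustrative sketch, such a broad statement in the source tenancy could read:

```
Endorse group NoSQLAdmins to manage nosql-tables in any-tenancy
```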
To write a policy that reduces the scope of tenancy access, the destination administrator must
provide the destination tenancy OCID. Here is an example of policy statements that endorse
the IAM group NoSQLAdmins to manage NoSQL Tables in the DestinationTenancy
only:
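A sketch of such statements in the source tenancy (the tenancy OCID is a placeholder):

```
Define tenancy DestinationTenancy as ocid1.tenancy.oc1..<unique_ID>
Endorse group NoSQLAdmins to manage nosql-tables in tenancy DestinationTenancy
```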
• Defines the source tenancy and IAM group that is allowed to access resources in
your tenancy. The source administrator must provide this information.
• Admits those defined sources to access NoSQL Tables that you want to allow
access to in your tenancy.
Here is an example of policy statements that admit the IAM group NoSQLAdmins in
the source tenancy to do anything with all NoSQL Tables in your tenancy:
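A sketch of such statements in your (destination) tenancy (both OCIDs are placeholders):

```
Define tenancy SourceTenancy as ocid1.tenancy.oc1..<unique_ID>
Define group NoSQLAdmins as ocid1.group.oc1..<unique_ID>
Admit group NoSQLAdmins of tenancy SourceTenancy to manage nosql-tables in tenancy
```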
Here is an example of policy statements that admit the IAM group NoSQLAdmins in
the source tenancy to manage NoSQL Tables only in the Develop compartment:
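A sketch restricted to the Develop compartment (both OCIDs are placeholders):

```
Define tenancy SourceTenancy as ocid1.tenancy.oc1..<unique_ID>
Define group NoSQLAdmins as ocid1.group.oc1..<unique_ID>
Admit group NoSQLAdmins of tenancy SourceTenancy to manage nosql-tables in compartment Develop
```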
In Oracle Cloud Infrastructure you use IAM security policies to grant permissions. First,
you must add the user to a group, and then you create a security policy that grants the
group the manage nosql-tables permission on a specific compartment or the tenancy
(any compartment in the tenancy). For example, you might create a policy statement
that looks like one of these:
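For example (the group name nosql_service_admin and compartment Dev are illustrative):

```
Allow group nosql_service_admin to manage nosql-tables in compartment Dev
Allow group nosql_service_admin to manage nosql-tables in tenancy
```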
To find out how to create security policy statements specifically for Oracle NoSQL
Database Cloud Service, see Setting Up Users, Groups, and Policies Using Identity
and Access Management.
When you create a policy for your tenancy, you grant users access to all compartments by
way of policy inheritance. Alternatively, you can restrict access to individual Oracle NoSQL
Database Cloud Service tables or compartments.
Example 1-1 To allow group Admins to fully manage any Oracle NoSQL Database
Cloud Service table
Example 1-2 To allow group Admins to do any operations against NoSQL Tables in
compartment Dev, use the family resource type.
Example 1-4 To only allow Joe in Developer to create, get and drop indexes of
NoSQL tables in compartment Dev
Example 1-5 To allow group Admins to create, drop, and move NoSQL Tables, but not
alter them, in compartment Dev.
Example 1-6 To allow group Developer to read, update, and delete rows of table
"customer" in compartment Dev, but not rows of other tables.
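As illustrative sketches, the statements for Examples 1-1 and 1-2 could read (the second uses the aggregate family resource type):

```
Allow group Admins to manage nosql-tables in tenancy
Allow group Admins to manage nosql-family in compartment Dev
```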
Reference
• References for Analytics Integrator
• Reference on NoSQL Database Cloud Service
• Oracle NoSQL Database Migrator Reference
When you fetch the contents of the row with id=1, you should see output such as the
following:
id jsonField
1 (null)
Workaround: Until ADW fixes this bug, you can manually work around the issue by doing
the following from the Database Actions SQL interface.
• Verify that the max_string_size initialization parameter is set to EXTENDED in the
database.
If the value of the max_string_size is set to STANDARD, then increase the size from
STANDARD to EXTENDED.
• Drop the table
• Manually recreate the table and specify enough bytes to hold the JSON document.
begin
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
    table_name      => 'myJsonTable',
    -- use either 'OCI$RESOURCE_PRINCIPAL' or your object storage
    -- credential, for example 'NOSQLADWDB001_OBJ_STORE_CREDENTIAL'
    credential_name => 'OCI$RESOURCE_PRINCIPAL',
    file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/nosqldev/b/nosql-to-adw/o/myJsonTable*',
    format          => '{"type":"parquet", "schema": "first"}',
    column_list     => 'ID NUMBER (10), JSONFIELD VARCHAR2(32767)'
  );
end;
• You should now be able to see the actual contents of the JSON document in the row with
id=1 .
Note:
Rather than declaring the JSONFIELD as VARCHAR2(32767) you can
also work around this issue by declaring that column as type CLOB.
begin
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
    table_name      => 'myJsonTable',
    -- use either 'OCI$RESOURCE_PRINCIPAL' or your object storage
    -- credential, for example 'NOSQLADWDB001_OBJ_STORE_CREDENTIAL'
    credential_name => 'OCI$RESOURCE_PRINCIPAL',
    file_uri_list   => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/nosqldev/b/nosql-to-adw/o/myJsonTable*',
    format          => '{"type":"parquet", "schema": "first"}',
    column_list     => 'ID NUMBER (10), JSONFIELD CLOB'
  );
end;
Note:
You can tell that the query completes rather than hangs when using the Run
Statement option in the Database Actions SQL interface: the Query
Result window eventually displays a dropdown menu labeled Download
along with the Execution time (even though the spinning wheel appears
to indicate the query is hanging).
There are two ways to work around this issue. First, you can simply execute the
query as a script: select the query in the [Worksheet] window of the tool and then
click the Run Script button. This displays the results of the query in the Script
Output window of the tool, rendering any Double.POSITIVE_INFINITY values as the
string 'Infinity', Double.NEGATIVE_INFINITY values as the string '-Infinity', and
any Double.NaN values as the string 'NaN'.
Another way to work around the issue in the Database Actions SQL interface is to use Run
Statement to execute the query; when the Download dropdown menu appears in the Query
Result window (indicating the query has completed), click it and select the menu item
labeled JSON to export the output of the query as a JSON document. Once you have
exported the query results, you can examine them in your browser or editor of choice.
On the other hand, if you use Oracle Analytics (desktop tool or cloud service) to query the
table, then an error trace occurs.
Note:
When you need numeric values in your schema, choose among the data types
in the following order of preference: INTEGER, LONG, FLOAT, DOUBLE,
NUMBER. Avoid NUMBER unless you really need it for your use case, as
NUMBER is expensive in terms of both storage and processing power.
TIMESTAMP: A point in time, with a precision. The precision affects the storage size and
usage. Timestamp values are stored and managed in UTC (Coordinated Universal Time).
The TIMESTAMP data type requires anywhere from 3 to 9 bytes, depending on the
precision used.
The following breakdown illustrates the storage used by this data type:
• bit[0~13] year - 14 bits
• bit[14~17] month - 4 bits
• bit[18~22] day - 5 bits
• bit[23~27] hour - 5 bits [optional]
• bit[28~33] minute - 6 bits [optional]
• bit[34~39] second - 6 bits [optional]
• bit[40~71] fractional second [optional with variable length]
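The 3-to-9-byte range follows from which optional fields are present. The Python sketch below is only an estimate derived from the bit layout above; the fractional-second bit calculation is an assumption, not Oracle's exact encoding:

```python
import math

def timestamp_storage_bytes(precision: int, with_time: bool = True) -> int:
    """Rough storage estimate from the bit layout: year 14, month 4, day 5,
    then optional hour 5, minute 6, second 6, plus variable fractional bits."""
    bits = 14 + 4 + 5                       # year + month + day (always stored)
    if with_time:
        bits += 5 + 6 + 6                   # optional hour + minute + second
    if precision > 0:
        # bits needed to hold up to 10**precision fractional values (assumption)
        bits += math.ceil(precision * math.log2(10))
    return math.ceil(bits / 8)

print(timestamp_storage_bytes(0, with_time=False))  # 3 (date only, minimum)
print(timestamp_storage_bytes(9))                   # 9 (full precision, maximum)
```

The two printed values bracket the 3-to-9-byte range stated above.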
UUID: The UUID data type is considered a subtype of the STRING data type. The
storage size is 16 bytes as an index key; if used as a primary key, the storage size is
19 bytes.
ENUM: An enumeration, represented as an array of strings. ENUM values are symbolic
identifiers (tokens) and are stored as a small integer value representing an ordered
position in the enumeration.
ARRAY: An ordered collection of zero or more typed items. Arrays that are not defined as
JSON cannot contain NULL values. Arrays declared as JSON can contain any valid
JSON, including the special value null, which is relevant to JSON.
MAP: An unordered collection of zero or more key-item pairs, where all keys are strings
and all items are the same type. All keys must be unique. The key-item pairs are called
fields, the keys are field names, and the associated items are field values. Maps cannot
contain NULL field values.
RECORD: A fixed collection of one or more key-item pairs, where all keys are strings. All
keys in a record must be unique. Unlike in a map, the field values of a record can have
different types.
Note:
Once dropped, a table with the same name can
be created again.
DDL_TABLE=$(cat tableddl.nosql)
Note:
You need to provide the compartment_id and the table capacity values for this
DDL statement.
This will give you the exact error in your DDL statement.
Example 2: Executing a SELECT statement from the cloud shell
1. In your OCI console, Open the Cloud Shell from the top right menu.
2. Copy your SQL SELECT statement (for example, query1.sql) into a variable
(SQL_SELECTSTMT).
Example:
Note:
You need to give the compartment_id for this SELECT statement.
This will give you the exact error in your SQL statement.
For example:
Alter Table
For example:
Create Index
For example:
Drop Table
For example:
Typical Queries
SELECT <expression>
FROM <table name>
[WHERE <expression>]
[GROUP BY <expression>]
[ORDER BY <expression> [<sort order>]]
[LIMIT <number>]
[OFFSET <number>];
For example:
SELECT * FROM Users;
SELECT id, firstname, lastname FROM Users WHERE firstname = "Taylor";
For example:
UPDATE JSONPersons $j
SET TTL 1 DAYS
WHERE id = 6
RETURNING remaining_days($j) AS Expires;
Note:
An index is called a covering index with respect to a query if the query can be
evaluated using only the entries of that index, that is, without the need to retrieve
the associated rows.
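The covering test stated in the note amounts to a simple subset check, sketched below (the helper is hypothetical, not an Oracle API):

```python
def is_covering(index_fields: set, query_columns: set) -> bool:
    """An index covers a query when every column the query references
    is carried in the index entries, so no row fetch is needed."""
    return query_columns <= index_fields

# An index on (lastname, id) covers a query touching only those columns...
print(is_covering({"lastname", "id"}, {"lastname", "id"}))  # True
# ...but not one that also needs a column outside the index.
print(is_covering({"lastname", "id"}, {"lastname", "age"}))  # False
```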
The SELECT iterator has fields such as "FROM", "WHERE", "FROM variable", and "SELECT
expressions". "FROM" and "FROM variable" represent the FROM clause of the SELECT
expression, "WHERE" represents the filter clause, and "SELECT expressions" represents the
SELECT clause.
RECEIVE iterator: a special internal iterator that separates the query plan into two
parts:
1. The RECEIVE iterator itself and all iterators that are above it in the iterator tree are
executed at the driver.
2. All iterators below the RECEIVE iterator are executed at the replication nodes
(RNs); these iterators form a subtree rooted at the unique child of the RECEIVE
iterator.
In general, the RECEIVE iterator acts as a query coordinator. It sends its subplan to
appropriate RNs for execution and collects the results. It may perform additional
operations such as sorting and duplicate elimination and propagates the results to its
ancestor iterators (if any) for further processing.
Distribution kinds:
A distribution kind specifies how the query will be distributed for execution across the
RNs participating in an Oracle NoSQL database (a store). The distribution kind is a
property of the RECEIVE iterator.
Different choices of distribution kinds are:
• SINGLE_PARTITION: A SINGLE_PARTITION query specifies a complete shard
key in its WHERE clause. As a result, its full result set is contained in a single
partition, and the RECEIVE iterator sends its subplan to a single RN that stores
that partition. A SINGLE_PARTITION query may use either the primary-key index
or a secondary index.
• ALL_PARTITIONS: An ALL_PARTITIONS query uses the primary-key index but
does not specify a complete shard key. As a result, if the store has M partitions,
the RECEIVE iterator sends M copies of its subplan, one to be executed over
each of the M partitions.
• ALL_SHARDS: An ALL_SHARDS query uses a secondary index and does not
specify a complete shard key. As a result, if the store has N shards, the RECEIVE
iterator sends N copies of its subplan, one to be executed over each of the N
shards.
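The three cases above follow from just two properties of the query; a hedged sketch (the function is illustrative, not an Oracle API):

```python
def distribution_kind(uses_primary_index: bool, has_complete_shard_key: bool) -> str:
    """Pick the RECEIVE iterator's distribution kind from the two properties
    discussed above."""
    if has_complete_shard_key:
        # Full result set lives in one partition, regardless of index choice.
        return "SINGLE_PARTITION"
    # No complete shard key: fan out over partitions or shards.
    return "ALL_PARTITIONS" if uses_primary_index else "ALL_SHARDS"

print(distribution_kind(True, False))   # ALL_PARTITIONS
print(distribution_kind(False, False))  # ALL_SHARDS
```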
Anatomy of a query execution plan:
Query execution takes place in batches. When a query subplan is sent to a partition or
shard for execution, it will execute there until a batch limit is reached. The batch limit is
a number of read units consumed locally by the query. The default is 2000 read units
(about 2MB of data), and it can only be decreased via a query-level option.
When the batch limit is reached, any local results that were produced are sent back to
the RECEIVE iterator for further processing along with a boolean flag that says
whether more local results may be available. If the flag is true, the reply includes
resume information. If the RECEIVE iterator decides to resend the query to the same
partition/shard, it will include this resume information in its request, so that the query
execution will restart at the point where it stopped during the previous batch. This is
because no query state is maintained at the RN after a batch finishes. The next batch
for the same partition/shard may take place at the same RN as the previous batch or
at a different RN that also stores the same partition/shard.
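The batch-and-resume protocol above can be sketched with a toy stand-in for an RN. The ToyShard class and its execute signature are hypothetical; only the loop structure, statelessness between batches, and resume handling mirror the text:

```python
class ToyShard:
    """Stand-in for a partition/shard: serves fixed-size batches and keeps
    no query state between calls, exactly as described for RNs above."""
    def __init__(self, rows, batch_size=2):
        self.rows = rows
        self.batch_size = batch_size

    def execute(self, resume):
        # resume is None on the first batch, else the position to restart at.
        start = resume or 0
        end = min(start + self.batch_size, len(self.rows))
        more = end < len(self.rows)
        # Return local results, a "more may be available" flag, and resume info.
        return self.rows[start:end], more, (end if more else None)

def receive_iterator(shard):
    """Loop a RECEIVE iterator performs per shard: resend with resume info
    until the shard reports no further results."""
    results, resume = [], None
    while True:
        batch, more, resume = shard.execute(resume)
        results.extend(batch)
        if not more:
            return results

print(receive_iterator(ToyShard([1, 2, 3, 4, 5])))  # [1, 2, 3, 4, 5]
```

In the real system the "batch size" is a read-unit budget rather than a row count, and successive batches may be served by different RNs hosting the same partition or shard.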
Supported Variables
Learn about the variables supported by Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Cloud Service supports all the general variables. See General
Variables for All Requests. All three NoSQL resource types can use the following variables,
except for ListTables and CreateTable.
nosql-tables
nosql-rows
nosql-indexes
Request Permissions
ListTables NOSQL_TABLE_INSPECT
CreateTable NOSQL_TABLE_CREATE
GetTable NOSQL_TABLE_READ
UpdateTable NOSQL_TABLE_ALTER
DeleteTable NOSQL_TABLE_DROP
ListIndexes NOSQL_INDEX_READ
CreateIndex NOSQL_INDEX_CREATE
GetIndex NOSQL_INDEX_READ
DeleteIndex NOSQL_INDEX_DROP
GetRow NOSQL_ROWS_READ
UpdateRow NOSQL_ROWS_INSERT
DeleteRow NOSQL_ROWS_DELETE
ListTableUsage NOSQL_TABLE_READ
ChangeTableCompartment NOSQL_TABLE_ALTER
Query (SELECT) NOSQL_ROWS_READ
Query (INSERT, UPSERT, UPDATE) NOSQL_ROWS_INSERT
Query (DELETE) NOSQL_ROWS_DELETE
PrepareStatement NOSQL_TABLE_READ
SummarizeStatement NOSQL_TABLE_READ
ListWorkRequests NOSQL_TABLE_READ
GetWorkRequest NOSQL_TABLE_READ
DeleteWorkRequest NOSQL_TABLE_ALTER
ListWorkRequestErrors NOSQL_TABLE_READ
ListWorkRequestLogs NOSQL_TABLE_READ
When you write a policy with request.operation, use the names of the API operations. For
Query operations, use the name of the operation that the statement in the query maps to.
For example:
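One possible policy of this shape is sketched below. The group and compartment names are hypothetical, and the choice of 'GetRow' as the operation name is an illustrative assumption:

```
Allow group nosql-devs to use nosql-rows in compartment dev-compartment
  where request.operation = 'GetRow'
```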
Supported Browsers
Web browser support is as per the Oracle Software Web Browser Support Policy.
As of the current release of Oracle NoSQL Database Cloud Service, there are no known
issues reported.