
Oracle® Cloud

Oracle NoSQL Database Cloud Service

Latest Cloud Release


F54691-08
May 2023
Oracle Cloud Oracle NoSQL Database Cloud Service, Latest Cloud Release

F54691-08

Copyright © 2022, 2023, Oracle and/or its affiliates.

Primary Author: Vandanadevi Rajamani


This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.

If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S.
Government, then the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed, or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software," "commercial computer software documentation," or "limited rights
data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation
of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated
software, any programs embedded, installed, or activated on delivered hardware, and modifications of such
programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and
limitations specified in the license contained in the applicable contract. The terms governing the U.S.
Government's use of Oracle cloud services are defined by the applicable contract for such services. No other
rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.

Oracle®, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.

Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.

This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents

1 Oracle NoSQL Database Cloud Service


Get Started 1-1
Getting Started with Oracle NoSQL Database Cloud Service 1-1
Setting up Your Service 1-2
Accessing the Service from the Infrastructure Console 1-2
Creating a Compartment 1-2
Authentication to connect to Oracle NoSQL Database 1-3
Connecting your Application to NDCS 1-5
Typical Workflow 1-16
Quick Start Tutorials 1-16
Getting started with Oracle NoSQL Database Analytics Integrator 1-16
Overview of Oracle NoSQL Database Analytics Integrator 1-16
Typical Workflow for Using Oracle NoSQL Database Analytics Integrator 1-17
Overview 1-19
What's New in Oracle NoSQL Database Cloud Service 1-19
May 2023 1-20
December 2022 1-21
September 29, 2022 1-21
September 2022 1-22
August 2022 1-22
June 2022 1-22
February 2022 1-22
December 2021 1-23
November 2021 1-23
October 2021 1-24
May 2021 1-24
February 2021 1-24
December 2020 1-25
November 2020 1-25
October 2020 1-25
September 2020 1-26
August 2020 1-26
July 2020 1-27

June 2020 1-27
May 2020 1-27
April 2020 1-28
March 2020 1-28
February 2020 1-29
September 2019 1-29
May 2019 1-30
Cloud Concepts 1-30
Features of Oracle NoSQL Database Cloud Service 1-31
Key Features 1-32
Responsibility Model for Oracle NoSQL Database 1-33
Always Free Service 1-35
Functional difference between the NoSQL Cloud Service and On-premise database 1-36
Oracle NoSQL Database Cloud Service Subscription 1-37
Service Limits 1-37
Service Quotas 1-38
Service Events 1-39
Service Metrics 1-41
Data Regions and Associated Service Endpoints 1-42
Plan 1-44
Plan your service 1-44
Developer Overview 1-44
Oracle NoSQL Database Cloud Service Limits 1-46
Estimating Capacity 1-48
Estimating Your Monthly Cost 1-54
Configure 1-55
Configuration tasks for Analytics Integrator 1-55
Accessing Oracle Cloud Object Storage 1-55
Accessing the Oracle Cloud Autonomous Data Warehouse 1-59
Enabling a Compute Instance for Oracle NoSQL Database Cloud Service and ADW and (optionally) Enabling the ADW Database for Object Storage 1-69
Devops 1-74
Deploying Oracle NoSQL Database Cloud Service Table Using Terraform and OCI Resource Manager 1-74
Prerequisites 1-75
Step 1: Create Terraform configuration files for NDCS Table or Index 1-75
Step 2: Where to Store Your Terraform Configurations 1-80
Step 3: Create a Stack from a File 1-81
Step 4: Generate an Execution Plan 1-87
Step 5: Run an Apply Job 1-92

Updating Oracle NoSQL Database Cloud Service Table Using Terraform and OCI Resource Manager 1-99
Step 1: Create Terraform Override files for NoSQL Database Table 1-100
Step 2: Update the Execution Plan 1-101
Step 3: Generate an Execution Plan 1-102
Step 4: Run an Apply Job 1-103
Develop 1-111
Install Analytics Integrator 1-111
Creating a table in the Oracle NoSQL Database Cloud Service 1-112
Install Oracle NoSQL Database Analytics Integrator 1-120
Running the Oracle NoSQL Database Analytics Integrator 1-122
Verifying Data in Oracle Analytics tool 1-134
Using console to create tables 1-141
Using Console to Create Tables in Oracle NoSQL Database Cloud Service 1-141
Inserting Data Into Tables 1-156
Using APIs to create tables 1-158
About Oracle NoSQL Database SDK drivers 1-159
Obtaining a NoSQL Handle 1-161
About Compartments 1-167
Creating Tables and Indexes 1-170
Adding Data 1-183
Using Plugins 1-192
Using IntelliJ Plugin for Development 1-192
Using Eclipse Plugin for Development 1-200
About Oracle NoSQL Database Visual Studio Code Extension 1-201
Designing a Table in Oracle NoSQL Database Cloud Service 1-218
Table Fields 1-218
Primary Keys and Shard Keys 1-220
Time to Live 1-221
Table States and Life Cycles 1-222
Developing in Oracle NoSQL Database Cloud Simulator 1-223
Downloading the Oracle NoSQL Database Cloud Simulator 1-223
Oracle NoSQL Database Cloud Simulator Compared With Oracle NoSQL Database Cloud Service 1-224
Using Oracle NoSQL Database Migrator 1-225
Overview 1-225
Workflow for Oracle NoSQL Database Migrator 1-230
Use Case Demonstrations 1-237
Oracle NoSQL Database Migrator Reference 1-264
Source Configuration Templates 1-264
Sink Configuration Templates 1-299
Transformation Configuration Templates 1-331

Mapping of DynamoDB types to Oracle NoSQL types 1-335
Oracle NoSQL to Parquet Data Type Mapping 1-336
Mapping of DynamoDB table to Oracle NoSQL table 1-337
Troubleshooting the Oracle NoSQL Database Migrator 1-338
Manage 1-341
Using APIs to manage tables 1-341
Reading Data 1-341
Using Queries 1-346
Modifying Tables 1-355
Deleting Data 1-360
Dropping Tables and Indexes 1-366
Using console to manage tables 1-369
Modifying Table Data Using Console 1-369
Managing Table Data Using Console 1-370
Managing Tables and Indexes Using Console 1-371
Monitor 1-380
Monitoring Oracle NoSQL Database Cloud Service 1-380
Oracle NoSQL Database Cloud Service Metrics 1-381
Viewing or Listing Oracle NoSQL Database Cloud Service Metrics 1-390
How to Collect Oracle NoSQL Database Cloud Service Metrics? 1-392
Secure 1-393
About Oracle NoSQL Database Cloud Service Security Model 1-393
Authorization to access OCI resources 1-395
Setting Up Users, Groups, and Policies Using Identity and Access Management 1-396
Setting Up Users, Groups, and Policies Using Identity Domains 1-400
Managing Access to Oracle NoSQL Database Cloud Service Tables 1-402
Accessing NoSQL Tables Across Tenancies 1-402
Giving Another User Permission to Manage NoSQL Tables 1-404
Typical Policy Statements to Manage Tables 1-404
Reference 1-405
References for Analytics Integrator 1-406
Known issues with Oracle NoSQL Database Analytics Integrator 1-406
Failure handling in Oracle NoSQL Database Analytics Integrator 1-409
Reference on NoSQL Database Cloud Service 1-409
Oracle NoSQL Database Cloud Service Reference 1-410
Oracle NoSQL Database Cloud Service Policies Reference 1-418
Known Issues for Oracle NoSQL Database Cloud Service 1-423

Index

1
Oracle NoSQL Database Cloud Service
• Get Started
• Overview
• Configure
• Plan
• Devops
• Develop
• Manage
• Monitor
• Secure
• Reference

Get Started
• Getting Started with Oracle NoSQL Database Cloud Service
• Getting started with Oracle NoSQL Database Analytics Integrator

Getting Started with Oracle NoSQL Database Cloud Service


Oracle NoSQL Database Cloud Service is a fully managed database cloud service that is
designed for database operations that require predictable, single-digit millisecond latency
responses to simple queries.
NoSQL Database Cloud Service allows developers to focus on application development
rather than setting up cluster servers, or performing system monitoring, tuning, diagnosing,
and scaling. NoSQL Database Cloud Service is suitable for applications such as Internet of
Things, user experience personalization, instant fraud detection, and online display
advertising.
Once you are authenticated against your Oracle Cloud account, you can create a NoSQL
table and specify throughput and storage requirements for the table. Oracle reserves and
manages the resources to meet your requirements, and provisions capacity for you. Capacity
is specified using read and write units for throughput and GB for storage.
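For example, with the Java SDK the provisioned capacity of a table is expressed as a
TableLimits object. The following is a minimal sketch; the 50/50/25 values are purely
illustrative:

import oracle.nosql.driver.ops.TableLimits;

// Illustrative capacity: 50 read units, 50 write units, 25 GB of storage.
TableLimits limits = new TableLimits(50, 50, 25);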
This article has the following topics:


Setting up Your Service


If you're setting up Oracle NoSQL Database Cloud Service for the first time, follow
these tasks as a guide.

Task: Place an order for Oracle NoSQL Database Cloud Service or sign up for the Oracle Free Trial.
Reference: Signing Up for Oracle Cloud Infrastructure in Oracle Cloud Infrastructure Documentation.
Related Information: To learn how to estimate the monthly cost of your Oracle NoSQL Database Cloud Service subscription, see Estimating Your Monthly Cost. To upgrade your free account or to change your payment method, see Changing Your Payment Method in Oracle Cloud Infrastructure Documentation.

Task: Activate your Oracle Cloud account and sign in for the first time.
Reference: Signing In to the Console in Oracle Cloud Infrastructure Documentation.
Related Information: To familiarize yourself with the Oracle Cloud Infrastructure Console, see Using the Console in Oracle Cloud Infrastructure Documentation.

Task: (Recommended) Create a compartment for your service.
Reference: Creating a Compartment.
Related Information: If you're not familiar with compartments, see Understanding Compartments in Oracle Cloud Infrastructure Documentation.

Task: Manage security for your service.
Reference: About Oracle NoSQL Database Cloud Service Security Model.
Related Information: To familiarize yourself with the NoSQL Database Cloud Service security model, see About Oracle NoSQL Database Cloud Service Security Model.

Accessing the Service from the Infrastructure Console


Learn how to access Oracle NoSQL Database Cloud Service from the Infrastructure Console.
1. Locate the service URL from the welcome email, and then sign in to your Oracle
NoSQL Database Cloud Service.
2. Open the navigation menu, click Databases, and then click NoSQL Database.
3. Select the compartment created for your tables by the service administrator.

To view help for the current page, click the help icon at the top of the page.

Creating a Compartment
When you sign up for Oracle Cloud Infrastructure, Oracle creates your tenancy with a
root compartment that holds all your cloud resources. You then create additional
compartments within the tenancy (root compartment) and corresponding policies to
control access to the resources in each compartment. Before you create an Oracle
NoSQL Database Cloud Service table, Oracle recommends that you set up the
compartment where you want the table to belong.


You create compartments in Oracle Cloud Infrastructure Identity and Access Management
(IAM). See Setting Up Your Tenancy and Managing Compartments in Oracle Cloud
Infrastructure Documentation.

Authentication to connect to Oracle NoSQL Database


You need an Oracle Cloud account to connect your application to the Oracle NoSQL
Database Cloud Service.
If you do not already have an Oracle Cloud account, you can start with Oracle Cloud. You
need to authenticate yourself when you connect to the Oracle NoSQL Database. You can
be authenticated in one of the following ways:

Authentication Method #1: User Principals


Here you use an OCI user and an API key for authentication. The credentials that are used
for connecting your application are associated with a specific user. If you want to create a
user for your application, see Authorization to access OCI resources.
You can provide your credentials for this authentication method using one of the following
ways:
• Using a file on a local disk. The file contains details such as the user OCID, tenancy
OCID, region, private key path, and fingerprint. You can use this if you are working from a
secure network and storing private keys and configurations locally complies with your
security policies.
• Supplying the credentials via an API.
• Storing the credentials in a vault somewhere.
Information Comprising the Credentials:

Table 1-1 Credentials

What is it: Tenancy ID and User ID (both are OCIDs)
Where to find it: See Where to Get the Tenancy's OCID and User's OCID in Oracle Cloud Infrastructure Documentation.

What is it: API Signing Key
Where to find it: For the application user, an API signing key must be generated and uploaded. If this has already been done, this step can be skipped. To know more, see the following resources in Oracle Cloud Infrastructure Documentation:
• How to Generate an API Signing Key to generate the signing key with an optional pass phrase.
• How to Upload the Public Key to upload the API signing key to the user's account.

What is it: Fingerprint for the Signing Key and (Optional) Pass Phrase for the Signing Key
Where to find it: The fingerprint and pass phrase of the signing key are created while generating and uploading the API Signing Key. See How to Get the Key's Fingerprint in Oracle Cloud Infrastructure Documentation.


Tip:
Make a note of the location of your private key, optional pass phrase, and
fingerprint of the public key while generating and uploading the API Signing
Key.

After performing the tasks discussed above, collect the credentials information and
provide them to your application.
Providing the Credentials to your Application:
The Oracle NoSQL Database SDKs allow you to provide the credentials to an
application in multiple ways. The SDKs support a configuration file as well as one or
more interfaces that allow direct specification of the information. See the
documentation for the programming language driver that you are using to know about
the specific credentials interfaces.
If you are using a configuration file, the default location is ~/.oci/config. The SDKs
allow the file to reside in alternative locations. Its content looks like this:

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaas...7ap
fingerprint=d1:b2:32:53:d3:5f:cf:68:2d:6f:8b:5f:77:8f:07:13
key_file=~/.oci/oci_api_key_private.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaap...keq
pass_phrase=mysecretphrase

The [DEFAULT] line indicates that the lines that follow specify the DEFAULT profile. A
configuration file can include multiple profiles, prefixed with [PROFILE_NAME]. For
example:

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaas...7us
fingerprint=d1:b2:32:53:d3:5f:cf:68:2d:6f:8b:5f:77:8f:07:15
key_file=~/.oci/oci_api_key_private.pem
tenancy=ocid1.tenancy.oc1..aaaaabbap...keq
pass_phrase=mysecretphrase

[MYPROFILE]
user=ocid1.user.oc1..aaaaaaaas...7ap
fingerprint=d1:b2:32:53:d3:5f:cf:68:2d:6f:8b:5f:77:8f:07:13
key_file=~/.oci/oci_api_key_private.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaap...keq
pass_phrase=mysecretphrase
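If your application should use a profile other than DEFAULT, you can name it when
constructing the provider. A minimal Java sketch, assuming the SDK's two-argument
(configuration file, profile name) SignatureProvider constructor:

// Read credentials from the MYPROFILE profile instead of DEFAULT.
SignatureProvider sp = new SignatureProvider("~/.oci/config", "MYPROFILE");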

Authentication Method #2: Instance Principals


Instance Principals is a capability in Oracle Cloud Infrastructure Identity and Access
Management (IAM) that lets you make service calls from an instance. With instance
principals, you don’t need to configure user credentials for the services running on
your compute instances or rotate the credentials. Instances themselves are now a
principal type in IAM. Each compute instance has its own identity, and it authenticates


by using certificates that are added to the instance. These certificates are automatically
created, assigned to instances, and rotated.
Using instance principals authentication, you can authorize an instance to make API calls on
Oracle Cloud Infrastructure services. After you set up the required resources and policies, an
application running on an instance can call Oracle Cloud Infrastructure public services,
removing the need to configure user credentials or a configuration file. Instance principal
authentication can be used from an instance where you don't want to store a configuration
file.
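As a minimal Java sketch (the same createWithInstancePrincipal call appears in the
per-language examples later in this chapter; the us-ashburn-1 region is illustrative), an
application running on a configured compute instance can build its authorization
provider without any local credentials:

import oracle.nosql.driver.NoSQLHandle;
import oracle.nosql.driver.NoSQLHandleConfig;
import oracle.nosql.driver.NoSQLHandleFactory;
import oracle.nosql.driver.Region;
import oracle.nosql.driver.iam.SignatureProvider;

// The instance's own certificates, not user API keys, authenticate the calls.
SignatureProvider authProvider =
        SignatureProvider.createWithInstancePrincipal();
NoSQLHandleConfig config = new NoSQLHandleConfig(Region.US_ASHBURN_1);
config.setAuthorizationProvider(authProvider);
NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);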

Authentication Method #3: Resource Principals


You can use a resource principal to authenticate and access Oracle Cloud Infrastructure
resources. The resource principal consists of a temporary session token and secure
credentials that enable other Oracle Cloud services to authenticate themselves to Oracle
NoSQL Database. Resource principal authentication is very similar to instance principal
authentication, but is intended to be used for resources that are not instances, such as
serverless functions.
A resource principal enables resources to be authorized to perform actions on Oracle Cloud
Infrastructure services. Each resource has its own identity, and the resource authenticates
using the certificates that are added to it. These certificates are automatically created,
assigned to resources, and rotated, avoiding the need for you to create and manage your
own credentials to access the resource. When you authenticate using a resource principal,
you do not need to create and manage credentials to access Oracle Cloud Infrastructure
resources.
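The corresponding Java call mirrors the instance principal case; a minimal sketch (the
same method appears in the Java examples later in this chapter):

import oracle.nosql.driver.iam.SignatureProvider;

// Functions and other non-instance resources authenticate with the
// certificates attached to the resource itself.
SignatureProvider authProvider =
        SignatureProvider.createWithResourcePrincipal();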
Once authenticated, you need to be authorized to access Oracle Cloud Infrastructure
resources. See Authorization to access OCI resources for more details.

Connecting your Application to NDCS


Learn how to connect your application to Oracle NoSQL Database Cloud Service.
Your application connects to Oracle NoSQL Database Cloud Service after being
authenticated using one of the many methods available.
You can get yourself authenticated using one of the following:
• API signing key
– Hard-coded directly in an SDK (Java, Python, Go, Node.js, C#, Spring Data) program
– A configuration file with a default profile
– A configuration file with a non-default profile
• Instance Principals with Auth tokens
• Resource Principals
See Authentication to connect to Oracle NoSQL Database for more details on the
authentication options.

• Java

• Python


• Go

• Node.js

• C#

• Spring Data

Java
You can connect to NDCS using one of the following methods:
• Directly providing credentials in the code:

/* Use the SignatureProvider to supply your credentials to NoSQL Database.
 * By default, the SignatureProvider will read your OCI configuration file
 * from the default location, ~/.oci/config. See SignatureProvider for
 * additional options for reading configurations in other ways. */
SignatureProvider sp = new SignatureProvider(
    tenantId,    // a string, OCID
    userId,      // a string, OCID
    fingerprint, // a string
    privateKey,  // a string, content of private key
    passPhrase   // optional, char[]
);

// Create a handle to access the cloud service in the us-ashburn-1 region.
NoSQLHandleConfig config = new NoSQLHandleConfig(Region.US_ASHBURN_1);
config.setAuthorizationProvider(sp);
NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);

// At this point, your handle is set up to perform data operations.

• Connecting Using a Configuration File with a default profile:

/* Use the SignatureProvider to supply your credentials to NoSQL Database.
 * By default, the SignatureProvider will read your OCI configuration file
 * from the default location, ~/.oci/config. See SignatureProvider for
 * additional options for reading configurations in other ways. */
SignatureProvider sp = new SignatureProvider();

• Connecting Using a Configuration File with non-default profile:

/* Use the SignatureProvider to supply your credentials to NoSQL Database.
 * Specify the full location of the configuration file in the
 * constructor for SignatureProvider. */
final String config_file = "<path_to_config_file>";

SignatureProvider sp = new SignatureProvider(config_file);

• Connecting using an Instance Principal:


Instance Principal is an IAM service feature that enables instances to be authorized
actors (or principals) to perform actions on service resources. Each compute instance
has its own identity, and it authenticates using the certificates that are added to it.

SignatureProvider authProvider =
SignatureProvider.createWithInstancePrincipal();

• Connecting using a Resource Principal:


Resource Principal is an IAM service feature that enables the resources to be authorized
actors (or principals) to perform actions on service resources.

SignatureProvider authProvider =
SignatureProvider.createWithResourcePrincipal();

Creating a handle:
You create a handle to access the cloud service in the us-ashburn-1 region.

NoSQLHandleConfig config = new NoSQLHandleConfig(Region.US_ASHBURN_1);
config.setAuthorizationProvider(sp);
NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);

At this point, your handle is set up to perform data operations. See SignatureProvider for
more details on the Java classes used.
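Putting these pieces together, here is a minimal end-to-end Java sketch that
authenticates with the DEFAULT profile in ~/.oci/config, creates a handle, and releases
it when done (the QuickStart class name is illustrative):

import oracle.nosql.driver.NoSQLHandle;
import oracle.nosql.driver.NoSQLHandleConfig;
import oracle.nosql.driver.NoSQLHandleFactory;
import oracle.nosql.driver.Region;
import oracle.nosql.driver.iam.SignatureProvider;

public class QuickStart {
    public static void main(String[] args) throws Exception {
        // Credentials come from the DEFAULT profile in ~/.oci/config.
        SignatureProvider sp = new SignatureProvider();
        NoSQLHandleConfig config = new NoSQLHandleConfig(Region.US_ASHBURN_1);
        config.setAuthorizationProvider(sp);
        NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);
        try {
            // Table DDL and data operations go here.
        } finally {
            // Handles hold network resources; close them when finished.
            handle.close();
        }
    }
}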

Python
You can connect to NDCS using one of the following methods:
• Directly providing credentials in the code:

from borneo.iam import SignatureProvider

#
# Use SignatureProvider directly via API. Note that the
# private_key argument can either point to a key file or be the
# string content of the private key itself.
#
at_provider = SignatureProvider(tenant_id='ocid1.tenancy.oc1..tenancy',
                                user_id='ocid1.user.oc1..user',
                                private_key=key_file_or_key,
                                fingerprint='fingerprint',
                                pass_phrase='mypassphrase')

• Connecting Using a Configuration File with a default profile:

from borneo.iam import SignatureProvider

#
# Use SignatureProvider with a default credentials file and
# profile $HOME/.oci/config
#
at_provider = SignatureProvider()

• Connecting Using a Configuration File with non-default profile:

from borneo.iam import SignatureProvider

#
# Use SignatureProvider with a non-default credentials file and profile
#
at_provider = SignatureProvider(config_file='myconfigfile',
                                profile_name='myprofile')

• Connecting using an Instance Principal:


Instance Principal is an IAM service feature that enables instances to be
authorized actors (or principals) to perform actions on service resources. Each
compute instance has its own identity, and it authenticates using the certificates
that are added to it.

at_provider = SignatureProvider.create_with_instance_principal(region=my_region)

• Connecting using a Resource Principal:


Resource Principal is an IAM service feature that enables the resources to be
authorized actors (or principals) to perform actions on service resources.

at_provider = SignatureProvider.create_with_resource_principal()

Creating a handle:
The first step in any Oracle NoSQL Database Cloud Service application is to create a
handle used to send requests to the service. The handle is configured using your
credentials and other authentication information as well as the endpoint to which the
application will connect. An example endpoint is to use the region
Regions.US_ASHBURN_1.

from borneo import NoSQLHandle, NoSQLHandleConfig, Regions
from borneo.iam import SignatureProvider

# the region to which the application will connect
region = Regions.US_ASHBURN_1
# create a configuration object
config = NoSQLHandleConfig(region, at_provider)
# create a handle from the configuration object
handle = NoSQLHandle(config)

Go
You can connect your application to NDCS using any of the following methods:
• Directly providing credentials in the code:

privateKeyFile := "/path/to/privateKeyFile"
passphrase := "examplepassphrase"
sp, err := iam.NewRawSignatureProvider("ocid1.tenancy.oc1..tenancy",
    "ocid1.user.oc1..user",
    "us-ashburn-1",
    "fingerprint",
    "compartmentID",
    privateKeyFile,
    &passphrase)
if err != nil {
    return
}
cfg := nosqldb.Config{
    AuthorizationProvider: sp,
    // This is only required if the "region" property is not
    // specified in the config file.
    Region: "us-ashburn-1",
}

• Connecting Using a Configuration File with a default profile:

cfg := nosqldb.Config{
    // This is only required if the "region" property is not
    // specified in ~/.oci/config.
    // This takes precedence over the "region" property when both
    // are specified.
    Region: "us-ashburn-1",
}
client, err := nosqldb.NewClient(cfg)

• Connecting Using a Configuration File with non-default profile:

sp, err := iam.NewSignatureProviderFromFile("your_config_file_path",
    "your_profile_name", "", "compartment_id")
if err != nil {
    return
}
cfg := nosqldb.Config{
    AuthorizationProvider: sp,
    // This is only required if the "region" property is not specified
    // in the config file.
    Region: "us-ashburn-1",
}
client, err := nosqldb.NewClient(cfg)

• Connecting using an Instance Principal:


Instance Principal is an IAM service feature that enables instances to be authorized
actors (or principals) to perform actions on service resources. Each compute instance
has its own identity, and it authenticates using the certificates that are added to it.

sp, err := iam.NewSignatureProviderWithInstancePrincipal("compartment_id")
if err != nil {
    return
}
cfg := nosqldb.Config{
    AuthorizationProvider: sp,
    Region: "us-ashburn-1",
}
client, err := nosqldb.NewClient(cfg)

• Connecting using a Resource Principal:


Resource Principal is an IAM service feature that enables the resources to be
authorized actors (or principals) to perform actions on service resources.

sp, err := iam.NewSignatureProviderWithResourcePrincipal("compartment_id")
if err != nil {
    return
}
cfg := nosqldb.Config{
    AuthorizationProvider: sp,
    Region: "us-ashburn-1",
}
client, err := nosqldb.NewClient(cfg)

Node.js
You can connect to NDCS using one of the following methods:
• Directly providing credentials in the code:
You may specify credentials directly as part of the auth.iam property in the initial
configuration. Create a NoSQLClient instance as follows:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;

let client = new NoSQLClient({
    region: <your-service-region>,
    auth: {
        iam: {
            tenantId: myTenancyOCID,
            userId: myUserOCID,
            fingerprint: myPublicKeyFingerprint,
            privateKeyFile: myPrivateKeyFile,
            passphrase: myPrivateKeyPassphrase
        }
    }
});

• Connecting Using a Configuration File with a default profile:


You can store the credentials in an Oracle Cloud Infrastructure configuration file.
The default path for the configuration file is ~/.oci/config, where ~ stands for the
user's home directory. On Windows, ~ is the value of the USERPROFILE environment
variable. The file may contain multiple profiles. By default, the SDK uses the profile
named DEFAULT to store the credentials.
To use these default values, create a file named config in the ~/.oci directory with the
following contents:

[DEFAULT]
tenancy=<your-tenancy-ocid>
user=<your-user-ocid>
fingerprint=<fingerprint-of-your-public-key>
key_file=<path-to-your-private-key-file>
pass_phrase=<your-private-key-passphrase>
region=<your-region-identifier>

Note that you may also specify your region identifier together with the credentials in the
OCI configuration file. The driver will look at the location above by default, and if a region
is provided together with the credentials, you do not need to provide an initial
configuration and can use the no-argument constructor:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;

let client = new NoSQLClient();

Alternatively, you may choose to specify the region in the configuration:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;

let client = new NoSQLClient({ region: Region.US_ASHBURN_1 });

• Connecting Using a Configuration File with non-default profile:


You may choose to use a different path for the OCI configuration file as well as a different
profile within the configuration file. In this case, specify these within the auth.iam property
of the initial configuration. For example, if your OCI configuration file path is
~/myapp/.oci/config and you store your credentials under profile Jane, create the
NoSQLClient instance as follows:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;

let client = new NoSQLClient({
    region: Region.US_ASHBURN_1,
    auth: {
        iam: {
            configFile: '~/myapp/.oci/config',
            profileName: 'Jane'
        }
    }
});

• Connecting using an Instance Principal:


Instance Principal is an IAM service feature that enables instances to be authorized
actors (or principals) to perform actions on service resources. Each compute instance
has its own identity, and it authenticates using the certificates that are added to it.
Once set up, create NoSQLClient instance as follows:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;

const client = new NoSQLClient({
    region: Region.US_ASHBURN_1,
    compartment: 'ocid1.compartment.oc1..............',
    auth: {
        iam: {
            useInstancePrincipal: true
        }
    }
});


You may also use a JSON config file with the same configuration as described
above. Note that when using Instance Principal you must specify the compartment id
(OCID) as the compartment property. This is required even if you wish to use the default
compartment. Note that you must use the compartment id and not the compartment name
or path. In addition, when using Instance Principal, you may not prefix the table name
with the compartment name or path when calling NoSQLClient APIs.
• Connecting using a Resource Principal:
Resource Principal is an IAM service feature that enables the resources to be
authorized actors (or principals) to perform actions on service resources.
Once set up, create NoSQLClient instance as follows:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;

const client = new NoSQLClient({
    region: Region.US_ASHBURN_1,
    compartment: 'ocid1.compartment.oc1...........................',
    auth: {
        iam: {
            useResourcePrincipal: true
        }
    }
});

You may also use a JSON config file with the same configuration as described
above. Note that when using Resource Principal you must specify the compartment id
(OCID) as the compartment property. This is required even if you wish to use the default
compartment. Note that you must use the compartment id and not the compartment name
or path. In addition, when using Resource Principal, you may not prefix the table name
with the compartment name or path when calling NoSQLClient APIs.

C#
You can connect to NDCS using one of the following methods:
• Directly providing credentials in the code:
You may specify credentials directly as IAMCredentials when creating
IAMAuthorizationProvider. Create NoSQLClient as follows:

var client = new NoSQLClient(
    new NoSQLConfig
    {
        Region = <your-service-region>,
        AuthorizationProvider = new IAMAuthorizationProvider(
            new IAMCredentials
            {
                TenantId = myTenancyOCID,
                UserId = myUserOCID,
                Fingerprint = myPublicKeyFingerprint,
                PrivateKeyFile = myPrivateKeyFile
            })
    });

• Connecting Using a Configuration File with a default profile:


You can store the credentials in an Oracle Cloud Infrastructure configuration file. The
default path for the configuration file is ~/.oci/config, where ~ stands for the user's home
directory. On Windows, ~ is the value of the USERPROFILE environment variable. The file
may contain multiple profiles. By default, the SDK uses the profile named DEFAULT to store
the credentials.
To use these default values, create a file named config in the ~/.oci directory with the
following contents:

[DEFAULT]
tenancy=<your-tenancy-ocid>
user=<your-user-ocid>
fingerprint=<fingerprint-of-your-public-key>
key_file=<path-to-your-private-key-file>
pass_phrase=<your-private-key-passphrase>
region=<your-region-identifier>

Note that you may also specify your region identifier together with the credentials in the
OCI configuration file. By default, the driver will look for the credentials and a region in the
OCI configuration file at the default path and in the default profile. Thus, if you provide the
region together with the credentials as shown above, you can create a NoSQLClient
instance without passing any configuration:

var client = new NoSQLClient();

Alternatively, you may specify the region (as well as other properties) in NoSQLConfig:

var client = new NoSQLClient(
    new NoSQLConfig
    {
        Region = Region.US_ASHBURN_1,
        Timeout = TimeSpan.FromSeconds(10)
    });

• Connecting Using a Configuration File with non-default profile:


You may choose to use different path for OCI configuration file as well as different profile
within the configuration file. In this case, specify these within auth.iam property of the
initial configuration. For example, if your OCI configuration file path is ~/myapp/.oci/
config and you store your credentials under profile Jane:
Then create NoSQLClient instance as follows:

var client = new NoSQLClient(
    new NoSQLConfig
    {
        Region = Region.US_ASHBURN_1,
        AuthorizationProvider = new IAMAuthorizationProvider(
            "~/myapp/.oci/config", "Jane")
    });

• Connecting using an Instance Principal:


Instance Principal is an IAM service feature that enables instances to be authorized
actors (or principals) to perform actions on service resources. Each compute instance
has its own identity, and it authenticates using the certificates that are added to it.
Once set up, create NoSQLClient instance as follows:

var client = new NoSQLClient(
    new NoSQLConfig
    {
        Region = <your-service-region>,
        Compartment = "ocid1.compartment.oc1.............................",
        AuthorizationProvider =
            IAMAuthorizationProvider.CreateWithInstancePrincipal()
    });

You may also represent the same configuration in JSON as follows:

{
    "Region": "<your-service-region>",
    "AuthorizationProvider":
    {
        "AuthorizationType": "IAM",
        "UseInstancePrincipal": true
    },
    "Compartment": "ocid1.compartment.oc1............................."
}

Note that when using Instance Principal you must specify the compartment id (OCID)
as the compartment property. This is required even if you wish to use the default
compartment. Note that you must use the compartment id and not the compartment name
or path. In addition, when using Instance Principal, you may not prefix the table name
with the compartment name or path when calling NoSQLClient APIs.
• Connecting using a Resource Principal:
Resource Principal is an IAM service feature that enables the resources to be
authorized actors (or principals) to perform actions on service resources. Once set
up, create NoSQLClient instance as follows:

var client = new NoSQLClient(
    new NoSQLConfig
    {
        Region = <your-service-region>,
        Compartment = "ocid1.compartment.oc1.............................",
        AuthorizationProvider =
            IAMAuthorizationProvider.CreateWithResourcePrincipal()
    });

You may also represent the same configuration in JSON as follows:

{
    "Region": "<your-service-region>",
    "AuthorizationProvider":
    {
        "AuthorizationType": "IAM",
        "UseResourcePrincipal": true
    },
    "Compartment": "ocid1.compartment.oc1............................."
}

Note that when using Resource Principal you must specify the compartment id (OCID) as
the compartment property. This is required even if you wish to use the default
compartment. Note that you must use the compartment id and not the compartment name
or path. In addition, when using Resource Principal, you may not prefix the table name
with the compartment name or path when calling NoSQLClient APIs.

Spring Data
You can use one of these methods to connect to the Oracle NoSQL Database Cloud Service.
1. Pass a SignatureProvider instance to the NosqlDbConfig constructor to configure the
Spring Data Framework to connect and authenticate with the Oracle NoSQL Database
Cloud Service. See SignatureProvider in the Java SDK API Reference.

import java.io.File;
import oracle.nosql.driver.iam.SignatureProvider;

SignatureProvider signatureProvider = new SignatureProvider(
    <tenantID>,    // The Oracle Cloud Identifier (OCID) of the tenancy.
    <userID>,      // The Oracle Cloud Identifier (OCID) of a user in the tenancy.
    <fingerprint>, // The fingerprint of the key pair used for signing.
    new File(<privateKeyFile>), // Full path to the key file.
    passphrase     // Optional. A char[] passphrase for the key, if it is encrypted.
);

2. Use the SignatureProvider with the Instance principal authentication to connect to the
Oracle NoSQL Database Cloud Service. This requires a one-time setup. For more
details, see Instance principal authentication.

SignatureProvider.createWithInstancePrincipal()

3. Use the Cloud Simulator, which requires either an AuthorizationProvider instance from
the NosqlDbConfig class or a helper method such as
NosqlDbConfig.createCloudSimConfig().

com.oracle.nosql.spring.data.NosqlDbFactory.CloudSimProvider.getProvider()

To expose the connection and security parameters to the Oracle NoSQL Database SDK for
Spring Data, you need to create a class that extends the AbstractNosqlConfiguration
class. This provides a NosqlDbConfig Spring bean that describes how to connect to the
Oracle NoSQL Database Cloud Service.
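A minimal sketch of such a configuration class, assuming the DEFAULT profile in
~/.oci/config supplies the credentials, the (Region, SignatureProvider) NosqlDbConfig
constructor, and the AppConfig class name (your own choice):

import java.io.IOException;

import oracle.nosql.driver.Region;
import oracle.nosql.driver.iam.SignatureProvider;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.oracle.nosql.spring.data.config.AbstractNosqlConfiguration;
import com.oracle.nosql.spring.data.config.NosqlDbConfig;

@Configuration
public class AppConfig extends AbstractNosqlConfiguration {

    // Expose the connection details as the NosqlDbConfig Spring bean.
    @Bean
    public NosqlDbConfig nosqlDbConfig() throws IOException {
        // Credentials come from the DEFAULT profile in ~/.oci/config.
        SignatureProvider provider = new SignatureProvider();
        return new NosqlDbConfig(Region.US_ASHBURN_1, provider);
    }
}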


Typical Workflow
Typical sequence of tasks to work with Oracle NoSQL Database Cloud Service.
If you're developing applications using Oracle NoSQL Database Cloud Service for the
first time, follow these tasks as a guide.

Task: Connect your application
Description: Connect your application to use Oracle NoSQL Database Cloud Service tables.
More Information: Connecting your Application to NDCS

Task: Develop your application
Description: Develop your application after connecting to Oracle NoSQL Database Cloud Service tables.
More Information: Developing in Oracle NoSQL Database Cloud Simulator

If you're setting up Oracle NoSQL Database Cloud Service for the first time, see
Setting up Your Service.

Quick Start Tutorials


Access the quick start tutorials and get started with the service.
1. Get access to the service.
Sign up for a Cloud promotion or purchase an Oracle Cloud subscription. Activate
your order, create users (optional). See Setting up Your Service.
2. Acquire credentials to connect your application. See Authentication to connect to
Oracle NoSQL Database.
3. Create a table in Oracle NoSQL Database Cloud Service.
• To get started, see Get Started with Tables in Oracle NoSQL Database Cloud
Service.

Getting started with Oracle NoSQL Database Analytics Integrator


Oracle NoSQL Database Analytics Integrator copies data located in a NoSQL
Database Cloud Service table to a database created in the Oracle Autonomous Data
Warehouse Cloud Service.

Overview of Oracle NoSQL Database Analytics Integrator


After storing data in the Oracle NoSQL Database Cloud Service, you may want to
analyze that data using Oracle Analytics, either the cloud-based or the desktop
version. Because Oracle Analytics does not currently support Oracle NoSQL Database
Cloud Service as a data source, the Oracle NoSQL Database Analytics Integrator can
be used to copy data from Oracle NoSQL Database Cloud Service tables to
corresponding tables in the Oracle Autonomous Data Warehouse Cloud Service
(ADW). Once data from a NoSQL table is copied to the ADW database, all of the tools
provided by Oracle Analytics can be used to visualize and analyze that data.
Once you complete the prerequisites for using the Oracle Cloud Infrastructure
services, you can install the utility, configure it, and execute it from the command line
of an Oracle Cloud Compute Instance, or even from your local environment.


Download the Oracle NoSQL Database Analytics Integrator from the Oracle Technology
Network and install it in the desired compute environment. Once installed, you then have all
the classes needed to copy data from the Oracle NoSQL Database Cloud Service to a
database in the Oracle Autonomous Data Warehouse.

Typical Workflow for Using Oracle NoSQL Database Analytics Integrator


Table 1-2 Analytics Integrator Workflow

Task: Create OCI account
Description: Sign up for an account on the Oracle Cloud Infrastructure.
More Information: Oracle Cloud Infrastructure Signup

Task: Create a Compute Instance
Description: Create a Compute Instance from which the Oracle NoSQL Database Analytics Integrator can be installed and executed.
More Information: Compute Instance

Task: Create a NoSQL table
Description: Create one or more tables in the Oracle NoSQL Database Cloud Service and populate those tables with data.
More Information: Create and populate a NoSQL table

Task: Create a bucket in OCI Object Storage
Description: To set up access to the Oracle Object Storage Service, you need to create a bucket for Object Storage.
More Information: Create a bucket in Object Storage

Task: Create a database in the Oracle Autonomous Data Warehouse
Description: You need to create a database to access the Oracle Cloud Autonomous Data Warehouse from Oracle NoSQL Database Analytics Integrator.
More Information: Create a database in the Autonomous Data Warehouse

Task: Download and install the client credentials
Description: For the Oracle NoSQL Database Analytics Integrator to connect securely to the ADW database, the utility uses the credentials contained in an Oracle Wallet.
More Information: Install credentials needed for a secure database connection

Task: Generate an authorization token (optional)
Description: For user-based authentication of the ADW database with Object Storage, you need to generate an authentication token (AUTH_TOKEN) that the database can use to access files in the Object Storage bucket.
More Information: Generate an authorization token for Object Storage

Task: Enable/store the credential the ADW database should use to access the objects in Object Storage
Description: If you wish to have the ADW database authenticate with Object Storage using a Resource Principal, then you must perform the prerequisites to use Resource Principal with the ADW database, enable the Resource Principal to access Object Storage (see Create a Dynamic Group for the Compute Instance and the ADW database and Create a Policy with appropriate permissions for the dynamic group), and then enable the Resource Principal to access the objects in Object Storage. Alternatively, if you wish to have the ADW database authenticate with Object Storage using the user's AUTH_TOKEN that you generated (see Generate an authorization token for Object Storage), then you must store the AUTH_TOKEN in the ADW database, which will also enable that token to access the objects in Object Storage.
More Information: Enable the OCI Resource Principal Credential or Store/Enable the User's Object Storage AUTH_TOKEN in the ADW Database

Task: Create a Dynamic group for the Compute Instance and (optionally) the ADW database
Description: To authorize your compute instance to perform actions on the NoSQL Service, Object Storage, and ADW, a dynamic group must be created and a set of matching rules must be added for your instance. A dynamic group is also required if you wish to employ the OCI Resource Principal when authenticating the ADW database with Object Storage.
More Information: Create a Dynamic Group for the Compute Instance and the ADW database

Task: Create a Policy with Appropriate Permissions for the Dynamic Group
Description: Once a dynamic group is created, you must create a policy that grants permissions allowing members of that group to do read, write, and management operations.
More Information: Create a Policy with appropriate permissions for the dynamic group

Task: Install Oracle NoSQL Database Analytics Integrator
Description: You can download the Oracle NoSQL Database Analytics Integrator from Oracle Technology Network.
More Information: Installation

Task: Create a configuration file for the integrator
Description: Before you can execute the Oracle NoSQL Database Analytics Integrator, you must first create a configuration file. This configuration file will be used when invoking the utility.
More Information: Create a configuration file for the integrator

Task: Running the integrator tool
Description: The Oracle NoSQL Database Analytics Integrator can be executed by simply typing a command on the command line.
More Information: Running the integrator tool

Task: Verify Data in the Autonomous Database
Description: After executing the Oracle NoSQL Database Analytics Integrator to copy the data from your NoSQL table to the Autonomous Database in ADW, you can verify that the NoSQL table data has been copied correctly.
More Information: Verify the data in the Oracle Autonomous Database

Task: Verify the data in Oracle Analytics
Description: You can connect Oracle Analytics to that database and verify that Oracle Analytics can access and analyze the data in the NoSQL table.
More Information: Verify the data in Oracle Analytics

Overview
• What's New in Oracle NoSQL Database Cloud Service
• Cloud Concepts
• Features of Oracle NoSQL Database Cloud Service
• Oracle NoSQL Database Cloud Service Subscription

What's New in Oracle NoSQL Database Cloud Service


This document describes what's new in Oracle NoSQL Database Cloud Service on all
infrastructure platforms where it's available.
The information is organized according to when a specific feature or capability became
available. When new and changed features become available, Oracle NoSQL Database
Cloud Service will be upgraded in the regions where Oracle Cloud services are hosted. You
don’t need to request an upgrade to be able to use the new features — they come to you
automatically.
Here’s an overview of new features and enhancements added recently to improve your
Oracle NoSQL Database Cloud Service experience.

Topics
• May 2023
• December 2022

• September 29, 2022
• September 2022
• August 2022
• June 2022
• February 2022
• December 2021
• November 2021
• October 2021
• May 2021
• February 2021
• December 2020
• November 2020
• October 2020
• September 2020
• August 2020
• July 2020
• June 2020
• May 2020
• April 2020
• March 2020
• February 2020
• September 2019
• May 2019

May 2023

Feature: Changes in Terraform script for updating NoSQL table definition
Description: While using Terraform scripts, the table schema can be updated based on a new version of the CREATE TABLE DDL statement instead of an ALTER TABLE statement. That is, to update the definition of a table, you use a new CREATE TABLE as the ddl_statement; internally, the compiler parses the DDL, compares it with the existing table definition, generates an equivalent ALTER TABLE statement, and applies it to the table.

Feature: Additional features available in IntelliJ plugin
Description: The IntelliJ plugin for Oracle NoSQL Database offers these new features:
• Add new columns using form-based entry or supply DDL statements
• Drop columns
• Create indexes
• Drop indexes
• Execute DML statements to update, insert, and delete data from a table

Feature: Additional features available in Visual Studio Code Extension
Description: The Oracle NoSQL Database Visual Studio (VS) Code extension offers these new features:
• Add new columns using form-based entry or supply DDL statements
• Drop columns
• Create indexes
• Drop indexes
• Execute DML statements to update, insert, and delete data from a table
• Download the query result into a JSON file after running a SELECT query
• Download each row of the result obtained after running a SELECT query into a JSON file

December 2022

Feature: New Data Region location available
Description: Oracle NoSQL Database Cloud Service is now available in a new region:
• Chicago in North America data region.
See Data Regions and Associated Service URLs.

Feature: Migrator utility updates
Description: Enhanced the migrator to support importing CSV files that conform to the RFC4180 standard. Users can create a NoSQL table that corresponds to CSV file fields either manually or through the migrator. The migrator now supports table creation with on-demand capacity and import/export of child tables in NDCS. Additionally, it provides an option to specify the OCI Object Storage service namespace for valid sources and sinks.

September 29, 2022

Feature: New functionality in OCI console
Description: The following new functionality has been added in the OCI console:
• Bulk upload of table rows: The Upload Data button in the Table Details page allows bulk uploading of data from a local file into the table, via the browser. The bulk upload feature is intended for loading less than a few thousand rows.
• Query execution plan: You can now access the query execution plan for your SQL queries from the OCI console. On the Table Details page, you have a button to view the query execution plan.


September 2022

Feature: New Data Region locations available
Description: Oracle NoSQL Database Cloud Service is now available in two new regions:
• Italy Northwest (Milan) in EMEA data region.
• Spain Central (Madrid) in EMEA data region.
See Data Regions and Associated Service URLs.

August 2022

Feature: New Data Region location available
Description: Oracle NoSQL Database Cloud Service is now available in a new region:
• Mexico Central (Queretaro) in LAD data region.
See Data Regions and Associated Service URLs.

Feature: Availability of Child Tables
Description: Table hierarchies (child tables) are available in the cloud. With the availability of table hierarchy, developers have additional flexibility when choosing the best data model to meet their business and application workload requirements. With child tables comes the ability to perform left outer join (nested table) queries.

Feature: Migrator utility updates
Description: Enhanced the migrator to support importing files from DynamoDB. The process is simple: export your DynamoDB tables as JSON files to AWS S3, then grab those files and import them into Oracle NoSQL.

June 2022

Feature: New Data Region location available
Description: Oracle NoSQL Database Cloud Service is now available in a new region:
• France Central (Paris) in EMEA data region.
See Data Regions and Associated Service URLs.

Feature: Format change for JSON output
Description: Added pretty print JSON in the query section of the console.

Feature: New query driver in the console
Description: Removed the REST-based query driver from the console and replaced it with a JavaScript driver. This adds significantly more functionality to the console when it comes to querying your data.

February 2022

Feature: New Data Region location available
Description: Oracle NoSQL Database Cloud Service is now available in a new region:
• South Africa Central (Johannesburg) in EMEA data region.
See Data Regions and Associated Service URLs.

Feature: Oracle NoSQL Database Migrator
Description: With this release, the NoSQL Database Migrator supports the below listed functionality:
• Sink for Parquet - Export Oracle NoSQL Database table data as Parquet files.
• Sink for Parquet in OCI Object Storage - Export Oracle NoSQL Database table data as Parquet files to OCI Object Storage.
• TTL Support - Export and import of Row TTL data.
• New transformation includeFields.
For more details, see Overview of Oracle NoSQL Database Migrator in Using Oracle NoSQL Database Cloud Service.

Feature: Oracle NoSQL Database Visual Studio (VS) Code Extension
Description: You can use the new Oracle NoSQL Database Visual Studio (VS) Code Extension to browse tables and execute queries on your Oracle NoSQL Database Cloud Service instance or simulator. See About Oracle NoSQL Database Visual Studio Code Extension.

Feature: On Demand pricing model
Description: Oracle NoSQL Database Cloud Service added an on-demand pricing model. With this model, the service automatically scales the read and write capacities to meet dynamic workload needs. Customers don't need to provision the read or write capacities for each table/collection. The monthly billing captures the application's actual read and write capacities and charges accordingly.

December 2021

Feature: New Data Region locations available
Description: Oracle NoSQL Database Cloud Service is now available in two new regions:
• Sweden Central (Stockholm) in EMEA data region.
• UAE Central (Abu Dhabi) in EMEA data region.
See Data Regions and Associated Service URLs.

November 2021

Feature: New Data Region locations available
Description: Oracle NoSQL Database Cloud Service is now available in two new regions:
• Marseille (France South) in EMEA data region.
• Singapore in APAC data region.
See Data Regions and Associated Service URLs.

Feature: New OCI IAM Identity Domains service
Description: The new OCI IAM service introduces identity domains. Identity domains are the next generation of IDCS instances (stripes). Each OCI IAM identity domain represents a stand-alone identity and access management solution.


October 2021

Feature: New Data Region locations available
Description: Oracle NoSQL Database Cloud Service is now available in a new region:
• Jerusalem (Israel) in EMEA data region.
See Data Regions and Associated Service URLs.

Feature: Manage tables and table data from your .NET application
Description: You can now use the .NET SDK that enables your .NET application to create, update, and drop tables as well as add, read, and delete data in the tables.

May 2021

Feature: New Data Region locations available
Description: Oracle NoSQL Database Cloud Service is now available in a new region:
• Brazil South East (Vinhedo) in LAD data region.
See Data Regions and Associated Service URLs.

February 2021

Feature: SQL new features
Description:
• SQL string functions regex_like(any, string, string), regex_like(any, string)
• IN operator, DISTINCT operator
• Untyped JSON index
• SQL ORDER BY and GROUP BY clauses

Feature: Spring Data Driver
Description: The Oracle NoSQL Database Cloud Service SDK for Spring Data provides POJO (Plain Old Java Object) centric modeling and integration between the Oracle NoSQL Database Cloud Service and the Spring Data Framework. The following features are currently supported by the Oracle NoSQL Database Cloud Service SDK for Spring Data:
• Generic CRUD operations on a repository using methods in the CrudRepository interface.
• Pagination and sorting operations using methods in the PagingAndSortingRepository interface.
• Derived queries.
• Native queries.
For more information, see Oracle NoSQL Database SDK for Spring Data.


December 2020

Feature: New Data Region locations available
Description: Oracle NoSQL Database Cloud Service is now available in a new region:
• Chile Central (Santiago) in LAD data region.
See Data Regions and Associated Service URLs.

November 2020

Feature: New Data Region locations available
Description: Oracle NoSQL Database Cloud Service is now available in a new region:
• Cardiff (UK) in EMEA data region.
See Data Regions and Associated Service URLs.

Feature: Always Free NoSQL Database Service
Description: As part of the Oracle Cloud Free Tier, the Oracle NoSQL Database Cloud Service participates as an Always Free service.
• You may have up to three Always Free NoSQL tables in your tenancy.
• You can have both Always Free and regular tables in the same tenancy.
• The Always Free NoSQL tables are displayed in the console with an "Always Free" label next to the table name.
• An Always Free NoSQL table cannot be changed to a regular table or vice versa.

October 2020
Summary of October 2020 new features available in Oracle NoSQL Database Cloud Service.


Oracle NoSQL Database Migrator: You can now use Oracle NoSQL Database Migrator to migrate NoSQL tables from one data source to another. This tool can operate on tables in Oracle NoSQL Database Cloud Service and Oracle NoSQL Database on-premise, and handle JSON and MongoDB-formatted JSON input files. With this release, NoSQL Database Migrator supports the following migration options:
• Oracle NoSQL Database on-premise to Oracle NoSQL Database Cloud Service, and vice versa
• Between two Oracle NoSQL Database on-premise databases
• Between two Oracle NoSQL Database Cloud Service tables
• JSON file to Oracle NoSQL Database on-premise, and vice versa
• JSON file to Oracle NoSQL Database Cloud Service, and vice versa
• MongoDB-formatted JSON file to an Oracle NoSQL Database table, on-premise or cloud
For more details, see Overview of Oracle NoSQL Database Migrator in Using Oracle NoSQL Database Cloud Service.

September 2020

New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Dubai in the EMEA data region.
See Data Regions and Associated Service URLs.

August 2020

New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in a new region:
• San Jose in the North America data region.
See Data Regions and Associated Service URLs.


July 2020

New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Jeddah in the EMEA data region.
See Data Regions and Associated Service URLs.

June 2020

New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in a new region:
• Chuncheon in the APAC data region.
See Data Regions and Associated Service URLs.

May 2020

New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in five new regions:
1. North America data region: Montreal
2. APAC data region: Osaka, Melbourne, Hyderabad
3. LAD data region: Sao Paulo
See Data Regions and Associated Service URLs.


April 2020

New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in five new regions:
1. North America data region: Toronto
2. APAC data region: Tokyo, Seoul
3. EMEA data region: Amsterdam, London
See Data Regions and Associated Service URLs.

March 2020

Manage tables and table data from your Node.js applications: You can now use the Node.js SDK, which enables your Node.js applications to create, update, and drop tables as well as add, read, and delete data in the tables.

Manage tables and table data from your Go applications: You can now use the Go SDK, which enables your Go applications to create, update, and drop tables as well as add, read, and delete data in the tables.

New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in five new regions:
1. North America data region: Ashburn
2. APAC data region: Mumbai, Sydney
3. EMEA data region: Frankfurt, Zurich
See Data Regions and Associated Service URLs.


February 2020

Integration with Oracle Cloud Infrastructure: Oracle NoSQL Database Cloud Service is completely integrated with Oracle Cloud Infrastructure. As a result, the following features are integrated into the NoSQL Database Cloud Service:
• Oracle Cloud Infrastructure Identity and Access Management replaces Oracle Identity Cloud Service for implementing identity, permissions, and compartments.
• Oracle Cloud Infrastructure Console to create, manage, and monitor NoSQL tables and data.
• Oracle Cloud Infrastructure tags
• Oracle Cloud Infrastructure auditing
• Oracle Cloud Infrastructure search
• Oracle Cloud Infrastructure limits and quotas
• Oracle Cloud Infrastructure monitoring

Table naming changes due to Oracle Cloud Infrastructure Identity and Access Management integration: Table naming has changed to fit in with Oracle Cloud Infrastructure compartments. For details, see About Compartments.

Query language updates: The Oracle NoSQL Database Cloud Service query language has been improved with the following new features:
• Enhanced query support for sorted and aggregated queries.
• Support for geo_near in NoSQL Database queries.
• Support for identity columns.

September 2019

Use IntelliJ plug-in to quickly build and run queries: You can now use the IntelliJ plug-in to browse tables and execute queries on your Oracle NoSQL Database Cloud Service instance or simulator.

Universal Credit accounts do not use My Services Dashboard: After signing into Oracle Cloud, you use the Oracle Cloud Infrastructure Console to access your service. Previously you were required to access the service from the My Services Dashboard.

New Data Region locations available: Oracle NoSQL Database Cloud Service is now available in four new regions:
• Canada Southeast (Toronto)
• UK South (London)
• South Korea Central (Seoul)
• Japan East (Tokyo)
See Data Regions and Associated Service URLs.


May 2019

Manage tables and table data from your Python application: You can now use the Python SDK, which enables your Python application to create, update, and drop tables as well as add, read, and delete data in the tables.

Cloud Concepts
Learn the Oracle NoSQL Database Cloud Service concepts.
• Table: A Table is a collection of rows where each row holds a data record from
your application.
Each table row consists of key and data fields, which are defined when the table is created. In addition, a table has a specified storage capacity and supports a defined maximum read and write throughput. The storage capacity is specified at table creation time and can be changed later.
– High-Level Data Types: Oracle NoSQL Database Cloud Service supports all
three types of Big Data. You can create NoSQL tables to store structured,
unstructured, or semi-structured data.
* Structured: This type of data can be organized and stored in tables with a
predefined structure or schema. For example, the data stored in regular
relational database tables come under this category. They adhere to a
fixed schema and are simple to manage and analyze. Data generated
from credit card transactions and e-commerce transactions are a few
examples of structured data.
* Semi-Structured: The data that can not fit into a relational database but
can be organized into rows and columns after a certain level of processing
is called semi-structured data. Oracle NoSQL Database Cloud Service
can store and process semi-structured data by storing key-value pairs in
NoSQL tables. XML data is an example of semi-structured data.
* Unstructured: The data that can not be organized or stored in tables with
a fixed schema or structure are called Unstructured data. Videos, images,
and media are a few examples of unstructured data. Oracle NoSQL
Database Cloud Service lets you define tables with rows of JSON data
type to store unstructured data.
– Data Types: A table is created using DDL (Data Definition Language) which
defines the data types and primary keys used for the table.
Oracle NoSQL Database Cloud Service supports several data types, including
several numeric types, string, binary, timestamp, maps, arrays, records, and a
special JSON data type which can hold any valid JSON data. Applications can
use unstructured tables where a row uses the JSON data type to store the
data, or use structured tables where all row types are defined and enforced.
See Supported Data Types to view the list of data types supported in Oracle
NoSQL Database Cloud Service.
Unstructured tables are flexible, but typed data is safer from an enforcement and storage-efficiency point of view. A table schema can be modified, although the table structure is less flexible to change. (A short SDK sketch at the end of this concepts list shows creating a table with both typed columns and a JSON column.)


– Indexes: Applications can create an index on any data field which has a data type
that permits indexing, including JSON data fields. JSON indexes are created using a
path expression into the JSON data.
– Capacity: When you create a table, you can choose between Provisioned Capacity
and On-Demand Capacity.
* By choosing Provisioned Capacity, you also specify throughput and storage
resources available for the table. The read and write operations to the table are
limited by the read and write throughput capacity that you define. The amount of
space that the table can use is limited by the storage capacity.
* By choosing On-Demand Capacity, the read and write operations to the table are
automatically managed by Oracle. The amount of space that the table can use is
limited by the storage capacity.
See Estimating Capacity to learn how to estimate capacity for your application
workload.
• Distribution and Sharding: Although not visible to the user, Oracle NoSQL Database
Cloud Service tables are sharded and replicated for availability and performance.
Therefore, you should consider this during schema design.
– Primary and Shard keys: An important consideration for a table is the designation of
the primary key, and the shard key. When you create a table in Oracle NoSQL
Database Cloud Service, the data in the table is automatically sharded based on a
portion of the table primary key, called the shard key. See Primary Keys and Shard
Keys for considerations on how to designate the primary and shard keys.
– Read Consistency: Read consistency specifies different levels of flexibility in terms
of which copy of the data is used to fulfill a read operation. Oracle NoSQL Database
Cloud Service provides two levels of consistency: EVENTUAL and ABSOLUTE.
Applications can specify ABSOLUTE consistency, which guarantees that all read
operations return the most recently written value for a designated key. Or,
applications capable of tolerating inconsistent data can specify EVENTUAL consistency,
allowing the database to return a value more quickly even if it is not up-to-date.
ABSOLUTE consistency results in a higher cost, consuming twice the number of read
units for the same data relative to EVENTUAL consistency, and should only be used
when required. Consistency can be set for a NoSQL handle, or as an optional
argument for all read operations.
• Identity Access and Management: Oracle NoSQL Database Cloud Service uses the
Oracle Cloud Infrastructure Identity and Access Management to provide secure access to
Oracle Cloud. Oracle Cloud Infrastructure Identity and Access Management enables you
to create user accounts and give users permission to inspect, read, use, or manage
Oracle NoSQL Database Cloud Service tables. See Overview of Oracle Cloud
Infrastructure Identity and Access Management in Oracle Cloud Infrastructure
Documentation.
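To ground several of the concepts above (table creation with a typed schema plus a JSON column, provisioned capacity limits, and per-request consistency), here is a minimal sketch using the Oracle NoSQL Database Python SDK (borneo). The region, credentials source, table name, fields, and limit values are illustrative assumptions, not values prescribed by this guide:

from borneo import (Consistency, GetRequest, NoSQLHandle, NoSQLHandleConfig,
                    TableLimits, TableRequest)
from borneo.iam import SignatureProvider

# SignatureProvider() reads OCI IAM credentials from ~/.oci/config by default.
handle = NoSQLHandle(NoSQLHandleConfig('us-ashburn-1', SignatureProvider()))

# Typed columns enforce a schema; the JSON column holds schema-less data.
ddl = '''CREATE TABLE IF NOT EXISTS product (
             id   STRING,
             name STRING,
             info JSON,
             PRIMARY KEY(id))'''
request = TableRequest().set_statement(ddl).set_table_limits(
    TableLimits(50, 50, 25))  # read units, write units, storage (GB)

# Table DDL is asynchronous: wait up to 60 s, polling every 2 s.
handle.table_request(request).wait_for_completion(handle, 60000, 2000)

# ABSOLUTE reads return the latest value but cost twice the read units
# of an EVENTUAL read of the same data.
get = GetRequest().set_table_name('product') \
    .set_key({'id': 'appleproduct_1'}) \
    .set_consistency(Consistency.ABSOLUTE)
print(handle.get(get).get_value())

The same TableRequest pattern can later raise or lower a table's provisioned limits, and the console can switch a table between Provisioned and On-Demand Capacity.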

Features of Oracle NoSQL Database Cloud Service


Learn about the key features of Oracle NoSQL Database Cloud Service and Always Free
NoSQL Database Service.
This article has the following topics:


Key Features
Learn the key features of Oracle NoSQL Database Cloud Service.
• Fully Managed with Zero Administration: Developers do not need to administer
data servers or the underlying infrastructure and security. Oracle maintains the
hardware and software which allows developers to focus on building applications.
• Faster Development Life Cycle: After purchasing access to the service,
developers write their applications, and then connect to the service using their
credentials. Reading and writing data can begin immediately. Oracle performs
Database Management, Storage Management, High Availability, and Scalability
which helps developers concentrate on delivering high-performance applications.
• High Performance and Predictability: Oracle NoSQL Database Cloud Service
takes advantage of the latest component technologies in the Oracle Cloud
Infrastructure by providing high performance at scale. Developers know that their
applications return data with predictable latencies, even as their throughput and
storage requirements increase.
• On-Demand Throughput and Storage Provisioning: Oracle NoSQL Database
Cloud Service scales to meet application throughput performance requirements
with low and predictable latency. As workloads increase with periodic business
fluctuations, applications can increase their provisioned throughput to maintain a
consistent user experience. As workloads decrease, the same applications can
reduce their provisioned throughput, resulting in lower operating expenses. The
same holds true for storage requirements. Those can be adjusted based on
business fluctuations. You can increase or decrease the storage using the Oracle
Cloud Infrastructure Console or the TableRequest API.
You can choose between an on-demand capacity allocation or provisioned-based
capacity allocation:
– With on-demand capacity, you don't need to provision the read or write
capacities for each table. You only pay for the read and write units that are
actually consumed. Oracle NoSQL Database Cloud Service automatically
manages the read and write capacities to meet the needs of dynamic
workloads.
– With provisioned capacity, you can increase or decrease the throughput using
the Oracle Cloud Infrastructure Console or the TableRequest API.
You can also modify the capacity mode from Provisioned Capacity to On-Demand
Capacity and vice-versa.
• Simple APIs: Oracle NoSQL Database Cloud Service provides easy-to-use
CRUD (Create Read Update Delete) APIs that allow developers to easily create
tables and maintain data in them.
• Data Modeling: Oracle NoSQL Database Cloud Service supports both schema-
based and schema-less (JSON) modeling.
• Data Safety in Redundancy: The Oracle NoSQL Database Cloud Service stores
data across multiple Availability Domains (ADs) or Fault Domains (FDs) in single
AD regions. If an AD or FD becomes unavailable, user data is still accessible from
another AD or FD.


• Data Security: Data is encrypted at rest (on disk) with Advanced Encryption Standard
(AES 256). Data is encrypted in motion (transferring data between the application and
Oracle NoSQL Database Cloud Service) with HTTPS.
• ACID-Compliant Transactions: ACID (Atomicity, Consistency, Isolation, Durability)
transactions are fully supported for the data you store in Oracle NoSQL Database Cloud
Service. If required, consistency can be relaxed in favor of lower latency.
• JSON Data Support: Oracle NoSQL Database Cloud Service allows developers to query
schema-less JSON data by using the familiar SQL syntax.
• Partial JSON Updates: Oracle NoSQL Database Cloud Service allows developers to update (change, add, and remove) parts of a JSON document. Because these updates occur on the server, they eliminate the read-modify-write cycle that would otherwise consume throughput capacity. (A sketch after this feature list illustrates this and the neighboring features.)
• Time-To-Live: Oracle NoSQL Database Cloud Service lets developers set a time frame
on table rows, after which the rows expire automatically, and are no longer available. This
feature is a critical requirement when capturing sensor data for Internet Of Things (IoT)
services.
• SQL Queries: Oracle NoSQL Database Cloud Service lets developers access data with
SQL queries.
• Secondary Indexes: Secondary indexes allow a developer to create an index on any
field of a supported data type, thus improving performance over multiple paths for queries
using the index.
• NoSQL Table Hierarchy: Oracle NoSQL Database Cloud Service supports Table
hierarchies that offer high scalability while still providing the benefits of data
normalization. A NoSQL table hierarchy is an ideal data model for applications that need
some data normalization, but also require predictable, low latency at scale. A table
hierarchy links distinct tables and therefore enables left outer joins, combining rows from
two or more tables based on related columns between them. Such joins execute
efficiently as rows from the parent-child tables are co-located in the same database
shard.
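For illustration, here is a sketch that exercises several of the features above (Time-To-Live, partial JSON updates, secondary indexes, SQL queries, and a table hierarchy) with the Python SDK (borneo). It reuses the hypothetical product table and handle from the earlier sketch; the row contents, index, and child table are assumptions for the example, not names defined by this guide:

from borneo import PutRequest, QueryRequest, TableRequest, TimeToLive

# Time-To-Live: this row expires automatically after 7 days.
put = PutRequest().set_table_name('product') \
    .set_value({'id': 'sensor_42', 'name': 'reading',
                'info': {'temp_c': 21.5}}) \
    .set_ttl(TimeToLive.of_days(7))
handle.put(put)

# Partial JSON update: only the targeted field changes, on the server,
# with no read-modify-write round trip in the application.
handle.query(QueryRequest().set_statement(
    'UPDATE product p SET p.info.temp_c = 22.0 WHERE id = "sensor_42"'))

# Secondary index on a path inside the JSON column.
idx = TableRequest().set_statement(
    'CREATE INDEX IF NOT EXISTS idxTemp ON product (info.temp_c AS DOUBLE)')
handle.table_request(idx).wait_for_completion(handle, 60000, 2000)

# Child table: shares the parent's shard key, so parent and child rows
# are co-located and can be joined efficiently.
child = TableRequest().set_statement(
    'CREATE TABLE IF NOT EXISTS product.review ('
    'reviewId INTEGER, rating INTEGER, PRIMARY KEY(reviewId))')
handle.table_request(child).wait_for_completion(handle, 60000, 2000)

# Left outer join across the hierarchy; drain results batch by batch.
join = QueryRequest().set_statement(
    'SELECT p.id, r.rating FROM product p '
    'LEFT OUTER JOIN product.review r ON p.id = r.id')
while True:
    for row in handle.query(join).get_results():
        print(row)
    if join.is_done():
        break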

Responsibility Model for Oracle NoSQL Database


In general, Oracle is responsible for performing the various management tasks related to the
administration and monitoring of Oracle Cloud services for Oracle NoSQL Database.
However, you the customer are responsible for a few tasks and sometimes for directing
Oracle to initiate a task or for specifying how or when Oracle is to perform a task.

Table 1-3 Sharing tasks between Oracle and customer

Provisioning NoSQL Database tables (Oracle): Oracle is responsible for provisioning tables. You the customer are responsible for initiating provisioning requests that specify each table's capacities, including read units, write units, and storage. In addition, the customer is responsible for specifying the pricing model.


Table 1-3 (Cont.) Sharing tasks between Oracle and customer

Backing up tables (Customer): The customer is responsible for backing up tables on a schedule they choose. Oracle provides a migrator tool that can be used to take a backup and store it in Oracle Object Storage.

Restoring a table (Customer): The customer is responsible for restoring their tables. Oracle provides a migrator tool that can be used to restore a table from files stored in Oracle Object Storage.

Patching and upgrading (Oracle): Oracle is responsible for patching and upgrading all NoSQL Database resources.

Scaling (Oracle): Oracle is responsible for scaling NoSQL Database tables. You the customer are responsible for initiating scaling requests.

Monitoring service health (Oracle): Oracle is responsible for monitoring the health of NoSQL Database resources and for ensuring their availability as per the published guidelines.

Monitoring application health and performance (Customer): You the customer are responsible for monitoring the health and performance of your applications at all levels. This responsibility includes monitoring the performance of the tables and the updates your applications perform.

Application security (Customer): You the customer are responsible for the security of your applications at all levels. This responsibility includes Cloud user access to NoSQL Database tables, network access to these resources, and access to the data. Oracle ensures that data stored in NoSQL Database tables is encrypted and that connections to NoSQL Database tables require TLS 1.2 encryption and wallet-based authentication.

Auditing (Oracle): Oracle is responsible for logging DDL API calls made to NoSQL Database tables and for making these logs available to you the customer for auditing purposes.


Table 1-3 (Cont.) Sharing tasks between Oracle and customer

Alerts and Notifications (Oracle): Oracle is responsible for providing an alert and notification feature for service events. You the customer are responsible for monitoring any database alerts that may be of interest.

Always Free Service


Always Free NoSQL Database Service
As part of the Oracle Cloud Free Tier, the Oracle NoSQL Database Cloud Service participates as an Always Free service. This section describes the restrictions and details of that offering.
Features of Always Free NoSQL Database Service
• You may have up to three Always Free NoSQL tables in your region.
• You can have both Always Free and regular tables in the same region.
• The Always Free NoSQL tables are displayed in the console with an “Always Free” label
next to the table name.
• An Always Free NoSQL table cannot be changed to a regular table or vice versa.
Resource Restrictions for Always Free NoSQL tables
• You may have a maximum of three Always Free NoSQL tables in any region at any time. If you have three Always Free NoSQL tables, the toggle button to create an Always Free NoSQL table is disabled. If you delete one or more of those tables, the toggle button is re-enabled.
• Read Capacity (Read Units) is 50 and cannot be changed.
• Write Capacity (Write Units) is 50 and cannot be changed.
• Disk Storage is 25GB and cannot be changed.
Regional Availability
Always Free NoSQL tables are available in a subset of Oracle Cloud Infrastructure data
regions. See Data Regions for more details on where Always Free NoSQL tables are
supported.
Always Free NoSQL tables - Inactivity and Deletion
If an Always Free NoSQL table has not been used or accessed for 30 days, it moves to an
‘inactive’ state. Always Free NoSQL tables that remain inactive for 90 days are deleted. The
inactive state is shown in the console next to the table name. A customer notification is sent
to the tenancy administrator when the table initially becomes inactive (after 30 days of
inactivity). A reminder is sent again at 75 days of inactivity.
You may make an Always Free NoSQL table active again by performing any get/put/delete
operation on any row(s) in the table. DDL operations do not make an inactive table active
again.
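As an illustration, a single data operation is enough to keep a table active or reactivate it; a minimal sketch with the Python SDK (borneo), assuming a table named product and a row key that exists:

from borneo import GetRequest, NoSQLHandle, NoSQLHandleConfig
from borneo.iam import SignatureProvider

handle = NoSQLHandle(NoSQLHandleConfig('us-ashburn-1', SignatureProvider()))

# A get/put/delete counts as activity; DDL operations do not.
handle.get(GetRequest().set_table_name('product')
           .set_key({'id': 'appleproduct_1'}))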


Functional difference between the NoSQL Cloud Service and On-premise database

Table 1-4 High level feature comparison

Infrastructure and software management/maintenance (servers, storage, networking, security, OS, and NoSQL software)
• NoSQL Database Cloud Service: Managed by Oracle.
• NoSQL Database Enterprise Edition (EE): Managed by the customer.

Database deployment
• NoSQL Database Cloud Service: Oracle Cloud only.
• NoSQL Database Enterprise Edition (EE): Customer on-premises data centers, or BYOL in Oracle Cloud or other cloud vendors.

Licensing/Edition
• NoSQL Database Cloud Service: Paid subscription or always-free service.
• NoSQL Database Enterprise Edition (EE): Enterprise Edition (paid) or Community Edition (free open source).

Throughput
• NoSQL Database Cloud Service: Throughput capacity is managed at each NoSQL table level through NoSQL APIs or the Oracle Cloud Infrastructure (OCI) Console. The capacity is measured in write units and read units. Throughput capacity per table can be adjusted to meet dynamic workloads. When the limits for a table are exceeded, users are notified. At the tenancy level, there are maximum service limits. For details, see Oracle NoSQL Database Cloud Service Limits.
• NoSQL Database Enterprise Edition (EE): Throughput capacity is managed at each NoSQL cluster. The capacity depends on the size of the deployed NoSQL cluster; a larger cluster provides more throughput capacity for user tables.

Storage
• NoSQL Database Cloud Service: Storage capacity is managed at each NoSQL table level through NoSQL APIs or the Oracle Cloud Infrastructure (OCI) Console. The capacity is measured in gigabytes (GB). Storage capacity per table can be adjusted to meet dynamic workloads. When the limit for a table is exceeded, users are notified. At the tenancy level, there are maximum service limits. For details, see Oracle NoSQL Database Cloud Service Limits.
• NoSQL Database Enterprise Edition (EE): Storage capacity is managed at each NoSQL cluster. The capacity depends on the number of disks and the specific configuration of each storage node deployed in the cluster; larger cluster size and disk capacity provide more storage for user tables.


Table 1-4 (Cont.) High level feature comparison

Interoperability
• NoSQL Database Cloud Service: Interoperates with NoSQL Database Enterprise Edition through a single programmatic interface with no application code modification.
• NoSQL Database Enterprise Edition (EE): Interoperates with NoSQL Database Cloud Service through a single programmatic interface with no application code modification.

Installation
• NoSQL Database Cloud Service: No customer installs. Customers start using the service right away by creating NoSQL tables.
• NoSQL Database Enterprise Edition (EE): Customers download and install the software to set up the NoSQL cluster on multiple storage nodes.

Oracle NoSQL Database Cloud Service Subscription


Learn how to manage Oracle NoSQL Database Cloud Service subscriptions and their users.
This article has the following topics:

Service Limits
Oracle NoSQL Database Cloud Service has various default limits. Whenever you create an Oracle NoSQL Database Cloud Service table, the system ensures that your requests are within the bounds of the specified limit. When you create On-Demand Capacity tables, the On-Demand Capacity maximum limits are used during validation.
Oracle Cloud tenancies are typically active in more than one region. Although you can view this as a single large tenancy, the Oracle NoSQL Database Cloud Service uses the combination of tenancy OCID and region location to establish some of the limits (region-level limits). Additionally, it has limits at the table level. For a detailed list of service limits, see Oracle NoSQL Database Cloud Service Limits.
You can view the existing limits for read units, write units, and table size for your region from the Limits, Quotas, and Usage page in the Oracle Cloud Infrastructure Console. For example, for the Ashburn region the page shows the service limit, the current usage, and the current availability for each of the limits. Note that the availability can be affected by quota policies on either this compartment or its parent compartment.


You can increase your service limits by submitting a request either from the Limits, Quotas, and Usage page in the Oracle Cloud Infrastructure Console or by using the TableRequest API. For example, a service limit update request might increase the read units from 100,000 to 110,000 in the Ashburn region.

See About Service Limits and Usage in Oracle Cloud Infrastructure Documentation.

Service Quotas
You can use quotas to determine how other users allocate Oracle NoSQL Database
Cloud Service resources across compartments in Oracle Cloud Infrastructure. A
compartment is a collection of related resources (such as instances, virtual cloud
networks, block volumes) that can be accessed only by certain groups that have been
given permission by an administrator. Whenever you create an Oracle NoSQL
Database Cloud Service table or scale up the provisioned throughput or storage, the
system ensures that your requests are within the bounds of the quota for that
compartment.
This table lists the Oracle NoSQL Database Cloud Service quotas that you can
reference.

Name               Scope     Description
read-unit-count    Regional  Read Unit Count
write-unit-count   Regional  Write Unit Count
table-size-gb      Regional  Table Size (GB)

You can set quotas using the Console or API. You can execute quota statements from
the Quota Policies page under the Governance option in Oracle Cloud Infrastructure
Console.


Example Quota Statements for Oracle NoSQL Database Cloud Service


• Limit the number of Oracle NoSQL Database Cloud Service read units that users
can allocate to tables they create in my_compartment to 20,000.

set nosql quota read-unit-count to 20000 in compartment my_compartment

If you do not specify any region, the quota is set for the entire tenancy, which means it applies to all regions. However, you can restrict a quota to a single region by applying a filter condition in the set clause and specifying the name of that region, as shown below.
Limit the number of Oracle NoSQL Database Cloud Service read units that users can allocate to tables they create in the region us-phoenix-1 to 10,000:

set nosql quota read-unit-count to 10000 in compartment my_compartment where request.region = us-phoenix-1

In this example, only the Phoenix region has a read unit count quota of 10,000.
• Limit the number of Oracle NoSQL Database Cloud Service write units that users
can allocate to tables they create in my_compartment to 5,000.

set nosql quota write-unit-count to 5000 in compartment my_compartment

• Limit the maximum storage space of Oracle NoSQL Database Cloud Service that
users can allocate to tables they create in my_compartment to 1,000 GB.

set nosql quota table-size-gb to 1000 in compartment my_compartment

See About Compartment Quotas in Oracle Cloud Infrastructure Documentation.

Service Events
Actions that you perform on Oracle NoSQL Database Cloud Service tables emit events.
You can define rules that trigger a specific action when an event occurs. For example, you
might define a rule that sends a notification to administrators when someone drops a table.
See Overview of Events and Get Started with Events in Oracle Cloud Infrastructure
Documentation.
This table lists the Oracle NoSQL Database Cloud Service events that you can reference.

Friendly Name                    Event Type
Alter Table Begin                com.oraclecloud.nosql.altertable.begin
Alter Table End                  com.oraclecloud.nosql.altertable.end
Change Table Compartment Begin   com.oraclecloud.nosql.changecompartment.begin
Change Table Compartment End     com.oraclecloud.nosql.changecompartment.end
Create Index Begin               com.oraclecloud.nosql.createindex.begin
Create Index End                 com.oraclecloud.nosql.createindex.end
Create Table Begin               com.oraclecloud.nosql.createtable.begin
Create Table End                 com.oraclecloud.nosql.createtable.end
Drop Index Begin                 com.oraclecloud.nosql.dropindex.begin
Drop Index End                   com.oraclecloud.nosql.dropindex.end
Drop Table Begin                 com.oraclecloud.nosql.droptable.begin
Drop Table End                   com.oraclecloud.nosql.droptable.end

Example
This example shows information associated with the event Create Table Begin:

{
"cloudEventsVersion": "0.1",
"contentType": "application/json",
"source": "nosql",
"eventID": "<unique_ID>",
"eventType": "com.oraclecloud.nosql.createtable.begin",
"eventTypeVersion": "<version>",
"eventTime": "2019-12-30T00:52:01.343Z",


"data": {
"additionalDetails": {},
"availabilityDomain": "<availability_domain>",
"compartmentId": "ocid1.compartment.oc1..<unique_ID>",
"compartmentName": "my_compartment",
"freeformTags": {
"key":"value"
},
"resourceId": "ocid1.nosqltable.oc1..<unique_ID>",
"resourceName": "my_nosql_table"
},
"extensions": {
"compartmentId": "ocid1.compartment.oc1..<unique_ID>"
}
}

Service Metrics
Learn about the metrics emitted by the metric namespace oci_nosql (Oracle NoSQL
Database Cloud Service).
Metrics for Oracle NoSQL Database Cloud Service include the following dimensions:
• RESOURCEID
The OCID of the NoSQL Table in the Oracle NoSQL Database Cloud Service.

Note:
OCID is an Oracle-assigned unique ID that is included as part of the resource's
information in both the console and API.

• TABLENAME
The name of the NoSQL table in the Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Cloud Service sends metrics to the Oracle Cloud Infrastructure Monitoring Service. You can view or create alarms on these metrics using the Oracle Cloud Infrastructure Console, SDKs, or CLI. See OCI SDKs and CLI in Oracle Cloud Infrastructure Documentation.
Available Metrics

• ReadUnits (Read Units; Unit: Units): The number of read units consumed during this period. Dimensions: resourceId, tableName.
• WriteUnits (Write Units; Unit: Units): The number of write units consumed during this period. Dimensions: resourceId, tableName.
• StorageGB (Storage Size; Unit: GB): The maximum amount of storage consumed by the table. As this information is generated hourly, you may see values that are out of date between the refresh points. Dimensions: resourceId, tableName.
• ReadThrottleCount (Read Throttle; Unit: Count): The number of read throttling exceptions on this table in the time period. Dimensions: resourceId, tableName.
• WriteThrottleCount (Write Throttle; Unit: Count): The number of write throttling exceptions on this table in the time period. Dimensions: resourceId, tableName.
• StorageThrottleCount (Storage Throttle; Unit: Count): The number of storage throttling exceptions on this table in the time period. Dimensions: resourceId, tableName.

Data Regions and Associated Service Endpoints


Learn about the data regions supported for Oracle NoSQL Database Cloud Service
and access region-specific service endpoints.

Data Regions
To get started with Oracle NoSQL Database Cloud Service, you must create an account (either for a free trial or a paid subscription). Along with other details, the account application requires you to choose the default data region.
If your application is running under your tenancy on an OCI host in the same region,
you should configure your VCN to route all NDCS traffic through the Service Gateway.
See Access to Oracle Services: Service Gateway for more details.

Service Endpoints Associated with Data Regions


A service endpoint is the regional network access point to the Oracle NoSQL Database Cloud Service. The general format of a region endpoint is https://nosql.{region}.oci.oraclecloud.com. For example, the service endpoint for the Ashburn region (region identifier us-ashburn-1) in the North America data region is https://nosql.us-ashburn-1.oci.oraclecloud.com. Different data regions have different {region} components in their URLs.
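For illustration, the Python SDK (borneo) accepts either a region identifier or a full service endpoint when it is configured; a minimal sketch, with Ashburn as an assumed example:

from borneo import NoSQLHandle, NoSQLHandleConfig
from borneo.iam import SignatureProvider

# The endpoint below is the Ashburn entry from the table that follows.
config = NoSQLHandleConfig('https://nosql.us-ashburn-1.oci.oraclecloud.com',
                           SignatureProvider())
handle = NoSQLHandle(config)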

This table lists the service endpoints for all the data regions which are or will be
supported by Oracle NoSQL Database Cloud Service. See Service Availability for the
latest information about the regions that support Oracle NoSQL Database Cloud
Service.


Data Region     Region Identifier   Service Endpoint
North America   ca-montreal-1       https://nosql.ca-montreal-1.oci.oraclecloud.com
North America   ca-toronto-1        https://nosql.ca-toronto-1.oci.oraclecloud.com
North America   us-ashburn-1        https://nosql.us-ashburn-1.oci.oraclecloud.com
North America   us-chicago-1        https://nosql.us-chicago-1.oci.oraclecloud.com
North America   us-phoenix-1        https://nosql.us-phoenix-1.oci.oraclecloud.com
North America   us-sanjose-1        https://nosql.us-sanjose-1.oci.oraclecloud.com
EMEA            af-johannesburg-1   https://nosql.af-johannesburg-1.oci.oraclecloud.com
EMEA            eu-amsterdam-1      https://nosql.eu-amsterdam-1.oci.oraclecloud.com
EMEA            eu-frankfurt-1      https://nosql.eu-frankfurt-1.oci.oraclecloud.com
EMEA            eu-madrid-1         https://nosql.eu-madrid-1.oci.oraclecloud.com
EMEA            eu-marseille-1      https://nosql.eu-marseille-1.oci.oraclecloud.com
EMEA            eu-milan-1          https://nosql.eu-milan-1.oci.oraclecloud.com
EMEA            eu-paris-1          https://nosql.eu-paris-1.oci.oraclecloud.com
EMEA            eu-stockholm-1      https://nosql.eu-stockholm-1.oci.oraclecloud.com
EMEA            eu-zurich-1         https://nosql.eu-zurich-1.oci.oraclecloud.com
EMEA            il-jerusalem-1      https://nosql.il-jerusalem-1.oci.oraclecloud.com
EMEA            me-abudhabi-1       https://nosql.me-abudhabi-1.oci.oraclecloud.com
EMEA            me-dubai-1          https://nosql.me-dubai-1.oci.oraclecloud.com
EMEA            me-jeddah-1         https://nosql.me-jeddah-1.oci.oraclecloud.com
EMEA            uk-cardiff-1        https://nosql.uk-cardiff-1.oci.oraclecloud.com
EMEA            uk-london-1         https://nosql.uk-london-1.oci.oraclecloud.com
APAC            ap-chuncheon-1      https://nosql.ap-chuncheon-1.oci.oraclecloud.com
APAC            ap-hyderabad-1      https://nosql.ap-hyderabad-1.oci.oraclecloud.com
APAC            ap-melbourne-1      https://nosql.ap-melbourne-1.oci.oraclecloud.com
APAC            ap-mumbai-1         https://nosql.ap-mumbai-1.oci.oraclecloud.com
APAC            ap-osaka-1          https://nosql.ap-osaka-1.oci.oraclecloud.com
APAC            ap-seoul-1          https://nosql.ap-seoul-1.oci.oraclecloud.com
APAC            ap-singapore-1      https://nosql.ap-singapore-1.oci.oraclecloud.com
APAC            ap-sydney-1         https://nosql.ap-sydney-1.oci.oraclecloud.com
APAC            ap-tokyo-1          https://nosql.ap-tokyo-1.oci.oraclecloud.com
LAD             mx-queretaro-1      https://nosql.mx-queretaro-1.oci.oraclecloud.com
LAD             sa-santiago-1       https://nosql.sa-santiago-1.oci.oraclecloud.com
LAD             sa-saopaulo-1       https://nosql.sa-saopaulo-1.oci.oraclecloud.com
LAD             sa-vinhedo-1        https://nosql.sa-vinhedo-1.oci.oraclecloud.com

Plan
• Plan your service

Plan your service


Take some time to plan your Oracle NoSQL Database Cloud Service before you create it. Think through the questions outlined here and decide what you want to do before you start.
This article has the following topics:

Developer Overview
Get a high-level overview of the service architecture and select an SDK/Driver that will
meet your application development needs.
NDCS Developer tasks
Oracle NoSQL Database Cloud Service (NDCS) is a fully HA (highly available) service. It is designed for highly demanding applications that require low-latency response times, a flexible data model, and elastic scaling for dynamic workloads. As a fully managed service, Oracle handles all the administrative tasks, such as software upgrades, security patching, and recovery from hardware failures.


NoSQL Database SDKs/Drivers – These SDKs are licensed under the Universal Permissive License (UPL) and can be used with either the NoSQL Cloud Service or the on-premise database. These are full-featured SDKs that offer a rich set of functionality. These drivers can also be used in applications executing against Oracle NoSQL clusters running in other vendors' clouds.
1. NoSQL SDK for Java
2. NoSQL JavaScript SDK
3. NoSQL Python SDK
4. NoSQL .NET SDK
5. NoSQL Go SDK
6. NoSQL SDK for Spring Data
OCI Console – Offers the ability to create, modify, and delete tables, load data, create and delete indexes, run basic queries, alter table capacities, and view metrics.
OCI SDKs/Drivers – Oracle Cloud Infrastructure provides a number of Software Development
Kits (SDKs) to facilitate development of custom solutions. These are typically licensed under
UPL. These offer similar functionality to the OCI console through a programmatic interface.
1. REST API
2. SDK for Java
3. SDK for Python
4. SDK for Javascript
5. SDK for .NET
6. SDK for Go
7. SDK for Ruby
References:


• SQL for NoSQL documentation


• Functional difference between the NoSQL Cloud Service and On-premise database

Oracle NoSQL Database Cloud Service Limits


Oracle NoSQL Database Cloud Service has various default limits. Whenever you
create an Oracle NoSQL Database Cloud Service table, the system ensures that your
requests are within the bounds of the specified limit. Some limits are imposed at the
table level and some are imposed at the region level.
To learn more about Service Limits, their scope and how to increase your service limits
by submitting a request, see Service Limits. Listed below are the current limits that
apply to Oracle NoSQL Database Cloud Service.

Maximum table storage size (Scope: Table): Maximum total storage size per tenant; the total space used for one or more tables cannot exceed this value. Value: 5 TB.

Table names (Scope: Table): Maximum number of characters, allowed characters, and initial character for table names. Value: Table names can have a maximum of 256 characters. All names must begin with a letter (a–z, A–Z); subsequent characters can be letters (a–z, A–Z), digits (0–9), or underscore.

Provisioned Capacity - Maximum read and write throughput (Scope: Table): Maximum read and write throughput when provisioning a table. Value: 40,000 read units and 20,000 write units per table.

On-Demand Capacity - Maximum read and write throughput (Scope: Table): Maximum read and write throughput when using On-Demand Capacity to provision tables. Value: 10,000 read units and 5,000 write units per table.

On-Demand Capacity - Number of tables (Scope: Region): Number of tables with On-Demand Capacity. Value: 3.

Change the provisioning mode (Scope: Table): Change the provisioning mode for the table from Provisioned to On Demand or vice versa. Value: Can be changed only once per day.

Maximum number of tables (Scope: Region): The maximum number of tables. Value: 30.

Maximum number of columns (Scope: Table): The maximum number of columns. Value: 50.

Maximum number of table schema updates (Scope: Table): The maximum number of table schema updates. Value: 100.

Maximum number of indexes (Scope: Table): The maximum number of indexes. Value: 5.

Maximum number of changes for throughput and storage limits (Scope: Table): The maximum number of changes for throughput and storage limits. Value: Oracle allows an unlimited number of throughput and storage increases per day, and up to four throughput or storage decreases per 24-hour period.

Index names (Scope: Index): Maximum number of characters, allowed characters, and initial character. Value: Index names can have a maximum of 64 characters. All names must begin with a letter (a–z, A–Z); subsequent characters can be letters (a–z, A–Z), digits (0–9), or underscore.

Maximum number of individual operations per WriteMultiple request (Scope: Request): Value: 50.

Maximum data size for a WriteMultiple request (Scope: Request): Value: 25 MB.

Column names (Scope: Column): Maximum number of characters, allowed characters, and initial character. Value: Field names can have a maximum of 64 characters. All names must begin with a letter (a–z, A–Z); subsequent characters can be letters (a–z, A–Z), digits (0–9), or underscore.

Maximum secondary index key size (Scope: Index): Value: 64 bytes.

Maximum primary index key size (Scope: Index): Value: 64 bytes.

Maximum row size (Scope: Row): Value: 512 KB.

Maximum query string length (Scope: Query): Value: 10 KB.

Maximum supported rate of DDL operations (Scope: Region): Value: 4 per minute.

Maximum values for throughput and data storage resources (Scope: Region): Per region, Oracle allows a maximum of 100,000 read units and a maximum of 40,000 write units. Oracle allows a maximum storage size of 5 TB per tenant. A region can have a single table with a storage size of 5 TB, in which case the region cannot create another table, or it can have multiple tables whose combined data stays within the maximum storage size of 5 TB.

Estimating Capacity
Learn how to estimate throughput and storage capacities for your Oracle NoSQL
Database Cloud Service.

Basics Behind the Calculation


Before you learn how to estimate throughput and storage for the service, let's review
the throughput and storage unit definitions.
• Write Unit (WU): One Write Unit is defined as throughput for up to 1 kilobyte (KB)
of data per second. A write operation is any Oracle NoSQL Database Cloud
Service API call that results in insertion, update, or deletion of a record. A NoSQL
table has a write limit value which specifies the number of write units that may be
used each second. Index updates also consume write units.
For example, a record size of less than 1 KB requires one WU for a write
operation. A record size of 1.5 KB requires two WUs for the write operation.
• Read Unit (RU): One Read Unit is defined as throughput for up to 1 KB of data
per second for an eventually consistent read operation. Your NoSQL table has a
read limit value which specifies the number of read units that may be used each
second.
For example, a record size of less than 1 KB requires one RU for an eventually
consistent read operation. A record size of 1.5 KB requires two RUs for an
eventually consistent read operation and four RUs for an absolutely consistent
read operation.
• Storage Capacity: One storage unit is a single gigabyte (GB) of data storage.
• Absolute Consistency: The data returned is expected to be the most recently
written data to the database.
• Eventual Consistency: The data returned may not be the most recently written
data to the database; if no new updates are made to the data, eventually all
accesses to that data return the latest updated value.
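The rounding and consistency rules above can be expressed as simple arithmetic; a sketch in plain Python (not a service API), under the assumption that each index entry costs about 1 KB to update:

import math

def write_units(record_kb, num_indexes=0):
    # One write unit per started KB of record, plus ~1 KB per index update.
    return math.ceil(record_kb) + num_indexes

def read_units(record_kb, absolute=False):
    # Eventual consistency: one read unit per started KB.
    # Absolute consistency doubles the cost.
    return math.ceil(record_kb) * (2 if absolute else 1)

print(write_units(1.5))       # 2 WUs for a 1.5 KB record, no index
print(read_units(1.5))        # 2 RUs, eventual consistency
print(read_units(1.5, True))  # 4 RUs, absolute consistency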


Note:
Oracle NoSQL Database Cloud Service automatically manages the read and write
capacities to meet the needs of dynamic workloads when using On-Demand
Capacity. It is recommended to validate that the capacity needs do not exceed the
On Demand Capacity limits. See Oracle NoSQL Database Cloud Service Limits for
more details.

Factors that Impact the Capacity Unit


Before you provision the capacity units, it is important to consider the following factors that
impact the read, write, and storage capacities.
• Record size: As the record size increases, the number of capacity units consumed to
write or read data also increases.
• Data consistency: Absolute consistency reads are twice the cost of eventual
consistency reads.
• Secondary Indexes: In a table, when an existing record is modified (added, updated, or deleted), updating the secondary indexes consumes write units. The total provisioned throughput cost for a write operation is the sum of write units consumed by writing to the table and updating the local secondary indexes.
• Data modeling choice: With schema-less JSON, each document is self-describing
which adds metadata overhead to the overall size of the record. With fixed schema
tables, the overhead for each record is exactly 1 byte.
• Query pattern: The cost of a query operation depends on the number of rows retrieved,
number of predicates, the size of the source data, projections, and the presence of
indexes. The least expensive queries specify a shard key or index key (with an
associated index) to allow the system to take advantage of primary and secondary
indexes. An application can try different queries and examine the consumed throughput
to help tune the operations.
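One practical way to apply this advice is to read back the consumed capacity that every operation result reports; a sketch with the Python SDK (borneo), assuming the hypothetical product table and JSON index used in this guide's examples:

from borneo import NoSQLHandle, NoSQLHandleConfig, QueryRequest
from borneo.iam import SignatureProvider

handle = NoSQLHandle(NoSQLHandleConfig('us-ashburn-1', SignatureProvider()))

# Drain the query and total the read and write units it consumed.
request = QueryRequest().set_statement(
    'SELECT p.id FROM product p '
    'WHERE p.info.display.screenSize = "13.0 inches"')
read_units = write_units = 0
while True:
    result = handle.query(request)
    read_units += result.get_read_units()
    write_units += result.get_write_units()
    if request.is_done():
        break
print('consumed:', read_units, 'read units,', write_units, 'write units')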

Real World Example: How to Estimate your Application Workload


Consider an E-commerce application example to learn how to estimate reads and writes per
second. In this example, Oracle NoSQL Database Cloud Service is used to store the product
catalog information of the application.
1. Identify the data model (JSON or fixed-table), record size, and key size for the
application.
Assume that the E-commerce application follows the JSON data model and the
developer has created a simple table with two columns. A record identifier (primary key)
and a JSON document for the product features and attributes. The JSON document,
which is under 1 KB (0.8 KB) is as follows:

{
"additionalFeatures": "Front Facing 1.3MP Camera",
"os": "Macintosh OS X 10.7",
"battery": {
"type": "Lithium Ion (Li-Ion) (7000 mAH)",
"standbytime" : "24 hours" },
"camera": {
"features": ["Flash","Video"],

1-49
Chapter 1
Plan

"primary": "5.0 megapixels" },


"connectivity": {
"bluetooth": "Bluetooth 2.1",
"cell": "T-mobile HSPA+ @ 2100/1900/AWS/850 MHz",
"gps": true,
"infrared": false,
"wifi": "802.11 b/g" },
"description": "Apple iBook is the best in class computer
for your professional and personal work.",
"display": {
"screenResolution": "WVGA (1280 x 968)",
"screenSize": "13.0 inches" },
"hardware": {
"accelerometer": true,
"audioJack": "3.5mm",
"cpu": "Intel i7 2.5 GHz",
"fmRadio": false,
"physicalKeyboard": false,
"usb": "USB 3.0" },
"id": "appleproduct_1",
"images": ["img/apple-laptop.jpg"],
"name": "Myshop.com : Apple iBook",
"sizeAndWeight": {
"dimensions": [
"300 mm (w)",
"300 mm (h)",
"12.4 mm (d)" ],
"weight": "1250.0 grams" },
"storage": {
"hdd": "750GB",
"ram": "8GB" }
}

Assume that the application has 100,000 such records and the primary key is
about 20 bytes in size. Also, assume that there are queries that would read
records using secondary index. For example, to find all the records that have
screen size of 13 inches. So, an index is created on the screenSize field.
The information is summarized as follows:

Tables: 1
Rows per Table: 100,000
Columns per Table: 2
Key Size: 20 bytes
Value Size (sum of all columns): 1 KB
Indexes: 1
Index Key Size: 20 bytes

2. Identify the list of operations (typically CRUD operations and index reads) on the table and the rate (per second) at which they are expected.

   Operation                                Operations per second   Example
   Create records                           3                       To create a product.
   Read records using the primary key       300                     To read product details using the product ID.
   Read records using the secondary index   10                      To fetch all products that have a screen size of 13 inches.
   Update or add an attribute to a record   5                       To update the product description of a camera, or to add information about the weight of a camera.
   Delete a record                          1                       To delete an existing product.

3. Identify the read and write consumption in KB.

   Create records
   Assumption: the records are created without performing any condition checks (if they exist).
   Formula: write consumption = record size (rounded up to the next KB) + 1 KB (index) × (number of indexes)
   Read consumption: 0 KB. Write consumption: 1 KB + 1 KB × (1) = 2 KB.
   Notes: The record size is 1 KB (0.8 KB for the JSON column and 20 bytes for the key column) and there is one index of size 1 KB. A create operation incurs a read unit cost if you execute the put command with some options. Since you need to guarantee that you are reading the most current version of the row, absolute consistent reads are used; in such cases you use the multiplier 2 in the read unit formula. The different options for determining read unit costs are:
   • If Option.IfAbsent or Option.IfPresent is used, then read consumption = 2.
   • If setReturnRow is used, then read consumption = 2 × record size.
   • If Option.IfAbsent and setReturnRow are used, then read consumption = 2 × record size.

   Read records using the primary key
   Formula: read consumption = record size rounded up to the next KB
   Read consumption: 1 KB. Write consumption: 0 KB.
   Notes: The record size is 1 KB.

   Read records using the secondary index
   Assumption: 100 records are returned.
   Formula: read consumption = record_size × number_of_records_matched
   Read consumption: 1 KB × 100 = 100 KB; 100 KB + 10 KB = 110 KB. Write consumption: 0 KB.
   Notes: There is no charge for the secondary index. The record size is 1 KB, so 100 records cost 100 KB. The additional 10 KB accounts for variable overhead that may occur depending on the number of batches returned and the size limit set for the query. The overhead is the cost of reading the last key in each batch; it is a variable that depends on maxReadKB and the record size, and is up to (numBatches - 1) × key read cost (1 KB).

   Update existing records
   Assumption: the updated record size is the same as the old record size (1 KB).
   Formulas: read consumption = record_size × 2; write consumption = original_record_size + new_record_size + 1 KB (index) × (number of index writes)
   Read consumption: 1 KB × 2 = 2 KB. Write consumption: 1 KB + 1 KB + 1 KB × 2 = 4 KB.
   Notes: When rows are updated using a query (SQL statement), both read and write units are consumed. Depending on the update, it may need to read the primary key, the secondary key, or even the record itself. Absolute consistent reads are needed to guarantee that you are reading the most recent record, and absolute consistency reads are twice the cost of eventual consistency reads; that is the reason for multiplying by 2 in the formula. Read consumption: no charge for the index, and the record size is 1 KB; if executing with the option setReturnRow, then read consumption = 2 × record size. Write consumption: the original and new record sizes are 1 KB each, plus 1 KB for one index.

   Delete a record
   Formulas: read consumption = 1 KB (index) × 2; write consumption = record_size + 1 KB (index) × (number_of_indexes)
   Read consumption: 1 KB × 2 = 2 KB. Write consumption: 1 KB + 1 KB × (1) = 2 KB.
   Notes: A delete incurs both read and write unit costs. Since you have to guarantee that you are looking at the most current version of the row, absolute consistent reads are used; that is the reason for the 2 multiplier in the read unit formula. If executing with the option setReturnRow, read consumption = 2 × record size; otherwise, read consumption = 1 KB for one index. Write consumption: the record size is 1 KB plus 1 KB for the index (the number of indexes is 1).


Using steps 2 and 3, determine read and write units for the application workload.

   Operation                                Rate of Operations   Reads per Second   Writes per Second
   Create records                           3                    0                  6
   Read records using the primary key       300                  300                0
   Read records using the secondary index   10                   1100               0
   Update existing records                  5                    10                 20
   Delete a record                          1                    2                  2

Total Read Units: 1412
Total Write Units: 28
Therefore, the E-commerce application is estimated to have a workload of 1412 reads per second and 28 writes per second. Download the Capacity Estimator tool available on Oracle Technology Network to input these values and estimate the throughput and storage of your application.

Note:
The preceding calculations assume eventually consistent read requests. For an absolute consistency read request, the operation consumes double the capacity units, so the workload would require 2824 Read Units.

Estimating Your Monthly Cost


Learn how to estimate the monthly cost of your Oracle Cloud subscription.
When you are ready to order your Oracle Cloud service, Oracle provides you with a
cost estimator to figure out your monthly usage and costs before you commit to a
subscription model or an amount.
The Cost Estimator automatically calculates your monthly cost based on your input of
read units, write units, and storage. But for you to understand how to calculate the
read and write units for your application, follow these steps:
1. Navigate to the Estimating Capacity topic and estimate your application workload by using the example and formulas described in that topic.
2. Download and use the Capacity Estimator from Oracle Technology Network to estimate the write units, read units, and storage capacity for your application based on the application workload and database operations criteria.
3. Access the Cost Estimator on the Oracle Cloud website. In the dropdown, select Data Management. Various options are displayed under Data Management; scroll through to locate Oracle NoSQL Database Cloud, and click Add to add an entry for Oracle NoSQL Database Cloud under the Configuration Options.
4. Expand Database - NoSQL to find the different Utilization and Configuration options. You have two options under Configuration: you can start with the Always Free option, or you can provision your instance with your desired configuration.
   • For the Always Free option, under Configuration expand Oracle NoSQL Database Cloud - Read, Oracle NoSQL Database Cloud Service - Storage, and Oracle NoSQL Database Cloud Service - Write, and set the Read, Storage, and Write capacities to 0. Your total cost estimate is then shown as 0 and you can proceed with the Always Free option.
5. Alternatively, if you want to provision higher read, write, and storage capacity than is available in Always Free, enter the configuration values under Database - NoSQL:
   • Under Utilization, do not modify the default values, as Oracle NoSQL Database Cloud Service does not use any of these values.
   • Under Configuration, add the number of read units, write units, and the storage capacity that you estimated in the previous steps. The cost is estimated based on your input values and shown on the page. This estimate applies to Oracle Cloud Pay-As-You-Go and Monthly Flex subscriptions.

Note:
If you are using the auto-scale feature, an invoice is generated at the end of the month for the actual consumption of read and write units. You may therefore wish to collect your own audit logs in the application to verify the end-of-month billing. We recommend logging the consumed read and write units that the NoSQL Database Cloud Service returns with every API call; you can use this data to correlate with the end-of-month invoicing data from the Oracle Cloud metering and billing system.

For a detailed understanding of the different pricing models available, see NoSQL Database
Cloud Service Pricing.

Configure
• Configuration tasks for Analytics Integrator

Configuration tasks for Analytics Integrator


• Accessing Oracle Cloud Object Storage
• Accessing the Oracle Cloud Autonomous Data Warehouse
• Enabling a Compute Instance for Oracle NoSQL Database Cloud Service and ADW and
(optionally) Enabling the ADW Database for Object Storage

Accessing Oracle Cloud Object Storage


These are simple steps to set up access to the Oracle Object Storage Service.


Create a bucket in Object Storage


To set up access to the Oracle Object Storage Service, you need to create a bucket in
Object Storage. You create a bucket to which the NoSQL table’s data can be temporarily
copied, in Parquet format, in preparation for the transfer of the data to ADW. To create a
bucket, go to the Oracle Cloud Console and do the following:
• Select Storage from the menu on the left side of the display and select Object
Storage & Archive Storage.

• Select the Compartment in which to create the bucket.

• Supply a descriptive name.


• Under the Default Storage Tier, select Standard.


• Choose your desired Encryption and click Create.

There is no need to create any other files. Once the bucket is created, all you need to do is
specify the name of the bucket in the configuration, along with the bucket’s compartment, and
the utility will take it from there, creating objects with names derived from the table being
copied.
For example, suppose the name of the bucket you created is nosql-to-adw, the name of the
table you wish to copy to ADW is myTable, and you direct the utility to use the Oracle NoSQL
Migrator. The utility will then retrieve data from the NoSQL table named myTable, convert it
to Parquet format, and copy the Parquet data to the nosql-to-adw bucket as objects with
names of the form myTable_2021_07_22/Data/000000.parquet, myTable_2021_07_22/Data/
000001.parquet, and so on.


Generate an authorization token for Object Storage


After creating the bucket, if you wish to authenticate ADW with the Object Storage
service using a user specified authorization token (AUTH_TOKEN), and if your system
administrator has not already generated one for you, then you must generate that
token yourself so that files written to the Object Storage bucket can be accessed.
Communication between ADW and Object Storage will rely on this AUTH_TOKEN as
well as the database's username/password authentication mechanism.
To create an AUTH_TOKEN do the following steps.
• In the OCI Console, go to User Settings in your Profile menu.

• Under Resources, select Auth Tokens. Click Generate Token.

• If you, rather than the system administrator, generate the AUTH_TOKEN, then
copy it to a file for safekeeping. Whether generated by you or the system
administrator, the AUTH_TOKEN must then be stored in the ADW database. For
details on how to do this, see Enable the Resource Principal Credential or Store/Enable
the User's Object Storage AUTH_TOKEN in the ADW Database.

Accessing the Oracle Cloud Autonomous Data Warehouse


Steps to access the Oracle Cloud Autonomous Data Warehouse from Oracle NoSQL
Database Analytics Integrator.

Create a database in the Autonomous Data Warehouse


You need to create a database to access the Oracle Cloud Autonomous Data Warehouse
from Oracle NoSQL Database Analytics Integrator. To create a database in the Oracle Cloud
Autonomous Data Warehouse, go to the Oracle Cloud Console and do the following:
• Select Oracle Database from the menu on the left side of the display.
• Select Autonomous Data Warehouse.

• Select the Compartment in which to create the database.


• Click Create Autonomous Database.
• Enter the Basic Information for the Autonomous Database; for example,
– Compartment: enter the compartment name selected above.
– Display name: the name to display on the console; for example, NoSqlToAdwDb.
– Database name: the name to use when connecting to the database; for example,
NoSqlToAdwDb (cannot be more than 14 characters).


• Choose the Data Warehouse workload type.


• Choose the Shared Infrastructure deployment type.

• Choose the default configuration for the database.


• Set a Password under Create Administrator Credentials.

• Choose Allow secure access from everywhere for Access Type.


• Choose the appropriate license type. If you have your own license, then choose Bring
Your Own License (BYOL).
• Click Create Autonomous Database.


Install credentials needed for a secure database connection


Connections to the database you created in the Autonomous Data Warehouse must
be secure. In order for the Oracle NoSQL Database Analytics Integrator to connect
securely to the ADW database, the utility uses the credentials contained in an Oracle
Wallet.
To obtain the Oracle Wallet, go to the Oracle Cloud Console and do the following:
• Select Oracle Database from the menu on the left side of the display.
• Select Autonomous Data Warehouse.

• Select the Compartment in which the database is located.


• Click on the link with the display name you entered when creating the database.


• Click Service Console.

• Click on the Administration link on the left of the display.


• Select Download Client Credentials (Wallet) and enter the administrative password set
during database creation.

• Save the file (zip) to a safe location.


The zip file that is produced includes the following items:


• The network configuration files (tnsnames.ora and sqlnet.ora) needed to connect
to the database.
• The auto-open SSO wallet file, cwallet.sso.
• The PKCS12 file, ewallet.p12, which is protected by the wallet password you
provided when you downloaded the zip file via the Oracle Cloud Console.
• The Java keystore and truststore files, keystore.jks and truststore.jks, which are
protected by the wallet password.
• The file ojdbc.properties, which specifies the wallet-related Java system
property required for connecting to the database via JDBC.
• A README file containing wallet expiration information.

After obtaining the wallet zip file, make note of the password and store the wallet in
any environment from where you will be connecting to the database. Additionally, to
use the Oracle NoSQL Database Analytics Integrator, the extracted contents of the
wallet zip file must be installed in the environment where you will be executing the
utility. For example, if you are executing the utility from an Oracle Cloud Compute
Instance, you should extract the contents of the zip file in any directory on that
instance. Then use the path to that directory as the value of the parameter
databaseWallet in the database section of the utility’s configuration file.
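
For reference, the following is a minimal sketch of a JDBC connection test that uses the
extracted wallet. The wallet directory /home/opc/wallet, the TNS alias
nosqltoadwdb_high, and the ADW_PASSWORD environment variable are assumptions you
would replace with your own values; the valid aliases are listed in the tnsnames.ora file
from the wallet.

import java.sql.Connection;
import java.sql.DriverManager;

public class AdwConnectTest {
    public static void main(String[] args) throws Exception {
        // TNS_ADMIN points at the directory containing the extracted wallet.
        String url =
            "jdbc:oracle:thin:@nosqltoadwdb_high?TNS_ADMIN=/home/opc/wallet";
        // Use the ADMIN password set during database creation; read it from
        // the environment rather than hard-coding it.
        try (Connection conn = DriverManager.getConnection(
                url, "ADMIN", System.getenv("ADW_PASSWORD"))) {
            System.out.println("Connected to: "
                + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}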

Enable the Resource Principal Credential or Store/Enable the User's Object Storage
AUTH_TOKEN in the ADW Database
After retrieving data from the desired NoSQL Cloud Service table and writing that data
to Parquet files in Object Storage, the Oracle NoSQL Database Analytics Integrator
uses subprograms from the Oracle PL/SQL DBMS_CLOUD package to retrieve the
Parquet files from Object Storage. It then loads the data contained in those files to a
table in the database you created in the Oracle Cloud Autonomous Data Warehouse.
Before the Oracle NoSQL Database Analytics Integrator can do this, you must provide
a way for the ADW database to authenticate with Object Storage for access to those
Parquet files. The ADW database can authenticate with the Object Storage service in
one of two ways: using the OCI Resource Principal or a user-specific AUTH_TOKEN
that either you or the system administrator generates. The authentication mechanism
you decide to use is enabled by executing the following steps from the Oracle Cloud
Console.
• Select Oracle Database from the menu on the left side of the display.
• Select Autonomous Data Warehouse.


• Select the Compartment in which the database is located.


• Click on the link with the display name you entered when creating the database.

• Click Service Console.


• Select Development from the menu on the left side of the display.

• Select Database Actions and log in to the database; for example,


– Username: ADMIN
– Password: <password-set-during-database-creation>


• Select the item SQL.

• From the Worksheet window, if you wish to authenticate the ADW database
with Object Storage using the Resource Principal, then execute the following procedure.

EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();


Alternatively, if you wish to perform the authentication using the AUTH_TOKEN that
either the system administrator provided to you or you generated yourself, then
execute the procedure,

BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
credential_name => 'NOSQLADWDB_OBJ_STORE_CREDENTIAL',
username => '<your-Oracle-Cloud-username>',
password => '<cut-and-paste-the-AUTH_TOKEN>'
);
END;

The DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL procedure enables the OCI
Resource Principal (named OCI$RESOURCE_PRINCIPAL) for use by the ADW database
when authenticating with an OCI resource such as Object Storage. The
DBMS_CLOUD.CREATE_CREDENTIAL procedure encrypts the specified AUTH_TOKEN
credential and stores it in a table in the database named adwc_user. Whichever
procedure you employ, that procedure needs to be executed only once, after which the
same credential name can be specified for all transfers from Object Storage to the
ADW database.
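
If you want to confirm that a stored credential exists and is enabled, one way to do so
(a sketch that assumes the standard USER_CREDENTIALS view available in Autonomous
Database, not a step required by this guide) is to run the following query from the same
worksheet:

SELECT credential_name, username, enabled
FROM   user_credentials;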

Note:
When the ADW database uses the OCI Resource Principal to authenticate
with Object Storage, the name of the credential is OCI$RESOURCE_PRINCIPAL.
Alternatively, when using the AUTH_TOKEN to authenticate with Object
Storage, the name of the credential is the value you specify for the
credential_name parameter in the DBMS_CLOUD.CREATE_CREDENTIAL
procedure. But note that the value shown above
(NOSQLADWDB_OBJ_STORE_CREDENTIAL) is only an example. You can use any
name you wish. Thus, the dbmsCredentialName parameter in the
configuration file should contain either the value OCI$RESOURCE_PRINCIPAL,
or the name you specify here for the credential_name parameter; depending
on the authentication mechanism you choose to employ for authenticating
the ADW database with Object Storage.


Enabling a Compute Instance for Oracle NoSQL Database Cloud Service and ADW
and (optionally) Enabling the ADW Database for Object Storage
Steps to authorize your compute instance to perform actions on the NoSQL Service,
ObjectStorage, and ADW.

Create a Dynamic Group for the Compute Instance and the ADW Database
Although you can execute the Oracle NoSQL Database Analytics Integrator using your own
credentials exclusively, it is recommended that you execute the utility from an Oracle Cloud
Compute Instance authorized to perform actions on the Oracle NoSQL Cloud Service, Object
Storage, and the Autonomous Data Warehouse. Similarly, although you can use an Object
Storage AUTH_TOKEN to allow the ADW database to access Object Storage, it is
recommended that you use the OCI Resource Principal to authenticate the ADW database
with Object Storage. It is important to note though, that because the database you create in
ADW requires authentication using the database’s username and password, your user
credentials still must be supplied to the utility to access that resource.
To authorize your compute instance to perform actions on the NoSQL Service,
ObjectStorage, and ADW, a dynamic group must be created and a set of matching rules must
be added for your instance. To allow the ADW database to use the OCI Resource Principal to
access Object Storage, a dynamic group with the appropriate set of rules must also be
created. If you wish, the same dynamic group you create for your compute instance can also
be used for the ADW database. This is shown in the example below.
• Select Identity & Security from the menu on the left of the display.
• Under Identity, select Dynamic Groups.


• Click Create Dynamic Group.

• Enter a name for the group, for example, nosql-to-adw-group.


• Enter a description for the group; for example, the list of the group’s members.
• Enter the desired matching rules; for example, Any {instance.id =
'<ocid-of-compute-instance>', resource.id = '<ocid-of-the-database>'} makes both
the compute instance and the ADW database members of the group.


• Click Create.

Create a Policy with appropriate permissions for the dynamic group


Once a dynamic group is created, you must create a policy that grants permissions
allowing members of that group (for example, the compute instance) to read tables in the
NoSQL Cloud Service, read and write objects in ObjectStorage, and execute procedures in
the Autonomous Data Warehouse.
• Select Identity & Security from the menu on the left of the display.
• Under Identity, select Policies.


• Click Create Policy.

• Enter a name for the policy.


• Enter a description for the policy; for example, a description of what the members
of the group are allowed to do.
• Enter the compartment and click Create.


• Add Statements to the policy using Basic Policy Builder.

An example set of policies that allow the compute instance from the dynamic group to access
the NoSQL Cloud Service, ObjectStorage, and ADW is given below.

Allow dynamic-group <dyn-grp-name> to manage nosql-tables in compartment
<compartment-name>
Allow dynamic-group <dyn-grp-name> to manage nosql-rows in compartment
<compartment-name>
Allow dynamic-group <dyn-grp-name> to manage nosql-indexes in compartment
<compartment-name>
Allow dynamic-group <dyn-grp-name> to read buckets in compartment
<compartment-name>
Allow dynamic-group <dyn-grp-name> to read objects in compartment
<compartment-name>
Allow dynamic-group <dyn-grp-name> to manage buckets in compartment
<compartment-name>
Allow dynamic-group <dyn-grp-name> to manage objects in compartment
<compartment-name>
Allow dynamic-group <dyn-grp-name> to manage autonomous-database in
compartment <compartment-name>


After this configuration, you should be able to execute the utility from a compute
instance using Instance Principal authentication.

Devops
• Deploying Oracle NoSQL Database Cloud Service Table Using Terraform and OCI
Resource Manager
• Updating Oracle NoSQL Database Cloud Service Table Using Terraform and OCI
Resource Manager

Deploying Oracle NoSQL Database Cloud Service Table Using Terraform and OCI
Resource Manager

It’s easy to deploy NDCS tables on OCI (Oracle Cloud Infrastructure) using Terraform
and Resource Manager stacks. We are going to use the OCI Resource Manager CLI
to deploy NDCS tables on Oracle Cloud. Before we proceed with this article, it is
assumed that you are familiar with the basics of the NoSQL Cloud Service and of
Terraform.
Terraform uses providers to interface between the Terraform engine and the
supported cloud platform. The Oracle Cloud Infrastructure (OCI) Terraform provider is
a component that connects Terraform to the OCI services that you want to manage.
You can use the OCI Terraform provider in several environments, including Terraform
Cloud and the OCI Resource Manager.
The OCI Resource Manager is an Oracle-managed service based on Terraform that
uses Terraform configuration files to automate deployment and operations for the OCI
resources supported by the OCI Terraform provider. Resource Manager allows you to
share and manage infrastructure configurations and state files across multiple teams
and platforms.

• To create resources in OCI, we need to configure Terraform. Create the basic
Terraform configuration files for the Terraform provider definition, NoSQL resource
definitions, authentication, and input variables.
• Decide where to store the terraform configuration files. You can store these files in
different sources, such as local folder or zip, Object Storage bucket, and source
control systems, such as GitHub or GitLab.
• Run the Resource Manager CLI commands to perform the following tasks:
– Create a stack.
– Generate and review the execution plan.
– Run the Apply job to provision NoSQL resources.


– Review log files, as needed.

Note:
We’re going to be working with the Oracle Cloud Infrastructure (OCI) Resource
Manager Command Line Interface (CLI) and executing these commands in the
Cloud Shell from the Console. This means you will need some information about your
cloud tenancy, and other items such as public or private key pairs, handy. If you want
to configure the OCI CLI on your local machine, refer to the OCI CLI documentation.

This article has the following topics:

Prerequisites
• Basic understanding of Terraform. Read the brief introduction here.
• An Oracle Cloud account and a subscription to the Oracle NoSQL Database Cloud
Service. If you do not already have an Oracle Cloud account you can start here.
• OCI Terraform provider installed and configured.

Step 1: Create Terraform configuration files for NDCS Table or Index

Substep 1.1: Create OCI Terraform provider configuration


Create a new file named "provider.tf" that contains the OCI Terraform provider definition,
along with the associated variable definitions. The OCI Terraform provider requires ONLY the
region argument.

However, you might have to configure additional arguments with authentication credentials
for an OCI account based on the authentication method. The OCI Terraform provider
supports three authentication methods:
• API Key Authentication
• Instance Principal Authorization
• Security Token Authentication
The region argument specifies the geographical region in which your provider resources are
created. To target multiple regions in a single configuration, you simply create a provider
definition for each region and then differentiate by using a provider alias, as shown in the
following example. Notice that only one provider, named "oci", is defined, and yet the oci
provider definition is entered twice, once for the us-phoenix-1 region (with the alias
"phx"), and once for the us-ashburn-1 region (with the alias "iad").

provider "oci" {
region = "us-phoenix-1"
alias = "phx"
}
provider "oci" {
region = "us-ashburn-1"
alias = "iad"
}
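
A resource then selects one of the aliased providers with the provider meta-argument.
The following is a minimal sketch; the demoPhx table is hypothetical and not part of this
tutorial's configuration, and it assumes a compartment_ocid variable like the one defined
in Substep 1.2:

resource "oci_nosql_table" "nosql_demo_phx" {
  # Pin this table to the provider aliased as "phx" (us-phoenix-1).
  provider       = oci.phx
  compartment_id = var.compartment_ocid
  name           = "demoPhx"
  ddl_statement  = "CREATE TABLE if not exists demoPhx (id INTEGER, value JSON, PRIMARY KEY (id))"
  table_limits {
    max_read_units     = 10
    max_write_units    = 10
    max_storage_in_gbs = 1
  }
}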

• API Key Authentication

• Instance Principal Authorization

• Security Token Authentication

API Key Authentication


In the example below, a region argument is required for the OCI Terraform provider.
The tenancy_ocid, user_ocid, private_key_path, and fingerprint arguments are
required for API Key authentication. You can provide the values for region and the API
Key authentication arguments (tenancy_ocid, user_ocid, private_key_path, and
fingerprint) as environment variables or within Terraform configuration variables (as
mentioned in Substep 1.3: Loading Terraform Configuration Variables).

variable "tenancy_ocid" {
}
variable "user_ocid" {
}
variable "fingerprint" {
}
variable "private_key_path" {
}
variable "region" {
}

provider "oci" {
region = var.region
tenancy_ocid = var.tenancy_ocid
user_ocid = var.user_ocid
fingerprint = var.fingerprint
private_key_path = var.private_key_path
}

Instance Principal Authorization


Instance principal authorization allows your provider to make API calls from an OCI
compute instance without needing the tenancy_ocid, user_ocid, private_key_path,
and fingerprint attributes in your provider definition.


Note:
Instance principal authorization applies only to instances that are running in Oracle
Cloud Infrastructure.

In the example below, a region argument is required for the OCI Terraform provider, and an
auth argument is required for Instance Principal Authorization. You can provide the value for
the region argument as an environment variable or within Terraform configuration variables
(as mentioned in Substep 1.3: Loading Terraform Configuration Variables).

variable "region" {
}
provider "oci" {
auth = "InstancePrincipal"
region = var.region
}

Security Token Authentication


Security Token authentication allows you to run Terraform using a token generated with
Token-based Authentication for the CLI.

Note:
This token expires after one hour. Avoid using this authentication method when
provisioning of resources takes longer than one hour. See Refreshing a Token for
more information.

In the example below, a region argument is required for the OCI Terraform provider. The
auth and config_file_profile arguments are required for Security Token authentication.

You can provide the values for the API Key authentication arguments (tenancy_ocid,
user_ocid, private_key_path, and fingerprint) as environment variables or in the OCI
config file (~/.oci/config). You can also provide the values for region and
config_file_profile as environment variables or within Terraform configuration variables
(as mentioned in Substep 1.3: Loading Terraform Configuration Variables).

variable "region" {
}
variable "config_file_profile" {
}
provider "oci" {
auth = "SecurityToken"
config_file_profile = var.config_file_profile
region = var.region
}


Substep 1.2: Create NoSQL Terraform configuration

Create a new file named "nosql.tf" that contains the NoSQL terraform configuration
resources for creating NoSQL Database Cloud Service tables or indexes. For more
information about the NoSQL Database resources and data sources, see
oci_nosql_table.
In the example below, we are creating two NoSQL tables. The compartment_ocid
argument is required for NoSQL Database resources such as tables and indexes. You can
provide the value for compartment_ocid as an environment variable or within Terraform
configuration variables (as mentioned in Substep 1.3: Loading Terraform
Configuration Variables).

variable "compartment_ocid" {
}

resource "oci_nosql_table" "nosql_demo" {


compartment_id = var.compartment_ocid
ddl_statement = "CREATE TABLE if not exists demo (ticketNo
INTEGER, fullName STRING, contactPhone STRING, confNo STRING, gender
STRING, bagInfo JSON, PRIMARY KEY (ticketNo))"
name = "demo"
table_limits {
max_read_units = var.table_table_limits_max_read_units
max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
max_write_units = var.table_table_limits_max_write_units
}
}

resource "oci_nosql_table" "nosql_demoKeyVal" {

compartment_id = var.compartment_ocid
ddl_statement = "CREATE TABLE if not exists demoKeyVal (key
INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO
CYCLE), value JSON, PRIMARY KEY (key))"
name = "demoKeyVal"
table_limits {
max_read_units = var.table_table_limits_max_read_units
max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
max_write_units = var.table_table_limits_max_write_units
}
}
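
If you also want to manage an index on one of these tables through Terraform, the
oci_nosql_index resource can be declared alongside them. The following is a minimal
sketch; this genderIdx index is an illustration and is not part of this tutorial's
terraform.zip:

resource "oci_nosql_index" "nosql_demo_gender_idx" {
  # Create the index on the demo table defined above.
  table_name_or_id = oci_nosql_table.nosql_demo.id
  name             = "genderIdx"
  keys {
    column_name = "gender"
  }
}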

Substep 1.3: Loading Terraform Configuration Variables


The next step is to create a file named "terraform.tfvars" and provide values for
the required OCI Terraform provider arguments based on the authentication method.

• API Key Authentication

• Instance Principal Authorization


• Security Token Authentication

API Key Authentication


Provide values for your IAM API key credentials (tenancy_ocid, user_ocid,
private_key_path, and fingerprint) and for the region and compartment_ocid
arguments. You should already have an OCI IAM user, with an API signing key, that has
sufficient permissions on the NoSQL Database Cloud Service. Get those values and store
them in the file. For example:

tenancy_ocid = "ocid1.tenancy.oc1..aaaaaaaaqljdu37xcfoqvyj47pf5dqutpxu4twoqc7hukwgpbavpdwkqxc6q"
user_ocid = "ocid1.user.oc1..aaaaaaaafxz473ypsc6oqiespihan6yi6obse3o4e4t5zmpm6rdln6fnkurq"
fingerprint = "2c:9b:ed:12:81:8d:e6:18:fe:1f:0d:c7:66:cc:03:3c"
private_key_path = "~/NoSQLLabPrivateKey.pem"
compartment_ocid = "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya"
region = "us-phoenix-1"

Instance Principal Authorization


Provide values for region and compartment_ocid arguments.

For example:

region = "us-phoenix-1"
compartment_ocid =
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7n
f5ca3ya"

Security Token Authentication


Provide values for the region, compartment_ocid, and config_file_profile arguments.

For example:

region = "us-phoenix-1"
compartment_ocid =
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7n
f5ca3ya"
config_file_profile = "PROFILE"

Substep 1.4: Declaring Input Variables


The last step is to create "variables.tf" and assign values to the input variables in it.
Refer to the Terraform documentation for the valid arguments and properties
available for NoSQL Database. In the example, the default values of the read, write,
and storage units for a NoSQL table are set to 10, 10, and 1 respectively.

variable "table_table_limits_max_read_units" {
default = 10
}
variable "table_table_limits_max_write_units" {
default = 10
}
variable "table_table_limits_max_storage_in_gbs" {
default = 1
}

Using Cloud Shell from the Console, we have created all the required Terraform
configuration files for the provider definition, NoSQL database resources, authentication
values, and input variables.

Step 2: Where to Store Your Terraform Configurations

When creating a stack with Resource Manager, you can select your Terraform
configuration from the following sources.
• Local .zip file
• Local folder
• Object Storage bucket
The most recent contents of the bucket are automatically used by any job running
on the associated stack.
• Source code control systems, such as GitHub and GitLab
The latest version of your configuration is automatically used by any job running
on the associated stack.
• Template (pre-built Terraform configuration from Oracle or a private template)
• Existing compartment (Resource Discovery)


Substep 2.1: Create Configuration Source Providers for Remote Terraform Configurations

You need to create a configuration source provider if you want to use remote
Terraform configurations that are hosted on a source control system, such as GitHub or
GitLab.
For more information on how to create configuration source providers for remote Terraform
configurations, see Managing Configuration Source Providers.

Step 3: Create a Stack from a File

Use the command related to your file location. For Terraform configuration sources supported
with Resource Manager, see Where to Store Your Terraform Configurations.
For this tutorial, we are going to create a stack, using the Instance Principal authentication
method, from a local Terraform configuration zip file, terraform.zip. The terraform.zip
file contains the following files:
• provider.tf
• nosql.tf
• terraform.tfvars
• variables.tf

Note:
In this tutorial, we are using OCI Resource Manager CLI commands to create a
stack. You can perform the same task using OCI Resource Manager Console.

• File hosted on Git

• Terraform configuration in OS bucket

• Uploaded configuration .zip file


File hosted on Git


To create a stack from a file hosted on a source code control system, such as GitHub
or GitLab, run the following command:

oci resource-manager stack create-from-git-provider
--compartment-id ocid1.tenancy.oc1..uniqueid
--config-source-configuration-source-provider-id
ocid.ormconfigsourceprovider.oc1..uniqueid
--config-source-repository-url https://github.com/user/repo.git
--config-source-branch-name mybranch
--display-name "My Stack from Git"
--description "Create NoSQL Table"
--variables file://variables.json
--working-directory ""

Terraform configuration in OS bucket


To create a stack from a Terraform configuration in an Object Storage bucket, run the
following command:

oci resource-manager stack create-from-object-storage
--compartment-id ocid1.tenancy.oc1..uniqueid
--config-source-namespace MyNamespace
--config-source-bucket-name MyBucket
--config-source-region PHX
--display-name "My Stack from Object Storage"
--description "Create NoSQL Table"
--variables file://variables.json

Uploaded configuration .zip file


To create a stack from an uploaded configuration file (.zip), run the following command:

oci resource-manager stack create
--compartment-id ocid1.tenancy.oc1..uniqueid
--config-source vcn.zip
--variables file://variables.json
--display-name "My Example Stack"
--description "Create NoSQL Table"
--working-directory ""

Where,
• --compartment-id is the OCID of the compartment where you want to create the
stack.
• --config-source is the name of a .zip file that contains one or more Terraform
configuration files.
• (Optional) --variables is the path to a JSON file specifying input variables for your
resources; a sample is shown after this list.
The Oracle Cloud Infrastructure Terraform provider requires additional parameters
when running Terraform locally (unless you are using instance principals). For
more information on using variables in Terraform, see Input Variables. See also
Input Variable Configuration.


• (Optional) --display-name is the friendly name for the new stack.


• (Optional) --description is the description for the new stack.
• (Optional) --working-directory is the root configuration file in the directory. If not
specified, or if null as in this example, then the service assumes that the top-level file in
the directory is the root configuration file.
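
The following is a minimal sketch of a variables.json file for this configuration. The
variable names match the ones declared earlier, and the OCID is a placeholder you
would replace with your own; pass only the variables you do not already set elsewhere:

{
  "region": "us-phoenix-1",
  "compartment_ocid": "ocid1.compartment.oc1..<your-compartment-ocid>"
}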
For example:

oci resource-manager stack create


--compartment-id
ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf
5ca3ya
--config-source terraform.zip

Example response:

{
"data": {
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7n
f5ca3ya",
"config-source": {
"config-source-type": "ZIP_UPLOAD",
"working-directory": null
},
"defined-tags": {},
"description": null,
"display-name": "ormstack20220117104810",
"freeform-tags": {},
"id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpks
cmr57bq",
"lifecycle-state": "ACTIVE",
"stack-drift-status": "NOT_CHECKED",
"terraform-version": "1.0.x",
"time-created": "2022-01-17T10:48:10.878000+00:00",
"time-drift-last-checked": null,
"variables": {}
},
"etag": "dd62ace0b9e9d825d825c05d4588b73fede061e55b75de6436b84fb2bb794185"
}

We have created a stack from the terraform configuration file(s) and generated a stack id. In
the next step, this stack id is used to generate an execution plan for the deployment of
NoSQL tables.

"id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpks
cmr57bq"


Step 4: Generate an Execution Plan

Note:
In this tutorial, we are using OCI Resource Manager CLI commands to generate an
execution plan. You can perform the same task using OCI Resource Manager
Console.

To generate an execution plan, run the following command:

oci resource-manager job create-plan-job
--stack-id <stack_OCID>
--display-name "<friendly_name>"

For example:

oci resource-manager job create-plan-job


--stack-id
ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpksc
mr57bq

Example response:

{
"data": {
"apply-job-plan-resolution": null,
"cancellation-details": {
"is-forced": false
},
"compartment-id":

1-87
Chapter 1
Devops

"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drj
ho3f7nf5ca3ya",
"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "ormjob20220117104856",
"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaacrylnpglae4yvwo4q2r2tk5z5x5v6bwjsoxgn26mo
yg3eqwnt2aq",
"job-operation-details": {
"operation": "PLAN",
"terraform-advanced-options": {
"detailed-log-level": null,
"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "ACCEPTED",
"operation": "PLAN",
"resolved-plan-job-id": null,
"stack-id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxio
hgcpkscmr57bq",
"time-created": "2022-01-17T10:48:56.324000+00:00",
"time-finished": null,
"variables": {},
"working-directory": null
},
"etag":
"a6f75ec1e205cd9105705fd7c8d65bf262159a7e733b27148049e70ce6fc14fe"
}

We have generated an execution plan from a stack. The Resource Manager creates a
job with a unique id corresponding to this execution plan. This plan job id can be used
later to review the execution plan details before running the apply operation to deploy
the NoSQL database resources on the OCI cloud.

"id":
"ocid1.ormjob.oc1.phx.aaaaaaaacrylnpglae4yvwo4q2r2tk5z5x5v6bwjsoxgn26mo
yg3eqwnt2aq",
"job-operation-details": {
"operation": "PLAN"
...
}

Substep 4.1: Review the Execution Plan


To review an execution plan, run the following command:

oci resource-manager job get-job-logs


--job-id <plan_job_OCID>

For example:

oci resource-manager job get-job-logs


--job-id
ocid1.ormjob.oc1.phx.aaaaaaaacrylnpglae4yvwo4q2r2tk5z5x5v6bwjsoxgn26moyg3eqwn
t2aq

Example response:

...
{
"level": "INFO",
"message": "Terraform used the selected providers to generate the
following execution",
"timestamp": "2022-01-17T10:49:21.634000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "plan. Resource actions are indicated with the following
symbols:",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + create",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Terraform will perform the following actions:",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demo will be created",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + resource \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3dr
jho3f7nf5ca3ya\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + ddl_statement = \"CREATE TABLE if not
exists demo (ticketNo INTEGER, fullName STRING, contactPhone STRING,
confNo STRING, gender STRING, bagInfo JSON, PRIMARY KEY (ticketNo))\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + is_auto_reclaimable = true",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + name = \"demo\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_limits {",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_read_units = 10",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_storage_in_gbs = 1",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_write_units = 10",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demoKeyVal will be
created",

1-90
Chapter 1
Devops

"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + resource \"oci_nosql_table\" \"nosql_demoKeyVal\" {",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7
nf5ca3ya\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + ddl_statement = \"CREATE TABLE if not exists
demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT
BY 1 NO CYCLE), value JSON, PRIMARY KEY (key))\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + is_auto_reclaimable = true",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + name = \"demoKeyVal\"",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_limits {",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_read_units = 10",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_storage_in_gbs = 1",
"timestamp": "2022-01-17T10:49:21.635000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_write_units = 10",
"timestamp": "2022-01-17T10:49:21.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Plan: 2 to add, 0 to change, 0 to destroy.",
"timestamp": "2022-01-17T10:49:21.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
...

This step is very important, as it validates whether the stack code contains any syntax
errors and shows exactly how many OCI resources will be added, updated, or destroyed.
In this tutorial, we are deploying two NoSQL tables: demo and demoKeyVal.

{
...
"message": "Plan: 2 to add, 0 to change, 0 to destroy.",
...
}

Step 5: Run an Apply Job

• To specify a plan job ("apply" an execution plan), use FROM_PLAN_JOB_ID:

oci resource-manager job create-apply-job


--stack-id <stack_OCID>
--execution-plan-strategy FROM_PLAN_JOB_ID
--execution-plan-job-id <plan_job_OCID>
--display-name "Example Apply Job"

• To automatically approve the apply job (no plan job specified), use AUTO_APPROVED:

oci resource-manager job create-apply-job


--stack-id <stack_OCID>
--execution-plan-strategy AUTO_APPROVED
--display-name "Example Apply Job"


For example:

oci resource-manager job create-apply-job


--stack-id
ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpksc
mr57bq
--execution-plan-strategy AUTO_APPROVED
--display-name "Create NoSQL Tables Using Terraform"

Example response:

{
"data": {
"apply-job-plan-resolution": {
"is-auto-approved": true,
"is-use-latest-job-id": null,
"plan-job-id": null
},
"cancellation-details": {
"is-forced": false
},
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7n
f5ca3ya",
"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "Create NoSQL Tables Using Terraform",
"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaaqn4nsnfgi3th4rxolwqn3kftzzdpsw52pnfeyphi5dsxd6f
hescq",
"job-operation-details": {
"execution-plan-job-id": null,
"execution-plan-strategy": "AUTO_APPROVED",
"operation": "APPLY",
"terraform-advanced-options": {
"detailed-log-level": null,
"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "ACCEPTED",
"operation": "APPLY",
"resolved-plan-job-id": null,
"stack-id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpks
cmr57bq",
"time-created": "2022-01-17T10:54:46.346000+00:00",
"time-finished": null,
"variables": {},
"working-directory": null

1-93
Chapter 1
Devops

},
"etag":
"4042a300e8f678dd6da0f49ffeccefed66902b51331ebfbb559da8077a728126"
}

We have run the apply operation on the execution plan from a stack. The Resource
Manager creates a job with a unique id to run the apply operation. This apply job id
can later be used to review the logs generated as part of the NoSQL Database table
deployment on the OCI cloud.

"id":
"ocid1.ormjob.oc1.phx.aaaaaaaaqn4nsnfgi3th4rxolwqn3kftzzdpsw52pnfeyphi5
dsxd6fhescq",
"job-operation-details": {
"operation": "APPLY"
...
}

Substep 5.1: Verify the Status of Job


To verify the status of a job, run the following command:

oci resource-manager job get


--job-id <job_OCID>

For example:

oci resource-manager job get


--job-id
ocid1.ormjob.oc1.phx.aaaaaaaaqn4nsnfgi3th4rxolwqn3kftzzdpsw52pnfeyphi5d
sxd6fhescq

Example response:

{
"data": {
"apply-job-plan-resolution": {
"is-auto-approved": true,
"is-use-latest-job-id": null,
"plan-job-id": null
},
"cancellation-details": {
"is-forced": false
},
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drj
ho3f7nf5ca3ya",
"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "Create NoSQL Tables Using Terraform",

1-94
Chapter 1
Devops

"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaaqn4nsnfgi3th4rxolwqn3kftzzdpsw52pnfeyphi5dsxd6f
hescq",
"job-operation-details": {
"execution-plan-job-id": null,
"execution-plan-strategy": "AUTO_APPROVED",
"operation": "APPLY",
"terraform-advanced-options": {
"detailed-log-level": null,
"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "SUCCEEDED",
"operation": "APPLY",
"resolved-plan-job-id": null,
"stack-id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpks
cmr57bq",
"time-created": "2022-01-17T10:54:46.346000+00:00",
"time-finished": "2022-01-17T10:55:28.853000+00:00",
"variables": {},
"working-directory": null
},
"etag": "9e9f524b87e3c47b3f3ea3bbb4c1f956172a48e4c2311a44840c8b96e318bcaf--
gzip"
}

You can check the status of your apply job to verify whether the job SUCCEEDED or FAILED.

{
...
"lifecycle-state": "SUCCEEDED",
...
}

Substep 5.2: View the Log for a Job


To view the log for a job, run the following command:

oci resource-manager job get-job-logs-content


--job-id <job_OCID>

For example:

oci resource-manager job get-job-logs-content


--job-id
ocid1.ormjob.oc1.phx.aaaaaaaaqn4nsnfgi3th4rxolwqn3kftzzdpsw52pnfeyphi5dsxd6fh
escq


Example response:

...
{
"level": "INFO",
"message": "Terraform will perform the following actions:",
"timestamp": "2022-01-17T10:55:05.580000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "",
"timestamp": "2022-01-17T10:55:05.580000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demo will be created",
"timestamp": "2022-01-17T10:55:05.580000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + resource \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3dr
jho3f7nf5ca3ya\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + ddl_statement = \"CREATE TABLE if not
exists demo (ticketNo INTEGER, fullName STRING, contactPhone STRING,
confNo STRING, gender STRING, bagInfo JSON, PRIMARY KEY (ticketNo))\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + is_auto_reclaimable = true",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + name = \"demo\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_limits {",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_read_units = 10",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_storage_in_gbs = 1",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_write_units = 10",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demoKeyVal will be created",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + resource \"oci_nosql_table\" \"nosql_demoKeyVal\" {",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7
nf5ca3ya\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + ddl_statement = \"CREATE TABLE if not exists
demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT
BY 1 NO CYCLE), value JSON, PRIMARY KEY (key))\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + is_auto_reclaimable = true",

1-97
Chapter 1
Devops

"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + name = \"demoKeyVal\"",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_limits {",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_read_units = 10",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_storage_in_gbs = 1",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + max_write_units = 10",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Plan: 2 to add, 0 to change, 0 to destroy.",
"timestamp": "2022-01-17T10:55:05.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demoKeyVal: Creating...",
"timestamp": "2022-01-17T10:55:06.581000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Creating...",
"timestamp": "2022-01-17T10:55:06.582000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demoKeyVal: Creation complete
after 6s

1-98
Chapter 1
Devops

[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhwlqrl
bd54l3wdo7hq]",
"timestamp": "2022-01-17T10:55:12.582000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Creation complete after 9s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsr
vrc4zxr6lo4a]",
"timestamp": "2022-01-17T10:55:15.583000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Apply complete! Resources: 2 added, 0 changed, 0
destroyed.",
"timestamp": "2022-01-17T10:55:15.583000+00:00",
"type": "TERRAFORM_CONSOLE"
},
...

This step is very important, as it confirms exactly how many OCI resources were added,
updated, or destroyed. In this tutorial, we have successfully deployed two NoSQL tables: demo
and demoKeyVal.

{
...
"message": "Apply complete! Resources: 2 added, 0 changed, 0
destroyed.",
...
}

We have covered a lot of details in this tutorial. We created the Terraform configuration files
required for the deployment of NoSQL database tables on the OCI cloud and then configured
the source location for these files. We then used the OCI Resource Manager CLI to create a
stack, generate an execution plan, and run an apply job on the execution plan.

Updating Oracle NoSQL Database Cloud Service Table Using Terraform and OCI
Resource Manager

In this article, we will see the steps to update an existing NDCS table's schema or its table
limits (read units, write units, or storage) using Terraform. We will use the OCI Resource
Manager CLI to update NDCS tables. Before we proceed with this article, it is assumed that
you are familiar with the basics of the NoSQL Cloud Service and of Terraform.
The first step is to create the override file with the necessary configuration changes, and then
run the Resource Manager CLI commands to perform the following tasks:
• Update a stack.
• Generate and review the execution plan.
• Run the Apply job to update the required NoSQL resources.


• Review log files, as needed.


This article has the following topics:

Step 1: Create Terraform Override files for NoSQL Database Table


Create a new file named "nosql_override.tf" or "override.tf" and provide the
specific portion of the NoSQL Database table object that you want to override. For
example, you may want to add or drop a column from the table, change the data type
of an existing column, or change the table limits (read/write and storage units).
In the example below, we are going to modify the demo table to drop an existing
column, named fullName, and modify the demoKeyVal table to add a new column,
named shortName.

For example:
If you have a Terraform configuration nosql.tf with the following contents:

variable "compartment_ocid" {
}
resource "oci_nosql_table" "nosql_demo" {
compartment_id = var.compartment_ocid
ddl_statement = "CREATE TABLE if not exists demo (ticketNo
INTEGER, fullName STRING, contactPhone STRING, confNo STRING, gender
STRING, bagInfo JSON, PRIMARY KEY (ticketNo))"
name = "demo"
table_limits {
max_read_units = var.table_table_limits_max_read_units
max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
max_write_units = var.table_table_limits_max_write_units
}
}
resource "oci_nosql_table" "nosql_demoKeyVal" {
compartment_id = var.compartment_ocid
ddl_statement = "CREATE TABLE if not exists demoKeyVal (key
INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1 NO
CYCLE), value JSON, PRIMARY KEY (key))"
name = "demoKeyVal"
table_limits {
max_read_units = var.table_table_limits_max_read_units
max_storage_in_gbs = var.table_table_limits_max_storage_in_gbs
max_write_units = var.table_table_limits_max_write_units
}
}

You now want to modify the demo table and drop an existing column, named fullName,
and modify the demoKeyVal table to add a new column, named shortName. Then
create a file nosql_override.tf or override.tf containing the following content:

variable "compartment_ocid" {
}
resource "oci_nosql_table" "nosql_demo" {
compartment_id = var.compartment_ocid
ddl_statement = "CREATE TABLE if not exists demo (ticketNo

1-100
Chapter 1
Devops

INTEGER, contactPhone STRING,


confNo STRING, gender STRING, bagInfo JSON, PRIMARY KEY (ticketNo))"
name = "demo"
}
resource "oci_nosql_table" "nosql_demoKeyVal" {
compartment_id = var.compartment_ocid
ddl_statement = "CREATE TABLE if not exists demoKeyVal (key INTEGER
GENERATED ALWAYS AS IDENTITY
(START WITH 1 INCREMENT BY 1 NO CYCLE), value JSON, shortName STRING,
PRIMARY KEY (key))"
name = "demoKeyVal"
}

When Terraform processes this file (nosql_override.tf), it internally parses the DDL
statement (the CREATE TABLE statement), compares it with the existing table definition,
generates an equivalent ALTER TABLE statement, and applies it.
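
In this example, as the plan output in the later steps shows, the override file results in
statements equivalent to the following being applied:

ALTER TABLE demo (DROP fullName)
ALTER TABLE demoKeyVal (ADD shortName STRING)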

Step 2: Update the Execution Plan

Note:
These instructions don't apply to configurations stored in source code control
systems. If you are using a source code control system such as GitHub or GitLab to
maintain your Terraform configuration files, you can skip this step and move directly to
Step 3. The latest version of your configuration is automatically used by any job
running on the associated stack.

For this tutorial, we are going to update the stack with the updated Terraform
configuration zip file, terraform.zip. The updated terraform.zip file contains the
following files:
• provider.tf
• nosql.tf
• nosql_override.tf or override.tf
• terraform.tfvars
• variables.tf

For example:

oci resource-manager stack update


--stack-id
ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpksc
mr57bq
--config-source terraform.zip

Example response:

{
"data": {
"compartment-id":

1-101
Chapter 1
Devops

"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drj
ho3f7nf5ca3ya",
"config-source": {
"config-source-type": "ZIP_UPLOAD",
"working-directory": null
},
"defined-tags": {},
"description": null,
"display-name": "ormstack20220117104810",
"freeform-tags": {},
"id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxio
hgcpkscmr57bq",
"lifecycle-state": "ACTIVE",
"stack-drift-status": "NOT_CHECKED",
"terraform-version": "1.0.x",
"time-created": "2022-01-17T10:48:10.878000+00:00",
"time-drift-last-checked": null,
"variables": {}
},
"etag":
"068e7b962aa43c7b3e7bf5c24b2d7f937db0901a784a9dce8715d76d78ad30f3"
}

We have updated the existing stack with the new zip file containing the override
Terraform configuration file(s).
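
If you want to confirm that the stack now references the new configuration, you can fetch
the stack and inspect its config-source details (a sketch; use the same stack OCID as
above):

oci resource-manager stack get --stack-id <stack_OCID>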

Step 3: Generate an Execution Plan


To generate an execution plan, run the following command:

oci resource-manager job create-plan-job


--stack-id <stack_OCID>
--display-name "<friendly_name>"

For example:

oci resource-manager job create-plan-job


--stack-id
ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxioh
gcpkscmr57bq

Example response:

{
"data": {
"apply-job-plan-resolution": null,
"cancellation-details": {
"is-forced": false
},
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drj
ho3f7nf5ca3ya",


"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "ormjob20220124122310",
"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaagke5ajwwchvxkql2c56qoohhvc2dxu5fnqswnpw4hsombrf
ijnia",
"job-operation-details": {
"operation": "PLAN",
"terraform-advanced-options": {
"detailed-log-level": null,
"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "ACCEPTED",
"operation": "PLAN",
"resolved-plan-job-id": null,
"stack-id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpks
cmr57bq",
"time-created": "2022-01-24T12:23:10.366000+00:00",
"time-finished": null,
"variables": {},
"working-directory": null
},
"etag": "b77d497287af3dd2d166871457d880ffee9952ee2c9a44e8f9dfa3e02b974c95"
}

We have generated an execution plan from the stack. Resource Manager creates a job with a
unique id corresponding to this execution plan. This plan job id can be used later to
review the execution plan details before running the apply operation that deploys the
NoSQL Database resources on OCI, as sketched after the excerpt below.

"id":
"ocid1.ormjob.oc1.phx.aaaaaaaagke5ajwwchvxkql2c56qoohhvc2dxu5fnqswnpw4hsombrf
ijnia",
"job-operation-details": {
"operation": "PLAN"
...
}
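
If you prefer to apply exactly this reviewed plan rather than auto-approving in Step 4,
you could pass this plan job OCID to the apply job (a sketch that combines the id above
with the FROM_PLAN_JOB_ID strategy shown in Step 4):

oci resource-manager job create-apply-job
--stack-id <stack_OCID>
--execution-plan-strategy FROM_PLAN_JOB_ID
--execution-plan-job-id ocid1.ormjob.oc1.phx.aaaaaaaagke5ajwwchvxkql2c56qoohhvc2dxu5fnqswnpw4hsombrfijnia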

Step 4: Run an Apply Job


• To specify a plan job ("apply" an execution plan), use FROM_PLAN_JOB_ID:

oci resource-manager job create-apply-job


--stack-id <stack_OCID>
--execution-plan-strategy FROM_PLAN_JOB_ID


--execution-plan-job-id <plan_job_OCID>
--display-name "Example Apply Job"

• To automatically approve the apply job (no plan job specified), use AUTO_APPROVED:

oci resource-manager job create-apply-job


--stack-id <stack_OCID>
--execution-plan-strategy AUTO_APPROVED
--display-name "Example Apply Job"

For example:

oci resource-manager job create-apply-job


--stack-id
ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxioh
gcpkscmr57bq
--execution-plan-strategy AUTO_APPROVED
--display-name "Update NoSQL Tables Using Terraform"

Example response:
{
"data": {
"apply-job-plan-resolution": {
"is-auto-approved": true,
"is-use-latest-job-id": null,
"plan-job-id": null
},
"cancellation-details": {
"is-forced": false
},
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca
3ya",
"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "Update NoSQL Tables Using Terraform",
"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaacmnanu2qd34x7l5uicgpdfpjbsgh5swddmtslb3qmbzg3dmuc3b
q",
"job-operation-details": {
"execution-plan-job-id": null,
"execution-plan-strategy": "AUTO_APPROVED",
"operation": "APPLY",
"terraform-advanced-options": {
"detailed-log-level": null,
"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "ACCEPTED",
"operation": "APPLY",
"resolved-plan-job-id": null,
"stack-id":


"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxiohgcpkscmr57bq",
"time-created": "2022-01-24T12:36:52.911000+00:00",
"time-finished": null,
"variables": {},
"working-directory": null
},
"etag": "b2af026af48897c7839c347e06a8c40ec3ce1cac08a3da2f0c6ee74fb07078ab"
}

Substep 4.1: Verify the Status of Job


To verify the status of a job, run the following command:

oci resource-manager job get


--job-id <job_OCID>

For example:

oci resource-manager job get


--job-id
ocid1.ormjob.oc1.phx.aaaaaaaacmnanu2qd34x7l5uicgpdfpjbsgh5swddmtslb3qmbzg3dmu
c3bq

Example response:

{
"data": {
"apply-job-plan-resolution": {
"is-auto-approved": true,
"is-use-latest-job-id": null,
"plan-job-id": null
},
"cancellation-details": {
"is-forced": false
},
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7n
f5ca3ya",
"config-source": {
"config-source-record-type": "ZIP_UPLOAD"
},
"defined-tags": {},
"display-name": "ALTER NoSQL Table Schema",
"failure-details": null,
"freeform-tags": {},
"id":
"ocid1.ormjob.oc1.phx.aaaaaaaacmnanu2qd34x7l5uicgpdfpjbsgh5swddmtslb3qmbzg3dm
uc3bq",
"job-operation-details": {
"execution-plan-job-id": null,
"execution-plan-strategy": "AUTO_APPROVED",
"operation": "APPLY",
"terraform-advanced-options": {
"detailed-log-level": null,


"is-refresh-required": true,
"parallelism": 10
}
},
"lifecycle-state": "SUCCEEDED",
"operation": "APPLY",
"resolved-plan-job-id": null,
"stack-id":
"ocid1.ormstack.oc1.phx.aaaaaaaa7jrci2s5iav5tdxpl6ucwo2dwazzrdkfhs6bxio
hgcpkscmr57bq",
"time-created": "2022-01-20T11:14:13.916000+00:00",
"time-finished": "2022-01-20T11:14:51.921000+00:00",
"variables": {},
"working-directory": null
},
"etag":
"13b1253bd5e6ca78778b4cf6aad38d262b1476aae06e6f36b40b5f914016b899--
gzip"
}

You can check the lifecycle-state of your apply job to verify whether the job SUCCEEDED
or FAILED (a one-line query sketch follows the excerpt below).

{
...
"lifecycle-state": "SUCCEEDED",
...
}
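
Because the response is JSON, you can also extract just this field with the CLI's
JMESPath --query option (a sketch; --query is a standard option on OCI CLI commands):

oci resource-manager job get --job-id <job_OCID> --query 'data."lifecycle-state"'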

Substep 4.2: View the Log for a Job


To view the log for a job, run the following command:

oci resource-manager job get-job-logs-content


--job-id <job_OCID>

For example:

oci resource-manager job get-job-logs-content


--job-id
ocid1.ormjob.oc1.phx.aaaaaaaacmnanu2qd34x7l5uicgpdfpjbsgh5swddmtslb3qmb
zg3dmuc3bq

Example response:

...
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demoKeyVal: Refreshing
state...
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudx
hwlqrlbd54l3wdo7hq]",


"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Refreshing state...
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsr
vrc4zxr6lo4a]",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "plan. Resource actions are indicated with the following
symbols:",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ update in-place",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Terraform will perform the following actions:",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # data.oci_nosql_table.nosql_demo will be read during
apply",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " <= data \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7
nf5ca3ya\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_name_or_id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvr


c4zxr6lo4a\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " <= data \"oci_nosql_table\" \"nosql_demoKeyVal\"
{",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + compartment_id =
\"ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3dr
jho3f7nf5ca3ya\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " + table_name_or_id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhw
lqrlbd54l3wdo7hq\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demo will be updated in-
place",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ resource \"oci_nosql_table\" \"nosql_demo\" {",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ ddl_statement = \"CREATE TABLE IF NOT
EXISTS demo(ticketNo INTEGER, contactPhone STRING, confNo STRING,
gender STRING, bagInfo JSON, fullName STRING, PRIMARY
KEY(SHARD(ticketNo)))\" -> \"ALTER TABLE demo (DROP fullName)\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtl
xxsrvrc4zxr6lo4a\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",


"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " name = \"demo\"",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " # oci_nosql_table.nosql_demoKeyVal will be updated in-
place",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ resource \"oci_nosql_table\" \"nosql_demoKeyVal\" {",
"timestamp": "2022-01-20T11:14:26.632000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " ~ ddl_statement = \"CREATE TABLE if not exists
demoKeyVal (key INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT
BY 1 NO CYCLE), value JSON, PRIMARY KEY (key))\" -> \"ALTER TABLE demoKeyVal
(ADD shortName STRING)\"",
"timestamp": "2022-01-20T11:14:26.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " id =
\"ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhwlqrlbd
54l3wdo7hq\"",
"timestamp": "2022-01-20T11:14:26.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": " name = \"demoKeyVal\"",
"timestamp": "2022-01-20T11:14:26.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Plan: 0 to add, 2 to change, 0 to destroy.",
"timestamp": "2022-01-20T11:14:26.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demoKeyVal: Modifying...
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudxhwlqrl
bd54l3wdo7hq]",


"timestamp": "2022-01-20T11:14:27.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Modifying...
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bq
tlxxsrvrc4zxr6lo4a]",
"timestamp": "2022-01-20T11:14:27.633000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "oci_nosql_table.nosql_demo: Modifications complete
after 9s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bq
tlxxsrvrc4zxr6lo4a]",
"timestamp": "2022-01-20T11:14:35.634000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "data.oci_nosql_table.nosql_demo: Reading...",
"timestamp": "2022-01-20T11:14:35.634000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "data.oci_nosql_table.nosql_demo: Read complete after
0s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bq
tlxxsrvrc4zxr6lo4a]",
"timestamp": "2022-01-20T11:14:35.634000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "data.oci_nosql_table.nosql_demoKeyVal: Reading...",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "data.oci_nosql_table.nosql_demoKeyVal: Read complete
after 0s
[id=ocid1.nosqltable.oc1.phx.amaaaaaau7x7rfyaqgpbjucp3s6jjzpnar4lg5yudx
hwlqrlbd54l3wdo7hq]",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "Apply complete! Resources: 0 added, 2 changed, 0
destroyed.",
"timestamp": "2022-01-20T11:14:44.636000+00:00",


"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "nosql_kv_table_ddl_statement = \"CREATE TABLE IF NOT
EXISTS demoKeyVal(key INTEGER, value JSON, shortName STRING, PRIMARY
KEY(SHARD(key)))\"",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
{
"level": "INFO",
"message": "nosql_table_ddl_statement = \"CREATE TABLE IF NOT EXISTS
demo(ticketNo INTEGER, contactPhone STRING, confNo STRING, gender STRING,
bagInfo JSON, PRIMARY KEY(SHARD(ticketNo)))\"",
"timestamp": "2022-01-20T11:14:44.636000+00:00",
"type": "TERRAFORM_CONSOLE"
},
...

This step is important because it confirms exactly how many OCI resources were added,
changed, or destroyed. In this tutorial, we have successfully updated the schema of the
two NoSQL tables: demo and demoKeyVal.

{
...
"message": "Apply complete! Resources: 0 added, 2 changed, 0
destroyed.",
...
}

We have covered a lot of detail in this tutorial. We created the override Terraform
configuration files required for updating the schema of NoSQL Database tables on OCI,
and then used the OCI Resource Manager CLI to update the existing stack, generate an
execution plan, and run an apply job on that plan.

Develop
• Install Analytics Integrator
• Using console to create tables
• Using APIs to create tables
• Using Plugins
• Designing a Table in Oracle NoSQL Database Cloud Service
• Developing in Oracle NoSQL Database Cloud Simulator
• Using Oracle NoSQL Database Migrator

Install Analytics Integrator


• Create and populate a table


• Installation
• Verify the data in the Oracle Autonomous Database
• Verify the data in Oracle Analytics

Creating a table in the Oracle NoSQL Database Cloud Service


Steps to create a table in the Oracle NoSQL Database Cloud Service and populate it
with data.

Create and populate a table


You can create a table in the Oracle NoSQL Database Cloud Service and populate it
with data. To do this, you must be able to connect to the NoSQL Cloud Service,
either with your own credentials or as an authorized Instance Principal from an
Oracle Cloud Compute Instance. If you are new to the NoSQL Cloud Service, the
30-minute tutorial and lab can help you get started. That tutorial presents a basic
example that shows you how to create and populate a simple table in the NoSQL Cloud
Service using your own credentials.
Example Application: Creating and Populating a Complex Table
The program presented here shows how to create a table in the NoSQL Cloud Service
and populate that table with complex data. To use the application presented in this
section, first copy the Java program CreateLoadComplexTable.java to the directory
examples/nosql/cloud/table in your environment. You can then compile and execute
the application.

package nosql.cloud.table;

import java.io.File;
import java.math.BigDecimal;
import java.security.SecureRandom;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

import oracle.nosql.driver.NoSQLHandle;
import oracle.nosql.driver.NoSQLHandleConfig;
import oracle.nosql.driver.NoSQLHandleFactory;
import oracle.nosql.driver.Region;
import oracle.nosql.driver.iam.SignatureProvider;
import oracle.nosql.driver.ops.DeleteRequest;
import oracle.nosql.driver.ops.DeleteResult;
import oracle.nosql.driver.ops.GetRequest;
import oracle.nosql.driver.ops.GetResult;
import oracle.nosql.driver.ops.PutRequest;
import oracle.nosql.driver.ops.PutResult;
import oracle.nosql.driver.ops.QueryRequest;
import oracle.nosql.driver.ops.QueryResult;
import oracle.nosql.driver.ops.TableLimits;
import oracle.nosql.driver.ops.TableRequest;
import oracle.nosql.driver.ops.TableResult;


import oracle.nosql.driver.util.TimestampUtil;
import oracle.nosql.driver.values.ArrayValue;
import oracle.nosql.driver.values.LongValue;
import oracle.nosql.driver.values.MapValue;
import oracle.nosql.driver.values.StringValue;

public final class CreateLoadComplexTable {


private static final SecureRandom generator = new SecureRandom();
private final NoSQLHandle ociNoSqlHndl;
private long nOps = 10; /* Default number of rows. */
private long nRowsAdded;
private boolean deleteExisting = false;
private static String compartment = "";
private static String tableName = "";
public static void main(final String[] args) {
try {
final CreateLoadComplexTable loadData =
new CreateLoadComplexTable(args);
loadData.run();
System.exit(0);
} catch (Throwable e) {
e.printStackTrace();
System.out.println("Failed to create and populate the " +
"requested table [name = " + tableName +
"]");
System.exit(1);
}
}
private CreateLoadComplexTable(final String[] argv) {
String tenantOcid = "";
String userOcid = "";
String fingerprint = "";
String privateKeyFilename = "";
String passStr = null;
File privateKeyFile = null;
char[] passPhrase = null;
final int nArgs = argv.length;
int argc = 0;

if (nArgs == 0) {
usage(null);
}
while (argc < nArgs) {
final String thisArg = argv[argc++];
if ("-tenant".equals(thisArg)) {
if (argc < nArgs) {
tenantOcid = argv[argc++];
} else {
usage("-tenant argument requires an argument");
}
} else if ("-user".equals(thisArg)) {
if (argc < nArgs) {
userOcid = argv[argc++];
} else {
usage("-user requires an argument");


}
} else if ("-fp".equals(thisArg)) {
if (argc < nArgs) {
fingerprint = argv[argc++];
} else {
usage("-fp requires an argument");
}
} else if ("-pem".equals(thisArg)) {
if (argc < nArgs) {
privateKeyFilename = argv[argc++];
privateKeyFile = new File(privateKeyFilename);
} else {
usage("-pem requires an argument");
}
} else if ("-compartment".equals(thisArg)) {
if (argc < nArgs) {
compartment = argv[argc++];
} else {
usage("-compartment requires an argument");
}
} else if ("-table".equals(thisArg)) {
if (argc < nArgs) {
tableName = argv[argc++];
} else {
usage("-table requires an argument");
}
} else if ("-n".equals(thisArg)) {
if (argc < nArgs) {
nOps = Long.parseLong(argv[argc++]);
} else {
usage("-n requires an argument");
}
} else if ("-phrase".equals(thisArg)) {
passStr = argv[argc++];
passPhrase = passStr.toCharArray();
} else if ("-delete".equals(thisArg)) {
deleteExisting = true;
} else {
usage("Unknown argument: " + thisArg);
}
}
nRowsAdded = nOps;
System.out.println("COMPARTMENT: " + compartment);
System.out.println("TABLE: " + tableName);
final SignatureProvider auth =
new SignatureProvider(tenantOcid, userOcid, fingerprint,
privateKeyFile, passPhrase);
final NoSQLHandleConfig config =
new NoSQLHandleConfig(Region.US_ASHBURN_1, auth);
ociNoSqlHndl = NoSQLHandleFactory.createNoSQLHandle(config);
createTable();
}
private void usage(final String message) {
if (message != null) {
System.out.println("\n" + message + "\n");


}
System.out.println("usage: " + getClass().getName());
System.out.println
("\t-tenant <tenant ocid>\n" +
"\t-user <user ocid>\n" +
"\t-fp <fingerprint>\n" +
"\t-pem <private key file>\n" +
"\t-compartment <compartment name>\n" +
"\t-table <table name>\n" +
"\t-n <total records to create>\n" +
"\t[-phrase <[pass phrase>]\n" +
"\t-delete (default: false) [delete all “ +
“pre-existing data]\n");
System.exit(1);
}
private void run() {
if (deleteExisting) {
deleteExistingData();
}
doLoad();
}
private void createTable() {
final int readUnits = 10;
final int writeUnits = 10;
final int storageGb = 1;
final int ttlDays = 1;
/* Wait no more than 2 minutes for table create. */
final int waitMs = 2 * 60 * 1000;
/* Check for table existence every 2 seconds. */
final int delayMs = 2 * 1000;
/* Table creation statement. */
final String statement =
"CREATE TABLE IF NOT EXISTS " + tableName +
" (" +
"ID INTEGER," +
"AINT INTEGER," +
"ALON LONG," +
"ADOU DOUBLE," +
"ANUM NUMBER," +
"AUUID STRING," +
"ATIM_P0 TIMESTAMP(0)," +
"ATIM_P3 TIMESTAMP(3)," +
"ATIM_P6 TIMESTAMP(6)," +
"ATIM_P9 TIMESTAMP(9)," +
"AENU ENUM(S,M,L,XL,XXL,XXXL)," +
"ABOO BOOLEAN," +
"ABIN BINARY," +
"AFBIN BINARY(16)," +
"ARRY ARRAY (INTEGER)," +
"AMAP MAP (DOUBLE)," +
"AREC RECORD(" +
"BLON LONG," +
"BTIM_P6 TIMESTAMP(6)," +
"BNUM NUMBER," +
"BSTR STRING," +


"BRRY ARRAY(DOUBLE))," +
"AJSON JSON," +
"PRIMARY KEY (SHARD(AINT), ALON, ADOU, ID)" +
")" +
" USING TTL " + ttlDays + " days";
System.out.println(statement);
final TableRequest tblRqst = new TableRequest();
tblRqst.setCompartment(compartment).setStatement(statement);
final TableLimits tblLimits =
new TableLimits(readUnits, writeUnits, storageGb);
tblRqst.setTableLimits(tblLimits);
final TableResult tblResult =
ociNoSqlHndl.tableRequest(tblRqst);
tblResult.waitForCompletion(ociNoSqlHndl, waitMs, delayMs);
if (tblResult.getTableState() != TableResult.State.ACTIVE) {
final String msg =
"TIMEOUT: Failed to create table in OCI NoSQL “ +
“[table=" + tableName + "]";
throw new RuntimeException(msg);
}
}
private void doLoad() {
final List<MapValue> rows = generateData(nOps);
for (MapValue row : rows) {
addRow(row);
}
displayRow();
final long nRowsTotal = nRowsInTable();
if (nOps > nRowsAdded) {
System.out.println(
nOps + " records requested, " +
nRowsAdded + " unique records actually added " +
"[" + (nOps - nRowsAdded) + " duplicates], " +
nRowsTotal + " records total in table");
} else {
System.out.println(
nOps + " records requested, " +
nRowsAdded + " unique records added, " +
nRowsTotal + " records total in table");
}
}
private void addRow(final MapValue row) {
final PutRequest putRqst = new PutRequest();
putRqst.setCompartment(compartment).setTableName(tableName);
putRqst.setValue(row);
final PutResult putRslt = ociNoSqlHndl.put(putRqst);
if (putRslt.getVersion() == null) {
    final String msg =
        "PUT: Failed to insert row [table=" + tableName +
        ", row = " + row + "]";
    /* Report the failed put instead of silently ignoring it. */
    throw new RuntimeException(msg);
}
}
/* Retrieves and deletes each row from the table. */
private void deleteExistingData() {
final String selectAll = "SELECT * FROM " + tableName;


final QueryRequest queryRqst = new QueryRequest();


queryRqst.setCompartment(compartment).setStatement(selectAll);

long cnt = 0;
do {
QueryResult queryRslt = ociNoSqlHndl.query(queryRqst);
final List<MapValue> rowMap = queryRslt.getResults();
for (MapValue row : rowMap) {
final DeleteRequest delRqst = new DeleteRequest();
delRqst.setCompartment(compartment)
.setTableName(tableName);
delRqst.setKey(row);
final DeleteResult delRslt =
ociNoSqlHndl.delete(delRqst);
if (delRslt.getSuccess()) {
cnt++;
}
}
} while (!queryRqst.isDone());
System.out.println(cnt + " records deleted");
}
/* Counts the number of rows in the table. */
private long nRowsInTable() {
final String selectAll = "SELECT * FROM " + tableName;
final QueryRequest queryRqst = new QueryRequest();
queryRqst.setCompartment(compartment).setStatement(selectAll);
long cnt = 0;
do {
QueryResult queryRslt = ociNoSqlHndl.query(queryRqst);
final List<MapValue> rowMap = queryRslt.getResults();
for (MapValue row : rowMap) {
cnt++;
}
} while (!queryRqst.isDone());
return cnt;
}
/* Convenience method for displaying output when debugging. */
private void displayRow() {
final String selectAll = "SELECT * FROM " + tableName;
final QueryRequest queryRqst = new QueryRequest();
queryRqst.setCompartment(compartment).setStatement(selectAll);
do {
QueryResult queryRslt = ociNoSqlHndl.query(queryRqst);
final List<MapValue> rowMap = queryRslt.getResults();
for (MapValue row : rowMap) {
System.out.println(row);
}
} while (!queryRqst.isDone());
}
/* Generates randomized data with which to populate the table. */
private List<MapValue> generateData(final long count) {
List<MapValue> rows = new ArrayList<>();
final BigDecimal[] numberArray = {
new BigDecimal("3E+8"),
new BigDecimal("-1.7976931348623157E+2"),


new BigDecimal("12345.76455"),
new BigDecimal("12345620.789"),
new BigDecimal("1234562078912345678988765446777475657"),
new BigDecimal("1.7976931348623157E+305"),
new BigDecimal("-1.7976931348623157E+304")
};
final Timestamp[] timeArray_p0 = {
TimestampUtil.parseString("2010-05-05T10:45:00"),
TimestampUtil.parseString("2011-05-05T10:45:01"),
Timestamp.from(Instant.parse("2021-07-15T11:31:21Z"))
};
final Timestamp[] timeArray_p3 = {
TimestampUtil.parseString("2011-05-05T10:45:01.123"),
Timestamp.from(
Instant.parse("2021-07-15T11:31:47.549Z")),
Timestamp.from(
Instant.parse("2021-07-15T11:32:12.836Z"))
};
final Timestamp[] timeArray_p6 = {
TimestampUtil.parseString(
"2014-05-05T10:45:01.789456Z"),
TimestampUtil.parseString(
"2013-08-20T12:34:56.123456Z"),
Timestamp.from(Instant.parse(
"2021-07-15T11:31:47.549213Z")),
Timestamp.from(Instant.parse(
"2021-07-15T11:32:12.567836Z"))
};
final Timestamp[] timeArray_p9 = {
Timestamp.from(Instant.parse(
"2021-07-15T12:46:35.574639954Z")),
Timestamp.from(Instant.parse(
"2021-07-15T12:47:32.883922660Z")),
Timestamp.from(Instant.parse(
"2021-07-15T12:48:11.321131987Z"))
};
final String[] enumArray =
{"S", "M", "L", "XL", "XXL", "XXXL"};
for(int i = 1; i <= count; ++i) {
byte[] byteArray = new byte[16];
generator.nextBytes(byteArray);

MapValue row = new MapValue(true,16);

row.put("ID", i);
row.put("AINT", generator.nextInt());
row.put("ALON", generator.nextLong());
row.put("ADOU", generator.nextDouble());
row.put("ANUM",
numberArray[generator.nextInt(
numberArray.length)]);
row.put("AUUID", UUID.randomUUID().toString());
/* TIMESTAMP */
row.put("ATIM_P0",
timeArray_p0[generator.nextInt(


timeArray_p0.length)]);
row.put("ATIM_P3",
timeArray_p3[generator.nextInt(
timeArray_p3.length)]);
row.put("ATIM_P6",
timeArray_p6[generator.nextInt(
timeArray_p6.length)]);
row.put("ATIM_P9",
timeArray_p9[generator.nextInt(
timeArray_p9.length)]);
/* ENUM */
row.put("AENU", enumArray[i % enumArray.length]);
/* BOOLEAN */
row.put("ABOO", generator.nextBoolean());
/* BINARY & FIXED_BINARY stored as strings */
row.put("ABIN", byteArray);
row.put("AFBIN", byteArray);
/* ARRAY of INTEGER */
ArrayValue integerArr = new ArrayValue();
for (int j = 0; j < 3; ++j) {
integerArr.add(generator.nextInt());
}
row.put("ARRY", integerArr);
/* MAP of DOUBLE */
MapValue map = new MapValue(true,3);
map.put("d1", generator.nextDouble());
map.put("d2", generator.nextDouble());
row.put("AMAP", map);
/*
* RECORD of: LONG, TIMESTAMP, NUMBER,
* STRING, ARRAY of DOUBLE
*/
MapValue record = new MapValue(true,5);
/* LONG element */
record.put("BLON", generator.nextLong());
/* TIMESTAMP element */
record.put("BTIM_P6",
timeArray_p6[generator.nextInt(
timeArray_p6.length)]);
/* NUMBER element */
record.put("BNUM",
numberArray[generator.nextInt(
numberArray.length)]);
/* STRING element */
record.put("BSTR", Double.toString(
generator.nextDouble()));
/* ARRAY of DOUBLE element */
ArrayValue doubleArr = new ArrayValue();
for (int j = 0; j < 3; ++j) {
doubleArr.add(generator.nextDouble());
}
record.put("BRRY", doubleArr);
row.put("AREC", record);
/* JSON */
MapValue json = new MapValue(true,5);


json.put("id", i);
json.put("name", "name_" + i);
json.put("age", i + 10);
row.put("AJSON", json);
rows.add(row);
}
return rows;
}
}

Compile and run the application


Compiling the Application
To compile the CreateLoadComplexTable.java application, first change the directory to
the parent of the examples directory. Then, assuming you have installed the Oracle
NoSQL SDK for Java under that directory, type the command:

javac -classpath oracle-nosql-java-sdk/lib/nosqldriver.jar:examples
      examples/nosql/cloud/table/CreateLoadComplexTable.java

Running the Application


After successful compilation, the application can be executed by typing the command:

java -classpath oracle-nosql-java-sdk/lib/nosqldriver.jar:examples
     nosql.cloud.table.CreateLoadComplexTable
     -tenant <ocid-of-your-tenancy> -user <your-user-ocid> -fp <your-fingerprint>
     -pem <path-to-your-oci-private-key-file>
     -compartment <compartment-ocid-for-the-table> -table tableName1 [-n 11] [-delete]

In the example above, all of the arguments related to your user credentials are
required (-tenant, -user, -fp, and -pem), as are the -compartment argument and the
-table argument, which specifies the name of the table to create and load with
data. The remaining arguments (-n and -delete) are optional.

If the -n argument is specified, the value specified represents the number of new rows
to generate and write to the table. If the argument is not specified, then 10 rows will be
written to the table by default.
If the -delete argument is specified, then all existing rows written to the table by
previous executions of the application will first be deleted from the table before adding
any new rows.
After the application completes executing, you can verify that the table exists and is
populated with data by logging into the Oracle Cloud Console, navigating to the tables
section of the Oracle NoSQL Database service, and querying the table having the
name you specified for the -table argument.
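
As an alternative to the console, you could also query the new table from the OCI CLI
(a sketch; it assumes the OCI CLI is installed and configured with your credentials):

oci nosql query
--compartment-id <compartment-ocid-for-the-table>
--statement "SELECT * FROM tableName1"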

Install Oracle NoSQL Database Analytics Integrator


Steps to install Oracle NoSQL Database Analytics Integrator.


Prerequisites
To use the Oracle NoSQL Database Analytics Integrator, you must complete the
following:
• Install Java 11 or higher.
• Sign up for an account on the Oracle Cloud Infrastructure. See Oracle Cloud
Infrastructure - Signup for more details.
• Create a Compute Instance from which the Oracle NoSQL Database Analytics Integrator
can be installed and executed. See Compute Instance for more information.
• Create one or more tables in the Oracle NoSQL Database Cloud Service and populate
those tables with data. See Create and populate a NoSQL table for more details.
• Create a bucket in OCI Object Storage. See Create a bucket in object Storage for more
details.
• Create a database in the Oracle Autonomous Data Warehouse (ADW). See Create a
database in the Autonomous Data Warehouse for more details.
• Download and install the client credentials (wallet) needed to establish a secure
connection to the ADW database. See Install credentials needed for a secure database
connection for more details.
• If you wish to employ user-to-service based authentication instead of service-to-service
based authentication (via the OCI Resource Principal), then generate an authorization
token to facilitate authentication of the ADW database with Object Storage. See Generate
an authorization token for Object Storage for more details.
• Enable/store the credential the ADW database should use to access the objects in Object
Storage - that is, either enable the OCI Resource Principal Credential or Store/Enable the
User's Object Storage AUTH_TOKEN in the ADW Database. See Enable the OCI
Resource Principal Credential or Store/Enable the User's Object Storage AUTH_TOKEN
in the ADW Database for more details.
• Create a Dynamic group for the Compute Instance and (optionally) the ADW database.
See Create a Dynamic Group for the Compute Instance and optionally the ADW
Database for more details.
• Create a Policy with Appropriate Permissions for the Dynamic Group. See Create a
Policy with appropriate permissions for the dynamic group for more details.
After you have satisfied all of the prerequisites for using the Oracle NoSQL Database
Analytics Integrator, you can install and configure the utility. You can then execute it
to copy the contents of your tables in the NoSQL Cloud Service to the Autonomous Data
Warehouse so that you can analyze the data using Oracle Analytics.

Installation
You can download the Oracle NoSQL Database Analytics Integrator from the Oracle
Technology Network. You can install it in the desired compute environment, which can be
an Oracle Cloud Compute Instance or your own local environment outside of the Oracle
Cloud. The utility's installation package is provided as either a compressed tar file or
a zip file; for example, nosqlanalytics-<version>.tar.gz or
nosqlanalytics-<version>.zip. If you decide to install the utility on an Oracle Cloud
Compute Instance, then after downloading the desired installation package, you should
remote-copy that package to the compute instance.


For example, suppose you download the zip file for version 1.0.1 to the ~/Downloads
directory of your local environment, then you would do the following:

scp ~/Downloads/nosqlanalytics-1.0.1.zip opc@<public-ip-address>:/home/opc
ssh opc@<public-ip-address>
unzip nosqlanalytics-1.0.1.zip

This installs the utility under the home directory for the user named opc on the
compute instance; that is, /home/opc/nosqlanalytics-1.0.1.

Note:
If you install the utility on an Oracle Cloud Compute instance, then the utility
can be executed using either your own security credentials or an Oracle
Cloud Instance Principal. But if you install the utility outside of the Oracle
Cloud Infrastructure for testing purposes, then you must use your own Oracle
Cloud security credentials to run the utility. You should execute the utility
from your local environment only when the NoSQL tables that you want to
copy are small in size.

Running the Oracle NoSQL Database Analytics Integrator


Steps to run the Oracle NoSQL Database Analytics Integrator.

Create a configuration file for the integrator


Before you can execute the Oracle NoSQL Database Analytics Integrator, you must
first create a configuration file. This configuration file is used when invoking the
utility and should contain entries in JSON format, as shown in the examples below.
The following are two sample configuration files; not all of the parameters used
below are required. The tables below explain every parameter used in the examples
and indicate whether each is optional or required.
Example 1: You execute the utility from an Oracle Cloud Compute Instance and you
wish to authenticate using an Instance Principal.

{
"nosqlstore": {
"type" : "nosqldb_cloud",
"endpoint" : "us-ashburn-1",
"useInstancePrincipal" : true,
"compartment" : <ocid.of.compartment.containing.nosql.tables>,
"table" : <tableName1,tableName2,tableName3>,
"readUnitsPercent" : "90,90,90",
"requestTimeoutMs" : "5000"
},
"objectstore" : {
"type" : "object_storage_oci",
"endpoint" : "us-ashburn-1",
"useInstancePrincipal" : true,
"compartment" : <ocid.of.compartment.containing.bucket>,


"bucket" : <bucket-name-objectstorage>,
"compression" : "snappy"
},
"database": {
"type" : "database_cloud",
"endpoint" : "us-ashburn-1",
"credentials" : "/home/opc/.oci/config",
"credentialsProfile" : <profile-for-adw-auth>,
"databaseName" : <database-name>,
"databaseUser" : "ADMIN",
"databaseWallet"” : <path-where-wallet-unzipped>

}
}

Example 2: You prefer to authenticate using your own user credentials, or you are executing
from outside of the Oracle Cloud and thus Instance Principal authentication is not available.

{
"nosqlstore": {
"type" : "nosqldb_cloud",
"endpoint" : "us-ashburn-1",
"credentials" : "/home/opc/.oci/config",
"credentialsProfile" : <nosqldb-user-credentials>,
"table" : <tableName1,tableName2,tableName3>,
"readUnitsPercent" : "90,90,90",
"requestTimeoutMs" : "5000"
},
"objectstore" : {
"type" : "object_storage_oci",
"endpoint" : "us-ashburn-1",
"credentials" : "/home/opc/.oci/config",
"credentialsProfile" : <objectstorage-user-credentials>,
"bucket" : <bucket-name-objectstorage>,
"compression" : "snappy"
},
"database": {
"type" : "database_cloud",
"endpoint" : "us-ashburn-1",
"credentials" : "/home/opc/.oci/config",
"credentialsProfile" : <adw-user-credentials>,
"databaseName" : <database-name>,
"databaseUser" : "ADMIN",
"databaseWallet" : <path-where-wallet-unzipped>
},
"abortOnError" : false
}

The configuration is divided into three sections (nosqlstore, objectstore, and database)
whose entries specify how the utility interacts with each respective cloud service: the
NoSQL Cloud Service, Oracle Object Storage, and the Oracle Autonomous Data Warehouse.
Some parameters are common to all three sections.


Table 1-5 Common Parameters for all sections

Parameter name    Details of the parameter

type    Currently, this parameter can take one of three values: nosqldb_cloud
    (for the nosqlstore section), object_storage_oci (for the objectstore
    section), and database_cloud (for the database section).
endpoint    The value of this entry must be set to the region in which the
    associated resource is located. The value specified for this entry can be
    either the region's API endpoint or the region identifier for the resource.
    For example, if each resource is located in the US East (Ashburn) region,
    then the endpoint entry in each section can be specified using either the
    region's identifier ("us-ashburn-1") or the region's API endpoint for the
    desired service.


Table 1-6 Parameters in the configuration file

Parameter name Specified Section Details of the section


useInstancePrincipal    nosqlstore (Optional), objectstore (Optional)
    The useInstancePrincipal entry can be specified as the boolean value true if
    the following conditions are satisfied:
    • The utility will be executed from an Oracle Cloud Compute Instance.
    • The section being configured is not the database section.
    • The compute instance is authorized, as an Instance Principal, to perform
      actions on the resource referenced in the section being configured.
    • The credentials entry is not specified.
    If true is specified for the useInstancePrincipal entry and the credentials
    entry is also specified, then the credentials entry takes precedence, and the
    user credentials referenced in that entry's value will be used to interact
    with the associated resource.
    Note: User credentials must be specified in the database section because the
    Autonomous Database hosted in ADW requires it.

compartment    nosqlstore (Optional), objectstore (Optional)
    • If true is specified for the useInstancePrincipal entry, then the OCID of
      the compartment containing that resource must also be specified.
    • If either false is specified for the useInstancePrincipal entry or the
      credentials entry is specified, then the compartment entry is optional,
      although it must be specified in the file referenced by the credentials
      entry.



credentials    nosqlstore (Optional), objectstore (Optional), database (Required)
    The credentials entry is required in the database section under all
    circumstances. It is required in the nosqlstore and objectstore sections in
    one or more of the following circumstances:
    • Either the utility will be executed from outside the Oracle Cloud, or it
      will be executed from an Oracle Cloud Compute Instance that is not an
      Instance Principal.
    • The useInstancePrincipal entry is not specified or is set to false.
    The value specified for this entry must reference a file on the local file
    system that specifies user credentials that can be used to securely interact
    with the associated resource.
credentialsProfile    nosqlstore (Optional), objectstore (Optional), database (Optional)
    The credentialsProfile entry is optional in each section and, even if
    specified, applies only when a corresponding credentials entry is also
    specified.
table    nosqlstore (Required)
    The table entry is required and must be specified in the nosqlstore section.
    The value of this entry is a string consisting of a comma-separated list of
    names, where each name references a table in the NoSQL Database Cloud Service
    whose contents should be retrieved and copied to the Autonomous Data
    Warehouse.



readUnitsPercent    nosqlstore (Optional)
    The readUnitsPercent entry is optional and is applicable only in the
    nosqlstore section. The value of this entry is a string consisting of a
    comma-separated list of integers between 1 and 100, representing the
    percentage of read units that can be consumed when retrieving data from the
    corresponding table.
    This entry allows you to specify a different read unit percentage for each of
    the tables referenced in the table entry: the first percentage in the list
    corresponds to the first table in the list of tables, the second percentage
    corresponds to the second table, and so on. The number of percentages in this
    list need not equal the number of tables. A default value of 90 percent is
    assigned to any table in the list of tables that does not have a
    corresponding percentage in this list.
    For example, suppose four table names are specified in the table entry, but
    the readUnitsPercent entry is set to the value "50,80". In this case, data
    from the first table is retrieved using 50 percent of the available read
    units, whereas 80 percent of the read units are used when retrieving data
    from the second table. For the remaining two tables, 90 percent of the read
    units (the default) is used when retrieving the data from each of those
    tables.



requestTimeoutMs    nosqlstore (Optional)
    The requestTimeoutMs entry is optional and is applicable only in the
    nosqlstore section. The value of this entry is a string consisting of a
    comma-separated list of positive integers, where each integer represents the
    number of milliseconds allowed for each data retrieval request to complete
    for the corresponding table.
    This entry allows you to specify a different timeout value for each of the
    tables referenced in the table entry. If this entry is not specified, or if
    it specifies a timeout for only a subset of the tables, then the default
    value of 5000 is assigned to the remaining tables.
bucket    objectstore (Required)
    The bucket entry is required and must be specified in the objectstore
    section. The value of this entry is a string representing the name of the OCI
    Object Storage bucket into which the utility copies the data retrieved from
    the NoSQL tables.



compression    objectstore (Optional)
    The compression entry is optional and is applicable only in the objectstore
    section. The value specified for this entry is a string indicating how the
    data retrieved from the table(s) specified in the nosqlstore section is
    compressed when it is copied to Object Storage. The value specified for this
    entry must be one of the following:
    • snappy – for snappy compression
    • gzip – for gzip compression
    • none – do not compress the table data copied to Object Storage
    Note: If the compression entry is not specified, then snappy compression will
    be performed.



databaseName    database (Required)
    The databaseName entry is required and must be specified in the database
    section. This entry is a string whose value is the name of the database
    created in the Oracle Autonomous Data Warehouse Cloud Service.
databaseUser    database (Optional)
    The databaseUser entry is optional and should be specified in the database
    section. This entry is a string whose value is the name of the user account
    in the Autonomous Database specified in the databaseName entry. If this entry
    is not specified, then you are prompted on the command line to provide the
    value.
databaseWallet    database (Required)
    The databaseWallet entry is required and must be specified in the database
    section. This entry is a string whose value is the filesystem path to the
    directory containing the contents of the Oracle Wallet downloaded from the
    Autonomous Database user account specified in the databaseUser entry in the
    configuration file.
abortOnError    (top level, Optional)
    Specifies the action to be taken when an error is encountered. The default
    value is true.

Note:
Each entry in the configuration file can be overridden on the command line by
setting a system property with a name of the form section.entry; for example,
-Dnosqlstore.table=tableName1,tableName3. If an entry is not located within a
section, then the name to use for such a property is simply the name of the entry
itself; for example, -DabortOnError=false. This feature may be useful when testing
or when writing scripts that run the utility at regular intervals.
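
For instance, to override the table list and the abortOnError setting for a single run,
you could start the utility like this (a sketch based on the invocation shown in the
next section; the jar and config paths are the same ones used there):

java -Dnosqlstore.table=tableName1,tableName3 -DabortOnError=false
     -jar ./lib/nosqlanalytics-1.0.1.jar
     -config ~/.oci/oci-nosqlanalytics-config.json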

Specifying config information in the credentials file:


Oracle Cloud Infrastructure requires basic configuration information, such as user
credentials and the tenancy OCID, which can be specified in a config file. The default
location for this config file is ~/.oci/config. You can specify multiple sets of user
credentials (profiles) in this config file.


A sample credentials file is shown below.

[DEFAULT]
user=<ocid.of.default.user>
fingerprint=<fingerprint.of.default.user>
key_file=<path.to.default.user.oci.api.private.key.file.pem>
tenancy=<ocid.of.default.user.tenancy>
region=us-ashburn-1
compartment=<ocid.of.default.compartment>

[nosqldb-user-credentials]
user=<ocid.of.nosqldb.user>
fingerprint=<fingerprint.of.nosqldb.user>
key_file=<path.to.nosqldb.user.oci.api.private.key.file.pem>
tenancy=<ocid.of.nosqldb.user.tenancy>
region=us-ashburn-1
compartment=<ocid.of.nosqldb.compartment>

[objectstorage-user-credentials]
user=<ocid.of.objectstorage.user>
fingerprint=<fingerprint.of.objectstorage.user>
key_file=<path.to.objectstorage.user.oci.api.private.key.file.pem>
tenancy=<ocid.of.objectstorage.user.tenancy>
region=us-ashburn-1
compartment=<ocid.of.objectstorage.compartment>

[adw-user-credentials]
user=<ocid.of.adw.user>
fingerprint=<fingerprint.of.adw.user>
key_file=<path.to.adw.user.oci.api.private.key.file.pem>
tenancy=<ocid.of.adw.user.tenancy>
region=us-ashburn-1
compartment=<ocid.of.adw.compartment>
dbmsOcid=<ocid.of.autonomous.database.in.adw>
dbmsCredentialName=<OCI$RESOURCE_PRINCIPAL or
NOSQLADWDB_OBJ_STORE_CREDENTIAL>

Note:
In the above credentials file, there are three separate profiles:
nosqldb-user-credentials, objectstorage-user-credentials, and
adw-user-credentials. This is not mandatory; a config file can contain only the
DEFAULT profile. However, keeping separate profiles is better practice than
combining all parameters in the DEFAULT profile.

Table 1-7 Parameters in credentials file

Parameter Name    Details of the parameter

user    The OCID of the user.
fingerprint    A short sequence of bytes used to identify the user's longer public
    key.
key_file    The path to the file that contains the user's private key.
tenancy    The OCID of the tenancy.
region    The identifier or endpoint of the region.
compartment    The name or OCID of the user's compartment.
dbmsOcid    The OCID of the Autonomous Database.
dbmsCredentialName    The name of the credential the ADW database will use to
    authenticate with Object Storage: either the name OCI$RESOURCE_PRINCIPAL (if
    you choose to employ Resource Principal authentication), or the name of the
    AUTH_TOKEN credential that is created when the
    DBMS_CLOUD.CREATE_CREDENTIAL procedure is executed by either the user or the
    system administrator (for example, NOSQLADWDB_OBJ_STORE_CREDENTIAL).

Running the tool


After all the requirements for using the necessary Oracle Cloud services (NoSQL Database,
Object Storage, and Autonomous Data Warehouse) have been satisfied and a valid
configuration file has been created, the Oracle NoSQL Database Analytics Integrator can
be executed by typing a command on the command line.
• Navigate to the directory nosqlanalytics under the installation directory
(/home/opc/nosqlanalytics-<version>).

cd /home/opc/nosqlanalytics-1.0.1/nosqlanalytics

• Invoke the utility using the following command. The configuration file
oci-nosqlanalytics-config.json is present under the .oci directory inside the home
directory.

java -Djava.util.logging.config.file=./src/main/resources/logging/java-util-logging.properties
     -Dlog4j.configurationFile=file:./src/main/resources/logging/log4j2-analytics.properties
     -jar ./lib/nosqlanalytics-1.0.1.jar
     -config ~/.oci/oci-nosqlanalytics-config.json


Note:
The system properties that configure the loggers used during execution are
optional. If those system properties are not specified, then the utility will
produce no logging output.

Logging
The Oracle NoSQL Database Analytics Integrator executes software from multiple
third-party libraries, where each library defines its own set of loggers with different
namespaces. For convenience, the Oracle NoSQL Database Analytics Integrator
provides two logging configuration files as part of the release; one to configure logging
mechanisms based on java.util.logging, and one for loggers based on Log4j2.

Note:
By default, the logger configuration files provided with the utility are designed
to produce minimal output as the utility executes. But if you wish to see
verbose output from the various components that are employed by the utility,
then you should increase the logging levels of the specific loggers whose
behavior you wish to analyze.

Verifying Data in Oracle Analytics tool


Steps to verify that the data has been copied from your NoSQL table to the Autonomous
Database in ADW.

Verify the data in the Oracle Autonomous Database


After executing the Oracle NoSQL Database Analytics Integrator to copy the data from
your NoSQL table to the Autonomous Database in ADW, you can verify that the
NoSQL table data has been copied correctly by executing the following steps from the
Oracle Cloud Console:
• Select Oracle Database from the menu on the left side of the display.
• Select Autonomous Data Warehouse.


• Select the Compartment in which the database is located.


• Click on the link with the display name you entered when creating the database.

• Click Service Console.


• Select Development from the menu on the left side of the display.

• Select Database Actions and log in to the database; for example,


– Username: ADMIN
– Password: <password-set-during-database-creation>


• Select the item SQL.

From the Navigator tab of the window on the left of the display, first verify that the
table you created appears in the list of tables contained in the database. If it does,
then from the Worksheet window execute the SQL query:

SELECT * FROM <table-name>;

Verify that the expected contents of the table are displayed in the window in the center
bottom of the display, under the window Query Result.
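
To cross-check how many rows were copied, you can also run a count query from the same
worksheet (a simple sketch; substitute your table name):

SELECT COUNT(*) FROM <table-name>;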

Verify the data in Oracle Analytics


After verifying that the Oracle NoSQL Database Analytics Integrator successfully copied the
data from the example table in the NoSQL Cloud Service to a corresponding table in the
database you created in Autonomous Data Warehouse, you can then connect Oracle
Analytics to that database and verify that Oracle Analytics can access and analyze the data
in that table. In the section below, Oracle Analytics Desktop is used to demonstrate how this
can be done. But if you have access to an Oracle Analytics Cloud instance, you should be
able to verify the data using similar functions.


Download Oracle Analytics Desktop:


Oracle Analytics Desktop provides standalone data exploration and visualization in a
per-user desktop download. Oracle Analytics Desktop is the tool for quick exploration
of sample data from multiple sources or for analysis and investigation of your own
local datasets.
You can download Oracle Analytics Desktop from the Oracle website.
After executing the Oracle Analytics application in your environment, you can invoke
the graphical user interface to create a connection to the database in Autonomous
Data Warehouse and access the database in Autonomous Data Warehouse.

Create a Connection to the Database in Autonomous Data Warehouse:


• Open the GUI for Oracle Analytics Desktop and click Create.
• Click Connection and then choose Oracle Autonomous Data Warehouse.


• In the pop-up window that appears, enter the following information:


– Connection Name: <name of the database>
– Client Credentials: <path-to-wallet.zip>
– Username: ADMIN
– Password: <password-set-during-database-creation>


• Click on Save.

Note:
For Client Credentials, when you enter the path to the wallet zip file, the tool
will extract the file cwallet.sso and replace what you entered with that file’s
name. Finally, once you enter the Connection Name, the tool will
automatically enter a value for Service Name that is based on what was
entered for Connection Name.

Access the Database in Autonomous Data Warehouse


After creating the connection to the database in ADW, you can then connect to that
database from the Oracle Analytics GUI:
• Click on Data.
• Click on the icon with the connection name.

• Click ADMIN.
• Scroll down the list of tables and select the table you copied from NoSQL
Database Cloud Service.


• Click Add to Data Set. Verify the data displayed is what you expect.
Your data is now ready to be analyzed using all of the facilities provided by Oracle Analytics.

Using console to create tables


• Using Console to Create Tables in Oracle NoSQL Database Cloud Service
• Inserting Data Into Tables

Using Console to Create Tables in Oracle NoSQL Database Cloud Service


Learn how to create and manage Oracle NoSQL Database Cloud Service tables and indexes
from the console.
This article has the following topics:

Creating a Compartment
When you sign up for Oracle Cloud Infrastructure, Oracle creates your tenancy with a root
compartment that holds all your cloud resources. You then create additional compartments
within the tenancy (root compartment) and corresponding policies to control access to the
resources in each compartment. Before you create an Oracle NoSQL Database Cloud
Service table, Oracle recommends that you set up the compartment where you want the table
to belong.
You create compartments in Oracle Cloud Infrastructure Identity and Access Management
(IAM). See Setting Up Your Tenancy and Managing Compartments in Oracle Cloud
Infrastructure Documentation.

Creating Tables
You can create a new Oracle NoSQL Database Cloud Service table from the NoSQL console.
The NoSQL console lets you create Oracle NoSQL Database Cloud Service tables in two
modes:


1. Simple Input Mode: You can use this mode to create the NoSQL Database Cloud
Service table declaratively, that is, without writing a DDL statement.
2. Advanced DDL Input Mode: You can use this mode to create the NoSQL
Database Cloud Service table using a DDL statement.

Creating Table: Simple Input Mode


Learn how to create a table from the NoSQL console by using the Simple Input table
creation mode.
To create a table:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. Click Create Table.
3. In the Create Table dialog, select Simple input for Table Creation Mode.
4. Under Reserved Capacity, you have the following options.
• Always Free Configuration:
Enable the toggle button to create an Always Free NoSQL table. Disabling the
toggle button creates a regular NoSQL table. You can create up to three Always
Free NoSQL tables in a tenancy; if the tenancy already has three Always Free
NoSQL tables, the toggle button to create an Always Free NoSQL table is disabled.
If you enable the toggle button to create an Always Free NoSQL table, the Read
capacity, Write capacity, and Disk storage fields are assigned default values and
the Capacity mode becomes Provisioned Capacity. These values cannot be changed.


If you want to create a regular table, then disable the toggle button. You will be able
to enter the appropriate capacity values for the table.
– Read Capacity (ReadUnits): Enter the number of read units. See Estimating
Capacity to learn about read units.
– Write Capacity (WriteUnits): Enter the number of write units. See Estimating
Capacity to learn about write units.
– Disk Storage (GB): Specify the disk space in gigabytes (GB) to be used by the
table. See Estimating Capacity to learn about storage capacity.

• Capacity mode
You can specify the option for Capacity mode as Provisioned Capacity or On
Demand Capacity. Provisioned Capacity and On Demand Capacity modes are
mutually exclusive options. If you enable On Demand Capacity for a table, you don't
need to specify the read/write capacity of the table. You are charged for the actual
read and write units usage, not the provisioned usage.
Enabling On Demand Capacity for a table is a good option if any of the following are
true:
a. You create new tables with unknown workloads.
b. You have unpredictable application traffic.
c. You prefer the ease of paying for only what you use.
Limitations of enabling On Demand Capacity for a table:
a. On Demand Capacity limits the capacity of the table to 5,000 write units and
10,000 read units.


b. The number of tables with On Demand Capacity per tenant is limited to 3.


c. You pay more per unit for On Demand Capacity table units than
provisioned table units.

Selecting On Demand Capacity disables the Always Free Configuration control. The
Read Capacity and Write Capacity input boxes become read-only and show
the text On Demand Capacity, and On Demand Capacity tables show On
Demand Capacity in their read and write capacity columns.


5. In the Name field, enter a table name that is unique within your tenancy.
Table names must conform to Oracle NoSQL Database Cloud Service naming
conventions. See Oracle NoSQL Database Cloud Service Limits .
6. In the Primary Key Columns section, enter primary key details:
• Column Name: Enter a column name for the primary key in your table. See Oracle
NoSQL Database Cloud Service Limits to learn about column naming requirements.
• Type: Select the data type for your primary key column.
• Precision: This is applicable for TIMESTAMP typed columns only. Timestamp values
have a precision in fractional seconds that ranges from 0 to 9. For example, a precision
of 0 means that no fractional seconds are stored, 3 means that the timestamp stores
milliseconds, and 9 means a precision of nanoseconds. 0 is the minimum precision,
and 9 is the maximum.
• Set as Shard Key: Click this option to set this primary key column as a shard key.
The shard key distributes data across the Oracle NoSQL Database Cloud Service
cluster for increased efficiency, and positions records that share the shard key
locally for easy reference and access. Records that share the shard key are stored in
the same physical location and can be accessed atomically and efficiently. See the
DDL sketch after this list.
• + Another Primary Key Column: Click this button to add more columns while
creating a composite (multi-column) primary key.
• Use the up and down arrows to change the sequence of columns while creating a
composite primary key.


7. In the Columns section, enter non-primary column details:

• Column Name: Enter the column name. Ensure that you conform to column
naming requirements described in Oracle NoSQL Database Cloud Service
Limits .
• Type: Select the data type for your column.
• Precision: This is applicable for TIMESTAMP typed columns only. Timestamp
values have a precision in fractional seconds that ranges from 0 to 9. For
example, a precision of 0 means that no fractional seconds are stored, 3
means that the timestamp stores milliseconds, and 9 means a precision of
nanoseconds. 0 is the minimum precision, and 9 is the maximum.
• Size: This is applicable for BINARY typed columns only. Specify the size in
bytes to make the binary a fixed binary.
• Default Value: (optional) Supply a default value for the column.

Note:
Default values cannot be specified for binary and JSON data type
columns.

• Value is Not Null: Click this option to specify that a column must always have
a value.
• + Another Column: Click this button to add more columns.
• Click the delete icon to delete a column.
8. (Optional) To specify advanced options, click Show Advanced Options and enter
advanced details:
• Table Time to Live (Days): (optional) Specify the expiration duration (number of
days) for the rows in the table. After the number of days, the rows expire


automatically, and are no longer available. The default value is zero, indicating no
expiration time.

Note:
Updating Table Time to Live (TTL) will not change the TTL value of any
existing data in the table. The new TTL value will only apply to those rows
that are added to the table after this value is modified and to the rows for
which no overriding row-specific value has been supplied.

In the Tags section, enter:


• Tag Namespace: Select a tag namespace from the select list. A tag namespace is
like a container for your tag keys. It is case insensitive and must be unique across the
tenancy.
• Tag Key: Enter the name to use to refer to the tag. A tag key is case insensitive and
must be unique within a namespace.
• Value: Enter the value to give your tag.
• + Additional Tag: Click to add more tags.

9. Click Create table.


The table is created and listed in the NoSQL console.
To view help for the current page, click the help link at the top of the page.

Creating Table: Advanced DDL Input Mode


Learn how to create a table from the NoSQL console by using the Advanced DDL Input table
creation mode.
To create a table:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. Click Create Table.


3. In the Create Table window, select Advanced DDL Input for Table Creation
Mode.
4. Under Reserved Capacity, you have the following options.
• Always Free Configuration:
Enable the toggle button to create an Always Free NoSQL table. Disabling the
toggle button creates a regular NoSQL table. You can create up to three
Always Free NoSQL tables in the tenancy. If you already have three Always Free
NoSQL tables in the tenancy, the toggle button to create an Always Free NoSQL
table is disabled.
If you enable the toggle button to create an Always Free NoSQL table, the
Read capacity, Write capacity, and Disk storage fields are assigned default
values. The Capacity mode becomes Provisioned Capacity. These values
cannot be changed.

If you want to create a regular table, then disable the toggle button. You will be
able to enter the appropriate capacity values for the table.
– Read Capacity (ReadUnits): Enter the number of read units. See
Estimating Capacity to learn about read units.
– Write Capacity (WriteUnits): Enter the number of write units. See
Estimating Capacity to learn about write units.
– Disk Storage (GB): Specify the disk space in gigabytes (GB) to be used
by the table. See Estimating Capacity to learn about storage capacity.


• Capacity mode
You can specify the option for Capacity mode as Provisioned Capacity or On
Demand Capacity. Provisioned Capacity and On Demand Capacity modes are
mutually exclusive options. If you enable On Demand Capacity for a table, you don't
need to specify the read/write capacity of the table. You are charged for the actual
read and write units usage, not the provisioned usage.
Enabling On Demand Capacity for a table is a good option if any of the following are
true:
a. You create new tables with unknown workloads.
b. You have unpredictable application traffic.
c. You prefer the ease of paying for only what you use.
Limitations of enabling On Demand Capacity for a table:
a. On Demand Capacity limits the capacity of the table to 5,000 write units and 10,000
read units.
b. The number of tables with On Demand Capacity per tenant is limited to 3.
c. You pay more per unit for On Demand Capacity table units than provisioned table
units.


Selecting On Demand Capacity disables the Always Free Configuration control. The
Read Capacity and Write Capacity input boxes become read-only and show
the text On Demand Capacity, and On Demand Capacity tables show On
Demand Capacity in their read and write capacity columns.


5. In the DDL input section, enter the CREATE TABLE DDL statement for Query. If the
statement is incomplete or faulty, an error is displayed. See Debugging SQL statement errors
in the OCI console to learn about possible errors in the OCI console and how to fix them.
See the Developers Guide for examples of CREATE TABLE statements.
6. (Optional) To specify advanced options, click Show Advanced Options and enter
advanced details:
• Tag Namespace: Select a tag namespace from the select list. A tag namespace is
like a container for your tag keys. It is case insensitive and must be unique across the
tenancy.
• Tag Key: Enter the name to use to refer to the tag. A tag key is case insensitive and
must be unique within a namespace.
• Value: Enter the value to give your tag.
• + Additional Tag: Click to add more tags.

7. Click Create Table.


The table is created and listed in the NoSQL console.
To view help for the current page, click the help link at the top of the page.

Creating a child table


With Oracle NoSQL Database, you can create tables in a hierarchical structure (as parent-
child tables).

Table Hierarchies
You can use the create table statement to create a table as a child table of another table,
which then becomes the parent of the new table. This is done by using a composite name (a
name_path) for the child table. A composite name consists of a number N (N > 1) of
identifiers separated by dots. The last identifier is the local name of the child table and the
first N-1 identifiers are the name of the parent.

      A
     / \
   A.B  A.G
   /
 A.B.C
  /
A.B.C.D

The top-most parent table is A. The child table B gets the composite name A.B. The
next level of child table C gets the composite name A.B.C and so on.
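For example, the hierarchy above could be created with DDL statements along these lines
(a sketch; the key column names are illustrative, and a child table's primary key implicitly
includes the primary key columns of its parent):

CREATE TABLE A (ida INTEGER, PRIMARY KEY(ida))
CREATE TABLE A.B (idb INTEGER, PRIMARY KEY(idb))
CREATE TABLE A.B.C (idc INTEGER, PRIMARY KEY(idc))
CREATE TABLE A.G (idg INTEGER, PRIMARY KEY(idg))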

Properties of child tables:


• You cannot specify the Read capacity, Write capacity, or Disk storage limits while
creating a child table. The child table shares the corresponding values from the
parent table.
• A child table is counted against a tenancy's total number of tables.
• A parent table and its child tables are always in the same compartment.
• Metric information is collected and aggregated at the parent level. No metrics are
visible at the child tables level.
• A child table has its own tags independent of the parent table.
• A child table also inherits the capacity pricing model of the parent table. For
example, if the parent table is configured with On Demand Capacity, the child table
uses the same capacity pricing model.

Transactions in parent-child tables


A parent table and a child table share the same shard key. Using child tables, you can
achieve ACID transactions across multiple objects in two simple steps:
• Declare a table as a child of another table.
• Use the writeMultiple API to add operations for both the parent and child tables.

Without child tables, achieving ACID transactions across multiple objects is a tedious
procedure. You would have to do the following:
• Find the shard key values for all the objects that you want to include in a
transaction.
• Make sure that the shard keys for all the objects are equal.
• Use the writeMultiple API to add every object to a collection.

With child tables, ACID transactions across multiple objects are straightforward, as the
sketch below illustrates.
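The following is a minimal Java sketch of such a transaction. The table and column names
are hypothetical, and it assumes an SDK version that accepts operations on a parent table
and its child tables in a single WriteMultipleRequest:

import oracle.nosql.driver.ops.PutRequest;
import oracle.nosql.driver.ops.WriteMultipleRequest;
import oracle.nosql.driver.ops.WriteMultipleResult;
import oracle.nosql.driver.values.MapValue;

/* Both rows share the shard key value (id = 1), so they can be
 * written atomically in one operation. */
MapValue user = new MapValue().put("id", 1).put("name", "Jane");
MapValue details = new MapValue().put("id", 1)
    .put("address", "10 Main St").put("salary", 90000);

WriteMultipleRequest wmReq = new WriteMultipleRequest();
/* true = abort the entire operation if this put fails */
wmReq.add(new PutRequest().setTableName("users").setValue(user), true);
wmReq.add(new PutRequest().setTableName("users.userDetails")
    .setValue(details), true);

/* Assumes an open NoSQLHandle named handle */
WriteMultipleResult wmRes = handle.writeMultiple(wmReq);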

Authorization in a child table:


If you don't own a table and you want to read from, delete from, or insert into that table,
two conditions must be met:
• You have the specific privilege (READ/INSERT/DELETE) for the child table.
• You have the same privileges, or at least the read privilege, for the parent table of
the specific child table in the hierarchy.
See IAM policies for authorization for more details.


For example, if you want to insert data into the child table myTable.child1, which you don't
own, then you must have the INSERT privilege on the child table and READ and/or INSERT
privileges on myTable. Granting privileges to child tables is independent of granting privileges
to the parent table. That means you can give specific privileges to the child table without
giving the same privilege to its parent table. Any parent/child join queries require the relevant
privileges on all tables used in the query. See Using Left Outer joins with parent-child tables
for more details.
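As a rough illustration only, IAM policy statements granting such access might look like the
following; the group and compartment names are hypothetical, and the exact resource-types
and verbs should be taken from the IAM policies for authorization topic:

Allow group AppDevelopers to use nosql-rows in compartment AppCompartment
Allow group AppDevelopers to read nosql-tables in compartment AppCompartment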

Creating a child table


• Click on the parent table to view its details. The list of child tables already present for the
parent is displayed.
• In the left navigation menu, under Resources, click Child tables.

• The list of child tables for the parent table is displayed. To create a child table, click
Create Child Table.

• You can choose the Simple input method or the Advanced DDL input method to create
the child table.


• Specify a name for the child table. This is automatically prefixed with the name of
the parent table followed by a dot. Specify the list of columns and primary key
columns.

• The Set as shard key checkbox is not shown while creating a child table, as the
child tables inherit their shard key from their top-level parent table.

Note:
The Read Capacity, Write Capacity, and Disk Storage fields are not specified
because a child table inherits these limits from the top-level table. The limits
set for the top-level table are automatically applied to the child table.

Viewing the details of a child table


You can view the details of a child table after it is created.


Creating Indexes
Learn how to create indexes in Oracle NoSQL Database Cloud Service tables from the
NoSQL console.
To create indexes:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. In the Table Details page, select the Indexes tab under Resources.
You will see a list of all the indexes added to the table.
4. Click Add Index.
5. In the Create Index window, enter a name for the index that is unique within the table.
See Oracle NoSQL Database Cloud Service Limits to learn about the naming restrictions
for indexes.
6. In the Index Columns section, enter index details:
• Index Column Name: Select the column that you would like included in the index.
• + Another Index Column: Click this button to include another column in the index.
• Use the up and down arrows to change the sequence of the columns in the index
being created.
• Click the delete icon next to any column to remove it from the index being created.
7. Click Create Index.
The index is created.
To view help for the current page, click the help link at the top of the page.


Inserting Data Into Tables


Learn how to insert data into Oracle NoSQL Database Cloud Service tables from the
NoSQL console.
The NoSQL console lets you insert new rows into the Oracle NoSQL Database Cloud
Service tables in two modes:
1. Simple Input Mode: You can use this mode to provide the values for the new
rows declaratively.
2. Advanced JSON Input Mode: You can use this mode to provide the values for
the new rows in JSON format.
You can also bulk upload the data from a local file into the table, via the browser.

Inserting Data Into Tables: Simple Input Mode


Learn how to insert data into Oracle NoSQL Database Cloud Service tables by using
the Simple Input insertion mode.
To insert data into tables:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. Click Insert Row.
4. In the Insert Row window, select Simple Input for Entry Mode.
5. All the columns in the table are listed. Input the data for the columns of the table.
For some column types, such as Binary, you upload the data.

Note:
Entering a value is mandatory for all non-nullable columns of the table.

6. Click Insert Row.


The record is inserted into the table.
To view help for the current page, click the help link at the top of the page.

Inserting Data Into Tables: Advanced JSON Input Mode


Learn how to insert data into Oracle NoSQL Database Cloud Service tables by using
the Advanced JSON input mode.
To insert data into tables:


1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. Click Insert Row.
4. In the Insert Row window, select Advanced JSON Input for Entry Mode.
5. Paste or upload the Record Definition in JSON format.
6. Click Insert Row.
The record is inserted into the table.
To view help for the current page, click the help link at the top of the page.

Bulk Upload of table rows


The Upload Data button in the Table details page allows bulk uploading of data from a local
file into the table, via the browser.
The Bulk upload feature is intended for loading less than a few thousand rows. This feature
is great for performing a proof of concept (POC) or doing basic testing of the service. It is a
convenient way to populate a small table. If you want to write tens of thousands of rows, then
for performance reasons you would be better off using the Oracle NoSQL Database Migrator
or writing your own program using one of the NoSQL SDKs. If, however, you want to quickly
insert a few hundred or a few thousand rows, this upload method is an expeditious approach.
The file to be uploaded must contain a series of JSON objects. The objects can be expressed
as comma-separated items of a single array or as a sequence of simple objects bounded by
curly braces, with no syntactic delimiters between them. The contents of each object must be
correctly formatted JSON and must conform to the schema of the table to which they will be
uploaded.
Example: A table is created using the following DDL statement

CREATE TABLE Simple ( id integer, val string, PRIMARY KEY ( id ) )

The following example illustrates using the array format for the file content.

[
  {
    "id": 0,
    "val": "0"
  },
  {
    "id": 1,
    "val": "2"
  },
  ...
]


The following example illustrates using simple objects for the file content.

{
  "id": 0,
  "val": "0"
}
{
  "id": 1,
  "val": "2"
}
...

• If a column value is not required by the table's schema, then the corresponding
JSON property may be left out.
• If a column value is GENERATED ALWAYS, then the corresponding JSON
property must be left out.
• If a JSON object contains properties with names that do not match any column
names, those properties are ignored.
To use the upload feature, click the Upload Data button and select the file to be
uploaded. The upload begins immediately, and progress will be shown on the page.
Upon successful completion, the total number of rows inserted will be shown. You can
interrupt the upload by clicking the Stop Uploading button. The number of rows that
were successfully committed to the database will be shown.
If an error in the input file is detected, then uploading will stop and an error message
with an approximate line number will be shown. Input errors might be caused by
incorrect JSON syntax or schema nonconformance. Errors can also occur during
requests for the service. Such errors also stop the uploading and display a message.
If the upload is stopped in the middle for any reason, you can do one of the following:
• If there are no columns with generated key values (that is, if the keys are entirely
dictated by the JSON file), then you can simply start over with the same file. The
already-written rows will be written again.
• If there are generated key values, then starting over would write new records
instead of overwriting existing records. The easiest path would be to drop the table
and create it again.
• Alternatively, you could remove all records from the table by executing the
statement DELETE FROM tablename in the Explore data form.

If the provisioned write limit is exceeded during the upload process, a transient
message indicating so will be displayed, and the uploading will be slowed down to
avoid exceeding the limit again.

Using APIs to create tables


• About Oracle NoSQL Database SDK drivers
• Obtaining a NoSQL Handle
• About Compartments
• Creating Tables and Indexes
• Adding Data


About Oracle NoSQL Database SDK drivers


Learn about Oracle NoSQL Database SDK drivers.
The Oracle NoSQL Database SDK driver contains the files that enable an application to
communicate with the on-premises Oracle NoSQL Database, the Oracle NoSQL Database
Cloud Service, or the Oracle NoSQL Database Cloud Simulator.

• Java

• Python

• Go

• Node.js

• C#

• Spring Data

Java
The Oracle NoSQL Database SDK for Java is available in Maven Central repository, details
available here. The main location of the project is in GitHub.
You can get all the required files for running the SDK with the following POM file
dependencies.

Note:
The version changes with each release.

<dependency>
<groupId>com.oracle.nosql.sdk</groupId>
<artifactId>nosqldriver</artifactId>
<version>5.2.31</version>
</dependency>

The Oracle NoSQL Database SDK for Java provides you with all the Java classes, methods,
interfaces and examples. Documentation is available as javadoc in GitHub or from Java API
Reference Guide.

Python
You can install the Python SDK through the Python Package Index with the command given
below.

pip3 install borneo


The Oracle NoSQL SDK for Python provides you with all the Python classes, methods,
interfaces and examples. Documentation is available in Python API Reference Guide.

Go
Open the Go Downloads page in a browser and click the download tab corresponding
to your operating system. Save the file to your home folder.
Install Go in your operating system.
• On Windows systems, open the MSI file you downloaded and follow the prompts
to install Go.
• On Linux systems, extract the archive you downloaded into /usr/local, creating
a Go tree in /usr/local/go. Add /usr/local/go/bin to the PATH environment
variable.
Access the online godoc for information on using the SDK and to reference Go driver
packages, types, and methods.
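The SDK itself is fetched with go get; the module path below is the SDK's public GitHub
location:

go get github.com/oracle/nosql-go-sdk/nosqldb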

Node.js
Download and install Node.js version 12.0.0 or higher from Node.js Downloads.
Ensure that Node Package Manager (npm) is installed along with Node.js. Install the
node SDK for Oracle NoSQL Database as shown below.

npm install oracle-nosqldb

Access the Node.js API Reference Guide to reference Node.js classes, events, and
global objects.

C#
You can install the SDK from NuGet Package Manager either by adding it as a
reference to your project or independently.
• Add the SDK as a Project Reference: You may add the SDK NuGet Package as a
reference to your project by using .Net CLI.

cd <your-project-directory>
dotnet add package Oracle.NoSQL.SDK

Alternatively, you may perform the same using NuGet Package Manager in Visual
Studio.
• Independent Install: You may install the SDK independently into a directory of your
choice by using nuget.exe CLI.

nuget.exe install Oracle.NoSQL.SDK -OutputDirectory <your-packages-directory>

Spring Data
The Oracle NoSQL Database SDK for Spring Data is available in the Maven Central
repository, details are available here. The main development location is the
oracle-spring-sdk project on GitHub.


You can get all the required files for running the Spring Data Framework with the following
POM file dependencies.

Note:
The version changes with each release.

<dependency>
<groupId>com.oracle.nosql.sdk</groupId>
<artifactId>spring-data-oracle-nosql</artifactId>
<version>1.4.1</version>
</dependency>

Add the additional dependency to use the Spring Data Framework:

<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
<version>2.7.0</version>
</dependency>

The Oracle NoSQL Database SDK for Spring Data provides you with all the Spring Data
classes, methods, interfaces, and examples. Documentation is available as nosql-spring-sdk
in GitHub or from SDK for Spring Data API Reference.

Obtaining a NoSQL Handle


Learn how to access tables using Oracle NoSQL Database Drivers. Start developing your
application by creating a NoSQL Handle. Use the NoSQLHandle to access the tables and
execute all operations.

• Java

• Python

• Go

• Node.js

• C#

• Spring Data


Java
To create a connection represented by a NoSQLHandle, obtain a handle using the
NoSQLHandleFactory.createNoSQLHandle method and the NoSQLHandleConfig class.
The NoSQLHandleConfig class allows an application to specify the handle
configuration. See the Java API Reference Guide to learn more.
Use the following code to obtain a NoSQL handle:

/* Configure a handle for the desired Region and AuthorizationProvider.
 * By default this SignatureProvider constructor reads authorization
 * information from ~/.oci/config and uses the default user profile and
 * private key for request signing. Additional SignatureProvider
 * constructors are available if a config file is not available or
 * desirable.
 */
AuthorizationProvider ap = new SignatureProvider();

/* Use the us-ashburn-1 region */
NoSQLHandleConfig config = new NoSQLHandleConfig(Region.US_ASHBURN_1, ap);
config.setAuthorizationProvider(ap);

/* Sets a default compartment for all requests from this handle. This
 * may be overridden in individual requests or by using a
 * compartment-name prefixed table name.
 */
config.setDefaultCompartment("mycompartment");

// Open the handle
NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);

// Use the handle to execute operations

A handle has memory and network resources associated with it. Use the
NoSQLHandle.close method to free up the resources when your application is done
using the handle.
To minimize network activity and resource allocation and deallocation overheads, it's
best to avoid creating and closing handles repeatedly. For example, creating and
closing a handle around each operation would result in poor application performance.
A handle permits concurrent operations, so a single handle is sufficient to access
tables in a multi-threaded application. The creation of multiple handles incurs
additional resource overheads without providing any performance benefit.
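For example, a minimal sketch of this lifecycle (config is the NoSQLHandleConfig built
above):

NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);
try {
    // ... use the single handle for all operations, across threads ...
} finally {
    // Free the memory and network resources held by the handle.
    handle.close();
}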

Python
A handle is created by first creating a borneo.NoSQLHandleConfig instance to
configure the communication endpoint, authorization information, as well as default
values for handle configuration. borneo.NoSQLHandleConfig represents a connection
to the service. Once created it must be closed using the method
borneo.NoSQLHandle.close() in order to clean up resources. Handles are thread-safe
and intended to be shared.


An example of acquiring a NoSQL Handle for the Oracle NoSQL Cloud Service:

from borneo import NoSQLHandle, NoSQLHandleConfig, Regions
from borneo.iam import SignatureProvider

# create AuthorizationProvider
provider = SignatureProvider()
# create handle config using the correct desired region
# as endpoint, add a default compartment
config = NoSQLHandleConfig(Regions.US_ASHBURN_1). \
    set_authorization_provider(provider). \
    set_default_compartment('mycompartment')
# create the handle
handle = NoSQLHandle(config)

Note:
To reduce resource usage and overhead of handle creation it is best to avoid
excessive creation and closing of borneo.NoSQLHandle instances.

Go
The first step in any Oracle NoSQL Database Cloud Service Go application is to create a
nosqldb.Client handle used to send requests to the service. Instances of the Client handle
are safe for concurrent use by multiple goroutines and intended to be shared in a multi-
goroutine application. The handle is configured using your credentials and other
authentication information.

provider, err := iam.NewSignatureProviderFromFile(cfgfile, profile,
    passphrase, compartment)
cfg := nosqldb.Config{
    Region:                "us-phoenix-1",
    AuthorizationProvider: provider,
}
client, err := nosqldb.NewClient(cfg)
// use client for all NoSQL DB operations

Node.js
Class NoSQLClient represents the main access point to the service. To create instance of
NoSQLClient you need to provide appropriate configuration information. This information is
represented by a plain JavaScript object and may be provided to the constructor of
NoSQLClient as the object literal. Alternatively, you may choose to store this information in a
JSON configuration file and the constructor of NoSQLClient with the path (absolute or relative
to the application's current directory) to that file.
The first example below creates instance of NoSQLClient for the Cloud Service using
configuration object literal. It also adds a default compartment and overrides some default
timeout values in the configuration object.

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const Region = require('oracle-nosqldb').Region;

let client = new NoSQLClient({
    region: Region.US_ASHBURN_1,
    timeout: 20000,
    ddlTimeout: 40000,
    compartment: 'mycompartment',
    auth: {
        iam: {
            configFile: '~/myapp/.oci/config',
            profileName: 'Jane'
        }
    }
});

The second example stores the same configuration in a JSON file config.json and
uses it to create the NoSQLClient instance.

Sample config.json file:

{
  "region": "US_ASHBURN_1",
  "timeout": 20000,
  "ddlTimeout": 40000,
  "compartment": "mycompartment",
  "auth": {
    "iam": {
      "configFile": "~/myapp/.oci/config",
      "profileName": "Jane"
    }
  }
}

Application code:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;


let client = new NoSQLClient('config.json');

C#
Class NoSQLClient represents the main access point to the service. To create an
instance of NoSQLClient you need to provide appropriate configuration information.
This information is represented by the NoSQLConfig class, an instance of which can be
provided to the constructor of NoSQLClient. Alternatively, you may choose to store the
configuration information in a JSON configuration file and use the constructor of
NoSQLClient that takes the path (absolute or relative to the current directory) to that file.

The first example below creates an instance of NoSQLClient for the Cloud Service using
NoSQLConfig. It also adds a default compartment and overrides some default timeout
values in NoSQLConfig.

var client = new NoSQLClient(
    new NoSQLConfig
    {
        Region = Region.US_ASHBURN_1,
        Timeout = TimeSpan.FromSeconds(20),
        TableDDLTimeout = TimeSpan.FromSeconds(40),
        Compartment = "mycompartment",
        AuthorizationProvider = new IAMAuthorizationProvider(
            "~/myapp/.oci/config", "Jane")
    });

The second example stores the same configuration in a JSON file config.json and uses it to
create the NoSQLClient instance.
config.json

{
  "Region": "us-ashburn-1",
  "Timeout": 20000,
  "TableDDLTimeout": 40000,
  "Compartment": "mycompartment",
  "AuthorizationProvider":
  {
    "AuthorizationType": "IAM",
    "ConfigFile": "~/myapp/.oci/config",
    "ProfileName": "Jane"
  }
}

Application code:

var client = new NoSQLClient("config.json");

Spring Data
Obtaining a NoSQL connection
In a Spring Data application, you must set up the AppConfig class that provides a
NosqlDbConfig Spring bean. The NosqlDbConfig Spring bean describes how to connect to
the Oracle NoSQL Database Cloud Service.
Create the AppConfig class that extends the AbstractNosqlConfiguration class. This
exposes the connection and security parameters to the Oracle NoSQL Database SDK for
Spring Data.
Return a NosqlDbConfig instance object with the connection details to the Oracle NoSQL
Database Cloud Service. Provide the @Configuration and @EnableNosqlRepositories
annotations to this NosqlDbConfig class. The @Configuration annotation informs the Spring
Data Framework that the AppConfig class is a configuration class that should be loaded
before running the program. The @EnableNosqlRepositories annotation informs the Spring
Data Framework that it needs to load the program and look for the repositories that extend
the NosqlRepository interface. The @Bean annotation is required for the repositories to be
instantiated.
Create a nosqlDbConfig @Bean annotated method to return an instance of the
NosqlDbConfig class. The NosqlDbConfig instance object is used by the Spring Data
Framework to authenticate with the Oracle NoSQL Database Cloud Service.
You can use different methods to connect to the Oracle NoSQL Database Cloud Service. For
more details, see Connecting your Application to NDCS.


In the following example, you authenticate using the SignatureProvider method. You
require the tenancy id, user id, and fingerprint information, which can be found on the user
profile page of the cloud account under the User Information tab on View
Configuration File. You can also add the passphrase to your private key. For more
details, see Authentication to connect to Oracle NoSQL Database.

import com.oracle.nosql.spring.data.config.AbstractNosqlConfiguration;
import com.oracle.nosql.spring.data.config.NosqlDbConfig;
import
com.oracle.nosql.spring.data.repository.config.EnableNosqlRepositories;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import oracle.nosql.driver.Region;
import oracle.nosql.driver.iam.SignatureProvider;

import java.io.File;

@Configuration
@EnableNosqlRepositories
public class AppConfig extends AbstractNosqlConfiguration {

    /* Annotation to tell the Spring Data Framework that the returned
       object should be registered as a bean in the Spring application. */
    @Bean
    public NosqlDbConfig nosqlDbConfig() {
        SignatureProvider signatureProvider;
        /* Optional. A passphrase for the key, if it is encrypted. */
        char[] passphrase = <pass phrase>;

        /* Details that are required to authenticate and authorize
           access to the Oracle NoSQL Database Cloud Service. */
        signatureProvider = new SignatureProvider(
            <tenantID>,      // The Oracle Cloud Identifier (OCID) of the tenancy.
            <userID>,        // The OCID of a user in the tenancy.
            <fingerprint>,   // The fingerprint of the key pair used for signing.
            new File(<privateKeyFile>), // Full path to the key file.
            passphrase);     // Optional.

        /* Provide the service URL of the Oracle NoSQL Database Cloud
           Service. Replace Region.<Region name>.endpoint() with the
           appropriate value, for example, Region.US_ASHBURN_1.endpoint(). */
        return new NosqlDbConfig(Region.<Region name>.endpoint(),
            signatureProvider);
    }
}
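As a minimal, hedged sketch of the next step (the entity and repository names are
illustrative), the Spring Data Framework can then instantiate repositories that extend the
NosqlRepository interface:

import com.oracle.nosql.spring.data.core.mapping.NosqlId;
import com.oracle.nosql.spring.data.repository.NosqlRepository;

class User {
    @NosqlId(generated = true)
    long id;            // primary key, generated by the service
    String name;
}

interface UserRepository extends NosqlRepository<User, Long> {
    Iterable<User> findByName(String name);  // derived query
}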


About Compartments
Learn how to specify the compartment while creating and working with Oracle NoSQL
Database Cloud Service tables using Oracle NoSQL Database Drivers.
Oracle NoSQL Database Cloud Service tables are created in a compartment and are scoped
to that compartment. When authenticated as a specific user, your tables are managed in the
root compartment of your tenancy unless otherwise specified. Organizing tables into different
compartments helps with organization and security.
If you have been authenticated using an instance principal (accessing the service from an
OCI compute instance), you must specify a compartment using its id (OCID), as there is no
default in this case. See Calling Service From an Instance in Oracle Cloud Infrastructure
Documentation.
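For example, a hedged Java sketch of the instance-principal case (the compartment OCID
is illustrative):

/* Instance-principal authorization requires an explicit compartment,
 * since there is no default compartment in this case. */
SignatureProvider ap = SignatureProvider.createWithInstancePrincipal();
NoSQLHandleConfig config = new NoSQLHandleConfig(Region.US_ASHBURN_1, ap);
config.setDefaultCompartment("ocid1.compartment.oc1..example"); // hypothetical OCID
NoSQLHandle handle = NoSQLHandleFactory.createNoSQLHandle(config);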

• Java

• Python

• Go

• Node.js

• C#

Java
There are several ways to specify a compartment in your application code:
1. Use a default compartment in NoSQLHandleConfig so that it applies to all the operations
using the handle. See Obtaining a NoSQL Handle for an example.
2. Use the compartment name or id (OCID) in each request in addition to the table name.
This overrides any default compartment.
For example:

GetRequest getReq = new GetRequest().setTableName("mytable")
    .setCompartment("mycompartment");

3. Use the compartment name as a prefix on the table name. This overrides any default
compartment as well as a compartment specified using API.
For example:

GetRequest getReq =
    new GetRequest().setTableName("mycompartment:mytable");

When using a named compartment, the name can be the simple name of a top-level
compartment or a path to a nested compartment. In the latter case, the path is a "." (dot)
separated path.


Note:
While specifying the path to a nested compartment, do not include the top-
level compartment's name in the path as that is inferred from the tenancy.
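For example (a sketch with hypothetical compartment names), a table in compartmentB,
which is nested under the top-level compartmentA, is addressed as:

GetRequest getReq = new GetRequest()
    .setTableName("compartmentA.compartmentB:mytable");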

Python
There are several ways to specify a compartment in your application code:
• A method exists to allow specification of a default compartment for requests in
borneo.NoSQLHandleConfig.set_compartment(). This overrides the user’s default
compartment.
• In addition, it is possible to specify a compartment in each Request instance.

The set_compartment methods take either an id (OCID) or a compartment name or
path. If a compartment name is used it may be the name of a top-level compartment.

Note:
If a compartment path is used to reference a nested compartment, the path
is a dot-separated path that excludes the top-level compartment of the path,
for example, compartmentA.compartmentB.

Instead of setting a compartment in the request, it is possible to use a compartment
name to prefix a table name in a request, query, or DDL statement. This usage
overrides any other setting of the compartment. For example,

...
request = PutRequest().set_table_name('mycompartment:mytable')
...
create_statement = 'create table mycompartment:mytable(...)'
...
request = GetRequest().set_table_name('compartmentA.compartmentB:mytable')

Go
There are several ways to specify a compartment in your application code:
• You can set a desired compartment name or id.
• Set to an empty string to use the default compartment, that is the root
compartment of the tenancy.
• If using a nested compartment, specify the full compartment path relative to the
root compartment as compartmentID. For example, if using
rootCompartment.compartmentA.compartmentB, the compartmentID should be
set to compartmentA.compartmentB.
• You can also use the compartment OCID as the string value.

compartmentID := "<optional-compartment-name-or-ID>"
provider, err := iam.NewRawSignatureProvider(tenancy, user, region,
    fingerprint, compartmentID, privateKey, &privateKeyPassphrase)

Node.js
The default compartment for tables is the root compartment of the user's tenancy. A default
compartment for all operations can be specified by setting the Compartment property of
NoSQLConfig. For example:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const Region = require('oracle-nosqldb').Region;

const client = new NoSQLClient({
    region: Region.US_ASHBURN_1,
    compartment: 'mycompartment'
});

The string value may be either a compartment id or a compartment name or path. If it is a
simple name it must specify a top-level compartment. If it is a path to a nested compartment,
the top-level compartment must be excluded as it is inferred from the tenancy. A
compartment can also be specified in each request in the options object. This value overrides
the initial configuration value.
If a compartment is not supplied, the tenancy OCID is used as the default. Note this only
applies if you are authorizing with the user's identity. When using an instance principal or
resource principal, the compartment id must be specified.

C#
The default compartment for tables is the root compartment of the user's tenancy. A default
compartment for all operations can be specified by setting the Compartment property of
NoSQLConfig. For example:

var client = new NoSQLClient(
    new NoSQLConfig
    {
        Region = Region.US_ASHBURN_1,
        Compartment = "<compartment_ocid_or_name>"
    });

The string value may be either a compartment OCID or a compartment name or path. If it is a
simple name it must specify a top-level compartment. If it is a path to a nested compartment,
the top-level compartment must be excluded as it is inferred from the tenancy.
In addition, all operation options classes have a Compartment property, such as
TableDDLOptions.Compartment, GetOptions.Compartment, PutOptions.Compartment, etc.
Thus you may also specify the compartment separately for any operation. This value, if set,
will override the compartment value in NoSQLConfig, if any.

If a compartment is not supplied, the tenancy OCID is used as the default. Note this only
applies if you are authorizing with the user's identity. When using an instance principal or
resource principal, the compartment id must be specified.


Creating Tables and Indexes


Learn how to create tables and indexes.
Creating a table is the first step of developing your application.
You use the API class and methods to execute all DDL statements, such as creating,
modifying, and dropping tables. You can also set table limits using the API method.
Before creating a table, see:
• Supported Data Types, and
• Oracle NoSQL Database Cloud Service Limits
Examples of DDL statements are:

/* Create a new table called users */


CREATE TABLE IF NOT EXISTS users(id INTEGER,
name STRING,
PRIMARY KEY(id))

/* Create a new table called users and set the TTL value to 4 days */
CREATE TABLE IF NOT EXISTS users(id INTEGER,
name STRING,
PRIMARY KEY(id))
USING TTL 4 days

/* Create a new index called nameIdx on the name field in the users
table */
CREATE INDEX IF NOT EXISTS nameIdx ON users(name)

• Java

• Python

• Go

• Node.js

• C#

• Spring Data

Java
The following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL
Handle . To explore other options of specifying a compartment for the NoSQL tables,
see About Compartments .


Create a table and index using the TableRequest and its methods. The TableRequest class
lets you pass a DDL statement to the TableRequest.setStatement method.

/* Create a simple table with an integer key and a single json data
 * field and set your desired table capacity.
 * Set the table TTL value to 3 days.
 */
String createTableDDL = "CREATE TABLE IF NOT EXISTS users " +
    "(id INTEGER, name STRING, " +
    "PRIMARY KEY(id)) USING TTL 3 days";

/* Call the appropriate constructor for
 * 1) Provisioned Capacity:
 *    TableLimits limits = new TableLimits(50, 50, 25);
 * 2) On-demand Capacity - only set the storage limit:
 *    TableLimits limits = new TableLimits(5);
 * In this example, we will use Provisioned Capacity.
 */
TableLimits limits = new TableLimits(50, 50, 25);
TableRequest treq = new TableRequest().setStatement(createTableDDL)
    .setTableLimits(limits);

// start the asynchronous operation
TableResult tres = handle.tableRequest(treq);

// wait for completion of the operation
tres.waitForCompletion(handle,
    60000, // wait for 60 sec
    1000); // delay in ms for poll

// Create an index called nameIdx on the name field in the users table.
treq = new TableRequest().setStatement(
    "CREATE INDEX IF NOT EXISTS nameIdx ON users(name)");

// start the asynchronous operation
tres = handle.tableRequest(treq);

// wait for completion of the operation
tres.waitForCompletion(handle,
    60000, // wait for 60 sec
    1000); // delay in ms for poll

Creating a child table: You use the API class and methods to execute the DDL statement to
create a child table. While creating a child table, table limits need not be explicitly set, as a
child table inherits the limits of its parent table.

final static String tableName = "users";
final static String childtableName = "userDetails";

String createchildTableDDL = "CREATE TABLE IF NOT EXISTS " +
    tableName + "." + childtableName +
    "(address STRING, salary INTEGER, " +
    "PRIMARY KEY(address))";
TableRequest treq = new TableRequest().setStatement(createchildTableDDL);
System.out.println("Creating child table " + childtableName);
TableResult tres = handle.tableRequest(treq);
/* The request is async, so wait for the table to become active. */
System.out.println("Waiting for " + childtableName + " to become active");
tres.waitForCompletion(handle, 60000, /* wait 60 sec */
    1000); /* delay ms for poll */
System.out.println("Table " + childtableName + " is active");

Find the list of tables:


You can get a list of tables.

ListTablesRequest tablereq = new ListTablesRequest();
String[] tablelis = handle.listTables(tablereq).getTables();
if (tablelis.length == 0) {
    System.out.println("No tables available");
} else {
    System.out.println("The tables available are");
    for (int i = 0; i < tablelis.length; i++) {
        System.out.println(tablelis[i]);
    }
}

You can also fetch the schema of a table at any time.

GetTableRequest gettblreq = new GetTableRequest();
gettblreq.setTableName(tableName);
System.out.println("The schema details for the table is "
    + handle.getTable(gettblreq).getSchema());

Python
DDL statements are executed using the borneo.TableRequest class. All calls to
borneo.NoSQLHandle.table_request() are asynchronous so it is necessary to check
the result and call borneo.TableResult.wait_for_completion() to wait for the
operation to complete.

# Create a simple table with an integer key and a single
# json data field and set your desired table capacity.
# Set the table TTL value to 3 days.
from borneo import TableLimits, TableRequest

statement = ('create table if not exists users(id integer, name string, ' +
             'primary key(id)) USING TTL 3 DAYS')

# In the Cloud Service TableLimits is a required object for table
# creation. It specifies the throughput and capacity for the table in
# ReadUnits, WriteUnits, GB.
# Call the appropriate constructor for
# 1) Provisioned Capacity:
#    TableLimits(50, 50, 25)
# 2) On-demand Capacity - only set the storage limit:
#    TableLimits(25)
# In this example, we will use Provisioned Capacity.
request = TableRequest().set_statement(statement).set_table_limits(
    TableLimits(50, 50, 25))

# assume that a handle has been created, as handle; make the request and
# wait for 60 seconds, polling every 1 second
result = handle.do_table_request(request, 60000, 1000)
# the above call to do_table_request is equivalent to
# result = handle.table_request(request)
# result.wait_for_completion(handle, 60000, 1000)

# Create an index called nameIdx on the name field in the users table.
request = TableRequest().set_statement(
    'CREATE INDEX IF NOT EXISTS nameIdx ON users(name)')
result = handle.do_table_request(request, 60000, 1000)

Creating a child table: You use the API class and methods to execute the DDL statement to
create a child table. While creating a child table, table limits need not be explicitly set, as a
child table inherits the limits of its parent table.

statement = ('create table if not exists users.userDetails(address STRING, ' +
             'salary integer, primary key(address))')
print('Creating table: ' + statement)
request = TableRequest().set_statement(statement)
# Ask the cloud service to create the table,
# waiting for a total of 40000 milliseconds and polling the service
# every 3000 milliseconds to see if the table is active
table_result = handle.do_table_request(request, 40000, 3000)
if table_result.get_state() != State.ACTIVE:
    raise NameError('Table userDetails is in an unexpected state ' +
                    str(table_result.get_state()))

Find the list of tables:


You can get a list of tables.

ltr = ListTablesRequest()
tables = handle.list_tables(ltr).get_tables()
if len(tables) == 0:
    print('No tables available')
else:
    print('The tables available are: ' + str(tables))

You can also fetch the schema of a table at any time.

request = GetTableRequest().set_table_name(table_name)
result = handle.get_table(request)
print('The schema details for the table is: ' + result.get_schema())

Go
The following example creates a simple table with an integer key and a single STRING
field. The create table request is asynchronous. You wait for the table creation to
complete.

// Create a simple table with an integer key and a single
// json data field and set your desired table capacity.
// Set the table TTL value to 3 days.
tableName := "users"
stmt := fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s "+
    "(id INTEGER, name STRING, PRIMARY KEY(id)) "+
    "USING TTL 3 DAYS", tableName)

// Set the appropriate limits for
// 1) Provisioned Capacity:
//    &nosqldb.TableLimits{ReadUnits: 50, WriteUnits: 50, StorageGB: 25}
// 2) On-demand Capacity - only set the storage limit:
//    &nosqldb.TableLimits{StorageGB: 25}
// In this example, we will use Provisioned Capacity.
tableReq := &nosqldb.TableRequest{
    Statement: stmt,
    TableLimits: &nosqldb.TableLimits{
        ReadUnits:  50,
        WriteUnits: 50,
        StorageGB:  25,
    },
}

tableRes, err := client.DoTableRequest(tableReq)
if err != nil {
    fmt.Printf("cannot initiate CREATE TABLE request: %v\n", err)
    return
}
// The create table request is asynchronous, wait for table creation
// to complete.
_, err = tableRes.WaitForCompletion(client, 60*time.Second, time.Second)
if err != nil {
    fmt.Printf("Error finishing CREATE TABLE request: %v\n", err)
    return
}
fmt.Println("Created table ", tableName)

// Create an index called nameIdx on the name field in the users table.
stmtInd := "CREATE INDEX IF NOT EXISTS nameIdx ON users(name)"
tableReq = &nosqldb.TableRequest{Statement: stmtInd}
tableRes, err = client.DoTableRequest(tableReq)
if err != nil {
    fmt.Printf("cannot initiate CREATE INDEX request: %v\n", err)
    return
}
_, err = tableRes.WaitForCompletion(client, 60*time.Second, time.Second)
if err != nil {
    fmt.Printf("Error finishing CREATE INDEX request: %v\n", err)
    return
}
fmt.Println("Created index nameIdx")

Creating a child table: You use the API class and methods to execute the DDL statement to
create a child table. While creating a child table, table limits need not be explicitly set, as a
child table inherits the limits of its parent table.

// Creates a simple child table with a string key and a single integer field.
childtableName := "users.userDetails"
stmt1 := fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s ("+
"address STRING, "+
"salary INTEGER, "+
"PRIMARY KEY(address))",
childtableName)
tableReq1 := &nosqldb.TableRequest{Statement: stmt1}
tableRes1, err := client.DoTableRequest(tableReq1)
if err != nil {
fmt.Printf("cannot initiate CREATE TABLE request: %v\n", err)
return
}
// The create table request is asynchronous, wait for table creation
// to complete.
_, err = tableRes1.WaitForCompletion(client, 60*time.Second, time.Second)
if err != nil {
fmt.Printf("Error finishing CREATE TABLE request: %v\n", err)
return
}
fmt.Println("Created table ", childtableName)

Find the list of tables:


You can get a list of tables.

req := &nosqldb.ListTablesRequest{Timeout: 3 * time.Second}
res, err := client.ListTables(req)
if err != nil {
    fmt.Printf("cannot list tables: %v\n", err)
    return
}
if len(res.Tables) == 0 {
    fmt.Println("No tables in the given compartment")
    return
}
fmt.Println("The tables in the given compartment are:")
for _, table := range res.Tables {
    fmt.Println(table)
}


You can also fetch the schema of a table at any time.

req := &nosqldb.GetTableRequest{
    TableName: tableName,
    Timeout:   3 * time.Second,
}
res, err := client.GetTable(req)
if err != nil {
    fmt.Printf("cannot get table: %v\n", err)
    return
}
fmt.Printf("The schema details for the table: state=%s, limits=%v\n",
    res.State, res.Limits)

Node.js
Table DDL statements are executed by tableDDL method. Like most other methods of
NoSQLClient class, this method is asynchronous and it returns a Promise of
TableResult. TableResult is a plain JavaScript object that contains status of DDL
operation such as its TableState, name, schema and its TableLimit.
tableDDL method takes opt object as the 2nd optional argument. When you are
creating a table, you must specify its TableLimits as part of the opt argument.
TableLimits specifies maximum throughput and storage capacity for the table as the
amount of read units, write units, and Gigabytes of storage.
Note that tableDDL method only launches the specified DDL operation in the
underlying store and does not wait for its completion. The resulting TableResult will
most likely have one of intermediate table states such as TableState.CREATING,
TableState.DROPPING or TableState.UPDATING (the latter happens when table is in
the process of being altered by ALTER TABLE statement, table limits are being
changed or one of its indexes is being created or dropped).
When the underlying operation completes, the table state should change to
TableState.ACTIVE or TableState.DROPPED (the latter if the DDL operation was
DROP TABLE).

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const TableState = require('oracle-nosqldb').TableState;
const client = new NoSQLClient('config.json');

async function createUsersTable() {
    try {
        const statement = 'CREATE TABLE IF NOT EXISTS users(id INTEGER, ' +
            'name STRING, PRIMARY KEY(id))';

        // Set the appropriate table limits for
        // 1) Provisioned Capacity:
        //    tableLimits: { readUnits: 50, writeUnits: 50, storageGB: 25 }
        // 2) On-demand Capacity - only set the storage limit:
        //    tableLimits: { storageGB: 25 }
        // In this example, we will use Provisioned Capacity.
        let result = await client.tableDDL(statement, {
            tableLimits: {
                readUnits: 50,
                writeUnits: 50,
                storageGB: 25
            }
        });

        result = await client.forCompletion(result);
        console.log('Table users created');
    } catch(error) {
        // handle errors
    }
}

After the above call returns, result will reflect the final state of the operation. Alternatively, to
use the complete option, substitute the code in the try-catch block above with the following:

const statement = 'CREATE TABLE IF NOT EXISTS users(id INTEGER, ' +
    'name STRING, PRIMARY KEY(id))';

// Set the appropriate table limits for
// 1) Provisioned Capacity:
//    tableLimits: { readUnits: 50, writeUnits: 50, storageGB: 25 }
// 2) On-demand Capacity - only set the storage limit:
//    tableLimits: { storageGB: 25 }
// In this example, we will use Provisioned Capacity.
let result = await client.tableDDL(statement, {
    tableLimits: {
        readUnits: 50,
        writeUnits: 50,
        storageGB: 25
    },
    complete: true
});

console.log('Table users created');

You need not specify TableLimits for any DDL operation other than CREATE TABLE. You
may also change the table limits of a table after it has been created by calling the
setTableLimits method. This may also require waiting for the completion of the operation in
the same way as waiting for completion of operations initiated by tableDDL.

// Create an index called nameIdx on the name field in the users table.
try {
    const statement = 'CREATE INDEX IF NOT EXISTS nameIdx ON users(name)';
    let result = await client.tableDDL(statement);
    result = await client.forCompletion(result);
    console.log('Index nameIdx created');
} catch(error) {
    // handle errors
}

Creating a child table: You use the API class and methods to execute the DDL statement to
create a child table. While creating a child table, table limits need not be explicitly set, as a
child table inherits the limits of its parent table.

/**
 * This function creates the child table userDetails with two columns:
 * a string column address, which is the primary key, and an integer
 * column salary.
 * @param {NoSQLClient} handle An instance of NoSQLClient
 */
const TABLE_NAME = 'users';
const CHILDTABLE_NAME = 'userDetails';
async function createChildTable(handle) {
    const createChildtblDDL =
        `CREATE TABLE IF NOT EXISTS ${TABLE_NAME}.${CHILDTABLE_NAME} ` +
        `(address STRING, salary INTEGER, PRIMARY KEY(address))`;
    console.log('Create table: ' + createChildtblDDL);
    let res = await handle.tableDDL(createChildtblDDL, { complete: true });
}

Find the list of tables:


You can get a list of tables.

let varListTablesResult = await client.listTables();
console.log('The tables in the given compartment are:');
console.log(varListTablesResult.tables);

You can also fetch the schema of a table at any time.

let resExistingTab = await client.getTable(tableName);
await client.forCompletion(resExistingTab);
console.log('The schema details for the table:');
console.log(resExistingTab.schema);

C#
To create tables and execute other Data Definition Language (DDL) statements, such
as creating, modifying and dropping tables as well as creating and dropping indexes,
use methods ExecuteTableDDLAsync and ExecuteTableDDLWithCompletionAsync.
Both ExecuteTableDDLAsync and ExecuteTableDDLWithCompletionAsync return Task<TableResult>. A TableResult instance contains the status of the DDL operation, such as its TableState, and the table schema. Each of these methods comes with several overloads. In particular, you may pass options for the DDL operation as TableDDLOptions.

When creating a table, you must specify its TableLimits. Table limits specify the maximum throughput and storage capacity for the table as the amount of read units, write units, and gigabytes of storage. You may use an overload that takes a tableLimits parameter, or pass the table limits as the TableLimits property of TableDDLOptions.

Note that these are potentially long-running operations. The method ExecuteTableDDLAsync only launches the specified DDL operation on the service and does not wait for its completion. You may asynchronously wait for the table DDL operation to complete by calling WaitForCompletionAsync on the returned TableResult instance.

var client = new NoSQLClient("config.json");

try {
    var statement = "CREATE TABLE IF NOT EXISTS users(id INTEGER," +
        "name STRING, PRIMARY KEY(id))";

    // Call the appropriate TableLimits constructor for the capacity mode:
    // 1) Provisioned Capacity
    //    new TableLimits(50, 50, 25);
    // 2) On-demand Capacity - only set the storage limit
    //    new TableLimits(25);
    // In this example, we will use Provisioned Capacity.
    var result = await client.ExecuteTableDDLAsync(statement,
        new TableLimits(50, 50, 25));

    await result.WaitForCompletionAsync();
    Console.WriteLine("Table users created.");
} catch(Exception ex) {
    // handle exceptions
}

Note that WaitForCompletionAsync changes the calling TableResult instance to reflect the operation completion.
Alternatively, you may use ExecuteTableDDLWithCompletionAsync. Substitute the statements in the try-catch block with the following:

var statement = "CREATE TABLE IF NOT EXISTS users(id INTEGER," +
    "name STRING, PRIMARY KEY(id))";

// Call the appropriate TableLimits constructor for the capacity mode:
// 1) Provisioned Capacity
//    new TableLimits(50, 50, 25);
// 2) On-demand Capacity - only set the storage limit
//    new TableLimits(25);
// In this example, we will use Provisioned Capacity.
await client.ExecuteTableDDLWithCompletionAsync(statement,
    new TableLimits(50, 50, 25));

Console.WriteLine("Table users created.");

You need not specify TableLimits for any DDL operation other than CREATE TABLE. You can also change the limits of an existing table by calling the SetTableLimitsAsync or SetTableLimitsWithCompletionAsync methods.
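For example, here is a minimal sketch of changing the limits of the existing users table and waiting for completion (the new limit values are illustrative):

// Sketch: change the limits of an existing table and wait for completion
var limitsResult = await client.SetTableLimitsWithCompletionAsync(
    "users", new TableLimits(25, 25, 25));
Console.WriteLine("Table state: {0}", limitsResult.TableState);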
Creating a child table: You use the API class and methods to execute a DDL statement that creates a child table. When creating a child table, table limits need not be explicitly set, because a child table inherits the limits of its parent table.

private const string TableName = "users";
private const string ChildTableName = "userDetails";

// Create a child table
var childtblsql = $"CREATE TABLE IF NOT EXISTS {TableName}.{ChildTableName}" +
    "(address STRING, salary INTEGER, PRIMARY KEY(address))";
Console.WriteLine("\nCreate table {0}", ChildTableName);
var tableResult = await client.ExecuteTableDDLAsync(childtblsql);
Console.WriteLine(" Creating table {0}", ChildTableName);
Console.WriteLine(" Table state: {0}", tableResult.TableState);

// Wait for the operation to complete
await tableResult.WaitForCompletionAsync();
Console.WriteLine(" Table {0} is created", tableResult.TableName);
Console.WriteLine(" Table state: {0}", tableResult.TableState);

Find the list of tables:

You can get a list of tables.

var result = await client.ListTablesAsync();
Console.WriteLine("The tables in the given compartment are:");
foreach(var tableName in result.TableNames) {
    Console.WriteLine(tableName);
}

Spring Data
In Spring Data applications, tables are automatically created when the entities are initialized at the beginning of the application, unless @NosqlTable.autoCreateTable is set to false.

Create a Users entity class to persist. This entity class represents a table in the Oracle
NoSQL Database and an instance of this entity corresponds to a row in that table.
You can set the default TableLimits in the @NosqlDbConfig instance using
NosqlDbConfig.getDefaultCapacityMode(),
NosqlDbConfig.getDefaultStorageGB(), NosqlDbConfig.getDefaultReadUnits(),
and NosqlDbConfig.getDefaultWriteUnits() methods. TableLimits can also be
specified per table if @NosqlTable annotation is used, through capacityMode,
readUnits, writeUnits, and storageGB fields.
Provide the @NosqlId annotation to indicate the ID field. The generated=true attribute
specifies that the ID will be auto-generated. You can set the table level TTL by
providing the ttl() and ttlUnit() parameters in the @NosqlTable annotation of the
entity class. For details on all the Spring Data classes, methods, interfaces, and
examples see SDK for Spring Data API Reference.
If the ID field type is a String, a UUID will be used. If the ID field type is int or long, a
"GENERATED ALWAYS as IDENTITY (NO CYCLE)" sequence is used.

import com.oracle.nosql.spring.data.core.mapping.NosqlId;
import com.oracle.nosql.spring.data.core.mapping.NosqlTable;

/* Set the TableLimits and TTL values. */
@NosqlTable(readUnits = 50, writeUnits = 50, storageGB = 25, ttl = 10,
        ttlUnit = NosqlTable.TtlUnit.DAYS)
public class Users {
    @NosqlId(generated = true)
    long id;
    String firstName;
    String lastName;

    /* A public or package-protected constructor is required when
       retrieving from the database. */
    public Users() {}

    @Override
    public String toString() {
        return "Users{" +
            "id=" + id + ", " +
            "firstName=" + firstName + ", " +
            "lastName=" + lastName +
            '}';
    }
}

Create the following UsersRepository interface. This interface extends the NosqlRepository
interface and provides the entity class and the data type of the primary key in that class as
parameterized types to the NosqlRepository interface. This NosqlRepository interface
provides methods that are used to store or retrieve data from the database.

import com.oracle.nosql.spring.data.repository.NosqlRepository;

/* The Users is the entity class and Long is the data type of the primary
key in the Users class.
This interface provides methods that return iterable instances of the
Users class. */

public interface UsersRepository extends NosqlRepository<Users, Long> {
/* Search the Users table by the last name and return an iterable
instance of the Users class.*/
Iterable<Users> findByLastName(String lastname);
}

You can use Spring's CommandLineRunner interface to write the application code that implements the run method and has the main method.

Note:
You can code the functionality as per your requirements by implementing any of the
various interfaces that the Spring Data Framework provides. For more information
on setting up a Spring boot application, see Spring Boot.

import com.oracle.nosql.spring.data.core.NosqlTemplate;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;

/* The @SpringBootApplication annotation helps you to build an application
   using Spring Data Framework rapidly. */
@SpringBootApplication


public class App implements CommandLineRunner {

    /* The annotation enables Spring Data Framework to look up the
       configuration file for a matching bean. */
    @Autowired
    private UsersRepository repo;

    public static void main(String[] args) {
        ConfigurableApplicationContext ctx =
            SpringApplication.run(App.class, args);
        SpringApplication.exit(ctx, () -> 0);
        ctx.close();
        System.exit(0);
    }

@Override
public void run(String... args) throws Exception {
}
}

When a table is created through the Spring Data application, a default schema is
created automatically, which includes two columns - the primary key column (types
String, int, long, or timestamp) and a JSON column called kv_json_.

Note:
If a table exists already, it must comply with the generated schema.

To create an index on a field in the Users table, you use NosqlTemplate.runTableRequest().

Create the AppConfig class that extends the AbstractNosqlConfiguration class to provide the connection details of the Oracle NoSQL Database. For details, see Obtaining a NoSQL connection.
In the application, you instantiate the NosqlTemplate class by providing the
NosqlTemplate create (NosqlDbConfig nosqlDBConfig) method with the instance of
the AppConfig class. You then modify the table using the
NosqlTemplate.runTableRequest() method. You provide the NoSQL statement for
the creation of the index in the NosqlTemplate.runTableRequest() method.

In this example, you create an index on the lastName field in the Users table.

import com.oracle.nosql.spring.data.core.NosqlTemplate;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;

/* Create an Index on the lastName field of the Users Table. */


try {
    AppConfig config = new AppConfig();
    NosqlTemplate idx = NosqlTemplate.create(config.nosqlDbConfig());
    idx.runTableRequest("CREATE INDEX IF NOT EXISTS nameIdx ON " +
        "Users(kv_json_.lastName AS STRING)");
    System.out.println("Index created successfully");
} catch (Exception e) {
    System.out.println("Exception creating index: " + e);
}

For more details on table creation, see Example: Accessing Oracle NoSQL Database Using
Spring Data Framework in the Spring Data SDK Developers Guide.

Related Topics
• About Time to Live

Adding Data
Add rows to your table. When you store data in table rows, your application can easily
retrieve, add to, or delete information from a table.

• Java

• Python

• Go

• Node.js

• C#

• Spring Data

Java
The PutRequest class represents the input to a
NoSQLHandle.put(oracle.nosql.driver.ops.PutRequest) operation. This request can be
used to perform unconditional and conditional puts to:
• Overwrite any existing row. Overwrite is the default functionality.
• Succeed only if the row does not exist. Use the PutRequest.Option.IfAbsent method in
this case.
• Succeed only if the row exists. Use the PutRequest.Option.IfPresent method in this
case.
• Succeed only if the row exists and the version matches a specific version. Use the
PutRequest.Option.IfVersion method for this case and the
setMatchVersion(oracle.nosql.driver.Version) method to specify the version to
match.


Note:
First, connect your client driver to Oracle NoSQL Database Cloud Service to
get a handle and then complete other steps. This topic omits the steps for
connecting your client driver and creating a table.
If you do not yet have a table, see Creating Tables and Indexes .

The following example assumes that the default compartment is specified in NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL Handle. To explore other options of specifying a compartment for the NoSQL tables, see About Compartments.
To add rows to your table:

/* use the MapValue class and input the contents of a new row */
MapValue value = new MapValue().put("id", 1).put("name", "myname");

/* create the PutRequest, setting the required value and table name */
PutRequest putRequest = new PutRequest().setValue(value)
.setTableName("users");

/* use the handle to execute the PUT request;
 * on success, PutResult.getVersion() returns a non-null value
 */
PutResult putRes = handle.put(putRequest);
if (putRes.getVersion() != null) {
    // success
} else {
    // failure
}
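As a minimal sketch, the following conditional put succeeds only if the row does not already exist; on a failed condition, the returned version is null:

/* Sketch: a conditional put that succeeds only if the row is absent */
PutRequest putIfAbsent = new PutRequest()
    .setValue(new MapValue().put("id", 2).put("name", "Jill"))
    .setOption(PutRequest.Option.IfAbsent)
    .setTableName("users");
PutResult res = handle.put(putIfAbsent);
if (res.getVersion() == null) {
    // the row already existed, so the conditional put failed
}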

You can perform a sequence of PutRequest operations on a table that share the shard
key using the WriteMultipleRequest class. If the operation is successful, the
WriteMultipleResult.getSuccess() method returns true.
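A minimal sketch follows, assuming a hypothetical table userEvents whose shard key (id) is shared by both rows:

/* Sketch: put two rows that share the shard key in one atomic request */
WriteMultipleRequest wmReq = new WriteMultipleRequest();
wmReq.add(new PutRequest()
        .setValue(new MapValue().put("id", 1).put("eventId", 10))
        .setTableName("userEvents"),
    true); // abort the whole batch if this operation fails
wmReq.add(new PutRequest()
        .setValue(new MapValue().put("id", 1).put("eventId", 11))
        .setTableName("userEvents"),
    true);
WriteMultipleResult wmRes = handle.writeMultiple(wmReq);
if (wmRes.getSuccess()) {
    // all operations succeeded atomically
}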

See the Java API Reference Guide for more information about the APIs.
You can also add JSON data to your table. You can either convert JSON data into a
record for a fixed schema table or you can insert JSON data into a column whose data
type is of type JSON.
The PutRequest class also provides the setValueFromJson method which takes a
JSON string and uses that to populate a row to insert into the table. The JSON string
should specify field names that correspond to the table field names.
To add JSON data to your table:

/* Construct a simple row, specifying the values for each
 * field. The value for the row is this:
 *
 * {
 *   "cookie_id": 123,
 *   "audience_data": {
 *     "ipaddr": "10.0.00.xxx",
 *     "audience_segment": {
 *       "sports_lover": "2018-11-30",
 *       "book_reader": "2018-12-01"
 *     }
 *   }
 * }
 */
MapValue segments = new MapValue()
    .put("sports_lover", new TimestampValue("2018-11-30"))
    .put("book_reader", new TimestampValue("2018-12-01"));
MapValue value = new MapValue()
    .put("cookie_id", 123) // fill in cookie_id field
    .put("ipaddr", "10.0.00.xxx")
    .put("audience_segment", segments);
PutRequest putRequest = new PutRequest()
    .setValue(value)
    .setTableName(tableName);
PutResult putRes = handle.put(putRequest);

The same row can be inserted into the table as a JSON string:

/* Construct a simple row in JSON */
String jsonString = "{\"cookie_id\":123,\"ipaddr\":\"10.0.00.xxx\"," +
    "\"audience_segment\":{\"sports_lover\":\"2018-11-30\"," +
    "\"book_reader\":\"2018-12-01\"}}";
PutRequest putRequest = new PutRequest()
    .setValueFromJson(jsonString, null) // no options
    .setTableName(tableName);
PutResult putRes = handle.put(putRequest);

Python
The borneo.PutRequest class represents input to the borneo.NoSQLHandle.put() method
used to insert single rows. This method can be used for unconditional and conditional puts to:
• Overwrite any existing row. This is the default.
• Succeed only if the row does not exist. Use borneo.PutOption.IF_ABSENT for this case.
• Succeed only if the row exists. Use borneo.PutOption.IF_PRESENT for this case.
• Succeed only if the row exists and its borneo.Version matches a specific borneo.Version.
Use borneo.PutOption.IF_VERSION for this case and
borneo.PutRequest.set_match_version() to specify the version to match.

from borneo import PutRequest

# PutRequest requires a table name
request = PutRequest().set_table_name('users')
# set the value
request.set_value({'id': 1, 'name': 'Jane'})
result = handle.put(request)
# a successful put returns a non-empty version
if result.get_version() is not None:
    # success
    pass
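A minimal sketch of a conditional put that succeeds only if the row does not already exist:

from borneo import PutOption, PutRequest

# Sketch: succeed only if the row is absent
request = PutRequest().set_table_name('users')
request.set_option(PutOption.IF_ABSENT)
request.set_value({'id': 2, 'name': 'Jill'})
result = handle.put(request)
if result.get_version() is None:
    # the row already existed, so the conditional put failed
    pass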


When adding data, the values supplied must accurately correspond to the schema for the table. If they do not, an IllegalArgumentException is raised. Columns with default or nullable values can be left out without error, but it is recommended that values be provided for all columns to avoid unexpected defaults. By default, unexpected columns are ignored silently, and the value is put using the expected columns.
If you have multiple rows that share the same shard key, they can be put in a single request using borneo.WriteMultipleRequest, which is built from a number of PutRequest or DeleteRequest objects. You can also add JSON data to your table. In the case of a fixed-schema table, the JSON is converted to the target schema. JSON data can be directly inserted into a column of type JSON. The JSON data type allows you to create table data without a fixed schema, allowing more flexible use of the data.
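A minimal sketch follows, assuming a hypothetical table user_events whose shard key (id) is shared by both rows:

from borneo import PutRequest, WriteMultipleRequest

# Sketch: put two rows that share the shard key in one atomic request
wm_request = WriteMultipleRequest()
wm_request.add(
    PutRequest().set_table_name('user_events')
                .set_value({'id': 1, 'event_id': 10}),
    True)  # abort the whole batch if this operation fails
wm_request.add(
    PutRequest().set_table_name('user_events')
                .set_value({'id': 1, 'event_id': 11}),
    True)
result = handle.write_multiple(wm_request)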
The data value provided for a row or key is a Python dict. It can be supplied to the
relevant requests (GetRequest, PutRequest, DeleteRequest) in multiple ways:
• as a Python dict directly:

request.set_value({'id': 1})
request.set_key({'id': 1})

• as a JSON string:

request.set_value_from_json('{"id": 1, "name": "Jane"}')
request.set_key_from_json('{"id": 1}')

In both cases, the keys and values provided must accurately correspond to the schema of the table. If not, a borneo.IllegalArgumentException is raised. If the data is provided as JSON and the JSON cannot be parsed, a ValueError is raised.

Go
The nosqldb.PutRequest represents an input to the nosqldb.Put() function used to insert single rows. This function can be used for unconditional and conditional puts (a conditional sketch follows this list) to:
• Succeed only if the row does not exist. Specify types.PutIfAbsent for the
PutRequest.PutOption field for this case.
• Succeed only if the row exists. Specify types.PutIfPresent for the
PutRequest.PutOption field for this case.
• Succeed only if the row exists and its version matches a specific version. Specify
types.PutIfVersion for the PutRequest.PutOption field and a desired version for
the PutRequest.MatchVersion field for this case.
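For example, here is a minimal sketch of a conditional put that succeeds only if the row does not already exist:

// Sketch: succeed only if the row is absent
value := &types.MapValue{}
value.Put("id", 2).Put("name", "Jill")
req := &nosqldb.PutRequest{
    TableName: "users",
    Value:     value,
    PutOption: types.PutIfAbsent,
}
res, err := client.Put(req)
if err != nil {
    return
}
_ = res // on a failed condition, res indicates the put did not succeed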

The data value provided for a row (in PutRequest) or key (in GetRequest and DeleteRequest) is a *types.MapValue. The key portion of each entry in the MapValue must match a column name of the target table, and the value portion must be a valid value for that column. There are several ways to create a MapValue for the row to put into a table:
1. Create an empty MapValue and put values for each column.

value := &types.MapValue{}
value.Put("id", 1).Put("name", "Jack")
req := &nosqldb.PutRequest{
    TableName: "users",
    Value:     value,
}
res, err := client.Put(req)

2. Create a MapValue from a map[string]interface{}.

m := map[string]interface{}{
    "id":   1,
    "name": "Jack",
}
value := types.NewMapValue(m)
req := &nosqldb.PutRequest{
    TableName: "users",
    Value:     value,
}
res, err := client.Put(req)

3. Create a MapValue from JSON. This is convenient for setting values for a row in the case
of a fixed-schema table where the JSON is converted to the target schema. For example:

value, err := types.NewMapValueFromJSON(`{"id": 1, "name": "Jack"}`)
if err != nil {
    return
}
req := &nosqldb.PutRequest{
    TableName: "users",
    Value:     value,
}
res, err := client.Put(req)

JSON data can also be directly inserted into a column of type JSON. The use of the JSON
data type allows you to create table data without a fixed schema, allowing more flexible use
of the data.

Node.js
Method put is used to insert a single row into the table. It takes the table name, the row value as a plain JavaScript object, and opt as an optional third argument. This method can be used for unconditional and conditional puts to:
• Overwrite existing row with the same primary key if present. This is the default.
• Succeed only if the row with the same primary key does not exist. Specify ifAbsent in
the opt argument for this case: { ifAbsent: true }. Alternatively, you may use
putIfAbsent method.
• Succeed only if the row with the same primary key exists. Specify ifPresent in the opt
argument for this case: { ifPresent: true }. Alternatively, you may use putIfPresent
method.
• Succeed only if the row with the same primary key exists and its Version matches a
specific Version value. Set matchVersion in the opt argument for this case to the specific
version: { matchVersion: my_version }. Alternatively, you may use putIfVersion
method and specify the version value as the 3rd argument (after table name and row).


Each put method returns a Promise of PutResult, which is a plain JavaScript object containing information such as success status and the resulting row Version. Note that the property names in the provided row object should be the same as the underlying table column names.
To add rows to your table:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const client = new NoSQLClient('config.json');

async function putRowsIntoUsersTable() {
    const tableName = 'users';
    try {
        // Unconditional put, should succeed
        let result = await client.put(tableName, { id: 1, name: 'John' });

        // Will fail since the row with the same primary key exists
        result = await client.putIfAbsent(tableName, { id: 1, name: 'Jane' });
        // Expected output: putIfAbsent failed
        console.log('putIfAbsent ' +
            (result.success ? 'succeeded' : 'failed'));

        // Will succeed because the row with the same primary key exists
        result = await client.putIfPresent(tableName, { id: 1, name: 'Jane' });
        // Expected output: putIfPresent succeeded
        console.log('putIfPresent ' +
            (result.success ? 'succeeded' : 'failed'));

        let version = result.version;

        // Will succeed because the version matches the existing row
        result = await client.putIfVersion(tableName, { id: 1, name: 'Kim' },
            version);
        // Expected output: putIfVersion succeeded
        console.log('putIfVersion ' +
            (result.success ? 'succeeded' : 'failed'));

        // Will fail because the previous put has changed the row version,
        // so the old version no longer matches
        result = await client.putIfVersion(tableName, { id: 1, name: 'June' },
            version);
        // Expected output: putIfVersion failed
        console.log('putIfVersion ' +
            (result.success ? 'succeeded' : 'failed'));
    } catch(error) {
        // handle errors
    }
}


Note that success has a false value only if a conditional put operation fails because its condition is not satisfied (e.g., the row exists for putIfAbsent, the row doesn't exist for putIfPresent, or the version doesn't match for putIfVersion). If the put operation fails for any other reason, the resulting Promise will reject with an error (which you can catch in an async function). For example, this may happen if a supplied column value has the wrong type, in which case the put results in NoSQLArgumentError.

You can perform a sequence of put operations on a table that share the same shard key using the putMany method. This sequence will be executed within the scope of a single transaction, making the operation atomic. The result of this operation is a Promise of WriteMultipleResult. You can also use writeMany if the sequence includes both puts and deletes.
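A minimal sketch follows, assuming a hypothetical table userEvents whose shard key (id) is shared by all rows:

// Sketch: put two rows that share the shard key in one atomic operation
try {
    const result = await client.putMany('userEvents', [
        { id: 1, eventId: 10 },
        { id: 1, eventId: 11 }
    ]);
    // result holds the per-operation results of the atomic batch
} catch(error) {
    // handle errors
}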
Using columns of type JSON allows more flexibility in the use of data, as the data in the JSON column does not have a predefined schema. To put data into a JSON column, provide either a plain JavaScript object or a JSON string as the column value. Note that the data in a plain JavaScript object must be of supported JSON types.
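A minimal sketch follows, assuming a hypothetical table usersJson with a JSON column named info:

// Sketch: the JSON column value is supplied as a plain JavaScript object
await client.put('usersJson', {
    id: 1,
    info: { city: 'Boston', zip: '02115' }
});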

C#
Method PutAsync and related methods PutIfAbsentAsync, PutIfPresentAsync and
PutIfVersionAsync are used to insert a single row into the table or update a single row.

These methods can be used for unconditional and conditional puts:


• Use PutAsync (without conditional options) to insert a new row or overwrite existing row
with the same primary key if present. This is unconditional put.
• Use PutIfAbsentAsync to insert a new row only if the row with the same primary key
does not exist.
• Use PutIfPresentAsync to overwrite existing row only if the row with the same primary
key exists.
• Use PutIfVersionAsync to overwrite existing row only if the row with the same primary
key exists and its RowVersion matches a specific version.
Each of the Put methods above returns Task<PutResult<RecordValue>>. A PutResult instance contains information about the completed Put operation, such as its success status (conditional put operations may fail if the corresponding condition was not met) and the resulting RowVersion.
To add rows to your table:

var client = new NoSQLClient("config.json");
var tableName = "users";

try {
    // Unconditional put, should succeed.
    var result = await client.PutAsync(tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "John"
        });

    // This Put will fail because the row with the same primary
    // key already exists.
    result = await client.PutIfAbsentAsync(tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "Jane"
        });

    // Expected output: PutIfAbsentAsync failed.
    Console.WriteLine("PutIfAbsentAsync {0}.",
        result.Success ? "succeeded" : "failed");

    // This Put will succeed because the row with the same primary
    // key exists.
    result = await client.PutIfPresentAsync(tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "Jane"
        });

    // Expected output: PutIfPresentAsync succeeded.
    Console.WriteLine("PutIfPresentAsync {0}.",
        result.Success ? "succeeded" : "failed");

    var rowVersion = result.Version;

    // This Put will succeed because the version matches the existing
    // row.
    result = await client.PutIfVersionAsync(tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "Kim"
        },
        rowVersion);

    // Expected output: PutIfVersionAsync succeeded.
    Console.WriteLine("PutIfVersionAsync {0}.",
        result.Success ? "succeeded" : "failed");

    // This Put will fail because the previous Put has changed
    // the row version, so the old version no longer matches.
    result = await client.PutIfVersionAsync(tableName,
        new MapValue
        {
            ["id"] = 1,
            ["name"] = "June"
        },
        rowVersion);

    // Expected output: PutIfVersionAsync failed.
    Console.WriteLine("PutIfVersionAsync {0}.",
        result.Success ? "succeeded" : "failed");

    // Put a new row with TTL indicating expiration in 30 days.
    result = await client.PutAsync(tableName,
        new MapValue
        {
            ["id"] = 2,
            ["name"] = "Jack"
        },
        new PutOptions
        {
            TTL = TimeToLive.OfDays(30)
        });
}
catch(Exception ex) {
    // handle exceptions
}

Note that the Success property of the result only indicates whether the condition of a conditional Put operation was satisfied; it is always true for unconditional Puts. If the Put operation fails for any other reason, an exception is thrown.
You can perform a sequence of put operations on a table that share the same shard key using the PutManyAsync method. This sequence will be executed within the scope of a single transaction, making the operation atomic. You can also call WriteManyAsync to perform a sequence that includes both Put and Delete operations.
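A minimal sketch follows, assuming a hypothetical table userEvents whose shard key (id) is shared by all rows:

// Sketch: put two rows that share the shard key in one atomic operation
var manyResult = await client.PutManyAsync("userEvents",
    new[]
    {
        new MapValue { ["id"] = 1, ["eventId"] = 10 },
        new MapValue { ["id"] = 1, ["eventId"] = 11 }
    });
// manyResult holds the per-operation results of the atomic batch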
Using fields of data type JSON allows more flexibility in the use of data, as the data in a JSON field does not have a predefined schema. To put a value into a JSON field, supply a MapValue instance as its field value as part of the row value. You may also create its value from a JSON string via FieldValue.FromJsonString.
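A minimal sketch follows, assuming a hypothetical table usersJson with a JSON field named info:

// Sketch: build the JSON field value from a JSON string
var row = new MapValue
{
    ["id"] = 1,
    ["info"] = FieldValue.FromJsonString(
        "{\"city\": \"Boston\", \"zip\": \"02115\"}")
};
var jsonPutResult = await client.PutAsync("usersJson", row);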

Spring Data
Use one of these methods to add rows to the table: NosqlRepository save(entity_object), saveAll(Iterable<T> iterable), or NosqlTemplate insert(entity). For details, see SDK for Spring Data API Reference.

In this section, you use the repository.save(entity_object) method to add the rows.

Note:
First, create the AppConfig class that extends AbstractNosqlConfiguration class
to provide the connection details of the Oracle NoSQL Database. For more details,
see Obtaining a NoSQL connection.

To add rows to your table, you can include the following code in your application.

@Override
public void run(String... args) throws Exception {

    /* Create a new Users instance and load values into it. */
    Users u1 = new Users();
    u1.firstName = "John";
    u1.lastName = "Doe";

    /* Save the Users instance. */
    repo.save(u1);

    /* Create a second Users instance, load values into it, and save it. */
    Users u2 = new Users();
    u2.firstName = "Angela";
    u2.lastName = "Willard";

    repo.save(u2);
}

This creates and saves two user entities. For each entity, the Spring Data SDK creates
two columns:
1. Primary key column
2. JSON data type column
Here, the primary key is auto-generated. The @NosqlId annotation in the Users class
specifies that the id field will act as the ID and be the primary key of the underlying
storage table.
The generated=true attribute specifies that this ID will be auto-generated by a sequence. The rest of the entity fields, that is, the firstName and lastName fields, are stored in the JSON column.
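As a minimal sketch, you can then read the saved entities back through the derived query method declared on the UsersRepository interface:

/* Sketch: retrieve users by last name via the derived query method */
Iterable<Users> found = repo.findByLastName("Doe");
for (Users u : found) {
    System.out.println(u);
}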

Using Plugins
• Using IntelliJ Plugin for Development
• Using Eclipse Plugin for Development
• About Oracle NoSQL Database Visual Studio Code Extension

Using IntelliJ Plugin for Development


Browse tables and execute queries on your Oracle NoSQL Database Cloud Service
instance or simulator from IntelliJ.
The Oracle NoSQL Database Cloud Service IntelliJ plugin connects to a running
instance of Oracle NoSQL Database Cloud Service or simulator and allows you to:
• View the tables in a well-defined tree structure with Table Explorer.
• View information on columns, indexes, primary key(s), and shard key(s) for a
table.
• View column data in a well-formatted JSON Structure.
• Create tables using form-based schema entry or supply DDL statements.
• Drop tables.
• Add new columns using form-based entry or supply DDL statements.


• Drop Columns.
• Create Indexes.
• Drop Indexes.
• Execute SELECT SQL queries on a table and view query results in tabular format.
• Execute DML statements to update, insert, and delete data from a table.
This article has the following topics:

Setting Up IntelliJ Plug-in


Learn how to set up the IntelliJ plug-in for Oracle NoSQL Database Cloud Service instance or
simulator.
Perform the following steps:
1. Download and start Oracle NoSQL Database Cloud Simulator. See Downloading the
Oracle NoSQL Database Cloud Simulator .
2. Download and extract Oracle NoSQL Database Java SDK. See About Oracle NoSQL
Database SDK drivers .
3. Install the IntelliJ plugin, and restart the IDE.
You have two options to install the plugin:
• Search the Oracle NoSQL Database Connector in the JetBrains plug-in repository,
and install it, or
• Download the IntelliJ plugin from Oracle Technology Network, and install the plugin
from disk.

Tip:
Don't extract the downloaded plugin zip file. Select the plugin in the zip format
while installing it from disk.

After you successfully set up your IntelliJ plugin, create a NoSQL project, and connect it to
your Oracle NoSQL Database Cloud Service instance or simulator.

Creating a NoSQL Project in IntelliJ


Learn how to create a NoSQL project in IntelliJ.
Perform the following steps:
1. Open IntelliJ IDEA. Click File > New > Project.
2. Enter a value for Project Name and Project Location, and click Create.
3. Once your NoSQL project is created, you can browse the example java files from the
Project Explorer window.
After you successfully create a NoSQL project in IntelliJ, connect your project to your Oracle
NoSQL Database Cloud Service or simulator.


Connecting to Oracle NoSQL Database Cloud Service from IntelliJ


Learn how to connect your NoSQL project to Oracle NoSQL Database Cloud Service using the IntelliJ plugin.
Perform the following steps:
1. Open your NoSQL project in IntelliJ.

2. Click the icon in the Schema Explorer window to open the Settings dialog for
the plugin.
3. Expand Tools > Oracle NoSQL in the Settings Explorer, and click Connections.
4. Select Cloud from the drop-down menu for the connection type.
5. Enter values for the following connection parameters, and click OK.

Table 1-8 Connection Parameters

Endpoint
    Description: Regional network access point to the Oracle NoSQL Database Cloud Service. See Data Regions and Associated Service Endpoints for a list of service endpoints.
    Sample Value: https://nosql.us-ashburn-1.oci.oraclecloud.com (for the Ashburn Oracle NoSQL Database Cloud Service region identifier in the North America region)

SDK Path
    Description: Complete path to the directory where you extracted the Oracle NoSQL Database Java SDK.
    Sample Value: D:\oracle-nosql-java-sdk-5.2.11

Tenant ID and User ID
    Description: Tenancy's OCID and User's OCID for your Oracle NoSQL Database Cloud Service.
    Sample Value: See Where to Get the Tenancy's OCID and User's OCID in Oracle Cloud Infrastructure Documentation.

Fingerprint and Passphrase (Optional)
    Description: The fingerprint and passphrase of the signing key created while generating and uploading the API Signing Key.
    Sample Value: See the following resources in Oracle Cloud Infrastructure Documentation: How to Generate an API Signing Key, to generate the signing key with an optional passphrase, and How to Get the Key's Fingerprint, to get the key's fingerprint.

Private Key
    Description: The private key generated for the user. For the application user, an API signing key must be generated and uploaded.
    Sample Value: See How to Generate an API Signing Key to generate the signing key with an optional passphrase.

Compartment (Optional)
    Description: The compartment ID for your NoSQL database schema.
    Sample Value: If no value is specified, it defaults to the Root compartment.

6. The IntelliJ plugin connects your project to the Oracle NoSQL Database Cloud Service and displays its schema in the Schema Explorer window.
7. If required, you can change your service endpoint or compartment from the Schema Explorer window itself. To do this, click the icon in the Schema Explorer window. A dialog window appears where you can provide the new values for Endpoint and Compartment. Enter the values that you want to modify, and click OK.
You can provide values for:
• Both Endpoint and Compartment, or
• Endpoint alone. In this case, the Compartment defaults to the Root compartment in
that region.
After you successfully connect your project to your Oracle NoSQL Database Cloud Service,
you can manage the tables and data in your schema.

Connecting to Oracle NoSQL Database Cloud Simulator from IntelliJ


Learn how to connect your NoSQL project to Oracle NoSQL Database Cloud Simulator using
the IntelliJ plugin.
Perform the following steps:
1. Open your NoSQL project in IntelliJ.

2. Click the icon in the Schema Explorer window to open the Settings dialog for the
plugin.
3. Expand Tools > Oracle NoSQL in the Settings Explorer, and click Connections.
4. Select Cloudsim from the drop-down menu for the connection type.
5. Enter values for the following connection parameters, and click OK.

Table 1-9 Connection Parameters

Service URL
    Description: The IP address and port on which the Oracle NoSQL Database Cloud Simulator is running.
    Sample Value: The default value is http://localhost:8080.

Tenant Identifier
    Description: Unique identifier to identify the tenant.
    Sample Value: The default value is exampleId. Retain this value if you want to test the examples.

SDK Path
    Description: Complete path to the directory where you extracted the Oracle NoSQL Database Java SDK.
    Sample Value: D:\oracle-nosql-java-sdk-5.2.11

6. The IntelliJ plugin connects your project to the Oracle NoSQL Database Cloud Simulator and displays its schema in the Schema Explorer window.

Note:
Before connecting your project to the Oracle NoSQL Database Cloud Simulator, make sure the simulator is started and running. Otherwise, your connection request from IntelliJ will fail.

After you successfully connect your project to your Oracle NoSQL Database Cloud
Simulator, you can manage the tables and data in your schema.

Managing Tables Using the IntelliJ Plugin


Learn how to create tables and view table data in Oracle NoSQL Database Cloud
Service or Oracle NoSQL Database Cloud Simulator from IntelliJ.
After connecting to the Oracle NoSQL Database Cloud Simulator or Oracle NoSQL
Database Cloud Service, you can execute the examples downloaded with Oracle
NoSQL Database Java SDK to create a sample table. With the help of the IntelliJ
Plugin, you can view the tables and their data in the Schema Explorer window.
Execute an example program:
1. Open the NoSQL project connected to your Oracle NoSQL Database Cloud
Service or simulator.
2. Locate and click BasicTableExample in the Project Explorer window. You will find this in the examples folder under oracle-nosql-java-sdk. By looking at the code, you can see that this program creates a table called audienceData, puts two rows into this table, queries the inserted rows, deletes the inserted rows, and finally drops the audienceData table.
3. To pass the required arguments, click Run > Edit Configurations. Depending on
the connection type, enter the following program arguments, and click OK.

Table 1-10 Program Arguments

Cloudsim
    Program Arguments: http://localhost:8080
    More Information: If you started the Oracle NoSQL Database Cloud Simulator on a different port, you must replace 8080 with that port number.

Cloud
    Program Arguments: us-ashburn-1 -configFile D:\OCI_PROP\config
    More Information: The first argument indicates the data region of your Oracle NoSQL Database Cloud Service. The second argument passes a configuration file that contains the credentials to connect to the Oracle NoSQL Database Cloud Service.

4. To execute this program, click Run > Run 'BasicTableExample' or press Shift + F10.
5. Verify the logs in the terminal to confirm that the code executed successfully. You can see
the display messages that indicate table creation, rows insertion, and so on.

Tip:
As BasicTableExample deletes the inserted rows and drops the audienceData table, you can't view this table in the Schema Explorer. If you want to see the table in the Schema Explorer, comment out the code that deletes the inserted rows and drops the table, and rerun the program.

6. To view the tables and their data:

a. Locate the Schema Explorer, and click the icon to reload the schema.
b. Locate the audienceData table under your tenant identifier, and expand it to view its
columns, primary key, and shard key details.
c. Double-click the table name to view its data. Alternatively, you can right-click the
table and select Browse Table.
d. A record viewer window appears in the main editor. Click Execute to run the query
and display table data.
e. To view individual cell data separately, double-click the cell.

Perform DDL operations using IntelliJ


You can use IntelliJ to perform DDL operations.
Some of the DDL operations that can be performed from inside the IntelliJ plugin are
• CREATE TABLE
• DROP TABLE
• CREATE INDEX
• DROP INDEX
• ADD COLUMN
• DROP COLUMN


CREATE TABLE
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the connection name and choose Create Table.
• In the prompt, enter the details for your new table. You can create the Oracle NoSQL Database table in two modes:
  – Simple DDL Input: You can use this mode to create the table declaratively, that is, without writing a DDL statement.
  – Advanced DDL Input: You can use this mode to create the table using a DDL statement.
• You have the option to view the DDL statement before creating the table. Click Show DDL to view the DDL statement formed from the values entered in the fields in the Simple DDL Input mode. This DDL statement is executed when you click Create.
• Click Create to create the table.

DROP TABLE
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table that you want to drop. Choose Drop Table.
• A confirmation window appears; click Ok to confirm the drop action.

CREATE INDEX
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table where the index needs to be created. Choose Create Index.
• In the Create Index panel, enter the details for creating an index without writing any DDL statement. Specify the name of the index and the columns to be part of the index.
• Click Add Index.

DROP INDEX
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Click the target table to see its columns, primary keys, indexes, and shard keys.
• Locate the index to be dropped and right-click it. Click Drop Index.
• A confirmation window appears; click Ok to confirm the drop action.

ADD COLUMN
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table where the column needs to be added. Choose Add Column.
• You can add new columns in two modes:
  – Simple DDL Input: You can use this mode to add new columns without writing a DDL statement.
  – Advanced DDL Input: You can use this mode to add new columns to the table by supplying a valid DDL statement.
• In both modes, specify the name of the column and define the column with its properties: datatype, default value, and whether it is nullable.
• Click Add Column.

DROP COLUMN
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Click the target table to see its columns, primary keys, indexes, and shard keys.
• Locate the column to be dropped and right-click it. Click Drop Column.
• A confirmation window appears; click Ok to confirm the drop action.

Perform DML operations using IntelliJ

You can add data, modify existing data, and query data from tables using the IntelliJ plugin.

Insert data
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table where a row needs to be inserted. Choose Insert Row.
• In the Insert Row panel, enter the details for inserting a new row. You can insert a new row in two modes:
  – Simple Input: You can use this mode to insert the new row without writing a DML statement. A form-based row fields entry is loaded, where you can enter the value of every field in the row.
  – Advanced JSON Input: You can use this mode to insert a new row into the table by supplying a JSON object containing the column names and their corresponding values as key-value pairs.
• Click Insert Row.

Modify Data - UPDATE ROW/DELETE ROW:
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table whose data you want to modify. Choose Browse Table.
• In the textbox on the left, enter the SQL statement to fetch data from your table. Click Execute to run the query.
• To view individual cell data separately, click the table cell.
• To perform DML operations like Update Row and Delete Row, right-click the particular row. Pick your option from the context menu that appears.
  – Delete Row: A confirmation window appears; click Ok to delete the row.
  – Update Row: A separate HTML panel opens below the listed rows, containing the column names and their corresponding values both in a form-based entry and as a JSON key-pair object. You can choose either of the two methods and supply new values.


Note:
In any row, PRIMARY KEY and GENERATED ALWAYS AS IDENTITY columns cannot be updated.

Query tables
• Locate the Schema Explorer, and click the Refresh icon to reload the schema.
• Right-click the table and choose Browse Table.
• In the textbox on the left, enter the SELECT statement to fetch data from your table.
• Click Execute to run the query. The corresponding data is retrieved from the table.
• Right-click any row and click View JSON to view the entire row object in JSON format.
• Click Show Query Plan to view the execution plan of the query.

Using Eclipse Plugin for Development


Build and run your Oracle NoSQL Database Cloud Service application quickly from the
Eclipse IDE.
To enhance your experience of building an Oracle NoSQL Database Cloud Service
application, a plugin is available in Eclipse. This plugin connects to a running instance
of Oracle NoSQL Database Cloud Service or Cloud Simulator and allows you to:
• Quickly get started with Oracle NoSQL Database Cloud Service by using the
examples available with the plugin.
• Explore development or test data from tables in your Oracle NoSQL Database
Cloud Service or Cloud Simulator.
• Build and test your queries.
• Retrieve columns, indexes, primary keys, and shard keys for each table.
• Build and test your SQL queries on a table and obtain results in a tabular format.
This article has the following topics:

About Eclipse Plugin


To use the Eclipse plugin:
1. Download the Eclipse plugin from Oracle Technology Network.
2. Follow the instructions given in the README file and install the plugin.
3. After installing the Eclipse plugin, you can connect to your Oracle NoSQL Database
Cloud Service or Oracle NoSQL Database Cloud Simulator and execute the code to
read/write the tables. For more details, you can access the help content embedded with
Eclipse. To access the help content:
a. Click Help Contents from the Help menu.
b. Locate and expand the Oracle NoSQL Plugin Help Contents section.
c. This lists all the help topics available for Oracle NoSQL Plugin.
d. Refer to the help topic as per your requirement.

About Oracle NoSQL Database Visual Studio Code Extension


The Oracle NoSQL Database Cloud Service provides an extension for Microsoft Visual Studio Code that lets you connect to a running instance of Oracle NoSQL Database Cloud Service.
You can use Oracle NoSQL Database Visual Studio (VS) Code extension to:
• View the tables in a well-defined tree structure with Table Explorer.
• View information on columns, indexes, primary key(s), and shard key(s) for a table.
• View column data in a well-formatted JSON Structure.
• Create tables using form-based schema entry or supply DDL statements.
• Drop tables.
• Add new columns using form-based entry or supply DDL statements.
• Drop Columns.
• Create Indexes.
• Drop Indexes.
• Execute SELECT SQL queries on a table and view query results in tabular format.
• Execute DML statements to update, insert, and delete data from a table.
• Download the Query Result after running the SELECT query into a JSON file.
• Download each row of the result obtained after running the SELECT query into a JSON
file.
This article has the following topics:


Installing Oracle NoSQL Database Visual Studio Code Extension


You can install the Oracle NoSQL Database VS Code extension in two ways: install from the Visual Studio Marketplace for online installation, or install from the VSIX package using a *.vsix file for offline installation.

Before you can install the Oracle NoSQL Database Visual Studio (VS) Code extension, you must install Visual Studio Code. You can download Visual Studio Code from here.

• Install from Visual Studio Marketplace

• Install from a VSIX

Install from Visual Studio Marketplace


1. In Visual Studio Code, click the Extensions icon in the left navigation.

Alternatively, you can open the Extensions view by pressing:


• (Windows and Linux) Control + Shift + X
• (macOS) Command + Shift + X.
2. Search for Oracle NoSQL Database Connector in the extension marketplace.
3. Click Install on the Oracle NoSQL Database Connector extension.

Install from a VSIX


1. Download the VSIX file for Oracle NoSQL Database from Oracle NoSQL
Database Downloads site.
2. In Visual Studio Code, click the Extensions icon in the left navigation.


Alternatively, you can open the Extensions view by pressing:


• (Windows and Linux) Control + Shift + X
• (macOS) Command + Shift + X.
3. In the Extensions view, click the More Actions (...) menu, and then click Install from VSIX....

4. Browse to the location where the *.vsix file is stored and click Install.

Connecting to Oracle NoSQL Database Cloud Service from Visual Studio Code
Oracle NoSQL Database Visual Studio (VS) Code extension provides two methods to
connect to Oracle NoSQL Database Cloud Service or Oracle NoSQL Database Cloud
Simulator.
You can either provide a config file with the connection information or fill in the connection
information in the specific fields. If you are using a Node.js driver and already have
connection details saved in a file, use the Connect via Config File option to connect to the
Oracle NoSQL Database Cloud Service. Otherwise, if you are creating a new connection, use
the Fill in Individual Fields option.


• Fill in Individual Fields

• Connect via Config File

Fill in Individual Fields


1. In Visual Studio Code, click the Oracle NoSQL DB view in the Activity Bar.

2. Open the Oracle NoSQL DB Show Connection Settings page from the
Command Palette or the Oracle NoSQL DB view in the Activity Bar.
• Open from Command Palette
a. Open the Command Palette by pressing:
– (Windows and Linux) Control + Shift + P
– (macOS) Command + Shift + P
b. From the Command Palette, select OracleNoSQL: Show Connections
Settings.

Tip:
Enter oraclenosql in the Command Palette to display all of
the Oracle NoSQL DB commands you can use.


• Open from Oracle NoSQL DB View


a. Expand the TABLE EXPLORER pane in the left navigation if it's collapsed.
b. Click Add Connection to open the Oracle NoSQL DB Show Connection
Settings page.

3. In the Show Connection Settings page, click Cloud or CloudSim to connect to Oracle
NoSQL Database Cloud Service or Oracle NoSQL Database Cloud Simulator.


4. Enter the connection information.

Table 1-11 Cloud Connection Parameters

Region:
    Description: Select the Region identifier of the Oracle NoSQL Database Cloud Service endpoint.
    Sample Value: us-ashburn-1

Configuration File:
    Description: Browse to the location where the OCI configuration file is stored.
    Sample Value: /home/user/security/config/oci.config

Profile:
    Description: Name of the configuration profile to be used to connect to the Oracle NoSQL Database Cloud Service. If you do not specify this value, the field defaults to the DEFAULT profile.
    Sample Value: ADMIN_USER

Compartment:
    Description: The name or OCID of the compartment for your Oracle NoSQL Database Cloud Service schema. If you do not provide any value, the field defaults to the root compartment. You create compartments in Oracle Cloud Infrastructure Identity and Access Management (IAM). See Setting Up Your Tenancy and Managing Compartments in Oracle Cloud Infrastructure Documentation.
    Sample Values: a compartment name (mycompartment); a compartment name qualified with its parent compartment (parent.childcompartment); a compartment OCID (ocid1.tenancy.oc1...<unique_ID>)

Tenant OCID:
    Description: Tenancy's OCID for your Oracle NoSQL Database Cloud Service. See Where to Get the Tenancy's OCID and User's OCID in Oracle Cloud Infrastructure Documentation.
    Sample Value: ocid1.tenancy.oc1..<unique_ID>

User OCID:
    Description: User's OCID for your Oracle NoSQL Database Cloud Service. See Where to Get the Tenancy's OCID and User's OCID in Oracle Cloud Infrastructure Documentation.
    Sample Value: ocid1.user.oc1..<unique_ID>

Fingerprint:
    Description: Fingerprint for the private key that was added to this user. The fingerprint of the signing key is created while generating and uploading the API Signing Key. See How to Get the Key's Fingerprint in Oracle Cloud Infrastructure Documentation.
    Sample Value: 12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef

Private Key File:
    Description: Browse to the location where the private key is stored. See How to Generate an API Signing Key to generate the signing key with an optional passphrase.
    Sample Value: /home/user/.oci/oci_api_key.pem

Passphrase:
    Description: Passphrase you specified when creating the private key. The passphrase of the signing key is created while generating and uploading the API Signing Key. Required only if the key is encrypted.

Table 1-12 CloudSim Connection Parameters

Endpoint:
    Description: Service Endpoint URL of the Oracle NoSQL Database Cloud Simulator instance. If you do not specify the value, it defaults to http://localhost:8080.
    Sample Value: http://myinstance.cloudsim.com:8080

Tenant Identifier:
    Description: Unique identifier to identify the tenant. If you do not specify the value, it defaults to TestTenant.
    Sample Value: Tenant01

5. Click Connect.
6. Click Reset to clear the saved connection details from the workspace.


Connect via Config File


1. Create the config file, for example, config.json, containing the JSON object. The config file formats for connecting to Oracle NoSQL Database Cloud Service or Oracle NoSQL Database Cloud Simulator are shown below.


Table 1-13 Configuration Templates

Configuration template to connect to Oracle NoSQL Database Cloud Service using the OCI configuration file:

{
    "region": "<region-id-of-nosql-cloud-service-endpoint>",
    "compartment": "<oci-compartment-name-or-id>",
    "auth":
    {
        "iam":
        {
            "configFile": "<path-to-OCI-config-file>",
            "profileName": "<oci-credentials-profile-name>"
        }
    }
}

Configuration template to connect to Oracle NoSQL Database Cloud Service using IAM authentication credentials:

{
    "region": "<region-id-of-nosql-cloud-service-endpoint>",
    "compartment": "<oci-compartment-name-or-id>",
    "auth":
    {
        "iam":
        {
            "tenantId": "<tenancy-ocid>",
            "userId": "<user-ocid>",
            "fingerprint": "<fingerprint-for-the-signing-key>",
            "privateKeyFile": "<path-to-the-private-key>",
            "passphrase": "<passphrase-of-the-signing-key>"
        }
    }
}

Configuration template to connect to Oracle NoSQL Database Cloud Simulator:

{
    "endpoint": "http://myinstance.cloudsim.com:8080",
    "auth" : "Bearer<tenant-id>"
}

2. Open the Command Palette by pressing:
   • (Windows and Linux) Control + Shift + P
   • (macOS) Command + Shift + P


3. From the Command Palette, select Oracle NoSQL: Connect via Config File.

Tip:
Enter oraclenosql in the Command Palette to display all of the Oracle
NoSQL DB commands you can use.

4. Browse to the location where the config file is stored and click Select.

Managing Tables Using Visual Studio Code Extension


Once you connect to your deployment using the Oracle NoSQL Database Visual Studio (VS) Code extension, use the TABLE EXPLORER in the left navigation to:
• Explore your tables, columns, indexes, primary keys, and shard keys.
• Create new tables.
• Drop existing tables.
• Create Indexes.
• Drop Indexes.
• Add columns.
• Drop Columns.
• Insert data into table.
• Execute SELECT SQL queries.

Explore tables, columns, indexes and keys


When you expand an active connection, the Oracle NoSQL Database VS Code extension shows the tables in that deployment.
• Click the table name to view its columns, indexes, primary key(s), and shard key(s). The
column name displays along with its data type.


• You can refresh the schema or table at any time to re-query your deployment and display the most up-to-date data.
– In the TABLE EXPLORER, locate the connection and click the Refresh icon to
reload the schema. Alternatively, you can right-click the connection and select
Refresh Schema.

– In the TABLE EXPLORER, locate the table name and click the Refresh icon to
reload the table. Alternatively, you can right-click the table name and select
Refresh Table.

Perform DDL operations using Visual Studio Code


You can use Visual Studio Code to perform DDL operations.
Some of the DDL operations that can be performed from inside the Visual Studio Code
plugin are:
• CREATE TABLE
• DROP TABLE
• CREATE INDEX


• DROP INDEX
• ADD COLUMN
• DROP COLUMN

CREATE TABLE
You can create the Oracle NoSQL Database table in two modes:
• Simple DDL Input: You can use this mode to create the Oracle NoSQL Database table
declaratively, that is, without writing a DDL statement.
• Advanced DDL Input: You can use this mode to create the Oracle NoSQL Database
table using a DDL statement.
1. Hover over the Oracle NoSQL Database connection to add the new table.
2. Click the Plus icon that appears.
3. In the Create Table page, select Simple DDL Input.

Table 1-14 Create an Oracle NoSQL Database Table

Table Name: Specify a unique table name.
Column Name: Specify a column name for the primary key in your table.
Column Type: Select the data type for your primary key column.
Set as Shard Key: Select this option to set this primary key column as a shard key. The shard key distributes data across the Oracle NoSQL Database cluster for increased efficiency, and positions records that share the shard key locally for easy reference and access. Records that share the shard key are stored in the same physical location and can be accessed atomically and efficiently.
Remove: Click this button to delete an existing column.
+ Add Primary Key Column: Click this button to add more columns while creating a composite (multi-column) primary key.
Column Name: Specify the column name.
Column Type: Select the data type for your column.
Default Value (optional): Specify a default value for the column. Note: Default values cannot be specified for binary and JSON data type columns.
Not Null: Select this option to specify that a column must always have a value.
Remove: Click this button to delete an existing column.
+ Add Column: Click this button to add more columns.
Unit: Select the unit (Days or Hours) to use for the TTL value for the rows in the table.
Value: Specify the expiration duration for the rows in the table. After the number of days or hours, the rows expire automatically and are no longer available. The default value is zero, indicating no expiration time. Note: Updating Table Time to Live (TTL) does not change the TTL value of any existing data in the table. The new TTL value applies only to rows added to the table after this value is modified, and to rows for which no overriding row-specific value has been supplied.


4. Click Show DDL to view the DDL statement formed based on the values entered in the
fields in the Simple DDL input mode. This DDL statement gets executed when you click
Create.
5. Click Create.
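For instance (the table name, column names, and TTL value here are hypothetical), a Simple DDL Input with a primary key column id marked as a shard key, a name column, and a TTL of 5 Days would produce a statement from Show DDL along these lines:

CREATE TABLE IF NOT EXISTS myTable (
    id INTEGER,
    name STRING,
    PRIMARY KEY(SHARD(id))
) USING TTL 5 DAYS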

DROP TABLE
1. Right-click the target table.
2. Click Drop Table.
3. Click Yes to drop the table.

CREATE INDEX
• Locate the Table Explorer, and click the Refresh Schema to reload the schema.
• Right-click the table where the index needs to be created and choose Create Index.
• Specify the name of the index and the columns to be part of the index.
• Click Add Index.
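An index created this way corresponds to an ordinary CREATE INDEX statement; a minimal sketch, assuming a hypothetical myTable with a name column:

CREATE INDEX IF NOT EXISTS nameIdx ON myTable(name)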

DROP INDEX
• Locate the Table Explorer, and click the Refresh Schema to reload the schema.
• Click the table from which the index needs to be removed. The list of indexes is displayed below the column names.
• Right-click the index to be dropped and click Drop Index.
• A confirmation window appears; click Ok to confirm the drop action.

ADD COLUMN
• Locate the Table Explorer, and click the Refresh Schema to reload the schema.
• Right-click the table where the column needs to be added and click Add Columns.
• Specify the name of the column and define the column with its properties: data type, default value, and whether it is nullable.
• Click Add New Columns.

DROP COLUMN
• Locate the Table Explorer, and click the Refresh Schema to reload the schema.
• Expand the table from which the column needs to be removed.
• Right-click the column to be removed and choose Drop Column.
• A confirmation window appears; click Ok to confirm the drop action.
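Behind these column operations are ordinary ALTER TABLE statements; a minimal sketch, assuming a hypothetical myTable and a city column:

ALTER TABLE myTable (ADD city STRING)
ALTER TABLE myTable (DROP city)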

Perform DML operations using Visual Studio Code


You can add data, modify existing data, and query data from tables using the Visual Studio Code plugin.

Insert Data
• Locate the Table Explorer, and click the Refresh Schema to reload the schema.


• Right-click the table where a row needs to be inserted and choose Insert Row.
• In the Insert Row panel, enter the details for inserting a new row. You can insert a new row in two modes:
– Simple Input: You can use this mode to insert the new row without writing a DML statement. A form-based entry of row fields is loaded, where you can enter the value of every field in the row.
– Advanced JSON Input: You can use this mode to insert a new row into the table by supplying a JSON object containing the column names and their corresponding values as key-value pairs (a sample is shown after this list).
• Click Insert Row.
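For instance, for a hypothetical table with columns id, name, and age, the Advanced JSON Input for one row might look like this:

{
    "id": 1,
    "name": "Ava",
    "age": 34
}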

Modify Data - UPDATE ROW/DELETE ROW:


• Locate the Table Explorer, and click the Refresh Schema to reload the schema.
• Click on the table where data needs to be modified.
• In the textbox on the right under SQL>, enter the SQL statement to fetch data from
your table. Click > to run the query.
• To view individual cell data separately, click the table cell.
• To perform DML operations like Update and Delete Row, right-click on the
particular row. Pick your option from the context-menu that appears.
– Delete Row: A confirmation window appears; click Ok to delete the row.
– Update Row: A separate HTML panel opens below the listed rows, containing the column names and their corresponding values in a form-based entry, or you can provide the input as a JSON key-value object. You can choose either of the two methods and supply new values.

Note:
In any row, PRIMARY KEY and GENERATED ALWAYS AS
IDENTITY columns cannot be updated.

Executing SQL Queries for a Table


• Locate the Table Explorer, and click the Refresh Schema to reload the schema.
• Right click on the table and choose Browse Table.
• In the textbox on the right under SQL>, enter the SELECT statement to fetch data from your table (a sample statement is shown after this list).
• Click > to run the query. The corresponding data is retrieved from the table.
• Right-click any row and click Download row into JSON file. The single row gets downloaded into a JSON file.
• Click Download Query Result to save the complete result of the SELECT
statement as a JSON file.
• Click Fetch All Records to retrieve all data from the table.
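For example, a simple query against a hypothetical myTable might be:

SELECT * FROM myTable WHERE id = 1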


Removing a Connection
Oracle NoSQL Database Connector provides two methods to remove a connection from
Visual Studio (VS) Code.
You can:
• Remove a connection with the Command Palette, or
• Remove a connection from the Oracle NoSQL DB view in the Activity Bar.

Note:
Removing a connection from Visual Studio Code deletes the persisted connection
details from the current workspace.

• Remove Connection from Oracle NoSQL DB View

• Remove Connection with Command Palette

Remove Connection from Oracle NoSQL DB View


1. Expand the TABLE EXPLORER pane in the left navigation if it's collapsed.
2. Right-click the connection you want to remove, then click Remove Connection.

Remove Connection with Command Palette


1. Open the Command Palette by pressing:
• (Windows and Linux) Control + Shift + P
• (macOS) Command + Shift + P
2. From the Command Palette, select OracleNoSQL: Remove Connection.


Tip:
Enter oraclenosql in the Command Palette to display all of the Oracle
NoSQL DB commands you can use.

Designing a Table in Oracle NoSQL Database Cloud Service


Learn how to design and configure tables in Oracle NoSQL Database Cloud Service.
This article has the following topics:

Table Fields
Learn how to design and configure data using table fields.
An application may choose to use schemaless tables, where a row consists of key
fields and a single JSON data field. A schemaless table offers flexibility in what can be
stored in a row.
Alternatively, the application can choose to use fixed schema tables, where all of the
table fields are defined as specific types.
Fixed schema tables with typed data are safer to use from an enforcement and
storage efficiency standpoint. Even though the schema of fixed schema tables can be
modified, their table structure cannot easily be changed. A schemaless table is flexible
and the table structure can be easily modified.
Finally, an application can also use a hybrid data model approach where a table can
have typed data and JSON data fields.
The following examples demonstrate how to design and configure data for all three
approaches.


Example 1: Designing a Schemaless Table


You have multiple options to store information about browsing patterns in your table. One
option is to define a table that uses a cookie ID as a key and keeps audience segmentation
data as a single JSON field.

// schema less, data is stored in a JSON field


CREATE TABLE audience_info (
cookie_id LONG,
audience_data JSON,
PRIMARY KEY(cookie_id))

In this case, the audience_info table can hold a JSON object such as:

{
"cookie_id": "",
"audience_data": {
"ipaddr" : "10.0.00.xxx",
"audience_segment: {
"sports_lover" : "2018-11-30",
"book_reader" : "2018-12-01"
}
}
}

Your application will have a key field and a data field for this table. You have flexibility in what you choose to store as information in your audience_data field. Therefore, you can easily change the types of information available.
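Because audience_data is a JSON field, nested values can still be queried with SQL path expressions; a sketch (the cookie value is hypothetical):

SELECT a.audience_data.audience_segment.sports_lover
FROM audience_info a
WHERE a.cookie_id = 1234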

Example 2: Designing a Fixed Schema Table


You can store information about browsing patterns by creating your table with more explicitly
declared fields:

// fixed schema, data is stored in typed fields.


CREATE TABLE audience_info(
cookie_id LONG,
ipaddr STRING,
audience_segment RECORD(sports_lover TIMESTAMP(9),
book_reader TIMESTAMP(9)),
PRIMARY KEY(cookie_id))

In this example, your table has a key field and two data fields. Your data is more compact,
and you are able to ensure that all data fields are accurate.

Example 3: Designing a Hybrid Table


You can store information about browsing patterns by using both typed and JSON data fields
in your table.

// mixed, data is stored in both typed and JSON fields.


CREATE TABLE audience_info (
cookie_id LONG,


ipaddr STRING,
audience_segment JSON,
PRIMARY KEY(cookie_id))

Primary Keys and Shard Keys


Learn the purpose of primary keys and shard keys while designing your application.
Primary keys and shard keys are important elements in your schema and help you
access and distribute data efficiently. You create primary keys and shard keys only
when you create a table. They remain in place for the life of the table, and cannot be
changed or dropped.

Primary Keys
You must designate one or more primary key columns when you create your table. A
primary key uniquely identifies every row in the table. For simple CRUD operations,
Oracle NoSQL Database Cloud Service uses the primary key to retrieve a specific row
to read or modify. For example, consider a table that has the following fields:
• productName
• productType
• productLine
From experience, you know that the product name is important as well as unique to
each row, so you set the productName as the primary key. Then, you retrieve rows of
interest based on the productName. In such a case, use a statement like this to define the table.

/* Create a new table called myProducts. */

CREATE TABLE if not exists myProducts
(
    productName STRING,
    productType STRING,
    productLine INTEGER,
    PRIMARY KEY (productName)
)

Shard Keys
The main purpose of shard keys is to distribute data across the Oracle NoSQL
Database Cloud Service cluster for increased efficiency, and to position records that
share the shard key locally for easy reference and access. Records that share the
shard key are stored in the same physical location and can be accessed atomically
and efficiently.
Your primary key and shard key design has implications for scaling and achieving
provisioned throughput. For example, when records share shard keys, you can delete
multiple table rows in an atomic operation, or retrieve a subset of rows in your table in
a single atomic operation. In addition to enabling scalability, well-designed shard keys
can improve performance by requiring fewer cycles to put data to, or get data from, a
single shard.


For example, suppose that you designate three primary key fields:

PRIMARY KEY (productName, productType, productLine)

Because you know that your application frequently makes queries using the productName and
productType columns, specifying those fields as shard keys is appropriate. The shard key
designation guarantees that all rows for these two columns are stored on the same shard. If
these two fields are not shard keys, the most frequently queried columns could be stored on
any shard. Then, locating all rows for both fields requires scanning all data storage, rather
than one shard.
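In DDL, this choice is declared by wrapping the shard columns in SHARD() inside the primary key; a sketch reusing the myProducts columns from the earlier example:

CREATE TABLE if not exists myProducts
(
    productName STRING,
    productType STRING,
    productLine INTEGER,
    PRIMARY KEY (SHARD(productName, productType), productLine)
)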
Shard keys designate storage on the same shard to facilitate efficient queries for key values.
However, because you want your data to be distributed across the shards for best performance,
you must avoid shard keys that have few unique values.

Note:
If you do not designate shard keys when creating a table, Oracle NoSQL Database
Cloud Service uses the primary keys for shard organization.

Important factors to consider when choosing a shard key


• Cardinality: Low cardinality fields, such as a user home country, group records together
on a few shards. In turn, those shards require frequent data rebalancing, increasing the
likelihood of hot shard issues. Instead, each shard key should have high cardinality,
where the shard key can express an even slice of records in the data set. For example,
identity numbers such as customerID, userID, or productID are good candidates for a
shard key.
• Atomicity: Only objects that share the shard key can participate in a transaction. If you
require ACID transactions that span multiple records, choose a shard key that lets you
meet that requirement.

What are the best practices to follow?


• Uniform distribution of shard keys: When shard keys are uniformly distributed, no
single shard limits the capacity of the system.
• Query Isolation: Queries should be targeted to a specific shard to maximize efficiency
and performance. If queries are not isolated to a single shard, the query is applied to all
shards which is less efficient and increases query latency.
See Creating Tables and Indexes to learn how to assign primary and shard keys using the
TableRequest object.

Time to Live
Learn how to specify expiration times for tables and rows using the Time-to-Live (TTL)
feature.
Many applications handle data that has a limited useful lifetime. Time-to-Live (TTL) is a
mechanism that allows you to set a time frame on table rows, after which the rows expire
automatically, and are no longer available. It is the amount of time data is allowed to remain
in the Oracle NoSQL Database Cloud Service. Data that reaches expiration time can no
longer be retrieved, and does not appear in any storage statistics.


By default, every table that you create has a TTL value of zero, indicating no expiration
time. You can declare a TTL value when you create a table, specifying the TTL with a
number, followed by either HOURS or DAYS. Table rows inherit the TTL value of the table
in which they reside, unless you explicitly set a TTL value for table rows. Setting a
row's TTL value overrides the table's TTL value. If you change the table's TTL value
after the row has a TTL value, the row's TTL value persists.
You can update the TTL value for a table row at any time before the row reaches the
expiration time. Expired data can no longer be accessed. Therefore, using TTL values
is more efficient than manually deleting rows, because the overhead of writing a
database log entry for the data deletion is avoided. Expired data is purged from the
disk after the expiration date.
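For example (a minimal sketch with a hypothetical sessionData table), a table-level TTL is declared with the USING TTL clause and can later be changed with ALTER TABLE; the new value affects only rows written afterward:

CREATE TABLE IF NOT EXISTS sessionData (
    sessionId STRING,
    data JSON,
    PRIMARY KEY(sessionId)
) USING TTL 12 HOURS

ALTER TABLE sessionData USING TTL 5 DAYS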

Table States and Life Cycles


Learn about the different table states and their significance (table life cycle process).
Each table passes through a series of different states from table creation to deletion
(drop). For example, a table in the DROPPING state cannot proceed to the ACTIVE state,
while a table in the ACTIVE state can change to the UPDATING state. You can track the
different table states by monitoring the table life cycle. This section describes the
various table states.

Table State Description


CREATING The table is in the process of being created. It is not ready to use.
UPDATING A table update is in progress. Further table modifications are not possible while the table is in this state.
A table is in the UPDATING state when:
• The table limits are being changed
• The table schema is evolving
• Adding or dropping a table index
ACTIVE The table can be used in the current state. The table may have been recently
created, or modified, but the table state is now stable.
DROPPING The table is being dropped and cannot be accessed for any purpose.

DROPPED The table has been dropped and no longer exists for read, write, or query
activities.

Note:
Once dropped, a table with the same name can
be created again.

Developing in Oracle NoSQL Database Cloud Simulator


Get familiar with the Cloud APIs by using the Oracle NoSQL Database Cloud Simulator.
The Oracle NoSQL Database Cloud Simulator simulates the cloud service and lets you write
and test applications locally without accessing Oracle NoSQL Database Cloud Service. The
Oracle NoSQL Database Java SDK contains a few examples for the developer to get started
with.
You can start developing your application in the Oracle NoSQL Database Cloud Simulator,
using and understanding the basic examples, before you get started with Oracle NoSQL
Database Cloud Service.
Extract Oracle NoSQL Database Java SDK and the Oracle NoSQL Database Cloud
Simulator bundles. Create your application using the Cloud APIs. After building, debugging,
and testing your application with the Oracle NoSQL Database Cloud Simulator, move your
application to Oracle NoSQL Database Cloud Service.

Topics
1. Downloading the Oracle NoSQL Database Cloud Simulator
2. Oracle NoSQL Database Cloud Simulator Compared With Oracle NoSQL Database
Cloud Service

Downloading the Oracle NoSQL Database Cloud Simulator


Learn how to download and extract the Oracle NoSQL Database Cloud Simulator.
The Oracle NoSQL Database Cloud Simulator is available for download from the Oracle
Cloud website. To install the Oracle NoSQL Database Cloud Simulator, first download and
extract the file.

Note:
Your local system should meet the following requirements to run the Oracle NoSQL
Database Cloud Simulator:
• Java JDK version 10 or higher installed on your machine.
• A minimum of 5-GB available disk space where you plan to install the Oracle
NoSQL Database Cloud Simulator.


Perform the following steps:


1. Open the Oracle Cloud Downloads page in a browser and click Oracle NoSQL
Database Cloud.
2. Click Download Oracle NoSQL Cloud Simulator.
3. Select the Oracle NoSQL Database Cloud Simulator Zip or Tar file, accept the
license agreement and click Download.
4. Gunzip and untar the .tar.gz package (or extract the files if you have downloaded
a .zip package).

tar xvfz oracle-nosql-cloud-simulator-<version_number>.tar.gz

The output displays all directories and files that are part of the package. All the
Oracle NoSQL Database Cloud Simulator related .jar files are placed in the
cloudsim/lib directory.
After extracting the package, read the oracle-nosql-cloud-simulator-
<version_number>/README.txt file for instructions on how to start and stop the
simulator.
In order to use the Oracle NoSQL Database Cloud Simulator you must download one
of the supported Oracle NoSQL language SDKs. The SDKs have instructions and
example code to connect to either the Oracle NoSQL Database Cloud Simulator or the
Oracle NoSQL Database Cloud Service.

Oracle NoSQL Database Cloud Simulator Compared With Oracle NoSQL Database Cloud Service

Learn the difference between the Oracle NoSQL Database Cloud Simulator and Oracle NoSQL Database Cloud Service. These differences help determine important design considerations that you should make before using your application in a production environment.
Oracle NoSQL Database Cloud Simulator is a local version of the Oracle NoSQL
Database Cloud Service. The server instance that you create in Oracle NoSQL
Database Cloud Simulator supports relatively limited aggregate throughput when
compared to the Oracle NoSQL Database Cloud Service. Also, the performance of
NoSQL operations on the Oracle NoSQL Database Cloud Simulator is based on the
speed and capability of the machine on which it is deployed.
By comparison, Oracle NoSQL Database Cloud Service is suitable for production use
because of features such as scalability, availability, and durability.
Oracle NoSQL Database Cloud Simulator has the following limitations when compared
to Oracle NoSQL Database Cloud Service:
• The Oracle NoSQL Database Cloud Simulator can be used for development and testing purposes only. Do not use the Oracle NoSQL Database Cloud Simulator for performance measurements or in a production environment.
• At least 5 GB of disk drive space must be available to run the Oracle NoSQL
Database Cloud Simulator.
• A single instance of the Oracle NoSQL Database Cloud Simulator should be
started in a root directory (directory where the Oracle NoSQL Database Cloud


Simulator data is located). Oracle NoSQL Database Cloud Simulator assumes exclusive
control over the data storage directory.
• The Oracle NoSQL Database Cloud Simulator does not support or require security-
relevant configurations.
• No hard limit is enforced on the number of tables, size of tables, number of indexes, or
maximum throughput specified for tables (except for the amount of storage on the local
disk drive).
• Data Definition Language (DDL) operations, such as creating or dropping a table, and
creating or dropping an index, are not throttled.
• Operational history is not maintained.

Using Oracle NoSQL Database Migrator


Learn about Oracle NoSQL Database Migrator and how to use it for data migration.
Oracle NoSQL Database Migrator is a tool that enables you to migrate Oracle NoSQL tables
from one data source to another. This tool can operate on tables in Oracle NoSQL Database
Cloud Service, Oracle NoSQL Database on-premises and AWS S3. The Migrator tool
supports several different data formats and physical media types. Supported data formats are
JSON, Parquet, MongoDB-formatted JSON, DynamoDB-formatted JSON, and CSV files.
Supported physical media types are files, OCI Object Storage, Oracle NoSQL Database on-
premises, Oracle NoSQL Database Cloud Service and AWS S3.
This article has the following topics:

Overview
Oracle NoSQL Database Migrator lets you move Oracle NoSQL tables from one data source
to another, such as Oracle NoSQL Database on-premises or cloud or even a simple JSON
file.
There can be many situations that require you to migrate NoSQL tables from or to an Oracle
NoSQL Database. For instance, a team of developers enhancing a NoSQL Database
application may want to test their updated code in the local Oracle NoSQL Database Cloud
Service (NDCS) instance using cloudsim. To verify all the possible test cases, they must set
up the test data similar to the actual data. To do this, they must copy the NoSQL tables from
the production environment to their local NDCS instance, the cloudsim environment. In
another situation, NoSQL developers may need to move their application data from on-
premise to the cloud and vice-versa, either for development or testing.
In all such cases and many more, you can use Oracle NoSQL Database Migrator to move
your NoSQL tables from one data source to another, such as Oracle NoSQL Database on-
premise or cloud or even a simple JSON file. You can also copy NoSQL tables from a
MongoDB-formatted JSON input file, DynamoDB-formatted JSON input file (either stored in
AWS S3 source or from files), or a CSV file into your NoSQL Database on-premises or cloud.
As depicted in the following figure, the NoSQL Database Migrator utility acts as a connector
or pipe between the data source and the target (referred to as the sink). In essence, this
utility exports data from the selected source and imports that data into the sink. This tool is
table-oriented, that is, you can move the data only at the table level. A single migration task
operates on a single table and supports migration of table data from source to sink in various
data formats.


Oracle NoSQL Database Migrator is designed such that it can support additional
sources and sinks in the future. For a list of sources and sinks supported by Oracle
NoSQL Database Migrator as of the current release, see Supported Sources and
Sinks.

Figure: NoSQL table data flows from the Source through the Migration Pipe, where Transformations can be applied, into the Sink.

Terminology used with Oracle NoSQL Database Migrator


Learn about the different terms used in the above diagram, in detail.
• Source: An entity from where the NoSQL tables are exported for migration. Some
examples of sources are Oracle NoSQL Database on-premise or cloud, JSON file,
MongoDB-formatted JSON file, DynamoDB-formatted JSON file, and CSV files.
• Sink: An entity that imports the NoSQL tables from NoSQL Database Migrator.
Some examples for sinks are Oracle NoSQL Database on-premise or cloud and
JSON file.
The NoSQL Database Migrator tool supports different types of sources and sinks (that
is physical media or repositories of data) and data formats (that is how the data is
represented in the source or sink). Supported data formats are JSON, Parquet,
MongoDB-formatted JSON, DynamoDB-formatted JSON, and CSV files. Supported
source and sink types are files, OCI Object Storage, Oracle NoSQL Database on-
premise, and Oracle NoSQL Database Cloud Service.
• Migration Pipe: The data from a source will be transferred to the sink by NoSQL
Database Migrator. This can be visualized as a Migration Pipe.
• Transformations: You can add rules to modify the NoSQL table data in the
migration pipe. These rules are called Transformations. Oracle NoSQL Database
Migrator allows data transformations at the top-level fields or columns only. It does
not let you transform the data in the nested fields. Some examples of permitted
transformations are:
– Drop or ignore one or more columns,
– Rename one or more columns, or
– Aggregate several columns into a single field, typically a JSON field.
• Configuration File: A configuration file is where you define all the parameters required for the migration activity in a JSON format. Later, you pass this configuration file as a single parameter to the runMigrator command from the CLI. A typical configuration file looks as shown below.

{
    "source": {
        "type": <source type>,
        //source-configuration for type. See Source Configuration Templates.
    },
    "sink": {
        "type": <sink type>,
        //sink-configuration for type. See Sink Configuration Templates.
    },
    "transforms": {
        //transforms configuration. See Transformation Configuration Templates.
    },
    "migratorVersion": "<migrator version>",
    "abortOnError": <true|false>
}

Group: source
Parameter: type
Mandatory (Y/N): Y
Purpose: Represents the source from which to migrate the data. The source provides data and metadata (if any) for migration.
Supported Values: To know the type value for each source, see Supported Sources and Sinks.

Group: source
Parameter: source-configuration for type
Mandatory (Y/N): Y
Purpose: Defines the configuration for the source. These configuration parameters are specific to the type of source selected above.
Supported Values: See Source Configuration Templates for the complete list of configuration parameters for each source type.

Group: sink
Parameter: type
Mandatory (Y/N): Y
Purpose: Represents the sink to which to migrate the data. The sink is the target or destination for the migration.
Supported Values: To know the type value for each sink, see Supported Sources and Sinks.

Group: sink
Parameter: sink-configuration for type
Mandatory (Y/N): Y
Purpose: Defines the configuration for the sink. These configuration parameters are specific to the type of sink selected above.
Supported Values: See Sink Configuration Templates for the complete list of configuration parameters for each sink type.

Group: transforms
Parameter: transforms configuration
Mandatory (Y/N): N
Purpose: Defines the transformations to be applied to the data in the migration pipe.
Supported Values: See Transformation Configuration Templates for the complete list of transformations supported by the NoSQL Data Migrator.

Group: -
Parameter: migratorVersion
Mandatory (Y/N): N
Purpose: Version of the NoSQL Data Migrator.
Supported Values: -

Group: -
Parameter: abortOnError
Mandatory (Y/N): N
Purpose: Specifies whether to stop the migration activity in case of any error. The default value is true, indicating that the migration stops whenever it encounters a migration error. If you set this value to false, the migration continues even in case of failed records or other migration errors; the failed records and migration errors are logged as WARNINGs on the CLI terminal.
Supported Values: true, false

Note:
As JSON is case-sensitive, all the parameters defined in the configuration file are case-sensitive unless specified otherwise.

Supported Sources and Sinks


This topic provides the list of the sources and sinks supported by the Oracle NoSQL
Database Migrator.
You can use any combination of a valid source and sink from this table for the migration activity. However, you must ensure that at least one of the ends, that is, the source or the sink, is an Oracle NoSQL product. You cannot use the NoSQL Database Migrator to move NoSQL table data from one file to another.

Type (value) | Format (value) | Valid Source | Valid Sink
Oracle NoSQL Database (nosqldb) | NA | Y | Y
Oracle NoSQL Database Cloud Service (nosqldb_cloud) | NA | Y | Y
File system (file) | JSON (json) | Y | Y
File system (file) | MongoDB JSON (mongodb_json) | Y | N
File system (file) | DynamoDB JSON (dynamodb_json) | Y | N
File system (file) | Parquet (parquet) | N | Y
File system (file) | CSV (csv) | Y | N
OCI Object Storage (object_storage_oci) | JSON (json) | Y | Y
OCI Object Storage (object_storage_oci) | MongoDB JSON (mongodb_json) | Y | N
OCI Object Storage (object_storage_oci) | Parquet (parquet) | N | Y
OCI Object Storage (object_storage_oci) | CSV (csv) | Y | N
AWS S3 | DynamoDB JSON (dynamodb_json) | Y | N

Note:
Many configuration parameters are common across the source and sink
configuration. For ease of reference, the description for such parameters is
repeated for each source and sink in the documentation sections, which explain
configuration file formats for various types of sources and sinks. In all the cases, the
syntax and semantics of the parameters with the same name are identical.

Source and Sink Security


Some of the source and sink types have optional or mandatory security information for
authentication purposes.


All sources and sinks that use services in the Oracle Cloud Infrastructure (OCI) can
use certain parameters for providing optional security information. This information can
be provided using an OCI configuration file or Instance Principal.
Oracle NoSQL Database sources and sinks require mandatory security information if
the installation is secure and uses an Oracle Wallet-based authentication. This
information can be provided by adding a jar file to the <MIGRATOR_HOME>/lib
directory.

Wallet-based Authentication
If an Oracle NoSQL Database installation uses Oracle Wallet-based authentication,
you need an additional jar file that is part of the EE installation. For more information,
see Oracle Wallet.
Without this jar file, you will get the following error message:
Could not find kvstore-ee.jar in lib directory. Copy kvstore-ee.jar to lib directory.

To prevent the exception shown above, you must copy the kvstore-ee.jar file from
your EE server package to the <MIGRATOR_HOME>/lib directory.
<MIGRATOR_HOME> is the nosql-migrator-M.N.O/ directory created by extracting the Oracle NoSQL Database Migrator package, where M.N.O represents the software release numbers. For example, nosql-migrator-1.1.0/lib.

Note:
The wallet-based authentication is supported ONLY in the Enterprise Edition
(EE) of Oracle NoSQL Database.

Authenticating with Instance Principals


Instance principals is an IAM service feature that enables instances to be authorized
actors (or principals) that can perform actions on service resources. Each compute
instance has its own identity, and it authenticates using the certificates added to it.
Oracle NoSQL Database Migrator provides an option to connect to NoSQL cloud and OCI Object Storage sources and sinks using instance principal authentication. It is supported only when the NoSQL Database Migrator tool is used within an OCI compute instance, for example, when the NoSQL Database Migrator tool runs in a VM hosted on OCI. To enable this feature, use the useInstancePrincipal attribute of the NoSQL cloud source and sink configuration file. For more information on configuration parameters for different types of sources and sinks, see Source Configuration Templates and Sink Configuration Templates.
For more information on instance principals, see Calling Services from an Instance.
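A minimal sketch of a sink stanza using this attribute, assuming the nosqldb_cloud sink type (all surrounding values are placeholders):

"sink" : {
    "type" : "nosqldb_cloud",
    "endpoint" : "us-ashburn-1",
    "table" : "myTable",
    "compartment" : "<compartment-name-or-id>",
    "useInstancePrincipal" : true,
    "writeUnitsPercent" : 90,
    "requestTimeoutMs" : 5000
}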

Workflow for Oracle NoSQL Database Migrator


Learn about the various steps involved in using the Oracle NoSQL Database Migrator
utility for migrating your NoSQL data.
The high-level flow of tasks involved in using NoSQL Database Migrator is outlined below.


1. Download the NoSQL Migrator utility.
2. Identify the source and sink for the migration.
3. Generate the configuration JSON file using runMigrator, or create the configuration JSON file manually. You can reuse the configuration JSON file multiple times.
4. Proceed to migration with the generated configuration JSON file, or save the configuration JSON file for a future migration and run runMigrator later by passing the configuration JSON file as a parameter.

Download the NoSQL Data Migrator Utility


The Oracle NoSQL Database Migrator utility is available for download from the Oracle
NoSQL Downloads page. Once you download and unzip it on your machine, you can access
the runMigrator command from the command line interface.

Note:
Oracle NoSQL Database Migrator utility requires Java 11 or higher versions to run.

Identify the Source and Sink


Before using the migrator, you must identify the data source and sink. For instance, if you
want to migrate a NoSQL table from Oracle NoSQL Database on-premises to a JSON formatted file, your source will be Oracle NoSQL Database and your sink will be a JSON file. Ensure
that the identified source and sink are supported by the Oracle NoSQL Database Migrator by


referring to Supported Sources and Sinks. This is also the appropriate phase to decide the schema for your NoSQL table in the target or sink, and to create it.
• Identify Sink Table Schema: If the sink is Oracle NoSQL Database on-premise or
cloud, you must identify the schema for the sink table and ensure that the source
data matches with the target schema. If required, use transformations to map the
source data to the sink table.
– Default Schema: NoSQL Database Migrator provides an option to create a
table with the default schema without the need to predefine the schema for the
table. This is useful primarily when loading JSON source files into Oracle
NoSQL Database.
If the source is a MongoDB-formatted JSON file, the default schema for the
table will be as follows:

CREATE TABLE IF NOT EXISTS <tablename>(ID STRING, DOCUMENT JSON, PRIMARY KEY(SHARD(ID)))

Where:
— tablename = value provided for the table attribute in the configuration.
— ID = _id value from each document of the mongoDB exported JSON source
file.
— DOCUMENT = For each document in the mongoDB exported file, the
contents excluding the _id field are aggregated into the DOCUMENT column.
If the source is a DynamoDB-formatted JSON file, the default schema for the
table will be as follows:

CREATE TABLE IF NOT EXISTS <TABLE_NAME>(DDBPartitionKey_name DDBPartitionKey_type, [DDBSortKey_name DDBSortKey_type], DOCUMENT JSON, PRIMARY KEY(SHARD(DDBPartitionKey_name), [DDBSortKey_name]))

Where:
— TABLE_NAME = value provided for the sink table in the configuration
— DDBPartitionKey_name = value provided for the partition key in the
configuration
— DDBPartitionKey_type = value provided for the data type of the partition
key in the configuration
— DDBSortKey_name = value provided for the sort key in the configuration if
any
— DDBSortKey_type = value provided for the data type of the sort key in the
configuration if any
— DOCUMENT = All attributes except the partition and sort key of a Dynamo
DB table item aggregated into a NoSQL JSON column
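As an illustration only (the table and key names are hypothetical, and the DynamoDB-to-NoSQL type mapping shown is an assumption), a DynamoDB table with a String partition key userId and a Number sort key orderId would map to a default schema along these lines:

CREATE TABLE IF NOT EXISTS orders(userId STRING, orderId NUMBER, DOCUMENT JSON, PRIMARY KEY(SHARD(userId), orderId))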
If the source format is a CSV file, a default schema is not supported for the
target table. You can create a schema file with a table definition containing the
same number of columns and data types as the source CSV file. For more
details on the Schema file creation, see Providing Table Schema.


For all the other sources, the default schema will be as follows:

CREATE TABLE IF NOT EXISTS <tablename> (ID LONG GENERATED ALWAYS AS IDENTITY, DOCUMENT JSON, PRIMARY KEY(ID))

Where:
— tablename = value provided for the table attribute in the configuration.
— ID = An auto-generated LONG value.
— DOCUMENT = The JSON record provided by the source is aggregated into the
DOCUMENT column.

Note:
If the _id value is not provided as a string in the MongoDB-formatted JSON
file, NoSQL Database Migrator converts it into a string before inserting it
into the default schema.

• Providing Table Schema: NoSQL Database Migrator allows the source to provide
schema definitions for the table data using schemaInfo attribute. The schemaInfo
attribute is available in all the data sources that do not have an implicit schema already
defined. Sink data stores can choose any one of the following options.
– Use the default schema defined by the NoSQL Database Migrator.
– Use the source-provided schema.
– Override the source-provided schema by defining its own schema. For example, if
you want to transform the data from the source schema to another schema, you need
to override the source-provided schema and use the transformation capability of the
NoSQL Database Migrator tool.

The table schema file, for example, mytable_schema.ddl can include table DDL
statements. The NoSQL Database Migrator tool executes this table schema file before


starting the migration. The migrator tool supports no more than one DDL
statement per line in the schema file. For example,

CREATE TABLE IF NOT EXISTS mytable(id INTEGER, name STRING, age INTEGER, PRIMARY KEY(SHARD(id)))

Note:
Migration will fail if the table is present at the sink and the DDL in the
schemaPath is different than the table.

• Create Sink Table: Once you identify the sink table schema, create the sink table
either through the Admin CLI or using the schemaInfo attribute of the sink
configuration file. See Sink Configuration Templates .

Note:
If the source is a CSV file, create a file with the DDL commands for the
schema of the target table. Provide the file path in
schemaInfo.schemaPath parameter of the sink configuration file.

Migrating TTL Metadata for Table Rows


You can choose to include the TTL metadata for table rows along with the actual data
when performing migration of NoSQL tables. The NoSQL Database Migrator provides
a configuration parameter to support the export and import of table row TTL metadata.
Additionally, the tool provides an option to select the relative expiry time for table rows
during the import operation. You can optionally export or import TTL metadata using
the includeTTL parameter.

Note:
The support for migrating TTL metadata for table rows is only available for
Oracle NoSQL Database and Oracle NoSQL Database Cloud Service.

Exporting TTL metadata


When a table is exported, TTL data is exported for the table rows that have a valid
expiration time. If a row does not expire, then it is not included explicitly in the exported
data because its expiration value is always 0. TTL information is contained in the
_metadata JSON object for each exported row. The NoSQL Database Migrator exports
the expiration time for each row as the number of milliseconds since the UNIX epoch
(Jan 1st, 1970). For example,

//Row 1
{
"id" : 1,
"name" : "xyz",
"age" : 45,


"_metadata" : {
"expiration" : 1629709200000 //Row Expiration time in milliseconds
}
}

//Row 2
{
"id" : 2,
"name" : "abc",
"age" : 52,
"_metadata" : {
"expiration" : 1629709400000 //Row Expiration time in milliseconds
}
}

//Row 3 No Metadata for below row as it will not expire


{
"id" : 3,
"name" : "def",
"age" : 15
}

Importing TTL metadata


You can optionally import TTL metadata using a configuration parameter, includeTTL. The
import operation handles the following use cases when migrating table rows containing TTL
metadata. These use-cases are applicable only when the includeTTL configuration
parameter is specified.
In the use-cases 2 and 3, the default Reference Time of import operation is the current time
in milliseconds, obtained from System.currentTimeMillis(), of the machine where the NoSQL
Database Migrator tool is running. But you can also set a custom Reference Time using the
ttlRelativeDate configuration parameter if you want to extend the expiration time and
import rows that would otherwise expire immediately.
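A minimal sink fragment showing both parameters (the ttlRelativeDate value format shown here is an assumption; see the sink configuration templates for the exact format):

"sink" : {
    "type" : "nosqldb_cloud",
    "endpoint" : "us-ashburn-1",
    "table" : "myTable",
    "compartment" : "developers",
    "includeTTL" : true,
    "ttlRelativeDate" : "2021-08-23 10:00:00"
}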
• Use-case 1: No TTL metadata information is present in the importing table row.
When you import a JSON source file produced from an external source or exported using earlier versions of the migrator, the importing row does not have TTL information. But since the includeTTL configuration parameter is equal to true, the migrator sets TTL=0 for the table row, which means the importing table row never expires.
• Use-case 2: TTL value of the source table row is expired relative to the Reference Time
when the table row gets imported.
When you export a table row to a file and try to import it after the expiration time of the
table row, the table row is ignored and is not written into the store.
• Use-case 3: TTL value of the source table row is not expired relative to the Reference
Time when the table row gets imported.
When you export a table row to a file and try to import it before the expiration time of the
table row, the table row gets imported with a TTL value. But the new TTL value for the
table row may not be equal to exported TTL value because of the integer hour and day
window constraints in the TimeToLive class. For example,
Exported table row

{
"id" : 8,


"name" : "xyz",
"_metadata" : {
"expiration" : 1629709200000 //Monday, August 23, 2021 9:00:00
AM in UTC
}
}

The reference time while importing is 1629707962582, which is Monday, August 23, 2021 8:39:22.582 AM.
Imported table row

{
"id" : 8,
"name" : "xyz",
"_metadata" : {
"ttl" : 1629712800000 //Monday, August 23, 2021 10:00:00 AM UTC
}
}

Run the runMigrator command

The runMigrator executable file is available in the extracted NoSQL Database Migrator files. You must install Java 11 or a higher version and bash on your system to successfully run the runMigrator command.

You can run the runMigrator command in two ways:


1. By creating the configuration file using the runtime options of the runMigrator
command as shown below.

[~]$ ./runMigrator
configuration file is not provided. Do you want to generate configuration? (y/n) [n]: y
...
...

• When you invoke the runMigrator utility, it provides a series of runtime options and creates the configuration file based on your choices for each option.
• After the utility creates the configuration file, you have a choice to either
proceed with the migration activity in the same run or save the configuration
file for a future migration.
• Irrespective of your decision to proceed or defer the migration activity with the
generated configuration file, the file will be available for edits or customization
to meet your future requirements. You can use the customized configuration
file for migration later.
2. By passing a manually created configuration file (in the JSON format) as a runtime
parameter using the -c or --config option. You must create the configuration file


manually before running the runMigrator command with the -c or --config option. For
any help with the source and sink configuration parameters, see Oracle NoSQL
Database Migrator Reference.

[~]$ ./runMigrator -c </path/to/the/configuration/json/file>

Logging Migrator Progress


The NoSQL Database Migrator tool provides options that enable trace, debug, and progress messages to be printed to standard output or to a file. These options can be useful in tracking the progress of a migration operation, particularly for very large tables or data sets.
• Log Levels
To control the logging behavior through the NoSQL Database Migrator tool, pass the --
log-level or -l run time parameter to the runMigrator command. You can specify the
amount of log information to write by passing the appropriate log level value.

$./runMigrator --log-level <loglevel>

Example:

$./runMigrator --log-level debug

Table 1-15 Supported Log Levels for NoSQL Database Migrator

warning: Prints errors and warnings.
info (default): Prints the progress status of data migration, such as validating the source, validating the sink, creating tables, and the count of data records migrated.
debug: Prints additional debug information.
all: Prints everything. This level turns on all levels of logging.

• Log File:
You can specify the name of the log file using the --log-file or -f parameter. If --log-file is passed as a run time parameter to the runMigrator command, the NoSQL Database Migrator writes all the log messages to that file; otherwise, they are written to the standard output.

$./runMigrator --log-file <log file name>

Example:

$./runMigrator --log-file nosql_migrator.log

Use Case Demonstrations


Learn how to perform data migration using the Oracle NoSQL Database Migrator for specific
use cases. You can find detailed systematic instructions with code examples to perform
migration in each of the use cases listed below.


Topics:
• Migrate from Oracle NoSQL Database Cloud Service to a JSON file
• Migrate from Oracle NoSQL Database On-Premise to Oracle NoSQL Database
Cloud Service
• Migrate from JSON file source to Oracle NoSQL Database Cloud Service
• Migrate from MongoDB JSON file to an Oracle NoSQL Database Cloud
Service
• Migrate from DynamoDB JSON file in AWS S3 to an Oracle NoSQL Database
Cloud Service
• Migrate from DynamoDB JSON file to Oracle NoSQL Database
• Migrate from CSV file to Oracle NoSQL Database

Migrate from Oracle NoSQL Database Cloud Service to a JSON file


This example shows how to use the Oracle NoSQL Database Migrator to copy data
and the schema definition of a NoSQL table from Oracle NoSQL Database Cloud
Service (NDCS) to a JSON file.

Use Case
An organization decides to train a model using the Oracle NoSQL Database Cloud
Service (NDCS) data to predict future behaviors and provide personalized
recommendations. They can take a periodic copy of the NDCS tables' data to a JSON
file and apply it to the analytic engine to analyze and train the model. Doing this helps
them separate the analytical queries from the low-latency critical paths.

Example
For the demonstration, let us look at how to migrate the data and schema definition of
a NoSQL table called myTable from NDCS to a JSON file.
Prerequisites
• Identify the source and sink for the migration.
– Source: Oracle NoSQL Database Cloud Service
– Sink: JSON file
• Identify your OCI cloud credentials and capture them in the OCI config file. Save
the config file in /home/.oci/config. See Acquiring Credentials in Using Oracle
NoSQL Database Cloud Service.

[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>

• Identify the region endpoint and compartment name for your Oracle NoSQL
Database Cloud Service.
– endpoint: us-phoenix-1


– compartment: developers
Procedure
To migrate the data and schema definition of myTable from Oracle NoSQL Database Cloud
Service to a JSON file:
1. Open the command prompt and navigate to the directory where you extracted the NoSQL
Database Migrator utility.
2. To generate the configuration file using the NoSQL Database Migrator, run the
runMigrator command without any runtime parameters.

[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator

3. As you did not provide the configuration file as a runtime parameter, the utility prompts if
you want to generate the configuration now. Type y.

configuration file is not provided. Do you want to generate configuration? (y/n) [n]: y

This command provides a walkthrough of creating a valid config for Oracle NoSQL data migrator.

4. Based on the prompts from the utility, choose your options for the Source configuration.

Enter a location for your config [./migrator-config.json]: /home/apothula/nosqlMigrator/NDCS2JSON
Select the source:
1) nosqldb
2) nosqldb_cloud
3) file
#? 2
Configuration for source type=nosqldb_cloud
Enter endpoint URL or region of the Oracle NoSQL Database Cloud: us-
phoenix-1
Enter table name: myTable
Enter compartment name or id of the source table []: developers
Enter path to the file containing OCI credentials [/home/apothula/.oci/
config]:
Enter the profile name in OCI credentials file [DEFAULT]:
Enter percentage of table read units to be used for migration operation.
(1-100) [90]:
Enter store operation timeout in milliseconds. (1-30000) [5000]:

5. Based on the prompts from the utility, choose your options for the Sink configuration.

Select the sink:
1) nosqldb
2) nosqldb_cloud
3) file
#? 3
Configuration for sink type=file
Enter path to a file to store JSON data: /home/apothula/nosqlMigrator/
myTableJSON
Would you like to store JSON in pretty format? (y/n) [n]: y


Would you like to migrate the table schema also? (y/n) [y]: y
Enter path to a file to store table schema: /home/apothula/nosqlMigrator/myTableSchema

6. Based on the prompts from the utility, choose your options for the source data
transformations. The default value is n.

Would you like to add transformations to source data? (y/n) [n]:

7. Enter your choice to determine whether to proceed with the migration in case any
record fails to migrate.

Would you like to continue migration in case of any record/row is failed to migrate?: (y/n) [n]:

8. The utility displays the generated configuration on the screen.

generated configuration is:


{
"source": {
"type": "nosqldb_cloud",
"endpoint": "us-phoenix-1",
"table": "myTable",
"compartment": "developers",
"credentials": "/home/apothula/.oci/config",
"credentialsProfile": "DEFAULT",
"readUnitsPercent": 90,
"requestTimeoutMs": 5000
},
"sink": {
"type": "file",
"format": "json",
"schemaPath": "/home/apothula/nosqlMigrator/myTableSchema",
"pretty": true,
"dataPath": "/home/apothula/nosqlMigrator/myTableJSON"
},
"abortOnError": true,
"migratorVersion": "1.0.0"
}

9. Finally, the utility prompts for your choice to decide whether to proceed with the
migration with the generated configuration file or not. The default option is y.

Note:
If you select n, you can use the generated configuration file to run the
migration using the ./runMigrator -c or the ./runMigrator --config
option.

would you like to run the migration with above configuration?
If you select no, you can use the generated configuration file to run the migration using
./runMigrator --config /home/apothula/nosqlMigrator/NDCS2JSON
(y/n) [y]:

10. The NoSQL Database Migrator migrates your data and schema from NDCS to the JSON
file.

Records provided by source=10, Records written to sink=10, Records failed=0.
Elapsed time: 0min 1sec 277ms
Migration completed.

Validation
To validate the migration, you can open the JSON Sink files and view the schema and data.

-- Exported myTable Data

[~/nosqlMigrator]$cat myTableJSON
{
"id" : 10,
"document" : {
"course" : "Computer Science",
"name" : "Neena",
"studentid" : 105
}
}
{
"id" : 3,
"document" : {
"course" : "Computer Science",
"name" : "John",
"studentid" : 107
}
}
{
"id" : 4,
"document" : {
"course" : "Computer Science",
"name" : "Ruby",
"studentid" : 100
}
}
{
"id" : 6,
"document" : {
"course" : "Bio-Technology",
"name" : "Rekha",
"studentid" : 104
}
}
{
"id" : 7,
"document" : {
"course" : "Computer Science",
"name" : "Ruby",
"studentid" : 100
}


}
{
"id" : 5,
"document" : {
"course" : "Journalism",
"name" : "Rani",
"studentid" : 106
}
}
{
"id" : 8,
"document" : {
"course" : "Computer Science",
"name" : "Tom",
"studentid" : 103
}
}
{
"id" : 9,
"document" : {
"course" : "Computer Science",
"name" : "Peter",
"studentid" : 109
}
}
{
"id" : 1,
"document" : {
"course" : "Journalism",
"name" : "Tracy",
"studentid" : 110
}
}
{
"id" : 2,
"document" : {
"course" : "Bio-Technology",
"name" : "Raja",
"studentid" : 108
}
}

-- Exported myTable Schema

[~/nosqlMigrator]$cat myTableSchema
CREATE TABLE IF NOT EXISTS myTable (id INTEGER, document JSON, PRIMARY
KEY(SHARD(id)))


Migrate from Oracle NoSQL Database On-Premise to Oracle NoSQL Database Cloud Service
This example shows how to use the Oracle NoSQL Database Migrator to copy data and the
schema definition of a NoSQL table from Oracle NoSQL Database to Oracle NoSQL
Database Cloud Service (NDCS).

Use Case
As a developer, you are exploring options to avoid the overhead of managing the resources,
clusters, and garbage collection for your existing NoSQL Database KVStore workloads. As a
solution, you decide to migrate your existing on-premise KVStore workloads to Oracle
NoSQL Database Cloud Service because NDCS manages them automatically.

Example
For the demonstration, let us look at how to migrate the data and schema definition of a
NoSQL table called myTable from the NoSQL Database KVStore to NDCS. We will also use
this use case to show how to run the runMigrator utility by passing a precreated
configuration file.
Prerequisites
• Identify the source and sink for the migration.
– Source: Oracle NoSQL Database
– Sink: Oracle NoSQL Database Cloud Service
• Identify your OCI cloud credentials and capture them in the OCI config file. Save the
config file in /home/.oci/config. See Acquiring Credentials in Using Oracle NoSQL
Database Cloud Service.

[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>

• Identify the region endpoint and compartment name for your Oracle NoSQL Database
Cloud Service.
– endpoint: us-phoenix-1
– compartment: developers
• Identify the following details for the on-premise KVStore:
– storeName: kvstore
– helperHosts: <hostname>:5000
– table: myTable
Procedure
To migrate the data and schema definition of myTable from NoSQL Database KVStore to
NDCS:


1. Prepare the configuration file (in JSON format) with the identified Source and Sink
details. See Source Configuration Templates and Sink Configuration Templates .

{
"source" : {
"type" : "nosqldb",
"storeName" : "kvstore",
"helperHosts" : ["<hostname>:5000"],
"table" : "myTable",
"requestTimeoutMs" : 5000
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "us-phoenix-1",
"table" : "myTable",
"compartment" : "developers",
"schemaInfo" : {
"schemaPath" : "<complete/path/to/the/JSON/file/with/DDL/
commands/for/the/schema/definition>",
"readUnits" : 100,
"writeUnits" : 100,
"storageSize" : 1
},
"credentials" : "<complete/path/to/oci/config/file>",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}

2. Open the command prompt and navigate to the directory where you extracted the
NoSQL Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --
config or -c option.

[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator --config
<complete/path/to/the/JSON/config/file>

4. The utility proceeds with the data migration, as shown below.

Records provided by source=10, Records written to sink=10, Records failed=0.
Elapsed time: 0min 10sec 426ms
Migration completed.

Validation
To validate the migration, you can log in to your NDCS console and verify that myTable is created with the source data.


Migrate from JSON file source to Oracle NoSQL Database Cloud Service
This example shows the usage of Oracle NoSQL Database Migrator to copy data from a
JSON file source to Oracle NoSQL Database Cloud Service.
After evaluating multiple options, an organization finalizes Oracle NoSQL Database Cloud
Service as its NoSQL Database platform. As its source contents are in JSON file format, they
are looking for a way to migrate them to Oracle NoSQL Database Cloud Service.
In this example, you will learn to migrate the data from a JSON file called SampleData.json.
You run the runMigrator utility by passing a pre-created configuration file. If the configuration
file is not provided as a run time parameter, the runMigrator utility prompts you to generate
the configuration through an interactive procedure.
Prerequisites
• Identify the source and sink for the migration.
– Source: JSON source file.
SampleData.json is the source file. It contains multiple JSON documents with one
document per line, delimited by a newline character.

{"id":6,"val_json":{"array":
["q","r","s"],"date":"2023-02-04T02:38:57.520Z","nestarray":[[1,2,3],
[10,20,30]],"nested":{"arrayofobjects":
[{"datefield":"2023-03-04T02:38:57.520Z","numfield":30,"strfield":"foo
54"},
{"datefield":"2023-02-04T02:38:57.520Z","numfield":56,"strfield":"bar2
3"}],"nestNum":10,"nestString":"bar"},"num":1,"string":"foo"}}
{"id":3,"val_json":{"array":
["g","h","i"],"date":"2023-02-02T02:38:57.520Z","nestarray":[[1,2,3],
[10,20,30]],"nested":{"arrayofobjects":
[{"datefield":"2023-02-02T02:38:57.520Z","numfield":28,"strfield":"foo
3"},
{"datefield":"2023-02-02T02:38:57.520Z","numfield":38,"strfield":"bar"
}],"nestNum":10,"nestString":"bar"},"num":1,"string":"foo"}}
{"id":7,"val_json":{"array":
["a","b","c"],"date":"2023-02-20T02:38:57.520Z","nestarray":[[1,2,3],
[10,20,30]],"nested":{"arrayofobjects":
[{"datefield":"2023-01-20T02:38:57.520Z","numfield":28,"strfield":"foo
"},
{"datefield":"2023-01-22T02:38:57.520Z","numfield":38,"strfield":"bar"
}],"nestNum":10,"nestString":"bar"},"num":1,"string":"foo"}}
{"id":4,"val_json":{"array":
["j","k","l"],"date":"2023-02-03T02:38:57.520Z","nestarray":[[1,2,3],
[10,20,30]],"nested":{"arrayofobjects":
[{"datefield":"2023-02-03T02:38:57.520Z","numfield":28,"strfield":"foo
"},
{"datefield":"2023-02-03T02:38:57.520Z","numfield":38,"strfield":"bar"
}],"nestNum":10,"nestString":"bar"},"num":1,"string":"foo"}}

– Sink: Oracle NoSQL Database Cloud Service.


• Identify your OCI cloud credentials and capture them in the configuration file. Save
the config file in /home/user/.oci/config. For more details, see Acquiring
Credentials in Using Oracle NoSQL Database Cloud Service.

[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
region=us-ashburn-1
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>

• Identify the region endpoint and compartment name for your Oracle NoSQL
Database Cloud Service.
– endpoint: us-ashburn-1
– compartment: Training-NoSQL
• Identify the following details for the JSON source file:
– schemaPath: <absolute path to the schema definition file containing
DDL statements for the NoSQL table at the sink>.
In this example, the DDL file is schema_json.ddl.

create table Migrate_JSON (id INTEGER, val_json JSON, PRIMARY KEY(id));

The Oracle NoSQL Database Migrator provides an option to create a table
with the default schema if the schemaPath is not provided. For more details,
see the Identify the Source and Sink topic in the Workflow for Oracle NoSQL
Database Migrator.
– dataPath: <absolute path to a file or directory containing the JSON
data for migration>.
Procedure
To migrate the data from the JSON source file SampleData.json to Oracle NoSQL
Database Cloud Service, perform the following steps:
1. Prepare the configuration file (in JSON format) with the identified source and sink
details. See Source Configuration Templates and Sink Configuration Templates.

{
"source" : {
"type" : "file",
"format" : "json",
"schemaInfo" : {
"schemaPath" : "[~/nosql-migrator-1.5.0]/schema_json.ddl"
},
"dataPath" : "[~/nosql-migrator-1.5.0]/SampleData.json"
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "us-ashburn-1",
"table" : "Migrate_JSON",

1-246
Chapter 1
Develop

"compartment" : "Training-NoSQL",
"includeTTL" : false,
"schemaInfo" : {
"readUnits" : 100,
"writeUnits" : 60,
"storageSize" : 1,
"useSourceSchema" : true
},
"credentials" : "/home/user/.oci/config",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"overwrite" : true,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.5.0"
}

2. Open the command prompt and navigate to the directory where you extracted the Oracle
NoSQL Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --config or
-c option.

[~/nosql-migrator-1.5.0]$./runMigrator --config <complete/path/to/the/config/file>

4. The utility proceeds with the data migration, as shown below. The Migrate_JSON table is
created at the sink with the schema provided in the schemaPath.

creating source from given configuration:
source creation completed
creating sink from given configuration:
sink creation completed
creating migrator pipeline
migration started
[cloud sink] : start loading DDLs
[cloud sink] : executing DDL: create table Migrate_JSON (id INTEGER,
val_json JSON, PRIMARY KEY(id)),limits: [100, 60, 1]
[cloud sink] : completed loading DDLs
[cloud sink] : start loading records
[json file source] : start parsing JSON records from file: SampleData.json
[INFO] migration completed.
Records provided by source=4, Records written to sink=4, Records
failed=0, Records skipped=0.
Elapsed time: 0min 5sec 778ms
Migration completed.

Validation
To validate the migration, you can log in to your Oracle NoSQL Database Cloud Service
console and verify that the Migrate_JSON table is created with the source data. For the
procedure to access the console, see Accessing the Service from the Infrastructure Console
article in the Oracle NoSQL Database Cloud Service document.


Figure 1-1 Oracle NoSQL Database Cloud Service Console Tables

Figure 1-2 Oracle NoSQL Database Cloud Service Console Table Data

Migrate from MongoDB JSON file to an Oracle NoSQL Database Cloud Service
This example shows how to use the Oracle NoSQL Database Migrator to copy MongoDB-
formatted data to the Oracle NoSQL Database Cloud Service (NDCS).

Use Case
After evaluating multiple options, an organization finalizes Oracle NoSQL Database
Cloud Service as its NoSQL Database platform. As its NoSQL tables and data are in
MongoDB, it is looking for a way to migrate those tables and data to Oracle
NDCS.
You can copy a file or directory containing the MongoDB exported JSON data for
migration by specifying the file or directory in the source configuration template.
A sample MongoDB-formatted JSON File is as follows:

{"_id":0,"name":"Aimee Zank","scores":
[{"score":1.463179736705023,"type":"exam"},
{"score":11.78273309957772,"type":"quiz"},
{"score":35.8740349954354,"type":"homework"}]}
{"_id":1,"name":"Aurelia Menendez","scores":
[{"score":60.06045071030959,"type":"exam"},
{"score":52.79790691903873,"type":"quiz"},

1-248
Chapter 1
Develop

{"score":71.76133439165544,"type":"homework"}]}
{"_id":2,"name":"Corliss Zuk","scores":
[{"score":67.03077096065002,"type":"exam"},
{"score":6.301851677835235,"type":"quiz"},
{"score":66.28344683278382,"type":"homework"}]}
{"_id":3,"name":"Bao Ziglar","scores":
[{"score":71.64343899778332,"type":"exam"},
{"score":24.80221293650313,"type":"quiz"},
{"score":42.26147058804812,"type":"homework"}]}
{"_id":4,"name":"Zachary Langlais","scores":
[{"score":78.68385091304332,"type":"exam"},
{"score":90.2963101368042,"type":"quiz"},
{"score":34.41620148042529,"type":"homework"}]}

MongoDB supports two types of extensions to the JSON format of files, Canonical mode and
Relaxed mode. You can supply the MongoDB-formatted JSON file that is generated using the
mongoexport tool in either Canonical or Relaxed mode. Both modes are supported by the
NoSQL Database Migrator for migration.
For more information on the MongoDB Extended JSON (v2) file, see mongoexport_formats.
For more information on generating a MongoDB-formatted JSON file, see mongoexport.
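For illustration, the same record rendered in each mode might look as follows (a sketch
based on the MongoDB Extended JSON (v2) conventions; the field names are hypothetical):

{"_id":{"$numberInt":"0"},"score":{"$numberDouble":"1.46"}}   <-- Canonical mode
{"_id":0,"score":1.46}                                        <-- Relaxed mode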

Example
For the demonstration, let us look at how to migrate a MongoDB-formatted JSON file to
NDCS. We will use a manually created configuration file for this example.
Prerequisites
• Identify the source and sink for the migration.
– Source: MongoDB-Formatted JSON File
– Sink: Oracle NoSQL Database Cloud Service
• Extract the data from MongoDB using the mongoexport utility. See mongoexport for
more information.
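A sample mongoexport invocation is shown below (a sketch; the connection string,
database, and collection names are placeholders):

mongoexport --uri="mongodb://localhost:27017" --db=mydb --collection=myTable --out=myTable.json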
• Create a NoSQL table in the sink with a table schema that matches the data in the
MongoDB-formatted JSON file. As an alternative, you can instruct the NoSQL Database
Migrator to create a table with the default schema structure by setting the defaultSchema
attribute to true.


Note:
For a MongoDB-Formatted JSON source, the default schema for the
table will be:

CREATE TABLE IF NOT EXISTS <tablename>(ID STRING, DOCUMENT JSON, PRIMARY KEY(SHARD(ID)))

Where:
– tablename = value of the table config.
– ID = _id value from the MongoDB exported JSON source file.
– DOCUMENT = The entire contents of the MongoDB exported JSON source
file are aggregated into the DOCUMENT column, excluding the _id field.

• Identify your OCI cloud credentials and capture them in the OCI config file. Save
the config file in the .oci directory under your home directory (~/.oci/config). See
Acquiring Credentials in Using Oracle NoSQL Database Cloud Service.

[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>

• Identify the region endpoint and compartment name for your Oracle NoSQL
Database Cloud Service.
– endpoint: us-phoenix-1
– compartment: developers
Procedure
To migrate the MongoDB-formatted JSON data to the Oracle NoSQL Database Cloud
Service:
1. Prepare the configuration file (in JSON format) with the identified Source and Sink
details. See Source Configuration Templates and Sink Configuration Templates.

{
"source" : {
"type" : "file",
"format" : "mongodb_json",
"dataPath" : "<complete/path/to/the/MongoDB/Formatted/JSON/
file>"
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "us-phoenix-1",
"table" : "mongoImport",
"compartment" : "developers",

1-250
Chapter 1
Develop

"schemaInfo" : {
"defaultSchema" : true,
"readUnits" : 100,
"writeUnits" : 60,
"storageSize" : 1
},
"credentials" : "<complete/path/to/the/oci/config/file>",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}

2. Open the command prompt and navigate to the directory where you extracted the NoSQL
Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --config or
-c option.

[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator --config <complete/path/to/the/JSON/config/file>

4. The utility proceeds with the data migration, as shown below.

Records provided by source=29,353, Records written to sink=29,353, Records failed=0.
Elapsed time: 9min 9sec 630ms
Migration completed.

Validation
To validate the migration, you can log in to your NDCS console and verify that the
mongoImport table is created with the source data.

Migrate from DynamoDB JSON file in AWS S3 to an Oracle NoSQL Database Cloud
Service
This example shows how to use the Oracle NoSQL Database Migrator to copy a DynamoDB
JSON file stored in AWS S3 to the Oracle NoSQL Database Cloud Service (NDCS).
Use Case:
After evaluating multiple options, an organization finalizes Oracle NoSQL Database Cloud
Service over DynamoDB. The organization wants to migrate its tables and data
from DynamoDB to Oracle NoSQL Database Cloud Service.
See Mapping of DynamoDB table to Oracle NoSQL table for more details.
You can migrate a file containing the DynamoDB exported JSON data from the AWS S3
storage by specifying the path in the source configuration template.
A sample DynamoDB-formatted JSON File is as follows:

{"Item":{"Id":{"N":"101"},"Phones":{"L":[{"L":[{"S":"555-222"},
{"S":"123-567"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":

1-251
Chapter 1
Develop

{"N":"570004"},"Street":{"S":"21 main"},"DoorNum":{"N":"201"},"City":
{"S":"London"}}},"FirstName":{"S":"Fred"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"Smith"},"FavColors":{"SS":
["Red","Green"]},"Age":{"N":"22"}}}
{"Item":{"Id":{"N":"102"},"Phones":{"L":[{"L":
[{"S":"222-222"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":
{"Zip":{"N":"560014"},"Street":{"S":"32 main"},"DoorNum":
{"N":"1024"},"City":{"S":"Wales"}}},"FirstName":
{"S":"John"},"FavNumbers":{"NS":["10"]},"LastName":
{"S":"White"},"FavColors":{"SS":["Blue"]},"Age":{"N":"48"}}}

You export the DynamoDB table to AWS S3 storage as specified in Exporting
DynamoDB table data to Amazon S3.

Example:
For this demonstration, you will learn how to migrate a DynamoDB JSON file in an
AWS S3 source to NDCS. You will use a manually created configuration file for this
example.

Prerequisites
• Identify the source and sink for the migration.
– Source: DynamoDB JSON File in AWS S3
– Sink: Oracle NoSQL Database Cloud Service
• Identify the table in AWS DynamoDB that needs to be migrated to NDCS. Log in to
your AWS console using your credentials. Go to DynamoDB. Under Tables,
choose the table to be migrated.
• Create an object bucket and export the table to S3. From your AWS console, go to
S3. Under buckets, create a new object bucket. Go back to DynamoDB and click
Exports to S3. Provide the source table and the destination S3 bucket and click
Export.
Refer to steps provided in Exporting DynamoDB table data to Amazon S3 to
export your table. While exporting, you select the format as DynamoDB JSON.
The exported data contains DynamoDB table data in multiple gzip files as shown
below.

/ 01639372501551-bb4dd8c3
|-- 01639372501551-bb4dd8c3 ==> exported data prefix
|----data
|------sxz3hjr3re2dzn2ymgd2gi4iku.json.gz ==>table data
|----manifest-files.json
|----manifest-files.md5
|----manifest-summary.json
|----manifest-summary.md5
|----_started

• You need AWS credentials (including an access key ID and secret access key) and
config files (credentials and optionally config) to access AWS S3 from the migrator.
See Set and view configuration settings for more details on the configuration files.
See Creating a key pair for more details on creating access keys.
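A minimal sketch of the AWS credentials file (the key values are placeholders):

[default]
aws_access_key_id=<your access key ID>
aws_secret_access_key=<your secret access key>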


• Identify your OCI cloud credentials and capture them in the OCI config file. Save the
config file in a directory .oci under your home directory (~/.oci/config). See Acquiring
Credentials for more details.

[DEFAULT]
tenancy=ocid1.tenancy.oc1....
user=ocid1.user.oc1....
fingerprint= 43:d1:....
key_file=</fully/qualified/path/to/the/private/key/>
pass_phrase=<passphrase>

• Identify the region endpoint and compartment name for your Oracle NoSQL Database
Cloud Service. For example,
– endpoint: us-phoenix-1
– compartment: developers

Procedure
To migrate the DynamoDB JSON data to the Oracle NoSQL Database Cloud Service:
1. Prepare the configuration file (in JSON format) with the identified source and sink details.
See Source Configuration Templates and Sink Configuration Templates.
You can choose one of the following two options.
• Option 1: Importing DynamoDB table as a JSON document using the default schema
configuration.
Here the defaultSchema is TRUE, so the migrator creates the default schema at
the sink. You must specify the DDBPartitionKey and the corresponding NoSQL
column type; otherwise, an error is thrown.

{
"source" : {
"type" : "aws_s3",
"format" : "dynamodb_json",
"s3URL" : "<https://<bucket-name>.<s3_endpoint>/export_path>",
"credentials" : "</path/to/aws/credentials/file>",
"credentialsProfile" : <"profile name in aws credentials file">
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "<region_name>",
"table" : "<table_name>",
"compartment" : "<compartment_name>",
"schemaInfo" : {
"defaultSchema" : true,
"readUnits" : 100,
"writeUnits" : 60,
"DDBPartitionKey" : "<PrimaryKey:Datatype>",
"storageSize" : 1
},
"credentials" : "<complete/path/to/the/oci/config/file>",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}

For a DynamoDB JSON source, the default schema for the table will be as
shown below:

CREATE TABLE IF NOT EXISTS <TABLE_NAME>(DDBPartitionKey_name DDBPartitionKey_type,
[DDBSortKey_name DDBSortKey_type], DOCUMENT JSON,
PRIMARY KEY(SHARD(DDBPartitionKey_name),[DDBSortKey_name]))

Where:
TABLE_NAME = value provided for the sink 'table' in the configuration
DDBPartitionKey_name = value provided for the partition key in the
configuration
DDBPartitionKey_type = value provided for the data type of the partition key in
the configuration
DDBSortKey_name = value provided for the sort key in the configuration, if any
DDBSortKey_type = value provided for the data type of the sort key in the
configuration, if any
DOCUMENT = all attributes except the partition and sort key of a DynamoDB
table item, aggregated into a NoSQL JSON column
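For instance, assuming a hypothetical partition key Id of type INTEGER and no sort
key ("DDBPartitionKey" : "Id:INTEGER" in the sink configuration), the generated
default schema would look like:

CREATE TABLE IF NOT EXISTS <table_name>(Id INTEGER, DOCUMENT JSON,
PRIMARY KEY(SHARD(Id)))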
• Option 2: Importing DynamoDB table as fixed columns using a user-supplied
schema file.
Here the defaultSchema is FALSE and you specify the schemaPath as a file
containing your DDL statement. See Mapping of DynamoDB types to Oracle
NoSQL types for more details.

Note:
If the DynamoDB table has a data type that is not supported in
NoSQL, the migration fails.

A sample schema file is shown below.

CREATE TABLE IF NOT EXISTS sampledynDBImp (AccountId INTEGER, document JSON,
PRIMARY KEY(SHARD(AccountId)));

The schema file is used to create the table at the sink as part of the migration.
As long as the primary key data is provided, the input JSON record will be
inserted; otherwise, an error is thrown.


Note:
If the input data does not contain a value for a particular column (other than
the primary key), then the column default value will be used. The default
value should be part of the column definition while creating the table. For
example, id INTEGER not null default 0. If the column does not have a
default definition, then SQL NULL is inserted if no values are provided for
the column.

{
"source" : {
"type" : "aws_s3",
"format" : "dynamodb_json",
"s3URL" : "<https://<bucket-name>.<s3_endpoint>/export_path>",
"credentials" : "</path/to/aws/credentials/file>",
"credentialsProfile" : <"profile name in aws credentials file">
},
"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "<region_name>",
"table" : "<table_name>",
"compartment" : "<compartment_name>",
"schemaInfo" : {
"defaultSchema" : false,
"readUnits" : 100,
"writeUnits" : 60,
"schemaPath" : "<full path of the schema file with the DDL
statement>",
"storageSize" : 1
},
"credentials" : "<complete/path/to/the/oci/config/file>",
"credentialsProfile" : "DEFAULT",
"writeUnitsPercent" : 90,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}

2. Open the command prompt and navigate to the directory where you extracted the NoSQL
Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --config or -
c option.

[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator
--config <complete/path/to/the/JSON/config/file>

4. The utility proceeds with the data migration, as shown below.

Records provided by source=7,
Records written to sink=7,
Records failed=0,
Records skipped=0.
Elapsed time: 0 min 2sec 50ms
Migration completed.

Validation
You can log in to your NDCS console and verify that the new table is created with the
source data.

Migrate from DynamoDB JSON file to Oracle NoSQL Database


This example shows how to use the Oracle NoSQL Database Migrator to copy a
DynamoDB JSON file to Oracle NoSQL Database.
Use Case:
After evaluating multiple options, an organization finalizes Oracle NoSQL Database
over DynamoDB. The organization wants to migrate its tables and data
from DynamoDB to Oracle NoSQL Database (On-premises).
See Mapping of DynamoDB table to Oracle NoSQL table for more details.
You can migrate a file or directory containing the DynamoDB exported JSON data from
a file system by specifying the path in the source configuration template.
A sample DynamoDB-formatted JSON File is as follows:

{"Item":{"Id":{"N":"101"},"Phones":{"L":[{"L":[{"S":"555-222"},
{"S":"123-567"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":
{"Zip":{"N":"570004"},"Street":{"S":"21 main"},"DoorNum":
{"N":"201"},"City":{"S":"London"}}},"FirstName":
{"S":"Fred"},"FavNumbers":{"NS":["10"]},"LastName":
{"S":"Smith"},"FavColors":{"SS":["Red","Green"]},"Age":{"N":"22"}}}
{"Item":{"Id":{"N":"102"},"Phones":{"L":[{"L":
[{"S":"222-222"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":
{"Zip":{"N":"560014"},"Street":{"S":"32 main"},"DoorNum":
{"N":"1024"},"City":{"S":"Wales"}}},"FirstName":
{"S":"John"},"FavNumbers":{"NS":["10"]},"LastName":
{"S":"White"},"FavColors":{"SS":["Blue"]},"Age":{"N":"48"}}}

You copy the exported DynamoDB table data from AWS S3 storage to a local mounted
file system.

Example:
For this demonstration, you will learn how to migrate a DynamoDB JSON file to Oracle
NoSQL Database (On-premises). You will use a manually created configuration file for
this example.

Prerequisites
• Identify the source and sink for the migration.
– Source: DynamoDB JSON File
– Sink: Oracle NoSQL Database (On-premises)


• In order to import DynamoDB table data to Oracle NoSQL Database, you must first
export the DynamoDB table to S3. Refer to steps provided in Exporting DynamoDB table
data to Amazon S3 to export your table. While exporting, you select the format as
DynamoDB JSON. The exported data contains DynamoDB table data in multiple gzip
files as shown below.

/ 01639372501551-bb4dd8c3
|-- 01639372501551-bb4dd8c3 ==> exported data prefix
|----data
|------sxz3hjr3re2dzn2ymgd2gi4iku.json.gz ==>table data
|----manifest-files.json
|----manifest-files.md5
|----manifest-summary.json
|----manifest-summary.md5
|----_started

• You must download the files from AWS S3. The structure of the files after the download
will be as shown below.

download-dir/01639372501551-bb4dd8c3
|----data
|------sxz3hjr3re2dzn2ymgd2gi4iku.json.gz ==>table data
|----manifest-files.json
|----manifest-files.md5
|----manifest-summary.json
|----manifest-summary.md5
|----_started
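One way to download the exported files is with the AWS CLI (a sketch; the bucket name
and destination directory are placeholders):

aws s3 cp s3://<bucket-name>/01639372501551-bb4dd8c3 ./download-dir/01639372501551-bb4dd8c3 --recursive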

Procedure
To migrate the DynamoDB JSON data to the Oracle NoSQL Database:
1. Prepare the configuration file (in JSON format) with the identified source and sink
details. See Source Configuration Templates and Sink Configuration Templates.
You can choose one of the following two options.
• Option 1: Importing DynamoDB table as a JSON document using the default schema
configuration.
Here the defaultSchema is TRUE, so the migrator creates the default schema at
the sink. You must specify the DDBPartitionKey and the corresponding NoSQL
column type; otherwise, an error is thrown.

{
"source" : {
"type" : "file",
"format" : "dynamodb_json",
"dataPath" : "<complete/path/to/the/DynamoDB/Formatted/JSON/file>"
},
"sink" : {
"type" : "nosqldb",
"table" : "<table_name>",
"storeName" : "kvstore",
"helperHosts" : ["<hostname>:5000"]
"schemaInfo" : {
"defaultSchema" : true,

1-257
Chapter 1
Develop

"DDBPartitionKey" : "<PrimaryKey:Datatype>",
},
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}

For a DynamoDB JSON source, the default schema for the table will be as
shown below:

CREATE TABLE IF NOT EXISTS <TABLE_NAME>(DDBPartitionKey_name DDBPartitionKey_type,
[DDBSortKey_name DDBSortKey_type], DOCUMENT JSON,
PRIMARY KEY(SHARD(DDBPartitionKey_name),[DDBSortKey_name]))

Where:
TABLE_NAME = value provided for the sink 'table' in the configuration
DDBPartitionKey_name = value provided for the partition key in the
configuration
DDBPartitionKey_type = value provided for the data type of the partition key in
the configuration
DDBSortKey_name = value provided for the sort key in the configuration, if any
DDBSortKey_type = value provided for the data type of the sort key in the
configuration, if any
DOCUMENT = all attributes except the partition and sort key of a DynamoDB
table item, aggregated into a NoSQL JSON column
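As an illustration, assuming a hypothetical partition key Id of type INTEGER and no
sort key ("DDBPartitionKey" : "Id:INTEGER" in the sink configuration), the default
schema created at the sink would be along the lines of:

CREATE TABLE IF NOT EXISTS <table_name>(Id INTEGER, DOCUMENT JSON,
PRIMARY KEY(SHARD(Id)))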
• Option 2: Importing DynamoDB table as fixed columns using a user-supplied
schema file.
Here the defaultSchema is FALSE and you specify the schemaPath as a file
containing your DDL statement. See Mapping of DynamoDB types to Oracle
NoSQL types for more details.

Note:
If the DynamoDB table has a data type that is not supported in
NoSQL, the migration fails.

A sample schema file is shown below.

CREATE TABLE IF NOT EXISTS sampledynDBImp (AccountId INTEGER, document JSON,
PRIMARY KEY(SHARD(AccountId)));

The schema file is used to create the table at the sink as part of the migration.
As long as the primary key data is provided, the input JSON record will be
inserted; otherwise, an error is thrown.


Note:
If the input data does not contain a value for a particular column (other than
the primary key), then the column default value will be used. The default
value should be part of the column definition while creating the table. For
example, id INTEGER not null default 0. If the column does not have a
default definition, then SQL NULL is inserted if no values are provided for
the column.

{
"source" : {
"type" : "file",
"format" : "dynamodb_json",
"dataPath" : "<complete/path/to/the/DynamoDB/Formatted/JSON/file>"
},
"sink" : {
"type" : "nosqldb",
"table" : "<table_name>",
"schemaInfo" : {
"defaultSchema" : false,
"readUnits" : 100,
"writeUnits" : 60,
"schemaPath" : "<full path of the schema file with the DDL
statement>",
"storageSize" : 1
},
"storeName" : "kvstore",
"helperHosts" : ["<hostname>:5000"]
},
"abortOnError" : true,
"migratorVersion" : "1.0.0"
}

2. Open the command prompt and navigate to the directory where you extracted the NoSQL
Database Migrator utility.
3. Run the runMigrator command by passing the configuration file using the --config or
-c option.

[~/nosqlMigrator/nosql-migrator-1.0.0]$./runMigrator
--config <complete/path/to/the/JSON/config/file>

4. The utility proceeds with the data migration, as shown below.

Records provided by source=7,
Records written to sink=7,
Records failed=0,
Records skipped=0.
Elapsed time: 0 min 2sec 50ms
Migration completed.

Validation


Start the SQL prompt in your KVStore.

java -jar lib/sql.jar -helper-hosts localhost:5000 -store kvstore

Verify that the new table is created with the source data:

desc <table_name>
SELECT * from <table_name>

Migrate from CSV file to Oracle NoSQL Database


This example shows the usage of Oracle NoSQL Database Migrator to copy data from
a CSV file to Oracle NoSQL Database.

Example
After evaluating multiple options, an organization finalizes Oracle NoSQL Database as
its NoSQL Database platform. As its source contents are in CSV file format, it is
looking for a way to migrate them to Oracle NoSQL Database.
In this example, you will learn to migrate the data from a CSV file called course.csv,
which contains information about various courses offered by a university. You generate
the configuration file from the runMigrator utility.

You can also prepare the configuration file with the identified source and sink details.
See Oracle NoSQL Database Migrator Reference.
Prerequisites
• Identify the source and sink for the migration.
– Source: CSV file
In this example, the source file is course.csv

cat [~/nosql-migrator-1.5.0]/course.csv
1,"Computer Science", "San Francisco", "2500"
2,"Bio-Technology", "Los Angeles", "1200"
3,"Journalism", "Las Vegas", "1500"
4,"Telecommunication", "San Francisco", "2500"

– Sink: Oracle NoSQL Database


• The CSV file must conform to the RFC4180 format.
• Create a file containing the DDL commands for the schema of the target table,
course. The table definition must match the CSV data file with respect to the number
of columns and their types.
In this example, the DDL file is mytable_schema.ddl

cat [~/nosql-migrator-1.5.0]/mytable_schema.ddl
create table course (id INTEGER, name STRING, location STRING, fees
INTEGER, PRIMARY KEY(id));

Procedure


To migrate the CSV file data from course.csv to Oracle NoSQL Database, perform
the following steps:
1. Open the command prompt and navigate to the directory where you extracted the Oracle
NoSQL Database Migrator utility.
2. To generate the configuration file using Oracle NoSQL Database Migrator, execute the
runMigrator command without any runtime parameters.

[~/nosql-migrator-1.5.0]$./runMigrator

3. As you did not provide the configuration file as a runtime parameter, the utility prompts if
you want to generate the configuration now. Type y.
You can choose a location for the configuration file or retain the default location by
pressing the Enter key.

Configuration file is not provided. Do you want to generate configuration? (y/n) [n]: y
Generating a configuration file interactively.

Enter a location for your config [./migrator-config.json]:
./migrator-config.json already exist. Do you want to overwrite?(y/n) [n]: y

4. Based on the prompts from the utility, choose your options for the Source configuration.

Select the source:
1) nosqldb
2) nosqldb_cloud
3) file
4) object_storage_oci
5) aws_s3
#? 3

Configuration for source type=file

Select the source file format:
1) json
2) mongodb_json
3) dynamodb_json
4) csv
#? 4

5. Provide the path to the source CSV file. Further, based on the prompts from the utility,
you can choose to reorder the column names, select the encoding method, and trim
trailing spaces.

Enter path to a file or directory containing csv data: [~/nosql-migrator-1.5.0]/course.csv
Does the CSV file contain a headerLine? (y/n) [n]: n
Do you want to reorder the column names of NoSQL table with respect to
CSV file columns? (y/n) [n]: n
Provide the CSV file encoding. The supported encodings are:
UTF-8,UTF-16,US-ASCII,ISO-8859-1. [UTF-8]:
Do you want to trim the tailing spaces? (y/n) [n]: n

6. Based on the prompts from the utility, choose your options for the Sink
configuration.

Select the sink:
1) nosqldb
2) nosqldb_cloud
#? 1
Configuration for sink type=nosqldb
Enter store name of the Oracle NoSQL Database: mystore
Enter comma separated list of host:port of Oracle NoSQL Database:
<hostname>:5000

7. Based on the prompts from the utility, provide the name of the target table.

Enter fully qualified table name: course

8. Enter your choice to set the TTL value. The default value is n.

Include TTL data? If you select 'yes' TTL value provided by the
source will be set on imported rows. (y/n) [n]: n

9. Based on the prompts from the utility, specify whether or not the target table must
be created through the Oracle NoSQL Database Migrator tool. If the table is
already created, it is suggested to provide n. If the table is not created, the utility
will request the path for the file containing the DDL commands for the schema of
the target table.

Would you like to create table as part of migration process?
Use this option if you want to create table through the migration tool.
If you select yes, you will be asked to provide a file that contains
table DDL or to use schema provided by the source or default schema.
(y/n) [n]: y
Enter path to a file containing table DDL: [~/nosql-migrator-1.5.0]/mytable_schema.ddl
Is the store secured? (y/n) [y]: n
would you like to overwrite records which are already present?
If you select 'no' records with same primary key will be skipped
[y/n] [y]: y
Enter store operation timeout in milliseconds. [5000]:
Would you like to add transformations to source data? (y/n) [n]: n

10. Enter your choice to determine whether to proceed with the migration in case any
record fails to migrate.

Would you like to continue migration if any data fails to be migrated?
(y/n) [n]: n

11. The utility displays the generated configuration on the screen.

Generated configuration is:

{
"source" : {
"type" : "file",
"format" : "csv",
"dataPath" : "[~/nosql-migrator-1.5.0]/course.csv",
"hasHeader" : false,
"csvOptions" : {
"encoding" : "UTF-8",
"trim" : false
}
},
"sink" : {
"type" : "nosqldb",
"storeName" : "mystore",
"helperHosts" : ["<hostname>:5000"],
"table" : "migrated_table",
"query" : "",
"includeTTL" : false,
"schemaInfo" : {
"schemaPath" : "[~/nosql-migrator-1.5.0]/mytable_schema.ddl"
},
"overwrite" : true,
"requestTimeoutMs" : 5000
},
"abortOnError" : true,
"migratorVersion" : "1.5.0"
}

12. Finally, the utility prompts you to specify whether or not to proceed with the migration
using the generated configuration file. The default option is y.
Note: If you select n, you can use the generated configuration file to perform the
migration by running ./runMigrator with the -c or --config option.

Would you like to run the migration with above configuration?
If you select no, you can use the generated configuration file to
run the migration using:
./runMigrator --config ./migrator-config.json
(y/n) [y]: y

13. The NoSQL Database Migrator copies your data from the CSV file to Oracle NoSQL
Database.

creating source from given configuration:
source creation completed
creating sink from given configuration:
sink creation completed
creating migrator pipeline
migration started
[nosqldb sink] : start loading DDLs
[nosqldb sink] : executing DDL: create table course (id INTEGER,
name STRING, location STRING, fees INTEGER, PRIMARY KEY(id))
[nosqldb sink] : completed loading DDLs
[nosqldb sink] : start loading records
[csv file source] : start parsing CSV records from file: course.csv
migration completed. Records provided by source=4, Records written
to sink=4, Records failed=0, Records skipped=0.
Elapsed time: 0min 0sec 559ms
Migration completed.

Validation
Start the SQL prompt in your KVStore.

java -jar lib/sql.jar -helper-hosts <hostname>:5000 -store mystore

Verify that the new table is created with the source data:

sql-> select * from course;

{"id":4,"name":"Telecommunication","location":"San Francisco","fees":2500}
{"id":1,"name":"Computer Science","location":"San Francisco","fees":2500}
{"id":2,"name":"Bio-Technology","location":"Los Angeles","fees":1200}
{"id":3,"name":"Journalism","location":"Las Vegas","fees":1500}

4 rows returned

Oracle NoSQL Database Migrator Reference


Learn about source, sink, and transformation configuration template parameters
available for Oracle NoSQL Database Migrator.
This article has the following topics:

Source Configuration Templates


Learn about the source configuration file formats for each valid source and the
purpose of each configuration parameter.
For the configuration file template, see Configuration File in Terminology used with
NoSQL Data Migrator.
For details on valid sink formats for each of the sources, see Sink Configuration
Templates.

Topics
• JSON as the File Source


The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a JSON file as a source to a valid sink.
• JSON File in OCI Object Storage Bucket
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a JSON file in the OCI Object Storage bucket as a source to a valid sink.
• MongoDB-Formatted JSON File
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a MongoDB-Formatted JSON file as a source to a valid sink.
• MongoDB-Formatted JSON File in OCI Object Storage bucket
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a MongoDB-Formatted JSON file in the OCI Object Storage bucket as a source
to a valid sink.
• DynamoDB-Formatted JSON File stored in AWS S3
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a DynamoDB-Formatted JSON file in the AWS S3 storage as a source to a
valid sink.
• DynamoDB-Formatted JSON File
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a DynamoDB-Formatted JSON file as a source to a valid sink.
• Oracle NoSQL Database
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from Oracle NoSQL Database tables as a source to a valid sink.
• Oracle NoSQL Database Cloud Service
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from Oracle NoSQL Database Cloud Service tables as a source to a valid sink.
• CSV as the File Source
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a CSV file as a source to a valid sink.
• CSV file in OCI Object Storage Bucket
The source configuration template for the Oracle NoSQL Database Migrator to copy the
data from a CSV file stored in an OCI Object Storage bucket as a source to a valid sink.

JSON as the File Source


The configuration file format for JSON File as a source of NoSQL Database Migrator is
shown below.
You can migrate a JSON source file from a file path or a directory by specifying the file path
or directory in the source configuration template.
A sample JSON source file is as follows:

{"Item":{"PK":{"S":"ACCT#82691500"},"SK":
{"S":"ACCT#82691500"},"AccountIndexId":{"S":"ACCT#82691500"},"Emailid":
{"S":"alejandro.rosalez11@example.org"},"AccountId":
{"N":"82691500"},"PlasticCardNumber":{"S":"9610432116466295"},"FirstName":
{"S":"Alejandro"},"Addresses":{"M":{"RESIDENCE":{"M":{"city":{"S":"Any
Town"},"country":{"S":"USA"},"street":{"S":"123 Any Street"}}},"BUSINESS":
{"M":{"city":{"S":"Anytown"},"country":{"S":"country"},"street":{"S":"221
Main Street"}}}}},"LastName":{"S":"Rosalez"}}}
{"Item":{"PK":{"S":"ACCT#76584123"},"SK":

1-265
Chapter 1
Develop

{"S":"ACCT#76584123"},"AccountIndexId":{"S":"ACCT#76584123"},"Emailid":
{"S":"zhang.wei@example.com"},"AccountId":
{"N":"76584123"},"PlasticCardNumber":
{"S":"4235400034568756"},"FirstName":{"S":"Zhang"},"Addresses":{"M":
{"RESIDENCE":{"M":{"city":{"S":"Any Town"},"country":
{"S":"USA"},"street":{"S":"135 Any Street"}}},"BUSINESS":{"M":{"city":
{"S":"AnyTown"},"country":{"S":"country"},"street":{"S":"100 Main
Street"}}}}},"LastName":{"S":"Wei"},"AuthUsers":{"M":{"AUTHUSER-2":
{"M":{"Name":{"S":"Mateo Jackson"},"PlasticCardNumber":
{"S":"4036516984267960"}}},"AUTHUSER-1":{"M":{"Name":{"S":"Paulo
Santos"},"PlasticCardNumber":{"S":"4036546984262340"}}}}}}}

Source Configuration Template

"source" : {
"type" : "file",
"format" : "json",
"dataPath": "</path/to/a/json/file>",
"schemaInfo": {
"schemaPath": "</path/to/schema/file>"
}
}
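A filled-in sketch of this template, using hypothetical paths:

"source" : {
"type" : "file",
"format" : "json",
"dataPath" : "/home/user/sample.json",
"schemaInfo" : {
"schemaPath" : "/home/user/mytable/Schema/schema.ddl"
}
}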

Source Parameters
• type
• format
• dataPath
• schemaInfo
• schemaInfo.schemaPath

type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"

format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "json"

dataPath
• Purpose: Specifies the absolute path to a file or directory containing the JSON
data for migration.


You must ensure that this data matches the NoSQL table schema defined at the
sink. If you specify a directory, the NoSQL Database Migrator identifies all the files with
the .json extension in that directory for the migration. Sub-directories are not supported.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Specifying a JSON file
"dataPath" : "/home/user/sample.json"
– Specifying a directory
"dataPath" : "/home/user"

schemaInfo
• Purpose: Specifies the schema of the source data being migrated. This schema is
passed to the NoSQL sink.
• Data Type: Object
• Mandatory (Y/N): N

schemaInfo.schemaPath
• Purpose: Specifies the absolute path to the schema definition file containing DDL
statements for the NoSQL table being migrated.
• Data Type: string
• Mandatory (Y/N): Y
• Example:

"schemaInfo" : {
"schemaPath" : "/home/user/mytable/Schema/schema.ddl"
}

JSON File in OCI Object Storage Bucket


The configuration file format for JSON file in OCI Object Storage bucket as a source of
NoSQL Database Migrator is shown below.
You can migrate a JSON file in the OCI Object Storage bucket by specifying the name of the
bucket in the source configuration template.
A sample JSON source file in the OCI Object Storage bucket is as follows:

{"Item":{"PK":{"S":"ACCT#82691500"},"SK":
{"S":"ACCT#82691500"},"AccountIndexId":{"S":"ACCT#82691500"},"Emailid":
{"S":"alejandro.rosalez11@example.org"},"AccountId":
{"N":"82691500"},"PlasticCardNumber":{"S":"9610432116466295"},"FirstName":
{"S":"Alejandro"},"Addresses":{"M":{"RESIDENCE":{"M":{"city":{"S":"Any
Town"},"country":{"S":"USA"},"street":{"S":"123 Any Street"}}},"BUSINESS":
{"M":{"city":{"S":"Anytown"},"country":{"S":"country"},"street":{"S":"221
Main Street"}}}}},"LastName":{"S":"Rosalez"}}}
{"Item":{"PK":{"S":"ACCT#76584123"},"SK":

1-267
Chapter 1
Develop

{"S":"ACCT#76584123"},"AccountIndexId":{"S":"ACCT#76584123"},"Emailid":
{"S":"zhang.wei@example.com"},"AccountId":
{"N":"76584123"},"PlasticCardNumber":
{"S":"4235400034568756"},"FirstName":{"S":"Zhang"},"Addresses":{"M":
{"RESIDENCE":{"M":{"city":{"S":"Any Town"},"country":
{"S":"USA"},"street":{"S":"135 Any Street"}}},"BUSINESS":{"M":{"city":
{"S":"AnyTown"},"country":{"S":"country"},"street":{"S":"100 Main
Street"}}}}},"LastName":{"S":"Wei"},"AuthUsers":{"M":{"AUTHUSER-2":
{"M":{"Name":{"S":"Mateo Jackson"},"PlasticCardNumber":
{"S":"4036516984267960"}}},"AUTHUSER-1":{"M":{"Name":{"S":"Paulo
Santos"},"PlasticCardNumber":{"S":"4036546984262340"}}}}}}}

Note:
The valid sink types for OCI Object Storage source type are nosqldb and
nosqldb_cloud.

Source Configuration Template

"source" : {
"type" : "object_storage_oci",
"format" : "json",
"endpoint" : "<OCI Object Storage service endpoint URL or region
ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"schemaInfo" : {
"schemaObject" : "<object name>"
},
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>
}
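A filled-in sketch of this template, combining the illustrative values from the parameter
descriptions below (all values are placeholders):

"source" : {
"type" : "object_storage_oci",
"format" : "json",
"endpoint" : "us-ashburn-1",
"namespace" : "my-namespace",
"bucket" : "staging_bucket",
"prefix" : "my_table/Data",
"schemaInfo" : {
"schemaObject" : "mytable/Schema/schema.ddl"
},
"credentials" : "/home/user/.oci/config",
"credentialsProfile" : "DEFAULT"
}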

Source Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• schemaInfo
• schemaInfo.schemaObject
• credentials
• credentialsProfile


• useInstancePrincipal

type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"

format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "json"

endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data Regions and
Associated Service URLs in Using Oracle NoSQL Database Cloud Service for the list of
data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"

namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an optional
parameter. If you don't specify this parameter, the default namespace of the tenancy is
used.
• Data Type: string
• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"

bucket
• Purpose: Specifies the name of the bucket, which contains the source JSON files.
Ensure that the required bucket already exists in the OCI Object Storage instance and
has read permissions.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"


prefix
• Purpose: Used for filtering the objects that are being migrated from the bucket. All
the objects with the given prefix present in the bucket are migrated. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If you do not provide any value, no filter is applied and all the objects present in
the bucket are migrated.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "my_table/Data/000000.json" (migrates only 000000.json)
2. "prefix" : "my_table/Data" (migrates all the objects with prefix my_table/
Data)

schemaInfo
• Purpose: Specifies the schema of the source data being migrated. This schema is
passed to the NoSQL sink.
• Data Type: Object
• Mandatory (Y/N): N

schemaInfo.schemaObject
• Purpose: Specifies the name of the object in the bucket where NoSQL table
schema definitions for the data being migrated are stored.
• Data Type: string
• Mandatory (Y/N): Y
• Example:

"schemaInfo" : {
"schemaObject" : "mytable/Schema/schema.ddl"
}

credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config
See Example Configuration for an example of the credentials file.

Note:
Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at the
same time.


• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "credentials" : "/home/user/.oci/config"
2. "credentials" : "/home/user/security/config"

credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle NoSQL
Database Cloud Service. User account credentials are referred to as a 'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.

Note:
This parameter is valid ONLY if the credentials parameter is specified.

• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "credentialsProfile" : "DEFAULT"
2. "credentialsProfile": "ADMIN_USER"

useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance principal
authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on Instance Principal authentication method, see Source and Sink Security .
If not specified, it defaults to false.

Note:

– It is supported ONLY when the NoSQL Database Migrator tool is running
within an OCI compute instance, for example, NoSQL Database Migrator
tool running in a VM hosted on OCI.
– Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters MUST
be specified. Additionally, these two parameters are mutually exclusive.
Specify ONLY one of these parameters, but not both at the same time.

• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true


MongoDB-Formatted JSON File


The configuration file format for MongoDB-formatted JSON File as a source of NoSQL
Database Migrator is shown below.
You can copy a file or directory containing the MongoDB exported JSON data for
migration by specifying the file or directory in the source configuration template.
A sample MongoDB-formatted JSON File is as follows:

{"_id":0,"name":"Aimee Zank","scores":
[{"score":1.463179736705023,"type":"exam"},
{"score":11.78273309957772,"type":"quiz"},
{"score":35.8740349954354,"type":"homework"}]}
{"_id":1,"name":"Aurelia Menendez","scores":
[{"score":60.06045071030959,"type":"exam"},
{"score":52.79790691903873,"type":"quiz"},
{"score":71.76133439165544,"type":"homework"}]}
{"_id":2,"name":"Corliss Zuk","scores":
[{"score":67.03077096065002,"type":"exam"},
{"score":6.301851677835235,"type":"quiz"},
{"score":66.28344683278382,"type":"homework"}]}
{"_id":3,"name":"Bao Ziglar","scores":
[{"score":71.64343899778332,"type":"exam"},
{"score":24.80221293650313,"type":"quiz"},
{"score":42.26147058804812,"type":"homework"}]}
{"_id":4,"name":"Zachary Langlais","scores":
[{"score":78.68385091304332,"type":"exam"},
{"score":90.2963101368042,"type":"quiz"},
{"score":34.41620148042529,"type":"homework"}]}

MongoDB supports two types of extensions to the JSON format of files, Canonical
mode and Relaxed mode. You can supply the MongoDB-formatted JSON file that is
generated using the mongoexport tool in either Canonical or Relaxed mode. Both
modes are supported by the NoSQL Database Migrator for migration.
For more information on the MongoDB Extended JSON (v2) file, see
mongoexport_formats.
For more information on generating a MongoDB-formatted JSON file, see
mongoexport.

Source Configuration Template

"source" : {
"type" : "file",
"format" : "mongodb_json",
"dataPath": "</path/to/a/json/file>",
"schemaInfo": {
"schemaPath": "</path/to/schema/file>"
}
}


Source Parameters
• type
• format
• dataPath
• schemaInfo
• schemaInfo.schemaPath

type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"

format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "mongodb_json"

dataPath
• Purpose: Specifies the absolute path to a file or directory containing the MongoDB
exported JSON data for migration.
You must have generated these files using the mongoexport tool. See mongoexport for
more information.
You can supply the MongoDB-formatted JSON file that is generated using the
mongoexport tool in either canonical or relaxed mode. Both modes are supported by
the NoSQL Database Migrator for migration.
For more information on the MongoDB Extended JSON (v2) file, see
mongoexport_formats.
If you specify a directory, the NoSQL Database Migrator identifies all the files with
the .json extension in that directory for the migration. Sub-directories are not supported.
You must ensure that this data matches the NoSQL table schema defined at the
sink.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Specifying a MongoDB formatted JSON file
"dataPath" : "/home/user/sample.json"
– Specifying a directory
"dataPath" : "/home/user"


schemaInfo
• Purpose: Specifies the schema of the source data being migrated. This schema is
passed to the NoSQL sink.
• Data Type: Object
• Mandatory (Y/N): N

schemaInfo.schemaPath
• Purpose: Specifies the absolute path to the schema definition file containing DDL
statements for the NoSQL table being migrated.
• Data Type: string
• Mandatory (Y/N): Y
• Example:

"schemaInfo" : {
"schemaPath" : "/home/user/mytable/Schema/schema.ddl"
}

MongoDB-Formatted JSON File in OCI Object Storage bucket


The configuration file format for MongoDB-Formatted JSON file in OCI Object Storage
bucket as a source of NoSQL Database Migrator is shown below.
You can copy the MongoDB exported JSON data in the OCI Object Storage bucket for
migration by specifying the name of the bucket in the source configuration template.
A sample MongoDB-formatted JSON File is as follows:

{"_id":0,"name":"Aimee Zank","scores":
[{"score":1.463179736705023,"type":"exam"},
{"score":11.78273309957772,"type":"quiz"},
{"score":35.8740349954354,"type":"homework"}]}
{"_id":1,"name":"Aurelia Menendez","scores":
[{"score":60.06045071030959,"type":"exam"},
{"score":52.79790691903873,"type":"quiz"},
{"score":71.76133439165544,"type":"homework"}]}
{"_id":2,"name":"Corliss Zuk","scores":
[{"score":67.03077096065002,"type":"exam"},
{"score":6.301851677835235,"type":"quiz"},
{"score":66.28344683278382,"type":"homework"}]}
{"_id":3,"name":"Bao Ziglar","scores":
[{"score":71.64343899778332,"type":"exam"},
{"score":24.80221293650313,"type":"quiz"},
{"score":42.26147058804812,"type":"homework"}]}
{"_id":4,"name":"Zachary Langlais","scores":
[{"score":78.68385091304332,"type":"exam"},
{"score":90.2963101368042,"type":"quiz"},
{"score":34.41620148042529,"type":"homework"}]}

Extract the data from MongoDB using the mongoexport utility and upload it to the OCI
Object Storage bucket. See mongoexport for more information.
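One way to upload the exported file to the bucket is with the OCI CLI (a sketch; the
bucket name and file path are placeholders):

oci os object put --bucket-name <bucket name> --file <path/to/exported/file.json>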


Note:
The valid sink types for OCI Object Storage source type are nosqldb and
nosqldb_cloud.

Source Configuration Template

"source" : {
"type" : "object_storage_oci",
"format" : "mongodb_json",
"endpoint" : "<OCI Object Storage service endpoint URL or region ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"schemaInfo" : {
"schemaObject" : "<object name>"
},
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>
}

Source Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• schemaInfo
• schemaInfo.schemaObject
• credentials
• credentialsProfile
• useInstancePrincipal

type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"

format
• Purpose: Specifies the source format.


• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "mongodb_json"

endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data
Regions and Associated Service URLs in Using Oracle NoSQL Database Cloud
Service for the list of data regions supported for Oracle NoSQL Database Cloud
Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"

namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an
optional parameter. If you don't specify this parameter, the default namespace of
the tenancy is used.
• Data Type: string
• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"

bucket
• Purpose: Specifies the name of the bucket, which contains the source MongoDB-
Formatted JSON files. Ensure that the required bucket already exists in the OCI
Object Storage instance and has read permissions.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"

prefix
• Purpose: Used for filtering the objects that are being migrated from the bucket. All
the objects with the given prefix present in the bucket are migrated. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If you do not provide any value, no filter is applied and all the MongoDB JSON
formatted objects present in the bucket are migrated. Extract the data from
MongoDB using the mongoexport utility and upload it to the OCI Object Storage
bucket. See mongoexport for more information.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "mongo_export/Data/table.json" (migrates only table.json)
2. "prefix" : "mongo_export/Data" (migrates all the objects with prefix
mongo_export/Data)

schemaInfo
• Purpose: Specifies the schema of the source data being migrated. This schema is
passed to the NoSQL sink.
• Data Type: Object
• Mandatory (Y/N): N

schemaInfo.schemaObject
• Purpose: Specifies the name of the object in the bucket where NoSQL table schema
definitions for the data being migrated are stored.
• Data Type: string
• Mandatory (Y/N): Y
• Example:

"schemaInfo" : {
"schemaObject" : "mytable/Schema/schema.ddl"
}

credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config
See Example Configuration for an example of the credentials file.

Note:
Even though the credentials and useInstancePrincipal parameters
are not mandatory individually, one of these parameters MUST be specified.
Additionally, these two parameters are mutually exclusive. Specify ONLY one of
these parameters, but not both at the same time.

• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "credentials" : "/home/user/.oci/config"
2. "credentials" : "/home/user/security/config"


credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle
NoSQL Database Cloud Service. User account credentials are referred to as a
'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.

Note:
This parameter is valid ONLY if the credentials parameter is
specified.

• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "credentialsProfile" : "DEFAULT"
2. "credentialsProfile": "ADMIN_USER"

useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance
principal authentication to connect to Oracle NoSQL Database Cloud Service. For
more information on Instance Principal authentication method, see Source and
Sink Security .
If not specified, it defaults to false.

Note:

– It is supported ONLY when the NoSQL Database Migrator tool is
running within an OCI compute instance, for example, NoSQL
Database Migrator tool running in a VM hosted on OCI.
– Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at
the same time.

• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true

DynamoDB-Formatted JSON File stored in AWS S3


The configuration file format for DynamoDB-formatted JSON File in AWS S3 as a
source of NoSQL Database Migrator is shown below.


You can migrate a file containing the DynamoDB exported JSON data from the AWS S3
storage by specifying the path in the source configuration template.
A sample DynamoDB-formatted JSON File is as follows:

{"Item":{"Id":{"N":"101"},"Phones":{"L":[{"L":[{"S":"555-222"},
{"S":"123-567"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":
{"N":"570004"},"Street":{"S":"21 main"},"DoorNum":{"N":"201"},"City":
{"S":"London"}}},"FirstName":{"S":"Fred"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"Smith"},"FavColors":{"SS":["Red","Green"]},"Age":
{"N":"22"}}}
{"Item":{"Id":{"N":"102"},"Phones":{"L":[{"L":
[{"S":"222-222"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":
{"N":"560014"},"Street":{"S":"32 main"},"DoorNum":{"N":"1024"},"City":
{"S":"Wales"}}},"FirstName":{"S":"John"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"White"},"FavColors":{"SS":["Blue"]},"Age":
{"N":"48"}}}

You must export the DynamoDB table to AWS S3 storage as specified in Exporting
DynamoDB table data to Amazon S3.
The valid sink types for DynamoDB-formatted JSON stored in AWS S3 are nosqldb and
nosqldb_cloud.

Source Configuration Template

"source" : {
"type" : "aws_s3",
"format" : "dynamodb_json",
"s3URL" : "<S3 object url>",
"credentials" : "</path/to/aws/credentials/file>",
"credentialsProfile" : <"profile name in aws credentials file">
}

Source Parameters:
• type
• format
• s3URL
• credentials
• credentialsProfile
type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "aws_s3"

format
• Purpose: Specifies the source format.
• Data Type: string


• Mandatory (Y/N): Y
• Example: "format" : "dynamodb_json"

Note:
If the value of the "type" is aws_s3, then format must be dynamodb_json.

s3URL
• Purpose: Specifies the URL of an exported DynamoDB table stored in AWS S3.
You can obtain this URL from the AWS S3 console. The valid URL format is
https://<bucket-name>.<s3_endpoint>/<prefix>. The migrator looks for .json.gz
files under the given prefix for import.

Note:
You must export DynamoDB table as specified in Exporting DynamoDB
table data to Amazon S3.

• Data Type: string


• Mandatory (Y/N): Y
• Example: https://my-bucket.s3.ap-south-1.amazonaws.com/AWSDynamoDB/
01649660790057-14f642be

credentials
• Purpose: Specifies the absolute path to a file containing the AWS credentials. If
not specified, it defaults to $HOME/.aws/credentials. Please refer to Configuration
and credential file settings for more details on the credentials file.
• Data Type: string
• Mandatory (Y/N): N
• Example:

"credentials" : "/home/user/.aws/credentials"
"credentials" : "/home/user/security/credentials

Note:
The Migrator does not log any of the credentials information. You should
properly protect the credentials file from unauthorized access.

credentialsProfile
• Purpose: Name of the profile in the AWS credentials file to be used to connect to
AWS S3. User account credentials are referred to as a profile. If you do not specify
this value, it defaults to the default profile. Please refer to Configuration and
credential file settings for more details on the credentials file.


• Data Type: string


• Mandatory (Y/N): N
• Example:

"credentialsProfile" : "default"
"credentialsProfile": "test"

DynamoDB-Formatted JSON File


The configuration file format for DynamoDB-formatted JSON File as a source of NoSQL
Database Migrator is shown below.
You can migrate a file or directory containing the DynamoDB exported JSON data from a file
system by specifying the path in the source configuration template.
A sample DynamoDB-formatted JSON File is as follows:

{"Item":{"Id":{"N":"101"},"Phones":{"L":[{"L":[{"S":"555-222"},
{"S":"123-567"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":
{"N":"570004"},"Street":{"S":"21 main"},"DoorNum":{"N":"201"},"City":
{"S":"London"}}},"FirstName":{"S":"Fred"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"Smith"},"FavColors":{"SS":["Red","Green"]},"Age":
{"N":"22"}}}
{"Item":{"Id":{"N":"102"},"Phones":{"L":[{"L":
[{"S":"222-222"}]}]},"PremierCustomer":{"BOOL":false},"Address":{"M":{"Zip":
{"N":"560014"},"Street":{"S":"32 main"},"DoorNum":{"N":"1024"},"City":
{"S":"Wales"}}},"FirstName":{"S":"John"},"FavNumbers":{"NS":
["10"]},"LastName":{"S":"White"},"FavColors":{"SS":["Blue"]},"Age":
{"N":"48"}}}

You must copy the exported DynamoDB table data from AWS S3 storage to a local mounted
file system.
The valid sink types for DynamoDB JSON file are nosqldb and nosqldb_cloud.

Source Configuration Template

"source" : {
"type" : "file",
"format" : "dynamodb_json",
"dataPath" : "<path to a file or directory containing exported DDB table
data>"
}

Source Parameters:
• type
• format
• dataPath
type
• Purpose: Identifies the source type.


• Data Type: string


• Mandatory (Y/N): Y
• Example: "type" : "file"

format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "dynamodb_json"

dataPath
• Purpose: Specifies the absolute path to a file or directory containing the exported
DynamoDB table data. You must copy the exported DynamoDB table data from AWS
S3 to a local mounted file system. You must ensure that this data matches the
NoSQL table schema defined at the sink. If you specify a directory, the NoSQL
Database Migrator identifies all the files with the .json.gz extension in that
directory and the data sub-directory.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Specifying a file

"dataPath" : "/home/user/AWSDynamoDB/01639372501551-bb4dd8c3/
data/zclclwucjy6v5mkefvckxzhfvq.json.gz"

– Specifying a directory

"dataPath" : "/home/user/AWSDynamoDB/01639372501551-bb4dd8c3"

Oracle NoSQL Database


The configuration file format for Oracle NoSQL Database as a source of NoSQL
Database Migrator is shown below.
You can migrate a table from Oracle NoSQL Database by specifying the table name in
the source configuration template.
A sample Oracle NoSQL Database table is as follows:

{"id":20,"firstName":"Jane","lastName":"Smith","otherNames":
[{"first":"Jane","last":"teacher"}],"age":25,"income":55000,"address":
{"city":"San Jose","number":201,"phones":
[{"area":608,"kind":"work","number":6538955},
{"area":931,"kind":"home","number":9533341},
{"area":931,"kind":"mobile","number":9533382}],"state":"CA","street":"A
tlantic Ave","zip":95005},"connections":[40,75,63],"expenses":null}
{"id":10,"firstName":"John","lastName":"Smith","otherNames":
[{"first":"Johny","last":"chef"}],"age":22,"income":45000,"address":
{"city":"Santa Cruz","number":101,"phones":
[{"area":408,"kind":"work","number":4538955},

1-282
Chapter 1
Develop

{"area":831,"kind":"home","number":7533341},
{"area":831,"kind":"mobile","number":7533382}],"state":"CA","street":"Pacific
Ave","zip":95008},"connections":[30,55,43],"expenses":null}
{"id":30,"firstName":"Adam","lastName":"Smith","otherNames":
[{"first":"Adam","last":"handyman"}],"age":45,"income":75000,"address":
{"city":"Houston","number":301,"phones":
[{"area":618,"kind":"work","number":6618955},
{"area":951,"kind":"home","number":9613341},
{"area":981,"kind":"mobile","number":9613382}],"state":"TX","street":"Indian
Ave","zip":95075},"connections":[60,45,73],"expenses":null}

Source Configuration Template

"source" : {
"type": "nosqldb",
"table" : "<fully qualified table name>",
"storeName" : "<store name>",
"helperHosts" : ["hostname1:port1","hostname2:port2,..."],
"security" : "</path/to/store/security/file>",
"requestTimeoutMs" : 5000,
"includeTTL": <true|false>
}

Source Parameters
• type
• table
• storeName
• helperHosts
• security
• requestTimeoutMs
• includeTTL

type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "nosqldb"

table
• Purpose: Fully qualified table name from which to migrate the data.
Format: [namespace_name:]<table_name>
If the table is in the DEFAULT namespace, you can omit the namespace_name. The table
must exist in the store.
• Data Type: string
• Mandatory (Y/N): Y


• Example:
– With the DEFAULT namespace "table" :"mytable"
– With a non-default namespace "table" : "mynamespace:mytable"
– To specify a child table "table" : "mytable.child"

storeName
• Purpose: Name of the Oracle NoSQL Database store.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "storeName" : "kvstore"

helperHosts
• Purpose: A list of host and registry port pairs in the hostname:port format. Delimit
each item in the list using a comma. You must specify at least one helper host.
• Data Type: array of strings
• Mandatory (Y/N): Y
• Example: "helperHosts" : ["localhost:5000","localhost:6000"]

security
• Purpose:
If your store is a secure store, provide the absolute path to the security login file
that contains your store credentials. See Configuring Security with Remote Access
in the Administrator's Guide to learn more about the security login file.
You can use either password-file-based authentication or wallet-based
authentication. However, wallet-based authentication is supported only in the
Enterprise Edition (EE) of Oracle NoSQL Database. For more information on
wallet-based authentication, see Source and Sink Security.
The Community Edition (CE) supports password-file-based authentication only.
• Data Type: string
• Mandatory (Y/N): Y for a secure store
• Example:
"security" : "/home/user/client.credentials"
Example security file content for password file based authentication:

oracle.kv.password.noPrompt=true
oracle.kv.auth.username=admin
oracle.kv.auth.pwdfile.file=/home/nosql/login.passwd
oracle.kv.transport=ssl
oracle.kv.ssl.trustStore=/home/nosql/client.trust
oracle.kv.ssl.protocols=TLSv1.2
oracle.kv.ssl.hostnameVerifier=dnmatch(CN\=NoSQL)


Example security file content for wallet based authentication:

oracle.kv.password.noPrompt=true
oracle.kv.auth.username=admin
oracle.kv.auth.wallet.dir=/home/nosql/login.wallet
oracle.kv.transport=ssl
oracle.kv.ssl.trustStore=/home/nosql/client.trust
oracle.kv.ssl.protocols=TLSv1.2
oracle.kv.ssl.hostnameVerifier=dnmatch(CN\=NoSQL)

requestTimeoutMs
• Purpose: Specifies the time to wait for each read operation from the store to complete.
This is provided in milliseconds. The default value is 5000. The value can be any positive
integer.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "requestTimeoutMs" : 5000

includeTTL
• Purpose: Specifies whether or not to include TTL metadata for table rows when
exporting Oracle NoSQL Database tables. If set to true, the TTL data for rows also gets
included in the data provided by the source. TTL is present in the _metadata JSON
object associated with each row. The expiration time for each row gets exported as the
number of milliseconds since the UNIX epoch (Jan 1st, 1970).
If you do not specify this parameter, it defaults to false.
Only the rows having a positive expiration value for TTL get included as part of the
exported rows. If a row does not expire, which means TTL=0, then its TTL metadata is
not included explicitly. For example, if ROW1 expires at 2021-10-19 00:00:00 and ROW2
does not expire, the exported data looks as follows:

//ROW1
{
"id" : 1,
"name" : "abc",
"_metadata" : {
"expiration" : 1634601600000
}
}

//ROW2
{
"id" : 2,
"name" : "xyz"
}

• Data Type: boolean


• Mandatory (Y/N): N
• Example: "includeTTL" : true


Oracle NoSQL Database Cloud Service


The configuration file format for Oracle NoSQL Database Cloud Service as a source of
NoSQL Database Migrator is shown below.
You can migrate a table from Oracle NoSQL Database Cloud Service by specifying the
table name and the name or OCID of the compartment in which the table resides in the
source configuration template.
A sample Oracle NoSQL Database Cloud Service table is as follows:

{"id":20,"firstName":"Jane","lastName":"Smith","otherNames":
[{"first":"Jane","last":"teacher"}],"age":25,"income":55000,"address":
{"city":"San Jose","number":201,"phones":
[{"area":608,"kind":"work","number":6538955},
{"area":931,"kind":"home","number":9533341},
{"area":931,"kind":"mobile","number":9533382}],"state":"CA","street":"A
tlantic Ave","zip":95005},"connections":[40,75,63],"expenses":null}
{"id":10,"firstName":"John","lastName":"Smith","otherNames":
[{"first":"Johny","last":"chef"}],"age":22,"income":45000,"address":
{"city":"Santa Cruz","number":101,"phones":
[{"area":408,"kind":"work","number":4538955},
{"area":831,"kind":"home","number":7533341},
{"area":831,"kind":"mobile","number":7533382}],"state":"CA","street":"P
acific Ave","zip":95008},"connections":[30,55,43],"expenses":null}
{"id":30,"firstName":"Adam","lastName":"Smith","otherNames":
[{"first":"Adam","last":"handyman"}],"age":45,"income":75000,"address":
{"city":"Houston","number":301,"phones":
[{"area":618,"kind":"work","number":6618955},
{"area":951,"kind":"home","number":9613341},
{"area":981,"kind":"mobile","number":9613382}],"state":"TX","street":"I
ndian Ave","zip":95075},"connections":[60,45,73],"expenses":null}

Source Configuration Template

"source" : {
"type" : "nosqldb_cloud",
"endpoint" : "<Oracle NoSQL Cloud Service Endpoint. You can either
specify the complete URL or the Region ID alone>",
"table" : "<table name>",
"compartment" : "<OCI compartment name or id>",
"credentials" : "</path/to/oci/credential/file>",
"credentialsProfile" : "<oci credentials profile name>",
"readUnitsPercent" : <table readunits percent>,
"requestTimeoutMs" : <timeout in milli seconds>,
"useInstancePrincipal" : <true|false>,
"includeTTL": <true|false>
}

Source Parameters
• type
• endpoint


• table
• compartment
• credentials
• credentialsProfile
• readUnitsPercent
• requestTimeoutMs
• useInstancePrincipal
• includeTTL

type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "nosqldb_cloud"

endpoint
• Purpose: Specifies the Service Endpoint of the Oracle NoSQL Database Cloud Service.
You can either specify the complete URL or the Region ID alone. See Data Regions and
Associated Service URLs in Using Oracle NoSQL Database Cloud Service for the list of
data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://nosql.us-ashburn-1.oci.oraclecloud.com/"

table
• Purpose: Name of the table from which to migrate the data.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– To specify a table "table" : "myTable"
– To specify a child table "table" : "mytable.child"

compartment
• Purpose: Specifies the name or OCID of the compartment in which the table resides.
If you do not provide any value, it defaults to the root compartment.
You can find your compartment's OCID from the Compartment Explorer window under
Governance in the OCI Cloud Console.
• Data Type: string


• Mandatory (Y/N): Yes, if the table is not in the root compartment of the tenancy
OR when the useInstancePrincipal parameter is set to true.

Note:
If the useInstancePrincipal parameter is set to true, the compartment
parameter must specify the compartment OCID and not the name.

• Example:
– Compartment name
"compartment" : "mycompartment"
– Compartment name qualified with its parent compartment
"compartment" : "parent.childcompartment"
– No value provided. Defaults to the root compartment.
"compartment": ""
– Compartment OCID
"compartment" : "ocid1.tenancy.oc1...4ksd"

credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config.
See Example Configuration for an example of the credentials file.

Note:
Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at the
same time.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentials" : "/home/user/.oci/config"
2. "credentials" : "/home/user/security/config"

credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle
NoSQL Database Cloud Service. User account credentials are referred to as a
'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.


Note:
This parameter is valid ONLY if the credentials parameter is specified.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentialsProfile" : "DEFAULT"
2. "credentialsProfile": "ADMIN_USER"

readUnitsPercent
• Purpose: Percentage of table read units to be used while migrating the NoSQL table.
The default value is 90. The valid range is any integer between 1 and 100. Note that the
amount of time required to migrate data is directly proportional to this attribute. It is better
to increase the read throughput of the table for the migration activity. You can reduce the
read throughput after the migration process completes.
To learn the daily limits on throughput changes, see Cloud Limits in Using Oracle NoSQL
Database Cloud Service.

Note:
The time required for the data migration is directly proportional to the
readUnitsPercent value.

See Troubleshooting the Oracle NoSQL Database Migrator to learn how to use this
attribute to improve the data migration speed.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "readUnitsPercent" : 90

requestTimeoutMs
• Purpose: Specifies the time to wait for each read operation from the source to complete.
This is provided in milliseconds. The default value is 5000. The value can be any positive
integer.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "requestTimeoutMs" : 5000

useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance principal
authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on the Instance Principal authentication method, see Source and Sink Security.


If not specified, it defaults to false.

Note:

– It is supported ONLY when the NoSQL Database Migrator tool is
running within an OCI compute instance, for example, NoSQL
Database Migrator tool running in a VM hosted on OCI.
– Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at
the same time.

• Data Type: boolean


• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true

includeTTL
• Purpose: Specifies whether or not to include TTL metadata for table rows when
exporting Oracle NoSQL Database Cloud Service tables. If set to true, the TTL
data for rows also gets included in the data provided by the source. TTL is present
in the _metadata JSON object associated with each row. The expiration time for
each row gets exported as the number of milliseconds since the UNIX epoch (Jan
1st, 1970).
If you do not specify this parameter, it defaults to false.
Only the rows having a positive expiration value for TTL get included as part of the
exported rows. If a row does not expire, which means TTL=0, then its TTL
metadata is not included explicitly. For example, if ROW1 expires at 2021-10-19
00:00:00 and ROW2 does not expire, the exported data looks as follows:

//ROW1
{
"id" : 1,
"name" : "abc",
"_metadata" : {
"expiration" : 1634601600000
}
}

//ROW2
{
"id" : 2,
"name" : "xyz"
}

• Data Type: boolean


• Mandatory (Y/N): N
• Example: "includeTTL" : true


CSV as the File Source


The configuration file format for the CSV file as a source of NoSQL Database Migrator is
shown below. The CSV file must conform to the RFC4180 format.

You can migrate a CSV file or a directory containing the CSV data by specifying the file name
or directory in the source configuration template.
A sample CSV file is as follows:

1,"Computer Science","San Francisco","2500"


2,"Bio-Technology","Los Angeles","1200"
3,"Journalism","Las Vegas","1500"
4,"Telecommunication","San Francisco","2500"

Source Configuration Template

"source" : {
"type" : "file",
"format" : "csv",
"dataPath": "</path/to/a/csv/file-or-directory>",
"hasHeader" : <true | false>,
"columns" : ["column1", "column2", ....],
"csvOptions" : {
"trim" : <true | false>,
"encoding" : "<character set encoding>"
}
}

Source Parameters
• type
• format
• dataPath
• hasHeader
• columns
• csvOptions
• csvOptions.trim
• csvOptions.encoding

type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"


format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "csv"

dataPath
• Purpose: Specifies the absolute path to a file or directory containing the CSV data
for migration. If you specify a directory, NoSQL Database Migrator imports all the
files with the .csv or .CSV extension in that directory. All the CSV files are copied
into a single table, but not in any particular order.
CSV files must conform to the RFC4180 standard. You must ensure that the data in
each CSV file matches with the NoSQL Database table schema defined in the sink
table. Sub-directories are not supported.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Specifying a CSV file
"dataPath" : "/home/user/sample.csv"
– Specifying a directory
"dataPath" : "/home/user"

Note:
The CSV files must contain only scalar values. Importing CSV files
containing complex types such as MAP, RECORD, ARRAY, and JSON is not
supported. The NoSQL Database Migrator tool does not check for the
correctness of the data in the input CSV file. The NoSQL Database Migrator
tool supports the importing of CSV data that conforms to the RFC4180 format.
CSV files containing data that does not conform to the RFC4180 standard may
not get copied correctly or may result in an error. If the input data is
corrupted, the NoSQL Database Migrator tool will not parse the CSV records.
If any errors are encountered during migration, the NoSQL Database
Migrator tool logs the information about the failed input records for debugging
and informative purposes. For more details, see Logging Migrator Progress
in Using Oracle NoSQL Data Migrator.

hasHeader
• Purpose: Specifies if the CSV file has a header or not. If this is set to true, the
first line is ignored. If it is set to false, the first line is considered a CSV record.
The default value is false.
• Data Type: Boolean


• Mandatory (Y/N): N
• Example: "hasHeader" : "false"

columns
• Purpose: Specifies the list of NoSQL Database table column names. The order of the
column names indicates the mapping of the CSV file fields with corresponding NoSQL
Database table columns. If the order of the input CSV file columns does not match the
existing or newly created NoSQL Database table columns, you can map the ordering
using this parameter. Also, when importing into a table that has an Identity Column, you
can skip the Identity column name in the columns configuration.

Note:

– If the NoSQL Database table has additional columns that are not available
in the CSV file, the values of the missing columns are updated with the
default value as defined in the NoSQL Database table. If a default value is
not provided, a Null value is inserted during migration. For more information
on default values, see Data Type Definitions section in the SQL Reference
Guide.
– If the CSV file has additional columns that are not defined in the NoSQL
Database table, the additional column information is ignored.
– If any value in the CSV record is empty, it is set to the default value of the
corresponding columns in the NoSQL Database table. If a default value is
not provided, a Null value is inserted during migration.

• Data Type: Array of Strings


• Mandatory (Y/N): N
• Example: "columns" : ["table_column_1", "table_column_2"]

csvOptions
• Purpose: Specifies the formatting options for a CSV file. Provide the character set
encoding format of the CSV file and choose whether or not to trim the blank spaces.
• Data Type: Object
• Mandatory (Y/N): N

csvOptions.trim
• Purpose: Specifies if the leading and trailing blanks of a CSV field value must be
trimmed. The default value is false.
• Data Type: Boolean
• Mandatory (Y/N): N
• Example: "trim" : "true"

csvOptions.encoding
• Purpose: Specifies the character set to decode the CSV file. The default value is UTF-8.
The supported character sets are US-ASCII, ISO-8859-1, UTF-8, and UTF-16.


• Data Type: String


• Mandatory (Y/N): N
• Example: "encoding" : "UTF-8"

CSV file in OCI Object Storage Bucket


The configuration file format for the CSV file in OCI Object Storage bucket as a source
of NoSQL Database Migrator is shown below. The CSV file must conform to the
RFC4180 format.

You can migrate a CSV file in the OCI Object Storage bucket by specifying the name
of the bucket in the source configuration template.
A sample CSV file in the OCI Object Storage bucket is as follows:

1,"Computer Science","San Francisco","2500"


2,"Bio-Technology","Los Angeles","1200"
3,"Journalism","Las Vegas","1500"
4,"Telecommunication","San Francisco","2500"

Note:
The valid sink types for the OCI Object Storage source are nosqldb and
nosqldb_cloud.

Source Configuration Template

"source" : {
"type" : "object_storage_oci",
"format" : "csv",
"endpoint" : "<OCI Object Storage service endpoint URL or region
ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>,
"hasHeader" : <true | false>,
"columns" : ["column1", "column2", ....],
"csvOptions" : {
"trim" : <true | false>,
"encoding" : "<character set encoding>"
}
}

Source Parameters
• type
• format


• endpoint
• namespace
• bucket
• prefix
• credentials
• credentialsProfile
• useInstancePrincipal
• hasHeader
• columns
• csvOptions
• csvOptions.trim
• csvOptions.encoding

type
• Purpose: Identifies the source type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"

format
• Purpose: Specifies the source format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "csv"

endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data Regions and
Associated Service URLs in Using Oracle NoSQL Database Cloud Service for the list of
data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"

namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an optional
parameter. If you don't specify this parameter, the default namespace of the tenancy is
used.


• Data Type: string


• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"

bucket
• Purpose: Specifies the name of the bucket, which contains the source CSV files.
The NoSQL Database Migrator imports all the files with the .csv or .CSV extension
object-wise and copies them into a single table in the same order.
Ensure that the required bucket already exists in the OCI Object Storage instance
and that you have read permission on it.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"

Note:
The CSV files must contain only scalar values. Importing CSV files
containing complex types such as MAP, RECORD, ARRAY, and JSON is
not supported. The NoSQL Database Migrator tool does not check for
the correctness of the data in the input CSV file. The NoSQL Database
Migrator tool supports the importing of CSV data that conforms to the
RFC4180 format. CSV files containing data that does not conform to the
RFC4180 standard may not get copied correctly or may result in an error.
If the input data is corrupted, the NoSQL Database Migrator tool will not
parse the CSV records. If any errors are encountered during migration,
the NoSQL Database Migrator tool logs the information about the failed
input records for debugging and informative purposes. For more details,
see Logging Migrator Progress in Using Oracle NoSQL Data Migrator.

prefix
• Purpose: Used for filtering the objects that are being migrated from the bucket. All
the objects with the given prefix present in the bucket are migrated. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If you do not provide any value, no filter is applied and all the objects present in
the bucket are migrated.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "my_table/Data/000000.json" (migrates only 000000.json)
2. "prefix" : "my_table/Data" (migrates all the objects with prefix my_table/
Data)

credentials
• Purpose: Absolute path to a file containing OCI credentials.


If not specified, it defaults to $HOME/.oci/config.


See Example Configuration for an example of the credentials file.

Note:
You must specify either the credentials or the useInstancePrincipal
parameter in the configuration template.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentials" : "/home/user/.oci/config"
2. "credentials" : "/home/user/security/config"

credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle NoSQL
Database Cloud Service. User account credentials are referred to as a profile.
If you do not specify this value, it defaults to the DEFAULT profile.

Note:
This parameter is valid only if the credentials parameter is specified.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentialsProfile" : "DEFAULT"
2. "credentialsProfile": "ADMIN_USER"

useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Database Migrator tool uses instance
principal authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on the Instance Principal authentication method, see Source and Sink Security.
The default value is false.


Note:

– The authentication with Instance Principals is supported only when
the NoSQL Database Migrator tool is running within an OCI compute
instance, for example, NoSQL Database Migrator tool running in a
VM hosted on OCI.
– You must specify either the credentials or the
useInstancePrincipal parameter in the configuration template.

• Data Type: boolean


• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true

hasHeader
• Purpose: Specifies if the CSV file has a header or not. If this is set to true, the
first line is ignored. If it is set to false, the first line is considered a CSV record.
The default value is false.
• Data Type: Boolean
• Mandatory (Y/N): N
• Example: "hasHeader" : "false"

columns
• Purpose: Specifies the list of NoSQL Database table column names. The order of
the column names indicates the mapping of the CSV file fields with corresponding
NoSQL Database table columns. If the order of the input CSV file columns does
not match the existing or newly created NoSQL Database table columns, you can
map the ordering using this parameter. Also, when importing into a table that has
an Identity Column, you can skip the Identity column name in the columns
configuration.

Note:

– If the NoSQL Database table has additional columns that are not
available in the CSV file, the values of the missing columns are
updated with the default value as defined in the NoSQL Database
table. If a default value is not provided, a Null value is inserted during
migration. For more information on default values, see Data Type
Definitions section in the SQL Reference Guide.
– If the CSV file has additional columns that are not defined in the
NoSQL Database table, the additional column information is ignored.
– If any value in the CSV record is empty, it is set to the default value
of the corresponding columns in the NoSQL Database table. If a
default value is not provided, a Null value is inserted during
migration.


• Data Type: Array of Strings


• Mandatory (Y/N): N
• Example: "columns" : ["table_column_1", "table_column_2"]

csvOptions
• Purpose: Specifies the formatting options for a CSV file. Provide the character set
encoding format of the CSV file and choose whether or not to trim the blank spaces.
• Data Type: Object
• Mandatory (Y/N): N

csvOptions.trim
• Purpose: Specifies if the leading and trailing blanks of a CSV field value must be
trimmed. The default value is false.
• Data Type: Boolean
• Mandatory (Y/N): N
• Example: "trim" : "true"

csvOptions.encoding
• Purpose: Specifies the character set to decode the CSV file. The default value is UTF-8.
The supported character sets are US-ASCII, ISO-8859-1, UTF-8, and UTF-16.
• Data Type: String
• Mandatory (Y/N): N
• Example: "encoding" : "UTF-8"

Sink Configuration Templates


Learn about the sink configuration file formats for each valid sink and the purpose of each
configuration parameter.
For the configuration file template, see Configuration File in Terminology used with NoSQL
Data Migrator.
For details on valid source formats for each of the sinks, see Source Configuration
Templates.

Topics
• JSON as the File Sink
The sink configuration template for the Oracle NoSQL Database Migrator to copy the
data from a valid source to a JSON file as the sink.
• Parquet File
The sink configuration template for the Oracle NoSQL Database Migrator to copy the
data from a valid source to a Parquet file as the sink.
• JSON File in OCI Object Storage Bucket
The sink configuration template for the Oracle NoSQL Database Migrator to copy the
data from a valid source to a JSON file in the OCI Object Storage bucket as the sink.
• Parquet File in OCI Object Storage Bucket


The sink configuration template for the Oracle NoSQL Database Migrator to copy
the data from a valid source to a Parquet file in the OCI Object Storage bucket as
the sink.
• Oracle NoSQL Database
The sink configuration template for the Oracle NoSQL Database Migrator to copy
the data from a valid source to Oracle NoSQL Database tables as the sink.
• Oracle NoSQL Database Cloud Service
The sink configuration template for the Oracle NoSQL Database Migrator to copy
the data from a valid source to Oracle NoSQL Database Cloud Service tables as
the sink.

JSON as the File Sink


The configuration file format for JSON File as a sink of NoSQL Database Migrator is
shown below.

Sink Configuration Template

"sink" : {
"type" : "file",
"format" : "json",
"dataPath": "</path/to/a/file>",
"schemaPath" : "<path/to/a/file>",
"pretty" : <true|false>,
"useMultiFiles" : <true|false>,
"chunkSize" : <size in MB>
}

Sink Parameters
• type
• format
• dataPath
• schemaPath
• pretty
• useMultiFiles
• chunkSize

type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"

format
• Purpose: Specifies the sink format.
• Data Type: string


• Mandatory (Y/N): Y
• Example: "format" : "json"

dataPath
• Purpose: Specifies the absolute path to a file where the source data will be copied in the
JSON format.
If the file does not exist in the specified data path, the NoSQL Database Migrator creates
it. If it exists already, the NoSQL Database Migrator will overwrite its contents with the
source data.
You must ensure that the parent directory for the file specified in the data path is valid.

Note:
If the useMultiFiles parameter is set to true, specify the path to a directory;
otherwise, specify the path to a file.

• Data Type: string


• Mandatory (Y/N): Y
• Example:
– With useMultiFiles parameter set to true
"dataPath" :"/home/user/data"
– With useMultiFiles parameter not specified or it is set to false
"dataPath" :"/home/user/sample.json"

schemaPath
• Purpose: Specifies the absolute path to write schema information provided by the
source.
If this value is not defined, the source schema information will not be migrated to the sink.
If this value is specified, the migrator utility writes the schema of the source table into the
file specified here.
The schema information is written as one DDL command per line in this file. If the file
does not exist in the specified path, NoSQL Database Migrator creates it. If it exists
already, NoSQL Database Migrator overwrites its contents with the source schema. You
must ensure that the parent directory for the file specified in this path is valid.
• Data Type: string
• Mandatory (Y/N): N
• Example: "schemaPath" : "/home/user/schema_file"

pretty
• Purpose: Specifies whether or not to beautify the JSON output to increase readability.
If not specified, it defaults to false.
• Data Type: boolean
• Mandatory (Y/N): N


• Example: "pretty" : true

useMultiFiles
• Purpose: Specifies whether or not to split the NoSQL table data into multiple files
when migrating source data to a file.
If not specified, it defaults to false.
If set to true, when migrating source data to a file, the NoSQL table data is split
into multiple smaller files. For example, <chunk>.json, where
chunk=000000,000001,000002, and so forth.

dataPath
|--000000.json
|--000001.json

• Data Type: boolean


• Mandatory (Y/N): N
• Example: "useMultiFiles" : true

chunkSize
• Purpose: Specifies the maximum size of a "chunk" of table data to be stored at
the sink. During migration, a table is split into chunkSize chunks and each chunk is
written as a separate file to the sink. When the source data being migrated
exceeds this size, a new file is created.
If not specified, defaults to 32MB. The valid value is an integer between 1 to 1024.

Note:
This parameter is applicable ONLY when the useMultiFiles
parameter is set to true.

• Data Type: integer


• Mandatory (Y/N): N
• Example: "chunkSize" : 40

Parquet File
The configuration file format for Parquet File as a sink of NoSQL Database Migrator is
shown below.

Sink Configuration Template

"sink" : {
"type" : "file",
"format" : "parquet",
"dataPath": "</path/to/a/dir>",
"chunkSize" : <size in MB>,
"compression": "<SNAPPY|GZIP|NONE>",
"parquetOptions": {

1-302
Chapter 1
Develop

"useLogicalJson": <true|false>,
"useLogicalEnum": <true|false>,
"useLogicalUUID": <true|false>,
"truncateDoubleSpecials": <true|false>
}
}

Sink Parameters
• type
• format
• dataPath
• chunkSize
• compression
• parquetOptions
• parquetOptions.useLogicalJson
• parquetOptions.useLogicalEnum
• parquetOptions.useLogicalUUID
• parquetOptions.truncateDoubleSpecials

type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "file"

format
• Purpose: Specifies the sink format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "parquet"

dataPath
• Purpose: Specifies the path to a directory to use for storing the migrated NoSQL table
data. Ensure that the directory already exists and that you have read and write
permissions on it.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "dataPath" : "/home/user/migrator/my_table"

chunkSize
• Purpose: Specifies the maximum size of a "chunk" of table data to be stored at the sink.
During migration, the table data is split into chunks of chunkSize MB and each chunk is
written as a separate file to the sink. When the source data being migrated exceeds this
size, a new file is created.
If not specified, it defaults to 32MB. The valid value is an integer between 1 and 1024.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "chunkSize" : 40

compression
• Purpose: Specifies the compression type to use to compress the Parquet data.
Valid values are SNAPPY, GZIP, and NONE.
If not specified, it defaults to SNAPPY.
• Data Type: string
• Mandatory (Y/N): N
• Example: "compression" : "GZIP"

parquetOptions
• Purpose: Specifies the options to select Parquet logical types for NoSQL ENUM,
JSON, and UUID columns.
If you do not specify this parameter, the NoSQL Database Migrator writes the data
of ENUM, JSON, and UUID columns as String.
• Data Type: object
• Mandatory (Y/N): N

parquetOptions.useLogicalJson
• Purpose: Specifies whether or not to write NoSQL JSON column data as Parquet
logical JSON type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL JSON
column data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalJson" : true

parquetOptions.useLogicalEnum
• Purpose: Specifies whether or not to write NoSQL ENUM column data as Parquet
logical ENUM type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL ENUM
column data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalEnum" : true


parquetOptions.useLogicalUUID
• Purpose: Specifies whether or not to write NoSQL UUID column data as Parquet logical
UUID type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL UUID column
data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalUUID" : true

parquetOptions.truncateDoubleSpecials
• Purpose: Specifies whether or not to truncate the double +Infinity, -Infinity, and NaN
values.
By default, it is set to false. If set to true,
– POSITIVE_INFINITY is truncated to Double.MAX_VALUE.
– NEGATIVE_INFINITY is truncated to -Double.MAX_VALUE.
– NaN is truncated to 9.9999999999999990E307.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "truncateDoubleSpecials" : true

JSON File in OCI Object Storage Bucket


The configuration file format for JSON file in OCI Object Storage bucket as a sink of NoSQL
Database Migrator is shown below.

Note:
The valid source types for the OCI Object Storage sink are nosqldb and
nosqldb_cloud.

Sink Configuration Template

"sink" : {
"type" : "object_storage_oci",
"format" : "json",
"endpoint" : "<OCI Object Storage service endpoint URL or region ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"chunkSize" : <size in MB>,
"pretty" : <true|false>,
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",

1-305
Chapter 1
Develop

"useInstancePrincipal" : <true|false>
}

Sink Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• chunkSize
• pretty
• credentials
• credentialsProfile
• useInstancePrincipal

type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"

format
• Purpose: Specifies the sink format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "json"

endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data
Regions and Associated Service URLs in Using Oracle NoSQL Database Cloud
Service for the list of data regions supported for Oracle NoSQL Database Cloud
Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"


namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an optional
parameter. If you don't specify this parameter, the default namespace of the tenancy is
used.
• Data Type: string
• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"

bucket
• Purpose: Specifies the bucket name to use for storing the migrated data. Ensure that the
required bucket already exists in the OCI Object Storage instance and that you have
write permission on it.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"

prefix
• Purpose: Specifies the prefix that is added to the object name when objects are created
in the bucket. The prefix acts as a logical container or directory for storing data. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If not specified, the table name from the source is used as the prefix. If any object with
the same name already exists in the bucket, it is overwritten.
Schema is migrated to the <prefix>/Schema/schema.ddl file and source data is
migrated to the <prefix>/Data/<chunk>.json file(s), where chunk=000000.json,
000001.json, and so forth.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "my_export"
2. "prefix" : "my_export/2021-04-05/"

chunkSize
• Purpose: Specifies the maximum size of a "chunk" of table data to be stored at the sink.
During migration, the table data is split into chunks of chunkSize MB and each chunk is
written as a separate file to the sink. When the source data being migrated exceeds this
size, a new file is created.
If not specified, it defaults to 32MB. The valid value is an integer between 1 and 1024.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "chunkSize" : 40


pretty
• Purpose: Specifies whether or not to beautify the JSON output to increase
readability.
If not specified, it defaults to false.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "pretty" : true

credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config.
See Example Configuration for an example of the credentials file.

Note:
Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at the
same time.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentials" : "/home/user/.oci/config"
2. "credentials" : "/home/user/security/config"

credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle
NoSQL Database Cloud Service. User account credentials are referred to as a
'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.

Note:
This parameter is valid ONLY if the credentials parameter is
specified.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentialsProfile" : "DEFAULT"


2. "credentialsProfile": "ADMIN_USER"

useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance principal
authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on the Instance Principal authentication method, see Source and Sink Security.
If not specified, it defaults to false.

Note:

– It is supported ONLY when the NoSQL Database Migrator tool is running
within an OCI compute instance, for example, NoSQL Database Migrator
tool running in a VM hosted on OCI.
– Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters MUST
be specified. Additionally, these two parameters are mutually exclusive.
Specify ONLY one of these parameters, but not both at the same time.

• Data Type: boolean


• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true

Parquet File in OCI Object Storage Bucket


The configuration file format for Parquet file in OCI Object Storage bucket as a sink of NoSQL
Database Migrator is shown below.

Note:
The valid source types for the OCI Object Storage sink are nosqldb and
nosqldb_cloud.

Sink Configuration Template

"sink" : {
"type" : "object_storage_oci",
"format" : "parquet",
"endpoint" : "<OCI Object Storage service endpoint URL or region ID>",
"namespace" : "<OCI Object Storage namespace>",
"bucket" : "<bucket name>",
"prefix" : "<object prefix>",
"chunkSize" : <size in MB>,
"compression": "<SNAPPY|GZIP|NONE>",
"parquetOptions": {
"useLogicalJson": <true|false>,
"useLogicalEnum": <true|false>,
"useLogicalUUID": <true|false>,
"truncateDoubleSpecials": <true|false>

1-309
Chapter 1
Develop

},
"credentials" : "</path/to/oci/config/file>",
"credentialsProfile" : "<profile name in oci config file>",
"useInstancePrincipal" : <true|false>
}

Sink Parameters
• type
• format
• endpoint
• namespace
• bucket
• prefix
• chunkSize
• compression
• parquetOptions
• parquetOptions.useLogicalJson
• parquetOptions.useLogicalEnum
• parquetOptions.useLogicalUUID
• parquetOptions.truncateDoubleSpecials
• credentials
• credentialsProfile
• useInstancePrincipal

type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "object_storage_oci"

format
• Purpose: Specifies the sink format.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "format" : "parquet"

endpoint
• Purpose: Specifies the OCI Object Storage service endpoint URL or region ID.
You can either specify the complete URL or the Region ID alone. See Data
Regions and Associated Service URLs in Using Oracle NoSQL Database Cloud
Service for the list of data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://objectstorage.us-
ashburn-1.oraclecloud.com"

namespace
• Purpose: Specifies the OCI Object Storage service namespace. This is an optional
parameter. If you don't specify this parameter, the default namespace of the tenancy is
used.
• Data Type: string
• Mandatory (Y/N): N
• Example: "namespace" : "my-namespace"

bucket
• Purpose: Specifies the bucket name to use for storing the migrated data. Ensure that the
required bucket already exists in the OCI Object Storage instance and that you have
write permission on it.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "bucket" : "staging_bucket"

prefix
• Purpose: Specifies the prefix that is added to the object name when objects are created
in the bucket. The prefix acts as a logical container or directory for storing data. For more
information about prefixes, see Object Naming Using Prefixes and Hierarchies.
If not specified, the table name from the source is used as the prefix. If any object with
the same name already exists in the bucket, it is overwritten.
Source data is migrated to the <prefix>/Data/<chunk>.parquet file(s), where
chunk=000000.parquet, 000001.parquet, and so forth.
• Data Type: string
• Mandatory (Y/N): N
• Example:
1. "prefix" : "my_export"
2. "prefix" : "my_export/2021-04-05/"

chunkSize
• Purpose: Specifies the maximum size of a "chunk" of table data to be stored at the sink.
During migration, the table data is split into chunks of chunkSize MB and each chunk is
written as a separate file to the sink. When the source data being migrated exceeds this
size, a new file is created.
If not specified, it defaults to 32MB. The valid value is an integer between 1 and 1024.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "chunkSize" : 40

compression
• Purpose: Specifies the compression type to use to compress the Parquet data.
Valid values are SNAPPY, GZIP, and NONE.
If not specified, it defaults to SNAPPY.
• Data Type: string
• Mandatory (Y/N): N
• Example: "compression" : "GZIP"

parquetOptions
• Purpose: Specifies the options to select Parquet logical types for NoSQL ENUM,
JSON, and UUID columns.
If you do not specify this parameter, the NoSQL Database Migrator writes the data
of ENUM, JSON, and UUID columns as String.
• Data Type: object
• Mandatory (Y/N): N

parquetOptions.useLogicalJson
• Purpose: Specifies whether or not to write NoSQL JSON column data as Parquet
logical JSON type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL JSON
column data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalJson" : true

parquetOptions.useLogicalEnum
• Purpose: Specifies whether or not to write NoSQL ENUM column data as Parquet
logical ENUM type. For more information see Parquet Logical Type Definitions.
If not specified or set to false, NoSQL Database Migrator writes the NoSQL ENUM
column data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalEnum" : true

parquetOptions.useLogicalUUID
• Purpose: Specifies whether or not to write NoSQL UUID column data as Parquet
logical UUID type. For more information see Parquet Logical Type Definitions.


If not specified or set to false, NoSQL Database Migrator writes the NoSQL UUID column
data as String.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "useLogicalUUID" : true

parquetOptions.truncateDoubleSpecials
• Purpose: Specifies whether or not to truncate the double +Infinity, -Infinity, and NaN
values.
By default, it is set to false. If set to true,
– POSITIVE_INFINITY is truncated to Double.MAX_VALUE.
– NEGATIVE_INFINITY is truncated to -Double.MAX_VALUE.
– NaN is truncated to 9.9999999999999990E307.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "truncateDoubleSpecials" : true

credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config.
See Example Configuration for an example of the credentials file.

Note:
Even though the credentials and useInstancePrincipal parameters
are not mandatory individually, one of these parameters MUST be specified.
Additionally, these two parameters are mutually exclusive. Specify ONLY one of
these parameters, but not both at the same time.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentials" : "/home/user/.oci/config"
2. "credentials" : "/home/user/security/config"

credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle NoSQL
Database Cloud Service. User account credentials are referred to as a 'profile'.
If you do not specify this value, it defaults to the DEFAULT profile.


Note:
This parameter is valid ONLY if the credentials parameter is
specified.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentialsProfile" : "DEFAULT"
2. "credentialsProfile": "ADMIN_USER"

useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance
principal authentication to connect to Oracle NoSQL Database Cloud Service. For
more information on the Instance Principal authentication method, see Source and
Sink Security.
If not specified, it defaults to false.

Note:

– It is supported ONLY when the NoSQL Database Migrator tool is
running within an OCI compute instance, for example, NoSQL
Database Migrator tool running in a VM hosted on OCI.
– Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at
the same time.

• Data Type: boolean


• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true

Oracle NoSQL Database


The configuration file format for Oracle NoSQL Database as a sink of NoSQL
Database Migrator is shown below.

Sink Configuration Template

"sink" : {
"type": "nosqldb",
"table" : "<fully qualified table name>",
"schemaInfo" : {
"schemaPath" : "</path/to/a/schema/file>",
"defaultSchema" : <true|false>,

1-314
Chapter 1
Develop

"useSourceSchema" : <true|false>,
"DDBPartitionKey" : <"name:type">,
"DDBSortKey" : "<name:type>"
},
"overwrite" : <true|false>,
"storeName" : "<store name>",
"helperHosts" : ["hostname1:port1","hostname2:port2,..."],
"security" : "</path/to/store/credentials/file>",
"requestTimeoutMs" : <timeout in milli seconds>,
"includeTTL": <true|false>,
"ttlRelativeDate": "<date-to-use in UTC>"
}

Sink Parameters
• type
• table
• schemaInfo
• schemaInfo.schemaPath
• schemaInfo.defaultSchema
• schemaInfo.useSourceSchema
• schemaInfo.DDBPartitionKey
• schemaInfo.DDBSortKey
• overwrite
• storeName
• helperHosts
• security
• requestTimeoutMs
• includeTTL
• ttlRelativeDate

type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "nosqldb"

table
• Purpose: Fully qualified table name to which to migrate the data.
Format: [namespace_name:]<table_name>
If the table is in the DEFAULT namespace, you can omit the namespace_name. The table
must exist in the store during the migration, and its schema must match the source
data.


If the table is not available in the sink, you can use the schemaInfo parameter to
instruct the NoSQL Database Migrator to create the table also in the sink.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– With the DEFAULT namespace "table" :"mytable"
– With a non-default namespace "table" : "mynamespace:mytable"
– To specify a child table "table" : "mytable.child"

Note:
You can migrate the child tables from a valid data source to Oracle
NoSQL Database. The NoSQL Database Migrator copies only a
single table in each execution. Ensure that the parent table is
migrated before the child table.

schemaInfo
• Purpose: Specifies the schema for the data being migrated. If this is not specified,
the NoSQL Database Migrator assumes that the table already exists in the sink's store.
• Data Type: Object
• Mandatory (Y/N): N

schemaInfo.schemaPath
• Purpose: Specifies the absolute path to a file containing DDL statements for the
NoSQL table.
The NoSQL Database Migrator executes the DDL commands listed in this file
before migrating the data.
The NoSQL Database Migrator does not support more than one DDL statement
per line in the schemaPath file.
• Data Type: string
• Mandatory: Y, only when the schemaInfo.defaultSchema parameter is set to false.
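
For illustration, a file referenced by schemaPath could contain DDL statements such as
the following, one statement per line (the table, column, and index names are hypothetical):

CREATE TABLE IF NOT EXISTS users (id INTEGER, name STRING, PRIMARY KEY(id))
CREATE INDEX IF NOT EXISTS nameIdx ON users(name)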

schemaInfo.defaultSchema
• Purpose: Setting this parameter to true instructs the NoSQL Database Migrator to
create a table with the default schema. The default schema is defined by the migrator
itself. For more information about default schema definitions, see Default Schema
in Using Oracle NoSQL Data Migrator.
• Data Type: boolean
• Mandatory: N

Note:
defaultSchema and schemaPath are mutually exclusive

• Example:
– With Default Schema:

"schemaInfo" : {
"defaultSchema" : true
}

– With a pre-defined schema:

"schemaInfo" : {
"schemaPath" : "<complete/path/to/the/schema/definition/file>"
}

schemaInfo.useSourceSchema
• Purpose: Specifies whether or not the sink uses the table schema definition provided by
the source when migrating NoSQL tables.
• Data Type: boolean
• Mandatory (Y/N): N

Note:
defaultSchema, schemaPath, and useSourceSchema parameters are
mutually exclusive. Specify ONLY one of these parameters.

• Example:
– With Default Schema:

"schemaInfo" : {
"defaultSchema" : true
}

– With a pre-defined schema:

"schemaInfo" : {
"schemaPath" : "<complete/path/to/the/schema/definition/file>"
}

– With source schema:

"schemaInfo" : {
"useSourceSchema" : true
}

schemaInfo.DDBPartitionKey
• Purpose: Specifies the DynamoDB partition key and the corresponding Oracle
NoSQL Database type to be used in the sink Oracle NoSQL Database table. This
key will be used as a NoSQL DB table shard key. This is applicable only when
defaultSchema is set to true and the source format is dynamodb_json. See
Mapping of DynamoDB types to Oracle NoSQL types for more details.
• Mandatory: Yes if defaultSchema is true and the source is dynamodb_json.
• Example: "DDBPartitionKey" : "PersonID:INTEGER"

Note:
If the partition key contains a dash (-) or a dot (.), the Migrator replaces it with an
underscore (_), because NoSQL column names do not support dots and dashes.

schemaInfo.DDBSortKey
• Purpose: Specifies the DynamoDB sort key and its corresponding Oracle NoSQL
Database type to be used in the target Oracle NoSQL Database table. If the
DynamoDB table being imported does not have a sort key, this parameter must not be
set. This key is used as the non-shard portion of the primary key in the NoSQL DB
table. This is applicable only when defaultSchema is set to true and the source is
dynamodb_json. See Mapping of DynamoDB types to Oracle NoSQL types for
more details.
• Mandatory: No
• Example:"DDBSortKey" : "Skey:STRING"

Note:
If the sort key contains a dash (-) or a dot (.), the Migrator replaces it with an
underscore (_), because NoSQL column names do not support dots and dashes.

overwrite
• Purpose: Indicates the behavior of NoSQL Database Migrator when the record
being migrated from the source is already present in the sink.
If the value is set to false, when migrating tables the NoSQL Database Migrator
skips those records for which the same primary key already exists in the sink.
If the value is set to true, when migrating tables the NoSQL Database Migrator
overwrites those records for which the same primary key already exists in the sink.
If not specified, it defaults to true.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "overwrite" : false

storeName
• Purpose: Name of the Oracle NoSQL Database store.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "storeName" : "kvstore"

helperHosts
• Purpose: A list of host and registry port pairs in the hostname:port format. Delimit each
item in the list using a comma. You must specify at least one helper host.
• Data Type: array of strings
• Mandatory (Y/N): Y
• Example: "helperHosts" : ["localhost:5000","localhost:6000"]

security
• Purpose:
If your store is a secure store, provide the absolute path to the security login file that
contains your store credentials. See Configuring Security with Remote Access in
Administrator's Guide to know more about the security login file.
You can use either password file based authentication or wallet based authentication.
However, the wallet based authentication is supported only in the Enterprise Edition (EE)
of Oracle NoSQL Database. For more information on wallet-based authentication, see
Source and Sink Security .
• Data Type: string
• Mandatory (Y/N): Y for a secure store
• Example:
"security" : "/home/user/client.credentials"
Example security file content for password file based authentication:

oracle.kv.password.noPrompt=true
oracle.kv.auth.username=admin
oracle.kv.auth.pwdfile.file=/home/nosql/login.passwd
oracle.kv.transport=ssl
oracle.kv.ssl.trustStore=/home/nosql/client.trust
oracle.kv.ssl.protocols=TLSv1.2
oracle.kv.ssl.hostnameVerifier=dnmatch(CN\=NoSQL)

Example security file content for wallet based authentication:

oracle.kv.password.noPrompt=true
oracle.kv.auth.username=admin
oracle.kv.auth.wallet.dir=/home/nosql/login.wallet
oracle.kv.transport=ssl
oracle.kv.ssl.trustStore=/home/nosql/client.trust
oracle.kv.ssl.protocols=TLSv1.2
oracle.kv.ssl.hostnameVerifier=dnmatch(CN\=NoSQL)

requestTimeoutMs
• Purpose: Specifies the time to wait for each write operation in the sink to
complete. This is provided in milliseconds. The default value is 5000. The value
can be any positive integer.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "requestTimeoutMs" : 5000

includeTTL
• Purpose: Specifies whether or not to include TTL metadata for table rows
provided by the source when importing Oracle NoSQL Database tables.
If you do not specify this parameter, it defaults to false. In that case, the NoSQL
Database Migrator does not include TTL metadata for table rows provided by the
source when importing Oracle NoSQL Database tables.
If set to true, the NoSQL Database Migrator tool performs the following checks on
the TTL metadata when importing a table row:
– If you import a row that does not have _metadata definition, the NoSQL
Database Migrator tool sets the TTL to 0, which means the row never expires.
– If you import a row that has _metadata definition, the NoSQL Database
Migrator tool compares the TTL value against a Reference Time when a row
gets imported. If the row has already expired relative to the Reference Time,
then it is skipped. If the row has not expired, then it is imported along with the
TTL value. By default, the Reference Time of import operation is the current
time in milliseconds, obtained from System.currentTimeMillis(), of the machine
where the NoSQL Database Migrator tool is running. But you can also set a
custom Reference Time using the ttlRelativeDate configuration parameter if
you want to extend the expiration time and import rows that would otherwise
expire immediately.
The formula to calculate the expiration time of a row is as follows:

expiration = (TTL value of source row in milliseconds - Reference Time in milliseconds)
if (expiration <= 0) then the row has expired

Note:
Since Oracle NoSQL TTL boundaries are in hours and days, in some
cases, the TTL of the imported row might get adjusted to the nearest
hour or day. For example, consider a row that has expiration value of
1629709200000 (2021-08-23 09:00:00) and Reference Time value
is 1629707962582 (2021-08-23 08:39:22). Here, even though the
row is not expired relative to the Reference Time when this data gets
imported, the new TTL for the row is 1629712800000 (2021-08-23
10:00:00).

• Data Type: boolean

• Mandatory (Y/N): N
• Example: "includeTTL" : true
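
For illustration, a source row carrying TTL metadata might look like the following sketch
(it assumes that the exported _metadata object stores the expiration timestamp in
milliseconds; the exact layout depends on the export format):

{
    "id" : 1,
    "name" : "jane",
    "_metadata" : {
        "expiration" : 1629709200000
    }
}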

ttlRelativeDate
• Purpose: Specifies a UTC date in the YYYY-MM-DD hh:mm:ss format that is used to set the
TTL expiry of table rows when importing them into the Oracle NoSQL Database.
If a table row in the data you are exporting has expired, you can set the
ttlRelativeDate parameter to a date before the expiration time of the table row in the
exported data.
If you do not specify this parameter, it defaults to the current time in milliseconds,
obtained from System.currentTimeMillis(), of the machine where the NoSQL Database
Migrator tool is running.
• Data Type: date
• Mandatory (Y/N): N
• Example: "ttlRelativeDate" : "2021-01-03 04:31:17"
Consider a scenario where table rows expire seven days after 1-Jan-2021.
After exporting this table, on 7-Jan-2021, you run into an issue with your table and decide
to import the data. The table rows are going to expire in one day (the data expiration date
minus the default value of the ttlRelativeDate configuration parameter, which is the
current date). If you want to extend the expiration date of the table rows to five days
instead of one day, use the ttlRelativeDate parameter and choose an earlier date.
Therefore, in this scenario, to extend the expiration time of the table rows by five
days, set the value of the ttlRelativeDate configuration parameter to 3-Jan-2021,
which is used as the Reference Time when the table rows get imported.
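
For the scenario above, the relevant sink settings would look like the following sketch
(other sink parameters omitted; the date reflects the 3-Jan-2021 Reference Time):

"includeTTL" : true,
"ttlRelativeDate" : "2021-01-03 00:00:00"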

Oracle NoSQL Database Cloud Service


The configuration file format for Oracle NoSQL Database Cloud Service as a sink of NoSQL
Database Migrator is shown below.

Sink Configuration Template

"sink" : {
"type" : "nosqldb_cloud",
"endpoint" : "<Oracle NoSQL Cloud Service Endpoint>",
"table" : "<table name>",
"compartment" : "<OCI compartment name or id>",
"schemaInfo" : {
"schemaPath" : "</path/to/a/schema/file>",
"defaultSchema" : <true|false>,
"useSourceSchema" : <true|false>,
"DDBPartitionKey" : <"name:type">,
"DDBSortKey" : "<name:type>",
"onDemandThroughput" : <true|false>,
"readUnits" : <table read units>,
"writeUnits" : <table write units>,
"storageSize" : <storage size in GB>
},
"credentials" : "</path/to/oci/credential/file>",
"credentialsProfile" : "<oci credentials profile name>",
"writeUnitsPercent" : <table writeunits percent>,

1-321
Chapter 1
Develop

"requestTimeoutMs" : <timeout in milli seconds>,


"useInstancePrincipal" : <true|false>,
"overwrite" : <true|false>,
"includeTTL": <true|false>,
"ttlRelativeDate" : "<date-to-use in UTC>"
}
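
For illustration, a filled-in sink configuration might look like the following sketch (all
values are examples only, drawn from the parameter examples below):

"sink" : {
    "type" : "nosqldb_cloud",
    "endpoint" : "us-ashburn-1",
    "table" : "mytable",
    "compartment" : "mycompartment",
    "schemaInfo" : {
        "defaultSchema" : true,
        "readUnits" : 100,
        "writeUnits" : 60,
        "storageSize" : 1
    },
    "credentials" : "/home/user/.oci/config",
    "credentialsProfile" : "DEFAULT",
    "writeUnitsPercent" : 90,
    "requestTimeoutMs" : 5000,
    "overwrite" : true
}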

Sink Parameters
• type
• endpoint
• table
• compartment
• schemaInfo
• schemaInfo.schemaPath
• schemaInfo.defaultSchema
• schemaInfo.useSourceSchema
• schemaInfo.DDBPartitionKey
• schemaInfo.DDBSortKey
• schemaInfo.onDemandThroughput
• schemaInfo.readUnits
• schemaInfo.writeUnits
• schemaInfo.storageSize
• credentials
• credentialsProfile
• writeUnitsPercent
• requestTimeoutMs
• useInstancePrincipal
• overwrite
• includeTTL
• ttlRelativeDate

type
• Purpose: Identifies the sink type.
• Data Type: string
• Mandatory (Y/N): Y
• Example: "type" : "nosqldb_cloud"

endpoint
• Purpose: Specifies the Service Endpoint of the Oracle NoSQL Database Cloud
Service.

You can either specify the complete URL or the Region ID alone. See Data Regions and
Associated Service URLs in Using Oracle NoSQL Database Cloud Service for the list of
data regions supported for Oracle NoSQL Database Cloud Service.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– Region ID: "endpoint" : "us-ashburn-1"
– URL format: "endpoint" : "https://nosql.us-ashburn-1.oci.oraclecloud.com/"

table
• Purpose: Name of the table to which to migrate the data.
You must ensure that this table exists in your Oracle NoSQL Database Cloud Service.
Otherwise, you have to use the schemaInfo object in the sink configuration to instruct the
NoSQL Database Migrator to create the table.
The schema of this table must match the source data.
• Data Type: string
• Mandatory (Y/N): Y
• Example:
– To specify a table "table" : "mytable"
– To specify a child table "table" : "mytable.child"

Note:
You can migrate the child tables from a valid data source to Oracle NoSQL
Database Cloud Service. The NoSQL Database Migrator copies only a
single table in each execution. Ensure that the parent table is migrated
before the child table.

compartment
• Purpose: Specifies the name or OCID of the compartment in which the table resides.
If you do not provide any value, it defaults to the root compartment.
You can find your compartment's OCID from the Compartment Explorer window under
Governance in the OCI Cloud Console.
• Data Type: string
• Mandatory (Y/N): Y if the table is not in the root compartment of the tenancy OR when
the useInstancePrincipal parameter is set to true.

Note:
If the useInstancePrincipal parameter is set to true, the compartment
parameter must specify the compartment OCID, not the name.

• Example:
– Compartment name
"compartment" : "mycompartment"
– Compartment name qualified with its parent compartment
"compartment" : "parent.childcompartment"
– No value provided. Defaults to the root compartment.
"compartment": ""
– Compartment OCID
"compartment" : "ocid1.tenancy.oc1...4ksd"

schemaInfo
• Purpose: Specifies the schema for the data being migrated.
If you do not specify this parameter, the NoSQL Database Migrator assumes that
the table already exists in your Oracle NoSQL Database Cloud Service.
If this parameter is not specified and the table does not exist in the sink, the
migration fails.
• Data Type: Object
• Mandatory (Y/N): N

schemaInfo.schemaPath
• Purpose: Specifies the absolute path to a file containing DDL statements for the
NoSQL table.
The NoSQL Database Migrator executes the DDL commands listed in this file
before migrating the data.
The NoSQL Database Migrator does not support more than one DDL statement
per line in the schemaPath file.
• Data Type: string
• Mandatory: Y, only when the schemaInfo.defaultSchema parameter is set to false.

schemaInfo.defaultSchema
• Purpose: Setting this parameter to true instructs the NoSQL Database Migrator to
create a table with the default schema. The default schema is defined by the migrator
itself. For more information about default schema definitions, see Default Schema
in Using Oracle NoSQL Data Migrator.
• Data Type: boolean
• Mandatory: N

Note:
defaultSchema and schemaPath are mutually exclusive

schemaInfo.useSourceSchema
• Purpose: Specifies whether or not the sink uses the table schema definition provided by
the source when migrating NoSQL tables.
• Data Type: boolean
• Mandatory (Y/N): N

Note:
defaultSchema, schemaPath, and useSourceSchema parameters are
mutually exclusive. Specify ONLY one of these parameters.

• Example:
– With Default Schema:

"schemaInfo" : {
"defaultSchema" : true,
"readUnits" : 100,
"writeUnits" : 60,
"storageSize" : 1
}

– With a pre-defined schema:

"schemaInfo" : {
"schemaPath" : "<complete/path/to/the/schema/definition/file>",
"readUnits" : 100,
"writeUnits" : 100,
"storageSize" : 1
}

– With source schema:

"schemaInfo" : {
"useSourceSchema" : true,
"readUnits" : 100,
"writeUnits" : 60,
"storageSize" : 1
}

schemaInfo.DDBPartitionKey
• Purpose: Specifies the DynamoDB partition key and the corresponding Oracle NoSQL
Database type to be used in the sink Oracle NoSQL Database table. This key will be
used as a NoSQL DB table shard key. This is applicable only when defaultSchema is set
to true and the source format is dynamodb_json. See Mapping of DynamoDB types to
Oracle NoSQL types for more details.
• Mandatory: Yes if defaultSchema is true and the source is dynamodb_json.
• Example: "DDBPartitionKey" : "PersonID:INTEGER"

Note:
If the partition key contains a dash (-) or a dot (.), the Migrator replaces it with an
underscore (_), because NoSQL column names do not support dots and dashes.

schemaInfo.DDBSortKey
• Purpose: Specifies the DynamoDB sort key and its corresponding Oracle NoSQL
Database type to be used in the target Oracle NoSQL Database table. If the
DynamoDB table being imported does not have a sort key, this parameter must not be
set. This key is used as the non-shard portion of the primary key in the NoSQL DB table.
This is applicable only when defaultSchema is set to true and the source is
dynamodb_json. See Mapping of DynamoDB types to Oracle NoSQL types for
more details.
• Mandatory: No
• Example:"DDBSortKey" : "Skey:STRING"

Note:
If the sort key contains a dash (-) or a dot (.), the Migrator replaces it with an
underscore (_), because NoSQL column names do not support dots and dashes.

schemaInfo.onDemandThroughput
• Purpose: Specifies whether to create the table with on-demand read and write throughput.
If this parameter is not set, the table is created with provisioned capacity.
The default value is false.

Note:
This parameter is not applicable for child tables as they share the
throughput of the top-level parent table.

• Data Type: Boolean


• Mandatory: N
• Example: "onDemandThroughput" : true

schemaInfo.readUnits
• Purpose: Specifies the read throughput of the new table.

Note:

– This parameter is not applicable for tables provisioned with on-demand
capacity.
– This parameter is not applicable for child tables as they share the read
throughput of the top-level parent table.

• Data Type: integer


• Mandatory: Y when the table is not a child table and the
schemaInfo.onDemandThroughput parameter is set to false; otherwise N.
• Example:"readUnits" : 100

schemaInfo.writeUnits
• Purpose: Specifies the write throughput of the new table.

Note:

– This parameter is not applicable for tables provisioned with on-demand
capacity.
– This parameter is not applicable for child tables as they share the write
throughput of the top-level parent table.

• Data Type: integer


• Mandatory: Y when the table is not a child table and the
schemaInfo.onDemandThroughput parameter is set to false; otherwise N.
• Example:"writeUnits" : 100

schemaInfo.storageSize
• Purpose: Specifies the storage size of the new table in GB.

Note:
This parameter is not applicable for child tables as they share the storage size
of the top-level parent table.

• Data Type: integer


• Mandatory: Y when the table is not a child table, else N.
• Example:
– With schemaPath

"schemaInfo" : {
    "schemaPath" : "</path/to/a/schema/file>",
    "readUnits" : 500,
    "writeUnits" : 1000,
    "storageSize" : 5
}

– With defaultSchema

"schemaInfo" : {
    "defaultSchema" : true,
    "readUnits" : 500,
    "writeUnits" : 1000,
    "storageSize" : 5
}

credentials
• Purpose: Absolute path to a file containing OCI credentials.
If not specified, it defaults to $HOME/.oci/config
See Example Configuration for an example of the credentials file.

Note:
Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters
MUST be specified. Additionally, these two parameters are mutually
exclusive. Specify ONLY one of these parameters, but not both at the
same time.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentials" : "/home/user/.oci/config"
2. "credentials" : "/home/user/security/config"

credentialsProfile
• Purpose: Name of the configuration profile to be used to connect to the Oracle
NoSQL Database Cloud Service.
If you do not specify this value, it defaults to the DEFAULT profile.

Note:
This parameter is valid ONLY if the credentials parameter is
specified.

• Data Type: string


• Mandatory (Y/N): N
• Example:
1. "credentialsProfile" : "DEFAULT"
2. "credentialsProfile" : "ADMIN_USER"

writeUnitsPercent
• Purpose: Specifies the percentage of table write units to be used during the migration
activity.
The default value is 90. The valid range is any integer between 1 and 100.

Note:
The data migration speed is directly proportional to the
writeUnitsPercent value.

See Troubleshooting the Oracle NoSQL Database Migrator to learn how to use this
attribute to improve the data migration speed.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "writeUnitsPercent" : 90
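
For example, if the sink table is provisioned with 1000 write units and writeUnitsPercent
is set to 90, the migration uses at most 900 write units at any point in time.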

requestTimeoutMs
• Purpose: Specifies the time to wait for each write operation in the sink to complete. This
is provided in milliseconds. The default value is 5000. The value can be any positive
integer.
• Data Type: integer
• Mandatory (Y/N): N
• Example: "requestTimeoutMs" : 5000

useInstancePrincipal
• Purpose: Specifies whether or not the NoSQL Migrator tool uses instance principal
authentication to connect to Oracle NoSQL Database Cloud Service. For more
information on the Instance Principal authentication method, see Source and Sink Security.
If not specified, it defaults to false.

Note:

– It is supported ONLY when the NoSQL Database Migrator tool is running
within an OCI compute instance, for example, the NoSQL Database Migrator
tool running in a VM hosted on OCI.
– Even though the credentials and useInstancePrincipal
parameters are not mandatory individually, one of these parameters MUST
be specified. Additionally, these two parameters are mutually exclusive.
Specify ONLY one of these parameters, but not both at the same time.

• Data Type: boolean

• Mandatory (Y/N): N
• Example: "useInstancePrincipal" : true

overwrite
• Purpose: Indicates the behavior of NoSQL Database Migrator when the record
being migrated from the source is already present in the sink.
If the value is set to false, when migrating tables the NoSQL Database Migrator
skips those records for which the same primary key already exists in the sink.
If the value is set to true, when migrating tables the NoSQL Database Migrator
overwrites those records for which the same primary key already exists in the sink.
If not specified, it defaults to true.
• Data Type: boolean
• Mandatory (Y/N): N
• Example: "overwrite" : false

includeTTL
• Purpose: Specifies whether or not to include TTL metadata for table rows
provided by the source when importing Oracle NoSQL Database tables.
If you do not specify this parameter, it defaults to false. In that case, the NoSQL
Database Migrator does not include TTL metadata for table rows provided by the
source when importing Oracle NoSQL Database tables.
If set to true, the NoSQL Database Migrator tool performs the following checks on
the TTL metadata when importing a table row:
– If you import a row that does not have _metadata definition, the NoSQL
Database Migrator tool sets the TTL to 0, which means the row never expires.
– If you import a row that has _metadata definition, the NoSQL Database
Migrator tool compares the TTL value against a Reference Time when a row
gets imported. If the row has already expired relative to the Reference Time,
then it is skipped. If the row has not expired, then it is imported along with the
TTL value. By default, the Reference Time of import operation is the current
time in milliseconds, obtained from System.currentTimeMillis(), of the machine
where the NoSQL Database Migrator tool is running. But you can also set a
custom Reference Time using the ttlRelativeDate configuration parameter if
you want to extend the expiration time and import rows that would otherwise
expire immediately.
The formula to calculate the expiration time of a row is as follows:

expiration = (TTL value of source row in milliseconds - Reference Time in milliseconds)
if (expiration <= 0) then the row has expired

Note:
Since Oracle NoSQL TTL boundaries are in hours and days, in some
cases, the TTL of the imported row might get adjusted to the nearest hour
or day. For example, consider a row that has expiration value of
1629709200000 (2021-08-23 09:00:00) and Reference Time value is
1629707962582 (2021-08-23 08:39:22). Here, even though the row is not
expired relative to the Reference Time when this data gets imported, the
new TTL for the row is 1629712800000 (2021-08-23 10:00:00).

• Data Type: boolean


• Mandatory (Y/N): N
• Example: "includeTTL" : true

ttlRelativeDate
• Purpose: Specifies a UTC date in the YYYY-MM-DD hh:mm:ss format that is used to set the
TTL expiry of table rows when importing them into the Oracle NoSQL Database.
If a table row in the data you are exporting has expired, you can set the
ttlRelativeDate parameter to a date before the expiration time of the table row in the
exported data.
If you do not specify this parameter, it defaults to the current time in milliseconds,
obtained from System.currentTimeMillis(), of the machine where the NoSQL Database
Migrator tool is running.
• Data Type: date
• Mandatory (Y/N): N
• Example: "ttlRelativeDate" : "2021-01-03 04:31:17"
Consider a scenario where table rows expire seven days after 1-Jan-2021.
After exporting this table, on 7-Jan-2021, you run into an issue with your table and decide
to import the data. The table rows are going to expire in one day (the data expiration date
minus the default value of the ttlRelativeDate configuration parameter, which is the
current date). If you want to extend the expiration date of the table rows to five days
instead of one day, use the ttlRelativeDate parameter and choose an earlier date.
Therefore, in this scenario, to extend the expiration time of the table rows by five
days, set the value of the ttlRelativeDate configuration parameter to 3-Jan-2021,
which is used as the Reference Time when the table rows get imported.

Transformation Configuration Templates


This topic explains the configuration parameters for the different transformations supported
by the Oracle NoSQL Database Migrator. For the complete configuration file template, see
Configuration File in Terminology used with NoSQL Data Migrator.
Oracle NoSQL Database Migrator lets you modify the data, that is, add data transformations
as part of the migration activity. You can define multiple transformations in a single migration.
In such a case, the order of transformations is vital because the source data undergoes each
transformation in the given order. The output of one transformation becomes the input to the
next one in the migrator pipeline.
The different transformations supported by the NoSQL Data Migrator are:

Table 1-16 Transformations

• ignoreFields: Ignore the identified columns from the source row before writing to the sink.
• includeFields: Include only the identified columns from the source row when writing to
the sink.
• renameFields: Rename the identified columns from the source row before writing to the sink.
• aggregateFields: Aggregate multiple columns from the source into a single column in the
sink. As part of this transformation, you can also identify the columns that you want to
exclude from the aggregation; those fields are skipped from the aggregated column.

You can find the configuration template for each supported transformation below.

ignoreFields
The configuration file format for the ignoreFields transformation is shown below.

Transformation Configuration Template

"transforms" : {
"ignoreFields" : ["<field1>","<field2>",...]
}

Transformation Parameter

ignoreFields
• Purpose: An array of the column names to be ignored from the source records.

Note:
You can supply only top-level fields. Transformations cannot be applied
to the data in nested fields.

• Data Type: array of strings


• Mandatory (Y/N): Y
• Example: To ignore the columns named "name" and "address" from the source
record:
"ignoreFields" : ["name","address"]

includeFields
The configuration file format for the includeFields transformation is shown below.

Transformation Configuration Template

"transforms" : {
"includeFields" : ["<field1>","<field2>",...]
}

Transformation Parameter

includeFields
• Purpose: An array of the column names to be included from the source records. Only
the fields specified in the array are included; the rest are ignored.

Note:
The NoSQL Database Migrator tool throws an error if you specify an empty
array. Additionally, you can specify only the top-level fields. The NoSQL
Database Migrator tool does not apply transformations to the data in the nested
fields.

• Data Type: array of strings


• Mandatory (Y/N): Y
• Example: To include only the columns named "age" and "gender" from the source record:
"includeFields" : ["age","gender"]

renameFields
The configuration file format for the renameFields transformation is shown below.

Transformation Configuration Template

"transforms" : {
"renameFields" : {
"<old_name>" : "<new_name>",
"<old_name>" : "<new_name>,"
.....
}
}

Transformation Parameter

renameFields
• Purpose: Key-Value pairs of the old and new names of the columns to be renamed.

Note:
You can supply only top-level fields. Transformations cannot be applied
to the data in nested fields.

• Data Type: JSON object


• Mandatory (Y/N): Y
• Example: To rename the column named "residence" to "address" and the column
named "_id" to "id":
"renameFields" : { "residence" : "address", "_id" : "id" }

aggregateFields
The configuration file format for the aggregateFields transformation is shown below.

Transformation Configuration Template

"transforms" : {
"aggregateFields" : {
"fieldName" : "name of the new aggregate field",
"skipFields" : ["<field1>","<field2">,...]
}
}

Transformation Parameter

aggregateFields
• Purpose: Name of the aggregated field in the sink.
• Data Type: string
• Mandatory (Y/N): Y
• Example: If the given record is:

{
"id" : 100,
"name" : "john",
"address" : "USA",
"age" : 20
}

If the aggregate transformation is:

"aggregateFields" : {
"fieldName" : "document",
"skipFields" : ["id"]
}

The aggregated column in the sink looks like:

{
"id": 100,
"document": {
"name": "john",
"address": "USA",
"age": 20
}
}
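
Because transformations are applied in the order given, they can be chained in a single
migration. The following sketch (field names are illustrative, and it assumes that multiple
transformation attributes can be combined in the single transforms object as described
above) first renames "_id" to "id" and then aggregates the remaining fields into a
"document" column:

"transforms" : {
    "renameFields" : { "_id" : "id" },
    "aggregateFields" : {
        "fieldName" : "document",
        "skipFields" : ["id"]
    }
}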

Mapping of DynamoDB types to Oracle NoSQL types


The table below shows the mapping of DynamoDB types to Oracle NoSQL types.

Table 1-17 Mapping DynamoDB type to Oracle NoSQL type

#   DynamoDB type                     JSON type for NoSQL JSON column          Oracle NoSQL type
1   String (S)                        JSON String                              STRING
2   Number Type (N)                   JSON Number                              INTEGER/LONG/FLOAT/DOUBLE/NUMBER
3   Boolean (BOOL)                    JSON Boolean                             BOOLEAN
4   Binary type (B) - byte buffer     BASE-64 encoded JSON String              BINARY
5   NULL                              JSON null                                NULL
6   String Set (SS)                   JSON Array of Strings                    ARRAY(STRING)
7   Number Set (NS)                   JSON Array of Numbers                    ARRAY(INTEGER/LONG/FLOAT/DOUBLE/NUMBER)
8   Binary Set (BS)                   JSON Array of Base-64 encoded Strings    ARRAY(BINARY)
9   List (L)                          Array of JSON                            ARRAY(JSON)
10  Map (M)                           JSON Object                              JSON
11  Partition key                     NA                                       PRIMARY KEY and SHARD KEY
12  Sort key                          NA                                       PRIMARY KEY
13  Attribute names with dash or dot  JSON field names with underscore         Column names with underscore

A few additional points to consider while mapping DynamoDB types to Oracle NoSQL types:
• DynamoDB supports only one data type for numbers, which can have up to 38 digits of
precision. In contrast, Oracle NoSQL supports several numeric types, so you can select
the type that best fits the range and precision of your input data. If you are not sure of
the nature of the data, use the NoSQL NUMBER type.
• The partition key in DynamoDB has a limit of 2048 bytes, but Oracle NoSQL Cloud
Service has a limit of 64 bytes for the Primary key/Shard key.
• The sort key in DynamoDB has a limit of 1024 bytes, but Oracle NoSQL Cloud Service
has a limit of 64 bytes for the Primary key.
• Attribute names in DynamoDB can be up to 64 KB long, but Oracle NoSQL Cloud Service
column names have a limit of 64 characters.

Oracle NoSQL to Parquet Data Type Mapping


Describes the mapping of Oracle NoSQL data types to Parquet data types.

NoSQL Type          Parquet Type
BOOLEAN             BOOLEAN
INTEGER             INT32
LONG                INT64
FLOAT               DOUBLE
DOUBLE              DOUBLE
BINARY              BINARY
FIXED_BINARY        BINARY
STRING              BINARY(STRING)
ENUM                BINARY(STRING), or BINARY(ENUM) if the logical ENUM is configured
UUID                BINARY(STRING), or FIXED_BINARY(16) if the logical UUID is configured
TIMESTAMP(p)        INT64(TIMESTAMP(p))
NUMBER              DOUBLE
JSON                BINARY(STRING), or BINARY(JSON) if the logical JSON is configured

field_name ARRAY(T) maps to:

group field_name (LIST) {
    repeated group list {
        required T element;
    }
}

field_name MAP(T) maps to:

group field_name (MAP) {
    repeated group key_value (MAP_KEY_VALUE) {
        required binary key (STRING);
        required T value;
    }
}

field_name RECORD(K T N, K T N, ...), where K = key name, T = type, and N = whether the
field is nullable, maps to:

group field_name {
    ni == true ? optional Ti ki : required Ti ki
}

Note:
When the NoSQL Number type is converted to the Parquet Double type, there may be
some loss of precision if the value cannot be represented as a Double. If the
number is too big to be represented as a Double, it is converted to
Double.NEGATIVE_INFINITY or Double.POSITIVE_INFINITY.

Mapping of DynamoDB table to Oracle NoSQL table


In DynamoDB, a table is a collection of items, and each item is a collection of attributes. Each
item in the table has a unique identifier, or a primary key. Other than the primary key, the
table is schema-less. Each item can have its own distinct attributes.
DynamoDB supports two different kinds of primary keys:
• Partition key – A simple primary key, composed of one attribute known as the partition
key. DynamoDB uses the partition key's value as input to an internal hash function. The
output from the hash function determines the partition in which the item will be stored.
• Partition key and sort key – As a composite primary key, this type of key is composed
of two attributes. The first attribute is the partition key, and the second attribute is the sort
key. DynamoDB uses the partition key value as input to an internal hash function. The
output from the hash function determines the partition in which the item will be stored. All
items with the same partition key value are stored together, in sorted order by sort key
value.
In contrast, Oracle NoSQL tables support flexible data models with both schema and
schema-less design.
There are two different ways of modeling a DynamoDB table:

1. Modeling the DynamoDB table as a JSON document (recommended): In this
modeling, you map all the attributes of the DynamoDB table into a JSON column
of the NoSQL table, except the partition key and sort key. You model the partition key
and sort key as the Primary Key columns of the NoSQL table, and use the
aggregateFields transform to aggregate the non-primary-key data into a
JSON column. A DDL sketch for this modeling follows this list.

Note:
The Migrator provides a user-friendly configuration, defaultSchema, to
automatically create a schema-less DDL table that also aggregates the
attributes into a JSON column.

2. Modeling the DynamoDB table as fixed columns in the NoSQL table: In this
modeling, you create a column in the NoSQL table for each attribute of the DynamoDB
table, as specified in Mapping of DynamoDB types to Oracle NoSQL types. You
model the partition key and sort key attributes as the Primary Key(s). Use this approach
only when you are certain that the schema of the DynamoDB table being imported is
fixed and each item has values for most of the attributes. If the DynamoDB items do
not have common attributes, this can result in a lot of NoSQL columns with empty
values.

Note:
We highly recommend using schema-less tables when migrating data
from DynamoDB to Oracle NoSQL Database, because DynamoDB tables are
themselves schema-less. This is especially true for large tables
where the content of each record may not be uniform across the table.
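
As an illustration of modeling option 1, the DDL for the sink table could look like the
following sketch, assuming a hypothetical DynamoDB table whose partition key PersonID
maps to INTEGER (with defaultSchema set to true, the Migrator generates an equivalent
table for you):

CREATE TABLE IF NOT EXISTS sampleDynDBImport (
    PersonID INTEGER,
    document JSON,
    PRIMARY KEY(SHARD(PersonID))
)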

Troubleshooting the Oracle NoSQL Database Migrator


Learn about the general challenges that you may face while using the NoSQL Database
Migrator, and how to resolve them.

Migration has failed. How can I resolve this?


A failure of the data migration can occur for multiple underlying reasons. The
important causes are listed below:

Table 1-18 Migration Failure Causes

Error Message: Failed to connect to Oracle NoSQL Database
Meaning: The migrator could not establish a connection with the NoSQL Database.
Resolution:
• Check if the values of the storeName and helperHosts attributes in the configuration
JSON file are valid and that the hosts are reachable.
• For a secured store, verify if the security file is valid with the correct user name and
password values.

Error Message: Failed to connect to Oracle NoSQL Database Cloud Service
Meaning: The migrator could not establish a connection with the Oracle NoSQL Database
Cloud Service.
Resolution:
• Verify if the endpoint URL or region name specified in the configuration JSON file is
correct.
• Check if the OCI credentials file is available in the path specified in the configuration
JSON file.
• Ensure that the OCI credentials provided in the OCI credentials file are valid.

Error Message: Table not found
Meaning: The table identified for the migration could not be located by the NoSQL
Database Migrator.
Resolution:
For the Source:
• Verify if the table is present in the source database.
• Ensure that the table is qualified with its namespace in the configuration JSON file, if
the table is created in a non-default namespace.
• Verify if you have the required read/write authorization to access the table.
• If the source is Oracle NoSQL Database Cloud Service, verify if the valid compartment
name is specified in the configuration JSON file, and ensure that you have the required
authorization to access the table.
For the Sink:
• Verify if the table is present in the sink. If it does not exist, you must either create the
table manually or use the schemaInfo config to create it through the migration.

Error Message: DDL Execution failed
Meaning: The DDL commands provided in the input schema definition file are invalid.
Resolution:
• Check the syntax of the DDL commands in the schemaPath file.
• Ensure that there is only one DDL statement per line in the schemaPath file.

Error Message: failed to write record to the sink table with
java.lang.IllegalArgumentException
Meaning: The input record does not match the table schema of the sink.
Resolution:
• Check if the data types and column names of the input records match the sink table
schema.
• If you applied any transformation, check if the transformed records match the sink
table schema.

Error Message: Request timeout
Meaning: The source or sink's operation did not complete within the expected time.
Resolution:
• Verify the network connection.
• Check if the NoSQL Database is up and running.
• Try to increase the requestTimeoutMs value in the configuration JSON file.

What should I consider before restarting a failed migration?


When a data migration task fails, the sink is left in an intermediate state containing
the data imported up to the point of failure. You can identify the error and failure details
from the logs and restart the migration after diagnosing and correcting the error. A
restarted migration starts over, processing all data from the beginning. There is no way
to checkpoint and restart the migration from the point of failure. Therefore, the NoSQL
Database Migrator overwrites any record that was already migrated to the sink.

Migration is too slow. How can I speed it up?


The time taken for the data migration depends on multiple factors, such as the volume of
data being migrated, network speed, and the current load on the database. In the case of a
cloud service, the speed of migration also depends on the provisioned read throughput and
write throughput. So, to improve the migration speed, you can:
• Try to reduce the current workload on your Oracle NoSQL Database while
migrating the data.
• Ensure that the machine that is running the migration, the source, and the sink are all
located in the same data center and the network latencies are minimal.
• In the case of Oracle NoSQL Database Cloud Service, provision high read/write
throughput and verify that the storage allocated for the table is sufficient. If the
NoSQL Database Migrator is not creating the table, you can increase the write
throughput. If the migrator is creating the table, consider specifying a higher value
for the schemaInfo.writeUnits parameter in the sink configuration. Once the data
migration completes, you can lower this value. Be aware of daily limits on
throughput changes. See Cloud Limits and Sink Configuration Templates.

I have a long running migration involving huge datasets. How can I track the progress
of the migration?
You can enable additional logging to track the progress of a long-running migration. To control
the logging behavior of Oracle NoSQL Database Migrator, you must set the desired level of
logging in the logging.properties file. This file is provided with the NoSQL Database
Migrator package and available in the directory where the Oracle NoSQL Database Migrator
was unpacked. The different levels of logging are OFF, SEVERE, WARNING, INFO, FINE, and
ALL in the order of increasing verbosity. Setting the log level to OFF turns off all the logging
information, whereas setting the log level to ALL provides the full log information. The default
log level is WARNING. All the logging output is configured to go to the console by default. You
can see the comments in the logging.properties file to learn about each log level.
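
As an illustration, entries like the following in logging.properties raise the verbosity
(standard java.util.logging property syntax; the exact entries in your packaged file may
differ):

# send all log output to the console
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
# raise the overall level from the default WARNING to INFO
.level=INFO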

Manage
• Using APIs to manage tables
• Using console to manage tables

Using APIs to manage tables


• Reading Data
• Using Queries
• Modifying Tables
• Deleting Data
• Dropping Tables and Indexes

Reading Data
Learn how to read data from your table.
You can read data from your application by using the different API methods for the language-
specific drivers. You can retrieve a record based on a single primary key value, or by using
queries.

Note:
First, connect your client driver to Oracle NoSQL Database Cloud Service to get a
connection and then complete other steps. This topic omits the steps for connecting
your client driver and creating a table.

• Java

• Python

• Go

• Node.js

• C#

• Spring Data

Java
The GetRequest class provides a simple and powerful way to read data, while queries
can be used for more complex read requests. To read data from a table, specify the
target table and target key using the GetRequest class and use NoSQLHandle.get() to
execute your request. The result of the operation is available in GetResult. The
following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL
Handle. To explore other options of specifying a compartment for the NoSQL tables,
see About Compartments.
To read data from your table:

/* GET the row, first create the row key */
MapValue key = new MapValue().put("id", 1);
GetRequest getRequest = new GetRequest().setKey(key)
    .setTableName("users");
GetResult getRes = handle.get(getRequest);

/* on success, GetResult.getValue() returns a non-null value */
if (getRes.getValue() != null) {
    // success
} else {
    // failure
}

Note:
By default, all read operations are eventually consistent. You can change the
default Consistency for a NoSQLHandle instance by using the
NoSQLHandleConfig.setConsistency(oracle.nosql.driver.Consistency)
and GetRequest.setConsistency() methods.

See the Java API Reference Guide for more information about the GET APIs.

Python
Learn how to read data from your table. You can read single rows using the
borneo.NoSQLHandle.get() method. This method allows you to retrieve a record
based on its primary key value. The borneo.GetRequest class is used for simple get
operations. It contains the primary key value for the target row and returns an instance
of borneo.GetResult.

from borneo import GetRequest

# GetRequest requires a table name
request = GetRequest().set_table_name('users')
# set the primary key to use
request.set_key({'id': 1})
result = handle.get(request)
# on success the value is not empty
if result.get_value() is not None:
    # success
    ...

By default all read operations are eventually consistent, using
borneo.Consistency.EVENTUAL. This type of read is less costly than those using absolute
consistency, borneo.Consistency.ABSOLUTE. This default can be changed in
borneo.NoSQLHandle using borneo.NoSQLHandleConfig.set_consistency() before creating
the handle. It can be changed for a single request using
borneo.GetRequest.set_consistency().

Go
You can read single rows using the Client.Get function. This function allows you to retrieve
a record based on its primary key value. The nosqldb.GetRequest is used for simple get
operations. It contains the primary key value for the target row and returns an instance of
nosqldb.GetResult. If the get operation succeeds, a non-nil GetResult.Version is returned.

key := &types.MapValue{}
key.Put("id", 1)
req := &nosqldb.GetRequest{
    TableName: "users",
    Key:       key,
}
res, err := client.Get(req)

By default all read operations are eventually consistent, using types.Eventual. This type of
read is less costly than those using absolute consistency, types.Absolute. This default can
be changed in nosqldb.RequestConfig using RequestConfig.Consistency before creating
the client. It can be changed for a single request using GetRequest.Consistency field.
1. Change default consistency for all read operations.

cfg := nosqldb.Config{
    RequestConfig: nosqldb.RequestConfig{
        Consistency: types.Absolute,
        ...
    },
    ...
}
client, err := nosqldb.NewClient(cfg)

2. Change consistency for a single read operation.

req := &nosqldb.GetRequest{
    TableName:   "users",
    Key:         key,
    Consistency: types.Absolute,
}

Node.js
You can read single rows using the get method. This method allows you to retrieve a record
based on its primary key value. You can set the consistency of a read operation using the
Consistency enumeration. By default all read operations are eventually consistent, using
Consistency.EVENTUAL. This type of read is less costly than those using absolute
consistency, Consistency.ABSOLUTE. The default consistency for read operations can
be set in the initial configuration used to create the NoSQLClient instance using the
consistency property. You may also change it for a single read operation by setting the
consistency property in the opt argument of the get method.

The get method returns a Promise of GetResult, which is a plain JavaScript object
containing the resulting row and its Version. If the provided primary key does not exist
in the table, the value of the row property will be null. Note that the property names in the
provided primary key object should be the same as the underlying table column
names.

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const Consistency = require('oracle-nosqldb').Consistency;
const client = new NoSQLClient('config.json');

async function getRowsFromUsersTable() {
    const tableName = 'users';
    try {
        let result = await client.get(tableName, { id: 1 });
        console.log('Got row: ' + result.row);
        // Use absolute consistency
        result = await client.get(tableName, { id: 1 },
            { consistency: Consistency.ABSOLUTE });
        console.log('Got row with absolute consistency: ' + result.row);
    } catch (error) {
        // handle errors
    }
}

C#
You can read a single row using the GetAsync method. This method allows you to
retrieve a row based on its primary key value. This method takes the primary key value
as MapValue. The field names should be the same as the table primary key column
names. You may also pass options as GetOptions.

You can set the consistency of a read operation using the Consistency enumeration. By
default all read operations are eventually consistent. This type of read is less costly
than those using absolute consistency. The default consistency for read operations
may be set as the Consistency property of NoSQLConfig. You may also change the
consistency for a single Get operation by using the Consistency property of GetOptions.

The GetAsync method returns Task<GetResult<RecordValue>>. The GetResult instance
contains the returned Row, the row Version, and other information. If the row with the
provided primary key does not exist in the table, the values of both the Row and Version
properties will be null.

var client = new NoSQLClient("config.json");
..................................................
var tableName = "users";
try
{
    var result = await client.GetAsync(tableName,
        new MapValue
        {
            ["id"] = 1
        });
    // Continuing from the Put example, the expected output will be:
    // { "id": 1, "name": "Kim" }
    Console.WriteLine("Got row: {0}", result.Row);
    // Use absolute consistency.
    result = await client.GetAsync(tableName,
        new MapValue
        {
            ["id"] = 2
        },
        new GetOptions
        {
            Consistency = Consistency.Absolute
        });
    // The expected output will be:
    // { "id": 2, "name": "Jack" }
    Console.WriteLine("Got row with absolute consistency: {0}", result.Row);
    // Continuing from the Put example, the expiration time should be
    // 30 days from now.
    Console.WriteLine("Expiration time: {0}", result.ExpirationTime);
}
catch (Exception ex)
{
    // handle exceptions
}

Spring Data
Use one of these methods to read the data from the table - NosqlRepository findById(),
findAllById(), findAll() or using NosqlTemplate find(), findAll(), findAllById(). For
details, see SDK for Spring Data API Reference.

Note:
First, create the AppConfig class that extends AbstractNosqlConfiguration class
to provide the connection details of the Oracle NoSQL Database. For more details,
see Obtaining a NoSQL connection.

In this section, you use the NosqlRepository findAll() method.

Create the UsersRepository interface. This interface extends the NosqlRepository interface
and provides the entity class and the data type of the primary key in that class as
parameterized types to the NosqlRepository interface. This NosqlRepository interface
provides methods that are used to retrieve data from the database.

import com.oracle.nosql.spring.data.repository.NosqlRepository;

/* The Users is the entity class and Long is the data type of the primary
key in the Users class.
This interface provides methods that return iterable instances of the
Users class. */

public interface UsersRepository extends NosqlRepository<Users, Long> {


Iterable<Users> findAll();
}

In the application, you select all the rows from the Users table and provide them to an
iterable instance. Print the values to the output from the iterable object.

@Autowired
private UsersRepository repo;

/* Select all the rows in the Users table and provides them into an
iterable instance.*/

System.out.println("\nfindAll:");
Iterable < Users > allusers = repo.findAll();

/* Print the values to the output from the iterable object.*/


for (Users u: allusers) {
System.out.println(" User: " + u);
}

Run the program to display the output.

findAll:

User: Users{id=1, firstName=John, lastName=Doe}


User: Users{id=2, firstName=Angela, lastName=Willard}

Using Queries
Learn about some aspects of using queries to your application in Oracle NoSQL
Database Cloud Service.
Oracle NoSQL Database Cloud Service provides a rich query language to read and
update data. See Developers Guide for a full description of the query language.

• Java

• Python

• Go

• Node.js

• C#

• Spring Data

Java
To execute your query, you use the NoSQLHandle.query() API. See the Java API Reference
Guide for more information about this API.

Note:
The following examples consider that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL
Handle . To explore other options of specifying a compartment for the NoSQL
tables, see About Compartments .

To execute a SELECT query to read data from your table:

/* QUERY a table named "users", using the primary key field "name".
 * The table name is inferred from the query statement.
 */
QueryRequest queryRequest = new QueryRequest().
    setStatement("SELECT * FROM users WHERE name = \"Taylor\"");

/* Queries can return partial results. It is necessary to loop,
 * reissuing the request until it is "done".
 */
do {
    QueryResult queryResult = handle.query(queryRequest);

    /* process current set of results */
    List<MapValue> results = queryResult.getResults();
    for (MapValue qval : results) {
        // handle result
    }
} while (!queryRequest.isDone());

When using queries, be aware of the following considerations:


• You can use prepared queries when you want to run the same query multiple times.
When you use prepared queries, the execution is more efficient than starting with a query
string every time. The query language and API support query variables to assist with the
reuse. See NoSQLHandle.prepare in the Java API Reference Guide for more information.
• You can set important query attributes, such as the usable amount of resources, or the
read consistency used by the read operations, using the QueryRequest class. See
QueryRequest in the Java API Reference Guide for more information.
For example, to execute a SELECT query to read data from your table using a prepared
statement:

/* Perform the same query using a prepared statement. This is more
 * efficient if the query is executed repeatedly and required if
 * the query contains any bind variables.
 */
String query = "DECLARE $name STRING; " +
    "SELECT * from users WHERE name = $name";

PrepareRequest prepReq = new PrepareRequest().setStatement(query);

/* prepare the statement */
PrepareResult prepRes = handle.prepare(prepReq);

/* set the bind variable and set the statement in the QueryRequest */
prepRes.getPreparedStatement()
    .setVariable("$name", new StringValue("Taylor"));
QueryRequest queryRequest = new QueryRequest().setPreparedStatement(prepRes);

/* perform the query in a loop until done */
do {
    QueryResult queryResult = handle.query(queryRequest);
    /* handle result */
} while (!queryRequest.isDone());

Python
To execute a query, use the borneo.NoSQLHandle.query() method. For example, you can
execute a SELECT query to read data from your table. A borneo.QueryResult
contains a list of results, and if borneo.QueryRequest.is_done() returns False,
there may be more results, so queries should generally be run in a loop. It is possible
for a single request to return no results while the query is still not done, indicating that
the query loop should continue. For example:

from borneo import QueryRequest

# Query the table named 'users' using the field 'name', where 'name' may
# match 0 or more rows in the table. The table name is inferred from the
# query.
statement = 'select * from users where name = "Jane"'
request = QueryRequest().set_statement(statement)
# loop until the request is done, handling results as they arrive
while True:
    result = handle.query(request)
    # handle results
    handle_results(result)
    # do something with results
    if request.is_done():
        break

When using queries it is important to be aware of the following considerations:


• Oracle NoSQL Database provides the ability to prepare queries for execution and
reuse. It is recommended that you use prepared queries when you run the same
query multiple times. When you use prepared queries, the execution is much
more efficient than starting with a query string every time. The query language and
API support query variables to assist with query reuse.
• The borneo.QueryRequest allows you to set the read consistency for a query, as
well as modify the maximum amount of resources (read and write) to be used by
a single request. This can be important to prevent a query from getting throttled
because it uses too much resource too quickly.

Here is an example of using a prepared query with a single variable:

from borneo import PrepareRequest, QueryRequest

# Use a similar query to the above, but make the name a variable
statement = ('declare $name string; '
             'select * from users where name = $name')
prequest = PrepareRequest().set_statement(statement)
presult = handle.prepare(prequest)
# use the prepared statement, set the variable
pstatement = presult.get_prepared_statement()
pstatement.set_variable('$name', 'Jane')
qrequest = QueryRequest().set_prepared_statement(pstatement)
# loop until qrequest is done, handling results as they arrive
while True:
    # use the prepared query in the query request
    qresult = handle.query(qrequest)
    # handle results
    handle_results(qresult)
    # do something with results
    if qrequest.is_done():
        break
# use a different variable value with the same prepared query
pstatement.set_variable('$name', 'another_name')
qrequest = QueryRequest().set_prepared_statement(pstatement)
# loop until qrequest is done, handling results as they arrive
while True:
    qresult = handle.query(qrequest)
    handle_results(qresult)
    if qrequest.is_done():
        break

Go
To execute a query use the Client.Query function. For example, to execute a SELECT query
to read data from your table:

prepReq := &nosqldb.PrepareRequest{
    Statement: "select * from users",
}
prepRes, err := client.Prepare(prepReq)
if err != nil {
    fmt.Printf("Prepare failed: %v\n", err)
    return
}
queryReq := &nosqldb.QueryRequest{
    PreparedStatement: &prepRes.PreparedStatement,
}
var results []*types.MapValue
for {
    queryRes, err := client.Query(queryReq)
    if err != nil {
        fmt.Printf("Query failed: %v\n", err)
        return
    }
    res, err := queryRes.GetResults()
    if err != nil {
        fmt.Printf("GetResults() failed: %v\n", err)
        return
    }
    results = append(results, res...)
    if queryReq.IsDone() {
        break
    }
}

Queries should generally be run in a loop, checking QueryRequest.IsDone() to determine
whether the query is complete. It is possible for a single request to return no results
but still have QueryRequest.IsDone() evaluate to false, indicating that the query loop
should continue.
When using queries it is important to be aware of the following considerations:
• Oracle NoSQL Database provides the ability to prepare queries for execution and
reuse. It is recommended that you use prepared queries when you run the same
query multiple times. When you use prepared queries, the execution is much more
efficient than starting with a query string every time. The query language and API
support query variables to assist with query reuse.
• The nosqldb.QueryRequest allows you to set the read consistency for a query (via the QueryRequest.Consistency field), as well as modify the maximum amount of resources (read and write, via the QueryRequest.MaxReadKB and QueryRequest.MaxWriteKB fields) to be used by a single request. This can be important to prevent a query from getting throttled because it uses too many resources too quickly, as in the sketch below.
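
A minimal sketch of setting those fields, using the types.Eventual consistency constant (verify the field names against the Go SDK reference):

queryReq := &nosqldb.QueryRequest{
    Statement:   "select * from users",
    Consistency: types.Eventual,
    // Cap per-request read and write cost, in KB.
    MaxReadKB:  20,
    MaxWriteKB: 10,
}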

Node.js
To execute a query, use the query method. This method returns a Promise of
QueryResult, which is a plain JavaScript object that contains an Array of resulting rows
as well as a continuation key. The amount of data returned by the query is limited by the
system default and could be further limited by setting the maxReadKB property in the opt
argument of query, which means that one invocation of the query method may not return
all available results. This situation is dealt with by using the continuationKey property.
A non-null continuation key means that more query results may be available. This means
that queries should generally run in a loop, looping until the continuation key becomes
null. Note that it is possible for rows to be empty yet have a non-null continuationKey,
which means the query loop should continue. In order to receive all the results, call
query in a loop. At each iteration, if a non-null continuation key is received in
QueryResult, set the continuationKey property in the opt argument for the next iteration:

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
.....
const client = new NoSQLClient('config.json');

async function queryUsersTable() {
    const opt = {};
    try {
        do {
            const result = await client.query('SELECT * FROM users', opt);
            for(let row of result.rows) {
                console.log(row);
            }
            opt.continuationKey = result.continuationKey;
        } while(opt.continuationKey);
    } catch(error) {
        // handle errors
    }
}

When using queries it is important to be aware of the following considerations:


• The Oracle NoSQL Database provides the ability to prepare queries for execution and reuse. It is recommended that you use prepared queries when you run the same query multiple times. When you use prepared queries, the execution is much more efficient than starting with a query string every time. The query language and API support query variables to assist with query reuse.
• Using the opt argument of query allows you to set the read consistency for a query, as well as modify the maximum amount of data it reads in a single call. This can be important to prevent a query from getting throttled, as shown in the sketch below.
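
For example, here is a minimal sketch of an opt argument that sets eventual consistency and caps the read cost of each call (assuming the Consistency enum exported by the driver; verify the option names in the SDK reference):

const { NoSQLClient, Consistency } = require('oracle-nosqldb');

const client = new NoSQLClient('config.json');
// Inside an async function: per-call options request eventual
// consistency and at most 20 KB of reads per call.
const opt = {
    consistency: Consistency.EVENTUAL,
    maxReadKB: 20
};
const result = await client.query('SELECT * FROM users', opt);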
Use the prepare method to prepare the query. This method returns a Promise of a
PreparedStatement object. Use the set method to bind query variables. To run a prepared
query, pass the PreparedStatement to query or queryIterable instead of the statement string.

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
.....
const client = new NoSQLClient('config.json');

async function queryUsersTable() {
    const statement = 'DECLARE $name STRING; SELECT * FROM users WHERE ' +
        'name = $name';
    try {
        let prepStatement = await client.prepare(statement);
        const opt = {};
        // Set value for $name variable
        prepStatement.set('$name', 'Taylor');
        do {
            let result = await client.query(prepStatement, opt);
            for(let row of result.rows) {
                console.log(row);
            }
            opt.continuationKey = result.continuationKey;
        } while (opt.continuationKey);
        // Set different value for $name and re-execute the query
        prepStatement.set('$name', 'Jane');
        do {
            let result = await client.query(prepStatement, opt);
            for(let row of result.rows) {
                console.log(row);
            }
            opt.continuationKey = result.continuationKey;
        } while (opt.continuationKey);
    } catch(error) {
        // handle errors
    }
}

C#
To execute a query, you may call the QueryAsync method, or call the
GetQueryAsyncEnumerable method and iterate over the resulting async enumerable.
You may pass options to each of these methods as QueryOptions. The QueryAsync method
returns Task<QueryResult<RecordValue>>. QueryResult contains query results as a list
of RecordValue instances, as well as other information. When your query specifies a
complete primary key (or you are executing an INSERT statement), it is sufficient to
call QueryAsync once.

var client = new NoSQLClient("config.json");

try {
    var result = await client.QueryAsync(
        "SELECT * FROM users WHERE id = 1");
    // Because we select by primary key, there can be at most one record.
    if (result.Rows.Count > 0) {
        Console.WriteLine("Got record: {0}.", result.Rows[0]);
    }
    else {
        Console.WriteLine("Got no records.");
    }
}
catch(Exception ex) {
    // handle exceptions
}

The amount of data returned by the query is limited by the system. It could also be
further limited by setting the MaxReadKB property of QueryOptions. This means that one
invocation of QueryAsync may not return all available results. This situation is dealt
with by using a continuation key. A non-null ContinuationKey in QueryResult means that
more query results may be available. This means that queries should run in a loop,
looping until the continuation key becomes null.

Note that it is possible for a query to return no rows (QueryResult.Rows is empty) yet
have a non-null continuation key, which means that the query should continue looping. To
continue the query, set ContinuationKey in the QueryOptions for the next call to
QueryAsync and loop until the continuation key becomes null. The following example
executes the query and prints query results:

var client = new NoSQLClient("config.json");

var options = new QueryOptions();
try {
    do {
        var result = await client.QueryAsync(
            "SELECT id, name FROM users ORDER BY name",
            options);
        foreach(var row in result.Rows) {
            Console.WriteLine(row);
        }
        options.ContinuationKey = result.ContinuationKey;
    }
    while(options.ContinuationKey != null);
}
catch(Exception ex) {
    // handle exceptions
}

Another way to execute the query in a loop is to use GetQueryAsyncEnumerable. It returns an instance of AsyncEnumerable<QueryResult> that can be iterated over. Each iteration step returns a portion of the query results as QueryResult.

var client = new NoSQLClient("config.json");

try {
    await foreach(var result in client.GetQueryAsyncEnumerable(
        "SELECT id, name FROM users ORDER BY name"))
    {
        foreach(var row in result.Rows) {
            Console.WriteLine(row);
        }
    }
}
catch(Exception ex) {
    // handle exceptions
}

Oracle NoSQL Database provides the ability to prepare queries for execution and reuse. It is
recommended that you use prepared queries when you run the same query multiple times.
When you use prepared queries, the execution is much more efficient than starting with a
SQL statement every time. The query language and API support query variables to assist
with query reuse.
Use PrepareAsync to prepare the query. This method returns Task<PreparedStatement>.
PreparedStatement allows you to set query variables. The query methods QueryAsync and
GetQueryAsyncEnumerable have overloads that execute prepared queries by taking
PreparedStatement as a parameter instead of the SQL statement. For example:

var client = new NoSQLClient("config.json");

try {
    var sql = "DECLARE $name STRING; SELECT * FROM users WHERE " +
        "name = $name";
    var preparedStatement = await client.PrepareAsync(sql);
    // Set value for $name variable and execute the query
    preparedStatement.Variables["$name"] = "Taylor";
    await foreach(var result in client.GetQueryAsyncEnumerable(
        preparedStatement)) {
        foreach(var row in result.Rows) {
            Console.WriteLine(row);
        }
    }
    // Set different value for $name and re-execute the query.
    preparedStatement.Variables["$name"] = "Jane";
    await foreach(var result in client.GetQueryAsyncEnumerable(
        preparedStatement)) {
        foreach(var row in result.Rows) {
            Console.WriteLine(row);
        }
    }
}
catch(Exception ex) {
    // handle exceptions
}

Spring Data
Use one of these methods to run your query: NosqlRepository derived queries, native
queries, or the NosqlTemplate methods runQuery(), runQueryJavaParams(), and
runQueryNosqlParams(). For details, see SDK for Spring Data API Reference.

Note:
First, create the AppConfig class that extends AbstractNosqlConfiguration
class to provide the connection details of the Oracle NoSQL Database. For
more details, see Obtaining a NoSQL connection.

In this section, you use the derived queries. For more details on the derived queries,
see Derived Queries.
Create the UsersRepository interface. This interface extends the NosqlRepository
interface and provides the entity class and the data type of the primary key in that
class as parameterized types to the NosqlRepository interface. The NosqlRepository
interface provides methods that are used to retrieve data from the database.

import com.oracle.nosql.spring.data.repository.NosqlRepository;

/* The Users is the entity class and Long is the data type of the
primary key in the Users class.
This interface provides methods that return iterable instances of
the Users class. */

public interface UsersRepository extends NosqlRepository<Users, Long> {


/* Search the Users table by the last name and return an iterable
instance of the Users class.*/
Iterable<Users> findByLastName(String lastname);
}

In the application, you select rows from the Users table with the required last name and
print the values from the returned objects.

@Autowired
private UsersRepository repo;

System.out.println("\nfindBylastName: Willard");


/* Use queries to find by the last Name. Search the Users table by the last
name and return an iterable instance of the Users class.*/
allusers = repo.findByLastName("Willard");

for (Users s: allusers) {


System.out.println(" User: " + s);
}

Run the program to display the output.

findBylastName: Willard

User: Users{id=2, firstName=Angela, lastName=Willard}

Modifying Tables
Learn how to modify tables.
You modify a table to:
• Add new fields to an existing table
• Delete existing fields from a table
• Change the default TTL value
• Modify table limits
Examples of DDL statements are:

/* Add a new field to the table */


ALTER TABLE users (ADD age INTEGER)

/* Drop an existing field from the table */


ALTER TABLE users (DROP age)

/* Modify the default TTL value*/


ALTER TABLE users USING TTL 4 days

• Java

• Python

• Go

• Node.js

• C#

• Spring Data


Java
The following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL
Handle. To explore other options of specifying a compartment for the NoSQL tables,
see About Compartments.
When altering a table with provisioned capacity, you may also use the
TableRequests.setTableLimits method to modify table limits.

TableLimits limits = new TableLimits(40, 10, 5);
TableRequest treq = new TableRequest()
    .setTableName("users")
    .setTableLimits(limits);
TableResult tres = handle.tableRequest(treq);
/* wait for completion of the operation */
tres.waitForCompletion(handle,
    60000,  /* wait for 60 sec */
    1000);  /* delay in ms for poll */

You can also use the Oracle NoSQL Database Java SDK to modify a table and
change the capacity model to an on-demand capacity configuration. You can also
choose to change the storage capacity.

// Previous limit in Provisioned Mode
// TableLimits limits = new TableLimits(40, 10, 5);
// Call the constructor to only set the storage limit (for on-demand)
TableLimits limits = new TableLimits(10);
TableRequest treq = new TableRequest()
    .setTableName("users")
    .setTableLimits(limits);
TableResult tres = serviceHandle.tableRequest(treq);
tres.waitForCompletion(serviceHandle, 50000, 3000);

You can change the definition of the table. The TTL value is changed below.

/* Alter the users table to modify the TTL value to 4 days.
 * When modifying the table schema or other table state you cannot also
 * modify the table limits. These must be independent operations.
 */
String alterTableDDL = "ALTER TABLE users " + "USING TTL 4 days";
TableRequest treq = new TableRequest().setStatement(alterTableDDL);
/* start the operation, it is asynchronous */
TableResult tres = handle.tableRequest(treq);
/* wait for completion of the operation */
tres.waitForCompletion(handle,
    60000,  /* wait for 60 sec */
    1000);  /* delay in ms for poll */


Python
If you are using the Oracle NoSQL Database Cloud Service, table limits can be modified
using borneo.TableRequest.set_table_limits(). If the table is configured with
provisioned capacity, the limits can be set as shown in the example below:

from borneo import TableLimits, TableRequest


# in this path the table name is required, as there is no DDL statement
request = TableRequest().set_table_name('users')
request.set_table_limits(TableLimits(40, 10, 5))
result = handle.table_request(request)
# table_request is asynchronous, so wait for the operation to complete,
# wait for 40 seconds, polling every 3 seconds
result.wait_for_completion(handle, 40000, 3000)

You can also use the Oracle NoSQL Database Python SDK to modify a table and change the
capacity model to an on-demand capacity configuration. You can also choose to change the
Storage capacity.

from borneo import TableLimits, TableRequest


# in this path the table name is required, as there is no DDL statement
request = TableRequest().set_table_name('users')
request.set_table_limits(TableLimits(10))
result = handle.table_request(request)
# table_request is asynchronous, so wait for the operation to complete,
# wait for 40 seconds, polling every 3 seconds
result.wait_for_completion(handle, 40000, 3000)

You can change the TTL value of a table as shown below.

# Alter the users table to modify the TTL value to 4 days.
# When modifying the table schema or other table state you cannot also
# modify the table limits. These must be independent operations.
statement = 'ALTER TABLE users USING TTL 4 days'
request = TableRequest().set_statement(statement)
# assume that a handle has been created, as handle; make the request and
# wait for 60 seconds, polling every 1 second
result = handle.do_table_request(request, 60000, 1000)

Go
Specify the DDL statement and other information in a TableRequest, and execute the request
using the nosqldb.DoTableRequest() or nosqldb.DoTableRequestAndWait() function.

req:=&nosqldb.TableRequest{
Statement: "ALTER TABLE users (ADD age INTEGER)",
}
res, err:=client.DoTableRequestAndWait(req, 5*time.Second, time.Second)


The Oracle NoSQL Database Cloud Service table limits can be modified using
TableRequest.TableLimits. If the table is configured with provisioned capacity, the
limits can be set as shown in the example below.

req := &nosqldb.TableRequest{
    TableName: "users",
    TableLimits: &nosqldb.TableLimits{
        ReadUnits:  100,
        WriteUnits: 100,
        StorageGB:  5,
    },
}
res, err := client.DoTableRequestAndWait(req, 5*time.Second, time.Second)

You can also use the Oracle NoSQL Database Go SDK to modify a table and change
the capacity model to an on-demand capacity configuration. You can also choose to
change the Storage capacity.

req := &nosqldb.TableRequest{
    TableName:   "users",
    TableLimits: &nosqldb.TableLimits{StorageGB: 10},
}
res, err := client.DoTableRequestAndWait(req, 5*time.Second, time.Second)

Node.js
Use NoSQLClient#tableDDL to modify a table by issuing a DDL statement against this
table. Table limits can be modified using the setTableLimits method. It takes the table
name and new TableLimits as arguments and returns a Promise of TableResult. If the table
is configured with provisioned capacity, the limits can be set as shown in the example
below.

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const TableState = require('oracle-nosqldb').TableState;
const client = new NoSQLClient('config.json');

async function modifyUsersTableLimits() {
    const tableName = 'users';
    try {
        let result = await client.setTableLimits(tableName, {
            readUnits: 40,
            writeUnits: 10,
            storageGB: 5
        });
        // Wait for the operation completion using the specified timeout
        // and polling interval (delay)
        await client.forCompletion(result, TableState.ACTIVE, {
            timeout: 30000,
            delay: 2000
        });
        console.log('Table limits modified');
    } catch(error) {
        // handle errors
    }
}

You can also use the Oracle NoSQL Database Node.js SDK to modify a table and change the
capacity model to an on-demand capacity configuration. You can also choose to change the
Storage capacity.

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const TableState = require('oracle-nosqldb').TableState;
const client = new NoSQLClient('config.json');

async function modifyUsersTableLimits() {
    const tableName = 'users';
    try {
        let result = await client.setTableLimits(tableName, { storageGB: 10 });
        // Wait for the operation completion using the specified timeout
        // and polling interval (delay)
        await client.forCompletion(result, TableState.ACTIVE, {
            timeout: 30000,
            delay: 2000
        });
        console.log('Table limits modified');
    } catch(error) {
        // handle errors
    }
}

You can change the TTL value of a table as shown below.

/* Alter the users table to modify the TTL value to 4 days. */
const statement = 'ALTER TABLE users ' + 'USING TTL 4 days';
let result = await client.tableDDL(statement, { complete: true });
console.log('Table users altered');

C#
Use ExecuteTableDDLAsync or ExecuteTableDDLWithCompletionAsync to modify a table by
issuing a DDL statement against this table.
Table limits can be modified using the SetTableLimitsAsync or
SetTableLimitsWithCompletionAsync methods. They take the table name and new TableLimits
as parameters and return Task<TableResult>. If the table is configured with provisioned
capacity, the limits can be set as shown in the example below.

var client = new NoSQLClient("config.json");


var tableName = "users";
try {
var result = await client.SetTableLimitsWithCompletionAsync(
tableName, new TableLimits(40, 10, 5));
// Expected output: Table state is Active.
Console.WriteLine("Table state is {0}.", result.TableState);
Console.WriteLine("Table limits have been changed");
}
catch(Exception ex) {
// handle exceptions
}


You can also use the Oracle NoSQL Database .NET SDK to modify a table and
change the capacity model to an on-demand capacity configuration. You can also
choose to change the Storage capacity.

var client = new NoSQLClient("config.json");

var tableName = "users";
try {
    var result = await client.SetTableLimitsWithCompletionAsync(
        tableName, new TableLimits(10));
    // Expected output: Table state is Active.
    Console.WriteLine("Table state is {0}.", result.TableState);
    Console.WriteLine("Table limits have been changed");
}
catch(Exception ex) {
    // handle exceptions
}

You can change the TTL value of a table as shown below.

/* Alter the users table to modify the TTL value to 4 days. */
var statement = "ALTER TABLE users " + "USING TTL 4 days";
var result = await client.ExecuteTableDDLAsync(statement);
await result.WaitForCompletionAsync();
Console.WriteLine("Table users altered.");

Spring Data
To modify a table, you can use the NosqlTemplate.runTableRequest() method. For
details, see SDK for Spring Data API Reference.

Note:
While the Spring Data SDK provides an option to modify the tables, it is not
recommended to alter the schemas as the Spring Data SDK expects tables
to comply with the default schema (two columns - the primary key column of
types String, int, long, or timestamp and a JSON column called kv_json_).

Deleting Data
Learn how to delete rows from your table.
After you insert or load data into a table, you can delete the table rows when they are
no longer required.

• Java

• Python


• Go

• Node.js

• C#

• Spring Data

Java
The following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL Handle. To
explore other options of specifying a compartment for the NoSQL tables, see About
Compartments.
To delete a row from a table:

/* identify the row to delete */


MapValue delKey = new MapValue().put("id", 2);

/* construct the DeleteRequest */


DeleteRequest delRequest = new DeleteRequest().setKey(delKey)
.setTableName("users");
/* Use the NoSQL handle to execute the delete request */
DeleteResult del = handle.delete(delRequest);
/* on success DeleteResult.getSuccess() returns true */
if (del.getSuccess()) {
// success, row was deleted
} else {
// failure, row either did not exist or conditional delete failed
}

You can delete multiple rows that share a shard key in a single atomic operation using
the MultiDeleteRequest class, as in the sketch below.

See the Java API Reference Guide for more information about the APIs.
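
A minimal sketch of such a multi-row delete, assuming a composite primary key whose shard key is a hypothetical field named shardId (MultiDeleteRequest and FieldRange are part of the Java SDK; verify the exact signatures in the Java API Reference Guide):

/* Delete all rows with shardId = 1 and id in the range [10, 20).
 * The partial key must contain the complete shard key. */
MapValue partialKey = new MapValue().put("shardId", 1);
FieldRange range = new FieldRange("id")
    .setStart(new IntegerValue(10), true)
    .setEnd(new IntegerValue(20), false);
MultiDeleteRequest mdReq = new MultiDeleteRequest()
    .setTableName("users")
    .setKey(partialKey)
    .setRange(range);
MultiDeleteResult mdRes = handle.multiDelete(mdReq);
/* the number of rows that were deleted */
int deleted = mdRes.getNumDeletions();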

Python
Single rows are deleted using borneo.DeleteRequest using a primary key value as shown
below.

from borneo import DeleteRequest

# DeleteRequest requires a table name and primary key
request = DeleteRequest().set_table_name('users')
request.set_key({'id': 1})
# perform the operation
result = handle.delete(request)
if result.get_success():
    # success -- the row was deleted
    pass
# if the row didn't exist or was not deleted for any other reason,
# False is returned by get_success()


Delete operations can be conditional, based on a borneo.Version returned from a get
operation. You can perform multiple deletes in a single operation over a range of values
using borneo.MultiDeleteRequest and borneo.NoSQLHandle.multi_delete(), as in the
sketch below.
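
A minimal sketch, assuming a composite primary key whose shard key is a hypothetical field shard_id (verify the exact FieldRange and MultiDeleteRequest signatures in the Python API reference):

from borneo import FieldRange, MultiDeleteRequest

# Delete all rows with shard_id = 1 and id in the range [10, 20).
# The partial key must contain the complete shard key.
field_range = FieldRange('id').set_start(10, True).set_end(20, False)
request = MultiDeleteRequest().set_table_name('users')
request.set_key({'shard_id': 1})
request.set_range(field_range)
result = handle.multi_delete(request)
# the number of rows that were deleted
print(result.get_num_deletions())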

Go
Single rows are deleted using nosqldb.DeleteRequest using a primary key value:

key := &types.MapValue{}
key.Put("id", 1)
req := &nosqldb.DeleteRequest{
    TableName: "users",
    Key:       key,
}
res, err := client.Delete(req)

Delete operations can be conditional, based on a types.Version returned from a get
operation, as in the sketch below.
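
A minimal sketch of a conditional delete, assuming the MatchVersion field on nosqldb.DeleteRequest (verify against the Go SDK reference):

// Read the row first to obtain its current version.
getReq := &nosqldb.GetRequest{
    TableName: "users",
    Key:       key,
}
getRes, err := client.Get(getReq)
if err != nil {
    return
}
// The delete succeeds only if the row is unchanged since the get.
delReq := &nosqldb.DeleteRequest{
    TableName:    "users",
    Key:          key,
    MatchVersion: getRes.Version,
}
delRes, err := client.Delete(delReq)
if err == nil && delRes.Success {
    // the row was deleted
}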

Node.js
To delete a row, use the delete method. Pass to it the table name and primary key of the
row to delete. In addition, you can make the delete operation conditional by specifying a
Version of the row that was previously returned by get or put. You can pass it as the
matchVersion property of the opt argument: { matchVersion: my_version }.
Alternatively, you may use the deleteIfVersion method.

The delete and deleteIfVersion methods return a Promise of DeleteResult, which is a
plain JavaScript object containing the success status of the operation.

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;
const client = new NoSQLClient('config.json');

async function deleteRowsFromUsersTable() {
    const tableName = 'users';
    try {
        let result = await client.put(tableName, { id: 1, name: 'John' });

        // Unconditional delete, should succeed
        result = await client.delete(tableName, { id: 1 });
        // Expected output: delete succeeded
        console.log('delete ' + (result.success ? 'succeeded' : 'failed'));

        // Delete with non-existent primary key, will fail
        result = await client.delete(tableName, { id: 2 });
        // Expected output: delete failed
        console.log('delete ' + (result.success ? 'succeeded' : 'failed'));

        // Re-insert the row
        result = await client.put(tableName, { id: 1, name: 'John' });
        let version = result.version;

        // Will succeed because the version matches the existing row
        result = await client.deleteIfVersion(tableName, { id: 1 }, version);
        // Expected output: deleteIfVersion succeeded
        console.log('deleteIfVersion ' +
            (result.success ? 'succeeded' : 'failed'));

        // Re-insert the row
        result = await client.put(tableName, { id: 1, name: 'John' });

        // Will fail because the last put has changed the row version, so
        // the old version no longer matches. The result will also contain
        // the existing row and its version because we specified
        // returnExisting in the opt argument.
        result = await client.deleteIfVersion(tableName, { id: 1 }, version,
            { returnExisting: true });
        // Expected output: deleteIfVersion failed
        console.log('deleteIfVersion ' +
            (result.success ? 'succeeded' : 'failed'));
        // Expected output: { id: 1, name: 'John' }
        console.log(result.existingRow);
    } catch(error) {
        // handle errors
    }
}

Note that, similar to put operations, success is false only if you try to delete a row
with a non-existent primary key, or because of a version mismatch when a matching version
was specified. Failure for any other reason results in an error. You can delete multiple
rows having the same shard key in a single atomic operation using the deleteRange method.
This method deletes a set of rows based on a partial primary key (which must be a shard
key or its superset) and an optional FieldRange, which specifies a range of values of one
of the other primary key fields (not included in the partial key), as in the sketch below.
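
A minimal sketch, assuming a composite primary key with a hypothetical shard key field shardId (verify the fieldRange option shape and the result's deletedCount property in the SDK reference):

// Inside an async function: delete all rows with shardId = 1 and id
// from 10 (inclusive) up to 20 (exclusive) in one atomic operation.
const result = await client.deleteRange('users',
    { shardId: 1 },
    {
        fieldRange: {
            fieldName: 'id',
            startWith: 10,
            endBefore: 20
        }
    });
console.log('Deleted ' + result.deletedCount + ' rows');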

C#
To delete a row, use the DeleteAsync method. Pass to it the table name and primary key of
the row to delete. This method takes the primary key value as MapValue. The field names
should be the same as the table primary key column names. You may also pass options as
DeleteOptions. In addition, you can make the delete operation conditional by specifying a
RowVersion of the row that was previously returned by GetAsync or PutAsync. Use the
DeleteIfVersionAsync method, which takes the row version to match. Alternatively, you may
use the DeleteAsync method and pass the version as the MatchVersion property of
DeleteOptions.

var client = new NoSQLClient("config.json");

var tableName = "users";
try
{
    var row = new MapValue
    {
        ["id"] = 1,
        ["name"] = "John"
    };

    var putResult = await client.PutAsync(tableName, row);
    Console.WriteLine("Put {0}.",
        putResult.Success ? "succeeded" : "failed");

    var primaryKey = new MapValue
    {
        ["id"] = 1
    };

    // Unconditional delete, should succeed.
    var deleteResult = await client.DeleteAsync(tableName, primaryKey);
    // Expected output: Delete succeeded.
    Console.WriteLine("Delete {0}.",
        deleteResult.Success ? "succeeded" : "failed");

    // Delete with non-existent primary key, should fail.
    deleteResult = await client.DeleteAsync(tableName,
        new MapValue
        {
            ["id"] = 200
        });
    // Expected output: Delete failed.
    Console.WriteLine("Delete {0}.",
        deleteResult.Success ? "succeeded" : "failed");

    // Re-insert the row and get the new row version.
    putResult = await client.PutAsync(tableName, row);
    var version = putResult.Version;

    // Delete should succeed because the version matches the existing row.
    deleteResult = await client.DeleteIfVersionAsync(tableName,
        primaryKey, version);
    // Expected output: DeleteIfVersion succeeded.
    Console.WriteLine("DeleteIfVersion {0}.",
        deleteResult.Success ? "succeeded" : "failed");

    // Re-insert the row.
    putResult = await client.PutAsync(tableName, row);

    // This delete should fail because the last put operation has
    // changed the row version, so the old version no longer matches.
    // Pass the version as MatchVersion in DeleteOptions and request
    // the existing row with ReturnExisting.
    deleteResult = await client.DeleteAsync(tableName, primaryKey,
        new DeleteOptions
        {
            MatchVersion = version,
            ReturnExisting = true
        });
    // Expected output: Delete failed.
    Console.WriteLine("Delete {0}.",
        deleteResult.Success ? "succeeded" : "failed");
    // Expected output: { "id": 1, "name": "John" }
    Console.WriteLine(deleteResult.ExistingRow);
}
catch(Exception ex) {
    // handle exceptions
}

Note that the Success property of the result only indicates whether the row to delete was
found and, for a conditional Delete, whether the provided version was matched. If the
Delete operation fails for any other reason, an exception will be thrown. You can delete
multiple rows having the same shard key in a single atomic operation using the
DeleteRangeAsync method. This method deletes a set of rows based on a partial primary key
(which must include a shard key) and an optional FieldRange, which specifies a range of
values of one of the other primary key fields (not included in the partial key), as in
the sketch below.
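
A minimal sketch, assuming a composite primary key with a hypothetical shard key column shardId (the FieldRange property names, the DeleteRangeAsync overload, and the result's DeletedCount property should all be verified against the .NET SDK reference):

// Delete all rows with shardId = 1 and id in the range [10, 20)
// in a single atomic operation.
var partialPrimaryKey = new MapValue
{
    ["shardId"] = 1
};
var result = await client.DeleteRangeAsync(tableName, partialPrimaryKey,
    new FieldRange("id")
    {
        StartsWith = 10,
        EndsBefore = 20
    });
Console.WriteLine("Deleted {0} row(s).", result.DeletedCount);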

Spring Data
Use one of these methods to delete rows from your tables: the NosqlRepository methods
deleteById(), delete(), deleteAll(Iterable<? extends T> entities), and deleteAll(), or
the NosqlTemplate methods delete(), deleteAll(), deleteById(), and deleteInShard(). For
details, see SDK for Spring Data API Reference.

Note:
First, create the AppConfig class that extends AbstractNosqlConfiguration class
to provide the connection details of the Oracle NoSQL Database. For more details,
see Obtaining a NoSQL connection.

In this section, you use the NosqlRepository deleteAll() method to delete the rows from
your table.
Create the UsersRepository interface. This interface extends the NosqlRepository interface
and provides the entity class and the data type of the primary key in that class as
parameterized types to the NosqlRepository interface. The NosqlRepository interface
provides methods that are used to retrieve data from the database.

import com.oracle.nosql.spring.data.repository.NosqlRepository;

/* The Users is the entity class and Long is the data type of the primary
key in the Users class.
This interface provides methods that return iterable instances of the
Users class. */

public interface UsersRepository extends NosqlRepository<Users, Long> {


Iterable<Users> findAll();
}

In the application, you use the deleteAll() method to delete the existing rows from the table.

@Autowired
private UsersRepository repo;

/* Delete all the existing rows if any, from the Users table.*/
repo.deleteAll();


Dropping Tables and Indexes


Learn how to delete a table or index that you have created in Oracle NoSQL Database
Cloud Service.
To drop a table in Oracle NoSQL Database Cloud Service, you must have the
NOSQL_TABLE_DROP permission. See Details for Verb + Resource-Type Combinations to
learn about different permissions.
To drop a table or index, use the DROP TABLE or DROP INDEX DDL statements. For
example:

/* Drop the table named users */


DROP TABLE users

/* Drop the index called nameIndex on the table users */


DROP INDEX IF EXISTS nameIndex ON users

• Java

• Python

• Go

• Node.js

• C#

• Spring Data

Java
The following example assumes that the default compartment is specified in
NoSQLHandleConfig while obtaining the NoSQL handle. See Obtaining a NoSQL
Handle. To explore other options of specifying a compartment for the NoSQL tables,
see About Compartments.
To drop a table using the TableRequests.setStatement method:

/* create the TableRequest to drop the users table */
TableRequest tableRequest = new TableRequest()
    .setStatement("drop table users");

/* start the operation, it is asynchronous */
TableResult tres = handle.tableRequest(tableRequest);

/* wait for completion of the operation */
tres.waitForCompletion(handle,
    60000, /* wait for 60 sec */
    1000); /* delay in ms for poll */


Python
The following example drops the table users.

from borneo import TableRequest


# the drop statement
statement = 'drop table users'
request = TableRequest().set_statement(statement)
# perform the operation, wait for 40 seconds, polling every 3 seconds
result = handle.do_table_request(request, 40000, 3000)

Go
The following example drops the given table.

// Drop the table
dropReq := &nosqldb.TableRequest{
    Statement: "DROP TABLE IF EXISTS " + tableName,
}
tableRes, err = client.DoTableRequestAndWait(dropReq, 60*time.Second, time.Second)
if err != nil {
    fmt.Printf("failed to drop table: %v\n", err)
    return
}
fmt.Println("Dropped table " + tableName)

Node.js
The following example drops the given table and index.

const NoSQLClient = require('oracle-nosqldb').NoSQLClient;


const TableState = require('oracle-nosqldb').TableState;
.....
const client = new NoSQLClient('config.json');

async function dropNameIndexUsersTable() {


try {
let result = await client.tableDDL('DROP INDEX nameIndex ON users');
// Before using the table again, wait for the operation completion
// (when the table state changes from UPDATING to ACTIVE)
await client.forCompletion(result);
console.log('Index dropped');
} catch(error) {
//handle errors
}
}

async function dropTableUsers() {


try {
// Here we are waiting until the drop table operation is completed
// in the underlying store
let result = await client.tableDDL('DROP TABLE users', {
completion: true
});
console.log('Table dropped');


} catch(error) {
//handle errors
}
}

C#
To drop tables, use ExecuteTableDDLAsync and
ExecuteTableDDLWithCompletionAsync.

var client = new NoSQLClient("config.json");


try {
// Drop index "nameIndex" on table "users".
var result = await client.ExecuteTableDDLAsync(
"DROP INDEX nameIndex ON users");
// The following may print: Table state is Updating.
Console.WriteLine("Table state is {0}", result.TableState);
await result.WaitForCompletionAsync();
// Expected output: Table state is Active.
Console.WriteLine("Table state is {0}.", result.TableState);
// Drop table "TestTable".
result = await client.ExecuteTableDDLWithCompletionAsync(
"DROP TABLE TestTable");
// Expected output: Table state is Dropped.
Console.WriteLine("Table state is {0}.", result.TableState);
}
catch(Exception ex) {
// handle exceptions
}

Spring Data
To drop tables and indexes, use the NosqlTemplate.runTableRequest() or
NosqlTemplate.dropTableIfExists() methods. For details, see SDK for Spring Data
API Reference.
Create the AppConfig class that extends AbstractNosqlConfiguration class to
provide the connection details of the database. For more details, see Obtaining a
NoSQL connection.
In the application, you instantiate the NosqlTemplate class by providing the
NosqlTemplate create(NosqlDbConfig nosqlDBConfig) method with the instance of
the AppConfig class. You then drop the table using the
NosqlTemplate.dropTableIfExists() method. The
NosqlTemplate.dropTableIfExists() method drops the table and returns true if the
result indicates a change of the table's state to DROPPED or DROPPING.

import com.oracle.nosql.spring.data.core.NosqlTemplate;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;


/* Drop the Users table.*/

try {
    AppConfig config = new AppConfig();
    NosqlTemplate tabledrop = NosqlTemplate.create(config.nosqlDbConfig());
    Boolean result = tabledrop.dropTableIfExists("Users");
    if (result == true) {
        System.out.println("Table dropped successfully");
    } else {
        System.out.println("Failed to drop table");
    }
} catch (Exception e) {
    System.out.println("Exception dropping table: " + e);
}

Using console to manage tables


• Modifying Table Data Using Console
• Managing Table Data Using Console
• Managing Tables and Indexes Using Console

Modifying Table Data Using Console


Learn how to update and delete Oracle NoSQL Database Cloud Service table data using
Console.
This article has the following topics:

Updating Table Data


Learn how to update data in Oracle NoSQL Database Cloud Service tables from the NoSQL
console.
To update table data:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. In the Table Details page, select the Explore Data tab under Resources.
4. By default, the query text is populated with a SQL query that will retrieve all the records
from the table. You can modify this query with any valid SQL for Oracle NoSQL
statement. You may get an error that your statement is Incomplete or faulty. See
Debugging SQL statement errors in the OCI console to learn about possible errors in the
OCI console and how to fix them. See Developers Guide for SQL query examples.


5. Click the action menu corresponding to the row you wish to update, and select
Update Row.
6. Modify the values in Simple Input or Advanced JSON Input Updation Mode.
7. Click Update Row.
To view help for the current page, click the help link at the top of the page.

Deleting Table Data


Learn how to delete data in Oracle NoSQL Database Cloud Service tables from the
NoSQL console.
To delete table data:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. In the Table Details page, select the Explore Data tab under Resources.
4. By default, the query text is populated with a SQL query that will retrieve all the
records from the table. You can modify this query with any valid SQL for Oracle
NoSQL statement. You may get an error that your statement is Incomplete or
faulty. See Debugging SQL statement errors in the OCI console to learn about
possible errors in the OCI console and how to fix them. See Developers Guide for
SQL query examples.
5. Click the action menu corresponding to the row you wish to delete, and select
Delete.
The Delete Row confirmation dialog opens.
6. Click Delete.
The row is deleted.

Managing Table Data Using Console


Learn how to view and download Oracle NoSQL Database Cloud Service table data
using Console.
This article has the following topics:

Viewing Table Data


Learn how to view data in Oracle NoSQL Database Cloud Service tables from the
NoSQL console.
To view table data:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .


2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. In the Table Details page, select the Explore Data tab under Resources.
4. By default, the query text is populated with a SQL query that will retrieve all the records
from the table. You can modify this query with any valid SQL for Oracle NoSQL
statement. You may get an error that your statement is Incomplete or faulty. See
Debugging SQL statement errors in the OCI console to learn about possible errors in the
OCI console and how to fix them. See Developers Guide for SQL query examples.
5. Click Execute.
The table data is displayed in the Records section.
6. To view the query execution plan of the SQL query that was executed, click Show query
execution plan. The detailed query execution plan is displayed in a new window.

Downloading Table Data


Learn how to download data in Oracle NoSQL Database Cloud Service tables from the
NoSQL console.
To download table data:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. In the Table Details page, select the Explore Data tab under Resources.
4. By default, the query text is populated with a SQL query that will retrieve all the records
from the table. You can modify this query with any valid SQL for Oracle NoSQL
statement. You may get an error that your statement is Incomplete or faulty. See
Debugging SQL statement errors in the OCI console to learn about possible errors in the
OCI console and how to fix them. See Developers Guide for SQL query examples.
5. Click the action menu corresponding to the row you wish to download, and select
Download JSON.
The row downloads in JSON format.

Managing Tables and Indexes Using Console


Learn how to manage Oracle NoSQL Database Cloud Service tables and indexes from the
Console.
This article has the following topics:


Viewing Tables
You can view Oracle NoSQL Database Cloud Service tables from the NoSQL console.
To view tables:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. You can view all the tables in your tenancy from the NoSQL console.

Viewing Indexes
You can view Oracle NoSQL Database Cloud Service the list of indexes created for a
NoSQL table from the NoSQL console.
To view indexes:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. In the Table Details page, select the Indexes tab under Resources.
You will see a list of all the indexes added to the table.

Upload data into tables


You can insert all of a table's data using a single upload action.
To upload data:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console.
2. The NoSQL console lists all the tables in the tenancy. To upload data into a table,
click the table name. The Table Details page opens up.
3. On the Table Details page, click Upload Data.
4. A new page opens. You can either drop the file into the given textbox or upload the
file from your local storage. Note: The file to be uploaded must be in JSON format.

Viewing Table Details


Learn how to view Oracle NoSQL Database Cloud Service table details from the
NoSQL console.
To view table details:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .


2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. From the Table Details page, you can view all table columns, indexes, rows, and metrics.
4. A column in the list (Child tables) shows how many child tables are owned by the
specified table.

5. The list of child tables for a given parent table can be viewed by clicking the "Child tables"
link under Resources on the parent table's details page.

Viewing Table DDL


You can view the DDL statement used to create a table from the Table Details page.
To view table DDL:
1. In the Table Details page, click View Table DDL.
The View Table DDL window displays the table DDL statement.
2. Now, you can select and copy the table DDL statement from the window. Click OK to
close the window.


Viewing Table DDL


Learn how to view Oracle NoSQL Database Cloud Service table DDL from the NoSQL
console.
To view table DDL:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. In the Table Details page, click View Table DDL.
The View Table DDL window displays the table DDL statement.
4. Now, you can select and copy the table DDL statement from the window. Click OK
to close the window.

Editing Tables
You can update reserved capacity (if the table is not an Always Free NoSQL table) and
Time to Live (TTL) values for your Oracle NoSQL Database Cloud Service tables from
the NoSQL console.
To edit tables:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. The value of Time to Live (TTL) can be updated.
• To update the value of Time to Live (TTL), click the Edit link next to the Time
to live (Days) field.


• You can also update the value of Time to Live (TTL) by clicking the action menu
corresponding to the table name you wish to change and select Edit default time to
live.

• If the table is a child table, only the Time to live (TTL) value can be updated. To
update the value of Time to Live (TTL), click the Edit link next to the Time to live
(Days) field.


Note:
You cannot edit the reserved capacity of a child table directly. Only
the corresponding values of the parent table can be edited.

• Table Time to Live (Days): (optional) Specify the default expiration time for
the rows in the table. After this time, the rows expire automatically, and are no
longer available. The default value is zero, indicating no expiration time.

Note:
Updating Table Time to Live (TTL) will not change the TTL value of
any existing data in the table. The new TTL value will only apply to
those rows that are added to the table after this value is modified
and to the rows for which no overriding row-specific value has been
supplied.

4. If your table is not an Always Free NoSQL table, then the reserved capacity and
the usage model can be modified.
• Under More Actions, click Edit reserved capacity.


• You can also update the Reserved Capacity by clicking the action menu
corresponding to the table name you wish to change and select Edit reserved
capacity.

Modify the following values for the table:


• Read Capacity (ReadUnits): Enter the number of read units. See Estimating
Capacity to learn about read units.
• Write Capacity (WriteUnits): Enter the number of write units. See Estimating
Capacity to learn about write units.
• Disk Storage (GB): Specify the disk space in gigabytes (GB) to be used by the table.
See Estimating Capacity to learn about storage capacity.

You can also switch the capacity mode from Provisioned Capacity to On Demand
Capacity, or vice versa. If you have provisioned more units than On Demand capacity
can offer and then switch from Provisioned capacity to On Demand capacity, the
capacity of the table will be reduced. Take this reduction in capacity into
consideration before switching in this scenario.
5. (Optional) To dismiss the changes, click Cancel.
To view help for the current page, click the help link at the top of the page.


Altering Tables
Learn how to alter Oracle NoSQL Database Cloud Service tables by adding or deleting
columns, in simple or advanced mode, using the NoSQL console.
The NoSQL console lets you alter the Oracle NoSQL Database Cloud Service tables
in two modes:
1. Simple Input Mode: You can use this mode to alter the NoSQL Database Cloud
Service table declaratively, that is, without writing a DDL statement.
2. Advanced DDL Input Mode: You can use this mode to alter the NoSQL Database
Cloud Service table using a DDL statement.

Moving Tables
Learn how to move Oracle NoSQL Database Cloud Service table to a different
compartment from the NoSQL console.
To move a table:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. In the Table Details page, click Move Table.
4. Alternatively, click the action menu corresponding to the table name and select
Move table.
5. In the Move Resource to a Different Compartment window, modify the following
values for the table:
• Choose New Compartment: Select the new compartment from the select list.
6. Click Move table.
7. (Optional) To dismiss the changes, click the Cancel link on the top right corner.
To view help for the current page, click the help link at the top of the page.

Note:
You cannot move a child table to another compartment. If the parent table is
moved to a new compartment, all the descendant tables within the hierarchy
will be automatically moved to the target compartment in a single operation.


Viewing Table Metrics


Learn how to view Oracle NoSQL Database Cloud Service table metrics from the NoSQL
console.
To view table metrics:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do either of
the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View Details.
The Table Details page opens up.
3. In the Table Details page, select the Metrics tab under Resources.
Table metrics such as Read Units, Write Units, Storage GB, Read Throttle Count, Write
Throttle Count, and Storage Throttle Count show up. You can filter the metrics by date,
change interval, and statistic value.
4. For each of the metrics displayed on this page, you can perform the following actions:
• View Query in Metrics Explorer: This page lets you write and edit queries in
Monitoring Query Language (MQL), using metrics from either your application or an
Oracle Cloud Infrastructure service. If you're not familiar with MQL, see Monitoring
Query Language (MQL) Reference. To learn more about this page, see Metrics
Explorer.
• Copy Chart URL: Click this option to copy the default metrics chart URL for any
future reference.
• Copy Query (MQL): Click this option to copy the MQL query used to create the
default metrics chart. If you're not familiar with MQL, see Monitoring Query Language
(MQL) Reference.
• Create an Alarm on this Query: Click this option to create alarms to monitor your
cloud resources. To learn about alarms, see Managing Alarms.

Deleting Tables
Learn how to delete Oracle NoSQL Database Cloud Service tables from the NoSQL console.
To delete tables:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the Service
from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To delete the table, do either of the
following:
• Click the table name. In the Table Details page, click the Delete button, or
• Click the action menu corresponding to the table name you wish to delete and select
Delete.
• If a table has child tables, the child tables must be deleted before deleting the
parent table.
The Delete Table confirmation dialog opens.


3. Click Delete.
The table is deleted.

Deleting Indexes
Learn how to delete Oracle NoSQL Database Cloud Service indexes from the NoSQL
console.
To delete indexes:
1. Access the NoSQL console from the Infrastructure Console. See Accessing the
Service from the Infrastructure Console .
2. The NoSQL console lists all the tables in the tenancy. To view table details, do
either of the following:
• Click the table name, or
• Click the action menu corresponding to the table name and select View
Details.
The Table Details page opens up.
3. In the Table Details page, select the Indexes tab under Resources.
You will see a list of all the indexes added to the table.
4. Click the action menu corresponding to the index you wish to delete, and select
Delete.
The Delete Index confirmation dialog opens.
5. Click Delete.
The index is deleted.

Monitor
• Monitoring Oracle NoSQL Database Cloud Service

Monitoring Oracle NoSQL Database Cloud Service


The Oracle Cloud Infrastructure Monitoring service enables you to actively and
passively monitor your cloud resources using the Metrics and Alarms features. The
Monitoring service uses metrics to monitor resources and alarms to notify you when
these metrics meet alarm-specified triggers.
A metric is a measurement related to the health, capacity, or performance of a given
resource. An alarm is a trigger rule and query. Alarms passively monitor your cloud
resources by using metrics. You can configure notification settings when creating an
alarm.
Metrics are emitted to the Monitoring service as raw data points (a timestamp-value
pair for a specified metric) along with dimensions (a resource identifier provided in the
metric definition) and metadata. The Monitoring service publishes alarm messages to
configured destinations managed by the Notifications service.
When you query a metric, the Monitoring service returns aggregated data according to
the specified parameters. You can specify a range (such as the last 24 hours), statistic,
and interval. A statistic is the aggregation function applied to the raw data points; the
SUM aggregation function is an example of a statistic. An interval is the time window
used to convert a given set of raw data points, for example, 5 minutes.
The Console displays one monitoring chart per metric for selected resources. The
aggregated data in each chart reflects your selected statistic and interval. API requests can
optionally filter by dimension and specify a resolution. API responses include the metric name
along with its source compartment and metric namespace (which indicates the resource, service, or
application that emits a metric). The namespace is provided in the metric definition. For
example, the CpuUtilization metric definition emitted by Oracle Cloud lists the
oci_computeagent metric namespace as the source of the metric.
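
For example, a minimal MQL sketch that sums this service's ReadUnits metric (namespace oci_nosql) over 5-minute intervals for a single table might look like the following; see the Monitoring Query Language (MQL) Reference for the full syntax:

ReadUnits[5m]{tableName = "users"}.sum()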

Metric and alarm data is accessible via the Console, CLI, and API. For more information
about OCI monitoring service concepts, see Monitoring Concepts.
This article has the following topics:

Oracle NoSQL Database Cloud Service Metrics


Oracle NoSQL Database Cloud Service emits metrics using the metric namespace
oci_nosql.

Metrics for Oracle NoSQL Database Cloud Service include the following dimensions:
• RESOURCEID
The OCID of the NoSQL Table in the Oracle NoSQL Database Cloud Service.

Note:
OCID is an Oracle-assigned unique ID that is included as part of the resource's
information in both the console and API.

• TABLENAME
The name of the NoSQL table in the Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Cloud Service sends metrics to the Oracle Cloud Infrastructure
Monitoring Service. You can view these metrics or create alarms on them using the Oracle
Cloud Infrastructure Console, SDKs, or CLI.

Table 1-19 Oracle NoSQL Database Cloud Service Metrics

• ReadUnits (Metric Display Name: Read Units; Unit: Units; Dimensions: resourceId, tableName): The number of read units consumed during this period.
• WriteUnits (Metric Display Name: Write Units; Unit: Units; Dimensions: resourceId, tableName): The number of write units consumed during this period.
• StorageGB (Metric Display Name: Storage Size; Unit: GB; Dimensions: resourceId, tableName): The maximum amount of storage consumed by the table. As this information is generated hourly, you may see values that are out of date in between the refresh points.
• ReadThrottleCount (Metric Display Name: Read Throttle; Unit: Count; Dimensions: resourceId, tableName): The number of read throttling exceptions on this table in the time period.
• WriteThrottleCount (Metric Display Name: Write Throttle; Unit: Count; Dimensions: resourceId, tableName): The number of write throttling exceptions on this table in the time period.
• StorageThrottleCount (Metric Display Name: Storage Throttle; Unit: Count; Dimensions: resourceId, tableName): The number of storage throttling exceptions on this table in the time period.
• MaxShardSizeUsagePercent (Metric Display Name: Maximum Shard Size Usage; Unit: Percentage; Dimensions: resourceId, tableName): The ratio of the space used in the shard over the total space allocated to the shard. This is specific to a table and will be the highest value across all shards.

Additionally, you can publish custom metrics as per your requirement. For example,
you can set up metrics to capture application transaction latency (time spent per
completed transaction) and then post that data to the Monitoring service.
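For illustration, a hedged sketch of posting such a custom metric with the OCI CLI; the namespace, metric name, dimension, and file name below are hypothetical, and metric data must be posted to the Monitoring ingestion endpoint for your region:

oci monitoring metric-data post \
    --endpoint https://telemetry-ingestion.us-ashburn-1.oraclecloud.com \
    --metric-data file://txn_latency.json

where txn_latency.json could contain a raw data point such as:

[
  {
    "namespace": "custom_app_metrics",
    "compartmentId": "<compartment_OCID>",
    "name": "TransactionLatency",
    "dimensions": { "appName": "orders" },
    "datapoints": [
      { "timestamp": "2023-05-01T10:00:00Z", "value": 42.0 }
    ]
  }
]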

NDCS Metrics Explained


Oracle NoSQL Database Cloud Service sends metrics to the Oracle Cloud
Infrastructure Monitoring Service.
Read Units:
The number of read units consumed during this period. It is the throughput for up to 1 KB of data per second for an eventually consistent read operation. If your data is greater than 1 KB, multiple read units are required to read it. In the Read Units metric chart for a table, the metric is taken every minute and the chart is plotted at an interval of 5 minutes by default.

Write Units:
The number of write units consumed during this period. It is the throughput for up to 1 KB of data per second for a write operation. Write operations are triggered during insert, update, and delete operations. If your data is greater than 1 KB, multiple write units are required to write it. In the Write Units metric chart for a table, the metric is taken every minute and the chart is plotted at an interval of 5 minutes by default.

StorageGB:
The maximum amount of storage consumed by the table. In the Storage metric chart for a table, the metric is taken every minute and the chart is plotted at an interval of 5 minutes by default.


Note:
It takes one hour after table creation to seed the beginning of storage size
tracking. After the initial hour, storage statistics are updated every 5 minutes.

Note:
The storage GB metric is truncated. Therefore, storage usage of less than 1 GB is displayed as 0. The chart begins to display storage when usage is greater than 1 GB.

ReadThrottleCount:
The number of read throttling exceptions on the given table in the time period. A throttling exception usually indicates that the provisioned read throughput has been exceeded. If you see these frequently, consider increasing the Read Units on your table. In the Read Throttle Count metric chart for a table, the metric is taken every minute and the chart is plotted at an interval of 5 minutes by default.


WriteThrottleCount:
The number of write throttling exceptions on the given table in the time period. A throttling exception usually indicates that the provisioned write throughput has been exceeded. If you see these frequently, consider increasing the Write Units on your table. In the Write Throttle Count metric chart for a table, the metric is taken every minute and the chart is plotted at an interval of 5 minutes by default.

StorageThrottleCount:
The number of storage throttling exceptions on the given table in the time period. A throttling exception usually indicates that the provisioned storage capacity has been exceeded. If you see these frequently, consider increasing the storage capacity of your table. In the Storage Throttle Count metric chart for a table, the metric is taken every minute and the chart is plotted at an interval of 5 minutes by default.


MaxShardSizeUsagePercent:
The highest usage of space in a shard for a specific table, expressed as a percentage of the total space allocated to that shard.

Note:
Oracle NoSQL Database Cloud Service hashes keys to shards to provide distribution over a collection of storage nodes that provide storage for the tables. Although not directly visible to you, Oracle NoSQL Database Cloud Service tables are sharded and replicated for availability and performance. A shard key either matches the primary key exactly or is a subset of the primary key. All records sharing a shard key are co-located to achieve data locality.

When MaxShardSizeUsagePercent reaches 100, you can no longer perform a write operation on the table; you have to increase the storage capacity to write into the table again. This metric helps you determine whether a storage hotspot exists for your NoSQL table.
This scenario happens because of an imbalance in how the table data is stored across shards. An imbalance can occur when a majority of the table data is stored in a subset of the shards. The storage in a NoSQL database is sharded, and the shard key is part of the table definition. In hierarchical tables, the parent and child tables share the same shard key, so if you have a parent table with child tables, all of these records share the same shard key and are stored together. If a parent table has fewer children, it occupies less storage space in a single shard. Due to this imbalance, certain shards can contain much more data than others.
At a certain point, one shard will have the highest usage of space for a specific table, and the percentage used in that shard is the MaxShardSizeUsagePercent. In the MaxShardSizeUsagePercent metric chart for a table, the metric is taken every minute and the chart is plotted at an interval of 5 minutes by default.


In addition to viewing the chart for a metric, you can switch to the table view to check the value of a metric at a given point in time.


Monitoring the MaxShardSizeUsagePercent metric


Monitor this chart periodically to know whether MaxShardSizeUsagePercent is approaching its limit. More proactively, you can create an alarm for this metric that triggers when it reaches a particular value, for example 90 percent.
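Using the alarm query syntax described later in this article, such a threshold condition could be written as follows (demoKeyVal is the sample table name used in the query examples below):

MaxShardSizeUsagePercent[1m]{tableName = "demoKeyVal"}.max() > 90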


OCI alarms use the OCI Notifications service to send notifications, usually through a configured email address. When MaxShardSizeUsagePercent reaches 90 percent, an email notification is sent.

See Managing Alarms and Notifications for more details.
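For illustration, a hedged sketch of creating such an alarm with the OCI CLI; the display name, placeholders, and threshold are examples, and other options are available:

oci monitoring alarm create \
    --display-name "nosql-max-shard-usage" \
    --compartment-id <compartment_OCID> \
    --metric-compartment-id <compartment_OCID> \
    --namespace oci_nosql \
    --query-text 'MaxShardSizeUsagePercent[1m].max() > 90' \
    --severity CRITICAL \
    --destinations '["<notification_topic_OCID>"]' \
    --is-enabled true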


When there is an imbalance in the way your table data is distributed across shards, you cannot utilize the storage capacity allocated to your table to its maximum. In this scenario, MaxShardSizeUsagePercent reaches 100 even without utilizing the entire storage allocated to the table, and you are then required to add more storage to continue writing to your table. This scenario can be avoided by following some guidelines while designing your table.
• Decide on the correct shard key for your table. Attributes with high cardinality are a good choice for shard keys; see the sketch after this list.


• Limit the number of child tables to avoid a potential shard storage imbalance
situation.
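As an illustration of the first guideline, a minimal DDL sketch that declares an explicit shard key (the table and column names are hypothetical). Records are distributed across shards by userId, while orderId completes the primary key:

CREATE TABLE IF NOT EXISTS userOrders (
    userId LONG,
    orderId LONG,
    orderDetails JSON,
    PRIMARY KEY (SHARD(userId), orderId))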

Viewing or Listing Oracle NoSQL Database Cloud Service Metrics


You can view the metrics available for the Oracle NoSQL Database Cloud Service from the Console. Additionally, you can get the list of metrics available for the Oracle NoSQL Database Cloud Service using OCI CLI commands.

• Viewing NoSQL metrics from Console

• Listing NoSQL metrics from OCI CLI Command Line

Viewing NoSQL metrics from Console


1. Open the navigation menu and click Observability & Management. Under
Monitoring, click Service Metrics.
2. Select the Compartment and Metric namespace (oci_nosql).

Listing NoSQL metrics from OCI CLI Command Line


From the Cloud Shell, run the following command. It returns metric definitions that match the criteria specified in the request. The compartment OCID is required. For more information about the options available with the list command, see List Metrics.
oci monitoring metric list --compartment-id <Compartment_OCID>
--namespace oci_nosql

For example:

oci monitoring metric list --compartment-id ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya --namespace oci_nosql

Example response:

{
    "data": [
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "ReadThrottleCount",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "ReadUnits",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "StorageGB",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "StorageThrottleCount",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "WriteThrottleCount",
            "namespace": "oci_nosql",
            "resource-group": null
        },
        {
            "compartment-id": "ocid1.compartment.oc1..aaaaaaaawrmvqjzoegxbsixp5k3b5554vlv2kxukobw3drjho3f7nf5ca3ya",
            "dimensions": {
                "resourceId": "ocid1_nosqltable_oc1_phx_amaaaaaau7x7rfyasvdkoclhgryulgzox3nvlxb2bqtlxxsrvrc4zxr6lo4a",
                "tableName": "demo"
            },
            "name": "WriteUnits",
            "namespace": "oci_nosql",
            "resource-group": null
        }
    ]
}

How to Collect Oracle NoSQL Database Cloud Service Metrics?


You can build metric queries for collecting specific sets of metrics (aggregated data). A
metric query contains the Monitoring Query Language (MQL) expression to evaluate
for returning aggregated data. The query must specify a metric, statistic, and interval.
You can use metric queries to actively and passively monitor your cloud resources.
Actively monitor with metric queries that you generate spontaneously, on-demand. In
the Console, update a chart to show data from multiple queries. Store queries you
want to reuse. Passively monitor with alarms that add a condition, or trigger rule, to a
metric query.
Metric query syntax (boldface elements are required):

metric[interval]{dimensionname=dimensionvalue}.groupingfunction.statistic

Threshold alarm query syntax (boldface elements are required):

metric[interval]{dimensionname=dimensionvalue}.groupingfunction.statistic alarmoperator alarmvalue

For supported parameter values, see Monitoring Query Language (MQL) Reference.

Example Queries
Simple metric query
Sum of Storage Throttle counts for all the tables in a compartment at a one-minute
interval.
The number of lines displayed in the metric chart (Console): 1 per table.

StorageThrottleCount[1m].sum()


Filtered metric query


Sum of Storage Throttle counts in a compartment at a one-minute interval, filtered to a single
table.
The number of lines displayed in the metric chart (Console): 1.

StorageThrottleCount[1m]{tableName = "demoKeyVal"}.sum()

Aggregated metric query


Aggregated average of read operation at a sixty-minute interval, filtered to a compartment,
aggregated for the average.
The number of lines displayed in the metric chart (Console): 1 per table.

ReadUnits[60m]{compartmentId="ocid1.compartment.oc1.phx..exampleuniqueID"}.grouping().mean()

Group-aggregated metric query


Aggregated average of Read Throttle Count by read unit at a sixty-minute interval, filtered to
a single table in a compartment.
The number of lines displayed in the metric chart (Console): 1 per read unit.

ReadThrottleCount[60m]{tableName = "demoKeyVal"}.groupBy(ReadUnits).mean()
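Any of these queries can also be run from the CLI through the Monitoring service. A minimal sketch, assuming a compartment OCID placeholder and an illustrative six-hour window:

oci monitoring metric-data summarize-metrics-data \
    --compartment-id <compartment_OCID> \
    --namespace oci_nosql \
    --query-text 'StorageThrottleCount[1m].sum()' \
    --start-time 2023-05-01T00:00:00Z \
    --end-time 2023-05-01T06:00:00Z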

Secure
• About Oracle NoSQL Database Cloud Service Security Model
• Authorization to access OCI resources
• Managing Access to Oracle NoSQL Database Cloud Service Tables

About Oracle NoSQL Database Cloud Service Security Model


Learn about the security model for Oracle NoSQL Database Cloud Service.

Policies
Oracle NoSQL Database Cloud Service uses the Oracle Cloud Infrastructure Identity and Access Management security model, which is built on policies. A policy is a document that specifies who can access which Oracle Cloud Infrastructure resources, including your company's NoSQL tables, and how they can access these resources. A policy allows a group to work in certain ways with specific types of resources, such as NoSQL tables in a particular compartment.
To govern access to your tables, your company will have at least one policy. Each policy consists of one or more policy statements that follow this basic syntax:

Allow group <group_name> to <verb> <resource-type> in compartment <compartment_name>


To learn how policies work, see Overview of Policies in Oracle Cloud Infrastructure
Documentation.

Groups
In Oracle Cloud Infrastructure Identity and Access Management, you organize users into groups that usually share the same type of access to a particular set of NoSQL tables or compartments.
You can grant access to NoSQL tables at the group and compartment level by writing a policy that gives a group a specific type of access within a particular compartment, or to the tenancy itself. If you give a group access to the tenancy, the group automatically gets the same type of access to all the compartments inside the tenancy. For example, after you create a table in the compartment ProjectA, you must write a policy that grants the appropriate groups access to manage or use the tables. Otherwise, the tables are not even visible to groups that don't have access.
For example, to allow the Developers group to manage all the NoSQL resources, you can create the following policy:

allow group Developers to manage nosql-family in compartment ProjectA

Verbs
A verb specifies the type of access granted by the policy. For example, inspect nosql-tables lets you list the NoSQL tables. inspect, read, use, and manage are the verbs supported by Oracle NoSQL Database Cloud Service. See Verbs in Oracle Cloud Infrastructure Documentation.
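For example, to grant only the least-privileged inspect verb (the group name Auditors is illustrative):

allow group Auditors to inspect nosql-tables in tenancy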

Resource-types
Resources are the cloud objects that your company's employees create and use when
interacting with the Oracle Cloud Infrastructure (OCI). Oracle defines resource-types
you can use in policies. nosql-tables, nosql-rows, and nosql-indexes are three
individual resource-types supported by NoSQL Database Cloud Service.
By specifying a resource-type in a policy, you give access permissions against that resource type alone. For example, to grant the viewers group read permissions on the rows of all NoSQL tables in the tenancy, you can create a policy as:

allow group viewers to read nosql-rows in tenancy

To simplify writing policies, NoSQL Database Cloud Service also provides an aggregate resource-type called nosql-family. nosql-family includes nosql-tables, nosql-indexes, and nosql-rows, which are often managed together. For example, to grant the viewers group full access to NoSQL tables in the tenancy, you can write a policy as:

allow group viewers to manage nosql-family in tenancy

Compartments
A compartment is the fundamental component of Oracle Cloud Infrastructure. You can
organize the Oracle NoSQL Database Cloud Service resources within compartments.
Compartments are used to separate tables for measuring usage and billing, defining
access, and isolating the resources between different projects or business units.


Note:
Tenancy is the root compartment that contains all of your organization's Oracle
Cloud Infrastructure resources.

All the Oracle Cloud Infrastructure Identity and Access Management resources (users, groups, compartments, and policies) are global and available across all regions, but the master set of definitions resides in a single region, the home region. All changes to your IAM resources must be made in your home region. To learn more about the IAM components, see Overview of Oracle Cloud Infrastructure Identity and Access Management. The following note provides information about which version of the documentation you should read.

Note:
The way you manage users and groups for Oracle NoSQL Database Cloud Service
depends on whether or not your cloud account or tenancy is in the OCI region that
has been updated to use identity domains. Some OCI regions have been updated
to use identity domains. If you have a cloud account or tenancy in one of these OCI
regions, you can use the identity domains to manage the users who perform tasks
in Oracle Cloud Infrastructure. For more information on how to set up users and
groups for Oracle NoSQL Database Cloud Service, see About Setting Up Users,
Groups, and Policies .

Tip:
It's easy to determine whether or not your OCI region has been updated to use
Identity and Access Management (IAM) Identity Domains. For more information,
see Do You Have Access to Identity Domains?

Authorization to access OCI resources


The way you manage users and groups for Oracle NoSQL Database Cloud Service depends
on whether or not your cloud account or tenancy is in the OCI region that has been updated
to use identity domains. Some OCI regions have been updated to use identity domains. If you
have a cloud account or tenancy in one of these OCI regions, you can use the identity
domains to manage the users who perform tasks in Oracle Cloud Infrastructure.
It's easy to determine whether or not your OCI region has been updated to use Identity and
Access Management (IAM) Identity Domains. For more information, see Do You Have
Access to Identity Domains?
• If your OCI region offers identity domains to manage users and groups for Oracle Cloud
Infrastructure, see Setting Up Users, Groups, and Policies Using Identity Domains.
• If your OCI region does not offer identity domains to manage users and groups for Oracle
Cloud Infrastructure, see Setting Up Users, Groups, and Policies Using Identity and
Access Management.


Setting Up Users, Groups, and Policies Using Identity and Access Management
Oracle NoSQL Database Cloud Service uses Oracle Cloud Infrastructure Identity and
Access Management (IAM) to provide secure access to Oracle Cloud. Oracle Cloud
Infrastructure IAM enables you to create user accounts and give users permission to
inspect, read, use, or manage tables.
If you are authenticating as a User Principal (using an API signing key), see Setting Up Users, Groups, and Policies. Alternatively, if you are authenticating as an Instance Principal or Resource Principal, see Setting up Dynamic Group and Policies.

Setting Up Users, Groups, and Policies


1. Sign in to your Cloud Account as Cloud Account Administrator.
2. In Oracle Cloud Infrastructure Console, add one or more users.
• Open the navigation menu and click Identity & Security. Under Identity, click
Users.

• Click Create User.


• Enter details about the user, and click Create.
3. In Oracle Cloud Infrastructure Console, create an OCI group.
• Open the navigation menu and click Identity & Security. Under Identity, click
Groups.
• Click Create Group.


• Enter details about the group. For example, if you're creating a policy that gives users
permissions to fully manage Oracle NoSQL Database Cloud Service tables you might
name the group nosql_service_admin (or similar) and include a short description
such as "Users with permissions to set up and manage Oracle NoSQL Database
Cloud Service tables on Oracle Cloud Infrastructure" (or similar).
4. Create a policy that gives users belonging to an OCI group, specific access permissions
to Oracle NoSQL Database Cloud Service tables or compartments.
• Open the navigation menu and click Identity & Security. Under Identity, click
Policies.
• Select a compartment, and click Create Policy.
For details and examples, see Policies Reference and Typical Policy Statements to
Manage Tables .
If you're unfamiliar with how policies work, see How Policies Work.
5. To manage and use NoSQL tables via Oracle NoSQL Database Cloud Service SDKs, the user must set up the API keys (see the sketch after this procedure). See Authentication to connect to Oracle NoSQL Database.

Note:
Federated users can also manage and use Oracle NoSQL Database Cloud
Service tables. This requires the service administrator to set up the federation
in Oracle Cloud Infrastructure Identity and Access Management. See
Federating with Identity Providers.

Users belonging to any groups mentioned in the policy statement get their new
permission when they next sign in to the Console.
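As referenced in step 5, a minimal sketch of setting up API keys from the OCI CLI; the interactive command below walks you through creating a configuration file and, optionally, an API signing key pair whose public key you then upload for your user in the Console:

oci setup config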

Setting up Dynamic Group and Policies


Prior to making a call to an Oracle Cloud Infrastructure resource using either resource
principals or instance principals, an Oracle Cloud Infrastructure tenancy administrator must
create Oracle Cloud Infrastructure policies, dynamic groups, and rules that define the
resource principal or instance principal privileges.
• Sign in to your Cloud Account as Cloud Account Administrator.
• In Oracle Cloud Infrastructure Console, create a dynamic group.
– Open the navigation menu and click Identity & Security. Under Identity, click
Dynamic Groups.


– Click Create Dynamic Group and enter a Name, a Description, and a rule, or
use the Rule Builder to add a rule.
– Click Create.
Resources that meet the rule criteria are members of the dynamic group.
When you define a rule for a dynamic group, consider what resource is going
to be given access to other resources. Some examples of creating rules:
1. A matching rule for functions:

ALL {resource.type = 'fnfunc', resource.compartment.id = 'ocid1.compartment.oc1..aaaaaaaafml3tca3zcxyifmdff3aadp5uojimgx3cdnirgup6rhptxwnandq'}

This rule implies that any resource of type fnfunc in the given compartment (with the id specified above) is a member of the dynamic group.

Note:
See Resource Types for more information on different resource
types.

2. A rule when adding instances for Instance Principals:

ALL {instance.compartment.id = 'ocid1.compartment.oc1..aaaaaaaa4mlehopmvdluv2wjcdp4tnh2ypjz3nhhpahb4ss7yvxaa3be3diq'}


This rule implies that any instance with the compartment id specified above is a
member of the dynamic group.
3. A rule when using API Gateway with functions:

ALL {resource.type = 'ApiGateway', resource.compartment.id = 'ocid1.compartment.oc1..aaaaaaaafml3tca3zcxyifmdff3aadp5uojimgx3cdnirgup6rhptxwnandq'}

This rule implies that any resource of type ApiGateway in the given compartment (with the id specified above) is a member of the dynamic group.
4. A rule when using Container Instances:

ALL {resource.type = 'computecontainerinstance', resource.compartment.id = 'ocid1.compartment.oc1..aaaaaaaa4mlehopmvdluv2wjcdp4tnh2ypjz3nhhpahb4ss7yvxaa3be3diq'}

This rule implies that any resource type called computecontainerinstance in the
given compartment (with the id specified above) is a member of the dynamic
group.

Note:
Inheritance does not apply to Dynamic groups. While using IAM Access
policies, the policy of a parent compartment automatically applies to all child
compartments. This is not the case when you use Dynamic groups. You need to
list each compartment in the Dynamic group separately for the compartment to
qualify.
Example: A matching rule for functions for parent-child tables:

ALL {resource.type = 'fnfunc',
     ANY {resource.compartment.id = '<parent-compid>',
          resource.compartment.id = '<child-compid1>',
          resource.compartment.id = '<child-compid2>', ...}}

• Write policy statements for the dynamic group to enable access to Oracle Cloud
Infrastructure resources.
– In the Oracle Cloud Infrastructure console, click Identity and Security and click
Policies.
– To write policies for a dynamic group, click Create Policy, and enter a Name and a
Description.
– Use the Policy Builder to create a policy. The general syntax of defining a policy is
shown below:

Allow <subject> to <verb> <resource-type> in <location> where <conditions>


* Syntax of subject: One or more comma-separated groups by name or OCID.
* Verbs: Values are inspect, read, use, or manage.
* resource-type: An individual resource-type, a family resource-type (like nosql-family), or all-resources.
* compartment: A single compartment or compartment path by name or OCID.
Example: This policy allows the dynamic group nosql_application use access on the resource fnfunc in the compartment UATnosql.

allow dynamic-group nosql_application to use fnfunc in compartment UATnosql

Example: This policy allows the dynamic group nosql_application manage access on the family resource nosql-family in the compartment UATnosql.

allow dynamic-group nosql_application to manage nosql-family in compartment UATnosql
– Click Create. See Manage Policies for more information on policies.

Setting Up Users, Groups, and Policies Using Identity Domains


Oracle NoSQL Database Cloud Service uses Oracle Cloud Infrastructure Identity and
Access Management (IAM) Identity Domains to provide secure access to Oracle
Cloud. Oracle Cloud Infrastructure IAM Identity Domains enables you to create user
accounts and give users permission to inspect, read, use, or manage tables.
1. Sign in to your Cloud Account as Cloud Account Administrator.
2. In Oracle Cloud Infrastructure Console, add one or more users.
a. Open the navigation menu and click Identity & Security. Under Identity, click
Domains.


b. Select the identity domain you want to work in and click Users.
c. Click Create User.
d. Enter details about the user, and click Create.
3. In Oracle Cloud Infrastructure Console, create an OCI group.
a. Open the navigation menu and click Identity & Security. Under Identity, click
Domains.
b. Select the identity domain you want to work in and click Groups.
c. Click Create Group.
d. Enter details about the group.
For example, if you're creating a policy that gives users permissions to fully manage
Oracle NoSQL Database Cloud Service tables you might name the group
nosql_service_admin (or similar) and include a short description such as "Users with
permissions to set up and manage Oracle NoSQL Database Cloud Service tables on
Oracle Cloud Infrastructure" (or similar).
4. Create a policy that gives users belonging to an OCI group, specific access permissions
to Oracle NoSQL Database Cloud Service tables or compartments.


a. Open the navigation menu and click Identity & Security. Under Identity, click
Policies.
b. Select a compartment, and click Create Policy.
For details and examples, see Policies Reference and Typical Policy
Statements to Manage Tables.
If you're unfamiliar with how policies work, see How Policies Work.
5. To manage and use NoSQL tables via Oracle NoSQL Database Cloud Service
SDKs, the user must set up the API keys. See Acquiring Credentials.

Note:
Federated users can also manage and use Oracle NoSQL Database
Cloud Service tables. This requires the service administrator to set up
the federation in Oracle Cloud Infrastructure Identity and Access
Management. See Federating with Identity Providers.

Users belonging to any groups mentioned in the policy statement get their new
permission when they next sign in to the Console.

Managing Access to Oracle NoSQL Database Cloud Service Tables


Learn about writing policies and viewing typical policy statements that you might use to
authorize access to Oracle NoSQL Database Cloud Service tables.
This article has the following topics:

Accessing NoSQL Tables Across Tenancies


This topic describes how to write policies that let your tenancy access NoSQL Tables
in other tenancies.
If you're new to policies, see Getting Started with Policies.

Cross-Tenancy Policies
Your organization might want to share resources with another organization that has its
own tenancy. It could be another business unit in your company, a customer of your
company, a company that provides services to your company, and so on. In cases like
these, you need cross-tenancy policies in addition to the required user and service
policies described previously.
To access and share resources, the administrators of both tenancies need to create special policy statements that explicitly state the resources that can be accessed and shared. These special statements use the words Define, Endorse, and Admit.
Endorse, Admit, and Define Statements
Here's an overview of the special verbs used in cross-tenancy statements:
Endorse: States the general set of abilities that a group in your own tenancy can
perform in other tenancies. The Endorse statement always belongs in the tenancy with
the group of users crossing the boundaries into the other tenancy to work with that
tenancy's resources. In the examples, you refer to this tenancy as the source.


Admit: States the kind of ability in your own tenancy that you want to grant a group from the other tenancy. The Admit statement belongs in the tenancy that is granting admittance. It identifies the group of users that requires resource access from the source tenancy, as identified by a corresponding Endorse statement. In the examples, you refer to this tenancy as the destination.
Define: Assigns an alias to a tenancy OCID for Endorse and Admit policy statements. A
Define statement is also required in the destination tenancy to assign an alias to the source
IAM group OCID for Admit statements.
Define statements must be included in the same policy entity as the endorse or the admit
statement. The Endorse and Admit statements work together, but they reside in separate
policies, one in each tenancy. Without a corresponding statement that specifies access, a
particular Endorse or Admit statement grants no access. You need an agreement from both
the tenancies.

Note:
In addition to policy statements, you must also be subscribed to a region to share
resources across regions.

Source tenancy policy statements


The source administrator creates policy statements that endorse a source IAM group allowed
to manage resources in the destination tenancy.

Note:
The cross-tenancy policies can also be written with other policy subjects. For more
details on policy subjects, see Policy Syntax in Oracle Cloud Infrastructure
Documentation.

Here is an example of a broad policy statement that endorses the IAM group NoSQLAdmins to do anything with all NoSQL Tables in any tenancy:

Endorse group NoSQLAdmins to manage nosql-family in any-tenancy

To write a policy that reduces the scope of tenancy access, the destination administrator must provide the destination tenancy OCID. Here is an example of policy statements that endorse the IAM group NoSQLAdmins to manage NoSQL Tables in the DestinationTenancy only:

Define tenancy DestinationTenancy as ocid1.tenancy.oc1..<destination_tenancy_OCID>
Endorse group NoSQLAdmins to manage nosql-family in tenancy DestinationTenancy

Destination tenancy policy statements


The destination administrator creates policy statements that:


• Define the source tenancy and IAM group that is allowed to access resources in your tenancy. The source administrator must provide this information.
• Admit those defined sources to access the NoSQL Tables that you want to allow access to in your tenancy.
Here is an example of policy statements that admit the IAM group NoSQLAdmins in the source tenancy to do anything with all NoSQL Tables in your tenancy:

Define tenancy SourceTenancy as ocid1.tenancy.oc1..<source_tenancy_OCID>
Define group NoSQLAdmins as ocid1.group.oc1..<group_OCID>
Admit group NoSQLAdmins of tenancy SourceTenancy to manage nosql-family in tenancy

Here is an example of policy statements that admit the IAM group NoSQLAdmins in the source tenancy to manage NoSQL Tables only in the Develop compartment:

Define tenancy SourceTenancy as ocid1.tenancy.oc1..<source_tenancy_OCID>
Define group NoSQLAdmins as ocid1.group.oc1..<group_OCID>
Admit group NoSQLAdmins of tenancy SourceTenancy to manage nosql-family in compartment Develop

Giving Another User Permission to Manage NoSQL Tables


When you activate your order for Oracle NoSQL Database Cloud Service, you (the
first user) are in the Administrators group by default. Being in the Administrators group
gives you full administration privileges in Oracle Cloud Infrastructure so you can
manage Oracle NoSQL Database Cloud Service tables and much more. There's no
need to delegate this responsibility but, if you want to, you can give someone else
privileges to create and manage Oracle NoSQL Database Cloud Service tables
through the manage nosql-tables permission.

In Oracle Cloud Infrastructure you use IAM security policies to grant permissions. First,
you must add the user to a group, and then you create a security policy that grants the
group the manage nosql-tables permission on a specific compartment or the tenancy
(any compartment in the tenancy). For example, you might create a policy statement
that looks like one of these:

allow group MyAdminGroup to manage nosql-tables in tenancy

allow group MyAdminGroup to manage nosql-tables in compartment MyOracleNoSQL

To find out how to create security policy statements specifically for Oracle NoSQL
Database Cloud Service, see Setting Up Users, Groups, and Policies Using Identity
and Access Management.

Typical Policy Statements to Manage Tables


Here are typical policy statements that you might use to authorize access to Oracle
NoSQL Database Cloud Service tables.


When you create a policy for your tenancy, you grant users access to all compartments by
way of policy inheritance. Alternatively, you can restrict access to individual Oracle NoSQL
Database Cloud Service tables or compartments.
Example 1-1 To allow group Admins to fully manage any Oracle NoSQL Database
Cloud Service table

allow group Administrators to manage nosql-tables in tenancy
allow group Administrators to manage nosql-rows in tenancy
allow group Administrators to manage nosql-indexes in tenancy

Example 1-2 To allow group Admins to do any operations against NoSQL Tables in
compartment Dev, use the family resource type.

allow group Admins to manage nosql-family in compartment Dev

Example 1-3 To allow group Analytics to do read-only operations against NoSQL Tables in compartment Dev

allow group Analytics to read nosql-rows in compartment Dev

Example 1-4 To only allow Joe in Developer to create, get and drop indexes of
NoSQL tables in compartment Dev

allow group Developer to manage nosql-indexes in compartment Dev
    where request.user.id = '<OCID of Joe>'

Example 1-5 To allow group Admins to create, drop, and move NoSQL Tables, but not alter them, in compartment Dev

allow group Admins to manage nosql-tables in compartment Dev
    where any {request.permission = 'NOSQL_TABLE_CREATE',
               request.permission = 'NOSQL_TABLE_DROP',
               request.permission = 'NOSQL_TABLE_MOVE'}

Example 1-6 To allow group Developer to read, update and delete rows of table
"customer" in compartment Dev but not others.

allow group Developer to manage nosql-rows in compartment Dev
    where target.nosql-table.name = 'customer'

Reference
• References for Analytics Integrator
• Reference on NoSQL Database Cloud Service
• Oracle NoSQL Database Migrator Reference


References for Analytics Integrator


• Known issues with Oracle NoSQL Database Analytics Integrator
• Failure handling in Oracle NoSQL Database Analytics Integrator

Known issues with Oracle NoSQL Database Analytics Integrator


Possible Loss of Precision with Some Data Types:
The Oracle NoSQL Database Analytics Integrator retrieves data from tables in the
Oracle NoSQL Database Cloud Service, converts that data to Parquet format, stores
the Parquet data in Object Storage, and finally transfers that data to a table in an ADW
database. To perform the conversion to Parquet format, the NoSQL Analytics
Integrator employs facilities provided by the Oracle NoSQL Database Migrator, which
maps Oracle NoSQL data types to comparable types defined by the Parquet type
system. The mapping between the NoSQL Database type system and the Parquet
type system is not a complete one-to-one mapping. See Oracle NoSQL to Parquet
Data Type Mapping for more details. In particular, the Parquet type system does not currently define a numeric data type analogous to the Oracle NoSQL NUMBER type; the largest numeric type defined by Parquet is the Parquet DOUBLE type. Thus, if a NoSQL table to be processed by the Oracle NoSQL Database Analytics Integrator contains a field of type NUMBER holding a value so large that it cannot be represented as a Parquet DOUBLE, then a loss of precision is possible when that value is converted to the Parquet DOUBLE type: the value will be represented in Parquet format as either +Infinity or -Infinity.
ADW Database Does Not Currently Handle JSON Field Types of Length Larger Than 4000 Bytes:
If the table you create in Oracle NoSQL Database Cloud Service contains a field (column) of type JSON, and the value written to that field is a JSON document longer than 4000 bytes in at least one row of the table, then although the Oracle NoSQL Database Analytics Integrator has no problem writing such values to Object Storage (in Parquet format), the ADW database does not process the JSON document correctly, displaying null instead of the contents of the document. Although the max_string_size initialization parameter of the ADW database is set to EXTENDED by default, the mechanism used by the ADW database to retrieve and display the corresponding Parquet value currently ignores the EXTENDED setting and attempts to store the value in a VARCHAR2(4000) type instead of VARCHAR2(32767), which causes the value to be truncated and null to be displayed. See Oracle Database Reference - Datatype Limits for more details.
Example: Create a table myJsonTable with two fields, an INTEGER and a JSON.
Suppose you populate the row with id=1 with a JSON document consisting of more
than 4000 bytes.

CREATE TABLE IF NOT EXISTS myJsonTable (id INTEGER,
    jsonField JSON, PRIMARY KEY (id)) USING TTL 1 days;


When you fetch the contents of the row with id=1, you should see output such as the
following:

SELECT * FROM myJsonTable WHERE id = '1';

id jsonField
1 (null)

Work Around: Until ADW fixes this bug, you can manually work around the issue by doing
the following from the Database Actions SQL Interface.
• Verify that the max_string_size initialization parameter is set to EXTENDED in the
database.

SELECT name,value FROM v$parameter WHERE name = 'max_string_size';

If the value of the max_string_size is set to STANDARD, then increase the size from
STANDARD to EXTENDED.
• Drop the table

DROP TABLE myJsonTable;

• Manually recreate the table and specify enough bytes to hold the JSON document.

begin
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
    table_name => 'myJsonTable',
    -- use 'OCI$RESOURCE_PRINCIPAL' or your Object Storage credential,
    -- for example 'NOSQLADWDB001_OBJ_STORE_CREDENTIAL'
    credential_name => 'OCI$RESOURCE_PRINCIPAL',
    file_uri_list => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/nosqldev/b/nosql-to-adw/o/myJsonTable*',
    format => '{"type":"parquet", "schema": "first"}',
    column_list => 'ID NUMBER (10), JSONFIELD VARCHAR2(32767)'
  );
end;

• You should now be able to see the actual contents of the JSON document in the row with id=1.

SELECT * FROM myJsonTable WHERE id = '1';


Note:
Rather than declaring the JSONFIELD as VARCHAR2(32767) you can
also work around this issue by declaring that column as type CLOB.

begin
  DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
    table_name => 'myJsonTable',
    -- use 'OCI$RESOURCE_PRINCIPAL' or your Object Storage credential,
    -- for example 'NOSQLADWDB001_OBJ_STORE_CREDENTIAL'
    credential_name => 'OCI$RESOURCE_PRINCIPAL',
    file_uri_list => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/nosqldev/b/nosql-to-adw/o/myJsonTable*',
    format => '{"type":"parquet", "schema": "first"}',
    column_list => 'ID NUMBER (10), JSONFIELD CLOB'
  );
end;

Some Clients Do Not Handle Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, and Double.NaN Correctly:
If the table you create in Oracle NoSQL Database Cloud Service contains a field
(column) of type DOUBLE, and if the value written to that field is
Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, or Double.NaN (Not-a-
Number) in at least one row of the table, then although the Oracle NoSQL Database
Analytics Integrator has no problem writing such values to Object Storage (in Parquet
format), and although the ADW database has no problem retrieving and storing those
values, some of the clients used to analyze those values may have trouble handling
and/or displaying such non-numeric values. For example, when you attempt to use
either the Oracle Cloud Database Actions SQL Interface or Oracle Analytics (Desktop
or Cloud) to query the ADW database table, this issue manifests itself in two ways.
When you use the Run Statement button on the Database Actions SQL Interface
(represented by a green circle containing a white arrow) to execute a single SELECT
query on the table, although the query actually completes, the results of the query are
never displayed and the command appears to hang.

Note:
One can tell that the query completes rather than hangs when using the Run
Statement option in the Database Actions SQL Interface when the Query
Result window of that interface eventually displays a dropdown menu
labeled Download and displays the Execution time (even though the
spinning wheel appears to indicate the query is hanging).

There are two ways you can work around this issue. First, you can simply execute the query as a script. To do this, select the query in the Worksheet window of the tool and then click the Run Script button. This displays the results of the query in the Script Output window of the tool, showing any Double.POSITIVE_INFINITY values as the string 'Infinity', Double.NEGATIVE_INFINITY values as the string '-Infinity', and any Double.NaN values as the string 'NaN'.


Another way to work around the issue in the Database Actions SQL Interface is to use Run Statement to execute the query, and when the Download dropdown menu appears in the Query Result window (indicating that the query has completed), click the Download dropdown menu and then the menu item labeled JSON to export the output of the query as a JSON document. Once you have exported the query results, you can use your browser or editor of choice to examine them.
On the other hand, if you use Oracle Analytics (desktop tool or cloud service) to query the
table, then the following error trace occurs:

Odbc driver returned an error (SQLExecDirectW).


State: HY000. Code.10058. [NQODBC][SQL_STATE:HY000]
[nQSError:10058] A general error has occurred.
State: HY000. Code: 43113. [nQSError: 43113] Message returned from OBIS.
State: HY000. Code: 43119. [nQSError: 43119] Query Failed.
State: HY000. Code: 17001. [nQSError: 17001] Oracle Error code: 1722,
message: ORA-01722: invalid number at OCI call OCIStmtFetch.
State: HY000. Code: 17012. [nQSError: 17012] Bulk fetch failed. (HY000)
SQL Issued:
SET VARIABLE DISABLE_CACHE_SEED=1,
DISABLE_XSA_CACHE_SEED=1,
ENABLE_DIMENSIONALITY=1;
SELECT 0 s_0, XSA('weblogic'.'1cdbf90a-570e-4ebb-946b-5510da1b5f76').
"input"."Data"."XD" s_1,
XSA('weblogic'.'1cdbf90a-570e-4ebb-946b-5510da1b5f76').
"input"."Data"."XTCTYPE" s_2,
XSA('weblogic'.'1cdbf90a-570e-4ebb-946b-5510da1b5f76').
"input"."Data"."XTESTCASE" s_3,
FROM XSA('weblogic'.'1cdbf90a-570e-4ebb-946b-5510da1b5f76').input."Data"

There is no workaround for this issue in Oracle Analytics.
Thus, until the Database Actions SQL Interface and Oracle Analytics address how they handle Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, and Double.NaN, you should always make note of whether the table you wish to analyze contains any rows with one or more of these values.

Failure handling in Oracle NoSQL Database Analytics Integrator


If a failure occurs at any point during the data transfer process (NoSQL Cloud Service to Object Storage, or Object Storage to Autonomous Database), the Oracle NoSQL Database Analytics Integrator makes no attempt to automatically recover or continue processing from the failure point. Such failures can be handled by simply re-executing the utility, which removes all data stored during the prior run and then restarts the data retrieval and transfer processing from scratch.

Reference on NoSQL Database Cloud Service


• Oracle NoSQL Database Cloud Service Reference
• Oracle NoSQL Database Cloud Service Policies Reference
• Known Issues for Oracle NoSQL Database Cloud Service


Oracle NoSQL Database Cloud Service Reference


Learn about supported data types, DDL statements, and Oracle NoSQL Database Cloud Service parameters and metrics.
This article has the following topics:

Supported Data Types


Oracle NoSQL Database Cloud Service supports many common data types.

Data Type Description


BINARY A sequence of zero or more bytes. The storage size is the number of bytes plus an
encoding of the size of the byte array, which is a variable, depending on the size of
the array.
FIXED_BINARY A fixed-size byte array. There is no extra encoding overhead for this data type.
BOOLEAN A data type with one of two possible values: TRUE or FALSE. The storage size of the
boolean is 1 byte.
DOUBLE A long floating-point number, encoded using 8 bytes of storage for index keys. If it is a
primary key then it uses 10 bytes of storage.
FLOAT A long floating point number, encoded using 4 bytes of storage for index keys. If it is a
primary key then it uses 5 bytes of storage.
LONG A long integer number has a variable-length encoding that uses 1-8 bytes of storage
depending on the value. If it is a primary key then it uses 10 bytes of storage.
INTEGER A long integer number has a variable-length encoding that uses 1-4 bytes of storage
depending on the value. If it is a primary key then it uses 5 bytes of storage.
STRING A sequence of zero or more Unicode characters. The String type is encoded as
UTF-8 and stored in that encoding. The storage size is the number of UTF-8 bytes
plus the length, which may be 1-4 bytes depending on the number of bytes in the
encoding. When stored in an index key the storage size is the number of UTF-8 bytes
plus a single null termination byte.



NUMBER An arbitrary-precision signed decimal number.
It is serialized in a byte array format that can be used for ordered comparisons. The
format has 2 parts:
1. The sign and exponent plus a single digit. This takes 1-6 bytes but normally is 2
unless the exponent is quite large
2. The mantissa of the value which is approximately one byte for every 2 digits
Examples:
12.345678 serializes in 6 bytes
1.234E+102 serializes in 5 bytes

Note:
When you need to use numeric values in your schema, it is recommended to decide on the data types in the order given below: INTEGER, LONG, FLOAT, DOUBLE, NUMBER. Avoid NUMBER unless you really need it for your use case, as NUMBER is expensive both in terms of storage and processing power used.

TIMESTAMP A point in time with a precision. The precision affects the storage size and usage.
Timestamp is stored and managed in UTC (Coordinated Universal Time). The
Timestamp datatype requires anywhere from 3 to 9 bytes depending on the precision
used.
The following breakdown illustrates the storage used by this datatype:
• bit[0~13] year - 14 bits
• bit[14~17] month - 4 bits
• bit[18~22] day - 5 bits
• bit[23~27] hour - 5 bits [optional]
• bit[28~33] minute - 6 bits [optional]
• bit[34~39] second - 6 bits [optional]
• bit[40~71] fractional second [optional with variable length]
UUID Note: The UUID data type is considered a subtype of the STRING data type. The
storage size is 16 bytes as an index key. If used as a primary key the storage size is
19 bytes.
ENUM An enumeration is represented as an array of strings. ENUM values are symbolic
identifiers (tokens) and are stored as a small integer value representing an ordered
position in the enumeration.
ARRAY An ordered collection of zero or more typed items. Arrays that are not defined as
JSON cannot contain NULL values.
Arrays declared as JSON can contain any valid JSON, including the special value,
null, which is relevant to JSON.
MAP An unordered collection of zero or more key-item pairs, where all keys are strings and
all items are the same type. All keys must be unique. The key-item pairs are called
fields, the keys are field names, and the associated items are field values. Field
values can have different types, but maps cannot contain NULL field values.
RECORD A fixed collection of one or more key-item pairs, where all keys are strings. All keys in
a record must be unique.



JSON Any valid JSON data.

Table States and Life Cycles


Learn about the different table states and their significance (table life cycle process).
Each table passes through a series of different states from table creation to deletion
(drop). For example, a table in the DROPPING state cannot proceed to the ACTIVE state,
while a table in the ACTIVE state can change to the UPDATING state. You can track the
different table states by monitoring the table life cycle. This section describes the
various table states.

Table State Description


CREATING The table is in the process of being created. It is not ready to use.
UPDATING An update to the table is in progress. Further table modifications are not possible
while the table is in this state.
A table is in the UPDATING state when:
• The table limits are being changed
• The table schema is evolving
• Adding or dropping a table index
ACTIVE The table can be used in the current state. The table may have been recently
created, or modified, but the table state is now stable.
DROPPING The table is being dropped and cannot be accessed for any purpose.
DROPPED The table has been dropped and no longer exists for read, write, or query
activities.

Note:
Once dropped, a table with the same name can be created again.
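You can also check the current state of a table from the OCI CLI; a minimal sketch, with placeholder values, where the lifecycle-state field in the response reports the table state:

oci nosql table get --compartment-id <compartment_OCID> --table-name-or-id <table_name>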


Debugging SQL statement errors in the OCI console


When you are using the OCI console to create a table using a DDL statement, insert or update data using a DML statement, or fetch data using a SELECT query, you may get an error that your statement is incomplete or faulty in one of the following common scenarios:
• If you have a semi-colon at the end of your SQL statement.
• If there is a syntax error in your SQL statement like the wrong usage of commas, usage
of any unnecessary character in the statement, etc.
• If there is a spelling error in your SQL statement in any of the SQL keywords or in your
datatype definition.
• If you have defined a column as NOT NULL but not assigned a DEFAULT value to it.
How to handle some Incomplete or faulty errors while using the OCI console to create or
manage data:
• Remove the semi-colon (if present) at the end of the SQL statement.
• Check if there is any undesired character or wrong punctuation in your SQL statement.
• Check for spelling errors in your SQL statement.
• Check if all your column definitions are complete and correct.
• Check if you have defined a primary key for your table.
If you still get an error after eliminating some of the possible situations as discussed above,
you can use Cloud Shell to run your query and capture the exact error as shown in the
examples below.
Example 1: Executing a DDL statement from the cloud shell
1. In your OCI console, Open the Cloud Shell from the top right menu.
2. Copy your DDL statement (for example, tableddl.nosql) into a variable (DDL_TABLE).
Example:

DDL_TABLE=$(cat tableddl.nosql)

3. Invoke the oci command to execute your DDL statement.

Note:
You need to give the compartment_id and also the values for the table capacity
for this DDL statement.

oci nosql table create \
    --compartment-id "<comp_ocid>" --name <table_name> \
    --ddl-statement "$DDL_TABLE" \
    --table-limits '{"maxReadUnits": 10, "maxWriteUnits": 10, "maxStorageInGBs": 5}' \
    --wait-for-state SUCCEEDED --wait-for-state FAILED


This will give you the exact error in your DDL statement.
Example 2: Executing a SELECT statement from the cloud shell
1. In your OCI console, Open the Cloud Shell from the top right menu.
2. Copy your SQL SELECT statement (for example, query1.sql) into a variable (SQL_SELECTSTMT).
Example:

SQL_SELECTSTMT=$(cat ~/query1.sql | tr '\n' ' ')

3. Invoke the oci command to execute your SQL SELECT statement.

Note:
You need to give the compartment_id for this SELECT statement.

oci nosql query execute --compartment-id "<comp_ocid>" --statement "$SQL_SELECTSTMT"

This will give you the exact error in your SQL statement.

Data Definition Language Reference


Learn how to use DDL in Oracle NoSQL Database Cloud Service.
Use Oracle NoSQL Database Cloud Service DDL to create, alter, and drop tables and
indexes.
For information on the syntax of the DDL language, see Table Data Definition
Language Guide. This guide documents the DDL language as supported by the on-
premises Oracle NoSQL Database product. The Oracle NoSQL Database Cloud
Service supports a subset of this functionality and the differences are documented in
the DDL Differences in the Cloud section.
Also, each NoSQL <language> driver provides an API to execute a DDL statement. To
write your application, see Using APIs to Create Tables and Indexes in Oracle NoSQL
Database Cloud Service .

Typical DDL Statements


A few samples of common DDL statements are as follows:
Create Table

CREATE TABLE [IF NOT EXISTS] table-name (
    field-definition, field-definition-2 ...,
    PRIMARY KEY (field-name, field-name-2...)
) [USING TTL ttl]


For example:

CREATE TABLE IF NOT EXISTS audience_info (


cookie_id LONG,
ipaddr STRING,
audience_segment JSON,
PRIMARY KEY(cookie_id))

Alter Table

ALTER TABLE table-name (ADD field-definition)
ALTER TABLE table-name (DROP field-name)
ALTER TABLE table-name USING TTL ttl

For example:

ALTER TABLE audience_info USING TTL 7 days

Create Index

CREATE INDEX [IF NOT EXISTS] index-name ON table-name (path_list)

For example:

CREATE INDEX segmentIdx ON audience_info (audience_segment.sports_lover AS STRING)

Drop Table

DROP TABLE [IF EXISTS] table-name

For example:

DROP TABLE audience_info

See the reference guides for a complete list:


• Table Data Definition Language guide
• SQL Reference for Oracle NoSQL Database

DDL Differences in the Cloud


The cloud service DDL language differs from what is described in the reference guide in the following ways:
Table Names
• Limited to 256 characters, and are restricted to alphanumeric characters and underscore
• Must start with a letter
• Cannot include special characters


• Child tables are not supported


Unsupported Concepts
• DESCRIBE and SHOW TABLE statements.
• Full text indexes
• User and role management
• On-premise regions

Query Language Reference


Learn how to use SQL statements to update and query data in Oracle NoSQL
Database Cloud Service.
The Oracle NoSQL Database uses the SQL query language to update and query data
in NoSQL tables. See SQL Reference for Oracle NoSQL Database to learn the query
language syntax.

Typical Queries

SELECT <expression>
FROM <table name>
[WHERE <expression>]
[GROUP BY <expression>]
[ORDER BY <expression> [<sort order>]]
[LIMIT <number>]
[OFFSET <number>];

For example:
SELECT * FROM Users;
SELECT id, firstname, lastname FROM Users WHERE firstname = "Taylor";

UPDATE <table_name> [AS <table_alias>]
<update_clause>[, <update_clause>]*
WHERE <expr>
[<returning_clause>];

For example:
UPDATE JSONPersons $j
SET TTL 1 DAYS
WHERE id = 6
RETURNING remaining_days($j) AS Expires;

Query Language Differences in the Cloud


The cloud service query support differs from what is described in the query language
reference guide in the following way:
Restrictions on Expressions Used in the SELECT Clause
Oracle NoSQL Database Cloud Service supports grouping expressions and arithmetic
expressions over aggregate functions in the SELECT clause. No other kinds of expressions
are allowed in the SELECT clause; for example, CASE expressions are not allowed.
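
As a sketch, using the audience_info table from the DDL samples, the following query is allowed because its SELECT clause contains only a grouping expression (ipaddr) and an arithmetic expression over aggregate functions:

SELECT ipaddr, count(*) + sum(cookie_id) FROM audience_info GROUP BY ipaddr

By contrast, a CASE expression in the SELECT clause is rejected by the cloud service:

SELECT CASE WHEN ipaddr IS NULL THEN 0 ELSE 1 END FROM audience_info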


Each NoSQL Database driver provides an API to execute a query statement.

Query Plan Reference


A query execution plan is the sequence of operations Oracle NoSQL Database performs to
run a query.
A query execution plan is a tree of plan iterators. Each kind of iterator evaluates a different
kind of expression that may appear in a query. In general, the choice of index and the kind of
associated index predicates can have a drastic effect on query performance. As a result, you
often want to see which index is used by a query and which predicates have been pushed
down to it. Based on this information, you may want to force the use of a different index via
index hints. This information is contained in the query execution plan. All Oracle NoSQL
drivers provide APIs to display the execution plan of a query.
Some of the most common and important iterators used in queries are:
TABLE iterator: A table iterator is responsible for:
• Scanning the index used by the query (which may be the primary index)
• Applying any filtering predicates pushed to the index
• Retrieving the rows pointed to by the qualifying index entries, if necessary. If the index is
covering, the result set of the TABLE iterator is a set of index entries; otherwise, it is a set
of table rows.

Note:
An index is called a covering index with respect to a query if the query can be
evaluated using only the entries of that index, that is, without the need to retrieve
the associated rows.
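
For instance, with the segmentIdx index from the DDL samples, the following query (a sketch) reads only values stored in the index entries, namely the indexed path and the primary-key column cookie_id, so segmentIdx is a covering index for it:

SELECT cookie_id FROM audience_info t
WHERE t.audience_segment.sports_lover = "true"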

SELECT iterator: It is responsible for executing the SELECT expression.

Every query has a SELECT clause, so every query plan has a SELECT iterator. A
SELECT iterator has the following structure:

"iterator kind" : "SELECT",
"FROM" :
{
},
"FROM variable" : "...",
"SELECT expressions" :
[
{
}
]

The SELECT iterator has fields such as "FROM", "WHERE", "FROM variable", and "SELECT
expressions". "FROM" and "FROM variable" represent the FROM clause of the SELECT
expression, "WHERE" represents the filter clause, and "SELECT expressions" represents the
SELECT clause.


RECEIVE iterator: It is a special internal iterator that separates the query plan into two
parts:
1. The RECEIVE iterator itself and all iterators that are above it in the iterator tree are
executed at the driver.
2. All iterators below the RECEIVE iterator are executed at the replication nodes
(RNs); these iterators form a subtree rooted at the unique child of the RECEIVE
iterator.
In general, the RECEIVE iterator acts as a query coordinator. It sends its subplan to the
appropriate RNs for execution and collects the results. It may perform additional
operations such as sorting and duplicate elimination, and it propagates the results to its
ancestor iterators (if any) for further processing.
Distribution kinds:
A distribution kind specifies how the query will be distributed for execution across the
RNs participating in an Oracle NoSQL database (a store). The distribution kind is a
property of the RECEIVE iterator.
The different distribution kinds, each illustrated by a sample query after this list, are:
• SINGLE_PARTITION: A SINGLE_PARTITION query specifies a complete shard
key in its WHERE clause. As a result, its full result set is contained in a single
partition, and the RECEIVE iterator will send its subplan to a single RN that stores
that partition. A SINGLE_PARTITION query may use either the primary-key index
or a secondary index.
• ALL_PARTITIONS: Queries use the primary-key index here and they don’t specify
a complete shard key. As a result, if the store has M partitions, the RECEIVE
iterator will send M copies of its subplan to be executed over one of the M
partitions each.
• ALL_SHARDS: Queries use a secondary index here and they don’t specify a
complete shard key. As a result, if the store has N shards, the RECEIVE iterator
will send N copies of its subplan to be executed over one of the N shards each.
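A minimal sketch, assuming a hypothetical users table whose primary key (and therefore
shard key) is id, with a secondary index on lastName:

SELECT * FROM users WHERE id = 10
(SINGLE_PARTITION: the complete shard key is specified)

SELECT * FROM users WHERE id > 10
(ALL_PARTITIONS: primary-key index, but no complete shard key)

SELECT * FROM users WHERE lastName = "Smith"
(ALL_SHARDS: a secondary index is used)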
Anatomy of a query execution plan:
Query execution takes place in batches. When a query subplan is sent to a partition or
shard for execution, it executes there until a batch limit is reached. The batch limit is the
number of read units the query may consume locally in one batch. The default is 2000 read
units (about 2MB of data), and it can only be decreased via a query-level option.
When the batch limit is reached, any local results that were produced are sent back to
the RECEIVE iterator for further processing along with a boolean flag that says
whether more local results may be available. If the flag is true, the reply includes
resume information. If the RECEIVE iterator decides to resend the query to the same
partition/shard, it will include this resume information in its request, so that the query
execution will restart at the point where it stopped during the previous batch. This is
because no query state is maintained at the RN after a batch finishes. The next batch
for the same partition/shard may take place at the same RN as the previous batch or
at a different RN that also stores the same partition/shard.

Oracle NoSQL Database Cloud Service Policies Reference


Learn about supported variables, permissions, and Verb + Resource-Type combinations
available for Oracle NoSQL Database Cloud Service Policies.


This article has the following topics:

Supported Variables
Learn about the variables supported by Oracle NoSQL Database Cloud Service.
Oracle NoSQL Database Cloud Service supports all the general variables. See General
Variables for All Requests. The following variables can be used with all three NoSQL
resource types, except for the ListTables and CreateTable operations, which do not target a
specific table.

Table 1-20 Supported Variables

Variable                   Variable Type   Comments
target.nosql-table.id      OCID            Use this variable to control access to a
                                           specific NoSQL table by OCID.
target.nosql-table.name    String          Use this variable to control access to a
                                           specific NoSQL table by name.
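
For example, a policy statement along the following lines (the group and compartment names are hypothetical) uses target.nosql-table.name to restrict access to a single table:

Allow group AppDevs to use nosql-tables in compartment AppDev
   where target.nosql-table.name = 'audience_info'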

Details for Verb + Resource-Type Combinations


Learn about the permissions and API operations covered by each verb.
The level of access is cumulative as you go from inspect > read > use > manage. A plus
sign (+) in a table cell indicates incremental access compared to the cell directly above it,
whereas "no extra" indicates no incremental access.
For example, for the nosql-tables resource-type, the read verb includes the same
permissions and API operations as the inspect verb, plus the NOSQL_TABLE_READ
permission and the GetTable API operation. The use verb adds the UpdateTable API
operation compared to read, and manage covers further permissions and operations
compared to use, as the sample policy statements below illustrate.
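
As an illustration of the verb levels (group and compartment names are hypothetical):

Allow group Auditors to inspect nosql-tables in compartment AppDev
Allow group Developers to use nosql-tables in compartment AppDev
Allow group DBAdmins to manage nosql-tables in compartment AppDev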

nosql-tables

Table 1-21 nosql-tables

Verb      Permissions            REST APIs Fully Covered      NoSQL Cloud Driver Request Covered
INSPECT   NOSQL_TABLE_INSPECT    ListTables                   ListTableRequest
READ      INSPECT +              GetTable                     GetTableRequest
          NOSQL_TABLE_READ
READ      INSPECT +              ListWorkRequests,            None
          NOSQL_TABLE_READ       GetWorkRequest,
                                 ListWorkRequestErrors,
                                 ListWorkRequestLogs
READ      INSPECT +              ListTableUsage               TableUsageRequest
          NOSQL_TABLE_READ
USE       READ +                 UpdateTable,                 TableRequest
          NOSQL_TABLE_ALTER      DeleteWorkRequest            (change TableLimits, ALTER TABLE)
MANAGE    USE +                  CreateTable                  TableRequest (CREATE TABLE)
          NOSQL_TABLE_CREATE
MANAGE    NOSQL_TABLE_DROP       DeleteTable                  TableRequest (DROP TABLE)
MANAGE    NOSQL_TABLE_MOVE       ChangeTableCompartment       Not supported

nosql-rows

Table 1-22 nosql-rows

Verb      Permissions           REST APIs Fully Covered      NoSQL Cloud Driver Request Covered
INSPECT   None                  None                         None
READ      NOSQL_ROWS_READ       GetRow,                      GetRequest,
                                Query (SELECT),              PrepareRequest,
                                PrepareStatement,            QueryRequest (SELECT)
                                SummarizeStatement
USE       READ +                UpdateRow,                   PutRequest,
          NOSQL_ROWS_INSERT     Query (INSERT/UPSERT,        WriteMultipleRequest (Put),
                                UPDATE)                      QueryRequest (INSERT/UPSERT, UPDATE)
MANAGE    USE +                 DeleteRow,                   DeleteRequest,
          NOSQL_ROWS_DELETE     Query (DELETE)               MultiDeleteRequest,
                                                             WriteMultipleRequest (Delete),
                                                             QueryRequest (DELETE)

nosql-indexes

Table 1-23 nosql-indexes

Verb      Permissions           REST APIs Fully Covered      NoSQL Cloud Driver Request Covered
INSPECT   None                  None                         None
READ      NOSQL_INDEX_READ      ListIndexes,                 GetIndexesRequest + indexName,
                                GetIndex                     GetIndexesRequest
USE       READ + NONE           ListIndexes,                 GetIndexesRequest + indexName,
                                GetIndex                     GetIndexesRequest
MANAGE    READ +                CreateIndex                  TableRequest (CREATE INDEX)
          NOSQL_INDEX_CREATE
MANAGE    NOSQL_INDEX_DROP      DeleteIndex                  TableRequest (DROP INDEX)

Permission Required for Each NoSQL Cloud Driver Request


Learn about the required permissions for each NoSQL Cloud Driver Request.
The table below lists the API operations in a logical order, grouped by resource type. For
information about permissions, see Permissions in Oracle Cloud Infrastructure
Documentation.

Table 1-24 Permissions

Request                                  Permissions             Operation Id (request.operation)
DeleteRequest                            NOSQL_ROWS_DELETE       DeleteRow
GetIndexesRequest                        NOSQL_INDEX_READ        GetIndex
GetRequest                               NOSQL_ROWS_READ         GetRow
GetTableRequest                          NOSQL_TABLE_READ        GetTable
ListTablesRequest                        NOSQL_TABLE_INSPECT     ListTables
MultiDeleteRequest                       NOSQL_ROWS_DELETE       DeleteRow
PrepareRequest                           NOSQL_ROWS_READ         GetRow
PutRequest                               NOSQL_ROWS_INSERT       UpdateRow
QueryRequest (SELECT)                    NOSQL_ROWS_READ         GetRow
QueryRequest (INSERT, UPSERT, UPDATE)    NOSQL_ROWS_INSERT       UpdateRow
QueryRequest (DELETE)                    NOSQL_ROWS_DELETE       DeleteRow
TableRequest (CREATE TABLE)              NOSQL_TABLE_CREATE      CreateTable
TableRequest (ALTER TABLE)               NOSQL_TABLE_ALTER       UpdateTable
TableRequest (DROP TABLE)                NOSQL_TABLE_DROP        DeleteTable
TableUsageRequest                        NOSQL_TABLE_READ        GetTable
WriteMultipleRequest                     has PutRequest:         UpdateRow,
                                         NOSQL_ROWS_INSERT;      DeleteRow
                                         has DeleteRequest:
                                         NOSQL_ROWS_DELETE


Permission Required for Each REST API Operation


Learn about the required permissions for each REST API operation request.
The table below lists the REST API operations in a logical order, grouped by resource
type. For information about permissions, see Permissions in Oracle Cloud
Infrastructure Documentation.

Table 1-25 Permissions

Request                           Permissions
ListTables                        NOSQL_TABLE_INSPECT
CreateTable                       NOSQL_TABLE_CREATE
GetTable                          NOSQL_TABLE_READ
UpdateTable                       NOSQL_TABLE_ALTER
DeleteTable                       NOSQL_TABLE_DROP
ListIndexes                       NOSQL_INDEX_READ
CreateIndex                       NOSQL_INDEX_CREATE
GetIndex                          NOSQL_INDEX_READ
DeleteIndex                       NOSQL_INDEX_DROP
GetRow                            NOSQL_ROWS_READ
UpdateRow                         NOSQL_ROWS_INSERT
DeleteRow                         NOSQL_ROWS_DELETE
ListTableUsage                    NOSQL_TABLE_READ
ChangeTableCompartment            NOSQL_TABLE_MOVE
Query (SELECT)                    NOSQL_ROWS_READ
Query (INSERT, UPSERT, UPDATE)    NOSQL_ROWS_INSERT
Query (DELETE)                    NOSQL_ROWS_DELETE
PrepareStatement                  NOSQL_TABLE_READ
SummarizeStatement                NOSQL_TABLE_READ
ListWorkRequests                  NOSQL_TABLE_READ
GetWorkRequest                    NOSQL_TABLE_READ
DeleteWorkRequest                 NOSQL_TABLE_ALTER
ListWorkRequestErrors             NOSQL_TABLE_READ
ListWorkRequestLogs               NOSQL_TABLE_READ

When you write a policy with request.operation, use the names of the API operations. For
Query operations, use the operation that the statement in the query maps to. For example:

SELECT => GetRow
INSERT, UPSERT, or UPDATE => UpdateRow
DELETE => DeleteRow
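
For example, a policy along these lines (group and compartment names are hypothetical) uses request.operation to allow everything on rows except deletes:

Allow group Analysts to manage nosql-rows in compartment AppDev
   where request.operation != 'DeleteRow'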


Known Issues for Oracle NoSQL Database Cloud Service


Learn about issues that you can encounter when using Oracle NoSQL Database Cloud
Service and how to work around them.

Supported Browsers
Web browser support follows the Oracle Software Web Browser Support Policy.

As of the current release of Oracle NoSQL Database Cloud Service, no known issues have
been reported.
