GaussDB Manuals
V300R001C00
Issue 03
Date 2019-06-06
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Contents
2.4 Tools
2.4.1 Parameter Overview
2.4.2 zsql Parameters
2.5 Advanced Optimization
2.5.1 Parameter Overview
2.5.2 Flow Control Switch
2.5.3 Soft Parse Switch
2.5.4 Thread Processing
2.5.5 Session Control Parameters
2.6 Security and Audit
2.6.1 Parameter Overview
2.6.2 Parameter Descriptions
2.7 Logs
2.7.1 Parameter Overview
2.7.2 Parameter Descriptions
2.8 Reserved Parameters
3.1.25 SYS_PART_COLUMNS
3.1.26 SYS_PART_OBJECTS
3.1.27 SYS_PART_STORES
3.1.28 SYS_PENDING_DIST_TRANS
3.1.29 SYS_PENDING_TRANS
3.1.30 SYS_PROCS
3.1.31 SYS_PROC_ARGS
3.1.32 SYS_PROFILE
3.1.33 SYS_RECYCLEBIN
3.1.34 SYS_ROLES
3.1.35 SYS_SEQUENCES
3.1.36 SYS_SHADOW_INDEXES
3.1.37 SYS_SHADOW_INDEX_PARTS
3.1.38 SYS_SQL_MAPS
3.1.39 SYS_SYNONYMS
3.1.40 SYS_PRIVS
3.1.41 SYS_TABLES
3.1.42 SYS_TABLE_PARTS
3.1.43 SYS_TMP_SEG_STATS
3.1.44 SYS_USERS
3.1.45 SYS_USER_HISTORY
3.1.46 SYS_USER_ROLES
3.1.47 SYS_VIEWS
3.1.48 SYS_VIEW_COLS
3.1.49 WSR_PARAMETER
3.1.50 WSR_SQLAREA
3.1.51 WSR_SYS_STAT
3.1.52 WSR_SYSTEM
3.1.53 WSR_SYSTEM_EVENT
3.1.54 WSR_SNAPSHOT
3.1.55 WSR_CONTROL
3.1.56 WSR_DBA_SEGMENTS
3.1.57 WSR_LATCH
3.1.58 WSR_LIBRARYCACHE
3.1.59 WSR_LONGSQL
3.1.60 WSR_SEGMENT
3.1.61 WSR_SQL_LIST
3.1.62 WSR_SQL_LIST_PLAN
3.1.63 WSR_WAITSTAT
3.2 DBA Views
3.2.1 DB_DB_LINKS
3.2.2 DB_IND_STATISTICS
3.2.45 ADM_SEGMENTS
3.2.46 ADM_SEQUENCES
3.2.47 ADM_SOURCE
3.2.48 ADM_SYNONYMS
3.2.49 ADM_SYS_PRIVS
3.2.50 ADM_TABLES
3.2.51 ADM_TABLESPACES
3.2.52 ADM_TAB_COLS
3.2.53 ADM_TAB_COLUMNS
3.2.54 ADM_TAB_COL_STATISTICS
3.2.55 ADM_TAB_COMMENTS
3.2.56 ADM_TAB_DISTRIBUTE
3.2.57 ADM_TAB_MODIFICATIONS
3.2.58 ADM_TAB_PARTITIONS
3.2.59 ADM_TAB_PRIVS
3.2.60 ADM_TAB_STATISTICS
3.2.61 ADM_TRIGGERS
3.2.62 ADM_USERS
3.2.63 ADM_VIEWS
3.2.64 ADM_VIEW_COLUMNS
3.3 User Views
3.3.1 DB_ARGUMENTS
3.3.2 DB_COL_COMMENTS
3.3.3 DB_CONSTRAINTS
3.3.4 DB_DBLINK_TABLES
3.3.5 DB_DBLINK_TAB_COLUMNS
3.3.6 DB_DEPENDENCIES
3.3.7 DB_DISTRIBUTE_RULES
3.3.8 DB_DIST_RULE_COLS
3.3.9 DB_HISTOGRAMS
3.3.10 DB_INDEXES
3.3.11 DB_IND_COLUMNS
3.3.12 DB_IND_PARTITIONS
3.3.13 DB_OBJECTS
3.3.14 DB_PART_COL_STATISTICS
3.3.15 DB_PART_KEY_COLUMNS
3.3.16 DB_PART_STORE
3.3.17 DB_PART_TABLES
3.3.18 DB_PROCEDURES
3.3.19 DB_SEQUENCES
3.3.20 DB_SOURCE
3.3.21 DB_SYNONYMS
3.4.32 DV_PL_MANAGER
3.4.33 DV_PL_REFSQLS
3.4.34 DV_REACTOR_POOLS
3.4.35 DV_REPL_STATUS
3.4.36 DV_RESOURCE_MAP
3.4.37 DV_SEGMENT_STATS
3.4.38 DV_SESSIONS
3.4.39 DV_SESSION_EVENTS
3.4.40 DV_SESSION_SHARED_LOCKS
3.4.41 DV_SESSION_WAITS
3.4.42 DV_GMA
3.4.43 DV_GMA_STATS
3.4.44 DV_SPINLOCKS
3.4.45 DV_SQLS
3.4.46 DV_SQL_POOL
3.4.47 DV_SYS_STATS
3.4.48 DV_SYSTEM
3.4.49 DV_SYS_EVENTS
3.4.50 DV_TABLESPACES
3.4.51 DV_TEMP_POOLS
3.4.52 DV_TEMP_UNDO_SEGMENT
3.4.53 DV_TRANSACTIONS
3.4.54 DV_UNDO_SEGMENTS
3.4.55 DV_USER_ADVISORY_LOCKS
3.4.56 DV_USER_ASTATUS_MAP
3.4.57 DV_USER_PARAMETERS
3.4.58 DV_VERSION
3.4.59 DV_VM_FUNC_STACK
3.4.60 DV_WAIT_STATS
3.4.61 DV_XACT_LOCKS
3.4.62 DV_XACT_SHARED_LOCKS
3.5 View Descriptions
4 Monitoring Alarms
5 Interface Mapping (Basic Packages vs. Compatible Packages)
5.1 Data Dictionary Tables
5.2 DBA Views
5.3 User Views
5.4 Dynamic Performance Views
5.5 Configuration Parameters
6 Glossary
Intended Audience
This document is intended for GaussDB 100 database users, to help them obtain database information.
Before reading this document, you should be familiar with:
- Relational database concepts. This theory helps you get familiar with GaussDB 100 and its usage.
- Operating systems (OSs). This knowledge is required when you configure and run GaussDB 100.
Change History
Version 03 (2019-06-06)

Added:
- Data dictionary tables WSR_LONGSQL and WSR_SQL_LIST_PLAN
- Dynamic performance view DV_XACT_LOCKS
- HIGH_WATER_MARK column in DV_DATA_FILES
- BLOCK_REPAIR_ENABLE in HA
- BLOCK_REPAIR_TIMEOUT in HA
- Time Zone
- Monitoring Alarms

Modified:
- Configuration suggestions of VARIANT_MEMORY_AREA_SIZE, LARGE_VARIANT_MEMORY_AREA_SIZE, and _VMP_CACHES_EACH_SESSION in SGA
- Added the OPEN_INCONSISTENCY field in section DV_DATABASE

Version 02 (2019-04-05)

Added:
- HA
- TYPE_MAP_FILE in Data Type Control Parameters
- SSL_EXPIRE_ALERT_THRESHOLD and SSL_PERIOD_DETECTION in Parameter Descriptions
- ZSQL_SSL_QUIET and ZSQL_INTERACTION_TIMEOUT in zsql Parameters
- LOG_REPLAY_PROCESSES in Background Process
- TEMP_POOL_NUM in SGA
- _SERIALIZED_COMMIT in Transactions
- _PRIVATE_KEY_LOCKS and _PRIVATE_ROW_LOCKS in Session Control Parameters
- UNDO_RESERVE_SIZE in Transactions
- ARCH_CLEAN_IGNORE_STANDBY in Archive Logs
- Data dictionary tables WSR_LATCH, WSR_LIBRARYCACHE, WSR_SEGMENT, WSR_SQL_LIST, and WSR_WAITSTAT

Modified:
- Moved 32 ALL_* views from DBA Views to User Views
- Moved the ROLE_SYS_PRIVS view from DBA Views to User Views
- Moved NLS_SESSION_PARAMETERS from DBA Views to Dynamic Performance Views

Deleted:
- DBA_HIST_WAITSTAT view
- DBA_HIST_SEGMENT view
- DBA_HIST_LIBRARYCACHE view
- DBA_HIST_LATCH view
- Section 3.12.14 "Debugging and Restriction Information"
- V$LOG_HISTORY view
2 Parameters
GaussDB 100 provides parameters that control database system behavior. Avoid modifying these parameters after the database is installed. If a parameter must be modified, make sure you fully understand its impact on GaussDB 100 first; otherwise, unexpected results may occur.
Precautions
- If the value range of a parameter is a string, the string must comply with the path and file naming conventions of the OS running the target database.
- If the maximum value of a parameter is INT_MAX, the actual maximum varies by OS.
- If the maximum value of a parameter is DBL_MAX, the actual maximum varies by OS.
Viewing Parameters
You can view a single parameter or all parameters of GaussDB 100, including their values and other details.
- You can run the SHOW command to check parameters and their values.
  For details about the SHOW command, see "Client Tools > zsql" in the GaussDB 100 V300R001C00 Operation Guide to Tools.
  -- View a single parameter and its value:
  SHOW PARAMETER parameter_name;
- You can query the DV_PARAMETERS view to check the configured values, default values, value ranges, and data types of parameters, and to check whether the parameters can be modified.
  For details about the DV_PARAMETERS view, see DV_PARAMETERS.
  -- View the details of a single parameter:
  SELECT * FROM DV_PARAMETERS WHERE NAME = 'parameter_name';
Modifying Parameters
You can modify the values and attributes of parameters in GaussDB 100.
For details about the attributes and modification commands of parameters, see "Managing the
Database System > Configuring the Database System" in GaussDB 100 V300R001C00 User
Guide (Standalone).
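As a sketch of what a modification session might look like (the ALTER SYSTEM syntax shown here is an assumption modeled on common SQL databases, and the parameter name is used for illustration only; the User Guide cited above is authoritative):

```sql
-- Check the current value first:
SHOW PARAMETER ARCH_CLEAN_IGNORE_BACKUP;

-- Modify the parameter (verify the exact syntax against the User Guide):
ALTER SYSTEM SET ARCH_CLEAN_IGNORE_BACKUP = TRUE;

-- Confirm the change took effect:
SHOW PARAMETER ARCH_CLEAN_IGNORE_BACKUP;
```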
2.1 Databases
This section describes basic database parameters. You can adjust the parameters based on
service scenarios and data volume. Common users do not have permission to view database
parameters. Only user SYS is authorized to do so.
CONTROL_FILES
Parameter description: Specifies the path of a control file, which is automatically generated
by the system and cannot be changed.
The file records database metadata, such as the database name, creation timestamp, and the names and locations of data files and redo files. You are advised to multiplex the control file across different nodes or mirror it at the OS level.
PAGE_SIZE
Parameter description: Specifies the size of a page.
This parameter can be set only when a database is created. In other cases, do not set or modify
this parameter.
After a database is started, this parameter becomes read-only. To modify the parameter, stop
the database and modify the configuration file. The modification takes effect after the
database is rebuilt.
Value range: 8K, 16K, 32K (unit: byte)
Default value: 8K
DEFAULT_EXTENTS
Parameter description: Specifies the number of pages in an extent.
If you do not specify the number of pages in an extent when creating a tablespace, the default
value will be used.
Value range: 8, 16, 32, 64, and 128
Default value: 8
In HA scenarios, the value of this parameter must be the same on the primary and standby
databases. Otherwise, a core dump will occur on the standby database.
ARCH_CLEAN_IGNORE_BACKUP
Parameter description: Specifies whether to ignore archive log backup during automatic
archive deletion.
Valid value:
- TRUE: An archive log that meets the deletion conditions is deleted regardless of whether it has been backed up.
- FALSE: An archive log that meets the deletion conditions is not deleted if it has not been backed up.
ARCH_CLEAN_IGNORE_STANDBY
Parameter description: Specifies whether to ignore standby nodes during automatic archive
deletion.
Valid value:
- TRUE: During archive deletion, standby nodes are ignored. Only the rcy_point of the primary node determines whether archive files can be deleted.
- FALSE: During archive deletion, standby nodes are considered. The minimum rcy_point among the primary and standby nodes determines whether archive files can be deleted.
ARCHIVE_DEST_n
Parameter description: Specifies the destination for log archiving.
- LOCATION: Specifies the log archive address of the local host. For ARCHIVE_DEST_1, only LOCATION can be used.
- SERVICE: Specifies the IP addresses, port numbers, and log synchronization modes of the peer end. For ARCHIVE_DEST_2 to ARCHIVE_DEST_10, only SERVICE can be used. ARCHIVE_DEST_2 to ARCHIVE_DEST_10 are required only in HA deployments or when one primary node has multiple standby nodes.
- SYNC | ASYNC: Specifies the transmission mode of redo logs between the primary and standby databases. This option is optional; if it is not specified, the default value SYNC is used.
  - SYNC: Redo logs are transmitted in synchronous mode.
  - ASYNC: Redo logs are transmitted in asynchronous mode.
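For example, the destinations above might be combined as follows. This fragment is illustrative only: the path, IP address, port, and quoting style are assumptions, so confirm the exact syntax in the configuration chapter before use.

```
ARCHIVE_DEST_1 = 'LOCATION=/gaussdb/archive'
ARCHIVE_DEST_2 = 'SERVICE=192.168.0.2:1611 ASYNC'
```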
ARCHIVE_DEST_STATE_n
Parameter description: Specifies the availability state of the corresponding destination,
ARCHIVE_DEST_n.
DV_ARCHIVE_DEST_STATUS, a dynamic performance view, displays the value used by
the current session. The DEST_ID column in this view corresponds to the suffix n.
Valid value:
ARCHIVE_DEST_STATE_[1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 ]=
{ ENABLE | DEFER | ALTERNATE }
ENABLE means that a destination can be used for a subsequent archiving operation. DEFER
and ALTERNATE mean that a destination does not take effect.
Default value: ENABLE
ARCHIVE_FORMAT
Parameter description: Specifies the file format used when redo logs are archived.
Value range: a string
The value must contain %s (or %S) and %r (or %R), and cannot contain % symbols other than %s, %S, %t, %T, %r, and %R.
- %s and %S: archive sequence number
- %t and %T: thread ID
- %r and %R: reset log ID
Default value: arch_%r_%s.arc
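To illustrate how these placeholders compose an archive file name, here is an explanatory sketch in Python. This is not GaussDB code: the database performs the substitution internally and may pad or format the numbers differently.

```python
def expand_archive_format(fmt, reset_log_id, sequence, thread_id=1):
    """Expand an ARCHIVE_FORMAT-style template into an archive file name."""
    # Each placeholder and its replacement, per the parameter description above.
    substitutions = {
        "%s": sequence, "%S": sequence,          # archive sequence number
        "%t": thread_id, "%T": thread_id,        # thread ID
        "%r": reset_log_id, "%R": reset_log_id,  # reset log ID
    }
    for placeholder, value in substitutions.items():
        fmt = fmt.replace(placeholder, str(value))
    return fmt

# With the default format, reset log ID 1 and archive sequence 42:
print(expand_archive_format("arch_%r_%s.arc", 1, 42))  # arch_1_42.arc
```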
2.1.6 Transactions
_UNDO_SEGMENTS
Parameter description: Specifies the number of undo segments, which determines the
concurrency capability and total number of transactions.
This parameter can be set only when a database is created. In other cases, do not modify this
parameter.
After a database is started, this parameter becomes read-only. To modify the parameter, stop
the database and modify the configuration file. The modification takes effect after the
database is rebuilt.
Value range: an integer, in the range (0, 1024]
Default value: 32
_UNDO_ACTIVE_SEGMENTS
Parameter description: Specifies the number of active undo segments that can be used
currently.
Value range: an integer, in the range (0, 1024]. The value must be less than or equal to the
value of _UNDO_SEGMENTS.
Default value: 32
_UNDO_AUTO_SHRINK
Parameter description: Specifies whether to enable the automatic SHRINK UNDO
SEGMENT function.
Valid value:
l TRUE: Enable.
l FALSE: Do not enable.
Default value: TRUE
UNDO_RESERVE_SIZE
Parameter description: Specifies the number of undo pages reserved for an undo segment.
Value range: an integer, in the range [64, 1024]
_TX_ROLLBACK_PROC_NUM
Parameter description: Specifies the number of background threads for rolling back residual
transactions.
Default value: 2
PAGE_CHECKSUM
Parameter description: Specifies whether to enable checksum verification for databases.
Valid value:
2.2 Instances
You can set database instance parameters to modify the number or size of buffers, logs,
sessions, and transactions.
LSNR_ADDR
Parameter description: Specifies the listening IP address of the server.
Value range: a valid IPv4 or IPv6 address
Default value: 127.0.0.1
LSNR_PORT
Parameter description: Specifies the listening port of the server.
Value range: an integer, in the range [1024, 65535]
Default value: 1611
REACTOR_THREADS
Parameter description: Specifies the number of threads for I/O listening.
l The recommended value is OPTIMIZED_WORKER_THREADS divided by 50.
l If this parameter is set too large, more CPU, memory, and thread resources will be
occupied. When resources are insufficient, database exceptions may occur.
Value range: a positive integer, in the range [1, 30000]
Default value: 1
2.2.4 SGA
CR_POOL_SIZE
Parameter description: Specifies the size of a consistency read page buffer (that is, a CR
pool).
Set this parameter based on the actual memory size. A larger value can accelerate data access
in concurrency scenarios.
Value range: an integer, in the range [16 MB, 32 TB] (unit: byte)
Default value: 32M
Remarks: In concurrent access scenarios, the current version of a data page may
not be visible to the current session, so a visible page needs to be constructed
for the transaction. This constructed page is called a consistency read page.
CR_POOL_COUNT
Parameter description: Specifies the number of consistency read page sub-buffers (that is,
CR sub-pools).
Set this parameter based on the actual memory size. A larger value can reduce
contention between sessions and accelerate data access in concurrency scenarios.
Value range: an integer, in the range [1, 256]
Default value: 1
DATA_BUFFER_SIZE
Parameter description: Specifies the size of a data buffer, which is used for recently
accessed data.
Value range: an integer, in the range [64 MB, 32 TB] (unit: byte)
Set this parameter based on the actual memory size. A larger value can accelerate data access.
Default value: 128M
SHARED_POOL_SIZE
Parameter description: Specifies the size of a shared pool.
A shared pool contains space shared by Lock, SQL, and DC pools.
Value range: an integer, in the range [82 MB, 32 TB] (unit: byte)
Set this parameter based on the actual memory size. A larger value can accelerate data access.
Default value: 128M
_SQL_POOL_FACTOR
Parameter description: Specifies the maximum proportion of SQL pools in a shared pool.
l The maximum proportion of DC pools in the shared pool is 1 minus
_SQL_POOL_FACTOR.
l When DC pools are insufficient (record the number of all pages in the DC pools as B),
use dv_gma_stats to check the number of pages in the SQL pools and record it as A.
The recommended ratio is 0.8A:B. In special scenarios, you need to test different
configurations to determine the optimal choice.
Value range: a number, in the range [0.001, 0.999]
Default value: 0.5
VARIANT_MEMORY_AREA_SIZE
Parameter description: Specifies the size of the virtual memory area (VMA) that is used to
store variables (such as bind parameters) less than 16 KB during execution. The size of a page
in the area is 16 KB. The setting takes effect only after a restart.
Setting notes: For database installation, you are advised to set this parameter to 16 KB x
_VMP_CACHES_EACH_SESSION x SESSIONS x 1.1. For database upgrade, you are
advised to set this parameter to _VARIANT_AREA_SIZE x SESSIONS x 0.8.
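The installation-time sizing rule above can be written out numerically. This is a minimal sketch, assuming the documented defaults (_VMP_CACHES_EACH_SESSION = 8, SESSIONS = 200); the variable names merely mirror the parameters:

```python
# Illustrative calculation of the installation-time sizing rule:
# VARIANT_MEMORY_AREA_SIZE ~= 16 KB x _VMP_CACHES_EACH_SESSION x SESSIONS x 1.1
PAGE_SIZE = 16 * 1024          # each VMA page is 16 KB
VMP_CACHES_EACH_SESSION = 8    # documented default
SESSIONS = 200                 # documented default upper limit

recommended = int(PAGE_SIZE * VMP_CACHES_EACH_SESSION * SESSIONS * 1.1)
print(recommended)             # recommended size in bytes
```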
LARGE_VARIANT_MEMORY_AREA_SIZE
Parameter description: Specifies the size of the large VMA that is used to store variables
(such as bind parameters) less than 256 KB during execution. The size of a page in the area is
256 KB. The setting takes effect only after a restart.
Setting notes: For database installation, retain the default value of this parameter. For database
upgrade, you are advised to set this parameter to _VARIANT_AREA_SIZE x SESSIONS x
0.2.
NOTE
If there are too many bind parameters (for example, more than 1000) and frequent statement executions,
you are advised to increase the value of LARGE_VARIANT_MEMORY_AREA_SIZE.
_VMP_CACHES_EACH_SESSION
Parameter description: Specifies the number of 16 KB VMA pages that can be cached in
each session (256 KB pages are not cached). When less than 10% of the VMA
memory remains free, the pages of some sessions are not cached; try to keep more
than 10% of the memory free. The setting takes effect immediately.
Default value: 8
Setting notes: For database installation, retain the default value of this parameter. For database
upgrade, you can also use the default value. However, if the result of
_VMP_CACHES_EACH_SESSION x SESSIONS x 16 K is much greater than the value of
VARIANT_MEMORY_AREA_SIZE, you are advised to reduce the value of
_VMP_CACHES_EACH_SESSION. A value greater than 4 is recommended for
performance purposes.
NOTE
An improper memory configuration does not affect statement functionality but does affect statement
performance and OS memory stability (there may be continuous application to the OS for memory
resources and frequent memory release operations).
LARGE_POOL_SIZE
Parameter description: Specifies the size of a large pool.
Set this parameter based on the actual memory size. A larger value can accelerate data access.
LOG_BUFFER_SIZE
Parameter description: Specifies the size of a log buffer, which is used for redo logs.
Value range: an integer, in the range [1 MB, 128 MB] (unit: byte)
Set this parameter based on the actual memory size. A larger value can accelerate data access.
Default value: 4M
LOG_BUFFER_COUNT
Parameter description: Specifies the number of log buffers.
Default value: 4
TEMP_BUFFER_SIZE
Parameter description: Specifies the size of a temporary buffer.
Value range: an integer, in the range [32 MB, 21 TB] (unit: byte)
Set this parameter based on the actual memory size. A larger value can accelerate data access.
TEMP_POOL_NUM
Parameter description: Specifies the number of temporary pools (or temporary buffer
partitions).
Each session is mapped to a temporary pool based on its ID during startup. Later, a session
will be allocated VM pages from the temporary pool.
Default value: 1
USE_LARGE_PAGES
Parameter description: Specifies how to manage the database's use of large pages for SGA
memory.
Valid value:
l TRUE
Specifies that the instance can use large pages if large pages are configured on the
system.
l FALSE
Specifies that the instance will not use large pages. This value is not recommended
because it can cause severe performance degradation on the instance.
Default value: TRUE
_MAX_VM_FUNC_STACK_COUNT
Parameter description: Records the stack information about applying for VMs in the range
[vmid = 0, _MAX_VM_FUNC_STACK_COUNT – 1] in a temporary pool. This
parameter is used with the DV_VM_FUNC_STACK view. If this parameter is set to 0, the
stack information will not be recorded. If the initial value of
_MAX_VM_FUNC_STACK_COUNT is 0 upon startup, you have one chance to change it
to a non-zero value during system runtime, and the change takes effect
immediately. Otherwise, the change takes effect only after a restart.
Value range: an integer, in the range [0, 4294967295]
Default value: 0
2.2.5 Sessions
SESSIONS
Parameter description: Specifies the upper limit of concurrent sessions in the system.
Value range: an integer, in the range [64, 8192] by default
Default value: 200
Note:
1. The value of SESSIONS is the upper limit of concurrent sessions. It contains 64 sessions
reserved for the system and sessions available to users.
2. The system reserves the number of sessions set by
SUPER_USER_RESERVED_SESSIONS for user sys. The sum of SESSIONS and
SUPER_USER_RESERVED_SESSIONS cannot exceed 8192.
3. The minimum value of SESSIONS is 64. The system reserves 64 sessions
(AUTONOMOUS_SESSIONS + KNL_AUTONOMOUS_SESSIONS + 32 internal
sessions used for resource reclamation and checkpoints + 16 sessions for the SQL
parallel framework).
– If the sum of AUTONOMOUS_SESSIONS and
KNL_AUTONOMOUS_SESSIONS is less than or equal to 16, the minimum value
of SESSIONS is 64.
– If the sum of AUTONOMOUS_SESSIONS and
KNL_AUTONOMOUS_SESSIONS is greater than 16, the value of SESSIONS
must be greater than or equal to AUTONOMOUS_SESSIONS +
KNL_AUTONOMOUS_SESSIONS + 32 + 16. If the values of SESSIONS,
AUTONOMOUS_SESSIONS, and KNL_AUTONOMOUS_SESSIONS are
modified and do not meet the preceding requirements, the database cannot be
restarted.
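The lower-bound rule for SESSIONS described above can be sketched as a small check. The `min_sessions` helper is illustrative; the database applies the equivalent validation itself:

```python
# Minimal sketch of the SESSIONS lower-bound rule. Function name is
# illustrative, not a product API.
def min_sessions(autonomous_sessions, knl_autonomous_sessions):
    """Return the smallest legal SESSIONS value for the given settings."""
    reserved = autonomous_sessions + knl_autonomous_sessions
    if reserved <= 16:
        return 64                  # 16 autonomous + 32 internal + 16 parallel
    return reserved + 32 + 16      # rule for sums greater than 16

print(min_sessions(8, 8))    # defaults: 8 + 8 = 16, so the floor stays 64
print(min_sessions(20, 10))  # 20 + 10 + 32 + 16 = 78
```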
AUTONOMOUS_SESSIONS
Parameter description: Specifies the maximum number of concurrent sessions in an
autonomous transaction.
Default value: 8
KNL_AUTONOMOUS_SESSIONS
Parameter description: Specifies the maximum number of concurrent sessions in an
autonomous transaction of a storage engine.
Default value: 8
2.2.6 Transactions
COMMIT_MODE
Parameter description: Specifies how logs are written to disks. It is an advanced parameter.
Valid value:
l IMMEDIATE: immediate processing. Transactions are not buffered and will be written
to disks immediately once received. This method reduces transaction throughput.
l BATCH: buffering before batch processing. Redo records of transactions are logged and
will be batch written to disks after reaching a certain number.
COMMIT_WAIT_LOGGING
Parameter description: Specifies whether to wait for relevant redo logs to be written to disks
in a transaction.
Valid value:
l WAIT: Perform a transaction after relevant redo logs are written to disks.
l NOWAIT: Perform a transaction without waiting for relevant redo logs to be written to
disks.
LOCK_WAIT_TIMEOUT
Parameter description: Specifies a transaction waiting threshold. If the waiting time exceeds
the threshold, an error will be reported.
Value range: an integer, in the range [0, 2^32 – 1] (unit: ms)
Default value: 0, which means infinite waiting
DB_ISOLEVEL
Parameter description: Specifies the transaction isolation level to ensure that no dirty data is
read.
Valid value:
l RC: It is short for Read Committed. At this level, data read by an SQL statement is the
data of the same snapshot.
l CC: It is short for Current Committed. At this level, data read by an SQL statement is
the latest committed data at the read time. All read data is no longer of the same
snapshot.
Default value: RC
_SERIALIZED_COMMIT
Parameter description: Specifies whether to commit a transaction in serialization mode.
Valid value: TRUE, FALSE
Default value: FALSE
TC_LEVEL
Parameter description: Specifies a transaction compensation level. If the value is greater
than 0, a pending transaction generated when there is a network fault can be automatically
handled after the fault is fixed.
Value range: an integer, in the range [0, 2^32 – 1]
Default value: 0
2.2.7 Checkpoints
CHECKPOINT_PERIOD
Parameter description: Specifies an interval between every two checkpoints. When an
interval reaches this value, an incremental checkpoint will be triggered.
Value range: an integer, in the range [1, 2^32 – 1] (unit: second)
Default value: 300
CHECKPOINT_PAGES
Parameter description: Specifies the number of redo logs between two checkpoints. When
the number reaches this value, a checkpoint is triggered.
TIMED_STATS
Parameter description: Specifies whether to collect time-related statistics.
Valid value:
l TRUE: The statistics are collected and stored in trace files or displayed in the dynamic
performance view DV_SYS_STATS.
l FALSE: The values of all time-related statistics are set to zero.
Default value: TRUE
STATISTICS_SAMPLE_SIZE
Parameter description: Specifies a default sample size for gathering and analyzing statistics
related to database tables.
Value range: an integer, in the range [32 MB, 4 GB) (unit: byte)
Default value: 128M
STATS_FORCE_SAMPLE
Parameter description: Specifies whether to enable the function of limiting the maximum
sampling size for collecting statistics.
Valid value:
l TRUE: The system checks whether the size of table data to be analyzed exceeds the
default sampling size (specified by STATISTICS_SAMPLE_SIZE) during statistics
collection. If it does, only the specified size of data will be analyzed.
l FALSE: Statistics collection is not limited by the default sampling size, and a user-
defined sampling size can be used.
Default value: FALSE
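The sampling-size decision described for STATS_FORCE_SAMPLE amounts to a cap. A minimal sketch, with illustrative names and sizes in bytes:

```python
# Sketch of the STATS_FORCE_SAMPLE cap. Names are illustrative; sizes in bytes.
def effective_sample_size(table_size, statistics_sample_size, stats_force_sample):
    """Return how much table data the statistics collector analyzes."""
    if stats_force_sample and table_size > statistics_sample_size:
        return statistics_sample_size  # capped at the default sampling size
    return table_size                  # whole table (or user-defined sample)

cap = 128 * 1024 * 1024                                      # default 128M
print(effective_sample_size(512 * 1024 * 1024, cap, True))   # capped
print(effective_sample_size(512 * 1024 * 1024, cap, False))  # uncapped
```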
STATS_LEVEL
Parameter description: Specifies whether to collect statistics about DML operations on the
table level.
Valid value:
l TYPICAL/ALL:
Table monitoring is enabled. Statistics about the DML operations on tables will be
collected and displayed in the system catalog SYS_DML_STATS 15 minutes later.
l BASIC: Table monitoring is disabled.
STATS_COST_LIMIT
Parameter description: Specifies the number of pages for collecting statistics in flow control
mode.
Default value: 0
STATS_COST_DELAY
Parameter description: Specifies the I/O wait time for collecting statistics in flow control
mode.
Default value: 0
_MAX_CONNECT_BY_LEVEL
Parameter description: Specifies the upper limit of CONNECT BY LEVEL. If the value of
CONNECT BY LEVEL in an SQL statement exceeds this limit, an error will be reported.
This parameter restricts the depth of recursion for CONNECT BY. If its value is too large,
memory overflow may occur in the thread stack. Determine its value based on the OS
parameter RLIMIT_STACK and the zengine parameter _THREAD_STACK_SIZE. The
thread stack size depends on RLIMIT_STACK. If RLIMIT_STACK is not configured, the
size depends on _THREAD_STACK_SIZE.
DBWR_PROCESSES
Parameter description: Specifies the number of background threads for writing dirty pages.
A larger value of this parameter helps improve concurrency performance but causes more
resources to be consumed.
Value range: an integer, in the range [1, 36]
Default value: 1
LOG_REPLAY_PROCESSES
Parameter description: Specifies the number of redo log replay threads.
A larger value of this parameter helps improve concurrency performance but causes more
resources to be consumed.
Value range: an integer, in the range [1, 8]
Default value: 1
2.2.11 HA
REPL_ADDR
Parameter description: Specifies the listening IP address of the server for primary/standby
communication. If this parameter is specified, the IP address will be used for primary/standby
replication. Otherwise, LSNR_ADDR will be used.
Value range: a valid IPv4 or IPv6 address
Default value: none
REPL_PORT
Parameter description: Specifies a port on a standby node for primary-standby
communication.
Value range: 0, or in the range [1024, 65535]
Default value: 0, indicating that the port is not listened on and no lsnr thread is started
for it. To listen on the port, select a value from the range [1024, 65535].
REPL_WAIT_TIMEOUT
Parameter description: If no message is exchanged between primary and standby nodes
within the period specified by REPL_WAIT_TIMEOUT, the primary-standby link will be
considered abnormal, and the primary and standby nodes will proactively disconnect
from each other.
Value range: an integer, in the range [3, 2^32 – 1] (unit: second)
Default value: 10
REPL_TRUST_HOST
Parameter description: Specifies the whitelist of primary and standby nodes for connection.
If the IP address bound to the connection initiator is included in REPL_TRUST_HOST, the
peer end will receive the connection request.
DB_FILE_NAME_CONVERT
Parameter description: Specifies mapping relationships between data file paths on primary
and standby nodes.
After this parameter is set, a standby node will modify its data file paths based on the
mapping relationships. This parameter does not take effect on the primary node, except when
it is demoted to standby.
Value range: Data file paths of primary and standby nodes. The format is as follows (with
one primary node and two standby nodes as an example):
Primary node A: data file path on standby node B, data file path on primary node A, data file
path on standby node C, data file path on primary node A ... (A maximum of 10 mapping
relationships are supported.)
Standby node B: data file path on primary node A, data file path on standby node B, data file
path on standby node C, data file path on standby node B ... (A maximum of 10 mapping
relationships are supported.)
Standby node C: data file path on primary node A, data file path on standby node C, data file
path on standby node B, data file path on standby node C ... (A maximum of 10 mapping
relationships are supported.)
You need to configure all the relationships between the local and peer nodes. However, only
those between the primary and standby nodes take effect. Each mapping relationship has the
peer path coming first and then the local path. Multiple data file paths can be configured, and
they all take effect.
Default value: N/A, indicating that the data file paths on a standby node are consistent with
those on the primary node
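The "peer path first, local path second" pairing described above can be sketched as a path-translation step on a standby node. This is a hypothetical illustration: the function, the pair list, and the paths below are invented, and the real parameter value is a flat comma-separated string:

```python
# Hypothetical sketch of applying DB_FILE_NAME_CONVERT-style mapping pairs on
# a standby node. Each pair is (peer path, local path), matching the
# "peer path coming first and then the local path" rule above.
def convert_path(path, mappings):
    """Rewrite a peer data file path using the first matching mapping pair."""
    for peer_prefix, local_prefix in mappings:
        if path.startswith(peer_prefix):
            return local_prefix + path[len(peer_prefix):]
    return path  # no mapping matched: keep the path as received from the peer

# On standby node B: map primary node A's path prefix to B's local prefix.
mappings = [("/data/primaryA", "/data/standbyB")]
print(convert_path("/data/primaryA/user01.dat", mappings))
```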
LOG_FILE_NAME_CONVERT
Parameter description: Specifies mapping relationships between redo log file paths on
primary and standby nodes.
After this parameter is set, a standby node will modify its log file paths based on the mapping
relationships. This parameter does not take effect on the primary node, except when it is
demoted to standby.
Value range: Log file paths of primary and standby nodes. The format is as follows (with one
primary node and two standby nodes as an example):
Primary node A: log file path on standby node B, log file path on primary node A, log file
path on standby node C, log file path on primary node A ... (A maximum of 10 mapping
relationships are supported.)
Standby node B: log file path on primary node A, log file path on standby node B, log file
path on standby node C, log file path on standby node B ... (A maximum of 10 mapping
relationships are supported.)
Standby node C: log file path on primary node A, log file path on standby node C, log file
path on standby node B, log file path on standby node C ... (A maximum of 10 mapping
relationships are supported.)
You need to configure all the relationships between the local and peer nodes. However, only
those between the primary and standby nodes take effect. Each mapping relationship has the
peer path coming first and then the local path. Multiple log file paths can be configured, and
they all take effect.
Default value: N/A, indicating that the redo log file paths on a standby node are consistent
with those on the primary node
_LNS_WAIT_TIME
Parameter description: Specifies the interval at which an lns thread needs to wait before
sending logs. Value 0 indicates no wait.
Default value: 3
BLOCK_REPAIR_ENABLE
Parameter description: Specifies whether to enable the function of automatically repairing
fault data pages by using a standby node.
Valid value:
l TRUE: When a disk page of a primary node is damaged, the system automatically
obtains the correct disk page from a standby node and repairs the damaged disk page.
l FALSE: The function is disabled on a primary node.
BLOCK_REPAIR_TIMEOUT
Parameter description: Specifies the timeout period for the system to obtain a correct page
from a standby node when the automatic data page repair function is enabled on the primary
node.
Default value: 60
ENABLE_RAFT
Parameter description: Specifies whether to enable the GS-Paxos replication function. In a
standalone database, the Raft function is unavailable; if ENABLE_RAFT is set to TRUE in
that case, restarting the process will cause an error.
Valid value:
l TRUE: Enable.
l FALSE: Do not enable.
RAFT_START_MODE
Parameter description: Specifies a GS-Paxos startup mode.
Valid value:
l 0: It means a normal mode. GS-Paxos or Kudu metadata must have been initialized.
l 1: GS-Paxos or Kudu metadata is initialized.
l 2: An existing GS-Paxos cluster is joined.
l 3: It means a forcible startup of a single node. GS-Paxos or Kudu metadata will be re-
initialized.
Default value: 0
RAFT_NODE_ID
Parameter description: Specifies the ID of a local node, used for a GS-Paxos cluster to
identify each member node.
RAFT_PEER_IDS
Parameter description: Specifies a string consisting of IDs of all nodes in a cluster.
Active nodes are separated by commas (,). If there are passive nodes, they are separated from
active nodes by semicolons (;) and from other passive nodes by commas (,).
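The separator convention above (commas within a group, a semicolon between the active and passive groups) can be sketched with a small parser. The `parse_peer_ids` helper is illustrative only:

```python
# Illustrative parse of the RAFT_PEER_IDS separator convention.
def parse_peer_ids(value):
    """Split a RAFT_PEER_IDS string into (active_ids, passive_ids)."""
    active_part, _, passive_part = value.partition(";")
    active = [x for x in active_part.split(",") if x]
    passive = [x for x in passive_part.split(",") if x]
    return active, passive

print(parse_peer_ids("1,2,3"))    # three active nodes, no passive nodes
print(parse_peer_ids("1,2;3,4"))  # two active nodes, two passive nodes
```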
RAFT_LOCAL_ADDR
Parameter description: Specifies IP:Port of a local node.
Value range: an IP address. The port is the one used for each member in a GS-Paxos cluster
to communicate.
RAFT_PEER_ADDRS
Parameter description: Specifies a string consisting of addresses of all nodes in a cluster,
separated by commas (,). These addresses must be in one-to-one mapping with the IDs in
RAFT_PEER_IDS.
Value range: a string
Default value: N/A
RAFT_LOG_LEVEL
Parameter description: Specifies the level for printing GS-Paxos logs.
Value range: [0, 6]
l 0: No logs are printed.
l 1: Logs of the debug level are printed.
l 2: Logs of the info level are printed.
l 3: Logs of the warning level are printed.
l 4: Logs of the error level are printed.
l 5: Logs of the fatal level are printed.
l 6: Logs of the panic level are printed.
Default value: 2
RAFT_KUDU_DIR
Parameter description: Specifies the storage directory of Kudu, storing GS-Paxos or Kudu
metadata.
Value range: a string
Default value: N/A
RAFT_PRIORITY_TYPE
Parameter description: Specifies the GS-Paxos self-quorum type. The options are as
follows:
Valid value:
l External: external quorum mode
l Random: RANDOM quorum mode
l Static: static quorum mode based on priority
l AZFirst: dynamic quorum mode based on AZ priority
Default value: External
RAFT_PRIORITY_LEVEL
Parameter description: Specifies the priority of selecting a primary node in a self-quorum.
This parameter is valid only if RAFT_PRIORITY_TYPE=Static is set.
Value range: a string from '0' to '16'. The value can only be an integer enclosed in single
quotation marks (' ').
The value 0 indicates that no selection is performed. Among other values, the smaller the
value is, the higher the priority is. You are advised to set this parameter to a value less than or
equal to 3.
Default value: '0'
RAFT_LAYOUT_INFO
Parameter description: Path of the cluster topology information file
This parameter is valid only if RAFT_PRIORITY_TYPE=AZFirst is set.
Value range: a string
Default value: N/A
RAFT_PENDING_CMDS_BUFFER_SIZE
Parameter description: Specifies the length of a write or callback queue in Paxos.
A larger value provides higher tolerance of performance or network jitter but increases
memory usage. Do not modify this parameter unless absolutely necessary.
Value range: a string from '1' to '2^32–1'. The value can only be an integer enclosed in
single quotation marks (' '). The unit is byte.
Default value: '1000'
RAFT_SEND_BUFFER_SIZE
Parameter description: Specifies the length of the message sending queue in Paxos.
A larger value provides higher tolerance of performance or network jitter but increases
memory usage. Do not modify this parameter unless absolutely necessary.
Value range: a string from '1' to '10000'. The value can only be an integer enclosed in single
quotation marks (' '). The unit is byte.
Default value: '100'
RAFT_RECEIVE_BUFFER_SIZE
Parameter description: Specifies the length of the message receiving queue in Paxos.
A larger value provides higher tolerance of performance or network jitter but increases
memory usage. Do not modify this parameter unless absolutely necessary.
Value range: a string from '1' to '10000'. The value can only be an integer enclosed in single
quotation marks (' '). The unit is byte.
Default value: '100'
RAFT_RAFT_ENTRY_CACHE_MEMORY_SIZE
Parameter description: Specifies the size of the log cache in Paxos.
A larger value provides higher tolerance of performance or network jitter but increases
memory usage. Do not modify this parameter unless absolutely necessary.
Value range: a string from '1' to '2^32–1'. The value can only be an integer enclosed in
single quotation marks (' '). The unit is byte.
Default value: '2147483648'
RAFT_MAX_SIZE_PER_MSG
Parameter description: Specifies the maximum size of a message in Paxos.
Value range: a string from '67108864' to '2^32–1'. The value can only be an integer
enclosed in single quotation marks (' '). The unit is byte.
Default value: '134217728'
RAFT_LOG_ASYNC_BUF_NUM
Parameter description: Specifies the number of asynchronous buffers in the Raft protocol.
Value range: an integer, in the range [1, 128]
Default value: 16
LOCAL_TEMPORARY_TABLE_ENABLED
Parameter description: Specifies whether local temporary tables can be created.
Valid value:
l TRUE: Local temporary tables can be created.
l FALSE: Local temporary tables cannot be created.
Default value: FALSE
Valid value:
A keyword will be mapped to different data types depending on whether
USE_NATIVE_DATATYPE is set to TRUE or FALSE. For details, see Table 2-3.
l TRUE: Ambiguous keywords are mapped to the primitive data types of C-like languages.
This mode is compatible with MySQL and PostgreSQL data types.
l FALSE: All keywords of numeric data types are mapped to NUMBER. This mode is
compatible with Oracle data types.
In principle, some keywords are not affected by the USE_NATIVE_DATATYPE
parameter, including BINARY_BIGINT, BINARY_INTEGER, and
BINARY_DOUBLE.
TYPE_MAP_FILE
Parameter description: Specifies the directory of type mapping files. This parameter is read-
only.
This parameter is valid only when USE_NATIVE_DATATYPE is set to TRUE.
CBO (Cost-Based Optimizer)
Parameter description: Specifies the switch for the cost-based optimizer (CBO). Value ON
means the switch is turned on, and value OFF means it is turned off.
If CBO is set to ON and table statistics are available, SQL execution plans will be generated
based on CBO rules. Otherwise, SQL execution plans will be generated based on RBO
rules.
Toggling the CBO switch invalidates all execution plans buffered in the SQL pool.
Valid value:
ON: The switch is enabled.
OFF: The switch is disabled.
Default value: OFF
JOB_THREADS
Parameter description: Specifies the maximum number of jobs that can be concurrently
executed.
When the number of concurrent jobs reaches the maximum, the system waits for ongoing jobs
to be completed even if there is a job reaching the next execution time. That is, the system
executes jobs only when the number of concurrent jobs is less than the maximum.
Value range: [0, 200]
Default value: 100
UNDO_RETENTION_TIME
Parameter description: Specifies the undo retention period. If this parameter is set to an
overly small value, the error "snapshot too old" will be reported.
Value range: an integer, in the range (0, 2^32 – 1] (unit: second)
Default value: 100
FILE_OPTIONS
Parameter description: Specifies whether to enable the Direct I/O or asynchronous I/O
feature for file systems on supported platforms.
Valid value:
ALARM_LOG_DIR
Parameter description: Specifies the directory of alarm logs.
INSTANCE_NAME
Parameter description: Specifies the name of an instance.
RECYCLEBIN
Parameter description: Specifies whether to enable the recycle bin function in real time.
BUF_POOL_NUM
Parameter description: Specifies the number of partitions for the data buffer.
Default value: 1
SQL_COMPAT
Parameter description: Specifies a supported database type.
CR_MODE
Parameter description: Specifies an MVCC mechanism for tables or indexes.
Valid value:
DROP_NOLOGGING
Parameter description: Specifies whether to delete the definition of a NOLOGGING table
when a database is restarted, a primary node is demoted to standby, or a standby node is
promoted to primary.
Valid value:
TRUE: All NOLOGGING tables (including table definitions and table data) are deleted.
When a standby node accesses a NOLOGGING table, an error indicating that the table does
not exist is reported.
FALSE: NOLOGGING table data is cleared, but table definitions are retained.
Default value: FALSE
_AUTO_INDEX_RECYCLE
Parameter description: Specifies whether to create a background thread to reclaim empty
pages of indexes.
Valid value:
ON: The thread is created.
OFF: The thread is not created.
Default value: ON
_RCY_CHECK_PCN
Parameter description: Specifies whether to enable the PCN verification function during log
replay. The function checks whether the PCN values between data pages and redo logs are
consistent.
Valid value: TRUE, FALSE
Default value: TRUE
TABLESPACE_USAGE_ALARM_THRESHOLD
Parameter description: Specifies the alarm threshold of the tablespace usage.
Value range: an integer, in the range [0,100] (unit: percentage)
Default value: 80
The time zone is used as the time reference of the database. Any time point in another time
zone is automatically converted to a point in the local time zone, so that global time is
unified. In a client query, the recorded time is converted to the time zone where the client
is located when the query result is returned.
You are advised not to modify this parameter after it is set during system initialization.
Otherwise, data storage of the TIMESTAMP WITH LOCAL TIME ZONE type will be
affected.
2.3 Sessions
2.3.2 Cursors
COMMIT_ON_DISCONNECT
Parameter description: Specifies whether autocommit is enabled when there is a
disconnection.
_PREFETCH_ROWS
Parameter description: Specifies the number of prefetch rows.
Value range: an integer, in the range [1, 2^32 – 1]
Default value: 100
OPEN_CURSORS
Parameter description: Specifies the maximum number of cursors that can be opened in a
session at a time. This parameter can be used to prevent sessions from opening too many
cursors.
Value range: an integer, in the range [1, 16384]
Default value: 2000
_SQL_MAP_BUCKETS
Parameter description: When the SQL mapping function is enabled, SQL mapping
relationships are recorded in the hash structure on database buffers. This parameter is used to
adjust the number of hash buckets.
Value range: an integer, in the range [1, 1000000]
Default value: 1000
2.4 Tools
ZSQL_INTERACTION_TIMEOUT
Parameter description: Specifies the timeout period during which zsql waits for user input
after notifying the users of SSL security interaction information.
Value range: a positive integer
Default value: 5
_ENABLE_QOS
Parameter description: Specifies whether to turn on the flow control switch. When there are
high requirements for concurrency performance, turning on the flow control switch optimizes
the performance of databases.
Valid value:
_QOS_CTRL_FACTOR
Parameter description: Specifies the maximum number of concurrent threads for a single
CPU. If the number of concurrent threads reaches this parameter value, additional threads will
enter the sleep state, queuing up to get activated.
_QOS_SLEEP_TIME
Parameter description: Specifies the fixed time coefficient for threads to sleep. If the
number of concurrent threads reaches the maximum, additional threads will enter the sleep
state. The sleeping time is calculated as follows: _QOS_SLEEP_TIME x 1 ms +
_QOS_SLEEP_TIME x (TRANSACTION_ID % _QOS_RANDOM_RANGE) x 1 μs.
This parameter is valid only when the flow control switch is turned on.
Default value: 20
_QOS_RANDOM_RANGE
Parameter description: Specifies the random time coefficient for threads to sleep. If the
number of concurrent threads reaches the maximum, additional threads will enter the sleep
state. The sleeping time is calculated as follows: _QOS_SLEEP_TIME x 1 ms +
_QOS_SLEEP_TIME x (TRANSACTION_ID % _QOS_RANDOM_RANGE) x 1 μs.
This parameter is valid only when the flow control switch is turned on.
Default value: 64
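A worked example of the sleep-time formula above, using the default parameter values; the transaction ID of 1000 is purely illustrative, and the query assumes Oracle-style MOD and the SYS_DUMMY table described in section 3.1.11:

```sql
-- With _QOS_SLEEP_TIME = 20 (default) and _QOS_RANDOM_RANGE = 64 (default),
-- a queued thread in transaction 1000 sleeps for:
--   20 x 1 ms + 20 x (1000 % 64) x 1 us
-- = 20 ms + 20 x 40 us
-- = 20.8 ms
-- The modulo term staggers wakeups of concurrent transactions over time.
SELECT 20 + 20 * MOD(1000, 64) / 1000 AS sleep_ms FROM SYS_DUMMY;
```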
_HINT_FORCE
Parameter description: Specifies whether to enable optimizer hints.
Value range: an integer
0: All optimizer hints are disabled.
1: The ordered hint is enabled.
2: The nested loop hint is enabled.
4: The merge hint is enabled.
8: The hash hint is enabled.
If you need to enable multiple hints at the same time, set this parameter to a sum of them. For
example, if both the ordered and hash hints need to be enabled, set this parameter to 9.
Default value: 0
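For instance, to enable the ordered (1) and hash (8) hints together, set the parameter to their sum, 9. A minimal sketch, assuming _HINT_FORCE can be changed with ALTER SYSTEM; otherwise set the same value in the configuration file:

```sql
-- 1 (ordered) + 8 (hash) = 9: both hints enabled;
-- nested loop (2) and merge (4) remain disabled.
ALTER SYSTEM SET _HINT_FORCE = 9;
```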
STRING_AS_HEX_FOR_BINARY
Parameter description: Specifies whether to process the BINARY type as the RAW type.
Valid value:
l TRUE: The BINARY type is processed as the RAW type.
l FALSE: The BINARY type is processed as it is.
Default value: FALSE
_AGENT_STACK_SIZE
Parameter description: Specifies the size of a thread data stack, which is used to buffer
messages. The maximum size of a message that can be buffered is half of this parameter
value.
Value range: a positive integer, in the range [512 KB, 4 GB) (unit: byte)
Default value: 1MB
_VARIANT_AREA_SIZE
Parameter description: This parameter is deprecated. It is retained only for compatibility:
if it is configured, no error is reported when the service starts.
Value range: an integer, in the range [256 KB, 64 MB] (unit: byte)
Default value: 256K
OPTIMIZED_WORKER_THREADS
Parameter description: Specifies the optimal number of worker threads.
l When the number of sessions exceeds the value of this parameter, the session and thread
separation mode will be enabled. Otherwise, the binding mode will be used.
l It is recommended that this parameter value be less than or equal to the value of
SESSIONS. Otherwise, additional thread resources will be wasted.
l If this parameter is set to an overly large value, more CPU and thread resources will be
occupied. When resources are insufficient, database exceptions may occur.
l Each thread occupies over 0.5 MB memory.
Value range: a positive integer, in the range [2, 30000]
Default value: 100
_INDEX_BUFFER_SIZE
Parameter description: Specifies the size of an index buffer on a single session. Increasing
this size helps reduce the number of times that indexes are read from a disk.
Number of pages in an index buffer = _INDEX_BUFFER_SIZE/PAGE_SIZE
Value range: a positive integer, in the range [16 KB, 32 TB] (unit: byte)
Default value: 8M
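A sketch of the page-count formula above, assuming a PAGE_SIZE of 8 KB (verify the actual PAGE_SIZE on your instance) and assuming the parameter accepts ALTER SYSTEM; otherwise set it in zengine.ini:

```sql
-- Default: 8 MB / 8 KB = 1024 index pages per session buffer.
-- Doubling the buffer doubles the page count and can reduce disk reads
-- for index-heavy sessions:
ALTER SYSTEM SET _INDEX_BUFFER_SIZE = 16777216;  -- 16 MB -> 2048 pages at 8 KB per page
```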
_PRIVATE_KEY_LOCKS
Parameter description: Specifies the maximum number of key locks that can be held by
each session.
When a transaction ends, the session can hold a maximum of key locks no greater than
_PRIVATE_KEY_LOCKS, and needs to release remaining locks to the global lock area for
later reuse.
Value range: an integer, in the range [8, 128]
Default value: 8
_PRIVATE_ROW_LOCKS
Parameter description: Specifies the maximum number of row locks that can be held by
each session.
When a transaction ends, the session can hold a maximum of row locks no greater than
_PRIVATE_ROW_LOCKS, and needs to release remaining locks to the global lock area for
later reuse.
Value range: an integer, in the range [8, 128]
Default value: 8
_DOUBLEWRITE
Parameter description: Specifies whether to turn on the doublewrite switch. After this
switch is turned on, reliability will be improved.
Valid value:
TRUE: The doublewrite switch is turned on.
FALSE: The doublewrite switch is turned off.
Default value: TRUE
_INIT_CURSORS
Parameter description: Specifies the number of initial cursors in a session.
A cursor opens a table. When the number of tables opened at the same time exceeds this
initial number, cursors will be dynamically allocated to the current session. The dynamic
allocation may reduce SQL performance.
Value range: an integer, in the range [0, 256]
Default value: 32
MERGE_SORT_BATCH_SIZE
Parameter description: Specifies the number of records involved in a sort operation in the
merge join algorithm.
LONGSQL_TIMEOUT
Parameter description: Specifies the time threshold for slow queries. This parameter is valid
only when slow query logging is enabled. If the execution time of a DML statement exceeds
the value of this parameter, the statement will be recorded in a slow query log.
Value range: an integer, in the range [0, 2^32 – 1] (unit: second)
Default value: 10
ENABLE_ERR_SUPERPOSED
Parameter description: Specifies whether to turn on the SQL error superposition switch.
After the switch is turned on, error information will be superposed when being output. After
the switch is turned off, only bottom-layer error information will be output.
Valid value:
TRUE: The SQL error superposition switch is turned on.
FALSE: The SQL error superposition switch is turned off.
Default value: FALSE
EMPTY_STRING_AS_NULL
Parameter description: Specifies whether to consider an empty string null. This parameter is
read-only.
Valid value:
TRUE: An empty string is considered null.
FALSE: An empty string is not considered null, and is processed as an ordinary empty string.
Default value: TRUE
ZERO_DIVISOR_ACCEPTED
Parameter description: Specifies whether to turn on the switch of allowing for a zero
divisor. After the switch is turned on, calculation results will be NULL when a zero divisor is
used. After the switch is turned off, an exception will be thrown when a zero divisor is used.
Valid value:
TRUE: A divisor can be 0.
FALSE: No divisor can be 0.
Default value: FALSE
INTERACTIVE_TIMEOUT
Parameter description: Specifies a session timeout period. If a session has no operation
within the timeout duration, it will be closed.
Value range: a positive integer, in the range [1, 2^32 – 1] (unit: second)
UPPER_CASE_TABLE_NAMES
Parameter description: Specifies whether to convert letters in object names to uppercase.
This parameter is read-only.
Object names include table, column, view, stored procedure, customized function, trigger,
tablespace, index, and constraint names.
Valid value:
TRUE: The letters are converted to uppercase. In this case, SQL statements are case-
insensitive.
FALSE: The letters are not converted to uppercase. In this case, SQL statements are case-
sensitive.
MAX_CONNECTION_POOL_SIZE
Parameter description: Specifies the maximum number of connections in a connection pool
for z-sharding. This parameter is valid only on z-sharding nodes.
MIN_CONNECTION_POOL_SIZE
Parameter description: Specifies the minimum number of connections in a connection pool
for z-sharding. This parameter is valid only on z-sharding nodes.
Default value: 10
SUPER_USER_RESERVED_SESSIONS
Parameter description: Specifies the number of sessions reserved for user SYS.
Default value: 5
RESOURCE_LIMIT
Parameter description: Specifies whether to turn on the resource restriction switch.
Valid value:
TRUE: The resource restriction switch is turned on. In this case, the maximum number of
sessions each user can connect to is limited by the SESSIONS_PER_USER parameter and
also the SESSIONS parameter, which specifies the total number of sessions.
FALSE: The resource restriction switch is turned off. In this case, the maximum number of
sessions each user can connect to is limited only by the total number of sessions.
Default value: FALSE
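A minimal sketch of enabling per-user session limits; the SESSIONS_PER_USER assignment is assumed from the description above, not verified product syntax:

```sql
-- Turn on resource restriction, then cap each user at 50 concurrent sessions
-- (still bounded overall by the SESSIONS parameter).
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
ALTER SYSTEM SET SESSIONS_PER_USER = 50;
```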
_SQL_CURSORS_EACH_SESSION
Parameter description: Specifies the number of initial SQL cursors in a session.
An SQL cursor opens a table, either a physical table or a materialized table. After the number
of opened tables exceeds this initial number, the system will allocate cursors from the global
SQL cursor pool until resources there are used up. Then, the system dynamically allocates
cursors, which affects SQL performance.
Value range: an integer, in the range [0, 300]
Default value: 8
_RESERVED_SQL_CURSORS
Parameter description: Specifies the number of reserved global SQL cursors.
When a database is started, a global SQL cursor pool is initialized. When a session is created,
it is allocated a cursor from the global pool. When the number of sessions is large and the
initial global SQL cursors are insufficient, the system automatically expands the global SQL
cursor pool. The number of global SQL cursors is increased by 80 each time until the number
reaches _SQL_CURSORS_EACH_SESSION x SESSIONS +
_RESERVED_SQL_CURSORS.
This parameter can be set dynamically; however, the value can only be increased, not
reduced. To reduce the value, you must modify the configuration file and restart the
database.
Value range: an integer, in the range [0, 1000]
Default value: 80
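The growth cap can be worked through with the default cursor parameters; the SESSIONS value of 100 is illustrative, not the product default:

```sql
-- Cap = _SQL_CURSORS_EACH_SESSION x SESSIONS + _RESERVED_SQL_CURSORS
--     = 8 x 100 + 80 = 880 global SQL cursors,
-- reached in increments of 80 as sessions exhaust the initial pool.
-- Increasing the reserve dynamically is allowed (decreasing requires a restart):
ALTER SYSTEM SET _RESERVED_SQL_CURSORS = 160;
```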
MAX_ALLOWED_PACKET
Parameter description: Specifies the maximum packet size allowed for communication.
Value range: [96 KB, 64 MB] (unit: byte)
Default value: 64M
MAX_REMOTE_PARAMS
Parameter description: Specifies the maximum number of bind parameters. In distributed
scenarios, IN subqueries cannot be pushed down to a DN for execution, and therefore need to
be rewritten to IN bind parameters for pushdown purposes.
Value range: [0, 32768]
Default value: 300
_SYS_PASSWORD
Parameter description: Specifies a preset password of user SYS for a standalone database.
The password is stored in install.py after undergoing an irreversible encryption. You are not
advised to change the password online. After the value of this parameter is changed, the
password will only be used as a temporary password of user SYS when the database is in
NOMOUNT mode.
Setting method:
When running install.py to install a database, use -C to specify the value of
_SYS_PASSWORD. The format is as follows: -C _SYS_PASSWORD=new_password
Value range: a string
Default value:
thuMmQYA0AcykVYBVtuBJRxecJ3in8XVsZb2sHORAgOqnCrPOTvm7VYtv3RoPb
WMKRduMKrHZ3dlCVih0o0at1KvH7t8VZHLGpa+n1kJlTP6iLrYGRNBXA==
AUDIT_LEVEL
Parameter description: Specifies an audit level.
Table 2-8 lists the audit objects and the corresponding open flags. To audit multiple objects,
set AUDIT_LEVEL to a sum. For example, if DDL, DCL, and DML operations need to be
audited at the same time, set AUDIT_LEVEL to 7.
l DDL: 1
l DCL: 2
l DML: 4
l PL: 8
l ALL: 255
For functional syntax, set AUDIT_LEVEL based on its types to record required audit logs.
l For EXP, IMP, LOAD, and DUMP syntax that consists of multiple types of SQL
statements, the value of AUDIT_LEVEL must cover the corresponding SQL types.
l For other syntax, the value of AUDIT_LEVEL must cover the DCL type.
Default value: 3
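Following the sums in Table 2-8, a sketch (assuming AUDIT_LEVEL can be changed with ALTER SYSTEM):

```sql
-- DDL(1) + DCL(2) + DML(4) = 7: audits DDL, DCL, and DML together.
-- Adding PL(8) gives 15; ALL(255) audits every supported type.
ALTER SYSTEM SET AUDIT_LEVEL = 15;
```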
_AUDIT_MAX_FILE_SIZE
Parameter description: Specifies the size of a single audit log file.
If the size of an audit log file reaches this parameter value, the system will back up the log
file. If the log file name is zengine.aud, the backup log file name will be
zengine_yyyymmddhhmissfff.aud.
Note that this parameter does not dynamically take effect on existing audit log files.
_AUDIT_BACKUP_FILE_COUNT
Parameter description: Specifies the maximum number of backup audit log files.
If the number of backup audit log files reaches this parameter value, the earliest backup audit
log file will be first deleted. Only the number, specified by this parameter, of backup audit log
files will be retained.
Default value: 10
ENABLE_SYSDBA_LOGIN
Parameter description: Specifies whether to support password-free login. If this parameter is
not set, password-free login will be supported.
This parameter specifies whether to enable password-free login during installation. You are
not advised to change the value online during database use. If you need to change the value
online, run the python zctl.py -t kill command to stop the database first.
Valid value:
LOCAL_KEY
Parameter description: Specifies a working key, which is used to encrypt SSL private key
passwords and the data for local password-free login, providing confidentiality and integrity
protection for locally stored sensitive data.
Such a working key can be generated by running the zencrypt -g command. The command
actually generates a pair of a working key (specified by LOCAL_KEY) and a root key factor
(specified by _FACTOR_KEY) at the same time. They must be used in pairs.
_FACTOR_KEY
Parameter description: Specifies a root key factor, which is located at the bottom layer of
key management and is used to protect the confidentiality of upper-layer keys (a working
key).
Such a root key factor can be generated by running the zencrypt -g command. The command
actually generates a pair of a working key (specified by LOCAL_KEY) and a root key factor
(specified by _FACTOR_KEY) at the same time. They must be used in pairs.
Value range: a 24-byte value, which is generated by encoding a 128-bit (16-byte) random
number using Base64
TCP_VALID_NODE_CHECKING
Parameter description: Specifies whether to enable the IP address whitelist checking
function. A server can control client access based on a whitelist or blacklist.
Before enabling the IP address whitelist checking function, ensure that at least one of
TCP_INVITED_NODES and TCP_EXCLUDED_NODES has been configured.
Otherwise, the following error will be reported:
GS-00254 : For invited and excluded nodes is both empty, ip whitelist function
can't be enabled
Setting method:
Valid value:
TCP_INVITED_NODES
Parameter description: Specifies an IP address whitelist.
After the IP address whitelist checking function is enabled and an IP address whitelist is
configured, only whitelisted clients can access databases. Such a whitelist allows for IPv4 and
IPv6 addresses, as well as a specified subnet mask length, which indicates a subnet segment.
Multiple addresses or network segments can be separated by commas (,).
When the IP address whitelist detection function is enabled, you are not allowed to change
both the whitelist and blacklist to be empty. Otherwise, the following error will be reported:
GS-00255: Ip whitelist function is enabled, invited and excluded nodes can't set
to both empty
Setting method:
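The whitelist presumably mirrors the TCP_EXCLUDED_NODES setting method (a sketch under that assumption; the addresses are placeholders taken from this document's blacklist example):

```sql
-- Allow one IPv4 host and one IPv6 /64 segment; separate entries with commas.
ALTER SYSTEM SET TCP_INVITED_NODES = '(10.134.175.142/32,20ab::9217:acff:feab:fcd0/64)';
```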
TCP_EXCLUDED_NODES
Parameter description: Specifies an IP address blacklist.
After the IP address whitelist checking function is enabled and an IP address blacklist is
configured, those blacklisted clients cannot access databases. Such a blacklist allows for IPv4
and IPv6 addresses, as well as a specified subnet mask length, which indicates a subnet
segment. Multiple addresses or network segments can be separated by commas (,).
When the IP address whitelist detection function is enabled, you are not allowed to change
both the whitelist and blacklist to be empty. Otherwise, the following error will be reported:
GS-00255: Ip whitelist function is enabled, invited and excluded nodes can't set
to both empty
Setting method:
l Set the TCP_EXCLUDED_NODES parameter in the zengine.ini configuration file and
restart the database for the setting to take effect. The zengine.ini file is stored in
{GSDB_DATA}/cfg/zengine.ini.
TCP_EXCLUDED_NODES = (10.134.175.142/32,20ab::9217:acff:feab:fcd0/64)
l During database instance running, use the ALTER SYSTEM statement to configure an
IP address blacklist.
ALTER SYSTEM SET TCP_EXCLUDED_NODES = '(10.134.175.142/32,20ab::9217:acff:feab:fcd0/64)';
UNAUTH_SESSION_EXPIRE_TIME
Parameter description: Specifies the connection authentication time.
If a connection is not authenticated within the configured authentication time, the server
forcibly breaks the connection and releases the session resources occupied by the connection.
This avoids the exhaustion of connection sessions caused by malicious TCP connections. This
parameter effectively prevents DoS attacks, and any modification of it will take effect
immediately.
Value range: an integer, in the range [0, 2^32 – 1] (unit: second)
Default value: 60
SSL_VERIFY_PEER
Parameter description: Specifies whether to verify a client certificate.
Valid value:
TRUE: The client certificate is verified. When creating an SSL connection, a client must
provide a valid certificate.
FALSE: No client certificate is verified. When creating an SSL connection, a client is not
required to provide a certificate.
Default value: FALSE
SSL_CERT
Parameter description: Specifies the path of a server certificate.
A server certificate is used to indicate the validity of the server identity. A certificate file
contains the public key of a server. The public key will be sent to a peer end to encrypt data.
Value range: a string. You are advised to set this parameter to the absolute path of a device
certificate. Otherwise, loading the certificate may fail.
Default value: N/A, indicating no server certificate
SSL_KEY
Parameter description: Specifies the path of a server's private key file.
A private key is used to decrypt the ciphertext generated by using a public key.
Value range: a string. You are advised to set this parameter to the absolute path of a private
key file. Otherwise, loading the private key may fail.
Default value: N/A, indicating no private key
SSL_CA
Parameter description: Specifies the path of a root certificate on a CA server.
This parameter is valid only when SSL_VERIFY_PEER is enabled.
Value range: a string. You are advised to set this parameter to the absolute path of a root
certificate on a CA server. Otherwise, loading the certificate may fail.
Default value: N/A, indicating no root certificate on a CA server
SSL_CRL
Parameter description: Specifies a certificate revocation list (CRL). If a client certificate is
on the list, it will be regarded as an invalid certificate.
Value range: a string. You are advised to set this parameter to the absolute path of a revoked
certificate. Otherwise, loading the certificate may fail.
Default value: N/A, indicating no CRL
SSL_CIPHER
Parameter description: Specifies an encryption algorithm for SSL communication.
Valid value:
Default value: N/A, indicating that all encryption algorithms supported by GaussDB 100 can
be used on a peer end
SSL_KEY_PASSWORD
Parameter description: Specifies the ciphertext of the password for a server's private key. To
encrypt a private key before storing it, set this parameter to specify the password ciphertext.
Value range: a string
Default value: N/A, indicating that a private key file is not encrypted
SSL_EXPIRE_ALERT_THRESHOLD
Parameter description: Specifies the threshold for alarming SSL certificate expiration. If
SSL has been enabled, there will be a log reminder when the expiration time is less than the
threshold. After a database is started, it periodically checks for SSL certificate expiration
based on SSL_PERIOD_DETECTION.
Value range: an integer, in the range [7, 180] (unit: day)
Default value: 30
SSL_PERIOD_DETECTION
Parameter description: Specifies the period for detecting SSL certificate expiration. If SSL
is enabled, the system periodically checks the remaining validity period of the certificate
after an instance is started. If that period is less than the value of
SSL_EXPIRE_ALERT_THRESHOLD, a log reminder is reported.
Value range: unconfigurable (unit: day)
Default value: 7
HAVE_SSL
Parameter description: Specifies whether the current database instance supports SSL
connection creation. This parameter is read-only.
Valid value:
TRUE: The current database instance supports SSL connection creation.
FALSE: The current database instance does not support SSL connection creation.
Default value: FALSE. When a database is started, the SSL status is detected and the
parameter is automatically updated. Manual configuration is not supported.
_ENCRYPTION_ITERATION
Parameter description: Specifies the number of iterations of an encryption algorithm.
Value range: an integer, in the range [1000, 50000]
Default value: 2000
2.7 Logs
ALARM_LOG_DIR
Parameter description: Specifies the directory of alarm logs.
Value range: a string of no more than 163 characters
Default value: none
_LOG_LEVEL
Parameter description: Specifies a level for logging.
Note that audit logging is not controlled by this parameter; it is controlled by
AUDIT_LEVEL.
Value range: an integer
Table 2-11 lists supported log levels and the corresponding open flags. To record logs of
multiple levels, set _LOG_LEVEL to a sum. For example, if both the RUN ERROR and
DEBUG ERROR logs need to be recorded, set _LOG_LEVEL to 17, that is, 0x00000001 (1)
+ 0x00000010 (16).
RUN logs and DEBUG logs are each classified into three levels:
l INFORMATION: Normal information, such as SQL statements, is printed.
l WARNING: Alarm information is printed, without affecting operations.
l ERROR: Error information, for example, causes of SQL parse errors, is printed.
l RUN INFORMATION: 0x00000004 (4)
l DEBUG WARNING: 0x00000020 (32)
l DEBUG INFORMATION: 0x00000040 (64)
When _LOG_LEVEL is set to 0, run logging, debug logging, and LONGSQL logging are all
disabled.
Default value: 7 (RUN logs are recorded.)
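A sketch of composing a log level from the flags above; the RUN WARNING value (2) is inferred from the default of 7, since RUN ERROR (1) and RUN INFORMATION (4) are given:

```sql
-- RUN ERROR (1) + RUN WARNING (2) + RUN INFORMATION (4) = 7, the default.
-- Adding DEBUG ERROR (16) yields 23; 0 disables run, debug, and LONGSQL logging.
ALTER SYSTEM SET _LOG_LEVEL = 23;
```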
_LOG_MAX_FILE_SIZE
Parameter description: Specifies the size of a single log file. This parameter applies only to
RUN logs, DEBUG logs, OPER logs, ALARM logs, and LONGSQL logs.
If the size of a log file reaches this parameter value, the system will back up the log file. If the
log file name is zengine.rlog, the backup log file name will be
zengine_yyyymmddhhmissfff.rlog.
Note that this parameter does not dynamically take effect on existing log files.
Value range: an integer, in the range [1 MB, 4 GB] (unit: byte)
Default value: 10M
_LOG_BACKUP_FILE_COUNT
Parameter description: Specifies the maximum number of backup log files. This parameter
applies only to RUN logs, DEBUG logs, OPER logs, ALARM logs, and LONGSQL logs.
If the number of backup log files reaches this parameter value, the earliest backup log file will
be first deleted. Only the number, specified by this parameter, of backup log files will be
retained.
Value range: an integer, in the range [0, 128]
Default value: 10
_LOG_FILE_PERMISSIONS
Parameter description: Specifies permissions for log files.
For backup log files, the write permission of the log owner, owner group, and other users is
automatically removed. This parameter does not dynamically take effect on existing log files.
Value range: an integer, in the range [600, 777]
Default value: 600
_LOG_PATH_PERMISSIONS
Parameter description: Specifies permissions for a log directory.
This parameter can be used to set permissions for the directories of RUN, DEBUG, and
AUDIT logs, their upper-layer directories, and the ALARM log directory.
When a log file is backed up or a log file is created, _LOG_PATH_PERMISSIONS
dynamically takes effect to control the directory permissions.
Value range: an integer, in the range [700, 777]
Default value: 700
_BLACKBOX_STACK_DEPTH
Parameter description: Specifies the depth of the stack called for printing logs when the
database program crashes.
Value range: an integer, in the range [2, 40]
Default value: 30
Database: ARCHIVE_CONFIG, ARCHIVE_MAX_THREADS,
ARCHIVE_MIN_SUCCEED_DEST, ARCHIVE_TRACE
Instance: COVERAGE_ENABLE, ENABLE_IDX_CONFS_NAME_DUPL
GaussDB 100 provides static data dictionary tables and views for users to view system
information. The information is updated only when the data dictionary is changed (for
example, when a table is created or new permissions are granted to users). GaussDB 100 also
provides dynamic performance views for database administrators to monitor systems.
3.1.1 SYS_BACKUP_SETS
Records information about backup sets from physical backup files.
3.1.2 SYS_COLUMNS
Records information about the columns of all tables in the system.
2 ID BINARY_INTEGER Column ID
3.1.3 SYS_COMMENTS
Records information about all comments in the system.
3.1.4 SYS_CONSTRAINT_DEFS
Records information about all constraints in the system.
3.1.5 SYS_DATA_NODES
Records information about all database nodes.
3.1.6 EXP_TAB_ORDERS
Records the export sequence of tables. This system catalog is a temporary table used by the
EXP tool.
3.1.7 EXP_TAB_RELATIONS
Records the dependencies of tables. This system catalog is a temporary table used by the EXP
tool.
3.1.8 SYS_DEPENDENCIES
Records information about dependencies between all objects in the system.
Note:
The values of D_TYPE# and P_TYPE# are enumerated integers: 0 indicates a table, 1 a
view, 2 a sequence, 3 a stored procedure, 7 a synonym, 8 a function, 9 a trigger, 10 an index,
11 a LOB, 12 a partitioned table, 13 a partitioned index, and 14 a LOB in a partition.
3.1.9 SYS_DISTRIBUTE_RULES
Records distribution rules. (If GaussDB 100 is deployed in distributed mode, you can query
this system catalog for data. In standalone deployment, this system catalog has no data.)
3.1.10 SYS_DISTRIBUTE_STRATEGIES
Records information about table distribution.
3.1.11 SYS_DUMMY
Records the constants of an expression.
3.1.12 SYS_EXTERNAL_TABLES
Records information about external tables in the system.
3.1.13 SYS_GARBAGE_SEGMENTS
Records information about garbage segments in the system.
3.1.14 SYS_HISTGRAM_ABSTR
Records information about the headers of histograms in the system.
3.1.15 SYS_HISTGRAM
Records information about histograms in the system.
3.1.16 SYS_INDEXES
Records information about all indexes in the system.
2 ID INTEGER Index ID
3.1.17 SYS_INDEX_PARTS
Records information about all partitioned indexes in the system.
3.1.18 SYS_JOBS
Records information about scheduled jobs in the system.
3.1.19 SYS_LINKS
A reserved function view, with no data.
3.1.20 SYS_LOBS
Records information about large objects in the system.
3.1.21 SYS_LOB_PARTS
Records information about large partitioned objects in the system.
3.1.22 SYS_LOGIC_REPL
Records information about tables where the logical replication service is enabled.
3.1.23 SYS_DML_STATS
Records statistics about operations on tables in the system.
3.1.24 SYS_OBJECT_PRIVS
Records the object authorization information about non-SYS users and all roles in the system.
3.1.25 SYS_PART_COLUMNS
Records information about partition columns in the system.
3.1.26 SYS_PART_OBJECTS
Records information about partitioned objects in the system.
3.1.27 SYS_PART_STORES
Records information about partition storage in the system.
3.1.28 SYS_PENDING_DIST_TRANS
Records information about distributed two-phase transactions.
3.1.29 SYS_PENDING_TRANS
Records information about pending, two-phase transactions in the system.
3.1.30 SYS_PROCS
Records information about stored procedures, user-defined functions, and triggers in system
catalogs.
3.1.31 SYS_PROC_ARGS
Records information about the parameters of stored procedures and user-defined functions in
system catalogs. The system catalogs are available only when there are parameters.
3.1.32 SYS_PROFILE
Records information about profiles in the system.
3.1.33 SYS_RECYCLEBIN
Records information about the recycle bin in the system.
1 ID BINARY_BIGINT ID of an object in the recycle bin
3.1.34 SYS_ROLES
Records information about roles in the system.
0 ID BINARY_INTEGER Role ID
3.1.35 SYS_SEQUENCES
Records information about all sequences in the system.
1 ID BINARY_INTEGER Sequence ID
3.1.36 SYS_SHADOW_INDEXES
Records information about shadow indexes in the system.
2 ID BINARY_INTEGER Index ID
3.1.37 SYS_SHADOW_INDEX_PARTS
Records information about shadow partitioned indexes in the system.
3.1.38 SYS_SQL_MAPS
Records all SQL mapping relationships in the system.
3.1.39 SYS_SYNONYMS
Records information about all synonyms in the system.
1 ID BINARY_INTEGER Synonym ID
3.1.40 SYS_PRIVS
Records information about all system permissions granted to users and roles in the system.
3.1.41 SYS_TABLES
Records information about all tables (including all system catalogs and user tables) in the
system.
1 ID BINARY_INTEGER Table ID
3.1.42 SYS_TABLE_PARTS
Records information about partitioned tables in the system.
3.1.43 SYS_TMP_SEG_STATS
Records statistics about temporary segments in the system.
3.1.44 SYS_USERS
Records information about all users in the system.
0 ID BINARY_INTEGER User ID
3.1.45 SYS_USER_HISTORY
Records the historical passwords of users.
3.1.46 SYS_USER_ROLES
Records information about the roles of users.
3.1.47 SYS_VIEWS
Records information about all views in the system, except for dynamic views.
1 ID BINARY_INTEGER View ID
3.1.48 SYS_VIEW_COLS
Records column information about views in the system.
2 ID BINARY_INTEGER Column ID
3.1.49 WSR_PARAMETER
Records parameter values in each snapshot of the database.
3.1.50 WSR_SQLAREA
Records DML execution information in each snapshot of the database.
10 IO_WAIT_TIME BINARY_BIGINT I/O wait time of the SQL statement (unit: μs)
3.1.51 WSR_SYS_STAT
Records statistics in each snapshot of the database.
3.1.52 WSR_SYSTEM
Records statistics in each snapshot of the database.
3.1.53 WSR_SYSTEM_EVENT
Records event information in each snapshot of the database.
3.1.54 WSR_SNAPSHOT
Records snapshot information in the database.
3.1.55 WSR_CONTROL
Records parameters about snapshot generation in the database.
3.1.56 WSR_DBA_SEGMENTS
Records information about ADM_SEGMENTS in snapshots. In standalone scenarios, this
system catalog has data. In distributed scenarios, this system catalog has no data.
3.1.57 WSR_LATCH
Records information about DV_LATCHS in snapshots. In standalone scenarios, this system
catalog has data. In distributed scenarios, this system catalog has no data.
6 WAIT_TIME BINARY_INTEGER Wait time for the latch lock (unit: ms)
3.1.58 WSR_LIBRARYCACHE
Records information about WSR_LIBRARYCACHE in snapshots. In standalone scenarios,
this system catalog has data. In distributed scenarios, this system catalog has no data.
5 PINHITS BINARY_BIGINT Total number of pages used for soft parsing of the SQL statement
3.1.59 WSR_LONGSQL
Records information about DV_LONG_SQL in snapshots. In standalone scenarios, this
system catalog has data. In distributed scenarios, this system catalog has no data.
3.1.60 WSR_SEGMENT
Records snapshot information about DV_SEGMENT_STATS. In standalone scenarios, this
system catalog has data. In distributed scenarios, this system catalog has no data.
3.1.61 WSR_SQL_LIST
A global temporary table that records mappings between SQL_ID and SQL_TEXT, saved
when the WSR report is generated. In standalone scenarios, this system catalog has data. In
distributed scenarios, this system catalog has no data.
3.1.62 WSR_SQL_LIST_PLAN
A global temporary table that records mappings between SQL_ID and SQL execution plans,
saved when the WSR report is generated. In standalone scenarios, this system catalog has
data. In distributed scenarios, this system catalog has no data.
3.1.63 WSR_WAITSTAT
Records information about DV_XACT_LOCKS in snapshots. In standalone scenarios, this
system catalog has data. In distributed scenarios, this system catalog has no data.
ADM_DATA_FILES Displays the information about all data files in the database.
ADM_SEGMENTS Displays information about the storage allocated for all tables,
indexes, and LOBs in the database.
3.2.1 DB_DB_LINKS
A reserved function view, with no data.
3.2.2 DB_IND_STATISTICS
Displays statistics about partition indexes of all users.
3.2.3 DB_JOBS
Displays information about all jobs.
3.2.4 DB_TAB_MODIFICATIONS
Displays table update information.
3.2.5 DB_USERS
Displays information about all users in the database.
3.2.6 DB_USER_SYS_PRIVS
Displays system permissions granted to all users in the database.
3.2.7 ADM_ARGUMENTS
Displays the parameters of stored procedures and UDFs that are available in the database.
3.2.8 ADM_BACKUP_SET
Displays the backup set information.
3.2.9 ADM_COL_COMMENTS
Displays information about comments on the columns of all tables in the database.
3.2.10 ADM_CONSTRAINTS
Displays information about constraints on all tables in the database.
3.2.11 ADM_DATA_FILES
Displays the information about all data files in the database.
11 USER_BYTES BINARY_BIGINT The size of the file available for user data, in bytes
12 USER_BLOCKS BINARY_BIGINT The size of the file available for user data, in blocks
3.2.12 ADM_DBLINK_TABLES
Displays information about all tables in the database.
3.2.13 ADM_DBLINK_TAB_COLUMNS
Displays information about the columns of all tables in the database.
3.2.14 ADM_DEPENDENCIES
Displays information about dependencies between all objects in the system.
3.2.15 ADM_FREE_SPACE
Displays the free partitions in all tablespaces in the database.
When you create a tablespace, querying the ADM_FREE_SPACE view may display free
tablespaces with the same size. If you create a table without inserting records in any of the
tablespaces, querying the ADM_FREE_SPACE view shows that the sizes of the free
tablespaces remain unchanged.
3.2.16 ADM_HISTOGRAMS
Displays information about histograms on all tables in the database.
3.2.17 ADM_HIST_DBASEGMENTS
Displays the DBA segment information in historical WSR snapshots.
3.2.18 ADM_HIST_LATCH
Displays the DV_LATCHS information in historical WSR snapshots.
3.2.19 ADM_HIST_LIBRARYCACHE
Displays the DV_LIBRARY_CACHE information in historical WSR snapshots.
3.2.20 ADM_HIST_LONGSQL
Displays the DV_LONG_SQL information in historical WSR snapshots.
3.2.21 ADM_HIST_PARAMETER
Displays parameter values in each snapshot of the database.
3.2.22 ADM_HIST_SEGMENT
Displays the DV_SEGMENT_STATS information in historical WSR snapshots.
3.2.23 ADM_HIST_SNAPSHOT
Displays snapshot information in the database.
3.2.24 ADM_HIST_SQLAREA
Displays DML execution information in each snapshot of the database.
10 IO_WAIT_TIME BINARY_BIGINT I/O wait time of the SQL statement (unit: μs)
11 CON_WAIT_TIME BINARY_BIGINT Lock wait time of the SQL statement (unit: μs)
3.2.25 ADM_HIST_SYSSTAT
Displays statistics in each snapshot of the database.
3.2.26 ADM_HIST_SYSTEM
Displays OS statistics in each snapshot of the database.
3.2.27 ADM_HIST_SYSTEM_EVENT
Displays event information in each snapshot of the database.
3.2.28 ADM_HIST_WAITSTAT
Displays the DV_WAIT_STATS information in historical WSR snapshots.
3.2.29 ADM_HIST_WR_CONTROL
Displays the parameters of a database snapshot.
3.2.30 ADM_INDEXES
Displays information about indexes on all tables in the database.
3.2.31 ADM_IND_COLUMNS
Displays the indexed columns of all tables in the database.
3.2.32 ADM_IND_PARTITIONS
Displays information about all partitioned indexes in the database.
3.2.33 ADM_IND_STATISTICS
Displays statistics about partition indexes of all users.
3.2.34 ADM_JOBS
Displays information about all jobs.
3.2.35 ADM_JOBS_RUNNING
Displays all running job sessions.
3.2.36 ADM_OBJECTS
Displays information about all objects.
3.2.37 ADM_PART_COL_STATISTICS
Displays statistics about the columns of all partitioned tables in the database.
3.2.38 ADM_PART_KEY_COLUMNS
Displays information about the partition columns for all partitioned tables in the database.
3.2.39 ADM_PART_STORE
Displays information about the tablespace corresponding to the STORE clause used for all
partitioned tables in the database.
3.2.40 ADM_PART_TABLES
Displays information about all partitioned tables in the database.
3.2.41 ADM_PROCEDURES
Displays information about all stored procedures, functions, and triggers in the database.
3.2.42 ADM_PROFILES
Displays all profiles in the database.
3.2.43 ADM_ROLES
Displays information about all roles in the database.
3.2.44 ADM_ROLE_PRIVS
Displays the permissions of all users in the database.
3.2.45 ADM_SEGMENTS
Displays information about the storage allocated for all tables, indexes, and LOBs in the
database.
3.2.46 ADM_SEQUENCES
Displays information about all sequences in the database.
3.2.47 ADM_SOURCE
Displays information about all user-defined objects in the database.
3.2.48 ADM_SYNONYMS
Displays information about all synonyms in the database.
3.2.49 ADM_SYS_PRIVS
Displays system permissions granted to all users in the database.
3.2.50 ADM_TABLES
Displays information about all tables in the database.
3.2.51 ADM_TABLESPACES
Displays information about all tablespaces in the database.
3.2.52 ADM_TAB_COLS
Displays information about the columns of all tables in the database.
3.2.53 ADM_TAB_COLUMNS
Displays information about the columns of all tables and views in the database.
3.2.54 ADM_TAB_COL_STATISTICS
Displays statistics about the columns of all tables in the database.
3.2.55 ADM_TAB_COMMENTS
Displays information about comments on all tables in the database.
3.2.56 ADM_TAB_DISTRIBUTE
Displays the table distribution information of all users. (If GaussDB 100 is deployed in
distributed mode, you can query this view for data. In standalone deployment, this view has
no data.)
3.2.57 ADM_TAB_MODIFICATIONS
Displays modifications to all tables in the database, which include insert, delete, and update
operations.
3.2.58 ADM_TAB_PARTITIONS
Displays information about partitions in all tables in the database.
3.2.59 ADM_TAB_PRIVS
Displays all user object permissions in the database.
3.2.60 ADM_TAB_STATISTICS
Displays statistics about all tables in the database.
3.2.61 ADM_TRIGGERS
Displays information about all triggers in the database.
3.2.62 ADM_USERS
Displays information about all users in the database.
3.2.63 ADM_VIEWS
Displays information about all views in the database.
3.2.64 ADM_VIEW_COLUMNS
Displays information about the columns of all views in the database.
3.3.1 DB_ARGUMENTS
Displays the parameters of stored procedures and UDFs created by the current user and user
SYS.
3.3.2 DB_COL_COMMENTS
Displays the column comments of the current user and user SYS.
3.3.3 DB_CONSTRAINTS
Displays the constraints of the current user and user SYS.
3.3.4 DB_DBLINK_TABLES
Displays table information of the current user and user SYS.
3.3.5 DB_DBLINK_TAB_COLUMNS
Displays table column information of the current user and user SYS.
3.3.6 DB_DEPENDENCIES
Displays information about dependencies between all the objects owned by the current user
and user SYS.
3.3.7 DB_DISTRIBUTE_RULES
Displays information about all distribution rules in the system. (If GaussDB 100 is deployed
in distributed mode, you can query this view for data. In standalone deployment, this view has
no data.)
3.3.8 DB_DIST_RULE_COLS
Displays information about all distribution rule columns in the system. (If GaussDB 100 is
deployed in distributed mode, you can query this view for data. In standalone deployment,
this view has no data.)
3.3.9 DB_HISTOGRAMS
Displays information about histograms on the tables owned by the current user and user SYS.
3.3.10 DB_INDEXES
Displays all indexes of the current user and user SYS.
3.3.11 DB_IND_COLUMNS
Displays indexed columns in tables of the current user and user SYS.
7 ID BINARY_INTEGER Column ID
3.3.12 DB_IND_PARTITIONS
Displays all partition indexes of the current user and user SYS.
3.3.13 DB_OBJECTS
Displays information about all objects of the current user and user SYS.
3.3.14 DB_PART_COL_STATISTICS
Displays column statistics about partitioned tables of the current user and user SYS.
3.3.15 DB_PART_KEY_COLUMNS
Displays the partition column information of the partitioned table of the current user and user
SYS.
3.3.16 DB_PART_STORE
Displays the tablespace information corresponding to the STORE clause in the interval
partitioned tables of the current user and user SYS.
3.3.17 DB_PART_TABLES
Displays the partitioned table information of the current user and user SYS.
3.3.18 DB_PROCEDURES
Displays information about stored procedures, functions, and triggers of the current user.
3.3.19 DB_SEQUENCES
Displays sequences of the current user and user SYS.
3.3.20 DB_SOURCE
Displays information about user-defined objects of the current user.
3.3.21 DB_SYNONYMS
Displays synonyms of the current user, user SYS, and user PUBLIC.
3.3.22 DB_TABLES
Displays table information of the current user and user SYS.
3.3.23 DB_TAB_COLS
Displays table columns of the current user and user SYS.
3.3.24 DB_TAB_COLUMNS
Displays all table columns of the current user and user SYS.
3.3.25 DB_TAB_COL_STATISTICS
Displays column statistics about the tables accessible to the current user and user SYS.
3.3.26 DB_TAB_COMMENTS
Displays all comments of the current user and user SYS.
3.3.27 DB_TAB_DISTRIBUTE
Displays the table distribution information of the current user and user SYS. (If GaussDB 100
is deployed in distributed mode, you can query this view for data. In standalone deployment,
this view has no data.)
3.3.28 DB_TAB_PARTITIONS
Displays table partitions of the current user and user SYS.
3.3.29 DB_TAB_STATISTICS
Displays statistics about the tables owned by the current user and user SYS.
The table statistics are mainly used for optimizing SQL statements. The collection will be
triggered by the analyze table table_name compute statistics command, and all statistics of
a table will be collected.
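For example (the table name is illustrative, and the exact output columns depend on the version):
-- Collect all statistics for a table, then check them through this view:
analyze table my_table compute statistics;
select * from DB_TAB_STATISTICS;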
3.3.30 DB_TRIGGERS
Displays triggers of the current user and user SYS.
3.3.31 DB_VIEWS
Displays view information of the current user and user SYS.
3.3.32 DB_VIEW_COLUMNS
Displays the view columns of the current user and user SYS.
3.3.33 DB_VIEW_DEPENDENCIES
Displays the dependency between views created by the current user.
3.3.34 ROLE_SYS_PRIVS
Displays the roles granted to the current user and the system permissions granted to each role.
3.3.35 MY_ARGUMENTS
Displays the parameters of stored procedures and UDFs created by the current user.
3.3.36 MY_COL_COMMENTS
Displays information about comments on the columns of the tables owned by the current user.
3.3.37 MY_CONSTRAINTS
Displays information about constraints on the tables owned by the current user.
15 INDEX_OWNER VARCHAR(64 BYTE) Owner of the index (only shown for primary and unique key constraints)
16 INDEX_NAME VARCHAR(64 BYTE) Name of the index (only shown for primary and unique key constraints)
3.3.38 MY_CONS_COLUMNS
Displays information about constraints on the columns of tables owned by the current user.
3.3.39 MY_DEPENDENCIES
Displays information about dependencies between all the objects owned by the current user.
3.3.40 MY_FREE_SPACE
Displays information about free extents in the tablespaces accessible to the current user.
3.3.41 MY_HISTOGRAMS
Displays information about histograms on the tables owned by the current user.
3.3.42 MY_INDEXES
Displays information about indexes owned by the current user.
3.3.43 MY_IND_COLUMNS
Displays information about the indexed columns of the current user.
3.3.44 MY_IND_PARTITIONS
Displays the information about the partition indexes of the current user.
3.3.45 MY_IND_STATISTICS
Displays statistics about partitioned indexes owned by the current user.
3.3.46 MY_JOBS
Displays information about all jobs owned by the current user.
3.3.47 MY_OBJECTS
Displays information about all objects owned by the current user.
3.3.48 MY_PART_COL_STATISTICS
Displays statistics about the columns of partitioned tables owned by the current user.
3.3.49 MY_PART_KEY_COLUMNS
Displays information about the partition columns for partitioned tables owned by the current
user.
3.3.50 MY_PART_STORE
Displays information about the tablespaces corresponding to the STORE clause used for
interval partitioned tables owned by the user.
3.3.51 MY_PART_TABLES
Displays information about the partitioned tables owned by the current user.
3.3.52 MY_PROCEDURES
Displays stored procedures owned by the current user. The stored procedures include general
triggers, UDFs, and stored procedure bodies.
3.3.53 MY_ROLE_PRIVS
Displays information about roles granted to the current user.
3.3.54 MY_SEGMENTS
Displays information about the storage allocated for all tables, indexes, and LOBs owned by
the current user.
3.3.55 MY_SEQUENCES
Displays information about the sequences owned by the current user.
3.3.56 MY_SOURCE
Displays information about all user-defined objects of the current user.
3.3.57 MY_SQL_MAPS
Displays all SQL mapping relationships owned by the current user.
3.3.58 MY_SYNONYMS
Displays synonyms of the current user.
3.3.59 MY_SYS_PRIVS
Displays information about system permissions granted to the current user.
3.3.60 MY_TABLES
Displays information about the tables owned by the current user.
3.3.61 MY_TAB_COLS
Displays information about the columns of tables owned by the current user.
3.3.62 MY_TAB_COLUMNS
Displays details about the columns of tables and views owned by the current user.
3.3.63 MY_TAB_COL_STATISTICS
Displays statistics about the columns of tables owned by the current user.
3.3.64 MY_TAB_COMMENTS
Displays information about comments on the tables owned by the current user.
3.3.65 MY_TAB_DISTRIBUTE
Displays the table distribution information of the current user. (If GaussDB 100 is deployed in
distributed mode, you can query this view for data. In standalone deployment, this view has
no data.)
3.3.66 MY_TAB_MODIFICATIONS
Displays table modifications made by the current user.
3.3.67 MY_TAB_PARTITIONS
Displays information about partitions in the tables owned by the current user.
3.3.68 MY_TAB_PRIVS
Displays information about permissions to the tables owned by the current user.
3.3.69 MY_TAB_STATISTICS
Displays the table statistics of the current user.
The collection will be triggered by the analyze table table_name compute statistics
command, and all statistics of a table will be collected.
3.3.70 MY_TRIGGERS
Displays information about the triggers owned by the current user.
3.3.71 MY_USERS
Displays information about the current user.
3.3.72 MY_VIEWS
Displays information about the views owned by the current user.
3.3.73 MY_VIEW_COLUMNS
Displays information about the columns of views owned by the current user.
3.4.1 NLS_SESSION_PARAMETERS
Displays the NLS parameters of a session.
3.4.2 DV_ALL_TRANS
Displays information about all transactions.
3.4.3 DV_ARCHIVED_LOGS
Displays information about archived logs.
3.4.4 DV_ARCHIVE_DEST_STATUS
Displays information about archived log destinations.
3.4.5 DV_ARCHIVE_GAPS
Displays information about archive gaps in a standby database.
3.4.6 DV_ARCHIVE_THREADS
Displays the status of various archive processes.
3.4.7 DV_BACKUP_PROCESSES
Displays the status of various backup processes for the current instance.
3.4.8 DV_BUFFER_POOLS
Displays information about all buffer pools available for the instance.
0 ID BINARY_INTEGER Buffer ID
3.4.9 DV_BUFFER_POOL_STATS
Displays statistics about all buffer pools available for the instance.
3.4.10 DV_CONTROL_FILES
Displays basic information about control files in the current database.
NOTE
Currently, the storage engine of GaussDB 100 does not support the function of returning control file
sizes. Therefore, in the current version, the value of FILE_SIZE_BLKS is always 0.
3.4.11 DV_DATABASE
Displays basic information about the current database.
3.4.12 DV_DATA_FILES
Displays information about data files in the current database.
3.4.13 DV_OBJECT_CACHE
Displays information about cached objects in the current database.
3.4.14 DV_DC_POOLS
Displays information about current DC pools.
3.4.15 DV_DYNAMIC_VIEWS
Displays information about dynamic views.
3.4.16 DV_DYNAMIC_VIEW_COLS
Displays information about the columns of dynamic views.
3.4.17 DV_FREE_SPACE
Displays the free partitions in all tablespaces in the database.
When you create a tablespace, querying the DV_FREE_SPACE view may display free
tablespaces with the same size. If you create a table without inserting records in any of the
tablespaces, querying the DV_FREE_SPACE view shows that the sizes of the free
tablespaces remain unchanged.
3.4.18 DV_HA_SYNC_INFO
Displays information including the primary/standby connection and log sending status on the
primary node; or information including the standby/cascaded standby connection and log
sending status on the standby node.
3.4.19 DV_HBA
Displays the configurations of the user whitelist.
3.4.20 DV_INSTANCE
Displays information about database instances.
3.4.21 DV_RUNNING_JOBS
Displays all running job sessions.
3.4.22 DV_LATCHS
Displays information about current structure locks.
MISSES BINARY_INTEGER Number of times the latch is first requested and the requester has to wait
SPIN_GETS BINARY_INTEGER Number of latch requests that miss on the first try but succeed while spinning
WAIT_TIME BINARY_INTEGER Time spent waiting for the latch lock (unit: ms)
3.4.23 DV_LIBRARY_CACHE
Displays the management information about SQL statements in a shared pool.
4 PINHITS BINARY_BIGINT Total number of pages used for soft parsing of the SQL statement
3.4.24 DV_LOCKS
Displays information about current lock resources.
5 BLOCK BINARY_INTEGER Lock status (1: self lock; 0: being locked) if the lock type is TS or TX; or 1 if the lock type is different
3.4.25 DV_LOCKED_OBJECTS
Displays information about locked objects.
3.4.26 DV_LOG_FILES
Displays information about current log files.
3.4.27 DV_LONG_SQL
Displays logs of long SQL statements. Only the SQL statements whose execution time
exceeds the specified time (LONGSQL_TIMEOUT) can be queried. For details about the
LONGSQL_TIMEOUT parameter, see Session Control Parameters.
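For example (the threshold value is illustrative, and the ALTER SESSION syntax for setting LONGSQL_TIMEOUT is an assumption that may vary by version):
-- Set the long SQL threshold for the current session, run the workload, then query the log:
alter session set LONGSQL_TIMEOUT = 1000000;
select * from DV_LONG_SQL;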
3.4.28 DV_STANDBYS
Displays the status of threads for standby databases.
3.4.29 DV_ME
Displays information about current sessions.
3.4.30 DV_OPEN_CURSORS
Displays information about the status of currently opened cursors.
3.4.31 DV_PARAMETERS
Displays the configuration items of the current database.
3.4.32 DV_PL_MANAGER
Displays information about stored procedures loaded into memory.
3.4.33 DV_PL_REFSQLS
Displays information about the SQL statements associated with PL objects after stored
procedures are loaded into memory.
3.4.34 DV_REACTOR_POOLS
Displays information about connection pools and the corresponding work thread pools.
3.4.35 DV_REPL_STATUS
Displays information about the communication status between primary and standby
databases.
3.4.36 DV_RESOURCE_MAP
Displays information about resource views in the database.
3.4.37 DV_SEGMENT_STATS
Displays usage information about objects, such as heaps and indexes, in the database.
3.4.38 DV_SESSIONS
Displays information about current sessions.
1 SPID VARCHAR(11 BYTE) ID of the thread for the session. For sessions that are not buffered in the session pool, SPID is 0.
4 USERNAME VARCHAR(64 BYTE) Name of the login user when the current session is created
7 CLIENT_PORT VARCHAR(10 BYTE) Port number of the client for the current session
9 SERVER_PORT VARCHAR(10 BYTE) Port number of the server for the current session
44 IO_WAIT_TIME BINARY_BIGINT Total I/O wait time for SQL statements in the current session (unit: μs)
45 CON_WAIT_TIME BINARY_BIGINT Total lock wait time for SQL statements in the current session (unit: μs)
46 CPU_TIME BINARY_BIGINT Total CPU time for SQL statements in the current session (unit: μs)
50 VMP_PAGES BINARY_BIGINT Number of memory pages that VMP has applied for from VMA
3.4.39 DV_SESSION_EVENTS
Displays information about the wait events for current sessions.
3.4.40 DV_SESSION_SHARED_LOCKS
Displays information about all session-level shared advisory locks in use.
3.4.41 DV_SESSION_WAITS
Displays information about the wait events for current sessions.
3.4.42 DV_GMA
Displays information about the memory that is applied for.
3.4.43 DV_GMA_STATS
Displays the statistics items of the SGA memory.
Only users SYS and DBA can query this view. The memory pointer columns are intended for
use by database maintenance personnel.
shared pool sql pool page buf Start address of the pages that can be allocated in the SQL buffer pool
shared pool sql pool page size Size of each page in the SQL buffer pool
shared pool sql pool optimizer page count Number of pages that can be added to the SQL buffer pool
shared pool sql pool free page count Number of free pages in the SQL buffer pool
shared pool sql pool free page first ID of the first free page in the SQL buffer pool
shared pool sql pool free page last ID of the last free page in the SQL buffer pool
shared pool sql pool bucket size Size of the hash bucket in the SQL buffer pool
shared pool sql pool lru count Number of linked lists for the replacement algorithm in the SQL buffer pool
3.4.44 DV_SPINLOCKS
Displays information about the usage of spinlocks on current sessions.
3.4.45 DV_SQLS
Displays information about the execution of SQL DML statements.
3.4.46 DV_SQL_POOL
Displays information about SQL pool usage in the current system.
47: DROP_SYNONYM
48: DROP_PROFILE
49: DROP_NODE
50: DROP_DISTRIBUTE_RULE
51: DROP_SQL_MAP
52: TRUNCATE_TABLE
53: PURGE
54: COMMENT
55: FLASHBACK_TABLE
56: ALTER_SEQUENCE
57: ALTER_TABLESPACE
58: ALTER_TABLE
59: ALTER_INDEX
60: ALTER_USER
61: ALTER_SYSTEM
62: ALTER_SESSION
63: ALTER_DATABASE
64: ALTER_NODE
65: ALTER_PROFILE
66: ALTER_TRIGGER
67: ALTER_SQL_MAP
68: ANALYSE_TABLE
69: GRANT
70: REVOKE
72: ANONYMOUS_BLOCK
73: CREATE_PROC
74: CREATE_FUNC
75: CREATE_TRIG
76: DROP_PROC
77: DROP_FUNC
78: DROP_TRIG
79: PL_CALL
3.4.47 DV_SYS_STATS
Displays statistics about the current system.
3.4.48 DV_SYSTEM
Displays information about CPU and memory usage in the current OS.
3.4.49 DV_SYS_EVENTS
Displays information about events in the current system.
3.4.50 DV_TABLESPACES
Displays information about current tablespaces.
0 ID BINARY_INTEGER Tablespace ID
3.4.51 DV_TEMP_POOLS
Displays information about current temporary pools.
3.4.52 DV_TEMP_UNDO_SEGMENT
Displays the status of all temporary undo segment queues.
The difference between a temporary undo segment and an undo segment is that the former
records undo information without redo information.
3.4.53 DV_TRANSACTIONS
Displays information about transactions.
3.4.54 DV_UNDO_SEGMENTS
Displays the status of all undo segment queues.
6 FIRST_TIME DATE Time when the first undo page on the undo segment is updated
7 LAST_TIME DATE Time when the last undo page on the undo segment is updated
3.4.55 DV_USER_ADVISORY_LOCKS
Displays information about all session-level advisory locks in use.
3.4.56 DV_USER_ASTATUS_MAP
Displays information about the views of the database user status type.
3.4.57 DV_USER_PARAMETERS
Displays the configuration items of the current database. This view is accessible to common
users.
3.4.58 DV_VERSION
Displays software versions.
3.4.59 DV_VM_FUNC_STACK
Displays the function stack information when the VM is not released.
When a VM leak is suspected, you can configure _MAX_VM_FUNC_STACK_COUNT and
then query this view.
3.4.60 DV_WAIT_STATS
Displays statistics about all wait events when there is a large number of buffer busy waits
events.
3.4.61 DV_XACT_LOCKS
Displays information about all transaction-level exclusive advisory locks in use.
3.4.62 DV_XACT_SHARED_LOCKS
Displays information about all transaction-level shared advisory locks in use.
Statistics type.
0: SQL statistics
4 Monitoring Alarms
1078919217|InsufficientDataInstFileDesc
Alarm ID: 1078919217
Alarm meaning: File handle resources of the Zengine process are insufficient, and the upper
limit of the resources needs to be raised.
Alarm principle: The system checks whether the file handle resources of the current database
are insufficient. If they are, the alarm is reported.
Alarm handling:
l In a Linux operating system, modify system parameters to increase the number of file
handles that can be opened by database processes. Alternatively, run the lsof command
to view the usage of system file handles, and terminate related processes to release
handle resources.
l In a Windows operating system, terminate unnecessary applications to release system
file handle resources.
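On Linux, the handle check described above can be sketched as follows (the PID and limit values are illustrative):

```shell
# Show the soft and hard limits on open file descriptors for the current shell.
ulimit -Sn
ulimit -Hn

# Count the file handles currently held by the database process
# (replace 12345 with the actual Zengine process ID):
# lsof -p 12345 | wc -l

# Raise the soft limit before starting the process; the value must not exceed
# the hard limit, and persistent changes belong in /etc/security/limits.conf:
# ulimit -Sn 65535
```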
1078919231|DeadLock
Alarm ID: 1078919231
Alarm principle: The system checks whether there is a deadlock while a DN waits for lock
resources. If there is, the alarm is reported.
Alarm handling: Query trace_log for the statement causing the deadlock, and check whether
the statement logic is the cause of the deadlock. If it is, modify the statement.
1078919232|DataNodeNeedBuild
Alarm ID: 1078919232
Alarm meaning: The build command needs to be manually delivered to a DN to rebuild the
baseline.
Alarm principle: The system periodically checks the DN view V$DATABASE for the DN
status (dbcondition) at an interval of 5s. If the status is need repair, you need to manually
deliver the build command to rebuild the baseline.
Alarm handling: Manually deliver the build command to rebuild the baseline.
1078919234
Alarm ID: 1078919234
Alarm principle: When the logical replication process exits abnormally, the alarm is reported
by invoking the DM alarm sending interface.
Alarm handling: Check the run and audit log files of logical replication. Rectify the error
according to the logs, and then restart the logical replication tool. The run and audit log files
are stored in /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/
logicrep/logicrep/logicrep.
1078919235
Alarm ID: 1078919235
Alarm principle: When checkpoint writing fails, the alarm is reported by invoking the DM
alarm sending interface. A checkpoint writing failure will lead to a checkpoint update failure.
Alarm handling: Check the status of the source database and query the run logs of logical
replication to obtain more information. The run logs are stored in /opt/software/tools/
GAUSSDB100-V300R001C00-LOGICREP/logicrep/logicrep/logicrep/logicrep.
1078919236
Alarm ID: 1078919236
Alarm principle: When a checkpoint thread exits, the alarm is reported by invoking the DM
alarm sending interface. A checkpoint thread exit will lead to a logical replication process
exit.
Alarm handling: Check the run logs of logical replication, rectify the error, and restart the
logical replication tool. The run logs are stored in /opt/software/tools/GAUSSDB100-
V300R001C00-LOGICREP/logicrep/logicrep/logicrep/logicrep.
1078919237
Alarm ID: 1078919237
Alarm principle: When there is an error in extracting the log parsing thread, the alarm is
reported by invoking the DM alarm sending interface. An error in extracting the log parsing
thread will lead to a logical replication process exit.
Alarm handling: Check the run logs of logical replication, rectify the error, and restart the
logical replication tool. The run logs are stored in /opt/software/tools/GAUSSDB100-
V300R001C00-LOGICREP/logicrep/logicrep/logicrep/logicrep.
1078919238
Alarm ID: 1078919238
Alarm principle: When SQL statement replay fails, the alarm is reported by invoking the
DM alarm sending interface. A SQL statement replay failure will lead to a logical replication
process exit.
Alarm handling: Check the run logs of logical replication, rectify the error, and restart the
logical replication tool. The run logs are stored in /opt/software/tools/GAUSSDB100-
V300R001C00-LOGICREP/logicrep/logicrep/logicrep/logicrep.
1078919239
Alarm ID: 1078919239
Alarm handling: Check whether the source database is normal and query the run logs of
logical replication to obtain more information. The run logs are stored in /opt/software/tools/
GAUSSDB100-V300R001C00-LOGICREP/logicrep/logicrep/logicrep/logicrep.
1078919240
Alarm ID: 1078919240
Alarm principle: When there is an error in transaction distribution, the alarm is reported by
invoking the DM alarm sending interface. An error in the transaction distribution thread will
lead to a logical replication process exit.
Alarm handling: Check whether the source database is normal and query the run logs of
logical replication to obtain more information. The run logs are stored in /opt/software/tools/
GAUSSDB100-V300R001C00-LOGICREP/logicrep/logicrep/logicrep/logicrep.
1078919241
Alarm ID: 1078919241
Alarm principle: When there is an error in the target database, the alarm is reported by
invoking the DM alarm sending interface. An error in the target database will lead to a logical
replication process exit.
Alarm handling: Check whether the target database is normal, query the run logs of logical
replication to rectify the database error, and then restart the logical replication process. The
run logs are stored in /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/
logicrep/logicrep/logicrep/logicrep.
1078919243|backup failed
Alarm ID: 1078919243
Alarm name: backup failed
Alarm meaning: There is a backup failure.
Alarm principle: This alarm is reported when backup fails.
Alarm handling: Troubleshoot according to Roach logs, which are stored in $GAUSSLOG/
roach.
1078919244|Degrade
Alarm ID: 1078919244
Alarm name: Degrade
Alarm meaning: There is a demotion from synchronous standby to temporary asynchronous
standby.
Alarm principle: In maximum availability mode, if the synchronization log sending thread
does not receive a response from the standby database within the period of
REPL_WAIT_TIMEOUT multiplied by 2, the database will be demoted from synchronous
to temporary asynchronous.
Alarm handling: Check whether the network is abnormal or the standby database is
overloaded. Generally, the database self-heals (restores from temporary asynchronous back to
synchronous standby), except when the demotion is produced manually for fault tests.
1078919249|Archive
Alarm ID: 1078919249
Alarm name: Archive
Alarm meaning: There is an archive log failure.
Alarm principle: The alarm is reported when redo log archiving on a database instance fails.
It is a common error alarm. When the alarm is reported, logs cannot be archived. As a result,
the database is suspended and cannot provide services.
Alarm handling: Check the permission, attribute, and disk status of archive log files, and
check error information in run logs.
1078919250|FlushRedo
Alarm ID: 1078919250
1078919251|FlushBuffer
Alarm ID: 1078919251
Alarm name: FlushBuffer
Alarm meaning: There is a data file write failure.
Alarm principle: The alarm is reported when flush-to-buffer of redo data on a database
instance fails. It is a major error alarm. When the alarm is reported, the database immediately
stops client requests and the main process immediately exits.
Alarm handling: Check the permission, attribute, and disk status of data files, and check
error information in run logs.
1078919252|DataNodeLRInstAbonormal
Alarm ID: 1078919252
Alarm name: DataNodeLRInstAbonormal
Alarm meaning: A logical replication instance is abnormal.
Alarm principle: The CM Agent periodically checks whether the logical replication process
is normal at an interval of 5s. If there is an exception, the CM Agent reports the alarm and
automatically restarts the abnormal process, up to three times. The alarm is not repeatedly
reported.
Alarm handling: Contact Huawei technical support.
1078919253|TablespaceUsage
Alarm ID: 1078919253
Alarm name: TablespaceUsage
Alarm meaning: The size of a tablespace needs to be adjusted when the tablespace usage
reaches the threshold.
Alarm principle: The system periodically checks the usage of database instance tablespaces
at an interval of 3s. If the tablespace usage reaches the user-specified threshold, the alarm is
reported. If the tablespace usage reaches the threshold again after the alarm is cleared by
manual intervention, a new alarm will be generated.
Alarm handling: Choose any one of the following methods according to the actual situation:
Method 1: Query the dv_data_files view. If the tablespace has data files where automatic
extension is not enabled, enable it for both the tablespace and data files.
For example:
-- Enable automatic extension for a tablespace:
alter tablespace SPC_NAME autoextend on;
-- Enable automatic extension for a data file:
alter database datafile 'FILE_NAME' autoextend on;
Method 2: Query the dv_data_files view and adjust the maxsize value of automatic
extension.
For example:
alter database datafile 'FILE_NAME' autoextend on maxsize MAX_SIZE;
SYS_BACKUP_SETS BACKUP_SET$
SYS_COLUMNS COLUMN$
SYS_COMMENTS COMMENT$
SYS_CONSTRAINT_DEFS CONSDEF$
SYS_DATA_NODES DATA_NODES$
EXP_TAB_ORDERS DBA_EXP$TBL_ORDER
EXP_TAB_RELATIONS DBA_EXP$TBL_RELATIONS
SYS_DEPENDENCIES DEPENDENCY$
SYS_DISTRIBUTE_RULES DISTRIBUTE_RULE$
SYS_DISTRIBUTE_STRATEGIES DISTRIBUTE_STRATEGY$
SYS_DUMMY DUAL
SYS_EXTERNAL_TABLES EXTERNAL$
SYS_GARBAGE_SEGMENTS GARBAGE_SEGMENT$
SYS_HISTGRAM_ABSTR HIST_HEAD$
SYS_HISTGRAM HISTGRAM$
SYS_INDEXES INDEX$
SYS_INDEX_PARTS INDEXPART$
SYS_JOBS JOB$
SYS_LINKS LINK$
SYS_LOBS LOB$
SYS_LOB_PARTS LOBPART$
SYS_LOGIC_REPL LOGIC_REP$
SYS_DML_STATS MON_MODS_ALL$
SYS_OBJECT_PRIVS OBJECT_PRIVS$
SYS_PART_COLUMNS PARTCOLUMN$
SYS_PART_OBJECTS PARTOBJECT$
SYS_PART_STORES PARTSTORE$
SYS_PENDING_DIST_TRANS PENDING_DISTRIBUTED_TRANS$
SYS_PENDING_TRANS PENDING_TRANS$
SYS_PROCS PROC$
SYS_PROC_ARGS PROC_ARGS$
SYS_PROFILE PROFILE$
SYS_RECYCLEBIN RECYCLEBIN$
SYS_ROLES ROLES$
SYS_SEQUENCES SEQUENCE$
SYS_SHADOW_INDEXES SHADOW_INDEX$
SYS_SHADOW_INDEX_PARTS SHADOW_INDEXPART$
SYS_SYNONYMS SYNONYM$
SYS_PRIVS SYS_PRIVS$
SYS_TABLES TABLE$
SYS_TABLE_PARTS TABLEPART$
SYS_TMP_SEG_STATS TMP_SEG_STAT$
SYS_USERS USER$
SYS_USER_HISTORY USER_HISTORY$
SYS_USER_ROLES USER_ROLES$
SYS_VIEWS VIEW$
SYS_VIEW_COLS VIEWCOL$
SYS_SQL_MAPS SQL_MAP$
WSR_PARAMETER WRH$_PARAMETER
WSR_SQLAREA WRH$_SQLAREA
WSR_SYS_STAT WRH$_SYSSTAT
WSR_SYSTEM WRH$_SYSTEM
WSR_SYSTEM_EVENT WRH$_SYSTEM_EVENT
WSR_SNAPSHOT WRM$_SNAPSHOT
WSR_CONTROL WRM$_WR_CONTROL
WSR_DBA_SEGMENTS WSR$_DBA_SEGMENTS
WSR_LATCH WSR$_LATCH
WSR_LIBRARYCACHE WSR$_LIBRARYCACHE
WSR_SEGMENT WSR$_SEGMENT
WSR_SQL_LIST WSR$SQL_LIST
WSR_WAITSTAT WSR$_WAITSTAT
DB_DB_LINKS ALL_DB_LINKS
DB_IND_STATISTICS ALL_IND_STATISTICS
DB_JOBS ALL_JOBS
DB_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS
DB_USERS ALL_USERS
DB_USER_SYS_PRIVS ALL_USER_SYS_PRIVS
ADM_ARGUMENTS DBA_ARGUMENTS
ADM_BACKUP_SET DBA_BACKUP_SET
ADM_COL_COMMENTS DBA_COL_COMMENTS
ADM_CONSTRAINTS DBA_CONSTRAINTS
ADM_DATA_FILES DBA_DATA_FILES
ADM_DBLINK_TABLES DBA_DBLINK_TABLES
ADM_DBLINK_TAB_COLUMNS DBA_DBLINK_TAB_COLUMNS
ADM_DEPENDENCIES DBA_DEPENDENCIES
ADM_FREE_SPACE DBA_FREE_SPACE
ADM_HISTOGRAMS DBA_HISTOGRAMS
ADM_HIST_DBASEGMENTS DBA_HIST_DBASEGMENTS
ADM_HIST_LATCH DBA_HIST_LATCH
ADM_HIST_LIBRARYCACHE DBA_HIST_LIBRARYCACHE
ADM_HIST_LONGSQL DBA_HIST_LONGSQL
ADM_HIST_PARAMETER DBA_HIST_PARAMETER
ADM_HIST_SEGMENT DBA_HIST_SEGMENT
ADM_HIST_SNAPSHOT DBA_HIST_SNAPSHOT
ADM_HIST_SQLAREA DBA_HIST_SQLAREA
ADM_HIST_SYSSTAT DBA_HIST_SYSSTAT
ADM_HIST_SYSTEM DBA_HIST_SYSTEM
ADM_HIST_SYSTEM_EVENT DBA_HIST_SYSTEM_EVENT
ADM_HIST_WAITSTAT DBA_HIST_WAITSTAT
ADM_HIST_WR_CONTROL DBA_HIST_WR_CONTROL
ADM_INDEXES DBA_INDEXES
ADM_IND_COLUMNS DBA_IND_COLUMNS
ADM_IND_PARTITIONS DBA_IND_PARTITIONS
ADM_IND_STATISTICS DBA_IND_STATISTICS
ADM_JOBS DBA_JOBS
ADM_JOBS_RUNNING DBA_JOBS_RUNNING
ADM_OBJECTS DBA_OBJECTS
ADM_PART_COL_STATISTICS DBA_PART_COL_STATISTICS
ADM_PART_KEY_COLUMNS DBA_PART_KEY_COLUMNS
ADM_PART_STORE DBA_PART_STORE
ADM_PART_TABLES DBA_PART_TABLES
ADM_PROCEDURES DBA_PROCEDURES
ADM_PROFILES DBA_PROFILES
ADM_ROLES DBA_ROLES
ADM_ROLE_PRIVS DBA_ROLE_PRIVS
ADM_SEGMENTS DBA_SEGMENTS
ADM_SEQUENCES DBA_SEQUENCES
ADM_SOURCE DBA_SOURCE
ADM_SYNONYMS DBA_SYNONYMS
ADM_SYS_PRIVS DBA_SYS_PRIVS
ADM_TABLES DBA_TABLES
ADM_TABLESPACES DBA_TABLESPACES
ADM_TAB_COLS DBA_TAB_COLS
ADM_TAB_COLUMNS DBA_TAB_COLUMNS
ADM_TAB_COL_STATISTICS DBA_TAB_COL_STATISTICS
ADM_TAB_COMMENTS DBA_TAB_COMMENTS
ADM_TAB_DISTRIBUTE DBA_TAB_DISTRIBUTE
ADM_TAB_MODIFICATIONS DBA_TAB_MODIFICATIONS
ADM_TAB_PARTITIONS DBA_TAB_PARTITIONS
ADM_TAB_PRIVS DBA_TAB_PRIVS
ADM_TAB_STATISTICS DBA_TAB_STATISTICS
ADM_TRIGGERS DBA_TRIGGERS
ADM_USERS DBA_USERS
ADM_VIEWS DBA_VIEWS
ADM_VIEW_COLUMNS DBA_VIEW_COLUMNS
DB_ARGUMENTS ALL_ARGUMENTS
DB_COL_COMMENTS ALL_COL_COMMENTS
DB_CONSTRAINTS ALL_CONSTRAINTS
DB_DBLINK_TABLES ALL_DBLINK_TABLES
DB_DBLINK_TAB_COLUMNS ALL_DBLINK_TAB_COLUMNS
DB_DEPENDENCIES ALL_DEPENDENCIES
DB_DISTRIBUTE_RULES ALL_DISTRIBUTE_RULES
DB_DIST_RULE_COLS ALL_DIST_RULE_COLS
DB_HISTOGRAMS ALL_HISTOGRAMS
DB_INDEXES ALL_INDEXES
DB_IND_COLUMNS ALL_IND_COLUMNS
DB_IND_PARTITIONS ALL_IND_PARTITIONS
DB_OBJECTS ALL_OBJECTS
DB_PART_COL_STATISTICS ALL_PART_COL_STATISTICS
DB_PART_KEY_COLUMNS ALL_PART_KEY_COLUMNS
DB_PART_STORE ALL_PART_STORE
DB_PART_TABLES ALL_PART_TABLES
DB_PROCEDURES ALL_PROCEDURES
DB_SEQUENCES ALL_SEQUENCES
DB_SOURCE ALL_SOURCE
DB_SYNONYMS ALL_SYNONYMS
DB_TABLES ALL_TABLES
DB_TAB_COLS ALL_TAB_COLS
DB_TAB_COLUMNS ALL_TAB_COLUMNS
DB_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS
DB_TAB_COMMENTS ALL_TAB_COMMENTS
DB_TAB_DISTRIBUTE ALL_TAB_DISTRIBUTE
DB_TAB_PARTITIONS ALL_TAB_PARTITIONS
DB_TAB_STATISTICS ALL_TAB_STATISTICS
DB_TRIGGERS ALL_TRIGGERS
DB_VIEWS ALL_VIEWS
DB_VIEW_COLUMNS ALL_VIEW_COLUMNS
ROLE_SYS_PRIVS ROLE_SYS_PRIVS
MY_ARGUMENTS USER_ARGUMENTS
MY_COL_COMMENTS USER_COL_COMMENTS
MY_CONSTRAINTS USER_CONSTRAINTS
MY_CONS_COLUMNS USER_CONS_COLUMNS
MY_DEPENDENCIES USER_DEPENDENCIES
MY_FREE_SPACE USER_FREE_SPACE
MY_HISTOGRAMS USER_HISTOGRAMS
MY_INDEXES USER_INDEXES
MY_IND_COLUMNS USER_IND_COLUMNS
MY_IND_PARTITIONS USER_IND_PARTITIONS
MY_IND_STATISTICS USER_IND_STATISTICS
MY_JOBS USER_JOBS
MY_OBJECTS USER_OBJECTS
MY_PART_COL_STATISTICS USER_PART_COL_STATISTICS
MY_PART_KEY_COLUMNS USER_PART_KEY_COLUMNS
MY_PART_STORE USER_PART_STORE
MY_PART_TABLES USER_PART_TABLES
MY_PROCEDURES USER_PROCEDURES
MY_ROLE_PRIVS USER_ROLE_PRIVS
MY_SEGMENTS USER_SEGMENTS
MY_SEQUENCES USER_SEQUENCES
MY_SOURCE USER_SOURCE
MY_SQL_MAPS USER_SQL_MAPS
MY_SYNONYMS USER_SYNONYMS
MY_SYS_PRIVS USER_SYS_PRIVS
MY_TABLES USER_TABLES
MY_TAB_COLS USER_TAB_COLS
MY_TAB_COLUMNS USER_TAB_COLUMNS
MY_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS
MY_TAB_COMMENTS USER_TAB_COMMENTS
MY_TAB_DISTRIBUTE USER_TAB_DISTRIBUTE
MY_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS
MY_TAB_PARTITIONS USER_TAB_PARTITIONS
MY_TAB_PRIVS USER_TAB_PRIVS
MY_TAB_STATISTICS USER_TAB_STATISTICS
MY_TRIGGERS USER_TRIGGERS
MY_USERS USER_USERS
MY_VIEWS USER_VIEWS
MY_VIEW_COLUMNS USER_VIEW_COLUMNS
NLS_SESSION_PARAMETERS NLS_SESSION_PARAMETERS
DV_ALL_TRANS V$ALL_TRANSACTION
DV_ARCHIVED_LOGS V$ARCHIVED_LOG
DV_ARCHIVE_DEST_STATUS V$ARCHIVE_DEST_STATUS
DV_ARCHIVE_GAPS V$ARCHIVE_GAP
DV_ARCHIVE_THREADS V$ARCHIVE_PROCESSES
DV_BACKUP_PROCESSES V$BACKUP_PROCESS
DV_BUFFER_POOLS V$BUFFER_POOL
DV_BUFFER_POOL_STATS V$BUFFER_POOL_STATISTICS
DV_CONTROL_FILES V$CONTROLFILE
DV_DATABASE V$DATABASE
DV_DATA_FILES V$DATAFILE
DV_OBJECT_CACHE V$DB_OBJECT_CACHE
DV_DC_POOLS V$DC_POOL
DV_DYNAMIC_VIEWS V$DYNAMIC_VIEW
DV_DYNAMIC_VIEW_COLS V$DYNAMIC_VIEW_COLUMN
DV_FREE_SPACE V$FREE_SPACE
DV_HA_SYNC_INFO V$HA_SYNC_INFO
DV_HBA V$HBA
DV_INSTANCE V$INSTANCE
DV_RUNNING_JOBS V$JOBS_RUNNING
DV_LATCHS V$LATCH
DV_LIBRARY_CACHE V$LIBRARYCACHE
DV_LOCKS V$LOCK
DV_LOCKED_OBJECTS V$LOCKED_OBJECT
DV_LOG_FILES V$LOGFILE
DV_LONG_SQL V$LONGSQL
DV_STANDBYS V$MANAGED_STANDBY
DV_ME V$ME
DV_OPEN_CURSORS V$OPEN_CURSOR
DV_PARAMETERS V$PARAMETER
DV_PL_MANAGER V$PL_MANAGER
DV_PL_REFSQLS V$PL_REFSQLS
DV_REACTOR_POOLS V$REACTOR_POOL
DV_REPL_STATUS V$REPL_STATUS
DV_RESOURCE_MAP V$RESOURCE_MAP
DV_SEGMENT_STATS V$SEGMENT_STATISTICS
DV_SESSIONS V$SESSION
DV_SESSION_EVENTS V$SESSION_EVENT
DV_SESSION_WAITS V$SESSION_WAIT
DV_GMA V$SGA
DV_GMA_STATS V$SGASTAT
DV_SPINLOCKS V$SPINLOCK
DV_SQLS V$SQLAREA
DV_SQL_POOL V$SQLPOOL
DV_SYS_STATS V$SYSSTAT
DV_SYSTEM V$SYSTEM
DV_SYS_EVENTS V$SYSTEM_EVENT
DV_TABLESPACES V$TABLESPACE
DV_TEMP_POOLS V$TEMP_POOL
DV_TEMP_UNDO_SEGMENT V$TEMP_UNDO_SEGMENT
DV_TRANSACTIONS V$TRANSACTION
DV_UNDO_SEGMENTS V$UNDO_SEGMENT
DV_USER_ADVISORY_LOCKS V$USER_ADVISORY_LOCKS
DV_USER_ASTATUS_MAP V$USER_ASTATUS_MAP
DV_USER_PARAMETERS V$USER_PARAMETER
DV_VERSION V$VERSION
DV_VM_FUNC_STACK V$VM_FUNC_STACK
DV_WAIT_STATS V$WAITSTAT
DV_XACT_LOCKS V$XACT_LOCK
JOB_THREADS JOB_QUEUE_PROCESSES
COMMIT_MODE COMMIT_LOGGING
COMMIT_WAIT_LOGGING COMMIT_WAIT
PAGE_CHECKSUM DB_BLOCK_CHECKSUM
ARCHIVE_CONFIG LOG_ARCHIVE_CONFIG
ARCHIVE_DEST_N LOG_ARCHIVE_DEST_n
ARCHIVE_DEST_STATE_N LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_FORMAT LOG_ARCHIVE_FORMAT
ARCHIVE_MAX_THREADS LOG_ARCHIVE_MAX_PROCESSES
ARCHIVE_MIN_SUCCEED_DEST LOG_ARCHIVE_MIN_SUCCEED_DEST
ARCHIVE_TRACE LOG_ARCHIVE_TRACE
CHECKPOINT_PERIOD CHECKPOINT_TIMEOUT
CHECKPOINT_PAGES CHECKPOINT_INTERVAL
TIMED_STATS TIMED_STATISTICS
STATS_LEVEL STATISTICS_LEVEL
FILE_OPTIONS FILESYSTEMIO_OPTIONS
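The lists above pair GaussDB 100 names with their Oracle-style equivalents. As an illustration of the view mappings, a query habitually written against Oracle's V$SESSION is written against DV_SESSIONS in GaussDB 100. This is a sketch; the wildcard projection is used because the exact column set is defined by the view:

```sql
-- Oracle habit:           SELECT * FROM V$SESSION;
-- GaussDB 100 equivalent, per the mapping above:
SELECT * FROM DV_SESSIONS;
```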
6 Glossary
Term Description
A–E
ACID Atomicity, Consistency, Isolation, and Durability (ACID). These are a set of
features of database transactions in a DBMS.
archive thread A thread started when the archive function is enabled on a database. The thread is used to archive database logs to a specified path.
atomicity One of the ACID features of database transactions. Atomicity means that a transaction is composed of an indivisible unit of work: all operations performed in the transaction are either all committed or none at all. If an error occurs during transaction execution, the transaction is rolled back to its pre-execution state.
backup A backup, or the process of backing up, refers to the copying and archiving
of computer data. Backup data can be used for restoration in case of data
loss.
checkpoint A mechanism that flushes data from database memory to disk at certain points in time. GaussDB 100 periodically writes the data of committed and uncommitted transactions to disk. This data and the redo logs can be used to restore the database if it restarts or breaks down.
CLI Command-line interface (CLI). Users interact with applications through the CLI. Its input and output are text-based: commands are entered through a keyboard or similar device and are parsed and executed by the application, and the results are displayed in text or graphic form on the terminal.
coding Coding is representing data and information using code so that it can be
processed and analyzed by a computer. Characters, digits, and other objects
can be converted into digital code, or information and data can be converted
into the required electrical pulse signals based on predefined rules.
concurrency control A DBMS service that ensures data integrity when multiple transactions are executed concurrently in a multi-user environment. In a multi-threaded GaussDB 100 environment, concurrency control ensures that database operations are safe and that all database transactions remain consistent at any given time.
core dump When a program stops abnormally, a core dump (also called a memory dump or system dump) records the state of the program's working memory at that point in time. Other key state is often dumped at the same time, for example the processor registers, including the program counter and stack pointer, memory management information, and OS flags. A core dump is often used to assist in diagnosis and debugging of computer programs.
core file A file created when a process fails because of memory overwriting, an assertion failure, or access to invalid memory. The file is used for further analysis.
A core file stores memory dump data in binary format. The name of a core file consists of the word "core" and the OS process ID.
Core files are available regardless of the platform type.
data flow operator An operator that exchanges data among query fragments. By their input/output relationships, data flows are categorized into Gather, Broadcast, and Redistribution flows. Gather combines the data of multiple query fragments into one. Broadcast forwards the data of one query fragment to multiple query fragments. Redistribution reorganizes the data of multiple query fragments and then redistributes it to multiple query fragments.
database A collection of data that is stored together and can be accessed, managed, and updated. Data in a database can be of various types, such as numbers, full text, digits, and images.
database file A binary file that stores user data and the internal data of a database system.
database HA GaussDB 100 provides a highly reliable HA solution. Every logical node in GaussDB 100 is identified as either a primary or a standby node, and at any time only one node is identified as the primary server. Standby nodes first perform a full synchronization from the primary node, followed by incremental synchronization. When the HA system is running, the primary node receives data read and write requests.
DBLINK An object that describes the path from one database to another. Remote database objects can be queried through a DBLINK.
dirty page A page that has been modified and is not written to a permanent device.
dump file A specific type of trace file. A dump file contains diagnostic data during an
event response, whereas a trace file contains continuously generated
diagnostic data.
durability One of the ACID features of database transactions. Transactions that have
been committed will permanently survive and not be rolled back.
error correction A technique that automatically detects and corrects errors in software and data streams to improve system stability and reliability.
F–J
failover Automatic switchover from a faulty node to its standby node. Conversely, the automatic switchback from the standby node to the primary node is called failback.
free space management A mechanism for managing the free space in a table. It enables a database system to record the free space in each table and build an easy-to-find data structure, accelerating operations (such as INSERT) performed on the free space.
GNU The GNU Project was publicly announced on September 27, 1983 by
Richard Stallman, aiming at building an OS composed wholly of free
software. GNU is a recursive acronym for "GNU's Not Unix!". Stallman
announced that GNU should be pronounced as Guh-NOO. Technically,
GNU's design is similar to that of Unix, a widely used commercial OS. However,
GNU is free software and contains no Unix code.
GTS Global Time Server (GTS). It is used to provide a logical clock for each
node in the case of strong consistency.
incremental backup A backup that stores all file changes since the last valid backup.
index An ordered data structure in a DBMS. An index accelerates data query and
update in database tables.
isolation One of the ACID features of database transactions. Isolation means that the
operations inside a transaction and data used are isolated from other
concurrent transactions. Concurrent transactions do not disturb each other.
JDBC Java database connectivity (JDBC) is used to implement the Java APIs of
SQL statements. It provides unified access to multiple relational databases,
consisting of a set of classes and interfaces written in Java language.
junk tuple A tuple that has been deleted by a DELETE or UPDATE statement. When deleting a tuple, GaussDB 100 only marks it for removal; the VACUUM thread then periodically clears these junk tuples.
K–O
log file A file to which a computer system writes a record of its activities.
metadata Data that provides information about other data. Metadata describes the
source, size, format, or other characteristics of data. In database columns,
metadata explains the content of a data warehouse.
P–T
page Smallest memory unit for row storage in the relational object structure in
GaussDB 100. The default size of a page is 8 KB.
primary server A node that receives data read and write requests in the GaussDB 100 HA system and works with all standby servers. At any time, only one node in the HA system is identified as the primary server.
QPS Queries per second (QPS): the number of queries that a server can respond to per second.
query fragment Each query job can be split into one or more query fragments. Each query fragment consists of one or more query operators and can run independently on a node. Query fragments exchange data through data flow operators.
query operator An iterator or a query tree node, which is the basic unit for the execution of a query. The execution of a query can be split into one or more query operators. Common query operators include scan, join, and aggregation.
redo log A log that contains information required for performing an operation again
in a database. If a database is faulty, redo logs can be used to restore the
database to its original state.
relational database A database created using the relational model. It processes data using set algebra methods.
RPO Recovery point objective (RPO): the latest state to which a database system and its data can be restored after a disaster, usually expressed as a point in time.
RTO Recovery time objective (RTO): the duration between a database system failure caused by a disaster and the restoration of proper operation.
schema A database object set that includes the logical structure, such as tables,
views, sequences, stored procedures, synonyms, clusters, and database
links.
shared pool A memory area created for repeatedly executed SQL statements to save memory. It contains the parse trees and execution plans of the given SQL statements.
SSL Secure Sockets Layer (SSL) is a network security protocol first used by Netscape. It is based on TCP/IP and uses public key technology. SSL supports a wide range of networks and provides three basic security services, all based on public key technology. SSL secures service communication over a network by establishing a secure connection between a client and a server and then sending data through that connection.
stop word In computing, stop words are words which are filtered out before or after
processing of natural language data (text), saving storage space and
improving search efficiency.
stored procedure A group of SQL statements compiled into a single execution plan and stored in a large database system. Users can execute a stored procedure by specifying its name and parameters (if any).
system catalog A table storing meta information about a database, including the user tables, indexes, columns, functions, and data types in the database.
table A set of columns and rows. Each column is referred to as a field. Values in
each field represent a data type. For example, if a table contains three fields
of person names, cities, and states, it has three columns: Name, City, and
State. In every row in the table, the Name column contains a name, the City
column contains a city, and the State column contains a state.
tablespace A tablespace is a logical storage structure that contains tables, indexes, and
objects. A tablespace provides an abstract layer between physical data and
logical data, and provides storage space for all database objects. When you
create an object, you can specify which tablespace it belongs to.
thesaurus Standardized words or phrases that express document themes and are used
for indexing and retrieval.
U–Z
Xlog A transaction log. A logical node can have only one Xlog file.
zsql GaussDB 100 interactive terminal. zsql enables you to interactively enter
queries, issue them to GaussDB 100, and view the query results. Queries
can also be entered from files. zsql supports many meta commands and
shell-like commands, allowing you to conveniently compile scripts and
automate jobs.
Issue 01
Date 2019-12-28
Contents
3 DataSync Syntax.................................................................................................................... 13
4 Working with DataSync....................................................................................................... 18
4.1 Configuration Files............................................................................................................................................................... 18
4.1.1 cfg.ini..................................................................................................................................................................................... 19
4.1.2 exp_obj.ini............................................................................................................................................................................ 38
4.1.3 exclusive_obj.ini.................................................................................................................................................................. 40
4.1.4 ignore_ddl.ini....................................................................................................................................................................... 41
4.1.5 exclusiveDataOnly_obj.ini............................................................................................................................................... 42
4.1.6 diff_ddl_obj.ini.................................................................................................................................................................... 43
4.2 Processes.................................................................................................................................................................................. 44
4.2.1 Export Process..................................................................................................................................................................... 44
4.2.2 Import Process.................................................................................................................................................................... 47
4.2.3 Export and Import Process............................................................................................................................................. 49
4.3 Reports...................................................................................................................................................................................... 51
4.3.1 Report for Automatic Table Creation......................................................................................................................... 51
4.3.2 DDL Synchronization Report......................................................................................................................................... 52
4.3.3 Data Export Report........................................................................................................................................................... 53
4.3.4 Data Import Report.......................................................................................................................................................... 54
4.3.5 Report for Export and Import........................................................................................................................................55
4.3.6 Incremental Migration Report.......................................................................................................................................56
4.4 Logs........................................................................................................................................................................................... 56
4.5 Data and File Clearing........................................................................................................................................................ 58
4.5.1 Clearing the Target Database....................................................................................................................................... 58
4.5.2 Deleting Local Files........................................................................................................................................................... 58
5 Data Migration.......................................................................................................................59
6 Security.................................................................................................................................... 63
6.1 Account Security....................................................................................................................................................................63
6.2 Security Statement............................................................................................................................................................... 64
6.3 Integrity Verification............................................................................................................................................................ 64
6.4 Data Transmission Security............................................................................................................................................... 65
6.5 Database SSL Connection.................................................................................................................................................. 65
7 Glossary................................................................................................................................... 67
Overview
DataSync is a data migration tool provided by GaussDB 100 for securely and efficiently synchronizing data with other commercial databases. It can migrate data from Sybase, Oracle, MySQL, GaussDB 100 V100R003C10, and SQL Server to GaussDB 100 V300R001. This document describes how to migrate data from these source databases to GaussDB 100 V300R001 (standalone).
Intended Audience
This document is intended for:
● GaussDB 100 database administrators
● GaussDB 100 engineers
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Change History
2 About DataSync
DataSync supports online migration for GaussDB 100 V100R003C10 and offline
migration for Sybase, Oracle, MySQL, GaussDB 100 V100R003C10, and SQL Server.
To migrate data, you simply need to configure the source and target databases
before starting the tool. Related log files and reports are generated during
migration to facilitate routine management and maintenance.
● The password of a GaussDB 100 V300R001 database must not include spaces
or semicolons (;); otherwise, the tool cannot use the zsql command to
connect to the database. For details, see "SQL Syntax Reference > SQL Syntax"
in GaussDB 100 V300R001 R&D Documentation (Standalone).
readme.txt Precautions
Folder/File Description
Oracle: ojdbc8-12.2.0.1.jar
MySQL: mysql-connector-java-5.1.44.jar
SQL Server: sqljdbc4-4.2.jar
If the source database is Sybase, you also need to copy
the driver package bcprov-jdk16-1.46.jar to this folder
and change the owner to that of the tool package.
The driver package name, bcprov-jdk16-1.46.jar, cannot
be modified.
3 DataSync Syntax
Syntax
● View help information.
java -jar DSS.jar -h | -help
Example:
java -jar DSS.jar -h
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Usage: java [JVM] -jar DSS.jar [options] filePath
JVM: Set jvm parameters if necessary,
eg:-Xms1g -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./log
Options:
-h, -help
Show this help message and exit
-pwd <password type>, -password <password type>
Enter the clear text password for encryption
password type:
1--export database
2--import database
3--import server
4--export remote server
5--import remote server
6--export database trustStore
7--export database keyStore
8--import database trustStore
9--import database keyStore
-p <basic config file path>
Set basic config file path,use default if not specified
-i <export/import database or table config file path>
Set export database or table config file path,export all if not specified
-e <exclude database or table config file path>
Set exclude database or table config file path,exclude nothing if not specified
-l <log and report path>
Set export log and report path,use default path if not specified
-imp_b <import data failed path>
Set import data failed log file path,use default path if not specified
-t <incremental migration operation>
Set mission will execute incremental migration process and set trigger will create table and
trigger for business table.
-d <ignore DDL verification items file>
Set Ignore partial DDL verification of the database and tables
-o <only export/verify DDL>
Set the database and tables only export or verify DDL
-s <different DDL table export/import>
Set the tables which DDL is different between dest database and source database export
or import
– 5: password of the remote server where the data file used for the import
operation is located
– 6: password of the truststore file generated by the database for export
– 7: password of the keystore file generated by the database for export
– 8: password of the truststore file generated by the database for import
– 9: password of the keystore file generated by the database for import
Passwords must be encrypted one by one based on the type, and the
ciphertext cannot be reused. For example, if the plaintext password of the
database for export is the same as that of the database for import, you need
to run the java -jar DSS.jar -pwd 1 and java -jar DSS.jar -pwd 2 commands
to encrypt the two passwords. (-password can also be used in place of -pwd.)
Example:
dbuse@plat:~/gppTest/verf0603/DataSync> java -jar DSS.jar -pwd 1
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Please enter the password to be encrypted and press Enter to confirm!
The encrypted password is : O5gs+S9n18P3uVFohVhpEA==
Note: After a password of the same type has a new ciphertext generated, the
one generated last time will become invalid.
● Migrate data offline.
java -jar DSS.jar [-p cfg.ini_path] [-i exp_obj.ini_path] [-e exclusive_obj.ini_path] [-d
ignore_ddl.ini_path] [-o exclusiveDataOnly_obj.ini] [-s diff_ddl_obj.ini] [-l /data/gaussdba/log_path] [-
imp_b importerrorlog_path]
Example:
In the directory where DSS.jar is located, run the following command:
java -jar DSS.jar -p ./config/cfg.ini -i ./config/exp_obj.ini -e exclusive_obj.ini
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Start checking config.............................[ok]
Start syncing DDL.................................[ok]
Start syncing data................................[ok]
Data exporting....................................[0/1]
Data export completed.............................[1/1]
Start collecting results..........................[ok]
Task start time...................................[2019-06-13 14:10:58]
Task end time.....................................[2019-06-13 14:11:13]
Total spent time..................................[15.109s]
Export successful data (rows).....................[100]
Export failed data (rows).........................[0]
Export data failed table count....................[0]
For details about how to migrate data offline, see Offline Migration.
● Migrate data online.
– Create a trigger, which is used to create incremental tables in the source
database.
java -jar DSS.jar [-p cfg.ini_path] [-i exp_obj.ini_path] [-e exclusive_obj.ini_path] [-l /data/gaussdba/log_path] [-imp_b importerrorlog_path] -t trigger
Example:
In the directory where DSS.jar is located, run the following command:
java -jar DSS.jar -p ./config/cfg.ini -i ./config/exp_obj.ini -e exclusive_obj.ini -t trigger
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Start checking config.............................[ok]
Start init increments
increment initializing............................[0/1]
increment initializing............................[1/1]
increment initialize done.........................[1/1]
For details about how to migrate data online, see Online Migration.
Parameter Description
● Ensure the cfg.ini path is correct. If the specified file is empty, an error will be reported.
● Data import and export operations vary based on the first three configuration files you
use. The rules are as follows:
● If cfg.ini alone is used, all data will be imported or exported.
● If both cfg.ini and exp_obj.ini are used, only data of the databases, schemas, and
tables specified in exp_obj.ini is imported or exported.
● If both cfg.ini and exclusive_obj.ini are used, data of the databases, schemas, and
tables specified in exclusive_obj.ini is excluded from the import or export.
● If cfg.ini, exp_obj.ini, and exclusive_obj.ini are all used, data of the databases,
schemas, and tables that are specified in exp_obj.ini and not specified in
exclusive_obj.ini is exported.
● By default, data in the system database, system tables, and temporary databases is
excluded. You need to specify the databases and tables to export such data. Table 4-1
describes the databases and tables excluded by default.
mysql: information_schema, mysql, performance_schema, sys, test
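The selection rules above can be sketched as a small Python model. This is a hypothetical illustration of the documented behavior, not the tool's own code: the function name and data shapes are invented, exp_obj.ini and exclusive_obj.ini contents are shown as table-level sets for simplicity, and the default exclusion list is the MySQL one from Table 4-1.

```python
# Hypothetical model of DataSync's table-selection rules (illustrative only).
DEFAULT_EXCLUDED = {"information_schema", "mysql", "performance_schema", "sys", "test"}

def select_tables(all_tables, exp_obj=None, exclusive_obj=None):
    """all_tables: set of 'db.table' names found in the source.
    exp_obj / exclusive_obj: sets parsed from the optional .ini files
    (None means the file is not used)."""
    # System/temporary databases are skipped unless explicitly listed in exp_obj.ini.
    selected = {t for t in all_tables
                if t.split(".")[0] not in DEFAULT_EXCLUDED or (exp_obj and t in exp_obj)}
    if exp_obj is not None:          # cfg.ini + exp_obj.ini: only the listed objects
        selected &= exp_obj
    if exclusive_obj is not None:    # cfg.ini + exclusive_obj.ini: listed objects removed
        selected -= exclusive_obj
    return selected

tables = {"db1.t1", "db1.t2", "db2.t1", "mysql.user"}
print(select_tables(tables))   # everything except the default-excluded mysql.user
print(select_tables(tables, exp_obj={"db1.t1", "db1.t2"},
                    exclusive_obj={"db1.t2"}))   # only db1.t1
```

When all three files are used, the last call shows the combined rule: objects listed in exp_obj.ini minus those listed in exclusive_obj.ini.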
4.1.1 cfg.ini
Server and database information is configured in the cfg.ini file. Before data
migration, ensure the information is correct. Table 4-2 describes the parameters in
the cfg.ini file.
db: basic parameters of the source and target databases, including:
● ip: database IP address
● port: database port
● username: database username. Do not set it to SYS for GaussDB 100.
● password: database password
● server_name: service name. It is mandatory if the source database is Oracle.
● db_name: database name. This parameter is mandatory when the source database is MySQL, GaussDB 100 V100R003C10, or SQL Server.
● trust_store: Java-identified truststore file, where OpenSSL or Keytool imports the root certificate
● trust_store_password: ciphertext of the generated truststore file
● key_store: Java-identified keystore file, where OpenSSL or Keytool imports the client certificate and private key
● key_store_password: ciphertext of the generated keystore file
● table_space: name of the tablespace specified for automatically created tables
● index_table_space: name of the index tablespace specified for automatically created tables
Note: If SSL is enabled on database servers, select and configure these parameters based on the SSL configuration to ensure that the migration tool is connected properly.
not password or pub_key_file. In this case, ip can be set to 127.0.0.1.
source database only when this parameter is set to 3.
● If a table in the target database is created by the user, the tool will not modify the table structure or other information. For example, if a source table in the source database has indexes and the user has not created indexes when creating the target table in the target database, the tool will not create indexes on the target table during data migration.
import_allow_max_errors: allowed maximum number of rows that fail to be imported to a single table. Value: ≥ 0. Default: 0.
export_allow_max_errors: allowed maximum number of rows that fail to be exported from a single table. Value: ≥ 0. Default: 0.
4.1.2 exp_obj.ini
You can configure databases and tables to be migrated. This file is optional.
Description
Databases and tables to be migrated are configured as follows:
Source-database-name[.Source-table-name][:Target-database-name.[Target-table-name]] [Filter-criteria]
Precautions
● If no tables are specified in the exp_obj.ini file, all tables in the source
database (excluding system catalogs and temporary tables) are exported.
● In Oracle and GaussDB 100 V100R003, database names and table names are
case-insensitive in database or table creation statements, but uppercase is
adopted for statement execution by default. Therefore, database names and
table names must be in upper case in the configuration file. (If a database
name or table name is enclosed in double quotation marks ("") in the
statement, the name is case-sensitive. In this case, the letter case of the name
in the configuration file must be the same as that in the statement.)
● In Sybase, MySQL, and SQL Server, database names and table names are
case-sensitive in database or table creation statements. Therefore, the letter
case of a name in the configuration file must be the same as that in the
statement.
● If cross-schema migration is supported by GaussDB 100 V100R003, the
schema name of the source database is mapped to that of the target
database by default. Incremental migration does not support cross-schema
migration.
Examples
● Export databases db1 and db2. The database and table names to be imported
are the same as those exported.
db1
db2
● Export the t_test table of the db1 database, specify the filter criteria for
exporting the t_test2 table of the db2 database, and map db2.t_test2 to
db3.t_test3.
db1.t_test
db2.t_test2:db3.t_test3 where rownum<=10
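The exp_obj.ini line format above can be illustrated with a small parser. This is a hypothetical sketch (the regular expression and function name are not part of the tool) that splits one line into its source object, optional target mapping, and optional filter criteria:

```python
import re

# Illustrative parser for one exp_obj.ini line:
#   Source-db[.Source-table][:Target-db[.Target-table]] [Filter-criteria]
LINE = re.compile(
    r"^(?P<src_db>[^.:\s]+)(?:\.(?P<src_tbl>[^.:\s]+))?"       # source db[.table]
    r"(?::(?P<tgt_db>[^.:\s]+)(?:\.(?P<tgt_tbl>[^.:\s]+))?)?"  # optional :target mapping
    r"(?:\s+(?P<filter>.+))?$"                                 # optional filter criteria
)

def parse_exp_line(line):
    m = LINE.match(line.strip())
    if not m:
        raise ValueError(f"bad exp_obj.ini line: {line!r}")
    return m.groupdict()

print(parse_exp_line("db1"))
print(parse_exp_line("db2.t_test2:db3.t_test3 where rownum<=10"))
```

The second call corresponds to the example above: db2.t_test2 is mapped to db3.t_test3 with the filter "where rownum<=10".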
4.1.3 exclusive_obj.ini
Scenario
You can configure databases and tables that are excluded from migration.
Description
The databases and tables excluded from migration are configured as follows:
Source-database-name[.Table-name]
Precautions
● If this file is not configured, no databases or tables will be excluded during
migration.
● In Oracle and GaussDB 100 V100R003, database names and table names are
case-insensitive in database or table creation statements, but uppercase is
adopted for statement execution by default. Therefore, database names and
table names must be in upper case in the configuration file. (If a database
name or table name is enclosed in double quotation marks ("") in the
statement, the name is case-sensitive. In this case, the letter case of the name
in the configuration file must be the same as that in the statement.)
● In Sybase, MySQL, and SQL Server, database names and table names are
case-sensitive in database or table creation statements. Therefore, the letter
case of a name in the configuration file must be the same as that in the
statement.
● Mapping cannot be configured in the file.
Examples
● Exclude the DB1 and DB2 databases during data import. (example for Oracle
and GaussDB 100 V100R003).
DB1,DB2
● Exclude the t_test table of the db1 database and the t_test table of the db2
database during data import. (example for Sybase, MySQL, and SQL Server)
db1.t_test,db2.t_test2
4.1.4 ignore_ddl.ini
Scenario
You can configure databases and tables in which DDL-based table structure
verification can be ignored during migration.
Description
Databases or tables for which table structure verification is ignored are configured as follows:
Source-database-name[.Table-name]
● Content in square brackets ([]) is optional.
● Use commas (,) to separate multiple databases or tables.
● Configure some databases and tables to ignore certain DDL verification items.
● The databases and tables support fuzzy match.
\\%: any number of characters, including none, for example, haha\\%
\\_: any single character, for example, hah\\_
Precautions
● If this file is not configured, DDL verification is always needed during the
migration.
● In Oracle and GaussDB 100 V100R003, database names and table names are
case-insensitive in database or table creation statements, but uppercase is
adopted for statement execution by default. Therefore, database names and
table names must be in upper case in the configuration file. (If a database
name or table name is enclosed in double quotation marks ("") in the
statement, the name is case-sensitive. In this case, the letter case of the name
in the configuration file must be the same as that in the statement.)
● In Sybase, MySQL, and SQL Server, database names and table names are
case-sensitive in database or table creation statements. Therefore, the letter
case of a name in the configuration file must be the same as that in the
statement.
● Mapping cannot be configured in the file.
● If cross-schema migration is not supported by GaussDB 100 V100R003, the
configuration is as follows: Source-database-name.Schema name_Table-name.
(In this case, only the public schema is supported.)
● If cross-schema migration is supported by GaussDB 100 V100R003, the
configuration is as follows: Source-database-name.Schema name.Table-name.
● The file is in JSON format. Therefore, you only need to add databases and tables inside the square brackets ([]). Any other changes may damage the file or cause the ignore configuration to fail.
Examples
● Database names and table names must be in upper case in the configuration
file.
During DDL verification, for all tables of the DB1 and DB2 databases,
verifications of type compatibility are ignored.
"notCheckType":["DB1","DB2"]
● The letter case of a name in the configuration file must be the same as that
in the creation statement.
During DDL verification, for the t_test table of the db1 database and the
t_test table of the db2 database, verifications on whether the length of the
target column is less than that of the source column are ignored. (example
for Sybase, MySQL, and SQL Server)
"notCheckLength":["db1.t_test","db2.t_test"]
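The fuzzy-match tokens used in this file can be modeled as follows. This is an interpretation, not the tool's verified implementation: it assumes \% behaves like the SQL LIKE wildcard % (any run of characters, possibly empty) and \_ like _ (exactly one character), and converts a pattern to an anchored regular expression.

```python
import re

def fuzzy_to_regex(pattern):
    # \% -> any run of characters (like SQL LIKE %); \_ -> one character (like _).
    out = ["^"]
    i = 0
    while i < len(pattern):
        if pattern.startswith(r"\%", i):
            out.append(".*"); i += 2
        elif pattern.startswith(r"\_", i):
            out.append("."); i += 2
        else:
            out.append(re.escape(pattern[i])); i += 1
    out.append("$")
    return "".join(out)

print(bool(re.match(fuzzy_to_regex(r"haha\%"), "haha_tbl")))  # True
print(bool(re.match(fuzzy_to_regex(r"hah\_"), "haha")))       # True
print(bool(re.match(fuzzy_to_regex(r"hah\_"), "hahaa")))      # False: \_ is one character
```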
4.1.5 exclusiveDataOnly_obj.ini
Scenario
If some databases or tables only need to export or verify their table structures and
do not need to import or export their data during data migration, you can
configure these databases and tables in the exclusiveDataOnly_obj.ini file.
Description
The databases and tables that only need to export or verify DDL-defined table
structures are configured as follows:
Source-database-name[.Table-name]
● Content in square brackets ([]) is optional.
● Use commas (,) to separate multiple conditions.
● Add the configuration to the blank area above ### Parameter Description
in the configuration file.
● If cross-schema migration is not supported by GaussDB 100 V100R003, the
configuration is as follows: Source-database-name.Schema name_Table-name.
(In this case, only the public schema is supported.)
● If cross-schema migration is supported by GaussDB 100 V100R003, the
configuration is as follows: Source-database-name.Schema name.Table-name.
● The databases and tables support fuzzy match.
\%: any number of characters, including none, for example, haha\%
\_: any single character, for example, hah\_
Precautions
● If the exclusiveDataOnly_obj.ini file is not configured, no databases or tables
will be processed as follows: only DDL-defined table structures are exported
or verified and no data is exported or imported.
● In Oracle and GaussDB 100 V100R003, database names and table names are
case-insensitive in database or table creation statements, but uppercase is
adopted for statement execution by default. Therefore, database names and
table names must be in upper case in the configuration file. (If a database
name or table name is enclosed in double quotation marks ("") in the
statement, the name is case-sensitive. In this case, the letter case of the name
in the configuration file must be the same as that in the statement.)
● In Sybase, MySQL, and SQL Server, database names and table names are
case-sensitive in database or table creation statements. Therefore, the letter
case of a name in the configuration file must be the same as that in the
statement.
● Mapping cannot be configured in the file.
Examples
● Only export or verify structures of tables in the DB1 and DB2 databases
(example for Oracle and GaussDB 100 V100R003).
DB1,DB2
● Only export or verify the structure of the t_test table in the db1 database and
the t_test2 table in the db2 database (example for Sybase, MySQL, and SQL
Server).
db1.t_test,db2.t_test2
4.1.6 diff_ddl_obj.ini
Scenario
If source and target tables have different DDLs and you want to migrate data of
certain columns in the source table, configure the involved table and column
names in the diff_ddl_obj.ini file to migrate such data.
Description
Configure the migration as follows:
Source-database-name.Source-table-name:Source-database-column-name[(Target-database-column-name)]
● Content in square brackets ([]) is optional.
● You can configure multiple column names that are separated with commas
(,).
● You can configure multiple table names and each table has its configuration
information in a separate line.
● Add the configuration to the blank area above ###Parameter Description.
● If cross-schema migration is not supported by GaussDB 100 V100R003C10,
the configuration is as follows: Database-name.Schema-name_Table-
name:Source-database-column-name[(Target-database-column-name)]. (In
this case, only the public schema is supported.)
● If cross-schema migration is supported by GaussDB 100 V100R003C10, the
configuration is as follows: Database-name.Schema name.Table-name:Source-
database-column-name[(Target-database-column-name)].
● This file does not support fuzzy match.
● Column names configured in this file are case insensitive. However, for
columns with the same name but different letter cases in source or target
tables, configure the column names as they are.
Precautions
● If this file is not configured, mapping between the source and target table will
not be specified.
● By default, database and table names are created in uppercase in Oracle and
GaussDB 100 V100R003C10 databases. Therefore, the database and table names must be in upper case in the configuration file.
Examples
● Migrate only the ID and NAME columns of TABLE1 in the DB1 database
(example for Oracle and GaussDB 100 V100R003).
DB1.TABLE1:ID,NAME
● Migrate only the id and name columns of table2 in the db2 database to the
num and nickname columns of the table in the target database, respectively
(example for Sybase, MySQL, and SQL Server).
db2.table2:id(num),name(nickname)
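The column-mapping syntax above can be illustrated with a short parser. This is a hypothetical sketch (the helper names are invented): it splits a diff_ddl_obj.ini line into the table part and a source-to-target column map, where a column without parentheses keeps its own name.

```python
import re

# One column entry: source-column[(target-column)]
COL = re.compile(r"^(?P<src>[^()]+?)(?:\((?P<tgt>[^()]+)\))?$")

def parse_diff_ddl_line(line):
    table_part, cols_part = line.strip().split(":", 1)
    mapping = {}
    for col in cols_part.split(","):
        m = COL.match(col.strip())
        src, tgt = m.group("src"), m.group("tgt")
        mapping[src] = tgt or src  # unmapped columns keep their source name
    return table_part, mapping

print(parse_diff_ddl_line("DB1.TABLE1:ID,NAME"))
print(parse_diff_ddl_line("db2.table2:id(num),name(nickname)"))
```

The second call reproduces the Sybase/MySQL/SQL Server example: id maps to num and name maps to nickname.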
4.2 Processes
Data migration supports the following processes: export process, import process,
and export and import process. You can choose from them as needed.
● You do not need to configure the target database and server information in the export
process. Ensure that the file is in JSON format.
● Set flow_type=1 for the export process.
● The local export path under data_path is optional. If it is not specified, the exported file
is generated in the home directory of the user who started DataSync. To specify the
path, ensure that the path is accessible to the user running DataSync.
● A remote path can be configured. The exported file is generated in the configured
remote path and the local file is automatically deleted.
{
"flow_type":1,
"export_db":{
"database_type":2,
"db":{
"ip":"192.168.0.1",
"username":"onetypeuser",
"password":"urEkRwfVrNKEQ4dMeRvj8g==",
"port":1521,
"db_name":"",
"server_name":"orcl",
"trust_store":"",
"trust_store_password":"",
"key_store":"",
"key_store_password":""
}
},
"import_db":{
"database_type":6,
"db":{
"ip":"",
"username":"",
"password":"",
"port":1888,
"trust_store":"",
"trust_store_password":"",
"key_store":"",
"key_store_password":"",
"table_space":"",
"index_table_space":""
},
"server":{
"ip":"",
"username":"",
"password":"",
"pub_key_file":"",
"port":22
}
},
"data_path":{
"export_local_path":"",
"export_remote_path":{
"ip":"192.168.0.1",
"username":"dbuser",
"password":"O5gs+S9n18P3uVFohVhpEA==",
"pub_key_file":"",
"port":22,
"path":"/home/dbuser/haha"
},
"import_local_path":"",
"import_remote_path":{
"ip":"",
"username":"",
"password":"",
"pub_key_file":"",
"port":22,
"path":""
}
},
"option":{
"column_separator":"~~~~~",
"row_separator":"@#\n#@",
"data_check_type":1,
"compression_before_translate":false,
"disable_foreign_key":true,
"check_ddl":true,
"nls_lang":"utf8",
"delete_file":true,
"ignore_lost_table":2,
"disable_trigger":true,
"check_obj_exists":true,
"ignore_sync_ddl":false,
"import_nologging":false,
"create_tab_with_default":false,
"import_threads_per_obj":10,
"import_total_task":5,
"import_allow_max_errors":0,
"import_force":false,
"import_check_row_count":false,
"truncate_before_import_db_data":true,
"export_system_rowcount_offset":10,
"export_total_task":10,
"export_allow_max_errors":0,
"export_force":false,
"export_check_row_count":false,
"export_append_on":false,
"export_max_rownum":-1
}
}
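Because the process notes require the file to be valid JSON, a quick pre-flight check can catch mistakes before running DSS.jar. The sketch below is illustrative (the function name is invented; only flow_type and the field names from the example above are assumed): a malformed file, such as a missing comma between fields, fails at json.loads before any migration work starts.

```python
import json

def check_cfg(text, expected_flow):
    """Parse the config text and verify flow_type matches the intended process
    (1 = export, 2 = import in this document's examples)."""
    cfg = json.loads(text)  # raises json.JSONDecodeError on malformed JSON
    assert cfg.get("flow_type") == expected_flow, "wrong flow_type for this process"
    return cfg

cfg = check_cfg('{"flow_type":1,"export_db":{"database_type":2}}', expected_flow=1)
print("ok, database_type =", cfg["export_db"]["database_type"])
```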
The information Export ddl failed table count.....................[0] in bold is displayed only
when the source database is Sybase.
● You do not need to configure the source database information during the import
process. Ensure that the file is in JSON format.
● Set flow_type=2 for the import process.
● Ensure server and database information is correctly configured. Otherwise, DataSync
cannot properly connect to the server and database.
● The data_path parameter can be set to a local data file path used for the import. The path must be accessible to the user running DataSync, and the data files to be imported must exist in the path. You can also set a remote path. If the parameter is not set, the home directory of the user running DataSync is used by default.
● If both the password and pub_key_file parameters are configured, pub_key_file will be
used for connecting to the server. You are advised to select only one of the two.
● If pub_key_file is used and if DataSync, data files, and GaussDB 100 V300R001C00 are
deployed on three different servers, you need to manually configure the password-free
connection between the servers of GaussDB 100 V300R001C00 and the data files so that
GaussDB 100 V300R001C00 can obtain data files through SCP in password-free mode.
{
"flow_type":2,
"export_db":{
"database_type":1,
"db":{
"ip":"",
"username":"",
"password":"",
"port":4100,
"db_name":"",
"server_name":"",
"trust_store":"",
"trust_store_password":"",
"key_store":"",
"key_store_password":""
}
},
"import_db":{
"database_type":6,
"db":{
"ip":"192.168.0.1",
"username":"testsybase1",
"password":"U06lpFo5LlP9wvL4Kt4E4A==",
"port":1888,
"trust_store":"",
"trust_store_password":"",
"key_store":"",
"key_store_password":"",
"table_space":"",
"index_table_space":""
},
"server":{
"ip":"192.168.0.1",
"username":"dbuser",
"password":"O5gs+S9n18P3uVFohVhpEA==",
"pub_key_file":"",
"port":22
}
},
"data_path":{
"export_local_path":"",
"export_remote_path":{
"ip":"",
"username":"",
"password":"",
"pub_key_file":"",
"port":22,
"path":""
},
"import_local_path":"/home/dbuser/haha",
"import_remote_path":{
"ip":"",
"username":"",
"password":"",
"pub_key_file":"",
"port":22,
"path":""
}
},
"option":{
"column_separator":"~~~~~",
"row_separator":"@#\n#@",
"data_check_type":1,
"compression_before_translate":false,
"disable_foreign_key":true,
"check_ddl":true,
"nls_lang":"utf8",
"delete_file":true,
"ignore_lost_table":2,
"disable_trigger":true,
"check_obj_exists":true,
"ignore_sync_ddl":false,
"import_nologging":false,
"create_tab_with_default":false,
"import_threads_per_obj":10,
"import_total_task":5,
"import_allow_max_errors":0,
"import_force":false,
"import_check_row_count":false,
"truncate_before_import_db_data":true,
"export_system_rowcount_offset":10,
"export_total_task":10,
"export_allow_max_errors":0,
"export_force":false,
"export_check_row_count":false,
"export_append_on":false,
"export_max_rownum":-1
}
}
"key_store_password":""
}
},
"import_db":{
"database_type":6,
"db":{
"ip":"192.168.0.1",
"username":"testsybase1",
"password":"U06lpFo5LlP9wvL4Kt4E4A==",
"port":1888,
"trust_store":"",
"trust_store_password":"",
"key_store":"",
"key_store_password":"",
"table_space":"",
"index_table_space":""
},
"server":{
"ip":"192.168.0.1",
"username":"dbuser",
"password":"O5gs+S9n18P3uVFohVhpEA==",
"pub_key_file":"",
"port":22
}
},
"data_path":{
"export_local_path":"",
"export_remote_path":{
"ip":"",
"username":"",
"password":"",
"port":22,
"pub_key_file":"",
"path":""
},
"import_local_path":"/home/dbuser/import_path",
"import_remote_path":{
"ip":"",
"username":"",
"password":"",
"port":22,
"pub_key_file":"",
"path":""
}
},
"option":{
"column_separator":"~~~~~",
"row_separator":"@#\n#@",
"data_check_type":1,
"compression_before_translate":false,
"disable_foreign_key":true,
"check_ddl":true,
"nls_lang":"utf8",
"delete_file":true,
"ignore_lost_table":2,
"disable_trigger":true,
"check_obj_exists":true,
"ignore_sync_ddl":false,
"import_nologging":false,
"create_tab_with_default":false,
"import_threads_per_obj":10,
"import_total_task":5,
"import_allow_max_errors":0,
"import_force":false,
"import_check_row_count":false,
"truncate_before_import_db_data":true,
"export_system_rowcount_offset":10,
"export_total_task":10,
"export_allow_max_errors":0,
"export_force":false,
"export_check_row_count":false,
"export_append_on":false,
"export_max_rownum":-1
}
}
After the export and import are complete, the following information is displayed:
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Start checking config.............................[ok]
Start syncing DDL.................................[ok]
Start converting environment......................[ok]
Start syncing data................................[ok]
Data syncing......................................[0/1]
Data sync completed...............................[1/1]
Start recovering envrionment......................[ok]
Start collecting results..........................[ok]
Task start time...................................[2019-06-13 15:50:59]
Task end time.....................................[2019-06-13 15:51:48]
Total spent time..................................[48.885s]
Export successful data (rows).....................[10]
Export failed data (rows).........................[0]
Export ddl failed table count...............[0]
Export data failed table count....................[0]
Import successful data (rows).....................[10]
Import failed data (rows).........................[0]
Import failed table(tables).......................[0]
Report details path...............................[./logs/reports_2019-06-13/15h-50m-59s/]
The information Export ddl failed table count.....................[0] in bold is displayed only
when the source database is Sybase.
4.3 Reports
At the end of each DDL synchronization, export, import, and export+import
process, DataSync generates a report to provide users with information such as
execution results and execution time consumption.
Report Contents
CreateTblReport.csv records the creation result, index, index creation result,
foreign key, foreign key creation result, and rollback result of each automatically
created table.
The creation results can be:
Report Path
DataSync displays the report path after its execution is complete, as shown in the
following information in bold:
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Start checking config.............................[ok]
Start syncing DDL.................................[failed]
[Msg]:./logs/reports_2019-06-13/19h-48m-40s/
Example Contents
SOURCEDB,SOURCETBL,TARGETDB,TARGETTBL,TBLCREATEDRESULT,INDEXES,IDXCREATEDRESULT,FOREI
GNKEYS,FKCREATEDRESULT,ROLLBACK
testsybase3,tbl_test1000W6,TESTSYBASE3,TBL_TEST1000W6,ERROR,--,--,--,--,--
Report Contents
During DDL synchronization, the following information is verified:
● Database-level data verification
Check whether the source database and target database exist.
● Table-level data verification
Check whether the source table and target table exist.
Report Path
DataSync displays the report path after its execution is complete, as shown in the
following information in bold:
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Start checking config.............................[ok]
Start syncing DDL.................................[failed]
[Msg]:./logs/reports_2019-06-13/19h-45m-40s/
Example Contents
SOURCEDB,TARGETDB,SOURCETBL,TARGETTBL,SOURCEFIELD,TARGETFIELD,SOURCETYPE,TARGETTYPE,
SOURCENULLABLE,TARGETNULLABLE,SOURCEFIELDLENGTH,TARGETFIELDLENGTH,LEVEL,RESULT,TIME
DSS,DSS,PUBLIC_TABLE1,TABLE1,C_DATE,--,TIMESTAMP WITHOUT TIME ZONE,--,NOT NULL,--,4,0,error,
[reasons]1.The target table is missing this column;,2019-06-13 19:45:43
Report Contents
DumpReport.csv records the time consumption of export, number of total rows,
number of rows that are successfully exported, number of rows that fail to be
exported, and export result of each table.
The export result can be:
● SUCCESSED: All data is exported successfully.
● PARTSUCCESSED: The amount of data that failed the export is less than or
equal to the value of export_allow_max_errors.
● FAILED: The amount of data that failed the export is greater than the value
of export_allow_max_errors.
Report Path
DataSync displays the report path after its execution is complete, as shown in the
following information in bold:
Example Contents
REPORTTYPE,DBNAME,TBLNAME,EXPORTSTART,EXPORTEND,EXPORTCOST(MS),EXPORTEDROWS,EXPO
RTFAILEDROWS,EXPORTRESULT
dump,ONETYPEUSER,ONE_TYPE_1,2019-06-13 15:01:40,2019-06-13 15:01:41,830ms,100,0,SUCCESSED
Report Contents
LoadReport.csv records the time consumption of import, number of total rows,
number of rows that are successfully imported, number of rows that fail to be
imported, and import result of each table.
The import result can be:
● SUCCESSED: All data is imported successfully.
● PARTSUCCESSED: The amount of data that failed the import is less than or
equal to the value of import_allow_max_errors.
● FAILED: The amount of data that failed the import is greater than the value
of import_allow_max_errors.
Report Path
DataSync displays the report path after its execution is complete, as shown in the
following information in bold:
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Start checking config.............................[ok]
Start syncing DDL.................................[ok]
Start converting environment......................[ok]
Start syncing data................................[ok]
Data importing....................................[0/0]
Data import completed.............................[1/0]
Start recovering envrionment......................[ok]
Start collecting results..........................[ok]
Task start time...................................[2019-06-13 20:33:53]
Task end time.....................................[2019-06-13 20:34:08]
Total spent time..................................[14.789s]
Import successful data (rows).....................[0]
Import failed data (rows).........................[10]
Import failed table(tables).......................[1]
Report details path........................[./logs/reports_2019-06-13/20h-33m-53s/]
Example Contents
REPORTTYPE,SOURCEDB,TARGETDB,SOURCETBL,TARGETTBL,IMPORTSTART,IMPORTEND,IMPORTCOST(
MS),IMPORTEDROWS,IMPORTFAILEDROWS,IMPORTRESULT,ISFKENABLED
load,testsybase2,TESTSYBASE2,tbl_test1000W3,TBL_TEST1000W3,2019-06-13 20:36:27,2019-06-13
20:36:29,2329ms,10,0,SUCCESSED,No ForeignKey
Report Contents
CompleteReport.csv records the time consumption, number of total rows,
number of successful rows, number of failed rows, and execution result in an
export process and an import process of each table.
Report Path
DataSync displays the report path after its execution is complete, as shown in the
following information in bold:
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Start checking config.............................[ok]
Start syncing DDL.................................[ok]
Start converting environment......................[ok]
Start syncing data................................[ok]
Data syncing......................................[0/1]
Data sync completed...............................[1/1]
Start recovering envrionment......................[ok]
Start collecting results..........................[ok]
Task start time...................................[2019-06-13 15:55:10]
Task end time.....................................[2019-06-13 15:55:59]
Total spent time..................................[48.690s]
Export successful data (rows).....................[10]
Export failed data (rows).........................[0]
Export data failed table count....................[0]
Import successful data (rows).....................[10]
Import failed data (rows).........................[0]
Import failed table(tables).......................[0]
Report details path...............................[./logs/reports_2019-06-13/15h-55m-10s/]
Example Contents
REPORTTYPE,SOURCEDB,TARGETDB,SOURCETBL,TARGETTBL,EXPORTSTART,EXPORTEND,EXPORTCOST(
MS),EXPORTTOTALROWS,EXPORTEDROWS,EXPORTFAILEDROWS,EXPORTRESULT,IMPORTSTART,IMPOR
TEND,IMPORTCOST(MS),IMPORTEDROWS,IMPORTFAILEDROWS,IMPORTRESULT,ALLCOSTS(MS),ISFKE
NABLED
dump&load,testsybase2,TESTSYBASE2,tbl_test1000W3,TBL_TEST1000W3,2019-06-13 20:39:35,2019-06-13
20:40:08,32162ms,10,10,0,SUCCESSED,2019-06-13 20:40:08,2019-06-13 20:40:10,2306ms,10,0,SUCCESSED,
34468ms,No ForeignKey
Report Contents
IncrementReport.csv records the time consumption, number of table rows,
number of successful rows, number of failed rows, migration result, and path of
files about failed SQL executions for each table in an incremental migration.
Report Path
DataSync displays the report path after its execution is complete, as shown in the
following information in bold:
DataSync (1.1.1), From Huawei !
Copyright © Huawei Technologies Co , Ltd 2019 All Rights Reserved.
Start checking config.............................[ok]
Start syncing increments..........................[ok]
commited/failed/remaining.........................[0/0/0]
spent :0.000s
Task start time...................................[2019-06-22 20:02:35]
Task end time.....................................[2019-06-22 20:02:37]
Total spent time..................................[2.231s]
Logs details path.................................[./logs/reports_2019-06-22/20h-02m-35s/]
Example Contents
REPORTTYPE,SOURCEDB,TARGETDB,SOURCETBL,TARGETTBL,TASKSTART,TASKEND,TASKCOST(MS),TOTA
LROWS,IMPORTEDROWS,FAILEDROWS,RESULT,FAILEDSQLSFILE
increment,DSS,DSS,TESTCLOB,TESTCLOB,--,--,--,-1,-1,-1,--,table does not exists or the table don't have
increment table in source DB
4.4 Logs
Log files are generated during the execution of DataSync. You can refer to these
logs for routine management and maintenance.
Log Types
The following logs are generated during the execution of DataSync:
Log Policies
dss_info_log.log and dss_error_log.log are generated for each execution of
DataSync. The two logs are stored in a folder named after the running time of
DataSync. In addition, all log folders in a day are stored in a folder named after
the current date, as shown in the following information in bold:
syncusr@plaat:/home/syncusr/GAUSSDB100-V300R001C00-DATASYNC/DataSync> cd logs/
syncusr@plaat:/home/syncusr/GAUSSDB100-V300R001C00-DATASYNC/DataSync/logs> ll
drwx------ 4 syncusr syncgrp 4096 Jun 11 17:44 reports_2019-06-11
drwx------ 16 syncusr syncgrp 4096 Jun 13 20:39 reports_2019-06-13
syncusr@plaat:/home/syncusr/GAUSSDB100-V300R001C00-DATASYNC/DataSync/logs> cd
reports_2019-06-13/
syncusr@plaat:/home/syncusr/GAUSSDB100-V300R001C00-DATASYNC/DataSync/logs/reports_2019-06-13>
ll
drwx------ 2 syncusr syncgrp 4096 Jun 13 15:25 15h-25m-59s
drwx------ 2 syncusr syncgrp 4096 Jun 13 15:27 15h-27m-13s
drwx------ 2 syncusr syncgrp 4096 Jun 13 15:29 15h-29m-04s
drwx------ 2 syncusr syncgrp 4096 Jun 13 15:32 15h-32m-39s
drwx------ 2 syncusr syncgrp 4096 Jun 13 15:50 15h-50m-27s
drwx------ 2 syncusr syncgrp 4096 Jun 13 15:51 15h-50m-59s
drwx------ 2 syncusr syncgrp 4096 Jun 13 17:39 15h-55m-10s
drwx------ 2 syncusr syncgrp 4096 Jun 13 17:40 17h-40m-18s
drwx------ 2 syncusr syncgrp 4096 Jun 13 19:24 19h-24m-13s
drwx------ 2 syncusr syncgrp 4096 Jun 13 20:30 20h-30m-06s
drwx------ 2 syncusr syncgrp 4096 Jun 13 20:34 20h-33m-53s
drwx------ 2 syncusr syncgrp 4096 Jun 13 20:35 20h-35m-11s
drwx------ 2 syncusr syncgrp 4096 Jun 13 20:36 20h-36m-18s
drwx------ 2 syncusr syncgrp 4096 Jun 13 20:48 20h-39m-27s
By default, the size of each log file is 100 MB. A maximum of 500 log files can be
generated. When the number of log files reaches the maximum, the earliest log
files will be overwritten.
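As a quick illustration, the per-day and per-run folder names shown above follow a simple date/time pattern that can be reproduced with date. This is a sketch of the naming scheme, not DataSync's own code:

```shell
# Reconstruct the log-folder naming scheme shown above: one folder per day
# (reports_YYYY-MM-DD) holding one folder per run (HHh-MMm-SSs).
day_dir="reports_$(date +%Y-%m-%d)"
run_dir="$(date +%Hh-%Mm-%Ss)"
echo "./logs/${day_dir}/${run_dir}/"
```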
Table 4-3
Log Path
The log path can be specified by the -l parameter in the command line. If the log
path is not specified, logs are generated in the logs directory under the running
package path by default.
To delete local files, set delete_file in the cfg.ini file to true. Note that
regardless of the value of delete_file in the cfg.ini file, local files are
deleted after DataSync transfers the exported data from the local directory to
the configured remote directory, if one is configured.
In an export+import process, the remote path for export is invalid. In this case,
DataSync generates the exported data files in the configured directory, and then
transfers the files to the home directory of the user who installed the GaussDB
100 V300R001 database. If delete_file is set to true, the data files in the home
directory of the database installation user will be deleted after the import is
successful. Otherwise, the data files will not be deleted.
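For reference, toggling the option described above looks like the following; the single-key file created here is a stand-in for illustration, not the full layout of DataSync's cfg.ini:

```shell
# Hedged sketch: set delete_file to true in cfg.ini and confirm the change.
# (The file written here is a placeholder, not a complete DataSync cfg.ini.)
printf 'delete_file = false\n' > cfg.ini
sed -i 's/^delete_file = .*/delete_file = true/' cfg.ini
grep '^delete_file' cfg.ini
```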
5 Data Migration
Offline Migration
Step 1 Log in as user root to the server where DataSync resides.
Step 2 If DataSync and GaussDB 100 are deployed on different servers during offline
migration, run commands by using a non-database installation user.
To create a user, perform the following steps:
Create the user and user group for running DataSync on the server.
groupadd syncgrp
useradd -g syncgrp -d /home/syncusr -m -s /bin/bash syncusr
● If DataSync and GaussDB 100 are deployed on the same server, run the
following command by using the database installation user (for example, the
GaussDB 100 database installation user is omm).
su - omm
[syncusr@plaat syncusr]> ll
total 11684
drwx------ 2 syncusr syncgrp 4096 May 28 23:26 GAUSSDB100-V300R001C00-DATASYNC
-rw-r--r-- 1 root root 18350870 May 28 23:26 GAUSSDB100-V300R001C00-DATASYNC.tar.gz
[syncusr@plaat syncusr]> ll
total 11680
drwx------ 5 syncusr syncgrp 4096 May 28 23:26 DataSync
-rw------- 1 syncusr syncgrp 11934916 May 28 23:26 DataSync.tar.gz
-rw------- 1 syncusr syncgrp 82 May 28 23:26 DataSync.tar.gz.sha256
Before data migration, you can synchronize table structures and modify DDL
statements for the tables based on the synchronization report. For details about
the DDL synchronization report, see DDL Synchronization Report.
3. Execute DataSync.
In this case, only table structures are synchronized, and data is not imported
or exported.
java -jar DSS.jar [-p cfg.ini_path] [-i exp_obj.ini_path] [-e exclusive_obj.ini_path] [-d
ignore_ddl.ini_path] [-o exclusiveDataOnly_obj.ini] [-l /data/gaussdba/log_path] [-imp_b
importerrorlog_path]
For details about the syntax supported by DataSync, see DataSync Syntax.
----End
Online Migration
Step 1 Log in as user root to the server where DataSync resides.
Step 2 Create the user and user group for running DataSync on the server.
groupadd syncgrp
useradd -g syncgrp -d /home/syncusr -m -s /bin/bash syncusr
[syncusr@plaat syncusr]> ll
total 11684
drwx------ 2 syncusr syncgrp 4096 May 28 23:26 GAUSSDB100-V300R001C00-DATASYNC
-rw-r--r-- 1 root root 18350870 May 28 23:26 GAUSSDB100-V300R001C00-DATASYNC.tar.gz
[syncusr@plaat syncusr]> ll
total 11680
drwx------ 5 syncusr syncgrp 4096 May 28 23:26 DataSync
-rw------- 1 syncusr syncgrp 11934916 May 28 23:26 DataSync.tar.gz
-rw------- 1 syncusr syncgrp 82 May 28 23:26 DataSync.tar.gz.sha256
Before data migration, you can synchronize table structures and modify DDL
statements for the tables based on the synchronization report. For details about
the DDL synchronization report, see DDL Synchronization Report.
3. Execute DataSync.
In this case, only table structures are synchronized, and data is not imported
or exported.
java -jar DSS.jar [-p cfg.ini_path] [-i exp_obj.ini_path] [-e exclusive_obj.ini_path] [-d
ignore_ddl.ini_path] [-o exclusiveDataOnly_obj.ini] [-l /data/gaussdba/log_path] [-imp_b
importerrorlog_path]
For details about the syntax supported by DataSync, see DataSync Syntax.
Step 11 Perform full migration.
java -jar DSS.jar [-p cfg.ini_path] [-i exp_obj.ini_path] [-e exclusive_obj.ini_path] [-o
exclusiveDataOnly_obj.ini] [-l /data/gaussdba/log_path] [-imp_b importerrorlog_path]
----End
6 Security
Security Operations
According to service security requirements, the security level of DataSync does not
need to be defined in each security zone when an intranet is used.
Passwords
DataSync encrypts user passwords.
Encryption Algorithms
DataSync encrypts sensitive data using standard encryption algorithms, meeting
security requirements.
After obtaining the installation package, verify the package integrity. The package
can be deployed only after it passes the verification.
Run the following command in the directory containing the DataSync package. If
OK is returned, the package passes the verification. Otherwise, the package fails
the verification; in that case, contact Huawei technical support.
sha256sum -c DataSync.tar.gz.sha256
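The same check can be rehearsed on any file. The package name below is a placeholder; the real DataSync.tar.gz.sha256 already ships alongside the package:

```shell
# Rehearse the integrity check: generate a .sha256 manifest for a sample
# file, then verify it exactly as done for DataSync.tar.gz above.
echo 'sample payload' > pkg.tar.gz
sha256sum pkg.tar.gz > pkg.tar.gz.sha256
sha256sum -c pkg.tar.gz.sha256    # prints "pkg.tar.gz: OK" on success
```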
– verify_ca: Try only SSL connection and check whether the server
certificate is issued by a trusted CA.
– verify_full: Try only SSL connection and check whether the server
certificate is issued by a trusted CA and whether the host name of the
server is the same as that in the certificate.
● ZSQL_SSL_KEY
Specifies the private key file of the client, used to decrypt data encrypted
using the public key. The value must be an absolute path, for example, export
ZSQL_SSL_KEY='/home/xxx/client.key'. xxx indicates the database
installation user.
7 Glossary
This chapter describes the glossary, acronyms, and abbreviations used in this
document.
Term Description
Issue 04
Date 2019-12-28
Purpose
This document describes error information that may be displayed when you use
GaussDB 100.
If errors are reported in the compilation phase of stored procedures, the specified
error codes will be summarized and output in PLC-XXXXX format (specific to
stored procedures), which is equivalent to GS-XXXXX.
GaussDB 100 is compatible with the usage habits of mainstream databases. You can
use native GaussDB 100 interface names or their counterparts in mainstream
databases. For details, see Interface Mapping (GaussDB 100 Native Interface
Names vs. Mainstream Database Interface Names). The interfaces mentioned in this
document use their native GaussDB 100 names.
Intended Audience
This document is designed for all GaussDB 100 users.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Change History
Version Change Description Date
03 Added: 2019-06-26
● GS-00877 in GS-00871 -- GS-00880
● GS-01258 in GS-00251 -- GS-00260
02 Added: 2019-04-05
● zsql Error Codes
● GS-01345 in GS-01341 -- GS-01350
Modified:
● GS-00927 in GS-00921 -- GS-00930
GS-00002: Failed to open the file %s, the error code was %d
Description: Failed to open the file.
Solution:
● Ensure that the file exists, the user has read permission, and the disk is not
damaged.
● If the fault persists after the above measure is taken, contact Huawei
technical support.
GS-00003: Failed to create the file %s, the error code was %d
Description: Failed to create the file.
Solution:
● Ensure that the disk has free space.
● If the disk has free space but the fault persists, contact Huawei technical
support.
GS-00004: Failed to read data from the file, the error code was %d
Description: Failed to read the file.
Solution:
● Ensure that the host disk is normal.
● If the host disk is normal but the fault persists, contact Huawei technical
support.
GS-00006: The file name (%s) exceeded the maximum length (%u)
Description: The file name was invalid.
Solution: Proceed according to the error information.
GS-00017: Failed to initialize event notification for agent, error code %d.
Description: Failed to create the event.
Solution:
● Ensure that the host is normal.
● If the host is normal but the fault persists, contact Huawei technical support.
Solution: Ensure that the path exists and is valid and ensure that the user has the
access permission.
GS-00028: Write size %d, expected size %d, mostly because file size is larger than
disk, please delete the incomplete file
Description: Failed to completely write the file.
Solution:
● Ensure that the system disk has sufficient space.
● If the system disk has sufficient space but the fault persists, contact Huawei
technical support.
GS-00030: Read size %d, expected size %d, please check incomplete file
Description: Failed to completely read the file.
Solution: Ensure that the file is correct and intact.
_AGENT_STACK_SIZE = Size of each column value x Number of parameters + Size of data in two rows to
be pushed + Size of data for type conversion
The maximum data size of a single row is 64,000 bytes, and type conversion requires 1600
bytes.
For details about the _AGENT_STACK_SIZE parameter, see "Parameters > Advanced
Optimization > Thread Processing" in GaussDB 100 V300R001C00 Database Reference
(Standalone).
● Handle the data in batches according to the estimation result.
● Run the following statement to change the value of _AGENT_STACK_SIZE,
and restart the database for the change to take effect:
ALTER SYSTEM SET _AGENT_STACK_SIZE = value;
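Plugging illustrative numbers into the estimation formula above; the per-column size and parameter count below are made-up values, while the 64,000-byte row limit and 1,600-byte conversion overhead come from the note:

```shell
# Worked example of the _AGENT_STACK_SIZE estimate, in bytes.
column_value_size=512   # assumed size of each column value
num_parameters=100      # assumed number of bind parameters
max_row_size=64000      # maximum single-row size (from the note above)
type_conversion=1600    # type-conversion overhead (from the note above)
required=$(( column_value_size * num_parameters + 2 * max_row_size + type_conversion ))
echo "$required"        # prints 180800
```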
Description: The stack depth exceeded the allowed maximum, which is 127.
Solution:
Solution:
Description: The RAFT function was not enabled, the Raft module was not
initialized, or the initialization failed.
Description: The size of the configuration file exceeded the allowed maximum.
Description: The name of the configuration file was a duplicate of an existing one.
Solution: Ensure that the name of the new configuration file is different from any
existing one.
Solution: Ensure that the user has read and write permissions for the entered path.
Description: NULL was specified in an SQL function that does not allow for NULL.
Solution: Modify the SQL statement by referring to the SQL function manual.
Description: The value of the SQL function parameter was out of range.
Solution: Modify the SQL statement by referring to the description of the function
in GaussDB 100 V300R001C00 Database Reference.
GS-00250: There must be at least one clause for the analytic function
Description: No clause was specified in the analytical function.
Solution: Modify the SQL statement by referring to the function manual.
GS-00254: For invited and excluded nodes is both empty, ip whitelist function
can't be enabled
Description: The whitelist and blacklist were both empty, and the whitelist
checking function was not enabled.
Solution: Configure the whitelist or blacklist, and then enable the whitelist
checking function.
GS-00255: Ip whitelist function is enabled, invited and excluded nodes can't set to
both empty
Description: The whitelist checking function was enabled, but the whitelist and
blacklist were both empty.
Description: Whitelist verification proved that the injected command was not in
the whitelist.
Solution: Confirm the system environment variable settings returned by the error
code.
Description: Whitelist verification proved that the entered file path was not in the
whitelist.
Solution: Confirm the system environment variable settings returned by the error
code.
Description: The database has only one listening IP address, which cannot be
deleted.
Solution: Check the floating IP address configuration. First add one IP address and
then delete another one. Ensure that there is at least one listening IP address in
the database.
Description: The floating IP address added in the database configuration was not a
local IP address.
Solution: Use the ifconfig tool to check the IP addresses of the local NIC. Ensure
that the floating IP address added in the current database configuration is an
existing local IP address.
GS-00260: Can not get valid replication port for peer node.
Description: The REPL_PORT parameter was not configured on the peer database
or the local database was disconnected from the peer one.
Solution:
GS-00305: %s timeout
Description: The API timed out upon a network exception.
Solution:
Solution:
Solution:
Solution:
Solution:
GS-00325: Receive packet has no more data to read, packet size: %u, offset: %u,
read: %u
Description: The packet had no data for reading.
Solution: Contact Huawei technical support.
unable to get certificate CRL
Cause: Check whether the issuer of the CRL is the same as that of the device
certificate.
Solution: To check the certificate revocation status, load the CRL with the same
issuer as the device certificate.
self signed certificate in certificate chain
Cause: The certificate chain sent from the peer end has a root trust
certificate, but this certificate is not loaded on the local end, or the loaded
root trust certificate is not the issuer certificate of the device certificate
of the peer end.
Solution: Load the root trust certificate on the local end, and ensure that the
trust certificate loaded on the local end and the device certificate chain sent
from the peer end form a complete certificate chain, that is, all certificates
from the self-issued certificate to the device certificate have issuing
relationships.
unable to get local issuer certificate
Cause: 1. The certificate chain sent from the peer end has no root trust
certificate, and no trust certificate is loaded on the local end, or the loaded
root trust certificate is not the issuer certificate of the device certificate
of the peer end. 2. The number of certificates in the certificate chain is
greater than 10.
Solution: 1. Load the root trust certificate on the local end, and ensure that
the trust certificate loaded on the local end and the device certificate chain
sent from the peer end form a complete certificate chain, that is, all
certificates from the self-issued certificate to the device certificate have
issuing relationships. 2. Reduce the number of certificates in the certificate
chain. A database supports a maximum of 10 certificates in the chain.
certificate has expired
Cause: The UTC time of the current system is later than the end of the
certificate validity period.
Solution: If the validity period is incorrectly set, generate a new certificate
and ensure that the validity period of the certificate is later than the
current time.
GS-00342: REPL_PORT is used for replication only, external service will be rejected
Description: The port REPL_PORT was used by the client to connect to the
database.
Description: The client requested to establish an SSL connection, but the server did
not support SSL authentication.
Solution:
1. Configure SSL certificates and enable SSL authentication on the server. For
details, see "Database Usage > Connecting to a Database" in GaussDB 100
V300R001C00 User Guide (Standalone).
2. Set the SSL authentication mode of the client to DISABLED to disable SSL
connections. For a JDBC client, use useSSL=false in the URL. Note that disabling
SSL connections reduces the security of data communication.
Solution:
Description: Failed to establish the SSL connection. Generally, this error is raised in
the SSL handshake phase, indicating a failure in authenticating the client
certificate by the server. For details about the failure cause, see Table 2-2 based
on '%s'.
Solution:
1. Use OpenSSL to check the validity of the client device certificate. If the
authentication fails, update the SSL certificate.
2. Disable client authentication on the server and use SSL_VERIFY_PEER=FALSE.
Note that disabling the authentication prohibits the client identity from being
authenticated.
GS-00346: SSL certificate file "%s" has execute, group or world access permission
Solution: Run the chmod command to change the access permission for the SSL
certificate to 400 (read-only for the owner) or 600 (read-write for the owner).
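A minimal rehearsal of the fix above, using a placeholder file name:

```shell
# Restrict an SSL certificate/key file to its owner, as the solution above
# requires (client.key here is a placeholder).
touch client.key
chmod 600 client.key            # read-write for the owner only
stat -c '%a' client.key         # prints 600
```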
GS-00348: Failed to bind unix domain socket for %s, error code %d
Description: Failed to bind the Unix domain socket file.
Solution: Check whether the path of the Unix domain socket file already exists and
whether the user has required permissions.
● Check the error code in the error information. If the error code is 13, the
user does not have file permissions, and UDS_FILE_PERMISSIONS has been
incorrectly configured.
● If the error code is 19, UDS_FILE_PATH has been configured, but the file path
does not exist.
GS-00408: The resource requested in NOWAIT mode was being occupied or not
released after the request timed out.
Description: The resource requested in NOWAIT mode was being occupied or not
released after the request timed out.
Solution: Perform a commit or rollback operation in the session that occupies the
current resource to release the resource lock.
GS-00509: Column %u binding buffer is too small, buffer size: %u, size required:
%u
Description: The program-provided buffer was too small.
Solution: Contact Huawei technical support.
GS-00512: %s is null
Description: The operation object was null.
Solution: Proceed according to the error information.
Description: The client received an unexpected packet. That is, the command
keywords do not match.
Description: The SQL statement was too long. Currently, the allowed maximum
size of an SQL statement is 1 MB.
Description: The data type in the SQL statement does not match the expected one.
GS-00619: The number of columns specified in view creation was inconsistent with
that of columns covered in query
Description: The number of specified columns was incorrect.
Solution: Modify the SQL syntax according to the error information.
GS-00634: The total length (%d) of columns within an index exceeded the
maximum.
Description: The length of the index key exceeded the allowed maximum.
Solution: Correctly create an index.
GS-00635: %s
Description: The value was invalid.
GS-00642: The unique index or primary key was referenced by a foreign key.
Description: The table information was referenced by a foreign key.
Solution: Proceed according to the error information.
GS-00643: Table %s.%s is not empty, hint: use force option to flashback truncate
Description: The table was not empty.
Solution: Proceed according to the error information.
GS-00650: The column referenced by a foreign key was not the unique index or
primary key of the referenced table
Description: Constraint conditions were not met.
Solution: Proceed according to the error information.
GS-00657: Password is too simple, password should contain at least three of the
following character types:
A. at least one lowercase letter
B. at least one uppercase letter
C. at least one digit
D. at least one special character: `~!@#$%%^&*()-_=+\|[{}];:'",<.>/? and space
Description: The configured password was too simple.
Solution: Change the password based on the password complexity requirements.
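The rule can be paraphrased as a character-class count. The check below is an illustrative sketch, not the database's actual validator, and the candidate password is a made-up example:

```shell
# Count how many of the four character classes a candidate password hits;
# GS-00657 requires at least three of them.
pw='Passw0rd!'   # made-up candidate
classes=0
printf '%s' "$pw" | grep -q '[a-z]' && classes=$((classes+1))
printf '%s' "$pw" | grep -q '[A-Z]' && classes=$((classes+1))
printf '%s' "$pw" | grep -q '[0-9]' && classes=$((classes+1))
printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' && classes=$((classes+1))
echo "$classes"   # prints 4, so this candidate passes the three-class rule
```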
GS-00665: Value size(%u) from cast operand is larger than cast target size(%u)
Description: Failed to convert the value type.
Solution: Proceed according to the error information.
GS-00666: Lob value too large in expression (actual: %u, maximum: %u)
Description: There was an error in reading LOB.
Solution: Proceed according to the error information.
Solution: Rebuild the index or modify the constraint adding statement to ensure
that the constraint column and indexed column match. Alternatively, remove the
USING INDEX clause, and use the database-created default index.
GS-00680: Specified length of column %s too long(> %u) for its datatype in
partition key
Description: The maximum length of the data type of the partition key for
creating a partitioned table exceeded the allowed maximum.
Description: The session ID was invalid. For example, the entered session ID was
smaller than the number of reserved sessions, or the session ID did not match
the serial number. This error is reported when ALTER SYSTEM KILL SESSION is
executed.
Solution: Use the SELECT * FROM DV_SESSIONS; statement to query for the ID of
the session to be ended.
Description: Failed to kill the current session. When ALTER SYSTEM KILL SESSION
is executed, this error is reported.
Solution: Use the SELECT * FROM DV_SESSIONS; statement to query for the ID of
the session to be ended.
Description: If no column is specified during foreign key creation, the primary key
of the parent table will be used by default. This error is reported when the parent
table has no primary key in this situation.
Description:
Solution: Ensure that the numbers of columns in the left and right of the condition
are the same.
GS-00688: Sql has too many bind parameters, count = %d, max = %d
Description: The number of bind variables in the SQL statement exceeded the
allowed maximum.
Solution: Reduce the number of bind variables in the SQL statement.
GS-00714: Log file size should be larger than log keep size %lld
Description: The size of the log file was too small.
Solution: Increase the specified log file size or decrease the value of
LOG_BUFFER_SIZE in the configuration file.
GS-00723: The resource to be locked was occupied, and the wait for the resource
timed out or the NOWAIT mode was used.
Description: The resource to be locked was occupied, and the waiting for the
resource timed out; or the NOWAIT mode was used.
Solution: Release the resource and try again.
Description: The number of indexes in the table exceeded the allowed maximum.
Solution: Contact Huawei technical support.
GS-00744: Datafile %s has already been used, can not remove it in space %s
Description: The high-water mark (HWM) of the data file to be deleted was not 0,
indicating that the data file was in use and failed to be deleted.
Solution: Do not delete this data file.
GS-00767: %s property for %s exceeds or smaller than size that system allowed
Description: The size parameters of the data file were invalid.
Solution: Set the parameters to valid values.
GS-00770: Not all transactions were committed when the database was shut
down.
Description: Not all transactions were committed when the database was shut
down.
Solution: Commit or roll back the transactions not committed and shut down the
database again.
GS-00778: Database not in archive mode, can not execute backup prepare
Description: The database was not in the archiving mode.
Solution: Enable the archiving mode for the database.
GS-00795: The tablespace specified for the user to be created was not a temporary
tablespace.
Description: The tablespace specified for the user to be created was not a
temporary tablespace.
Solution: Specify a temporary tablespace for the user.
Description: The working thread was shut down. This error occurs when shutdown
is concurrently performed.
Solution: After the shutdown is complete, restart the database.
GS-00807: The user %s has logged in, can not be dropped now
Description: The user failed to be deleted because it had logged in to the
database.
Solution: Log out the user and try again.
GS-00812: Log file is not enough, requiring at least %u log files but only has %u
files
Description: The number of log files was less than 2 (system requirement).
Solution: Increase the number of log files.
GS-00816: Table partition key should be subsets of local primary or unique index
Description: The partition key was not a subset of the local unique index columns.
GS-00817: In the FOREIGN KEY constraint, the column type does not match the
type of the referenced column.
Description: In the FOREIGN KEY constraint, the column type does not match the
type of the referenced column.
Solution: Insert data or update the data with the matched type.
GS-00818: Profile has been assigned to user, can not been dropped without
cascade option.
Description: The profile had been allocated to a user and failed to be deleted
without CASCADE.
Solution: Add CASCADE to the DROP statement.
GS-00822: When the password of an existing database user was changed, the
original password was incorrectly entered.
Description: Incorrect old password was entered to modify a database user
password.
Solution: Enter the correct old password. If you forgot the password, log in as a
database administrator to reset the password.
log file. If the status is ACTIVE, keep querying the view until the status changes to
INACTIVE. Then, run the corresponding statement to drop or clear the file.
Description: The two-phase transaction held too many tables or LOB columns.
Solution: Reduce tables or LOB columns in the two-phase transaction.
GS-00856: The current constraint forbids the column data type from being
modified.
Description: The data type of the current column cannot be modified because the
column has the CHECK constraint.
Solution: Delete the column constraint and try again.
GS-00859: Recovery point %llu less than least recovery point %llu, open resetlogs
failed
Description: Failed to open the reset log. A possible cause is that the database was
not completely stopped, the database status was abnormal, or the log file was
damaged.
Solution:
● Start the database in open mode and run the shutdown command. After
data synchronization between memory and disks is disabled, try again.
● If the log file was damaged and RECOVER DATABASE UNTIL CANCEL had
been executed, the database cannot be restored to the consistent point. In
this case, run the ALTER DATABASE OPEN IGNORE LOGS command to
forcibly start the database. However, this operation will damage data
consistency.
Solution: Delete the tables and views that are no longer used.
GS-00875: Cannot alter database timezone when database has TIMESTAMP WITH
LOCAL TIME ZONE columns
Description: Failed to modify db_timezone because the current database has
columns of the TIMESTAMP WITH LOCAL TIME ZONE type.
Solution: Delete the columns of the TIMESTAMP WITH LOCAL TIME ZONE type.
Description: Valid data occupied more space than the set value during tablespace
shrinking.
Description: The database cannot enter the open state after some commands are
executed in the mount state.
Description: The standby database failed to send backup set information to the
primary database.
Description: The data page was damaged, which led to an error in statement
execution.
Solution: Repair the page online or offline, and then access the page again.
Description: The data file size was smaller than the minimum value required by
the system.
Solution: Run the RESIZE command with parameter values increased to change
the data file size.
Description: The index has been deleted. This error may also be reported during
the rebuilding process.
Solution: Check whether the index is actually deleted. If it is, recreate the index
and try again. If it is not, try again.
GS-00901: The referenced composite type (only RECORD supported) was not
initialized
Description: The referenced composite type (only RECORD supported) was not
initialized.
Solution: Initialize the composite type before referencing it.
GS-00902: The declaration of CASE was not found when the CASE statement was
executed
Description: The declaration of CASE was not found when the CASE statement
was executed.
Solution: Ensure that the CASE statement is correctly declared.
Solution: Ensure that the invoked SQL statements can return values.
GS-00926: PL/SQL: Return types of Result Set variables or query do not match
Description: The returned result set was inconsistent with the variable set specified
by INTO.
Solution: Ensure that the number of columns in the FETCH result set is equal to
that in the variable set specified by INTO.
GS-00930: The number of sys_refcursor can be returned extend the max size:%d
Description: The number of cursors returned by the return_result interface
exceeded the system-defined upper limit (2000).
Solution: Reduce the number of cursors returned by return_result.
GS-00942: With ROWTYPE attribute, '%s' must name a table, cursor or cursor-
variable
Description: The object referenced by %ROWTYPE was not a table, cursor, or
pointer variable.
Solution: Reference only a table, cursor, or pointer variable for %ROWTYPE.
GS-00950: The expression %s was used as the assignment target (left operand of
the assignment statement).
Description: The expression was used as the assignment target (left operand of
the assignment statement).
Solution: Do not use an expression as an assignment target.
GS-00960: The OUT and IN OUT parameters were not allowed to contain a
default expression.
Description: The OUT and IN OUT parameters were not allowed to contain a
default expression.
Solution: Do not use a default expression in the OUT and IN OUT parameters.
Solution: Check whether the expression in the line is correct, whether the
expression meets the constraint conditions, and whether the reference cursor with
a parameter is used to assign a value.
Description: The data type of the dynamic SQL statements executed by EXECUTE
IMMEDIATE was not string.
Solution: Change data type of the dynamic SQL statements executed by EXECUTE
IMMEDIATE to string.
GS-00972: The into clause and select need to appear together in 'execute
immediate'
Solution: Use a label name that is a variable type or the one enclosed with double
quotation marks.
Solution: Do not use the FOR loop variable in the range for traversing or as a
parameter for traversing cursors.
GS-01101: The number of index partitions was different from that of table
partitions.
Description: The number of index partitions was different from that of table
partitions.
Solution: Locate the cause for the inconsistency.
GS-01113: Updating the partition key would cause changes to partitions, while
cross-partition update was not supported.
Description: Updating the partition key would cause changes to partitions, while
cross-partition update was not supported.
Solution: Note the update on upper boundary values of the partition key. Do not
perform cross-partition update.
GS-01212: Failed to get timestamp from gts (node id=%u), error number = %05d
Description: Failed to obtain the timestamp when establishing a connection
between nodes.
Solution: Contact Huawei technical support.
GS-01223: Integrity constraint violated - parent key (child record) not found
Description: The operation violated the integrity constraint condition.
Solution: Proceed according to the error information.
GS-01312: %s expected
Description: The syntax was incorrect, with the keyword missing.
Solution: Proceed according to the error information.
GS-01316: Unexpected %s
Description: The syntax was incorrect, with an incorrect keyword.
Solution: Proceed according to the error information.
GS-01324: %s failed
Description: Failed to invoke the function.
Solution: Proceed according to the error information.
GS-01332: Cannot create a user with the name same as sys user
Description: A user with the same name as the system user cannot be created.
SYS_BACKUP_SETS BACKUP_SET$
SYS_COLUMNS COLUMN$
SYS_COMMENTS COMMENT$
SYS_CONSTRAINT_DEFS CONSDEF$
SYS_DATA_NODES DATA_NODES$
EXP_TAB_ORDERS DBA_EXP$TBL_ORDER
EXP_TAB_RELATIONS DBA_EXP$TBL_RELATIONS
SYS_DEPENDENCIES DEPENDENCY$
SYS_DISTRIBUTE_RULES DISTRIBUTE_RULE$
SYS_DISTRIBUTE_STRATEGIES DISTRIBUTE_STRATEGY$
SYS_DUMMY DUAL
SYS_EXTERNAL_TABLES EXTERNAL$
SYS_GARBAGE_SEGMENTS GARBAGE_SEGMENT$
SYS_HISTGRAM_ABSTR HIST_HEAD$
SYS_HISTGRAM HISTGRAM$
SYS_INDEXES INDEX$
SYS_INDEX_PARTS INDEXPART$
SYS_JOBS JOB$
SYS_LINKS LINK$
SYS_LOBS LOB$
SYS_LOB_PARTS LOBPART$
SYS_LOGIC_REPL LOGIC_REP$
SYS_DML_STATS MON_MODS_ALL$
SYS_OBJECT_PRIVS OBJECT_PRIVS$
SYS_PART_COLUMNS PARTCOLUMN$
SYS_PART_OBJECTS PARTOBJECT$
SYS_PART_STORES PARTSTORE$
SYS_PENDING_DIST_TRANS PENDING_DISTRIBUTED_TRANS$
SYS_PENDING_TRANS PENDING_TRANS$
SYS_PROCS PROC$
SYS_PROC_ARGS PROC_ARGS$
SYS_PROFILE PROFILE$
SYS_RECYCLEBIN RECYCLEBIN$
SYS_ROLES ROLES$
SYS_SEQUENCES SEQUENCE$
SYS_SHADOW_INDEXES SHADOW_INDEX$
SYS_SHADOW_INDEX_PARTS SHADOW_INDEXPART$
SYS_SYNONYMS SYNONYM$
SYS_PRIVS SYS_PRIVS$
SYS_TABLES TABLE$
SYS_TABLE_PARTS TABLEPART$
SYS_TMP_SEG_STATS TMP_SEG_STAT$
SYS_USERS USER$
SYS_USER_HISTORY USER_HISTORY$
SYS_USER_ROLES USER_ROLES$
SYS_VIEWS VIEW$
SYS_VIEW_COLS VIEWCOL$
SYS_SQL_MAPS SQL_MAP$
WSR_PARAMETER WRH$_PARAMETER
WSR_SQLAREA WRH$_SQLAREA
WSR_SYS_STAT WRH$_SYSSTAT
WSR_SYSTEM WRH$_SYSTEM
WSR_SYSTEM_EVENT WRH$_SYSTEM_EVENT
WSR_SNAPSHOT WRM$_SNAPSHOT
WSR_CONTROL WRM$_WR_CONTROL
WSR_DBA_SEGMENTS WSR$_DBA_SEGMENTS
WSR_LATCH WSR$_LATCH
WSR_LIBRARYCACHE WSR$_LIBRARYCACHE
WSR_SEGMENT WSR$_SEGMENT
WSR_SQL_LIST WSR$SQL_LIST
WSR_WAITSTAT WSR$_WAITSTAT
DB_DB_LINKS ALL_DB_LINKS
DB_IND_STATISTICS ALL_IND_STATISTICS
DB_JOBS ALL_JOBS
DB_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS
DB_USERS ALL_USERS
DB_USER_SYS_PRIVS ALL_USER_SYS_PRIVS
ADM_ARGUMENTS DBA_ARGUMENTS
ADM_BACKUP_SET DBA_BACKUP_SET
ADM_COL_COMMENTS DBA_COL_COMMENTS
ADM_CONSTRAINTS DBA_CONSTRAINTS
ADM_DATA_FILES DBA_DATA_FILES
ADM_DBLINK_TABLES DBA_DBLINK_TABLES
ADM_DBLINK_TAB_COLUMNS DBA_DBLINK_TAB_COLUMNS
ADM_DEPENDENCIES DBA_DEPENDENCIES
ADM_FREE_SPACE DBA_FREE_SPACE
ADM_HISTOGRAMS DBA_HISTOGRAMS
ADM_HIST_DBASEGMENTS DBA_HIST_DBASEGMENTS
ADM_HIST_LATCH DBA_HIST_LATCH
ADM_HIST_LIBRARYCACHE DBA_HIST_LIBRARYCACHE
ADM_HIST_LONGSQL DBA_HIST_LONGSQL
ADM_HIST_PARAMETER DBA_HIST_PARAMETER
ADM_HIST_SEGMENT DBA_HIST_SEGMENT
ADM_HIST_SNAPSHOT DBA_HIST_SNAPSHOT
ADM_HIST_SQLAREA DBA_HIST_SQLAREA
ADM_HIST_SYSSTAT DBA_HIST_SYSSTAT
ADM_HIST_SYSTEM DBA_HIST_SYSTEM
ADM_HIST_SYSTEM_EVENT DBA_HIST_SYSTEM_EVENT
ADM_HIST_WAITSTAT DBA_HIST_WAITSTAT
ADM_HIST_WR_CONTROL DBA_HIST_WR_CONTROL
ADM_INDEXES DBA_INDEXES
ADM_IND_COLUMNS DBA_IND_COLUMNS
ADM_IND_PARTITIONS DBA_IND_PARTITIONS
ADM_IND_STATISTICS DBA_IND_STATISTICS
ADM_JOBS DBA_JOBS
ADM_JOBS_RUNNING DBA_JOBS_RUNNING
ADM_OBJECTS DBA_OBJECTS
ADM_PART_COL_STATISTICS DBA_PART_COL_STATISTICS
ADM_PART_KEY_COLUMNS DBA_PART_KEY_COLUMNS
ADM_PART_STORE DBA_PART_STORE
ADM_PART_TABLES DBA_PART_TABLES
ADM_PROCEDURES DBA_PROCEDURES
ADM_PROFILES DBA_PROFILES
ADM_ROLES DBA_ROLES
ADM_ROLE_PRIVS DBA_ROLE_PRIVS
ADM_SEGMENTS DBA_SEGMENTS
ADM_SEQUENCES DBA_SEQUENCES
ADM_SOURCE DBA_SOURCE
ADM_SYNONYMS DBA_SYNONYMS
ADM_SYS_PRIVS DBA_SYS_PRIVS
ADM_TABLES DBA_TABLES
ADM_TABLESPACES DBA_TABLESPACES
ADM_TAB_COLS DBA_TAB_COLS
ADM_TAB_COLUMNS DBA_TAB_COLUMNS
ADM_TAB_COL_STATISTICS DBA_TAB_COL_STATISTICS
ADM_TAB_COMMENTS DBA_TAB_COMMENTS
ADM_TAB_DISTRIBUTE DBA_TAB_DISTRIBUTE
ADM_TAB_MODIFICATIONS DBA_TAB_MODIFICATIONS
ADM_TAB_PARTITIONS DBA_TAB_PARTITIONS
ADM_TAB_PRIVS DBA_TAB_PRIVS
ADM_TAB_STATISTICS DBA_TAB_STATISTICS
ADM_TRIGGERS DBA_TRIGGERS
ADM_USERS DBA_USERS
ADM_VIEWS DBA_VIEWS
ADM_VIEW_COLUMNS DBA_VIEW_COLUMNS
DB_ARGUMENTS ALL_ARGUMENTS
DB_COL_COMMENTS ALL_COL_COMMENTS
DB_CONSTRAINTS ALL_CONSTRAINTS
DB_DBLINK_TABLES ALL_DBLINK_TABLES
DB_DBLINK_TAB_COLUMNS ALL_DBLINK_TAB_COLUMNS
DB_DEPENDENCIES ALL_DEPENDENCIES
DB_DISTRIBUTE_RULES ALL_DISTRIBUTE_RULES
DB_DIST_RULE_COLS ALL_DIST_RULE_COLS
DB_HISTOGRAMS ALL_HISTOGRAMS
DB_INDEXES ALL_INDEXES
DB_IND_COLUMNS ALL_IND_COLUMNS
DB_IND_PARTITIONS ALL_IND_PARTITIONS
DB_OBJECTS ALL_OBJECTS
DB_PART_COL_STATISTICS ALL_PART_COL_STATISTICS
DB_PART_KEY_COLUMNS ALL_PART_KEY_COLUMNS
DB_PART_STORE ALL_PART_STORE
DB_PART_TABLES ALL_PART_TABLES
DB_PROCEDURES ALL_PROCEDURES
DB_SEQUENCES ALL_SEQUENCES
DB_SOURCE ALL_SOURCE
DB_SYNONYMS ALL_SYNONYMS
DB_TABLES ALL_TABLES
DB_TAB_COLS ALL_TAB_COLS
DB_TAB_COLUMNS ALL_TAB_COLUMNS
DB_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS
DB_TAB_COMMENTS ALL_TAB_COMMENTS
DB_TAB_DISTRIBUTE ALL_TAB_DISTRIBUTE
DB_TAB_PARTITIONS ALL_TAB_PARTITIONS
DB_TAB_STATISTICS ALL_TAB_STATISTICS
DB_TRIGGERS ALL_TRIGGERS
DB_VIEWS ALL_VIEWS
DB_VIEW_COLUMNS ALL_VIEW_COLUMNS
ROLE_SYS_PRIVS ROLE_SYS_PRIVS
MY_ARGUMENTS USER_ARGUMENTS
MY_COL_COMMENTS USER_COL_COMMENTS
MY_CONSTRAINTS USER_CONSTRAINTS
MY_CONS_COLUMNS USER_CONS_COLUMNS
MY_DEPENDENCIES USER_DEPENDENCIES
MY_FREE_SPACE USER_FREE_SPACE
MY_HISTOGRAMS USER_HISTOGRAMS
MY_INDEXES USER_INDEXES
MY_IND_COLUMNS USER_IND_COLUMNS
MY_IND_PARTITIONS USER_IND_PARTITIONS
MY_IND_STATISTICS USER_IND_STATISTICS
MY_JOBS USER_JOBS
MY_OBJECTS USER_OBJECTS
MY_PART_COL_STATISTICS USER_PART_COL_STATISTICS
MY_PART_KEY_COLUMNS USER_PART_KEY_COLUMNS
MY_PART_STORE USER_PART_STORE
MY_PART_TABLES USER_PART_TABLES
MY_PROCEDURES USER_PROCEDURES
MY_ROLE_PRIVS USER_ROLE_PRIVS
MY_SEGMENTS USER_SEGMENTS
MY_SEQUENCES USER_SEQUENCES
MY_SOURCE USER_SOURCE
MY_SQL_MAPS USER_SQL_MAPS
MY_SYNONYMS USER_SYNONYMS
MY_SYS_PRIVS USER_SYS_PRIVS
MY_TABLES USER_TABLES
MY_TAB_COLS USER_TAB_COLS
MY_TAB_COLUMNS USER_TAB_COLUMNS
MY_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS
MY_TAB_COMMENTS USER_TAB_COMMENTS
MY_TAB_DISTRIBUTE USER_TAB_DISTRIBUTE
MY_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS
MY_TAB_PARTITIONS USER_TAB_PARTITIONS
MY_TAB_PRIVS USER_TAB_PRIVS
MY_TAB_STATISTICS USER_TAB_STATISTICS
MY_TRIGGERS USER_TRIGGERS
MY_USERS USER_USERS
MY_VIEWS USER_VIEWS
MY_VIEW_COLUMNS USER_VIEW_COLUMNS
NLS_SESSION_PARAMETERS NLS_SESSION_PARAMETERS
DV_ALL_TRANS V$ALL_TRANSACTION
DV_ARCHIVED_LOGS V$ARCHIVED_LOG
DV_ARCHIVE_DEST_STATUS V$ARCHIVE_DEST_STATUS
DV_ARCHIVE_GAPS V$ARCHIVE_GAP
DV_ARCHIVE_THREADS V$ARCHIVE_PROCESSES
DV_BACKUP_PROCESSES V$BACKUP_PROCESS
DV_BUFFER_POOLS V$BUFFER_POOL
DV_BUFFER_POOL_STATS V$BUFFER_POOL_STATISTICS
DV_CONTROL_FILES V$CONTROLFILE
DV_DATABASE V$DATABASE
DV_DATA_FILES V$DATAFILE
DV_OBJECT_CACHE V$DB_OBJECT_CACHE
DV_DC_POOLS V$DC_POOL
DV_DYNAMIC_VIEWS V$DYNAMIC_VIEW
DV_DYNAMIC_VIEW_COLS V$DYNAMIC_VIEW_COLUMN
DV_FREE_SPACE V$FREE_SPACE
DV_HA_SYNC_INFO V$HA_SYNC_INFO
DV_HBA V$HBA
DV_INSTANCE V$INSTANCE
DV_RUNNING_JOBS V$JOBS_RUNNING
DV_LATCHS V$LATCH
DV_LIBRARY_CACHE V$LIBRARYCACHE
DV_LOCKS V$LOCK
DV_LOCKED_OBJECTS V$LOCKED_OBJECT
DV_LOG_FILES V$LOGFILE
DV_LONG_SQL V$LONGSQL
DV_STANDBYS V$MANAGED_STANDBY
DV_ME V$ME
DV_OPEN_CURSORS V$OPEN_CURSOR
DV_PARAMETERS V$PARAMETER
DV_PL_MANAGER V$PL_MANAGER
DV_PL_REFSQLS V$PL_REFSQLS
DV_REACTOR_POOLS V$REACTOR_POOL
DV_REPL_STATUS V$REPL_STATUS
DV_RESOURCE_MAP V$RESOURCE_MAP
DV_SEGMENT_STATS V$SEGMENT_STATISTICS
DV_SESSIONS V$SESSION
DV_SESSION_EVENTS V$SESSION_EVENT
DV_SESSION_WAITS V$SESSION_WAIT
DV_GMA V$SGA
DV_GMA_STATS V$SGASTAT
DV_SPINLOCKS V$SPINLOCK
DV_SQLS V$SQLAREA
DV_SQL_POOL V$SQLPOOL
DV_SYS_STATS V$SYSSTAT
DV_SYSTEM V$SYSTEM
DV_SYS_EVENTS V$SYSTEM_EVENT
DV_TABLESPACES V$TABLESPACE
DV_TEMP_POOLS V$TEMP_POOL
DV_TEMP_UNDO_SEGMENT V$TEMP_UNDO_SEGMENT
DV_TRANSACTIONS V$TRANSACTION
DV_UNDO_SEGMENTS V$UNDO_SEGMENT
DV_USER_ADVISORY_LOCKS V$USER_ADVISORY_LOCKS
DV_USER_ASTATUS_MAP V$USER_ASTATUS_MAP
DV_USER_PARAMETERS V$USER_PARAMETER
DV_VERSION V$VERSION
DV_VM_FUNC_STACK V$VM_FUNC_STACK
DV_WAIT_STATS V$WAITSTAT
DV_XACT_LOCKS V$XACT_LOCK
JOB_THREADS JOB_QUEUE_PROCESSES
COMMIT_MODE COMMIT_LOGGING
COMMIT_WAIT_LOGGING COMMIT_WAIT
PAGE_CHECKSUM DB_BLOCK_CHECKSUM
ARCHIVE_CONFIG LOG_ARCHIVE_CONFIG
ARCHIVE_DEST_N LOG_ARCHIVE_DEST_n
ARCHIVE_DEST_STATE_N LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_FORMAT LOG_ARCHIVE_FORMAT
ARCHIVE_MAX_THREADS LOG_ARCHIVE_MAX_PROCESSES
ARCHIVE_MIN_SUCCEED_DEST LOG_ARCHIVE_MIN_SUCCEED_DEST
ARCHIVE_TRACE LOG_ARCHIVE_TRACE
CHECKPOINT_PERIOD CHECKPOINT_TIMEOUT
CHECKPOINT_PAGES CHECKPOINT_INTERVAL
TIMED_STATS TIMED_STATISTICS
STATS_LEVEL STATISTICS_LEVEL
FILE_OPTIONS FILESYSTEMIO_OPTIONS
4 Glossary
Term Description
A–E
backup A backup, or the process of backing up, refers to the copying and
archiving of computer data. Backup data can be used for
restoration in case of data loss.
CLI Command-line interface (CLI). Users use the CLI to interact with
applications. Its input and output are based on texts. Commands
are entered through keyboards or similar devices and are compiled
and executed by applications. The results are displayed in text or
graphic forms on the terminal interface.
core dump When a program stops abnormally, core dump, memory dump, or
system dump records the state of working memory of the
program at that point in time. The states of key programs are
often dumped at the same time. For example, information about processor
registers, including the program counter and stack pointer, memory-management
information, and other processor and OS flags, is often dumped at the same
time. A core dump is often used to assist diagnosis and debugging of computer
programs.
core file A file that is created when memory overwriting, assertion failures,
or access to invalid memory occurs in a process, causing it to fail.
This file is then used for further analysis.
A core file stores memory dump data, and supports binary mode
and specified ports. The name of a core file consists of the word
"core" and the OS process ID.
The core file is available regardless of the type of platform.
data flow operator An operator that exchanges data among query fragments. By
their input/output relationships, data flows can be categorized into Gather
flows, Broadcast flows, and Redistribution flows. Gather combines multiple
query fragments of data into one. Broadcast forwards the data of one query
fragment to multiple query fragments. Redistribution reorganizes the data of
multiple query fragments and then redistributes the reorganized data to
multiple query fragments.
database file A binary file that stores user data and the internal data of a
database system.
dirty page A page that has been modified and is not written to a permanent
device.
dump file A specific type of trace file. A dump file contains diagnostic data
during an event response, whereas a trace file contains
continuously generated diagnostic data.
F–J
free space management A mechanism for managing free space in a table. This
mechanism enables a database system to record free space in each table and
establish an easy-to-find data structure, accelerating operations (such as
INSERT) performed on the free space.
GNU The GNU Project was publicly announced on September 27, 1983
by Richard Stallman, aiming at building an OS composed wholly
of free software. GNU is a recursive acronym for "GNU's Not
Unix!". Stallman announced that GNU should be pronounced as
Guh-NOO. Technically, GNU is similar to Unix in design, a widely
used commercial OS. However, GNU is free software and contains
no Unix code.
GTS Global Time Server (GTS). It is used to provide a logical clock for
each node in the case of strong consistency.
incremental backup Incremental backup stores all file changes since the last
valid backup.
junk tuple A tuple that is deleted using the DELETE and UPDATE statements.
When deleting a tuple, GaussDB 100 only marks the tuples that
are to be cleared. The VACUUM thread will then periodically clear
these junk tuples.
K–O
log file A file to which a computer system writes a record of its activities.
P–T
page Smallest memory unit for row storage in the relational object
structure in GaussDB 100. The default size of a page is 8 KB.
primary server A node that receives data read and write requests in the
GaussDB 100 HA system and works with all standby servers. At any time, only
one node in the HA system is identified as the primary server.
QPS Queries Per Second (QPS) is the number of queries that a server can
respond to per second.
query fragment Each query job can be split into one or more query fragments.
Each query fragment consists of one or more query operators and can
independently run on a node. Query fragments exchange data through data flow
operators.
query operator An iterator or a query tree node, which is a basic unit for
the execution of a query. Execution of a query can be split into one or more
query operators. Common query operators include scan, join, and aggregation.
RPO Recovery point objective (RPO) refers to the latest status that a
database system and the data can be restored to after a disaster,
and it is usually represented by time.
RTO Recovery time objective (RTO) refers to the duration between the
database system failure caused by a disaster and its restoration to
proper running.
schema A database object set that includes the logical structure, such as
tables, views, sequences, stored procedures, synonyms, clusters,
and database links.
SSL Secure Sockets Layer (SSL) is a network security protocol first used
by Netscape. It is based on the TCP/IP protocol and uses public key
technology. SSL supports a wide range of networks and provides
three basic security services, all of which use the public key
technology. SSL ensures the security of service communication
through a network by establishing a secure connection between a
client and a server and then sending data through this connection.
stop word In computing, stop words are words which are filtered out before
or after processing of natural language data (text), saving storage
space and improving search efficiency.
U–Z
Xlog A transaction log. A logical node can have only one Xlog file.
Contents
6 Performance
6.1 Execution Plans
6.2 Execution Plan Cache
6.3 Partitions
6.4 Hints
6.5 CBO
7 High Availability
7.1 Physical Standby Databases
7.2 Flashback
7.3 Backup and Restoration
7.4 Logical Replication
8 Database Security
8.1 Permission Management
8.2 Database Audit
8.3 Network Communication Security
8.4 User Profiles
9 Glossary
Overview
This document describes features of GaussDB 100 V300R001C00 developed by Huawei
Technologies Co., Ltd.
Intended Audience
This document is intended for:
l Pre-sales engineers
l System architecture engineers
Change History
Version Change Description Date
2 Architecture
Introduction
GaussDB 100 supports x86 servers, effectively reducing hardware costs. The system fully
utilizes resources on each node to provide large capacity, high performance, and high
scalability.
Description
GaussDB 100 can be deployed on x86 servers, enabling powerful scale-out for databases. Log
synchronization is used to ensure data consistency between primary and standby nodes. One
primary node can have multiple standby nodes, and each standby node can work in either
synchronous or asynchronous mode. Service applications can access the standby nodes for
read-only operations, enhancing system read performance.
Introduction
GaussDB 100 supports ARM servers, effectively reducing hardware costs. The system fully
utilizes resources on each node to provide large capacity, high performance, and high
scalability.
Description
GaussDB 100 can be deployed on ARM servers. The databases can fully utilize the multi-core
capability of ARM servers, achieving high performance, high scalability, and large capacity in
the ARM ecosystem.
Introduction
GaussDB 100 supports Linux operating systems.
Description
GaussDB 100 supports the following Linux operating systems and versions:
The x86 architecture supports the following operating systems:
l Red Hat Enterprise Linux Server release 7.4 x86_64
l SUSE Linux Enterprise Server 11.3 (SUSE 11 for short), x86_64
l SUSE Linux Enterprise Server 12.4 (SUSE 12 for short), x86_64
l EulerOS Server V2.0SP3 x86_64
l EulerOS Server V2.0SP5 x86_64
The ARM architecture supports the following operating systems:
EulerOS Server V2.0SP8 ARM_64
Introduction
GaussDB 100 supports standard development interfaces, achieving quick interconnection with
development and application software.
Description
GaussDB 100 supports the following development interfaces:
1. C API
2. JDBC 4.0
3. Python 2.7
4. ODBC 3.0
5. GO
GaussDB 100 also supports CLI-based connection to clients.
4.2 Transactions
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 supports transactions, ensuring integrity and consistency in data processing.
Description
A transaction is a logical unit of work that contains one or more SQL
statements, which are either all committed or all rolled back. Transaction
processing ensures that data is not permanently modified until all operations
in a transactional unit complete successfully. By combining a set of related
operations into a unit that succeeds completely or fails completely, the
system can ensure correct service logic without data loss. All transactions
must have the basic ACID properties. ACID is an acronym for the following:
l Atomicity
Operations contained in a transaction are considered a single logical unit,
and they either succeed completely or fail completely.
l Consistency
When a transaction is complete, data must be consistent and data integrity
constraints must not be violated. If an error occurs during transaction
execution, the transaction is rolled back to the state before execution, as
if it had never been executed.
l Isolation
A transaction allows multiple users to concurrently access the same data without damaging
the correctness and integrity of the data. In addition, changes in parallel transactions must be
independent.
l Durability
Changes made by committed transactions are permanently stored in databases and will not be
rolled back.
l READ COMMITTED (default): At this level, a transaction can access only committed
data.
Generally, the SELECT statement accesses a database snapshot taken when the query
begins. It can also access the data updates in its session, regardless of whether they have
been committed. In this case, different database snapshots may be available to two
consecutive SELECT statements in the same transaction because other transactions may
be committed while the first SELECT statement is executed.
At the READ COMMITTED level, the execution of each statement begins with a new
snapshot, which contains all the transactions that have been committed by the execution
time. Therefore, during a transaction, a statement can access the results of other
committed transactions. Pay attention to whether a single statement always accesses
absolutely consistent snapshots in a database.
Transaction isolation at this level meets the requirements of many applications, and is
fast and easy to use. However, applications performing complicated queries and updates
may require a view that is more consistent than this level can provide.
l SERIALIZABLE: At this level, a transaction can see only changes committed at the
beginning of the transaction (not a query) and changes made by the transaction itself. A
serializable transaction operates in an environment that makes it appear as if no other
users were modifying data in the database. Serializable isolation is suitable for
environments where there are short transactions that update only a few rows, the chance
that two concurrent transactions will modify the same row is relatively low, or relatively
long-running transactions are primarily read only.
In GaussDB 100 transaction management, you can start, commit, and roll back transactions,
prepare for two-phase commit, set a transaction isolation level, and create a savepoint for a
transaction.
For details about how to use transaction control, see GaussDB 100 V300R001C00 R&D
Documentation (Standalone).
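As a sketch, the transaction controls described above (commit, rollback,
savepoints, and isolation levels) might be combined as follows. The table and
column names are hypothetical, and the exact GaussDB 100 syntax is given in
the R&D documentation:

```sql
-- Hypothetical transfer between two rows of an illustrative accounts table.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SAVEPOINT before_credit;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- If the credit step must be undone, roll back only to the savepoint,
-- keeping the first update pending:
ROLLBACK TO SAVEPOINT before_credit;
COMMIT;
```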
4.3 Tables
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 supports typical heap tables for data storage and query. Data will be
automatically persisted after being committed.
Description
l A heap table is the default table storage structure. In a heap table, data
rows are managed for random access: space from the segment header block to
the high water mark (HWM) is managed in a random manner. By default, when a
data row needs to be inserted into a table, the system checks for free space
below the HWM that can accommodate the new row. If a suitable slot exists,
the row is placed there; the system prefers the smallest slot that fits,
which may be space previously occupied by a deleted row. If no suitable
position is found below the HWM of a heap table segment, the system raises
the HWM. Rows in a heap table are stored in no particular order; this
characteristic is called random access. For details about how to use ordinary
database heap tables, see GaussDB 100 V300R001C00 R&D Documentation.
l GaussDB 100 supports table partitioning. A partitioned table is a logical table that is
divided into several physical partitions for storage based on a specific plan. Data is
stored on these physical partitions, instead of the logical partitioned table. Currently,
GaussDB 100 supports range partitioning, hash partitioning, list partitioning, and interval
partitioning.
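A range-partitioned table, the first of the four partitioning schemes listed
above, might be declared as in the following sketch. The table and partition
names are illustrative, and the precise partition clause syntax should be
checked against the R&D documentation:

```sql
-- Hypothetical sales table partitioned by range on the sale date.
CREATE TABLE sales (
    sale_id   INT,
    sale_date DATE
)
PARTITION BY RANGE (sale_date) (
    PARTITION p2018 VALUES LESS THAN ('2019-01-01'),
    PARTITION p2019 VALUES LESS THAN ('2020-01-01')
);
```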
Introduction
GaussDB 100 allows users to customize temporary tables.
Description
A temporary table is like an ordinary table, allowing users to add, delete, modify, and query
data. It can be used to store and process intermediate results and will be automatically deleted
after use.
GaussDB 100 supports both local temporary tables and global temporary tables.
l A local temporary table (or its table structure) exists for the duration of a specific
session, and will be cleared and deleted when the session ends.
l A global temporary table (or its table structure) is globally visible and persistent. It will
be automatically cleared but not deleted when the session ends. A global temporary table
can be a transaction- or session-level temporary table.
– ON COMMIT PRESERVE ROWS: Defines a session-level temporary table.
When a session ends, the temporary table data is deleted but the table structure
remains.
– ON COMMIT DELETE ROWS: Defines a transaction-level temporary table.
When a transaction ends, the temporary table data is deleted but the table structure
remains.
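The two ON COMMIT variants described above might be written as follows; the
table and column names are illustrative:

```sql
-- Session-level global temporary table: the structure is globally visible,
-- and rows persist until the session ends.
CREATE GLOBAL TEMPORARY TABLE stage_orders (
    order_id INT,
    amount   INT
) ON COMMIT PRESERVE ROWS;

-- Transaction-level global temporary table: rows are deleted when the
-- transaction ends, while the structure remains.
CREATE GLOBAL TEMPORARY TABLE txn_scratch (
    k INT
) ON COMMIT DELETE ROWS;
```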
Introduction
GaussDB 100 supports standard data types.
Description
Currently, GaussDB 100 supports the following data types:
Boolean: BOOLEAN
4.6 Indexes
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 allows indexes to be created on tables. Users can quickly search for data based
on indexes, improving performance.
Description
An index is a structure that sorts the values of one or more columns in a database table. Use
select * from table where col=1000000 as an example. If there is no index, the entire table
will be traversed until the row whose col is 1000000 is found. After an index is created (on
the col column), the index will be queried. Indexes have been optimized by algorithms,
greatly reducing the number of searches.
From the perspective of data search implementation, indexes are another type of data objects.
Indexes contain various records that can point to related data records. Specifically, each index
includes the data of all indexed columns for a specified data row, and is stored at the physical
location corresponding to the data row. In this way, an index is equivalent to a collection of all
data directory items, and users can quickly locate a data row that meets the condition
specified by the data in an indexed column.
l An index can be created on multiple columns.
GaussDB 100 allows indexes to be created on one or more columns. Users can combine
multiple column values to narrow the search scope. A composite index contains a
maximum of 16 columns.
l Unique indexes can be created.
GaussDB 100 allows for unique indexes. In this case, the system checks whether new
values are unique in the indexed column. Attempts to insert or update data which would
result in duplicate values in the indexed column will generate an error.
Currently, only B-tree indexes can be created as unique indexes.
l Partitioned indexes can be created.
GaussDB 100 allows local partitioned indexes to be created on partitioned tables. Such
an index is equipartitioned with the table and the index partitioning is automatically
maintained when partitions are dropped or truncated. This ensures that the index always
remains equipartitioned with the table. The number of local index partitions must be the
same as that of partitions in a table.
l Function-based indexes can be created.
GaussDB 100 allows for function-based indexes, which are created based on the
calculation results of columns in a table. Such indexes improve query performance
without modifying the logic of applications. If there is no function-based index, any
query that executes a function on a column cannot use the index of this column. A
database uses a function-based index only when the function is included in a query. In
GaussDB 100, you can create function-based indexes for the UPPER and TO_CHAR
functions. The function parameter must be one column, and the index cannot be
converted to a constraint.
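The index capabilities listed above might be exercised as in the following
sketch. The table and index names are hypothetical:

```sql
-- Composite index on two columns (a composite index may contain up to 16).
CREATE INDEX idx_emp_dept ON employees (dept_id, hire_date);

-- Unique index; only B-tree indexes can be unique.
CREATE UNIQUE INDEX idx_emp_no ON employees (emp_no);

-- Function-based index on UPPER, one of the two supported functions
-- (UPPER and TO_CHAR); the function parameter must be a single column.
CREATE INDEX idx_emp_name_u ON employees (UPPER(name));
```

A query such as SELECT * FROM employees WHERE UPPER(name) = 'SMITH' can then
use idx_emp_name_u, whereas without the function-based index it could not use
an ordinary index on the name column.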
4.7 Views
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 allows views to be created and deleted.
Description
A view is a logical representation of one or more tables. In essence, a view is defined by a
query. Like an actual table, a view also contains a series of columns with names and rows.
However, views are not stored as data value sets in databases. A view derives its row and
column data from the table referenced by the query of the view, and the data is dynamically
generated when the view is referenced.
GaussDB 100 supports the following two types of views:
l User-defined views: Users can create and delete views as needed.
l Preset system views: The system can initialize and preset views, and the owner is user
SYS. Such views include metadata and dynamic performance views, which can be used
to query the metadata and dynamic performance data of the system.
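A user-defined view, as described above, is created from a query and deleted
when no longer needed. The names in this sketch are illustrative:

```sql
-- The view stores no data; its rows are derived from the underlying
-- users table each time the view is referenced.
CREATE VIEW active_users AS
    SELECT user_id, user_name
    FROM users
    WHERE status = 'ACTIVE';

DROP VIEW active_users;
```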
4.8 Synonyms
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 allows users to customize synonyms.
Description
A synonym is an alias of a data object, indicating a mapping relationship with the data object.
It is often used to simplify object access and improve the security of object access. A
synonym requires no storage other than its definition in the data dictionary. Therefore, using
synonyms saves a large amount of data space and enables users to seamlessly interact with
each other across databases.
l Private synonyms: If the keyword public is not added during synonym creation, the
synonym will be a private one. Other users can access the private synonym only after
being authorized. A private synonym can have the same name as another public
synonym.
l Public synonyms: If the keyword public is added during synonym creation, the synonym
will be a public one. Public synonyms are created by user public. Other users can access
the synonyms without authorization. However, while accessing the objects to which the
synonyms point, the users still need to pass permission verification.
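The two kinds of synonym might be created as follows; the schema and table
names are hypothetical:

```sql
-- Private synonym: other users can access it only after being authorized.
CREATE SYNONYM emp FOR hr.employees;

-- Public synonym: accessible without authorization on the synonym itself,
-- though permissions on the underlying object are still verified.
CREATE PUBLIC SYNONYM emp_all FOR hr.employees;
```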
4.9 Sequences
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 supports sequences, allowing users to generate unique integers.
Description
GaussDB 100 supports sequences. Users can run the CREATE SEQUENCE statement to
create a sequence, also a database object, from which multiple users can generate unique
integers. Sequences can also be used to generate primary keys automatically.
When sequence numbers are generated, the sequence increments automatically,
independent of transaction commit or rollback. When two users increment the
same sequence at the same time, the sequence numbers obtained by one user may
have gaps relative to those obtained by the other, because each user
generates different numbers. A user can never obtain the sequence numbers
generated by another user. After a user generates a sequence number, the user
can continue to access the corresponding value regardless of whether the
sequence is incremented by another user.
Sequence numbers are independent of tables, and therefore the same sequence can be used for
one or more tables. A single sequence number may be skipped because it is generated and
used in the transaction when the final rollback was performed. In addition, a single user may
not realize that other users are obtaining values from the same sequence.
After a sequence is created, it can be referenced in SQL statements with the NEXTVAL and
CURRVAL pseudocolumns. Referencing NEXTVAL generates a new sequence number each
time, while CURRVAL repeatedly returns the current sequence number.
For details about how to use sequences, see GaussDB 100 V300R001C00 R&D
Documentation (Standalone).
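The behavior described above can be sketched as follows (the Oracle-style seq.NEXTVAL/seq.CURRVAL reference syntax is assumed; object names are illustrative):

```sql
CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1;

-- NEXTVAL generates a new, unique number each time it is referenced;
-- here it populates a primary key automatically.
INSERT INTO orders (id, item) VALUES (order_seq.NEXTVAL, 'book');

-- CURRVAL repeatedly returns the number just generated, e.g. to reuse
-- the same order id in a child table.
INSERT INTO order_items (order_id, qty) VALUES (order_seq.CURRVAL, 2);
```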
Introduction
GaussDB 100 supports the SQL:2003 standard, enabling quick application migration and
rollout.
Description
SQL standards are international standards that are updated periodically. They define core
features and optional features, and most databases do not conform to them completely.
Vendors also build proprietary SQL extensions to retain customers and raise application
migration costs, so new SQL features increasingly diverge among vendors. Currently, there
is no authoritative SQL standard conformance test.
GaussDB 100 supports standard SQL statements. Specifically, it supports most of the core
features in SQL:2003, as well as some optional features, implementing basic SQL syntax in
application development.
In addition, GaussDB 100 offers compatibility with most mainstream SQL syntax to quickly
migrate existing applications and reduce the workload of application modification.
For details about the SQL syntax list, see the SQL syntax part in GaussDB 100 V300R001C00
R&D Documentation (Standalone).
Introduction
GaussDB 100 allows users to bind variables to SQL statements, simplifying the parse process
and increasing the execution efficiency.
Description
Variable binding refers to the use of variables instead of constants in the conditions of SQL
statements. Assume there are two SQL statements in the SQL pool.
select * from table where col=1;
select * from table where col=2;
For GaussDB 100, these two statements are completely different. The database matches SQL
statements in the SQL pool character by character, and because one character differs, the two
statements cannot be matched. The database therefore treats them as completely different
statements, and any SQL statement that cannot be fully matched must be hard parsed.
Change both statements to select * from table where col=:var1, and assign a value to the
variable var1 for each query. The first statement is hard parsed; the second matches the first
and only needs a soft parse, reusing the information already in the SQL pool. If a statement is
executed repeatedly, using bind variables brings great benefits. If an application does not use,
or only partially uses, bind variables, serious performance problems are likely.
Bind variables reduce hard parses, which in turn reduces parse-related CPU contention and
resource overhead as well as the memory usage of SQL pools. However, histograms cannot
be used on bound predicates, which can make SQL optimization more difficult.
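The rewrite described above, sketched with the :name placeholder style used in the text:

```sql
-- Literal constants: two distinct SQL texts, each hard parsed.
SELECT * FROM t WHERE col = 1;
SELECT * FROM t WHERE col = 2;

-- Bind variable: one shared SQL text. The first execution is hard parsed;
-- subsequent executions with other values of :var1 are soft parsed and
-- reuse the execution plan cached in the SQL pool.
SELECT * FROM t WHERE col = :var1;
```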
Introduction
GaussDB 100 allows users to customize stored procedures.
Description
A stored procedure is a set of SQL statements stored inside a database and used to complete a
specific function in a large database system. It is compiled the first time it is executed and
does not need to be recompiled on later invocations. Users execute a stored procedure by
specifying its name and providing parameters (if any). A stored procedure is an important
object in a database.
Stored procedures provide the following benefits:
- They allow modular program design: SQL statement sets are encapsulated and easy to
invoke.
- Their compilation results are cached, accelerating execution of the SQL statement sets.
- System administrators can restrict the permission to execute a specific stored procedure,
controlling access to the corresponding type of data. This prevents access from
unauthorized users and ensures data security.
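A minimal sketch (PL/SQL-style procedure syntax is assumed; names are illustrative):

```sql
-- Encapsulate an SQL statement set; compiled once, then invoked by name.
CREATE OR REPLACE PROCEDURE add_order(p_id IN INT, p_item IN VARCHAR)
AS
BEGIN
  INSERT INTO orders (id, item) VALUES (p_id, p_item);
END;
/

CALL add_order(1, 'book');  -- execute by name, providing parameters
```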
4.13 Functions
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 allows users to customize functions and provides a large number of preset
functions.
Description
A function is a set of SQL statements stored inside a database and used to complete a specific
function in a large database system. It is compiled the first time it is executed and does not
need to be recompiled on later invocations. Users execute a function by specifying its name
and providing parameters (if any). Functions have return values, so users can call them to
achieve specific targets and obtain required data.
GaussDB 100 allows users to customize functions. Users can use SQL syntax to create and
delete functions.
GaussDB 100 also provides a large number of preset functions, including numeric calculation,
character processing, time and date, interval, type conversion, and aggregation functions.
For details about the syntax for function creation, deletion, and usage as well as the preset
functions, see GaussDB 100 V300R001C00 R&D Documentation (Standalone).
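A minimal sketch of a user-defined function used alongside a preset function (PL/SQL-style syntax is assumed; names are illustrative):

```sql
CREATE OR REPLACE FUNCTION add_tax(p_amount NUMBER) RETURN NUMBER
AS
BEGIN
  RETURN p_amount * 1.1;  -- illustrative 10% tax
END;
/

-- A user-defined function is called like the preset ones (UPPER, SUM, ...).
SELECT UPPER(item), add_tax(amount) FROM orders;
```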
4.14 Triggers
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 allows users to customize triggers.
Description
A database trigger is a compiled stored program unit, written in PL/SQL or Java, that is
automatically invoked by the database.
GaussDB 100 can monitor the SELECT, INSERT, UPDATE, DELETE, and MERGE
operations on data tables. It executes corresponding trigger actions when these operations are
performed. GaussDB 100 can fire a trigger at the table or row level before or after table data
access/change. For details about trigger usage and restrictions, see GaussDB 100
V300R001C00 R&D Documentation (Standalone).
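A row-level trigger can be sketched as follows (PL/SQL-style trigger syntax is assumed; names are illustrative):

```sql
-- Fires once per affected row, after the monitored operations.
CREATE OR REPLACE TRIGGER trg_orders_audit
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW
BEGIN
  INSERT INTO orders_audit (changed_at) VALUES (SYSDATE);
END;
/
```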
Introduction
GaussDB 100 provides preset system packages.
Description
A package is a schema object that groups logically related PL/SQL types, variables, constants,
subprograms, cursors, and exceptions. A package is compiled and stored inside a database
where many applications can share its content. A package always has a specification, which
declares the public items that can be referenced from outside the package. If the public items
include cursors or subprograms, then the package must also have a body. The body must
define queries for public cursors and code for public subprograms. The body can also declare
and define private items that cannot be referenced from outside the package but are necessary
for the internal work of the package. Finally, the body can have an initialization part whose
statements initialize variables and do other one-time setup steps, and an exception-handling
part. You can change the body without changing the specification or the references to the
public items.
GaussDB 100 also provides preset system packages. For details about the content and usage
of system packages, see GaussDB 100 V300R001C00 R&D Documentation (Standalone).
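The specification/body split described above can be sketched as follows (PL/SQL-style package syntax is assumed; names are illustrative):

```sql
-- Specification: declares the public items referencable from outside.
CREATE OR REPLACE PACKAGE order_pkg AS
  FUNCTION total_orders RETURN INT;
END;
/

-- Body: defines the code for the public subprograms; it may also hold
-- private items and a one-time initialization part.
CREATE OR REPLACE PACKAGE BODY order_pkg AS
  FUNCTION total_orders RETURN INT AS
    n INT;
  BEGIN
    SELECT COUNT(*) INTO n FROM orders;
    RETURN n;
  END;
END;
/
```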
Introduction
GaussDB 100 allows users to collect statistics on wait events to determine system running
status.
Description
A wait event occurs when a session waits for system behavior. It may be caused by many
factors, such as slow read/write on disks, architecture-caused locks, and various system
resource contentions. A wait event can be at the system or session level. A session-level wait
event affects the activity of a single user in a database. A system-level wait event affects the
entire database system. Users can locate system performance problems by analyzing wait
events.
GaussDB 100 provides seven classes of wait events (25 events in total): Idle, Concurrency,
Other, Commit, Application, User I/O, and Configuration.
You can query the DV_SESSION_EVENTS and DV_SESSION_WAITS views for statistics
on session-level wait events, and the DV_SYS_EVENTS view for statistics on system-level
wait events.
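For example, the views named above can be queried directly (column lists vary; see the view reference):

```sql
SELECT * FROM DV_SESSION_EVENTS;  -- per-session wait event statistics
SELECT * FROM DV_SESSION_WAITS;   -- waits currently in progress per session
SELECT * FROM DV_SYS_EVENTS;      -- system-level wait event statistics
```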
Introduction
GaussDB 100 provides preset metadata views, from which users can query system metadata.
Description
An important part of GaussDB 100 is its metadata, a collection of management information
about the database exposed through a set of views. Metadata includes the following
information:
- Definition of each schema object in a database, including default values and integrity
constraints of columns
- Amount of space allocated for and used by a schema object
- Name of each database user, the permissions and roles granted to the user, and audit
information related to the user
Metadata is the core of GaussDB 100 management. The database itself relies on it in the
following ways:
- It accesses the data dictionary to find information about users, schema objects, and
storage structures.
- It modifies the data dictionary each time a DDL statement is issued.
Since GaussDB 100 stores metadata in the system and opens metadata views to users, the
users can run SQL statements to query the views. For example, users can run SELECT to
determine their permissions, which tables exist in their schemas, which columns are in these
tables, and whether indexes are built on these columns.
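For example (the dictionary view names below are hypothetical placeholders; the actual names are listed in the metadata view reference):

```sql
SELECT table_name FROM MY_TABLES;                 -- tables in the current schema
SELECT column_name, data_type
FROM MY_TAB_COLUMNS WHERE table_name = 'ORDERS';  -- columns of one table
```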
- Base table
A base table stores metadata about the database. The database system writes and reads these
tables, and users should not access them directly, because the tables are highly normalized
and most of their data is stored in an internal format.
- View
A view uses JOIN and WHERE clauses to decode base table data into useful information,
such as user names or table names, which simplifies the information. The views contain the
names and descriptions of all objects in the data dictionary. Some views can be accessed by
all database users, while others are only for administrators.
Generally, data dictionary views are grouped. In many cases, there are three views containing
similar information and distinguished by their prefixes.
For details about metadata views, see GaussDB 100 V300R001C00 R&D Documentation
(Standalone).
Introduction
GaussDB 100 provides dynamic performance views, from which users can query statistics
about a current database. The views are continuously updated while a database is open and in
use, providing system status in real time.
Dynamic performance views are prefixed with DV and are therefore also called DV views.
Description
Dynamic performance data is owned by user SYS and stored in dynamic performance tables,
allowing users to query views for performance data. Dynamic performance views provide the
status information of databases and are updated in real time.
For details about dynamic performance views, see GaussDB 100 V300R001C00 R&D
Documentation (Standalone).
Introduction
GaussDB 100 supports data management based on tablespaces. Users can create and delete
tablespaces to flexibly allocate and use storage resources.
Description
A GaussDB 100 database tablespace consists of one or more data files. Database objects are
logically stored in tablespaces and physically stored in data files.
When a GaussDB 100 database is created, the following tablespaces are automatically
created: SYSTEM, TEMP, UNDO, USERS, TEMP2, and TEMP2_UNDO.
- SYSTEM tablespace
It stores GaussDB 100 metadata. To ensure stable database running, you are advised not
to store user data in the SYSTEM tablespace. By default, the SYSTEM tablespace is not
automatically extended. If it is full, manually add data files or extend the tablespace.
- TEMP tablespace
It is automatically maintained by GaussDB 100. When SQL statements apply for disk
space, the GaussDB 100 database allocates temporary segments from the TEMP
tablespace. The TEMP tablespace is also used for index creation, data sorting that
cannot be performed in memory, intermediate result sets of SQL statements, and
temporary tables.
- UNDO tablespace
It stores undo data. When a DML operation (INSERT, UPDATE, or DELETE) is
performed, the old data from before the operation is written into the UNDO tablespace.
This tablespace is mainly used for transaction rollback, database instance restoration,
read consistency, and flashback query.
- USERS tablespace
It is the default tablespace. When a user is created with no tablespace specified, all
information about the user is stored in the USERS tablespace.
- TEMP2 tablespace
It stores NOLOGGING table data and is automatically maintained by GaussDB 100.
- TEMP2_UNDO tablespace
It stores the undo data of NOLOGGING tables.
Users can create and specify tablespaces to store user data, including tables, table partitions,
and indexes.
Database administrators can use tablespaces to control the layout of disks where a database is
installed. This has the following advantages:
- Separating user data from system data reduces I/O contention.
- Separating the data of one application from that of another prevents multiple
applications from being affected when a tablespace is taken offline.
- Storing the data files of different tablespaces on different disk drives reduces I/O
contention.
- A single tablespace can be taken offline while the other tablespaces remain online.
Tablespace usage can be improved by reserving a tablespace for a specific type of
database use, such as frequent update activity, read-only activity, or temporary segment
storage.
Some operating systems limit the number of files that can be open at the same time. This
limit may cap the number of tablespaces that can be online concurrently. Plan tablespaces
effectively to stay within this limit: create only as many tablespaces as you need, and create
them with as few files as possible. If you have to increase the size of a tablespace, add one or
two big data files, or create a data file with autoextend enabled, instead of creating many
small data files.
For details about how to use tablespaces, see GaussDB 100 V300R001C00 User Guide
(Standalone).
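A sketch of tablespace usage (the common CREATE TABLESPACE form is assumed; paths and sizes are illustrative):

```sql
-- One autoextending data file rather than many small files.
CREATE TABLESPACE app_data
  DATAFILE '/data/app_data_01.dat' SIZE 1G AUTOEXTEND ON;

-- Keep user data out of the SYSTEM tablespace.
CREATE TABLE orders (id INT PRIMARY KEY, item VARCHAR(64))
  TABLESPACE app_data;
```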
Introduction
GaussDB 100 supports Workload Statistics Report (WSR), which can be used to generate
performance analysis reports.
Description
A snapshot is a collection of system statistics at a specified time point. Users can call
wsr$create_snapshot to collect complete statistics of the entire database system at a specified
time point.
WSR is a built-in system tool of GaussDB 100. It generates a report by comparing the
statistics collected in two snapshots. The report data is used to analyze database performance
in a specified period to further analyze system performance problems.
WSR can generate a report only when there are two or more snapshots. The IDs of start and
end snapshots must be specified, and there must be no system restart between the two
snapshots. Specifically, the system compares statistics collected in two snapshots to obtain the
changes within the time period. Then, a report is generated.
A WSR report contains the following:
- Load profile: CPU, I/O, and redo resource usage, and the total numbers of SQL
statement and transaction executions
- SQL statistics: SQL status, and the top 10 SQL statements sorted by execution time,
CPU time, user I/O wait, physical reads, logical reads, execution count, and parse count
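The snapshot workflow can be sketched as follows (only wsr$create_snapshot is named in the text; the report call shown is a hypothetical placeholder):

```sql
CALL wsr$create_snapshot();   -- start snapshot
-- ... run the workload to be analyzed ...
CALL wsr$create_snapshot();   -- end snapshot (no system restart in between)

-- A report is then generated from the two snapshot IDs; see the WSR tool
-- reference for the actual call, e.g.:
-- CALL wsr$create_report(:start_snap_id, :end_snap_id);
```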
5.6 IPv6
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 supports service access addresses in IPv6 format and allows logs to be
transmitted between primary and standby databases on an IPv6 network.
Description
GaussDB 100 supports service access addresses in IPv6 format. Users can set an IPv6 address
of a server as the listening address. They can also access a database from an IPv6 address of a
server.
GaussDB 100 allows primary and standby databases to communicate over IPv6 addresses and
transmit logs on an IPv6 network.
GaussDB 100 allows a service network and a primary/standby replication network to have
different network segments. IPv4 and IPv6 can be selected as needed.
Introduction
GaussDB 100 allows data to be imported and exported. Users can export data from a database
to the text format and import text data to the database.
Description
To facilitate data transfer, GaussDB 100 supports text data import and export. This feature is
commonly used in data migration, data backup, and service text data import scenarios. Users
can run DUMP on zsql to export data to the text format and LOAD to import text data.
In GaussDB 100, data export can be performed on both servers and clients: SQL statements
are assembled on a client before being sent to the server, which then receives the specific
DDL and DML statements. GaussDB 100 allows users to export a single table, that is, all
data in the table is exported as text. Users can also customize the statement to export, for
example, by adding a SELECT to DUMP. GaussDB 100 allows users to set the specifications
of exported text files, including the maximum size of a target file (with splitting supported: a
new file is created when the size is exceeded), the symbols enclosing columns, the delimiters
of columns, and the number of lines per record.
GaussDB 100 also supports plain-text data import. Again, SQL statements are assembled on a
client before being sent to the server, which then receives the specific DDL and DML
statements. The target table into which text data is imported must match the source file in the
number of columns and in the data types of columns. For data import, users can likewise set
the data format of text files, including the symbols enclosing columns, the delimiters of
columns, and the number of lines per record.
For details about how to import and export data, see GaussDB 100 V300R001C00 User Guide
(Standalone).
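An illustrative sketch of the two zsql commands (the option spellings here are hypothetical; the exact syntax is in the user guide):

```sql
-- Export a whole table, or a customized SELECT, to text.
DUMP TABLE orders INTO FILE '/backup/orders.txt';

-- Import the text back; the target table must match the file in column
-- count and column data types.
LOAD DATA INFILE '/backup/orders.txt' INTO TABLE orders;
```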
Introduction
GaussDB 100 supports logical data import and export. Users can logically export data from
and import data to a database. Source and target files for import and export are all of the SQL
type.
Description
To facilitate data transfer, GaussDB 100 supports logical data import and export. This feature
is commonly used in data migration, data backup, and service text data import scenarios. The
format of files for logical import and export is SQL text. Users can run EXP on zsql to
logically export data and IMP to logically import data.
In GaussDB 100, logical data export can be performed on both servers and clients.
Specifically, SQL statements are assembled on a client before being sent to the server, which
then receives specific DDL and DML statements.
In GaussDB 100, you can logically export a single table or the tables of a specified user, and
also multiple tables of multiple users at a time. When EXP is used to export data, users can
set parameters such as parallel, insert_batch, and commit_batch to accelerate the export.
GaussDB 100 allows users to import files previously produced by logical export, choosing
either full or incremental import. By specifying the REMAP_SCHEMA and
REMAP_TABLESPACE parameters to determine data mappings for logical import, users can
prevent import conflicts caused by duplicate objects.
For details about how to logically import and export data, see GaussDB 100 V300R001C00
User Guide (Standalone).
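An illustrative sketch of the EXP/IMP options named above (parallel, REMAP_SCHEMA, REMAP_TABLESPACE); the option spellings are hypothetical and the exact syntax is in the user guide:

```sql
-- Logical export of one user's tables, accelerated with parallelism.
EXP USERS=sales FILE='/backup/sales.sql' PARALLEL=4;

-- Logical import, remapping the schema to avoid conflicts with
-- duplicate objects.
IMP FILE='/backup/sales.sql' REMAP_SCHEMA=sales:sales_copy;
```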
6 Performance
Introduction
GaussDB 100 allows SQL statement execution plans to be viewed to analyze the performance
of SQL statements.
Description
An SQL statement execution plan is the combination of steps a database uses to execute an
SQL statement, and it is the core of SQL performance analysis and optimization. An
execution plan typically shows the operations performed, such as the access path of each
table, the join methods, and the join order.
Users can run EXPLAIN PLAN to determine whether the optimizer has chosen a specific
execution plan, such as a nested loop join. Users can also examine the optimizer's decision,
for example, the reason why it chose a nested loop join rather than a hash join.
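For example (the EXPLAIN PLAN statement form is assumed; plan output columns vary by database):

```sql
EXPLAIN PLAN
SELECT o.id, c.name
FROM orders o JOIN customers c ON o.cust_id = c.id;
-- The plan shows whether the optimizer chose, e.g., a nested loop join
-- or a hash join for this query.
```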
Introduction
GaussDB 100 allows SQL execution plans to be cached in a shared pool. In this case, SQL
statements do not need to be repeatedly parsed each time they are executed, increasing SQL
execution efficiency.
Description
In GaussDB 100, when an SQL statement is executed for the first time, the SQL syntax is
hard parsed and the generated execution plan is cached in an independent shared pool. When
the statement is executed later, the system automatically matches the SQL texts and directly
invokes the execution plan cached in the SQL pool, skipping the hard parse operation and
reducing the SQL execution time.
GaussDB 100 reuses the historical execution information and execution plan in the SQL pool
only when all characters of two SQL texts are identical. If the texts differ even slightly,
GaussDB 100 treats them as different statements, hard parses each separately, and stores each
generated execution plan in the shared pool. If two SQL texts differ only in the values in the
WHERE clause, bind variables are recommended. For details about how to use bind
variables, see GaussDB 100 V300R001C00 R&D Documentation.
The size of memory occupied by an SQL pool is affected by the memory parameter
SHARED_POOL_SIZE. The size can be dynamically adjusted. You can query the
DV_GMA view for the SGA structure and shared pool size, query the DV_SQLS view for
cached SQL statements, and query the DV_OBJECT_CACHE view for the data dictionary
cache.
For details about shared pools, see GaussDB 100 V300R001C00 User Guide (Standalone).
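The views named above can be queried directly:

```sql
SELECT * FROM DV_GMA;           -- SGA structure and shared pool size
SELECT * FROM DV_SQLS;          -- SQL statements cached in the SQL pool
SELECT * FROM DV_OBJECT_CACHE;  -- data dictionary cache
```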
6.3 Partitions
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 supports the syntax for creating partitioned tables and data partitions as well as
for optimizing execution plans.
Description
A continuous increase in the amount of data in a table slows down the data query speed and
deteriorates the application performance. In this case, you need to partition the table. After a
table is partitioned, it is still a complete table logically, but its data is physically stored in
multiple tablespaces (physical files). In this case, the system does not need to scan the entire
table in each data query.
GaussDB 100 table partitioning provides many benefits for applications, such as improving
manageability, performance, and availability. Partitioning can greatly improve the
performance of some query and maintenance operations. In addition, partitioning can greatly
simplify common management jobs. It is a key feature for systems with an ultra-large amount
of data and ultra-high availability.
Partitioning enables a table or index to be divided into smaller pieces, called partitions. Each
partition has its own name and storage characteristics. From the perspective of a database
administrator, a partitioned object has multiple pieces that can be managed either collectively
or individually, giving the administrator great flexibility. From the perspective of an
application, however, a partitioned table is identical to a non-partitioned table: no
modifications are needed when a partitioned table is accessed using DML statements.
GaussDB 100 supports range, list, hash, and interval partitioning. A partition key supports
integer, character, and time data types. With partition pruning, querying a partitioned table
can be over 10 times faster than querying an equivalent non-partitioned table.
For details about how to use partitions in GaussDB 100, see GaussDB 100 V300R001C00
R&D Documentation (Standalone).
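A range-partitioning sketch (the common PARTITION BY RANGE DDL form is assumed; names and bounds are illustrative):

```sql
CREATE TABLE sales (
  sale_time DATE,
  amount    INT
)
PARTITION BY RANGE (sale_time) (
  PARTITION p202401 VALUES LESS THAN ('2024-02-01'),
  PARTITION p202402 VALUES LESS THAN ('2024-03-01')
);

-- Partition pruning: only p202401 is scanned for this predicate.
SELECT SUM(amount) FROM sales
WHERE sale_time >= '2024-01-01' AND sale_time < '2024-02-01';
```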
6.4 Hints
Version
Introduced in GaussDB 100 V300R001C00
Introduction
In standalone deployment, GaussDB 100 allows hints to be set at the SQL statement level.
Description
Hints are special SQL syntax used to manually intervene in execution plan selection. Adding
a hint to an SQL statement instructs the SQL optimizer to select a specific execution plan.
Hints are usually used when users know the optimal execution plan, or to stabilize the
execution plan and optimize performance.
In GaussDB 100, you can optimize execution plans at both the hint and SQL levels.
- access_method_hint
Hint for access path selection during data access. There are seven such hints: FULL,
INDEX, NO_INDEX, INDEX_ASC, INDEX_DESC, INDEX_FFS, and NO_INDEX_FFS.
- join_order_hint
Hint for join order selection. There are two such hints: ORDERED and LEADING.
- join_method_hint
Hint for join method selection. There are three such hints: USE_NL, USE_MERGE, and
USE_HASH.
For details about hints and the usage, see GaussDB 100 V300R001C00 R&D Documentation
(Standalone) and GaussDB 100 V300R001C00 Performance Tuning Guide.
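Combining the three hint classes in one statement (the /*+ ... */ comment placement follows common hint practice and is assumed here):

```sql
-- Full scan of t1, join order starting from t1, nested loop join to t2.
SELECT /*+ FULL(t1) LEADING(t1 t2) USE_NL(t2) */ *
FROM t1 JOIN t2 ON t1.id = t2.id;
```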
6.5 CBO
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 supports the cost-based optimizer (CBO).
Description
Query optimization is a process of choosing the most efficient method for executing an SQL
statement.
SQL is a non-procedural language, so an optimizer is free to merge, reorganize, and process a
statement's operations in any order. In addition, the database can optimize each SQL
statement based on the statistics collected on the accessed data. The optimizer then examines
multiple access methods (such as full table scan and index scan), different join methods (such
as nested loop join and hash join), different join orders, and possible conversions to
determine the optimal execution plan for an SQL statement.
Given a specific query and an environment, an optimizer will assign a relative numerical cost
to each step of an execution plan, and calculate the values together to generate an overall cost
estimation result for the plan. After calculating the costs of alternative execution plans, the
optimizer will select an execution plan with the minimal cost estimate. For this reason, an
optimizer is sometimes referred to as the CBO, which is often compared with the rule-based
optimizer (RBO).
The CBO in GaussDB 100 selects an optimal execution plan based on the costs of SQL
statements. It has three key components:
- Query converter
For some statements, the query converter rewrites an original SQL statement into an
equivalent SQL statement with a lower cost and determines whether the conversion is
worthwhile.
- Cost estimator
It determines the overall cost of a given execution plan.
- Plan generator
It produces candidate execution plans and hands them to the cost estimator so that the
plan with the lowest cost estimate can be selected.
7 High Availability
Introduction
GaussDB 100 allows a primary database to send redo logs to a standby database so that the
standby database can apply the logs to implement quick data synchronization and DR between
the two databases.
Description
In traditional database DR, redo logs are transferred between primary and standby databases
for data synchronization. The logs are generated when data is modified on a primary database,
and they can be sent to a standby database in either synchronous or asynchronous mode. After
the standby database receives and applies the redo logs, its data becomes consistent with that
in the primary database.
GaussDB 100 supports the following three protection modes for standby databases:
- Maximum protection: ensures no data loss.
In this mode, transaction logs are written not only into local log files but also into the
log files of standby databases. A transaction is committed on the primary database only
when its data is available on at least one standby database. If the standby databases
become unavailable due to a fault (for example, network disconnection), services on the
primary database are blocked to prevent data loss. Only LGWR SYNC replication from
the primary database to the standby databases is supported.
- Maximum availability: provides the highest-level data protection possible without
compromising the availability of the primary database.
This mode is similar to maximum protection: it also requires local transaction logs to be
written into the log files of at least one standby database before the transaction is
committed. The difference is that if the standby databases become unavailable due to a
fault, services on the primary database are not blocked; the database automatically
switches to maximum performance mode, and switches back to maximum availability
mode after the standby databases recover. Although data loss is avoided, data
consistency cannot be completely ensured. Only LGWR SYNC replication from the
primary database to the standby databases is supported.
- Maximum performance: provides the highest-level data protection possible without
compromising the performance of the primary database. This is the default mode.
This mode allows transactions to be committed at any time. The transaction logs of the
primary database must still be written to at least one standby database, but the write can
be asynchronous. Under ideal network conditions, this mode provides data protection
similar to maximum availability with only slight impact on primary database
performance. Both LGWR SYNC and ASYNC replication from the primary database to
the standby databases are supported.
GaussDB 100 supports a maximum of nine standby databases. Users can determine the
quantity and protection mode of standby databases based on their HA requirements.
For details about how to create, maintain, and rebuild a standby database, see GaussDB 100
V300R001C00 User Guide (Standalone).
7.2 Flashback
Version
Introduced in GaussDB 100 V300R001C00
Introduction
GaussDB 100 allows users to flash back data after misoperations, without stopping databases.
Description
Flashback is a fast data recovery solution: users can selectively query historical data or
cancel the effects of misoperations. FLASHBACK TABLE restores a table to an earlier state
in the event of human or application errors. How far back a table can be flashed depends on
the amount of undo data in the system. In addition, GaussDB 100 cannot restore a table to an
earlier state across any DDL operation that changes the structure of the table.
FLASHBACK TABLE supports the following modes:
- Restoring table data to a specified time point or SCN. This mode is suitable when users
have incorrectly changed data.
- Restoring tables that were dropped by mistake from the recycle bin. This mode is
suitable when users have incorrectly executed DROP TABLE.
For details about the FLASHBACK syntax and usage, see GaussDB 100 V300R001C00
R&D Documentation (Standalone).
For details about the scenarios and steps of FLASHBACK, see GaussDB 100 V300R001C00
User Guide (Standalone).
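The two modes can be sketched as follows (the FLASHBACK TABLE statement forms are assumed from common practice; see the syntax reference for exact spellings):

```sql
-- Restore table data to an earlier SCN (or time point) after a bad change.
FLASHBACK TABLE orders TO SCN 123456;

-- Recover a table dropped by mistake from the recycle bin.
FLASHBACK TABLE orders TO BEFORE DROP;
```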
Introduction
GaussDB 100 supports full data backup and restoration. In the case of extreme data loss, data
can be restored, improving system reliability.
Description
- Physical backup
GaussDB 100 supports physical database backup. Before backing up a database, ensure that
the database is in archiving mode. The backup operation can be performed only on primary
databases in the OPEN state.
- Full backup
To improve service reliability, the system supports full backup of database data (excluding
alarm data) in either automatic or manual mode. Full backup is performed at a specified time
point, covering the full data of a database at that point. When a database is abnormal, users
can restore it by using the full backup file. GaussDB 100 supports full backup of an entire
database, and allows users to run the BACKUP DATABASE statement on zsql to fully back
up a database.
- Incremental backup
To accelerate data backup, the system supports incremental backup of database data
(excluding alarm data) in either automatic or manual mode. Incremental backup covers
differential data between a specified time point and the last full/incremental backup time
point, reducing the amount of data to be backed up. When a database is abnormal, users can
restore it by using the full backup and incremental backup files. GaussDB 100 supports
incremental backup of an entire database, and allows users to run the BACKUP DATABASE
INCREMENTAL statement on zsql to incrementally back up a database. Level 0 indicates
the baseline backup. Level 0 backup must be performed before level 1 backup is performed
for the first time. Level 1 backup is based on the previous level 1 or level 0 backup.
l Backup compression
To reduce the space occupied by backup data, GaussDB 100 provides the backup compression
function. Database data can be compressed in stream data mode, generating compressed
backup files. Users can run the COMPRESSED BACKUPSET command to compress data
during backup.
l Database restoration
GaussDB 100 supports database restoration. Before restoring a database, ensure that there is
an available database backup file, the server has sufficient disk space, the database is in the
NOMOUNT state, and data files in the data directory have been deleted. A path specified
during restoration must be the same as that specified during backup.
GaussDB 100 supports database restoration in either synchronous or asynchronous mode. In
synchronous mode, the system returns execution results to the client only after restoration is
complete. In asynchronous mode, the database instance returns execution results to the client
after receiving the RESTORE DATABASE statement. To check whether the restoration is
successful, observe the STATUS column in the DV_BACKUP_PROCESSES view.
GaussDB 100 also supports Point-In-Time Recovery (PITR). Users can run the UNTIL
TIME statement to restore a database to a specified time point.
For details about how to restore a database, see GaussDB 100 V300R001C00 User Guide
(Standalone).
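As a minimal sketch of the statements named above, a backup and restoration session in zsql might look as follows. Only the keywords BACKUP DATABASE, INCREMENTAL, RESTORE DATABASE, UNTIL TIME, and the DV_BACKUP_PROCESSES view come from this section; the LEVEL clause and the timestamp format are assumptions, so check the User Guide for the exact syntax.

```sql
-- Full (level 0) backup; requires archiving mode and a primary database
-- in the OPEN state.
BACKUP DATABASE;

-- Incremental backup; level 1 is based on the previous level 1 or level 0
-- backup. The LEVEL clause here is an assumed form.
BACKUP DATABASE INCREMENTAL LEVEL 1;

-- Point-in-time recovery; the timestamp format is an assumed form.
RESTORE DATABASE UNTIL TIME '2019-06-06 12:00:00';

-- In asynchronous mode, observe the STATUS column to check progress.
SELECT STATUS FROM DV_BACKUP_PROCESSES;
```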
Introduction
GaussDB 100 supports data synchronization based on logical logs.
Description
GaussDB 100 logical replication parses table column changes recorded in redo logs to
reversely generate and replay SQL statements, and then executes the SQL statements on a
target database, replicating GaussDB 100 data changes to the target database in real time.
Logical replication is more flexible than physical replication, which has strong dependency on
the physical formats of logs. Logical replication can implement GaussDB 100 cross-version
replication and GaussDB 100 replication to other heterogeneous databases (such as Oracle
databases). It also provides customization support when the structures of source and target
database tables are inconsistent. Logical replication can be used for incremental data backup
between primary and standby databases, data synchronization between different service
systems, and online data migration during system upgrade.
GaussDB 100 can logically replicate data to Oracle and GaussDB 100 databases. For primary/
standby replication, you need to install, configure, and start the logical backup service on both
the primary and standby hosts; however, only the service on the primary host is in working
mode.
For details about how to create and maintain logical replication, see GaussDB 100
V300R001C00 User Guide (Standalone).
8 Database Security
Introduction
GaussDB 100 provides two levels of user permissions. It allows for object access permission
settings by user type, achieving refined access control.
Description
Database users are used to connect to databases, access database objects, and run SQL
statements. Only an existing database user can be used to connect to a database. Therefore,
database administrators need to plan a database user for anyone who wants to connect to a
database. GaussDB 100 supports user permission management. You can configure the
operation and access permissions for database objects, and the use permissions for database
functions, for different users.
GaussDB 100 provides two levels of user permissions.
l System permissions
System permissions are about system operations, for example, changing system
parameters. By default, only system administrators have the system permissions. After a
database is installed, the system administrator can grant system permissions to other
users. For security purposes, grant system permissions only to reliable users.
l Object permissions
Object permissions are about database object operations, such as INSERT, DELETE,
UPDATE, and SELECT permissions. The management of object permissions is
flexible. The system administrator can either grant all permissions or partial permissions
(such as SELECT and UPDATE) for certain database objects to users.
A database may be accessed by multiple users. To facilitate management, you can group
permissions and grant them to roles. Each permission group corresponds to a role. A role can
be granted to a user based on the user's permission level. In this way, the user obtains all the
permissions of this role; that is, the required permissions are granted to the user in a batch.
GaussDB 100 supports role-based permission management. Users can define roles. A role is a
set of multiple user permissions. If a role is granted to a user, the user will have all
permissions of this role.
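As an illustration of role-based permission management, the statements below group object permissions into a role and grant the role to a user. All names are hypothetical, and the statement forms follow common SQL conventions rather than a confirmed GaussDB 100 syntax reference.

```sql
-- Hypothetical role, table, and user names throughout.
CREATE ROLE report_reader;
GRANT SELECT, UPDATE ON sales.orders TO report_reader;
-- Granting the role gives the user all of its permissions in one step.
GRANT report_reader TO jim;
```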
Users can query system-provided views for information about system permissions, permission
allocation, and role permissions.
For details about permission management and precautions, see GaussDB 100 V300R001C00
Security Maintenance Guide (Standalone).
Introduction
GaussDB 100 supports database audit.
Description
Database audit, referred to as audit, records database-related operations in real time,
performs fine-grained compliance auditing of database operations, generates alarms for risks
that a database faces, and blocks attack behavior. Specifically, it records, analyzes, and reports
user access to a database to help users generate compliance reports for incident backtracking.
In addition, it enhances the management of internal and external database operation records,
improving data security.
GaussDB 100 supports audit on certain user operations. The DDL and DCL audit switches are
enabled by default. Audit logging is controlled by the AUDIT_LEVEL parameter.
GaussDB 100 supports the following audit operations:
l DDL audit: When AUDIT_LEVEL is set to 1, only DDL operations such as CREATE,
DROP, and ALTER of database objects or users are audited. DDL is used to define or
modify database objects, such as tables, indexes, views, synonyms, databases, sequences,
users, roles, tablespaces, and profiles.
l DCL audit: When AUDIT_LEVEL is set to 2, only DCL operations of database objects
are audited. DCL is used to set or change the permissions for database sessions and
objects. DCL operations include ALTER SESSION, COMMIT, ROLLBACK,
GRANT, REVOKE, SHUTDOWN, ALTER SYSTEM KILL SESSION, and LOCK
TABLE.
l DML audit: When AUDIT_LEVEL is set to 4, DML operations such as INSERT,
UPDATE, DELETE, and MERGE are audited, and SELECT and EXPLAIN PLAN
are also audited. DML is used to manage table data.
l PL audit: When AUDIT_LEVEL is set to 8, stored procedures of databases are audited.
l DDL audit + DCL audit: When AUDIT_LEVEL is set to 3, both DDL audit and DCL
audit are performed.
l DDL audit + DML audit: When AUDIT_LEVEL is set to 5, both DDL audit and DML
audit are performed.
l DCL audit + DML audit: When AUDIT_LEVEL is set to 6, both DCL audit and DML
audit are performed.
l All types of audit: When AUDIT_LEVEL is set to 15, DDL audit, DCL audit, DML
audit, and PL audit are all performed.
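The AUDIT_LEVEL values above combine as a bit mask (DDL = 1, DCL = 2, DML = 4, PL = 8): 1 + 2 = 3, 1 + 4 = 5, 2 + 4 = 6, and 1 + 2 + 4 + 8 = 15. As a sketch, enabling DDL and DML audit together might look as follows; the ALTER SYSTEM SET form is an assumption, so consult the parameter reference for the actual way to change AUDIT_LEVEL.

```sql
-- Enable DDL audit + DML audit: 1 + 4 = 5.
-- The ALTER SYSTEM SET form is illustrative, not confirmed syntax.
ALTER SYSTEM SET AUDIT_LEVEL = 5;
```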
GaussDB 100 audit allows for a customized log path and limits the maximum number and
size of audit logs.
For details about how to use the audit function, see GaussDB 100 V300R001C00 Security
Hardening Guide (Standalone).
Introduction
GaussDB 100 supports SSL connection encryption and authentication between a server and a
client. When an application is connected through a JDBC interface, you can enable the SSL
connection. In addition, GaussDB 100 allows database whitelists and blacklists to be
configured, limiting client addresses for accessing databases.
Description
SSL, a security protocol, is used to ensure data security and data integrity for Internet
communication. Configuring SSL to encrypt client/server communication enhances security.
To start a server in SSL mode, you must configure a device certificate and a private key file
on the server, which are specified by the SSL_CERT and SSL_KEY parameters. On Unix
operating systems, world or group access to the private key file must be forbidden. You
can run the chmod 0600 server-key.crt command to set the access permissions. If the private key
file needs password protection, set the SSL_KEY_PASSWORD parameter. GaussDB 100
uses SSL connection encryption and authentication to ensure user data security and integrity.
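A sketch of the server-side SSL settings described above. The parameter names come from this section; the file layout and paths are examples only, not a confirmed excerpt of an actual configuration file.

```ini
; Example paths only; SSL_CERT, SSL_KEY, and SSL_KEY_PASSWORD are the
; parameters named in this section.
SSL_CERT = /home/gaussdba/ssl/server-cert.crt
SSL_KEY  = /home/gaussdba/ssl/server-key.crt
; Set only if the private key file is password protected.
SSL_KEY_PASSWORD = ********
```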
Configure client access authentication to allow remote host access. You can configure the user
whitelist (zhba.conf), IP address whitelist (TCP_INVITED_NODES), and IP address
blacklist (TCP_EXCLUDED_NODES) to control remote connections to GaussDB 100. By
default, only local access is allowed.
l User whitelist: Only users listed in zhba.conf can access the database and they must
access the database through specified IP addresses.
l IP address whitelist: Only the specified IP addresses can be used to access the database.
l IP address blacklist: The specified IP addresses cannot be used to access the database.
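As an illustration of the whitelist and blacklist parameters above (the IP addresses are examples, and the configuration-file layout is an assumption):

```ini
; Only these client IP addresses may access the database:
TCP_INVITED_NODES = 192.168.0.10,192.168.0.11
; These client IP addresses are rejected:
TCP_EXCLUDED_NODES = 192.168.0.99
```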
For details about how to configure the SSL connection encryption, user whitelist, IP address
whitelist, and IP address blacklist, see GaussDB 100 V300R001C00 Security Hardening
Guide (Standalone).
Introduction
GaussDB 100 supports user profiles to restrict user resources and behavior, ensuring database
security.
Description
A profile is a means of restricting resources for database users. For example, a profile can
limit CPU resources for sessions or SQL statements, and can also define user password
management policies. After a database is created, the system contains a default profile named
DEFAULT. This profile is used when a user is created, unless another profile is specified.
In GaussDB 100, each database user can be configured with a profile, the DEFAULT profile
by default.
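As a sketch, creating a profile and assigning it to a user might look as follows. The LIMIT options shown follow common database conventions and are assumptions; see the User Guide and R&D documentation for the options GaussDB 100 actually supports.

```sql
-- Illustrative LIMIT options; not confirmed GaussDB 100 syntax.
CREATE PROFILE app_profile LIMIT FAILED_LOGIN_ATTEMPTS 5 PASSWORD_LIFE_TIME 90;
-- Assign the profile at user creation; otherwise the DEFAULT profile is used.
CREATE USER app_user IDENTIFIED BY database_123 PROFILE app_profile;
```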
For details about how to use user profiles, see GaussDB 100 V300R001C00 User Guide
(Standalone) and GaussDB 100 V300R001C00 R&D Documentation (Standalone).
9 Glossary
Term Description
A–E
ACID Atomicity, Consistency, Isolation, and Durability (ACID). These are a set of
features of database transactions in a DBMS.
archive thread A thread started when the archive function is enabled on a database. The
thread is used to archive database logs to a specified path.
atomicity One of the ACID features of database transactions. Atomicity means that a
transaction is an indivisible unit of work: either all of its operations are committed, or
none are. If an error occurs during transaction execution, the transaction is rolled back
to its pre-execution state.
backup A backup, or the process of backing up, refers to the copying and archiving
of computer data. Backup data can be used for restoration in case of data
loss.
checkpoint A mechanism that writes data from the database memory to disks at certain
points in time. GaussDB 100 periodically writes the data of both committed and
uncommitted transactions to disks. This data, together with redo logs, can be used for
database restoration if the database restarts or breaks down.
CLI Command-line interface (CLI). Users use the CLI to interact with
applications. Its input and output are based on texts. Commands are entered
through keyboards or similar devices and are compiled and executed by
applications. The results are displayed in text or graphic forms on the
terminal interface.
coding Coding is representing data and information using code so that it can be
processed and analyzed by a computer. Characters, digits, and other objects
can be converted into digital code, or information and data can be converted
into the required electrical pulse signals based on predefined rules.
concurrency control A DBMS service that ensures data integrity when multiple transactions
are concurrently executed in a multi-user environment. In a multi-threaded
GaussDB 100 environment, concurrency control ensures that database
operations are safe and all database transactions remain consistent at any
given time.
core dump When a program stops abnormally, a core dump (also called a memory dump or
system dump) records the state of the program's working memory at that point in
time. Other key state is often dumped at the same time, for example processor
registers, including the program counter and stack pointer, memory management
information, and OS flags. A core dump is often used to assist diagnosis and
debugging of computer programs.
core file A file that is created when memory overwriting, assertion failures, or access
to invalid memory occurs in a process, causing it to fail. This file is then
used for further analysis.
A core file stores memory dump data, and supports binary mode and
specified ports. The name of a core file consists of the word "core" and the
OS process ID.
The core file is available regardless of the type of platform.
data flow operator An operator that exchanges data among query fragments. By their
input/output relationships, data flows can be categorized into Gather flows,
Broadcast flows, and Redistribution flows. Gather combines data from multiple query
fragments into one. Broadcast forwards the data of one query fragment to multiple
query fragments. Redistribution reorganizes the data of multiple query fragments
and then redistributes the reorganized data to multiple query fragments.
database A collection of data that is stored together and can be accessed, managed,
and updated. Data in a database can be classified into types such as numeric,
full-text, and image data.
database file A binary file that stores user data and the internal data of a database system.
database HA GaussDB 100 provides a highly reliable HA solution. Every logical node in
GaussDB 100 is identified as a primary or standby node. At the same time,
only one GaussDB 100 node is identified as the primary server. In
GaussDB 100, standby nodes first perform full synchronization from the
primary node and later incremental synchronization. When the HA system
is running, the primary node can receive data read and write requests in
GaussDB 100.
DBLINK An object of the path from one database to another. A remote database
object can be queried with DBLINK.
dirty page A page that has been modified and is not written to a permanent device.
dump file A specific type of trace file. A dump file contains diagnostic data during an
event response, whereas a trace file contains continuously generated
diagnostic data.
durability One of the ACID features of database transactions. Transactions that have
been committed will permanently survive and not be rolled back.
error correction A technique that automatically detects and corrects errors in software
and data streams to improve system stability and reliability.
F–J
failover Automatic switchover from a faulty node to its standby node. Reversely,
automatic switchback from the standby node to the primary node is called
failback.
free space management A mechanism for managing free space in a table. This mechanism
enables a database system to record free space in each table and establish an
easy-to-find data structure, accelerating operations (such as INSERT) performed on
the free space.
GNU The GNU Project was publicly announced on September 27, 1983 by
Richard Stallman, aiming at building an OS composed wholly of free
software. GNU is a recursive acronym for "GNU's Not Unix!". Stallman
announced that GNU should be pronounced Guh-NOO. Technically, GNU is similar in
design to Unix, a widely used commercial OS. However, GNU is free software and
contains no Unix code.
GTS Global Time Server (GTS). It provides a logical clock for each node where strong
consistency is required.
incremental backup Incremental backup stores all file changes since the last valid backup.
index An ordered data structure in a DBMS. An index accelerates data query and
update in database tables.
isolation One of the ACID features of database transactions. Isolation means that the
operations inside a transaction and data used are isolated from other
concurrent transactions. Concurrent transactions do not disturb each other.
JDBC Java database connectivity (JDBC) is used to implement the Java APIs of
SQL statements. It provides unified access to multiple relational databases,
consisting of a set of classes and interfaces written in Java language.
junk tuple A tuple that has been deleted using a DELETE or UPDATE statement. When
deleting a tuple, GaussDB 100 only marks it as to be cleared. The VACUUM thread
then periodically clears these junk tuples.
K–O
log file A file to which a computer system writes a record of its activities.
metadata Data that provides information about other data. Metadata describes the
source, size, format, or other characteristics of data. In database columns,
metadata explains the content of a data warehouse.
P–T
page Smallest memory unit for row storage in the relational object structure in
GaussDB 100. The default size of a page is 8 KB.
primary server A node that receives data read and write requests in the GaussDB 100 HA
system and works with all standby servers. At any time, only one node in
the HA system is identified as the primary server.
QPS Queries Per Second (QPS), the number of queries that a server can
respond to per second.
query fragment Each query job can be split into one or more query fragments. Each query
fragment consists of one or more query operators and can independently
run on a node. Query fragments exchange data through data flow operators.
query operator An iterator or a query tree node, which is a basic unit for the execution
of a query. Execution of a query can be split into one or more query operators.
Common query operators include scan, join, and aggregation.
redo log A log that contains information required for performing an operation again
in a database. If a database is faulty, redo logs can be used to restore the
database to its original state.
relational database A database created using the relational model. It processes data
using methods of set algebra.
RPO Recovery point objective (RPO) refers to the latest status that a database
system and the data can be restored to after a disaster, and it is usually
represented by time.
RTO Recovery time objective (RTO) refers to the duration between the database
system failure caused by a disaster and its restoration to proper running.
schema A database object set that includes the logical structure, such as tables,
views, sequences, stored procedures, synonyms, clusters, and database
links.
shared pool A shared pool is created for repeatedly executed SQL statements to save
memory. It contains the explain trees and execution plans of given SQL
statements.
SSL Secure Sockets Layer (SSL) is a network security protocol first used by
Netscape. It is based on TCP/IP and uses public key technology. SSL supports a
wide range of networks and provides three basic security services, all based on
public key technology. SSL secures service communication over a network by
establishing a secure connection between a client and a server and then
sending data through this connection.
stop word In computing, stop words are words which are filtered out before or after
processing of natural language data (text), saving storage space and
improving search efficiency.
stored procedure A group of SQL statements compiled into a single execution plan and
stored in a large database system. Users can specify a name and parameters
(if any) for a stored procedure to execute the procedure.
system catalog A table storing meta information about a database. The meta information
includes user tables, indexes, columns, functions, and data types in a
database.
table A set of columns and rows. Each column is referred to as a field. Values in
each field represent a data type. For example, if a table contains three fields
of person names, cities, and states, it has three columns: Name, City, and
State. In every row in the table, the Name column contains a name, the City
column contains a city, and the State column contains a state.
tablespace A logical storage structure that contains tables, indexes, and other
objects. A tablespace provides an abstraction layer between physical data and
logical data, and provides storage space for all database objects. When you
create an object, you can specify the tablespace it belongs to.
thesaurus Standardized words or phrases that express document themes and are used
for indexing and retrieval.
U–Z
Xlog A transaction log. A logical node can have only one Xlog file.
zsql GaussDB 100 interactive terminal. zsql enables you to interactively enter
queries, issue them to GaussDB 100, and view the query results. Queries
can also be entered from files. zsql supports many meta commands and
shell-like commands, allowing you to conveniently compile scripts and
automate jobs.
Overview
GaussDB 100 is a high-performance, high-reliability distributed relational database
developed by Huawei Technologies Co., Ltd. It supports automatic horizontal sharding and
breaks the storage and performance bottlenecks of a single server, making it suitable for
massive data storage and processing. This document describes how to use the tools provided
by GaussDB 100.
GaussDB 100 software packages are classified into basic and compatible packages. They
differ in the names of various interfaces. The compatible package is used to offer
compatibility with the usage habits of mainstream databases in the industry. The interfaces
mentioned in this document use names from the basic package. If you have installed the
compatible package, you can use either the interface names of the basic package or those of
the compatible package by referring to Interface Mapping (Basic Packages vs. Compatible
Packages). For details about how to install the basic and compatible packages, see
"Installation and Deployment" in GaussDB 100 V300R001C00 V300R001C00 User Guide
(Standalone).
Intended Audience
Database administrators
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Example Conventions
The following table describes some example information in this document. You can replace
the example information as needed.
Information Description
Parameters of GaussDB 100 tools are parsed in sequence. If a parameter is specified
multiple times, the last value takes effect.
Format Description
[ x | y | ... ] Indicates that one item is selected from two or more options or no
item is selected.
{ x | y | ... } Indicates that one item is selected from two or more options.
{ x | y | ... } [ ... ] Indicates that at least one parameter can be selected. If multiple
parameters are selected, separate them with spaces.
{ x | y | ... } [ ,... ] Indicates that at least one parameter can be selected. If multiple
parameters are selected, separate them with commas (,).
Change History
Version Update Changed On
03 Added: 2019-06-06
l ztrst
Modified:
l Parameter description in GaussRoach.py
2 Client Tools
After the database is deployed, you need certain tools to conveniently connect to a database
for operations and commissioning. GaussDB 100 provides database connection tools for users
to conveniently perform operations on the database.
Function
Database Manager provides the following functions for database developers:
l Browsing database objects
l Creating database objects, such as the database, user, table, and index
l Executing SQL statements and SQL scripts
l Editing and executing PL/SQL statements
For details about how to use Data Studio, see Data Studio User Manual released with the tool.
2.2 zsql
zsql is a client tool provided by GaussDB 100. It provides basic database functions, such as
interaction and query. It also provides advanced features.
2.2.1 Overview
zsql provides a command line interface (CLI) to help users connect to and use GaussDB 100.
zsql has its own commands and environments.
Parameters
-w Time to wait before the client connection to the database times out. The
default value is 10s. This parameter can be used together with -q.
If you log in to a database through zsql in non-interactive mode, the plaintext password
may be exposed in the environment. Therefore, you are advised to log in to GaussDB 100
in interactive mode.
If no IP address is specified during password-free login, the first IP address of LSNR_ADDR
in the local configuration file will be used.
Scenario
Use zsql to connect to a GaussDB 100 server. Then, you can run SQL statements and perform
database operations.
Prerequisites
l The zsql tool has been installed on the client.
l The user for connection must have permission to access the database.
l Before remotely accessing a database through APIs such as zsql or JDBC, set LSNR_IP
and LSNR_PORT in the zengine.ini file. A maximum of eight listening IP addresses
can be set at a time, and they must be separated by commas (,). For details, see
"Database Usage > Configuring Client Access Authentication" in GaussDB 100
V300R001C00 User Guide (Standalone).
l Before remotely accessing a database, configure access authentication on the local client.
For details about how to configure client access authentication, see "Database Usage >
Configuring Client Access Authentication" in GaussDB 100 V300R001C00 User Guide
(Standalone).
Precautions
If the password of a database user contains the special character $, use the escape character \
to connect to the database through zsql. Otherwise, the login will fail.
Procedure
l Log in as a database administrator. (Only database administrators can use password-free
login.)
zsql
{ CONNECT | CONN } / AS SYSDBA [ip:port] [-D /home/gaussdba/data1] [-q] [-s
"silent_file"] [-w connect_timeout]
[ip:port] is optional. If it is not specified, the local host will be connected by default.
If a database administrator has started multiple database instances, you need to specify
the database directory (-D) when connecting to a specified database.
The -q parameter is used to cancel the SSL login authentication check. The -s parameter
is used to set the silent mode (no prompt) for SQL statement execution.
The -w parameter is used to set the timeout period for the client to wait for a connection
response from the database. Its values are -1, indicating that the client keeps waiting
without timeout restrictions; 0, indicating that the client does not wait and the server
directly returns a failure result; and n, indicating that the client waits for n seconds. The
default value is 10s. After this parameter is used, its value will be the response waiting
timeout period when the zsql process is started to connect to the database. After the
process startup, the timeout period will be used in waiting for a response for establishing
or reestablishing a new connection as well as that in queries. After the zsql process is
exited, the setting becomes invalid.
l Log in as a common database user.
GaussDB 100 supports the following login modes:
– Interactive login mode 1:
zsql user@ip:port [-D /home/gaussdba/data1] [-q] [-s "silent_file"] [-w
connect_timeout]
In this command, user indicates the name of the database user; the user's password is
entered interactively. ip:port indicates the IP address and port number of the host
where the database resides. The default port number is 1888.
If a database administrator has started multiple database instances, you need to specify
the database directory (-D) when connecting to a specified database.
The -q, -s, and -w parameters have the same meanings and behavior as described for the
administrator login mode above.
Examples
l Locally log in to a database as user gaussdba.
gaussdba@plat1~> zsql
SQL> CONN gaussdba/database_123@127.0.0.1:1888
connected.
l Start the zsql process and set a response waiting timeout period.
-- Start the zsql process and set the response waiting timeout period to 20s.
After the process is started, the timeout period for waiting for a connection
setup response will be 20s.
zsql gaussdba/database_123@127.0.0.1:1888 -w 20
connected.
-- Create a user jim and grant the CREATE SESSION permission to the user.
DROP USER IF EXISTS jim;
CREATE USER jim IDENTIFIED BY database_123;
GRANT CREATE SESSION TO jim;
-- Switch to the user. The timeout period for waiting for a reconnection
setup response will be also 20s.
CONN jim/database_123@127.0.0.1:1888
connected.
-- Exit the zsql process. The timeout period setting becomes invalid, and the
timeout period for waiting for a new connection setup response will be 10s
(default value).
EXIT
Scenario
Run SQL statements on zsql to add, delete, update, and query data, objects, and permissions.
SQL Execution
Take creating a table as an example:
CREATE TABLE place
(
place_ID NUMBER(4) not null,
STREET_ADDRESS VARCHAR2(40),
POSTAL_CODE VARCHAR2(12),
CITY VARCHAR2(30),
STATE_PROVINCE VARCHAR2(25),
state_ID CHAR(2)
) ;
When entering an SQL statement in the zsql interface, end the statement with a semicolon (;)
or a slash (/), and then press Enter to run it. If you use a slash (/), put it on a new
line and press Enter.
zsql also allows multiple SQL statements on a single line. In this case, use semicolons (;)
to separate them. zsql identifies each SQL statement by its trailing semicolon (;) and
executes the statements in sequence.
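As an example, the line below contains two statements separated by a semicolon; zsql runs the first, then the second. (The place table is the one created earlier in this chapter.)

```sql
SELECT COUNT(*) FROM place; SELECT CITY, STATE_PROVINCE FROM place;
```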
l Following the -c parameter, you can enter multiple common SQL statements and
separate them using semicolons (;); or enter only one stored procedure and end it with a
slash (/).
l If an object name contains $, an escape character (\) is needed.
Scenario
zsql can be used to execute SQL script files. A SQL script file is a .sql file that contains a set
of SQL statements. The maximum length of a line is 64 KB, and the maximum length of an
executable SQL statement is 1 MB.
Assume that the content of the SQL script file my_script.sql is as follows and the storage
path is /opt/userscripts/my_script.sql.
INSERT INTO COUNTRY
VALUES ('NGA','Nigeria','Africa','Western
Africa',923768.00,1960,111506000,51.6,65707.00,58623.00,'Nigeria','Federal
Republic','Olusegun
Obasanjo',2754,'NG');
SELECT Code, Name, Population
FROM COUNTRY
WHERE Population > 100000;
Run any of the following commands in the zsql command line to execute my_script.sql:
@/opt/userscripts/my_script.sql
Or
start /opt/userscripts/my_script.sql
Or
zsql user/password@ip:port -f "sql_script_file"
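zsql's script execution model described above (read the file, identify each semicolon-separated statement, run the statements in sequence) can be sketched in Python. sqlite3 stands in here for a real GaussDB connection, and the script file is written locally just for the demonstration; the run_script helper is hypothetical, not part of zsql:

```python
import sqlite3

def run_script(conn, path):
    """Read a .sql file and execute its semicolon-separated statements in order.
    Naive split: a semicolon inside a string literal would break this sketch."""
    with open(path) as f:
        text = f.read()
    for stmt in text.split(";"):
        stmt = stmt.strip()
        if stmt:
            conn.execute(stmt)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE country(code TEXT, name TEXT, population INTEGER)")

# A stand-in for /opt/userscripts/my_script.sql with two statements.
with open("my_script.sql", "w") as f:
    f.write("INSERT INTO country VALUES ('NGA','Nigeria',111506000);\n"
            "INSERT INTO country VALUES ('MLT','Malta',380200);")

run_script(conn, "my_script.sql")
rows = conn.execute(
    "SELECT code FROM country WHERE population > 100000").fetchall()
print(rows)
```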
Comment Conventions
The SQL scripts of GaussDB 100 support two comment formats:
l Single-line comment
Format: -- Comment
l Multi-line comment
Format: /*Comment*/
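A naive sketch of recognizing these two comment formats, for example when preprocessing a script; the strip_sql_comments helper is hypothetical and does not protect comment markers that appear inside string literals:

```python
import re

def strip_sql_comments(script):
    """Remove the two GaussDB 100 comment formats from a script (naive)."""
    script = re.sub(r"/\*.*?\*/", "", script, flags=re.S)  # multi-line /* ... */
    script = re.sub(r"--[^\n]*", "", script)               # single-line -- ...
    return script

cleaned = strip_sql_comments(
    "SELECT 1; -- trailing note\n/* block\ncomment */SELECT 2;")
print(cleaned)
```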
Scenario
zsql supports parameter binding, which means binding a variable to a name placeholder or
question mark placeholder in a prepared SQL statement. This effectively prevents SQL
injection. Parameter binding is widely used in applications. You can bind multiple parameters
to a SQL statement by entering their types and values in sequence.
Method
Placeholders are as follows:
l ? -- Single question mark
l :1 -- Colon and number
l :name -- Colon and variable name
zsql supports the following data types for bind parameters: CHAR, VARCHAR,
STRING, INT, INTEGER, UINT32, UNSIGNED INTEGER, BIGINT, REAL, DOUBLE,
DATE, TIMESTAMP, BLOB, CLOB, DECIMAL, NUMBER, BOOLEAN, and BOOL. For
details about the mapping, see Table 2-2.
Table 2-2 Data type mapping

Entered Type       Bound Type
----------------   ----------------
CHAR               CHAR
VARCHAR            VARCHAR
STRING             STRING
INT                INTEGER
INTEGER            INTEGER
UINT32             UINT32
BIGINT             BIGINT
REAL               REAL
DOUBLE             REAL
DATE               DATE
TIMESTAMP          TIMESTAMP
BLOB               STRING
CLOB               STRING
DECIMAL            DECIMAL
NUMBER             NUMBER
BOOLEAN            BOOLEAN
BOOL               BOOLEAN
Precautions
l Placeholders for parameter binding cannot appear in the SELECT list, that is,
placeholders are not allowed between SELECT and FROM.
l If an entered data type is not supported, an error will be reported.
Examples
Enter the following statement:
SELECT Code, Name, Population
FROM COUNTRY
WHERE Population > ?;
After you specify Direction, DataType, and BindValue for parameter binding, the following
information will be displayed:
Direction : in
DataType : int
BindValue: 1
Bind params successfully.
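The placeholder styles above map directly onto Python's sqlite3 module, which also supports question-mark (?) and named (:name) placeholders; this sketch (sqlite3 standing in for a GaussDB connection) shows why a bound value cannot alter the statement text, which is the essence of preventing SQL injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE country(code TEXT, name TEXT, population INTEGER)")
conn.executemany("INSERT INTO country VALUES (?, ?, ?)",
                 [("NGA", "Nigeria", 111506000), ("MLT", "Malta", 380200)])

# Question-mark placeholder: the value travels separately from the SQL text,
# so user input cannot change the statement's structure.
rows = conn.execute(
    "SELECT code, name, population FROM country WHERE population > ?",
    (1000000,)).fetchall()

# Named placeholder (:name), bound from a dictionary.
rows_named = conn.execute(
    "SELECT code FROM country WHERE code = :code", {"code": "MLT"}).fetchall()
print(rows, rows_named)
```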
Scenario
Executing SQL statements in silent mode writes all execution results to a specified file
instead of displaying them on the screen.
Procedure
Step 1 Specify the silent mode when connecting to a database.
zsql gaussdba@192.168.0.1 -s silent.log
Please enter password:
************
connected.
Step 3 Print the specified output file to view the execution results.
cat silent.log
----End
Examples
-- Connect to a database as user hr in silent mode, and specify the output log
file as silent.log:
zsql hr@127.0.0.1:1171 -s silent.log
-- Create a training table:
CREATE TABLE training(staff_id INT NOT NULL,course_name
CHAR(50),course_start_date DATETIME, course_end_date DATETIME,exam_date
DATETIME,score INT);
INSERT INTO
training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'SQL majorization','2017-06-15 12:00:00','2017-06-20
12:00:00','2017-06-25 12:00:00',90);
-- Exit the database:
exit
-- View silent.log:
cat silent.log
Succeed.
Scenario
On the zsql client, you can run the DESCRIBE command or its abbreviation DESC to view
the definition information about database objects (such as tables or views) or about the
columns returned by SELECT statements.
Syntax
l View the definition information about database objects.
DESCRIBE [-o | -O] object
Or
DESC [-o | -O] object
l View the column descriptions of a SELECT query, including each column's name,
nullable attribute, type, and size (in characters or bytes).
The column size displayed by DESC is the maximum size deduced during SQL
parsing; the returned column data will not exceed this size.
DESC -q SELECT expression
Examples
Query the definition of the privilege table.
-- Delete the privilege table.
DROP TABLE IF EXISTS privilege;
-- Create the privilege table.
CREATE TABLE privilege(staff_id INT PRIMARY KEY, privilege_name VARCHAR(64) NOT
NULL, privilege_description VARCHAR(64), privilege_approver VARCHAR(10));
-- Query the definition of the privilege table.
DESC privilege;
Scenario
Running the PROMPT command prints the specified text. You can add this command
to a script to display information to users.
NOTE
l Press Enter to end the input. Semicolons (;) at the end of a line are not displayed.
l The maximum length of an input string is 64 bytes.
Examples
Run the following statements to print user information:
PROMPT Please enter a username
PROMPT For example: jim
SELECT * FROM DV_DATABASE
Scenario
Running the SPOOL statement records execution output to a file, which is useful for keeping operation logs for database management.
Procedure
zsql can output execution results to an OS file by using SPOOL. The syntax is as follows:
-- Specify an output file, which can be either a relative path or an absolute
path:
SPOOL file_path
After the file for spooling is specified, zsql will directly output the execution results to the
file. The content of the file is similar to that displayed in the zsql command line. Spooling
will be stopped only after SPOOL OFF is specified.
If the file specified in the SPOOL statement does not exist, zsql will create a file. If the
specified file already exists, zsql will append the execution results to the file.
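The SPOOL semantics above (create or append to the file once it is specified, keep echoing to the screen, stop on SPOOL OFF) can be sketched with a small hypothetical Python class:

```python
class Spool:
    """Minimal sketch of SPOOL: once a file is set, every emitted line is
    appended to it until spooling is turned off (SPOOL OFF)."""
    def __init__(self):
        self.f = None
    def spool(self, path):          # SPOOL file_path
        self.f = open(path, "a")    # existing file: results are appended
    def off(self):                  # SPOOL OFF
        if self.f:
            self.f.close()
            self.f = None
    def emit(self, line):
        print(line)                 # screen output continues as usual
        if self.f:
            self.f.write(line + "\n")

s = Spool()
s.spool("spool.txt")
s.emit("This line is spooled")
s.off()
s.emit("This line is not spooled")
```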
Examples
Run the following statements:
SPOOL ./spool.txt
SELECT Code, Name, Population
FROM COUNTRY
WHERE Population > 100000;
SELECT 'This SQL will be output into ./spool.txt' FROM SYS_DUMMY;
SPOOL OFF;
SELECT 'This SQL will not be output into ./spool.txt' FROM SYS_DUMMY;
After these statements are executed, a file ./spool.txt will be generated in the current
directory, and the file content is as follows:
SQL> SELECT Code, Name, Population
SQL> FROM COUNTRY
SQL> WHERE Population > 100000;
CODE NAME POPULATION
---- ---------------------------------------------------- ------------
MLT Malta 380200
MMR Myanmar 45611000
MNG Mongolia 2662000
MTQ Martinique 395000
4 rows fetched.
SQL>
SQL> SELECT 'This SQL will be output into ./spool.txt' FROM SYS_DUMMY;
'THIS SQL WILL BE OUTPUT INTO
----------------------------------------
This SQL will be output into ./spool.txt
1 rows fetched.
SQL>
SQL> SPOOL OFF;
Syntax
SET [attr_name] [value]
Parameters
l attr_name
Specifies the attribute name.
The following attributes are supported:
– AUTO[COMMIT]
Specifies whether to automatically commit statements.
Valid value:
n ON: Commit.
n OFF: Do not commit.
Default value: OFF
– EXITC[OMMIT]
Specifies whether to commit data modified in a session when zsql is closed or
exited.
Valid value:
n ON: Commit.
n OFF: Do not commit.
Default value: ON
– CHARSET
Specifies the client character set.
Valid value:
n GBK
n UTF8
Default value: UTF8
– HEA[DING]
Specifies whether to display column headings.
Valid value:
n ON: Display.
n OFF: Do not display.
Default value: ON
– TRIMS[POOL]
Specifies whether to remove trailing blanks at the end of each line.
Valid value:
n ON: Remove.
n OFF: Do not remove.
Default value: OFF
– SERVEROUT[PUT]
Specifies whether to display server output information generated by the
DBMS_OUTPUT.PUT_LINE package.
Valid value:
n ON: Display.
n OFF: Do not display.
Default value: OFF
– LINE[SIZE]
Specifies the maximum number of characters allowed in a line. If the output content
for a line exceeds this number, the content will be truncated.
Value range: [0, +∞)
Default value: 0, which means no limit.
– NUM[WIDTH]
Specifies the width for displaying numbers.
– NEWP[AGE]
Specifies the number of empty rows between pages.
Value range: 0 or [1, +∞)
Default value: 1
– COLSEP
Specifies the separator between columns.
Value range: 'text'|"text"|text
Default value: ' ', indicating that the separator between columns is a space
– LONG
It is used only for syntax adaptation.
– DEFINE
Specifies whether to enable variable substitution. The substitution variable
identifier is an ampersand (&) by default.
n ON: Enable. In this case, if an input string contains a substitution variable
identifier, the string following the identifier is treated as the variable
name, and you will be prompted to enter the value of the substitution variable. For
example, when you enter the string SQL&Plus, the system prompts you to
enter the value of the substitution variable Plus. After you enter ABC, the
string SQL&Plus will be converted to SQLABC.
n OFF: Do not enable. The substitution variable identifier is entered as a
common character. For example, when you enter the string SQL&Plus, the
final input is also SQL&Plus.
n one_char can be used to change the substitution variable identifier, and only
one character is supported. In this case, the substitution variable function is
automatically enabled. For example, use SET DEFINE @ to set the
substitution variable identifier to @.
Default value: OFF
– OPLOG
Specifies whether to enable the function of recording operation logs on the zsql
client.
Valid value:
n ON: Enable.
n OFF: Do not enable.
Default value: ON
– ZSQL_SSL[_MODE|_CA|_CERT|_CRL|_CIPHER|_KEY|_KEY_PASSWD]
Specifies SSL-related attributes and the file path.
Default value: NULL
– CONNECT[_TIMEOUT]
Sets the timeout period for the client to wait for a connection response from the
database. After this parameter is modified, the timeout period for waiting for a
response from the current long connection is still 10s. The modification will take
effect for a new connection established in the background or for a connection
reestablished using conn name/password@ip:port. The modification will be invalid
after the zsql process exits.
Valid value:
n -1: Wait for a response from the server and never time out.
n 0: Do not wait.
n n: Wait for n seconds.
Default value: 10s
l value
Specifies an attribute value.
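The DEFINE substitution behavior described above can be sketched as follows; the interactive prompt is replaced by a values dictionary, and the substitute helper is hypothetical, not part of zsql:

```python
import re

def substitute(text, values, define="&"):
    """Replace <identifier><name> with values[name], mimicking zsql's
    SET DEFINE variable substitution. The identifier is one character."""
    pattern = re.escape(define) + r"(\w+)"
    return re.sub(pattern, lambda m: values[m.group(1)], text)

# Default identifier &: SQL&Plus with Plus=ABC becomes SQLABC.
print(substitute("SQL&Plus", {"Plus": "ABC"}))
# After SET DEFINE @, the identifier is @ instead.
print(substitute("SQL@Plus", {"Plus": "ABC"}, define="@"))
```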
Examples
l Set INTERACTIVE_TIMEOUT to 4200.
ALTER SYSTEM SET INTERACTIVE_TIMEOUT = 4200;
l View the current value of the substitution variable identifier.
SHOW DEFINE
Scenario
When you query parameter information, case-insensitive fuzzy matching is supported.
Syntax
SHOW [
PARAMETER[S]
| PARAMETER[S] parameter_name
| attr_name
]
Parameters
l PARAMETER[S]
Displays the names, types, and values of all configured parameters.
l PARAMETER[S] parameter_name
Displays the value of a specified parameter in the database.
Fuzzy query of parameters is supported.
l attr_name
Specifies the attribute name.
The following attributes are supported:
– AUTO[COMMIT]
Specifies whether to automatically commit statements.
n ON: Commit.
n OFF: Do not commit.
– EXITC[OMMIT]
Specifies whether to commit data modified in a session when zsql is closed or
exited.
n ON: Commit.
n OFF: Do not commit.
– CHARSET
Specifies the client character set.
– HEA[DING]
Specifies whether to display column headings.
n ON: Display.
n OFF: Do not display.
– SERVEROUT[PUT]
Specifies whether to display procedural output generated by the
DBMS_OUTPUT.PUT_LINE package.
n ON: Display.
n OFF: Do not display.
– TRIMS[POOL]
Specifies whether to remove trailing blanks at the end of each line.
n ON: Remove.
n OFF: Do not remove.
– SPOO[L]
Specifies whether to export data to a specified file.
n ON: Export.
n OFF: Do not export.
– LIN[ESIZE]
Specifies the maximum number of characters allowed in a line. If the output content
for a line exceeds this number, the content will be truncated.
– NUM[WIDTH]
Specifies the width for displaying numbers.
– PAGES[IZE]
Specifies the number of lines displayed on a page.
– TIM[ING]
Specifies whether to display SQL execution time.
n ON: Display.
n OFF: Do not display.
– FEED[BACK]
Specifies whether to display feedback when an SQL statement returns n or more
lines of records.
n ON: When the number of returned records exceeds the configured value n, in
the range [1, +∞), the feedback is displayed.
n OFF: No feedback is displayed.
– ECHO
Specifies whether to display the SQL statements in a script executed by using the
symbol @.
n ON: Display.
n OFF: Do not display.
– VER[IFY]
Specifies whether to display the confirmation information old sql is and new sql is
when the variable replacement command SET DEFINE is used.
n ON: Display.
n OFF: Do not display.
– TERM[OUT]
Specifies whether to display the execution information when SQL commands in a
script are executed by using the symbol @.
n ON: Display.
n OFF: Do not display.
– NEWP[AGE]
Specifies the number of blank lines between pages. The default value is 1.
– COLSEP
Specifies the separator between columns. The default value is ' '.
– LONG
It is used only for syntax adaptation.
– PARAMETER[S]
Displays the names, types, and values of all configured parameters.
– DEFINE
Specifies whether to enable variable substitution.
n ON: Enable.
n OFF: Do not enable.
– OPLOG
Specifies whether to enable the operation log function on the zsql client.
n ON: Enable.
n OFF: Do not enable.
– ZSQL_SSL[_MODE|_CA|_CERT|_KEY|_CRL|_KEY_PASSWD|_CIPHER]
Specifies SSL-related attributes.
– CONNECT[_TIMEOUT]
Queries the time to wait before a database connection times out.
Examples
l Query for parameters whose names contain the keyword interactive_timeout.
SQL> show parameter interactive_timeout;
NAME                   DATATYPE           VALUE
---------------------  -----------------  --------------------
INTERACTIVE_TIMEOUT    GS_TYPE_INTEGER    28800

NAME                   DATATYPE           VALUE
---------------------  -----------------  --------------------
CHECKPOINT_PERIOD      GS_TYPE_INTEGER    300
INTERACTIVE_TIMEOUT    GS_TYPE_INTEGER    28800
LOCK_WAIT_TIMEOUT      GS_TYPE_INTEGER    0
LONGSQL_TIMEOUT        GS_TYPE_INTEGER    10
REPL_WAIT_TIMEOUT      GS_TYPE_INTEGER    10
Scenario
During database migration or data backup, you need to import and export data. zsql allows
you to use the DUMP statement to export data.
Precautions
l If DUMP uses the -h, -u, or help parameter and ends with a semicolon (;) or slash (/),
the help information of the command will be displayed.
l If DUMP uses the -o or option parameter and ends with a semicolon (;) or slash (/), the
latest configuration item will be displayed.
l When DUMP is used to export data, SQL statements are assembled on the client before
being sent to a server, which then receives complete, specific DDL and DML statements.
l The server audit log {GSDB_DATA}/log/audit records only the DDL and DML
statements, instead of DUMP.
l Do not set the name of a target export file to the system or configuration file name.
Otherwise, the file may be overwritten or deleted.
Syntax
DUMP { TABLE table_name | QUERY "select_query" }
INTO FILE 'file_name'
[ FILE SIZE 'uint64_file_size' ]
[ { FIELDS | COLUMNS } ENCLOSED BY 'ascii_char' [ OPTIONALLY ] ]
[ { FIELDS | COLUMNS } TERMINATED BY 'string' ]
[ { LINES | ROWS } TERMINATED BY 'string' ]
[ CHARSET 'string' ];
Parameter Description
l table_name
Specifies the name of a table whose data is to be exported.
l select_query
Specifies the records to be exported. select_query is a SELECT clause.
l file_name
Specifies the name of a file to store exported data.
l uint64_file_size
Specifies the size of each file storing exported data. If a file is full, a new file will be
created to store data. The default value is 0, indicating that no new file will be created.
l FIELDS
Specifies the format of each column.
l COLUMNS
Specifies the format of each column. It is equal to FIELDS.
l ENCLOSED BY
Encloses column values with a pair of characters.
l ascii_char
Specifies the characters used to enclose each column value. For example, in "ABC", the
value is ABC and the characters used to enclose the value are double quotation marks
(""). By default, the characters are not specified.
Value range: a single ASCII character or an empty string. An empty string ('')
indicates that no characters are specified.
– Decimal ASCII characters range from 0 to 127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 2-3.
l OPTIONALLY
Encloses only character and binary data. By default, OPTIONALLY is not used.
l TERMINATED BY
Separates columns with delimiters.
l string
Specifies a column delimiter. The default value is a comma (,).
Value range: a single-byte ASCII character
– Decimal ASCII characters range from 0 to 127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 2-3.
l LINES
Separates rows with delimiters if a record contains multiple rows.
l ROWS
It is a synonym of LINES.
l string
Specifies a row delimiter. The default value is \n.
Value range: a single-byte ASCII character
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 2-3.
l CHARSET
Specifies the character set to be exported.
string
Currently, only the UTF8 (without BOM) character set (CHARSET = UTF8) and GBK
character set (CHARSET = GBK) are supported. The former is used by default.
l row_terminated_char
Specifies a row delimiter. The default value is \n.
Value range: a single-byte ASCII character
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 2-3.
NOTE
The single quotation mark (') is an escape character of SQL. If you need to use it as a delimiter, use two
single quotation marks ('') to represent it. For example:
.... enclosed by ''''....
The outer two single quotation marks are used to enclose the inner ones ('') that represent a single
quotation mark (').
If the character specified by ascii_char is included in the column value, the character will be escaped
again when the column value is exported. For example, if the specified character is the double quotation
mark (") which is included in the column value 1"1, the value will be exported as "1""1".
Examples
l Export the existing table places from the database, enclosing each column value
with vertical bars (|).
DUMP TABLE places INTO FILE '/gaussdb/backup/places_backup' FIELDS ENCLOSED
BY '|';
23 rows dumped.
l Export the rows whose STATED_ID is US from the table places. Each column value
is enclosed in single quotation marks ('), and columns are separated by vertical bars (|).
DUMP QUERY "SELECT STREET_ADDRESS,CITY FROM places WHERE STATED_ID = 'US'"
INTO FILE '/gaussdb/backup/places_backup'
COLUMNS ENCLOSED BY ''''
COLUMNS TERMINATED BY '|';
4 rows dumped.
-- Display the latest DUMP configuration items:
dump option;
Scenario
During database migration or data backup, you need to import and export data. zsql allows
you to use the LOAD statement to import data.
Precautions
l The target table into which data is imported must match the source file in the
number of columns and their data types.
l GaussDB 100 supports plain-text file import.
l If LOAD uses the -h, -u, or help parameter and ends with a semicolon (;) or slash (/), the
help information of the command will be displayed.
l If LOAD uses the -o or option parameter and ends with a semicolon (;) or slash (/), the
latest configuration item will be displayed.
l When LOAD is used to import data, SQL statements are assembled on the client before
being sent to a server, which then receives complete, specific DDL and DML statements.
l The server audit log {GSDB_DATA}/log/audit records only the DDL and DML
statements, instead of LOAD.
Syntax
LOAD DATA INFILE "file_name" INTO TABLE table_name
[{ FIELDS | COLUMNS } ENCLOSED BY 'ascii_char' [ OPTIONALLY ]]
[{ FIELDS | COLUMNS } TERMINATED BY 'string']
[{ LINES | ROWS } TERMINATED BY 'string']
[ TRAILING COLUMNS( COLUMN1[ , COLUMN2, ... ] ) ]
[ IGNORE uint64_num { LINES | ROWS }]
[ CHARSET string ]
[ THREADS uint32_threads ]
[ ERRORS uint32_num ]
[ NOLOGGING ]
[ NULL2SPACE ]
[ DEBUG ];
Parameter Description
l file_name
Specifies the path and name of a file to be imported.
l table_name
Specifies the name of a table to store imported data.
l FIELDS
Specifies the format of each column.
l COLUMNS
Specifies the format of each column. It is equal to FIELDS.
l ENCLOSED BY
Encloses column values with a pair of characters.
l ascii_char
Specifies the characters used to enclose each column value. By default, no characters are
specified.
Value range: a single ASCII character, or an empty string ('') which indicates that no
characters are specified.
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 2-4.
l OPTIONALLY
Encloses only character and binary data. By default, they are enclosed with a pair of
single quotation marks ('').
l TERMINATED BY
Separates columns with delimiters.
string
Specifies a column delimiter. The default value is a comma (,).
Value range: one or more ASCII characters. A maximum of 10 characters are allowed.
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 2-4.
l LINES
Separates rows with delimiters if a record contains multiple rows.
l ROWS
It is a synonym of LINES.
string
Specifies a row delimiter. The default value is \n.
Value range: a single ASCII character
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 2-4.
l IGNORE
Specifies the number of lines to be ignored.
l uint64_num
Ignores the first uint64_num lines. The default value is 0.
l THREADS
Specifies the number of threads for concurrent data import.
l uint32_threads
Specifies the number of threads for concurrent import. The default value is 1.
Multi-threaded import improves efficiency. A deviation is allowed when collecting
statistics about the number of errors. In addition, detailed information about
records that cause errors is recorded, and the errors will not affect the
subsequent import.
Value range: [1, 128]
l ERRORS
Specifies the number of SQL statements that are allowed to cause errors.
l uint32_num
Specifies the number of SQL statements that are allowed to cause errors. The default
value is 0.
l NOLOGGING
Does not record redo or undo logs for imported data. This parameter is available only
when the target table is set to append only.
l DEBUG
Prints debugging information generated during tool running to a screen.
l CHARSET
Specifies the character set to be imported.
string
Currently, only the UTF8 (without BOM) character set (CHARSET = UTF8) and GBK
character set (CHARSET = GBK) are supported. The former is used by default.
l TRAILING COLUMNS( COLUMN1[ , COLUMN2, ... ] )
Specifies the columns to which data is to be imported. COLUMN1[, COLUMN2, ...]
specifies column names and at least one name must be specified.
l NULL2SPACE
Inserts a space to replace NULL, if an empty value of the CHAR or LOB type is to be
imported and NOT NULL is specified.
NOTE
The single quotation mark (') is an escape character of SQL. If you need to use it as a delimiter, use two
single quotation marks ('') to represent it. For example:
.... enclosed by ''''....
The outer two single quotation marks are used to enclose the inner ones ('') that represent a single
quotation mark (').
If the character specified by ascii_char is included in the column value, the character will be escaped
again when the column value is exported. For example, if the specified character is the double quotation
mark (") which is included in the column value 1"1, the value will be exported as "1""1".
Examples
Import data from a file places_backup to a table places_new.
-- Create the file places_backup in the /gaussdb/backup directory:
|1|,|address_aa|,|0001|,|xian|,|shanxi|,|01|
|2|,|address_bb|,|0002|,|hangzhou|,|zhejiang|,|02|
|3|,|address_cc|,|0003|,|chengdu|,|sichuan|,|03|
|4|,|address_dd|,|0004|,|shenzhen|,|guangdong|,|04|
-- Create the table places_new:
CREATE TABLE places_new(place_id number(4,0),
STREET_ADDRESS VARCHAR(40),
POSTAL_CODE VARCHAR(12),
CITY VARCHAR(64),
STATE_PROVINCE VARCHAR(25),
STATE_ID CHAR(2)
);
-- Import places_backup to places_new:
LOAD DATA INFILE "/gaussdb/backup/places_backup" INTO TABLE places_new FIELDS
ENCLOSED BY '|' ;
Complete the data load.
4 rows are totally read.
0 rows are ignored.
4 rows are loaded into table.
4 rows are committed into table.
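How LOAD might split one line of the example file above, given FIELDS ENCLOSED BY '|' and the default comma delimiter, can be sketched with a hypothetical parser (it ignores the corner case of delimiters appearing inside enclosed values):

```python
def parse_line(line, enclosed_by="|", terminated_by=","):
    """Sketch of LOAD's per-line parsing for /gaussdb/backup/places_backup:
    split on the column delimiter, then strip the enclosure character."""
    fields = line.rstrip("\n").split(terminated_by)
    return [f.strip().strip(enclosed_by) for f in fields]

row = parse_line("|1|,|address_aa|,|0001|,|xian|,|shanxi|,|01|")
print(row)
```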
Description
EXP logically exports data from a database.
Precautions
l When EXP is used to export data, SQL statements are assembled on the client before
being sent to a server, which then receives complete, specific DDL and DML statements.
l The client operation log is specified by the LOG parameter and records the EXP
command.
l The server audit log {GSDB_DATA}/log/audit records the DDL and DML statements.
l User SYS cannot be used to logically export data.
l Users must have the required operation permissions on objects that will be logically
exported.
l If EXP uses the -h, help, or option parameter and ends with a semicolon (;) or slash
(/), the help information about the EXP command will be displayed.
l If FILETYPE=BIN is set, the following three types of files are exported: metadata files
(specified by users), data files (.D files), and LOB files (.L files).
l When data is logically exported, a metadata file and a data subdirectory are generated in
the specified export file path. If no file path is specified, they are generated in the current
path by default. If FILETYPE=BIN is set, the generated subfiles (data files and LOB
files) will be stored in the data subdirectory. If the specified metadata file and the
generated subfiles already exist, an error will be reported.
l If a file with the same name exists in the target directory when data is logically exported,
the system overwrites the existing file without any prompt.
Syntax
{EXP | EXPORT}[ keyword =param [ , ... ] ] [ ... ];
Parameters
l EXP
Specifies the command for logical export. It is equivalent to EXPORT.
l keyword
Specifies the keyword for logical export.
– USERS
Specifies users whose data is to be exported. Multiple users are separated by
commas (,), and % indicates all users.
No user can export data of user SYS, but user SYS can export the data of other
users. Common users must have the DBA role to export data of specified users.
Common users can export their own data only when they have the SELECT ANY
TABLE or READ ANY TABLE permission.
– TABLES
Specifies the tables to be exported. Multiple tables are separated by commas (,), and
% indicates all tables.
– DIST_RULES
Specifies distribution rules of exported data. Multiple rules are separated by
commas (,), and % indicates all rules.
This parameter is used only when GaussDB 100 is deployed in distributed mode.
– FILE
Specifies the file that stores exported data. The value is a file name and its full path,
enclosed with double quotation marks (""). If the path is not specified, the file will
be stored in the current directory where the command is executed.
– FILETYPE
Specifies the type of the files that store exported data.
The value can be TXT or BIN.
Default value: TXT
– LOG
Specifies the name and path of the log file generated during logical export. The
value must be enclosed with double quotation marks ("").
– COMPRESS
Specifies a compression level for data export.
Value range: [0,9]. 0 indicates no compression, and level 9 indicates the highest
compression ratio.
Default value: 0
– CONTENT
Specifies whether to export table data or table definitions.
Valid value:
n ALL: Export both.
n DATA_ONLY: Export only data.
n METADATA_ONLY: Export only table definitions.
Default value: ALL
– QUERY
Specifies the query condition for exporting a table. The value must be enclosed with
double quotation marks (""), for example, "where rownum <= 10".
– SKIP_COMMENTS
Specifies whether to add comments when exporting DDL.
Valid value:
n Y: Add comments.
n N: Do not add comments.
Default value: N
– FORCE
Specifies whether to export the next object when an error occurs.
Valid value:
n Y: Export.
n N: Do not export.
Default value: N
– SKIP_ADD_DROP_TABLE
Specifies whether to add DROP to the statement before exporting a table.
Valid value:
n Y: Do not add.
n N: Add.
Default value: N
– SKIP_TRIGGERS
Specifies whether to export triggers.
Valid value:
n Y: Do not export.
n N: Export.
Default value: N
– QUOTE_NAMES
Specifies whether to enclose exported objects with double quotation marks ("").
Valid value:
n Y: Enclose.
n N: Do not enclose.
Default value: N
– COMMIT_BATCH
Specifies the amount of data to be batch submitted.
Value range: natural number. 0 indicates that all the data in a table is committed at once.
Default value: 1000
– INSERT_BATCH
Specifies the amount of data inserted by a single INSERT statement.
Value range: natural number
Default value: 1
– FEEDBACK
Specifies how many records need to be exported to trigger the display of the export
progress.
Value range: natural number. 0 indicates that the progress is displayed once per
table.
Default value: 10000, indicating the progress is displayed when 10000 records are
exported.
– PARALLEL
Specifies the number of concurrent threads. This parameter is valid only for TEXT
mode.
Value range: natural number
n A value from 2 to 16 indicates the number of concurrent threads.
n Value 1 or a value greater than 16 indicates a single thread.
Default value: 0
– CONSISTENT
Specifies whether to export globally consistent data.
Valid value:
n Y: Export.
n N: Do not export.
Default value: N
– CREATE_USER
Specifies whether to export user definition statements, that is, DDL statements used
for creating users. This keyword must be used in conjunction with USERS.
Valid value:
n Y: Export.
n N: Do not export.
Default value: N
– ROLE
Specifies whether to export role (non-SYS) definition statements, that is, DDL
statements used for creating roles. This keyword must be used in conjunction with
USERS.
Valid value:
n Y: Export.
n N: Do not export.
Default value: N
– GRANT
Specifies whether to export GRANT statements of users or roles. This keyword
must be used in conjunction with USERS and ROLES.
Valid value:
n Y: Export.
n N: Do not export.
Default value: N
– TABLESPACE
Specifies whether to export all tablespaces. Currently, exporting all tablespaces
allows for only those created by users, excluding system-reserved ones. The file
storage directory is the same as the default tablespace path of the system.
Valid value:
n Y: Export.
n N: Do not export.
Default value: N
– TABLESPACE_FILTER
Specifies the filter for specifying tablespaces. Multiple tablespaces are separated by
commas (,). The specified tablespace is used only for filtering. No creation
statement is generated. The symbol % is not supported, that is, filtering all
tablespaces is not supported.
– WITH_CR_MODE
Specifies whether to add CR_MODE for exporting tables and index scripts.
Valid value:
n Y: Add.
n N: Do not add.
Default value: N
Examples
l Export data from tab1 and tab2 of the current user.
EXP TABLES=tab1,tab2 FILE="file1.dmp";
l Create a user TEST_USER and a role TEST_ROLE, grant permissions to them, and
export the user, role, and user table structure information.
-- Delete the existing user test_user:
DROP USER IF EXISTS test_user;
-- Delete the existing role test_role:
DROP ROLE test_role;
-- Create user test_user:
CREATE USER test_user IDENTIFIED BY 'huawei_123';
-- Create role test_role:
CREATE ROLE test_role IDENTIFIED BY 'exp_user123';
-- Grant permissions to test_user:
GRANT DBA TO test_user;
-- Grant permissions to test_role:
GRANT CREATE TABLE TO test_role;
-- Assign the test_role role to user test_user:
GRANT test_role TO test_user;
-- Export user test_user, role test_role, and table structure information of
the user:
EXP USERS = TEST_USER CONTENT = METADATA_ONLY CREATE_USER = Y ROLE = Y GRANT
= Y FILE = "file1.dmp";
l Export tablespaces.
EXP USERS = TEST_USER CONTENT = METADATA_ONLY TABLESPACE= Y FILE =
"file1.dmp";
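l As a hedged illustration of TABLESPACE_FILTER, export metadata while
filtering by two user-created tablespaces (the tablespace names ts_user1 and
ts_user2 are placeholders; as described above, no creation statements are
generated for the filtered tablespaces).

```sql
EXP USERS = TEST_USER CONTENT = METADATA_ONLY TABLESPACE_FILTER = ts_user1,ts_user2 FILE = "file1.dmp";
```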
Precautions
l When IMP is used to import data, SQL statements are assembled on the client before
being sent to a server, which then receives complete, specific DDL and DML statements.
l The client operation log is specified by the LOG parameter and records the IMP
command.
l The server audit log {GSDB_DATA}/log/audit records the DDL and DML statements.
l When you import a .bin file and set CONTENT to DATA_ONLY or METADATA_ONLY,
the file must have been exported with the same CONTENT value.
l If IMP uses the -h, help, or option parameter and ends with a semicolon (;) or slash (/),
the help information about IMP will be displayed.
l User SYS cannot be used to logically import data.
l When FILETYPE is TXT, a maximum of 8 KB data of the CLOB, BLOB, TEXT, or
IMAGE type can be imported.
l If FILETYPE is BIN, user, table, and remap cannot be selected. You can only fully
import an exported file.
l IMP can import a file exported from an earlier database version into the current database.
l If a file with the same name exists in the target directory when data is logically imported,
the system overwrites the existing file without any prompt.
Syntax
{IMP | IMPORT} [ keyword =param [ , ... ] ] [ ... ];
Parameters
l IMP
Specifies the command for logical import. It is equivalent to IMPORT.
l keyword
Specifies the keyword for logical import.
– USERS
Specifies users whose data is to be imported. Multiple users are separated by
commas (,), and % indicates all users.
A common user must have the DBA role to import data of other users (except
SYS). Common users can import their own data only when they have the required
permissions. For example, to import data for creating a table, the CREATE
TABLE permission is required.
– TABLES
Specifies the tables to be imported. Multiple tables are separated by commas (,),
and % indicates all tables.
– FILE
Specifies the file that stores imported data. The value is a file name and its full path,
enclosed with double quotation marks ("").
The file name must be specified. If no path is specified, the default path \pkg\bin\
will be used.
– LOG
Specifies the name and path of the log file generated during logical import. The
value must be enclosed with double quotation marks ("").
– FILETYPE
Specifies the type of the files that store imported data.
Valid value:
n TXT: TEXT format
n BIN: Binary format
Default value: TXT
– FULL
Specifies whether to perform a full import.
Valid value:
n Y: Perform.
n N: Do not perform.
Default value: N
– CONTENT
Specifies whether to import table data or table definitions.
Valid value:
n DATA_ONLY: Import only table data.
n METADATA_ONLY: Import only table definitions.
n ALL: Import both.
Default value: ALL
– REMAP_SCHEMA
Specifies schema mapping. For example, to import data from users A, B, C to user
D, REMAP_SCHEMA will be A,B,C:D.
n SHOW
Specifies whether to print SQL statements or import them.
Valid value:
○ Y: Print but do not import.
○ N: Import but do not print.
Default value: N
n FEEDBACK
Specifies how many records need to be imported to trigger the display of the
import progress.
Value range: natural number
Default value: 10000
n IGNORE
Specifies whether to execute the next statement when a statement fails to be
executed.
Valid value:
○ Y: Execute.
○ N: Do not execute.
Default value: N
n REMAP_TABLESPACE
Specifies tablespace mapping. For example, to import data from tablespace A
to tablespace B, REMAP_TABLESPACE will be A:B. Use commas (,) to
separate multiple mapping relationships.
n CREATE_USER
Specifies whether to import user definition statements, that is, DDL statements
for creating users.
Valid value:
○ Y: Import.
○ N: Do not import.
Default value: N
n PARALLEL
Specifies the number of parallel DML statements.
Value range: [1,32]
Default value: 1
n DDL_PARALLEL
Specifies the number of parallel DDL statements.
Value range: [1,32]
Default value: 1
n NOLOGGING
Specifies whether to enable nologging of redo logs. Only the bin format is
supported.
Valid value:
○ Y: Enable.
○ N: Disable.
Default value: N
n TIMING
Specifies whether to print the timing statistics about the import.
Valid value:
○ ON: Print.
○ OFF: Do not print.
Default value: OFF
n BATCH_COUNT
Specifies the number of rows to be imported in each batch. This parameter
takes effect only if filetype=bin is set.
Value range: [1,10000]
Default value: 10000
Examples
l Import data from tab1 and tab2 of the current user.
IMP TABLES=tab1,tab2 FILE="file1.dmp";
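l Building on REMAP_SCHEMA above, a hedged sketch that imports data exported
from user A into user D's schema (the user names and file name are
placeholders):

```sql
IMP USERS=A FILE="file1.dmp" REMAP_SCHEMA=A:D;
```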
Procedure
Step 1 Use zsql to log in to a database as user SYS.
zsql SYS/database_123@127.0.0.1:1888
Or
zsql / as SYSDBA
Step 3 Run WSR LIST to obtain the snapshot IDs corresponding to the two time points.
WSR LIST
Listing the lastest Completed Snapshots
Snap Id Snap Started DB_startup_time
--------------- ------------------- ------------------
2 2018-09-21 17:11:20 2018-09-20 11:44:36
1 2018-09-21 17:11:14 2018-09-20 11:44:36
NOTE
l The system automatically generates snapshots. To manually create a snapshot, run the following
command:
CALL WSR$CREATE_SNAPSHOT;
l For details about how to delete a snapshot, see Deleting a Specified Snapshot.
l To modify the WSR configuration, use the WSR$MODIFY_SETTING stored procedure. For
details, see Table 2-5.
l A unique snapshot ID is generated for each snapshot, whether it is created
automatically or manually, and the snapshot can then be operated on by its ID.
IDs are generated by the global sequence SNAP_ID$, starting at 1 and
incrementing by 1.
NOTE
l The path where the analysis report is generated can be modified based on site requirements, and the
modifier must have write permission for the path.
l When a performance analysis report is generated, a smaller snapshot ID must be placed before a
larger snapshot ID. Otherwise, the error "GS-00601, [1:3]sql syntax error: start_snap_id is greater
than end_snap_id!" will be reported and the message "WSR Report Build failed." will be returned.
----End
WSR-related Views
ADM_HIST_SNAPSHOT: historical snapshot information
The client configuration file zsql.ini is stored in the cfg directory in the parent directory
of the zsql installation directory. For example, if the storage path of zsql is /opt/
zenith/app/bin/zsql, the storage path of the configuration file zsql.ini will be /opt/
zenith/app/cfg/zsql.ini. To ensure database security, you are advised to set the
permission for the cfg directory to 700 and that for zsql.ini to 600.
NOTE
l The zsql.ini file needs to be created by users and stored in the cfg directory in the parent
directory of the zsql installation directory.
l The default value of ZSQL_INTERACTION_TIMEOUT is 5. If zsql.ini has the
ZSQL_INTERACTION_TIMEOUT parameter incorrectly set or missing, the default value
will be used.
Method 2: Configure a zsql environment variable
ZSQL_INTERACTION_TIMEOUT.
export ZSQL_INTERACTION_TIMEOUT=6
Method 1: Run the zsql command by using the -q parameter, which indicates quiet
(silent).
Method 2: Configure the file zsql.ini in the following format:
ZSQL_SSL_QUIET=TRUE
The client configuration file zsql.ini is stored in the cfg directory in the parent directory
of the zsql installation directory. For example, if the zsql directory is /opt/
zenith/app/bin/zsql, the configuration file directory will be /opt/zenith/app/cfg/zsql.ini.
To ensure database security, you are advised to set the permission for the cfg directory to
700 and that for zsql.ini to 600.
NOTE
l The zsql.ini file needs to be created by users and stored in the cfg directory in the parent
directory of the zsql installation directory.
l The default value of ZSQL_SSL_QUIET is FALSE. If zsql.ini has the ZSQL_SSL_QUIET
parameter incorrectly set or missing, the default value will be used.
Method 3: Configure a zsql environment variable ZSQL_SSL_QUIET (silent zsql
startup).
export ZSQL_SSL_QUIET=TRUE
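The two zsql.ini settings described above can be combined in a single file. The
following sketch writes the file and applies the recommended permissions; a
/tmp demo directory is used here for illustration, whereas a real deployment
uses the cfg directory in the parent directory of the zsql installation
directory (for example, /opt/zenith/app/cfg/zsql.ini).

```shell
# Demo directory; in production this is the cfg directory next to the zsql
# installation directory, e.g. /opt/zenith/app/cfg.
CFG_DIR=/tmp/zsql_demo_cfg
mkdir -p "$CFG_DIR"

# Both documented zsql.ini parameters; the values are examples from this manual.
cat > "$CFG_DIR/zsql.ini" <<'EOF'
ZSQL_INTERACTION_TIMEOUT=6
ZSQL_SSL_QUIET=TRUE
EOF

# Recommended permissions: 700 for the cfg directory, 600 for zsql.ini.
chmod 700 "$CFG_DIR"
chmod 600 "$CFG_DIR/zsql.ini"
```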
HA Rebuilding
l Description
HA rebuilding helps restore a standby node from its primary node.
l Rebuilding mode
Currently, only full rebuilding is supported.
– All data files and log files on a standby node need to be deleted, but the
configuration files and data file directories must be retained.
– The rebuilding command copies data from the primary node to the standby node.
After the rebuilding is complete, the primary and standby nodes will have the same
data files.
l Prerequisites
HA has been correctly configured.
l Related concepts
– HA rebuilding is needed upon initial HA configuration to ensure database
consistency between primary and standby nodes.
– HA rebuilding is needed if primary and standby nodes are not synchronized.
l Procedure
a. Configure primary-standby links in configuration files.
n LSNR_ADDR indicates an IP address for listening.
n REPL_PORT indicates a port for copying data between primary and standby
nodes.
n ARCHIVE_DEST_2 = SERVICE indicates a peer link.
For example:
On the primary node:
LSNR_ADDR = 127.0.0.1,172.16.1.123
REPL_PORT = 15401
ARCHIVE_DEST_2 = SERVICE=172.16.1.124:15401 SYNC
On the standby node:
LSNR_ADDR = 127.0.0.1,172.16.1.124
REPL_PORT = 15401
ARCHIVE_DEST_2 = SERVICE=172.16.1.123:15401 SYNC
COL
l Description
COL sets the width of a column.
l Syntax
-- Clear a column format:
COLUMN|COL clear;
-- Set a column width:
COLUMN|COL column_name FOR|FORMAT A{column_width};
-- Enable or disable column width settings:
COLUMN|COL column_name ON | OFF ;
l Parameters
– column_name
Column name
– column_width
Column width
NOTE
l By default, the width of a column is set to the default width supported by the zsql tool.
l ON and OFF mean enabling and disabling column width settings, respectively.
l Examples
COL F1 FOR A12
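The remaining syntax forms can be exercised the same way (F1 is a placeholder
column name):

```sql
-- Disable, then re-enable, the width setting for column F1:
COL F1 OFF;
COL F1 ON;
-- Clear the column format:
COL CLEAR;
```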
WHENEVER
l Description
WHENEVER determines whether to continue or exit a connection when a script
execution error occurs. This function is disabled by default. If neither COMMIT
nor ROLLBACK is specified when WHENEVER is enabled, the default value ROLLBACK will be used.
l Syntax
WHENEVER SQLERROR
{ CONTINUE [ COMMIT | ROLLBACK ]
| EXIT [ COMMIT | ROLLBACK ] }
l Examples
-- Perform a rollback and exit if there is an error:
whenever sqlerror exit rollback
-- Query a table that does not exist (the session exits with a rollback):
select 1 from not_exist_table;
You can also use the CLEAR statement to clear the current command line.
Examples
l Run the EXIT statement to exit the zsql tool.
EXIT
During the use of GaussDB 100, database installation, upgrade, uninstallation, and database
O&M are required. To facilitate database maintenance, GaussDB 100 provides a set of
database management tools.
Tool Description
The Python-based tools autofill incomplete long parameters when parsing commands. For
example, the python zctl.py --h, python zctl.py --he, and python zctl.py --help
commands produce the same result: the help information about zctl.py is returned.
3.2 install.py
Function
install.py is a tool for installing and deploying GaussDB 100 in standalone mode. It provides
the one-click installation function.
Syntax
l Show help information.
python install.py --help
Parameter Description
3.3 uninstall.py
Description
uninstall.py is a tool for uninstalling GaussDB 100 in standalone mode. It
provides the one-click uninstallation function.
Syntax
l Show the help page.
python uninstall.py --help
Parameters
Parameter Description
-g Specifies a non-root user to uninstall databases. Ensure the user has the
access permission to the uninstallation directory.
-D Specifies the data file path, that is, GAUSSDATA. The path must follow
the -F parameter.
-P Specifies that the tool connects to the database using a username and a
password. During command execution, you will be prompted to enter the
username and password for connecting to the database so as to stop the
database instance. The username is SYS, and the default password is
Changeme_123. If the process instance has been stopped in advance, the
entered username and password will neither undergo correctness
verification nor be used.
This parameter can be left empty. If empty, the database can be connected
in password-free login mode.
3.4 upgrade.py
Description
upgrade.py is used for GaussDB 100 upgrade and downgrade in standalone mode. It is in the
root directory of the GaussDB 100 database installation package and provides database
upgrade, downgrade, and rollback functions. upgrade.py requires Python 2.7.*.
Syntax
l Show help information.
python upgrade.py --help | -?
l Manual downgrade
python upgrade.py -t { upgrade-type | pretest | precheck | prepare | replace
| start | upgrade | sync | restart | upgrade-view | checkpoint | dbcheck |
flush } --package=path_to_package_file --backupdir=path_to_backup [--
GSDB_HOME=path_to_gsdb_home] [--GSDB_DATA=path_to_data_dir] [-f
cmd_config_file]
Parameters
--GSDB_HOME Specifies the installation directory where BIN and LIB files are stored.
You can enter a path.
This parameter is optional for the manual upgrade or downgrade in HA
or standalone mode. It can be left empty. If empty, the environment
variable GSDB_HOME will be used. If GSDB_HOME does not exist,
an error is reported.
This parameter cannot be set for the automatic upgrade or downgrade in
HA or standalone mode.
--GSDB_DATA Specifies the data file directory. You can enter multiple data paths
corresponding to the same BIN file for simultaneous upgrade or
downgrade.
This parameter is optional for the manual upgrade or downgrade in HA
or standalone mode. It can be left empty. If empty, the environment
variable GSDB_DATA will be used. Only the databases whose data file
directory is specified by GSDB_DATA can be upgraded. If the
environment variable is omitted, an error will be reported.
This parameter cannot be set for the automatic upgrade or downgrade in
HA or standalone mode.
--package Specifies the installation package of the new version. You need to enter
an absolute path.
This parameter is mandatory for the manual upgrade or downgrade in
HA or standalone mode and cannot be left empty.
This parameter cannot be set for the automatic upgrade or downgrade in
HA or standalone mode.
--backupdir Specifies the backup folder, that is, the location where the backup system
tablespace, admin, and LIB files of the old version are stored during an
upgrade or downgrade. The backup folder can be created by using
scripts.
This parameter is mandatory for the manual upgrade or downgrade in
HA or standalone mode and cannot be left empty.
This parameter cannot be set for the automatic upgrade or downgrade in
HA or standalone mode.
-P Specifies that the tool connects to the database using a username and a
password. During the command execution, you will be prompted to enter
the username and password for connecting to the database. The username
is SYS, and the default password is Changeme_123.
This parameter is available and can be left empty. If empty, the database
can be connected in password-free login mode.
3.5 zctl.py
Description
l zctl.py is a tool for controlling GaussDB 100 in standalone mode. It provides functions
such as starting a database, stopping a database, and viewing database status.
l zctl.py must be executed by database installation users.
l A zctl.py log is generated in the log directory of each instance. The file name format is
zctl-YYYY-MM-DD_xxxxxx.log, where YYYY is the year, MM is the month, DD is the day, and
xxxxxx is a randomly generated six-digit number, for example, zctl-2018-10-01_055246.log.
l zctl.py logs can be dumped; a maximum of 10 log files is retained, each no
larger than 10 MB.
l zctl.py logs record usernames and host information. If the username or host information
in a zctl log is NULL, check whether the user used non-SSH login (for example, VNC or
other login modes) or check the zctl.py execution mode.
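The log naming convention above can be checked mechanically. This small sketch
only validates that a sample name (taken from this manual) matches the
documented zctl-YYYY-MM-DD_xxxxxx.log pattern:

```shell
# Sample name from the manual; the pattern encodes a date plus a random
# six-digit number, as documented.
NAME="zctl-2018-10-01_055246.log"
if echo "$NAME" | grep -Eq '^zctl-[0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{6}\.log$'; then
  echo "matches documented format"
fi
```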
Syntax
l Show the help page.
l Start a database.
python zctl.py -t start [-D DATADIR] [-m START-MODE]
l Stop a database.
python zctl.py -t stop [-D DATADIR] [-m SHUTDOWN-MODE] [-P]
Parameters
Parameter Description
-m Specifies the mode used to start or stop the database, rebuild
the database baseline, or demote the primary database to
standby.
l In database start scenarios, -m can be set to nomount, mount,
or open. If it is not specified, value open will be used.
l In database stop scenarios, -m can be set to immediate,
abort, or normal. If it is not specified, value normal will be
used.
l When the database baseline is rebuilt, the mode can be
standby or cascaded. standby rebuilds the database as a
standby. cascaded rebuilds the database as a cascaded
standby. If -m is not specified, the rebuild mode depends on
the peer database. If the peer is a standby database, the
database is rebuilt as a cascaded standby. If the peer is a primary
database, the database is rebuilt as a standby.
l When the primary database is demoted to standby, the mode
can be standby or cascaded. standby demotes the primary
database to standby. cascaded demotes the primary or standby
database to cascaded standby. If -m is not specified, the
database is demoted to standby by default.
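For example, the start and stop modes described above can be passed as follows
(a hedged illustration; the data directory is omitted, so the default instance
is assumed):

```shell
python zctl.py -t start -m open
python zctl.py -t stop -m immediate
```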
switchover Performs a switchover when both primary and standby nodes are
running properly. This command is executed on a standby node.
Command Scenario
demote Demotes a primary node to standby when the node is faulty and a
standby node has been promoted to primary by failover. This
command is executed on a primary node.
3.6 zencrypt
Description
zencrypt encrypts the password for user SYS in standalone deployment, enhancing
communication security without disturbing users. By default, zencrypt operation logs are
stored in $GSDB_HOME/log/oper. The log files are named zencrypt.olog. The number of
zencrypt logs that can be retained is 10; the maximum size of a log file is 10 MB; the log file
permission is 600; and the log directory permission is 700.
NOTE
l If the key factor (_FACTOR_KEY) is updated online, the working key (LOCAL_KEY) will be
updated synchronously. If an SSL private key password has been configured, it will also be updated
synchronously.
l You can use zencrypt -g -f <factor_key> to specify a key factor to generate a new working key, and
then update the working key by running ALTER SYSTEM or updating the configuration file.
l For security purposes, the zencrypt tool can be executed only by non-root users. If it is executed by
user root, an error will be reported. The following is an example.
"root" execution of the zencrypt tool is not permitted.
Syntax
l Show the help page.
zencrypt {-h|-H}
Parameters
3.7 ztrst
Description
ztrst is a restoration tool for GaussDB 100 in standalone mode. It can restore data of a
specified schema based on the full physical backup file in one-click mode. Backup files and
database instances can be distributed on different servers.
Syntax
l Show help information.
ztrst -h|-H
l Restore the data of a specified schema based on the full physical backup file in one-click
mode.
ztrst -p [syspassword:]port -D export_data_path -B backup_file_path -U
schema[/passwd] -T tablespace_name -S ip:port [-C import_parameters]
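A hedged example invocation, in which every value (temporary port, paths,
schema name, tablespace name, and target address) is a placeholder:

```shell
ztrst -p 1888 -D /home/ztrstdba/export_data -B /home/ztrstdba/backup -U schema1 -T users_tbs -S 192.168.0.10:1888
```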
Parameters
-p (lowercase) Specifies the password of database user SYS and the temporary
port for restoration. The value format is syspassword:port. The
temporary port used for restoration must be specified; the
password of user SYS is optional. If the password is not
specified, you are prompted to enter it in interactive mode.
Add the bin path in the tool package to the environment variable PATH.
export PATH=$PATH:/home/ztrstdba/GAUSSDB100-V300R001C00-RESTORE/bin
If you do not add the bin path in the tool package to the environment variable PATH, you
must run the tool from its bin path as ./ztrst. If you add the path, you can run it as
either ztrst or ./ztrst.
This chapter describes tools used in GaussDB 100 processes or invoked among modules.
These tools are used only for internal invoking and their accuracy in other scenarios has not
been verified. Therefore, you are not advised to execute tasks by directly using these tools to
avoid impact on the system.
4.1 sql_process.py
Description
sql_process.py is used to generate SQL files for standalone GaussDB 100 upgrades. It
compares the initdb.sql files of old and new versions to locate the differences in system
catalogs and generates SQL files for upgrading the system catalogs.
sql_process.py requires Python 2.7.*.
Syntax
l Show the help page.
python sql_process.py -?|--help
Parameters
--new-initdb Specifies the name of the initdb.sql file of the new version. It is an
absolute path. You must assign a value to this parameter.
--old-initdb Specifies the name of the initdb.sql file of the old version. It is an
absolute path. You must assign a value to this parameter.
--outdir Specifies the output folder of SQL files used for the upgrade. You must
assign a value to this parameter.
4.2 zengine
Description
zengine is an internal tool that starts a database; you are not advised to use it directly.
Syntax
l Show the help page.
zengine [-h|-H]
Parameters
l -h/-H
Obtains help information.
l -v/-V
Obtains version information.
l mode
Specifies the startup mode. Its value can be NOMOUNT, MOUNT, or OPEN. This
parameter is optional. If no startup mode is specified, the database is started in OPEN
mode.
l db_home_path
Specifies the GSDB_HOME path.
l node_type
(Valid only in distributed deployment) Specifies the start mode of a database instance.
The value can be --coordinator or --datanode. --coordinator indicates that a database
CN is started, and --datanode indicates that a database DN is started.
4.3 shutdowndb.sh
Description
shutdowndb.sh is a script for graceful shutdown. You are not advised to use it directly.
Syntax
shutdowndb.sh -h HOSTIP -p PORT -U SYS -w|-W -m IMMEDIATE|NORMAL|ABORT [-D
GSDB_DATADIR]
Parameters
l -h/--host
Specifies a database host.
l -p/--port
Specifies a port.
l -U/--username
Specifies a database user.
l -W/--password
Specifies a password for the user. You must enter both the username and password for
login.
l -w/--no-password
Specifies password-free login.
l -D/--data-directory
Specifies an instance data directory when an OS user installs multiple database instances.
This parameter can be skipped if username- and password-based login is used.
l -m
Specifies the stop mode, supporting IMMEDIATE, NORMAL, and ABORT. The
value must be all uppercase or lowercase.
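A hedged example that stops the instance immediately, reusing the host and port
from the zsql login example earlier in this chapter and password-free login:

```shell
shutdowndb.sh -h 127.0.0.1 -p 1888 -U SYS -w -m IMMEDIATE
```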
SYS_BACKUP_SETS BACKUP_SET$
SYS_COLUMNS COLUMN$
SYS_COMMENTS COMMENT$
SYS_CONSTRAINT_DEFS CONSDEF$
SYS_DATA_NODES DATA_NODES$
EXP_TAB_ORDERS DBA_EXP$TBL_ORDER
EXP_TAB_RELATIONS DBA_EXP$TBL_RELATIONS
SYS_DEPENDENCIES DEPENDENCY$
SYS_DISTRIBUTE_RULES DISTRIBUTE_RULE$
SYS_DISTRIBUTE_STRATEGIES DISTRIBUTE_STRATEGY$
SYS_DUMMY DUAL
SYS_EXTERNAL_TABLES EXTERNAL$
SYS_GARBAGE_SEGMENTS GARBAGE_SEGMENT$
SYS_HISTGRAM_ABSTR HIST_HEAD$
SYS_HISTGRAM HISTGRAM$
SYS_INDEXES INDEX$
SYS_INDEX_PARTS INDEXPART$
SYS_JOBS JOB$
SYS_LINKS LINK$
SYS_LOBS LOB$
SYS_LOB_PARTS LOBPART$
SYS_LOGIC_REPL LOGIC_REP$
SYS_DML_STATS MON_MODS_ALL$
SYS_OBJECT_PRIVS OBJECT_PRIVS$
SYS_PART_COLUMNS PARTCOLUMN$
SYS_PART_OBJECTS PARTOBJECT$
SYS_PART_STORES PARTSTORE$
SYS_PENDING_DIST_TRANS PENDING_DISTRIBUTED_TRANS$
SYS_PENDING_TRANS PENDING_TRANS$
SYS_PROCS PROC$
SYS_PROC_ARGS PROC_ARGS$
SYS_PROFILE PROFILE$
SYS_RECYCLEBIN RECYCLEBIN$
SYS_ROLES ROLES$
SYS_SEQUENCES SEQUENCE$
SYS_SHADOW_INDEXES SHADOW_INDEX$
SYS_SHADOW_INDEX_PARTS SHADOW_INDEXPART$
SYS_SYNONYMS SYNONYM$
SYS_PRIVS SYS_PRIVS$
SYS_TABLES TABLE$
SYS_TABLE_PARTS TABLEPART$
SYS_TMP_SEG_STATS TMP_SEG_STAT$
SYS_USERS USER$
SYS_USER_HISTORY USER_HISTORY$
SYS_USER_ROLES USER_ROLES$
SYS_VIEWS VIEW$
SYS_VIEW_COLS VIEWCOL$
SYS_SQL_MAPS SQL_MAP$
WSR_PARAMETER WRH$_PARAMETER
WSR_SQLAREA WRH$_SQLAREA
WSR_SYS_STAT WRH$_SYSSTAT
WSR_SYSTEM WRH$_SYSTEM
WSR_SYSTEM_EVENT WRH$_SYSTEM_EVENT
WSR_SNAPSHOT WRM$_SNAPSHOT
WSR_CONTROL WRM$_WR_CONTROL
WSR_DBA_SEGMENTS WSR$_DBA_SEGMENTS
WSR_LATCH WSR$_LATCH
WSR_LIBRARYCACHE WSR$_LIBRARYCACHE
WSR_SEGMENT WSR$_SEGMENT
WSR_SQL_LIST WSR$SQL_LIST
WSR_WAITSTAT WSR$_WAITSTAT
DB_DB_LINKS ALL_DB_LINKS
DB_IND_STATISTICS ALL_IND_STATISTICS
DB_JOBS ALL_JOBS
DB_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS
DB_USERS ALL_USERS
DB_USER_SYS_PRIVS ALL_USER_SYS_PRIVS
ADM_ARGUMENTS DBA_ARGUMENTS
ADM_BACKUP_SET DBA_BACKUP_SET
ADM_COL_COMMENTS DBA_COL_COMMENTS
ADM_CONSTRAINTS DBA_CONSTRAINTS
ADM_DATA_FILES DBA_DATA_FILES
ADM_DBLINK_TABLES DBA_DBLINK_TABLES
ADM_DBLINK_TAB_COLUMNS DBA_DBLINK_TAB_COLUMNS
ADM_DEPENDENCIES DBA_DEPENDENCIES
ADM_FREE_SPACE DBA_FREE_SPACE
ADM_HISTOGRAMS DBA_HISTOGRAMS
ADM_HIST_DBASEGMENTS DBA_HIST_DBASEGMENTS
ADM_HIST_LATCH DBA_HIST_LATCH
ADM_HIST_LIBRARYCACHE DBA_HIST_LIBRARYCACHE
ADM_HIST_LONGSQL DBA_HIST_LONGSQL
ADM_HIST_PARAMETER DBA_HIST_PARAMETER
ADM_HIST_SEGMENT DBA_HIST_SEGMENT
ADM_HIST_SNAPSHOT DBA_HIST_SNAPSHOT
ADM_HIST_SQLAREA DBA_HIST_SQLAREA
ADM_HIST_SYSSTAT DBA_HIST_SYSSTAT
ADM_HIST_SYSTEM DBA_HIST_SYSTEM
ADM_HIST_SYSTEM_EVENT DBA_HIST_SYSTEM_EVENT
ADM_HIST_WAITSTAT DBA_HIST_WAITSTAT
ADM_HIST_WR_CONTROL DBA_HIST_WR_CONTROL
ADM_INDEXES DBA_INDEXES
ADM_IND_COLUMNS DBA_IND_COLUMNS
ADM_IND_PARTITIONS DBA_IND_PARTITIONS
ADM_IND_STATISTICS DBA_IND_STATISTICS
ADM_JOBS DBA_JOBS
ADM_JOBS_RUNNING DBA_JOBS_RUNNING
ADM_OBJECTS DBA_OBJECTS
ADM_PART_COL_STATISTICS DBA_PART_COL_STATISTICS
ADM_PART_KEY_COLUMNS DBA_PART_KEY_COLUMNS
ADM_PART_STORE DBA_PART_STORE
ADM_PART_TABLES DBA_PART_TABLES
ADM_PROCEDURES DBA_PROCEDURES
ADM_PROFILES DBA_PROFILES
ADM_ROLES DBA_ROLES
ADM_ROLE_PRIVS DBA_ROLE_PRIVS
ADM_SEGMENTS DBA_SEGMENTS
ADM_SEQUENCES DBA_SEQUENCES
ADM_SOURCE DBA_SOURCE
ADM_SYNONYMS DBA_SYNONYMS
ADM_SYS_PRIVS DBA_SYS_PRIVS
ADM_TABLES DBA_TABLES
ADM_TABLESPACES DBA_TABLESPACES
ADM_TAB_COLS DBA_TAB_COLS
ADM_TAB_COLUMNS DBA_TAB_COLUMNS
ADM_TAB_COL_STATISTICS DBA_TAB_COL_STATISTICS
ADM_TAB_COMMENTS DBA_TAB_COMMENTS
ADM_TAB_DISTRIBUTE DBA_TAB_DISTRIBUTE
ADM_TAB_MODIFICATIONS DBA_TAB_MODIFICATIONS
ADM_TAB_PARTITIONS DBA_TAB_PARTITIONS
ADM_TAB_PRIVS DBA_TAB_PRIVS
ADM_TAB_STATISTICS DBA_TAB_STATISTICS
ADM_TRIGGERS DBA_TRIGGERS
ADM_USERS DBA_USERS
ADM_VIEWS DBA_VIEWS
ADM_VIEW_COLUMNS DBA_VIEW_COLUMNS
DB_ARGUMENTS ALL_ARGUMENTS
DB_COL_COMMENTS ALL_COL_COMMENTS
DB_CONSTRAINTS ALL_CONSTRAINTS
DB_DBLINK_TABLES ALL_DBLINK_TABLES
DB_DBLINK_TAB_COLUMNS ALL_DBLINK_TAB_COLUMNS
DB_DEPENDENCIES ALL_DEPENDENCIES
DB_DISTRIBUTE_RULES ALL_DISTRIBUTE_RULES
DB_DIST_RULE_COLS ALL_DIST_RULE_COLS
DB_HISTOGRAMS ALL_HISTOGRAMS
DB_INDEXES ALL_INDEXES
DB_IND_COLUMNS ALL_IND_COLUMNS
DB_IND_PARTITIONS ALL_IND_PARTITIONS
DB_OBJECTS ALL_OBJECTS
DB_PART_COL_STATISTICS ALL_PART_COL_STATISTICS
DB_PART_KEY_COLUMNS ALL_PART_KEY_COLUMNS
DB_PART_STORE ALL_PART_STORE
DB_PART_TABLES ALL_PART_TABLES
DB_PROCEDURES ALL_PROCEDURES
DB_SEQUENCES ALL_SEQUENCES
DB_SOURCE ALL_SOURCE
DB_SYNONYMS ALL_SYNONYMS
DB_TABLES ALL_TABLES
DB_TAB_COLS ALL_TAB_COLS
DB_TAB_COLUMNS ALL_TAB_COLUMNS
DB_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS
DB_TAB_COMMENTS ALL_TAB_COMMENTS
DB_TAB_DISTRIBUTE ALL_TAB_DISTRIBUTE
DB_TAB_PARTITIONS ALL_TAB_PARTITIONS
DB_TAB_STATISTICS ALL_TAB_STATISTICS
DB_TRIGGERS ALL_TRIGGERS
DB_VIEWS ALL_VIEWS
DB_VIEW_COLUMNS ALL_VIEW_COLUMNS
ROLE_SYS_PRIVS ROLE_SYS_PRIVS
MY_ARGUMENTS USER_ARGUMENTS
MY_COL_COMMENTS USER_COL_COMMENTS
MY_CONSTRAINTS USER_CONSTRAINTS
MY_CONS_COLUMNS USER_CONS_COLUMNS
MY_DEPENDENCIES USER_DEPENDENCIES
MY_FREE_SPACE USER_FREE_SPACE
MY_HISTOGRAMS USER_HISTOGRAMS
MY_INDEXES USER_INDEXES
MY_IND_COLUMNS USER_IND_COLUMNS
MY_IND_PARTITIONS USER_IND_PARTITIONS
MY_IND_STATISTICS USER_IND_STATISTICS
MY_JOBS USER_JOBS
MY_OBJECTS USER_OBJECTS
MY_PART_COL_STATISTICS USER_PART_COL_STATISTICS
MY_PART_KEY_COLUMNS USER_PART_KEY_COLUMNS
MY_PART_STORE USER_PART_STORE
MY_PART_TABLES USER_PART_TABLES
MY_PROCEDURES USER_PROCEDURES
MY_ROLE_PRIVS USER_ROLE_PRIVS
MY_SEGMENTS USER_SEGMENTS
MY_SEQUENCES USER_SEQUENCES
MY_SOURCE USER_SOURCE
MY_SQL_MAPS USER_SQL_MAPS
MY_SYNONYMS USER_SYNONYMS
MY_SYS_PRIVS USER_SYS_PRIVS
MY_TABLES USER_TABLES
MY_TAB_COLS USER_TAB_COLS
MY_TAB_COLUMNS USER_TAB_COLUMNS
MY_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS
MY_TAB_COMMENTS USER_TAB_COMMENTS
MY_TAB_DISTRIBUTE USER_TAB_DISTRIBUTE
MY_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS
MY_TAB_PARTITIONS USER_TAB_PARTITIONS
MY_TAB_PRIVS USER_TAB_PRIVS
MY_TAB_STATISTICS USER_TAB_STATISTICS
MY_TRIGGERS USER_TRIGGERS
MY_USERS USER_USERS
MY_VIEWS USER_VIEWS
MY_VIEW_COLUMNS USER_VIEW_COLUMNS
NLS_SESSION_PARAMETERS NLS_SESSION_PARAMETERS
DV_ALL_TRANS V$ALL_TRANSACTION
DV_ARCHIVED_LOGS V$ARCHIVED_LOG
DV_ARCHIVE_DEST_STATUS V$ARCHIVE_DEST_STATUS
DV_ARCHIVE_GAPS V$ARCHIVE_GAP
DV_ARCHIVE_THREADS V$ARCHIVE_PROCESSES
DV_BACKUP_PROCESSES V$BACKUP_PROCESS
DV_BUFFER_POOLS V$BUFFER_POOL
DV_BUFFER_POOL_STATS V$BUFFER_POOL_STATISTICS
DV_CONTROL_FILES V$CONTROLFILE
DV_DATABASE V$DATABASE
DV_DATA_FILES V$DATAFILE
DV_OBJECT_CACHE V$DB_OBJECT_CACHE
DV_DC_POOLS V$DC_POOL
DV_DYNAMIC_VIEWS V$DYNAMIC_VIEW
DV_DYNAMIC_VIEW_COLS V$DYNAMIC_VIEW_COLUMN
DV_FREE_SPACE V$FREE_SPACE
DV_HA_SYNC_INFO V$HA_SYNC_INFO
DV_HBA V$HBA
DV_INSTANCE V$INSTANCE
DV_RUNNING_JOBS V$JOBS_RUNNING
DV_LATCHS V$LATCH
DV_LIBRARY_CACHE V$LIBRARYCACHE
DV_LOCKS V$LOCK
DV_LOCKED_OBJECTS V$LOCKED_OBJECT
DV_LOG_FILES V$LOGFILE
DV_LONG_SQL V$LONGSQL
DV_STANDBYS V$MANAGED_STANDBY
DV_ME V$ME
DV_OPEN_CURSORS V$OPEN_CURSOR
DV_PARAMETERS V$PARAMETER
DV_PL_MANAGER V$PL_MANAGER
DV_PL_REFSQLS V$PL_REFSQLS
DV_REACTOR_POOLS V$REACTOR_POOL
DV_REPL_STATUS V$REPL_STATUS
DV_RESOURCE_MAP V$RESOURCE_MAP
DV_SEGMENT_STATS V$SEGMENT_STATISTICS
DV_SESSIONS V$SESSION
DV_SESSION_EVENTS V$SESSION_EVENT
DV_SESSION_WAITS V$SESSION_WAIT
DV_GMA V$SGA
DV_GMA_STATS V$SGASTAT
DV_SPINLOCKS V$SPINLOCK
DV_SQLS V$SQLAREA
DV_SQL_POOL V$SQLPOOL
DV_SYS_STATS V$SYSSTAT
DV_SYSTEM V$SYSTEM
DV_SYS_EVENTS V$SYSTEM_EVENT
DV_TABLESPACES V$TABLESPACE
DV_TEMP_POOLS V$TEMP_POOL
DV_TEMP_UNDO_SEGMENT V$TEMP_UNDO_SEGMENT
DV_TRANSACTIONS V$TRANSACTION
DV_UNDO_SEGMENTS V$UNDO_SEGMENT
DV_USER_ADVISORY_LOCKS V$USER_ADVISORY_LOCKS
DV_USER_ASTATUS_MAP V$USER_ASTATUS_MAP
DV_USER_PARAMETERS V$USER_PARAMETER
DV_VERSION V$VERSION
DV_VM_FUNC_STACK V$VM_FUNC_STACK
DV_WAIT_STATS V$WAITSTAT
DV_XACT_LOCKS V$XACT_LOCK
6 Glossary
Overview
GaussDB 100 is a high-performance and high-reliability relational disk database developed by
Huawei Technologies Co., Ltd.
This document describes the positioning, characteristics, system architecture, basic features
and enterprise-level enhanced features, application scenarios, operating environment, and
technical specifications of the standalone GaussDB 100.
Intended Audience
This document is intended for the personnel who need to have an overall knowledge of
GaussDB 100, such as application product planning personnel and system architects.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Change History
Version Change Date
Description
Contents
7 Technical Specifications
8 Running Environment
9 Standards Compliance
10 Glossary
1 Product Positioning
2 Product Features
High performance
l Concurrency control
In GaussDB 100, a series of concurrency control mechanisms, such as multi-level read/
write locks, multi-version concurrency control (MVCC), and transaction isolation levels,
are used for high concurrent access on the premise of data consistency.
Row-level MVCC based on timestamps and rollback segments supports data query and
modification without blocking each other, greatly improving the performance of
concurrent query and modification.
l Query optimization
In GaussDB 100, the built-in rule-based optimizer (RBO) and cost-based optimizer
(CBO) provide hints to generate optimal execution plans.
High reliability
l Primary/standby replication and switchover
GaussDB 100 generates redo logs on the primary server and replays the logs on the
standby servers to ensure data consistency between the primary and standby servers. In
addition, primary/standby switchover is used for high availability (HA).
l Logical replication
GaussDB 100 provides logical replication independent from physical logs to synchronize
data between GaussDB 100 in different versions and between GaussDB 100 and other
heterogeneous databases. Logical replication can be used for incremental data backup
from the primary to standby databases, data synchronization between different service
systems, online data migration during the upgrade without service interruption, and other
operations.
l Flashback and recycle bin
GaussDB 100 provides flashback and a recycle bin. You can specify a timestamp to
perform flashback query, or to flash back table data that is incorrectly deleted or updated.
After being flashed back from the recycle bin, table data can be quickly restored to the
state before an error event. This greatly improves user data reliability and prevents
service interruption caused by point-in-time recovery (PITR).
Large capacity
In GaussDB 100, a single node supports a maximum of 8 PB storage capacity and a single
table supports a maximum of 7.8 TB. In addition, you can create bigfile tablespaces to
simplify storage O&M. A bigfile tablespace can be created to store files with large capacity,
which significantly reduces the number of data files in the database.
Proper compatibility with SQL
GaussDB 100 supports the SQL:2003 standard and partially supports SQL:2006, SQL:2008,
SQL:2011, and SQL:2016. In addition, it supports the syntax that is widely used in
mainstream commercial databases to greatly reduce the cost of migrating data from other
commercial databases to GaussDB 100.
High maintainability
The kernel status of GaussDB 100 is transparent, O&M methods are diverse, and enterprise-
level O&M capabilities are provided. The built-in performance views cover various key
performance indicators, such as waiting events, transaction status, session status, slow SQL
statements, space statistics, and memory statistics. Comprehensive load statistics reports are
provided, covering instance load overview, instance efficiency percentage, and top 10
background events.
High security
For data security purposes, GaussDB 100 provides access control, password protection,
permission management, data encryption, sensitive data masking, connection encryption, and
operation auditing.
3 System Architecture
The standalone GaussDB 100 adopts a layered architecture. Applications access data through
standard database interfaces, such as JDBC and ODBC. Dual-host hot backup is supported.
Standby servers synchronize with the primary server by replaying redo logs. Data on the
standby servers is read-only. The following figure shows the system architecture.
SQL engine: Parses SQL statements, optimizes SQL execution plans, invokes the storage layer, and returns results.
HA module: Backs up data on the primary server in quasi-real-time and replays the data on standby servers.
CLI management and maintenance tool set: Command-line interface (CLI) tools used for database management, including tools for system start/stop, database initialization, SQL execution, parameter configuration, and backup and restoration.
Monitoring and management tools: WebUI-based tools used for monitoring and managing database running status.
SQL performance diagnosis tool set: SQL performance audit tool, SQL execution workload statistics reports (WSRs), and views for SQL execution plan analysis and dynamic performance statistics collection.
Thread
  Listening thread: Processes client requests, supporting the TCP and IPC protocols.
File type
  Data files (including temporary files): Store data such as tables, indexes, and rollback data of the database.
  Redo log files: Online redo log files store write-ahead logs (WAL) generated by the database. At least three redo log files are used cyclically. If archiving is enabled and an online redo log is switched, the log file is archived to a specified directory for database restoration.
  Alarm log files: Store the alarms generated during database running.
  Audit log files: Store records of historical operations in the database. The records can be traced, and audit logging can be disabled.
GaussDB 100, a common relational database, can be used for online transaction processing in
various service scenarios, especially in scenarios where the requirements of data storage
access performance and reliability are high, such as telecom and finance services.
GaussDB 100 provides multiple networking modes to meet different requirements of data
capacity, processing performance, and reliability. The network modes include but are not
limited to standalone deployment, primary/standby deployment, and primary/standby HA
deployment based on RDMA hardware.
The following figure shows the use of GaussDB 100 on the OSS platform. The database is
deployed in primary/hot standby mode based on service requirements.
GaussDB 100 synchronously or asynchronously replicates redo logs from the primary server
to standby servers and replays the logs on the standby servers to ensure data consistency
between the primary and standby servers. When a fault occurs on the primary server,
GaussDB 100 quickly switches services to a standby server. OSS software accesses the
database through a standard interface and a floating IP address. In this way, GaussDB 100 is
unaware of the primary/standby switchover.
GaussDB 100, a common relational database, can provide the basic functions and features of
a standard relational database.
l Standard SQL
Supports the SQL:2003 standard and partially supports SQL:2006, SQL:2008,
SQL:2011, and SQL:2016.
l Character set
Supports the UTF-8 character set.
l Database storage management
Supports tablespaces.
l Transaction
Supports atomicity, consistency, isolation, and durability (ACID) and three transaction
isolation levels: Read Committed, Read Current Committed, and Serializable.
l Data node HA
Supports primary/standby replication and failover.
l Standard application access interface
Supports ODBC 2.0 and JDBC 4.0.
l Multiple programming languages
Supports C, Java, and Python.
l SQL optimization
Supports RBO, CBO, and hints.
l Data export and import
Provides tools for quick, parallel data export and import.
l Management tool
Provides installation and deployment tools, client tools, status monitoring tools, backup
and restoration tools, and upgrade tools.
l Security management
Supports SSL Internet connections, user permission management, password
management, and security audit, to ensure data security at the management, application,
system, and Internet layers.
Unlike physical replication that strongly depends on physical formats of logs, logical
replication depends on only logical changes of data and is more flexible. Replication between
GaussDB 100 of different versions, replication from GaussDB 100 to other heterogeneous
databases (such as Oracle and MySQL), and replication to a target database whose table
structure is different from the source database are supported.
Logical replication can be used for incremental data backup between primary and standby
databases, data synchronization between different service systems, and online data migration
during system upgrade without service interruption.
l Unique row storage is used to support online table structure changes without delay.
l The B-tree index data structure adopts lock-free node mirroring, greatly improving the
performance of concurrency processing.
l Catalog optimistic locking ensures that DDL changes do not affect data query.
l Multi-Version Concurrency Control (MVCC) based on rollback segments is used.
l Checkpoint page mirroring is used to ensure that checkpoint processes do not affect data
access.
l SQL Cache multi-version lifecycle management ensures that online DDL does not need
to wait for DML.
l A dynamic thread pool is used to automatically adjust thread resources without the
intervention of database administrators.
l Redo logs are stored in multiple partitions without locks to ensure high performance in
high concurrency scenarios.
7 Technical Specifications
Item                                        Recommended                       Maximum
Single-node capacity                        ≤ 5 TB                            8 PB
Size of data in each row                    8000 (excluding CLOB/BLOB)        8000 (row chaining: 64 KB)
LOB size                                    -                                 4 GB
Number of indexes in each table             ≤ 8                               32
Number of columns contained in each table   ≤ 4                               16
Number of standby servers                   ≤ 3                               9
8 Running Environment
GaussDB 100 supports multiple software and hardware environments. Enterprises can select
software and hardware as needed.
Hardware
l Universal PC server, x86_64
l Universal PC server, ARM_64
l ARM-based servers
l Local storage (SATA, SAS, and SSD)
l Gigabit Ethernet and faster
OSs
The x86 architecture supports the following operating systems:
l Red Hat Enterprise Linux Server release 7.4 x86_64
l SUSE Linux Enterprise Server 11.3 (SUSE 11 for short), x86_64
l SUSE Linux Enterprise Server 12.4 (SUSE 12 for short), x86_64
l EulerOS Server V2.0SP3 x86_64
l EulerOS Server V2.0SP5 x86_64
The ARM architecture supports the following operating systems:
EulerOS Server V2.0SP8 ARM_64
9 Standards Compliance
For details about the standards complied with by GaussDB 100, see Table 9-1.
SQL: SQL:2003
10 Glossary
Term Description
A–E
ACID Atomicity, Consistency, Isolation, and Durability (ACID). These are a set of
features of database transactions in a DBMS.
archive A thread started when the archive function is enabled on a database. The
thread thread is used to archive database logs to a specified path.
atomicity One of the ACID features of database transactions. Atomicity means that a
transaction is an indivisible unit of work: all operations performed in a
transaction must either all be committed or all be rolled back. If an
error occurs during transaction execution, the transaction is rolled
back to its state before execution.
backup A backup, or the process of backing up, refers to the copying and archiving
of computer data. Backup data can be used for restoration in case of data
loss.
checkpoint A mechanism that writes data from the database memory to disks at certain
points in time. GaussDB 100 periodically stores the data of committed transactions
and data of uncommitted transactions to disks. The data and redo logs can
be used for database restoration if the database restarts or breaks down.
CLI Command-line interface (CLI). Users use the CLI to interact with
applications. Its input and output are based on texts. Commands are entered
through keyboards or similar devices and are compiled and executed by
applications. The results are displayed in text or graphic forms on the
terminal interface.
Term Description
coding Coding is representing data and information using code so that it can be
processed and analyzed by a computer. Characters, digits, and other objects
can be converted into digital code, or information and data can be converted
into the required electrical pulse signals based on predefined rules.
concurrency A DBMS service that ensures data integrity when multiple transactions are
control concurrently executed in a multi-user environment. In a multi-threaded
GaussDB 100 environment, concurrency control ensures that database
operations are safe and all database transactions remain consistent at any
given time.
core dump When a program stops abnormally, core dump, memory dump, or system
dump records the state of working memory of the program at that point in
time. The states of key programs are often dumped at the same time. For
example, information about processor registers, including program metrics,
stack pointers, memory management, other processors, and OS flags are
often dumped at the same time. A core dump is often used to assist
diagnosis and computer program debugging.
core file A file that is created when memory overwriting, assertion failures, or access
to invalid memory occurs in a process, causing it to fail. This file is then
used for further analysis.
A core file stores memory dump data, and supports binary mode and
specified ports. The name of a core file consists of the word "core" and the
OS process ID.
The core file is available regardless of the type of platform.
Term Description
data flow An operator that exchanges data among query fragments. By their input/
operator output relationships, data flows can be categorized into Gather flows,
Broadcast flows, and Redistribution flows. Gather combines multiple query
fragments of data into one. Broadcast forwards the data of one query
fragment to multiple query fragments. Redistribution reorganizes the data
of multiple query fragments and then redistributes the reorganized data to
multiple query fragments.
database A collection of data that is stored together and can be accessed, managed,
and updated. Data in a view in a database can be classified into the
following types: numeral, full text, digit, and image.
database file A binary file that stores user data and the internal data of a database system.
database HA GaussDB 100 provides a highly reliable HA solution. Every logical node in
GaussDB 100 is identified as a primary or standby node. At the same time,
only one GaussDB 100 node is identified as the primary server. In
GaussDB 100, standby nodes first perform full synchronization from the
primary node and later incremental synchronization. When the HA system
is running, the primary node can receive data read and write requests in
GaussDB 100.
DBLINK An object of the path from one database to another. A remote database
object can be queried with DBLINK.
Term Description
dirty page A page that has been modified and is not written to a permanent device.
dump file A specific type of trace file. A dump file contains diagnostic data during an
event response, whereas a trace file contains continuously generated
diagnostic data.
durability One of the ACID features of database transactions. Transactions that have
been committed will permanently survive and not be rolled back.
error A technique that automatically detects and corrects errors in software and
correction data streams to improve system stability and reliability.
F–J
failover Automatic switchover from a faulty node to its standby node. Reversely,
automatic switchback from the standby node to the primary node is called
failback.
free space A mechanism for managing free space in a table. This mechanism enables a
management database system to record free space in each table and establish an easy-to-
find data structure, accelerating operations (such as INSERT) performed on
the free space.
Term Description
GNU The GNU Project was publicly announced on September 27, 1983 by
Richard Stallman, aiming at building an OS composed wholly of free
software. GNU is a recursive acronym for "GNU's Not Unix!". Stallman
announced that GNU should be pronounced as Guh-NOO. Technically,
GNU is similar to Unix in design, a widely used commercial OS. However,
GNU is free software and contains no Unix code.
GTS Global Time Server (GTS). It is used to provide a logical clock for each
node in the case of strong consistency.
incremental Incremental backup stores all file changes since the last valid backup.
backup
index An ordered data structure in a DBMS. An index accelerates data query and
update in database tables.
isolation One of the ACID features of database transactions. Isolation means that the
operations inside a transaction and data used are isolated from other
concurrent transactions. Concurrent transactions do not disturb each other.
JDBC Java database connectivity (JDBC) is used to implement the Java APIs of
SQL statements. It provides unified access to multiple relational databases,
consisting of a set of classes and interfaces written in Java language.
junk tuple A tuple that is deleted using the DELETE and UPDATE statements. When
deleting a tuple, GaussDB 100 only marks the tuples that are to be cleared.
The VACUUM thread will then periodically clear these junk tuples.
K–O
log file A file to which a computer system writes a record of its activities.
Term Description
metadata Data that provides information about other data. Metadata describes the
source, size, format, or other characteristics of data. In database columns,
metadata explains the content of a data warehouse.
P–T
page Smallest memory unit for row storage in the relational object structure in
GaussDB 100. The default size of a page is 8 KB.
primary A node that receives data read and write requests in the GaussDB 100 HA
server system and works with all standby servers. At any time, only one node in
the HA system is identified as the primary server.
QPS Query Per Second (QPS) means the number of queries that a server can
respond to per second.
query Each query job can be split into one or more query fragments. Each query
fragment fragment consists of one or more query operators and can independently
run on a node. Query fragments exchange data through data flow operators.
query An iterator or a query tree node, which is a basic unit for the execution of a
operator query. Execution of a query can be split into one or more query operators.
Common query operators include scan, join, and aggregation.
redo log A log that contains information required for performing an operation again
in a database. If a database is faulty, redo logs can be used to restore the
database to its original state.
Term Description
relational A database created using the relational model. It processes data using
database methods of set algebra.
RPO Recovery point objective (RPO) refers to the latest status that a database
system and the data can be restored to after a disaster, and it is usually
represented by time.
RTO Recovery time objective (RTO) refers to the duration between the database
system failure caused by a disaster and its restoration to proper running.
schema A database object set that includes the logical structure, such as tables,
views, sequences, stored procedures, synonyms, clusters, and database
links.
shared pool A shared pool is created for repeatedly executed SQL statements to save
memory. It contains the explain trees and execution plans of given SQL
statements.
SSL Secure Sockets Layer (SSL) is a network security protocol first used by
Netscape. It is based on the TCP/IP protocol and uses public key
technology. SSL supports a wide range of networks and provides three
basic security services, all of which use the public key technology. SSL
ensures the security of service communication through a network by
establishing a secure connection between a client and a server and then
sending data through this connection.
Term Description
stop word In computing, stop words are words which are filtered out before or after
processing of natural language data (text), saving storage space and
improving search efficiency.
stored A group of SQL statements compiled into a single execution plan and
procedure stored in a large database system. Users can specify a name and parameters
(if any) for a stored procedure to execute the procedure.
system A table storing meta information about a database. The meta information
catalog includes user tables, indexes, columns, functions, and data types in a
database.
table A set of columns and rows. Each column is referred to as a field. Values in
each field represent a data type. For example, if a table contains three fields
of person names, cities, and states, it has three columns: Name, City, and
State. In every row in the table, the Name column contains a name, the City
column contains a city, and the State column contains a state.
tablespace A tablespace is a logical storage structure that contains tables, indexes, and
objects. A tablespace provides an abstract layer between physical data and
logical data, and provides storage space for all database objects. When you
create an object, you can specify which tablespace it belongs to.
thesaurus Standardized words or phrases that express document themes and are used
for indexing and retrieval.
U–Z
Xlog A transaction log. A logical node can have only one Xlog file.
zsql GaussDB 100 interactive terminal. zsql enables you to interactively enter
queries, issue them to GaussDB 100, and view the query results. Queries
can also be entered from files. zsql supports many meta commands and
shell-like commands, allowing you to conveniently compile scripts and
automate jobs.
Issue 04
Date 2019-12-28
Contents
5 Glossary
GaussDB 100 is compatible with the user habits of mainstream databases. You can
use native GaussDB 100 interface names or their corresponding names in the
mainstream databases. For details, see Interface Mapping (GaussDB 100 Native
Interface Names vs. Mainstream Database Interface Names). The interfaces
mentioned in this document use their native GaussDB 100 names.
Intended Audience
This document is intended for developers of C/Java applications based on GaussDB
100, providing necessary references.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Example Conventions
The following table describes some example information in this document. You
can replace the example information as needed.
Information Description
Change History
Version 03 (2019-06-26)
Added:
● Load Balancing
● MEDIAN function
● BUILD DATABASE syntax
● RESTORE BLOCKRECOVER syntax
● VALIDATE syntax
● UUID function
● MD5 function
● 32-Bit Unsigned Integer
● RELEASE SAVEPOINT syntax
Modified:
● Parameters added to BACKUP
● CANCEL parameter added to
RECOVER DATABASE
● clear parameter added to ALTER
DATABASE
● System role STATISTICS added to
CREATE ROLE
● Function SQLNumResultCols added to
ODBC Interfaces
● Syntax ORDER SIBLINGS BY added to
SELECT
● AS OF SCN or TIMESTAMP
parameter added in SELECT
● The parameter of forcibly submitting
residual transactions added in
COMMIT
● Method of the Cursor class added in
Python Interface Reference
● Description of overwriting the file
with the same name added in EXP
● IMP supports importing an exported
file of an earlier version to the current
version.
● ALTER TABLESPACE supports
shrinking undo tablespaces.
● ALTER DATABASE supports data file
management.
● RESTORE DATABASE supports
specifying a tablespace whose data is
to be restored based on full backup.
● GROUP BY supports bracket
expressions after GROUP BY.
Version 02 (2019-04-05)
Added:
● Added the section Type Mapping.
● Added the section Development
based on ODBC.
● Added section Table Functions.
● Added section Data Query.
● Added the gsc_describle and
gsc_get_batch_error interfaces.
● Added the EXP and LOG functions in
Numeric Functions.
● Added the SPACE, TO_NCHAR, and
SUBSTRING_INDEX functions in
Character Processing Functions.
● Added the built-in advanced package
DBMS_JOB.
● Added the interface
java.sql.CallableStatement.
● Added description about creating SQL
mapping in ALTER SQL_MAP.
● DROP SQL_MAP
● Add the DECODE_NAME function in
Other Functions.
● Added the interface java.sql.Blob.
● Added the interface java.sql.Clob.
● Added the section Modifying a
Trigger.
● Added the section DBMS_LOB.
Modified:
● In GRANT, added the Whether the
Role Has This Permission and
Whether the User Has This
Permission columns to Table 3-53.
● In Creating a Stored Procedure,
added the description of the
sequences OBJECT_ID$ and
SEQ_PROC_001.
● In DBMS_JOB, added description of
the sequence JOBSEQ.
● Optimized the "Data Types" section.
Provided all data types and optimized
the document structure.
● In Time/Date Functions, added the
SLEEP and SYSTIMESTAMP functions.
Example Database
A human resource (HR) database is provided for users to learn and verify GaussDB
100. For details about how to install the database, see the Installation Guide.
The database contains eight tables: staffs, sections, places, states, areas,
employments, employment_history, and college. For the relationship between
these tables, see Figure 1-1.
Table 1-1 hr
Object Name Object Description
Type
2.1 Overview
GaussDB 100 supports application development based on C, Java, Python, and GO.
Understanding its system structure and related concepts can facilitate
development.
● The instances can start service listening; allocate memory, such as the system
global area (SGA); initialize the thread pool and buffers; and open control
files and data files.
● The SQL engine parses SQL statements, generates query plans, and calculates
expressions.
● The storage engine manages physical and logical storage; implements data
storage, fetch, and persistence; ensures transaction Atomicity, Consistency,
Isolation, Durability (ACID); and controls concurrency.
For details about error information during SQL execution, see GaussDB 100
V300R001C00 Error Code Reference.
Instances
In GaussDB 100, instances are a group of database processes running in the
memory. An instance can manage one or more databases that form a cluster. A
cluster is an area in the storage disk. This area is initialized during installation and
contains a directory, which is called data directory and stores all data.
Theoretically, one server can start multiple instances on different ports, but
GaussDB 100 manages only one instance at a time. The start and stop of an
instance rely on the specific data directory.
Databases
Databases manage various data objects and are isolated from each other. While
creating a database, you can specify its tablespace. If you do not specify it, the
object will be saved to your default tablespace. Objects managed by a database
can be distributed to multiple tablespaces.
Tablespaces
In GaussDB 100, a tablespace is a directory storing physical files of databases.
Multiple tablespaces can coexist. Files are physically isolated using tablespaces
and managed by a file system.
----End
Parameters
Parameter Description
Examples
// Create a connection object and connect to the database by specifying the URL, username, and password.
int test_conn_db(char *url, char *user, char *password)
{
    gsc_conn_t conn;
    if (gsc_alloc_conn(&conn) != GSC_SUCCESS)
    {
        return GSC_ERROR;
    }
    if (gsc_connect(conn, url, user, password) != GSC_SUCCESS)
    {
        gsc_free_conn(conn);  /* release the connection object on failure to avoid a leak */
        return GSC_ERROR;
    }
    /* Use the connection to perform other operations. */
    gsc_disconnect(conn);     /* disconnect before releasing the connection object */
    gsc_free_conn(conn);
    conn = NULL;              /* to avoid using a wild pointer, set conn to NULL after free */
    return GSC_SUCCESS;
}
gsc_free_conn
Description: Releases a connection object. This API is invoked after the
gsc_disconnect (database disconnection) operation is performed.
Precautions:
To use the conn variable again after gsc_free_conn is used, set conn to NULL.
API:
void gsc_free_conn(gsc_conn_t conn);
Thread-safe: no
gsc_connect
Description: Connects to the database. The URL format is ip:port and only
supports TCP connections.
API:
int gsc_connect(gsc_conn_t conn, const char * url, const char * user, const char * password);
Parameter:
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_set_conn_attr
Description: Sets connection attributes.
GSC_ATTR_SSL_CA: Path of the SSL client root certificate file used for verifying the device certificate on the server. Value: full path of the CA certificate file. If this parameter is not set, the server certificate is not checked. Default value: N/A.
GSC_ATTR_SSL_KEY: Private key of the SSL client certificate used for decryption and digital signatures. Value: full path of the private key file. Default value: N/A.
GSC_ATTR_SSL_CRL: Path of the SSL client certificate revocation list (CRL) file. Value: full path of the CRL file. Default value: N/A.
GSC_ATTR_CONNECT_TIMEOUT: Timeout interval for connection, in seconds. The default value is 10. The value -1 indicates no timeout.
API:
int gsc_set_conn_attr(gsc_conn_t conn, int attr, const void * data, unsigned int len);
Parameter:
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_conn_attr
Description: Obtains the attributes of a connection. The following table lists the
supported connection attributes.
GSC_ATTR_SSL_KEYPWD: Encryption password for the private key file of the SSL client.
API:
int gsc_get_conn_attr(gsc_conn_t conn, int attr, void * data, unsigned int len, unsigned int * attr_len);
Parameter:
● conn: object to be connected
● attr: connection attribute
● data: address to be written to attributes
● len: length of the address to be written to attributes
● attr_len: attribute value length. This parameter is valid only if the attribute
value is a string.
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_error
Description: Obtains the error codes and error information on the connection.
This interface can be invoked if an interface fails to be executed.
API:
void gsc_get_error(gsc_conn_t conn, int * code, const char ** message);
Parameter:
● conn: object to be connected
● code: obtained error code
● message: obtained error message
Return value: empty
Thread-safe: no
gsc_get_error_position
Description: Obtains the error location of an SQL statement that failed to be
executed on the connection, to help analyze the cause of the error.
API:
void gsc_get_error_position(gsc_conn_t conn, unsigned short * line, unsigned short * column)
Parameter:
● conn: object to be connected
● line: error row
● column: error column
Return value: empty
Thread-safe: no
gsc_get_message
Description: Obtains the error information about a connection. This interface can
be invoked when an interface fails to be executed.
API:
char* gsc_get_message(gsc_conn_t conn);
gsc_disconnect
Description: Disconnects from the database.
API:
void gsc_disconnect(gsc_conn_t conn);
gsc_get_sid
Description: Obtains the connection ID.
API:
unsigned int gsc_get_sid(gsc_conn_t conn);
gsc_cancel
Description: Cancels the ongoing operation on the connection, usually when the
operation times out.
API:
int gsc_cancel(gsc_conn_t conn, int sid);
Parameter:
● conn: object to be connected
● sid: connection ID
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_alloc_stmt
Description: Creates a handle object and uses it to prepare and execute SQL
statements.
API:
int gsc_alloc_stmt(gsc_conn_t conn, gsc_stmt_t * stmt);
Parameter:
● conn: object to be connected
● stmt: handle to be created
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_free_stmt
Description: Releases a handle.
Precautions:
To use the stmt variable again after gsc_free_stmt is used, set stmt to NULL.
API:
void gsc_free_stmt(gsc_stmt_t stmt);
gsc_set_stmt_attr
Description: Sets handle attributes.
The following table lists the supported handle attributes.
API:
int gsc_set_stmt_attr(gsc_stmt_t stmt, int attr, const void * data, unsigned int len);
Parameter:
● stmt: handle to be set
● attr: handle attribute to be set
● data: handle attribute value to be set
● len: attribute value length. This parameter is valid only if the attribute value is
a string.
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_stmt_attr
Description: Obtains the attributes of a handle.
API:
int gsc_get_stmt_attr(gsc_stmt_t pstmt, int attr, const void * data, unsigned int buf_len, unsigned int * len);
Parameter:
● pstmt: handle
● attr: handle attribute to be obtained
● data: address to which the attribute value is written
● buf_len: length of the buffer that receives the attribute value
● len: attribute value length. This parameter is valid only if the attribute value is a string.
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_prepare
Description: Preprocesses SQL statements. To run SQL statements, you need to
invoke gsc_prepare and gsc_execute.
API:
int gsc_prepare(gsc_stmt_t stmt, const char * sql);
Parameter:
● stmt: handle
● sql: SQL statement
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_bind_by_pos
Description: Binds a parameter by position.
API:
int gsc_bind_by_pos(gsc_stmt_t stmt, unsigned int pos, int type, const void * data, int size, unsigned short *
ind);
int gsc_bind_by_pos2(gsc_stmt_t stmt, unsigned int pos, int type, const void * data, int size, unsigned short *
ind, int direction);
Parameter:
● stmt: handle
● pos: binding position
● type: type of the data to be bound
● data: value to be bound
● size: size of the data to be bound
● ind: size of the value to be bound
● direction: parameter binding direction
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_bind_by_name
Description: Binds a parameter by name.
API:
int gsc_bind_by_name(gsc_stmt_t stmt, const char * name, int type, const void * data, int size, unsigned
short * ind);
int gsc_bind_by_name2(gsc_stmt_t stmt, const char * name, int type, const void * data, int size, unsigned
short * ind, int direction);
Parameter:
● stmt: handle
● name: parameter name
● type: type of the data to be bound
● data: value to be bound
● size: size of the data to be bound
● ind: size of the value to be bound
● direction: parameter binding direction
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_column_count
Description: Obtains the number of queried columns through the handle. It should
be invoked after gsc_prepare.
API:
int gsc_get_column_count(gsc_stmt_t stmt, unsigned int * column_count);
Parameter:
● stmt: handle
● column_count: number of queried columns
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_desc_column_by_id
Description: Obtains the column description by column subscript, which starts
from 0.
API:
int gsc_desc_column_by_id(gsc_stmt_t stmt, unsigned int id, gsc_column_desc_t * desc);
Parameter:
● stmt: handle
● id: column subscript
● desc: description of the column to be returned
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_desc_column_by_name
Description: Obtains the description of a column based on the column name. For
details about the column description, see Table 2-7.
API:
int gsc_desc_column_by_name(gsc_stmt_t pstmt, const char* col_name, gsc_column_desc_t * desc);
Parameter:
● pstmt: handle
● col_name: column name
● desc: description of the column to be returned
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_column_by_id
Description: Obtains query column data by column subscript. The returned
column data is encoded and can be converted.
API:
int gsc_get_column_by_id(gsc_stmt_t stmt, unsigned int id, void ** data, unsigned int * size, unsigned int *
is_null);
Parameter:
● stmt: handle
● id: column subscript
● data: column data to be returned
● size: size of the column to be returned
● is_null: whether the column to be returned is empty
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_column_by_name
Description: Obtains query column data by column name. The returned column
data is encoded and can be converted.
API:
int gsc_get_column_by_name(gsc_stmt_t stmt, const char* name, void ** data, unsigned int * size, unsigned
int * is_null);
Parameter:
● stmt: handle
● name: column name
● data: column data to be returned
● size: size of the column to be returned
● is_null: whether the column to be returned is empty
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_affected_rows
Description: Obtains the number of records in the current response packet for
SELECT and EXPLAIN statements; or obtains the number of affected records for
INSERT, DELETE, UPDATE, and MERGE statements.
API:
unsigned int gsc_get_affected_rows(gsc_stmt_t pstmt);
Parameter:
● pstmt: handle
Return value: number of affected rows
Thread-safe: no
gsc_column_as_string
Description: Obtains column data by string.
Before retrieving column data, you need to allocate the required memory. The
memory size is the column description size returned by gsc_desc_column_by_id. If
the available memory is insufficient, strings will be truncated, and errors will be
reported for other data types.
API:
int gsc_column_as_string(gsc_stmt_t stmt, unsigned int id, char * str, unsigned int buf_size);
Parameter:
● stmt: handle
● id: column subscript
● str: memory address where the column data is to be written
● buf_size: size of the column data to be written
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_bind_column
Description: Binds memory to a queried column. Allocate the memory according
to the specified column description size.
API:
int gsc_bind_column(gsc_stmt_t stmt, unsigned int id, unsigned short bind_type, unsigned short bind_size,
void * bind_ptr, unsigned short * ind_ptr);
Parameter:
● stmt: handle
● id: column subscript
● bind_type: type of the data to be bound
● bind_size: size of memory to be bound
● bind_ptr: address of memory to be bound
● ind_ptr: size of the column to be written
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_execute
Description: Runs SQL statements. This interface should be invoked after
gsc_prepare.
API:
int gsc_execute(gsc_stmt_t stmt);
gsc_fetch
Description: Fetches query results and returns the number of rows obtained. The
interface is invoked after the gsc_execute or gsc_query interface. The returned
number of rows is 0 or the number of records found.
API:
int gsc_fetch(gsc_stmt_t stmt, unsigned int * rows);
Parameter:
● stmt: handle
● rows: number of records to be returned
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_commit
Description: Commits uncompleted transactions. If manual committing is enabled,
after an update operation is performed, use this interface to commit the
transaction.
API:
int gsc_commit(gsc_conn_t conn);
gsc_set_autocommit
Description: Its function is the same as that of setting the
GSC_ATTR_AUTO_COMMIT parameter (whether to automatically commit
transactions) for gsc_set_conn_attr.
API:
void gsc_set_autocommit(gsc_conn_t conn, unsigned int auto_commit);
Parameter:
● conn: object to be connected
● auto_commit: whether to automatically commit transactions. 1: automatic, 0:
manual
Return value: none
Thread-safe: no
gsc_set_paramset_size
Description: Its function is the same as that of gsc_set_stmt_attr when setting
the GSC_ATTR_PARAMSET_SIZE parameter.
API:
void gsc_set_paramset_size(gsc_stmt_t pstmt, unsigned int sz);
Parameter:
● pstmt: handle
● sz: number of batch bound records
Return value: none
Thread-safe: no
gsc_query
Description: Runs SQL statements in query mode.
This interface is equivalent to gsc_prepare plus gsc_execute and cannot bind
parameters.
API:
int gsc_query(gsc_conn_t conn, const char * sql);
Parameter:
● conn: object to be connected
● sql: SQL statement to be executed
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_query_stmt
Description: Obtains the handle of a query. The handle can be used to invoke an
interface related to the handle.
API:
gsc_stmt_t gsc_get_query_stmt(gsc_conn_t conn);
gsc_query_get_affected_rows
Description: Obtains the number of rows affected by SQL execution.
API:
unsigned int gsc_query_get_affected_rows(gsc_conn_t conn);
gsc_query_get_column_count
Description: Obtains the number of columns returned by the SQL query.
API:
unsigned int gsc_query_get_column_count(gsc_conn_t conn);
gsc_query_fetch
Description: Obtains the query result.
API:
int gsc_query_fetch(gsc_conn_t conn, unsigned int * rows);
Parameter:
● conn: object to be connected
● rows: number of obtained records
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_query_describe_column
Description: Obtains the description of a column.
API:
int gsc_query_describe_column(gsc_conn_t conn, unsigned int id, gsc_column_desc_t * desc);
Parameter:
● conn: object to be connected
● id: sequence number of a column (starting from 0)
● desc: description of the column to be returned
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_query_get_column
Description: Obtains column values.
API:
int gsc_query_get_column(gsc_conn_t conn, unsigned int id, void ** data, unsigned int * size, unsigned int *
is_null);
Parameter:
● conn: object to be connected
● id: sequence number of a column (starting from 0)
● data: column value to be returned
● size: length of the column value to be returned
● is_null: whether the column value to be returned is empty
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_write_blob
Description: Writes data to BLOB columns.
API:
int gsc_write_blob(gsc_stmt_t pstmt, unsigned int pos, const void * data, unsigned int size);
Parameter:
● pstmt: handle
● pos: sequence number of a column (starting from 0)
● data: data to be written
● size: length of the data to be written
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_write_clob
Description: Writes CLOB column data.
API:
int gsc_write_clob(gsc_stmt_t pstmt, unsigned int pos, const void * data, unsigned int size, unsigned int
*nchars);
Parameter:
● pstmt: handle
● pos: sequence number of a column (starting from 0)
● data: data to be written
● size: length of the data to be written
● nchars: number of characters to be written
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_write_batch_blob
Description: Writes data to BLOB columns in batches.
API:
int gsc_write_batch_blob(gsc_stmt_t pstmt, unsigned int id, unsigned int piece, const void * data, unsigned
int size);
Parameter:
● pstmt: handle
● id: sequence number of a column (starting from 0)
● piece: sequence number of a batch (starting from 0)
● data: data to be written
● size: length of the data to be written
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_write_batch_clob
Description: Writes data to CLOB columns in batches.
API:
int gsc_write_batch_clob(gsc_stmt_t stmt, unsigned int id, unsigned int piece, const void * data, unsigned int
size, unsigned int *nchars);
Parameter:
● stmt: handle
● id: sequence number of a column (starting from 0)
● piece: sequence number of a batch (starting from 0)
● data: data to be written
● size: length of the data to be written
● nchars: number of characters to be written
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_read_blob_by_id
Description: Reads data from BLOB columns.
API:
int gsc_read_blob_by_id(gsc_stmt_t pstmt, unsigned int id,
unsigned int byte_offset, void * buffer, unsigned int size, unsigned int * nbytes, unsigned int *eof);
Parameter:
● pstmt: handle
● id: sequence number of a column (starting from 0)
● byte_offset: start byte of read data
● buffer: read column data
● size: length of read column data
● nbytes: length of read column (unit: byte)
● eof: whether column data reading is complete
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_read_blob
Description: Reads data from BLOB columns.
API:
int gsc_read_blob(gsc_stmt_t pstmt, void * locator, unsigned int byte_offset, void * buffer, unsigned int size, unsigned int * nbytes, unsigned int *eof);
Parameter:
● pstmt: handle
● locator: column pointer
● byte_offset: start byte of read data
● buffer: read column data
● size: length of read column data
● nbytes: length of read column (unit: byte)
● eof: whether column data reading is complete
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_read_clob_by_id
Description: Reads data from CLOB columns.
API:
int gsc_read_clob_by_id(gsc_stmt_t pstmt, unsigned int id, unsigned int byte_offset,
void * buffer, unsigned int size, unsigned int *nchars, unsigned int * nbytes, unsigned int *eof);
Parameter:
● pstmt: handle
● id: sequence number of a column (starting from 0)
● byte_offset: start byte of read data
● buffer: read column data
● size: length of read column data
● nchars: length of read characters
● nbytes: length of read column (unit: byte)
● eof: whether column data reading is complete
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_read_clob
Description: Reads data from CLOB columns.
API:
int gsc_read_clob(gsc_stmt_t pstmt, void * locator, unsigned int byte_offset, void * buffer, unsigned int
size, unsigned int *nchars, unsigned int * nbytes, unsigned int *eof);
Parameter:
● pstmt: handle
● locator: column pointer
● byte_offset: start byte of read data
● buffer: read column data
● size: length of read column data
● nchars: length of read characters
● nbytes: length of read column (unit: byte)
● eof: whether column data reading is complete
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_fetch_serveroutput
Description: Obtains serveroutput information.
To use this interface, execute gsc_set_conn_attr to set GSC_ATTR_SERVEROUTPUT
to 1 to enable the serveroutput function.
API:
int gsc_fetch_serveroutput(gsc_stmt_t stmt, char ** data, unsigned int * len);
Parameter:
● stmt: handle
● data: serveroutput data
● len: size of the serveroutput data
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_implicit_resultset
Description: Obtains multiple result sets (dbms_sql.return_result).
If the returned value of resultset is NULL, there is no more result set. You can use
resultset to invoke other interfaces, such as gsc_fetch, to further obtain cursor
data.
API:
int gsc_get_implicit_resultset(gsc_stmt_t stmt, gsc_stmt_t * resultset);
Parameter:
● stmt: handle
● resultset: handle of the result set to be returned
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_fetch_outparam
Description: Obtains output parameters. Currently, it can only be used for stored
procedures.
API:
int gsc_fetch_outparam(gsc_stmt_t stmt, unsigned int * rows);
Parameter:
● stmt: handle
● rows: number of output parameter records to be returned
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_outparam_by_id
Description: Obtains the data of the output parameter column.
The output parameter column data is obtained based on the column subscript.
API:
int gsc_get_outparam_by_id(gsc_stmt_t stmt, unsigned int id, void ** data, unsigned int * size, unsigned int
* is_null)
Parameter:
● stmt: handle
● id: subscript of the output parameter
● data: output parameter column data to be returned
● size: size of the output parameter column
● is_null: whether the output parameter column is NULL
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_desc_outparam_by_id
Description: Obtains the description of output parameters based on their
subscripts.
API:
int gsc_desc_outparam_by_id(gsc_stmt_t stmt, unsigned int id, gsc_outparam_desc_t * desc);
Parameter:
● stmt: handle
● id: subscript of the output parameter
● desc: description of the output parameter column
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_desc_outparam_by_name
Description: Obtains the description of output parameters based on their names.
API:
int gsc_desc_outparam_by_name(gsc_stmt_t stmt, const char* name, gsc_outparam_desc_t * desc);
Parameter:
● stmt: handle
● name: output parameter name
● desc: description of the output parameter column
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_outparam_by_name
Description: Obtains the data of a specified output parameter column based on
the output parameter column name.
API:
int gsc_get_outparam_by_name(gsc_stmt_t stmt, const char* name, void ** data, unsigned int * size,
unsigned int * is_null);
Parameter:
● stmt: handle
● name: output parameter column name
● data: output parameter column data
● size: size of the output parameter column
● is_null: whether the output parameter column is NULL
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_outparam_as_string_by_id
Description: Obtains the information about output parameters based on their
subscripts.
API:
int gsc_outparam_as_string_by_id(gsc_stmt_t stmt, unsigned int id, char * str, unsigned int buf_size);
Parameter:
● stmt: handle
● id: subscript of the output parameter column
● str: start address that stores data of the output parameter column
● buf_size: data size of the output parameter column
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_outparam_as_string_by_name
Description: Obtains the information about output parameters based on their
names.
API:
int gsc_outparam_as_string_by_name(gsc_stmt_t stmt, const char* name, char * str, unsigned int buf_size);
Parameter:
● stmt: handle
● name: output parameter column name
● str: start address that stores data of the output parameter column
● buf_size: data size of the output parameter column
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_query_multiple
Description: Runs multiple SQL statements.
API:
int gsc_query_multiple(gsc_conn_t conn, const char * sql);
Parameter:
● conn: object to be connected
● sql: SQL statements to be executed
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_query_resultset
Description: Obtains the result set of running multiple SQL statements.
API:
int gsc_get_query_resultset(gsc_conn_t pconn, gsc_stmt_t * resultset);
Parameter:
● pconn: object to be connected
● resultset: handle of the result set to be returned
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_describle
Description: Obtains the description of a specified object. The column description
of the object is obtained by invoking the gsc_desc_column_by_id interface.
API:
int gsc_describle(gsc_stmt_t stmt, char * objptr, gsc_desc_type_t dtype);
Parameter:
● stmt: handle
● objptr: object whose description information is to be obtained
● dtype: object type. Currently, the following object types are supported:
– 0 or 1: table
– 2: view
– 3: synonym
– 4: query statement
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
gsc_get_batch_error
Description: Obtains the error information about batch execution. The number of
errors that can be obtained is set by invoking the gsc_set_stmt_attr interface to
set the GSC_ATTR_ALLOWED_BATCH_ERRS attribute of the handle.
API:
int gsc_get_batch_error(gsc_stmt_t stmt, unsigned int * line, char ** err_message, unsigned int * rows);
Parameter:
● stmt: handle
● line: batch execution location where errors occur
● err_message: error information about batch execution
● rows: number of error SQL statements
Return value:
● 0: successful
● !=0: failed
Thread-safe: no
2.2.5 Examples
The following example demonstrates how to develop an application based on the
C APIs provided by GaussDB 100, for example, by binding parameters to insert data.
// This example shows how to insert a row of data into the test_num table by binding parameters.
#include <stdio.h>
#include "gsc.h"
int test_num()
{
int f0 = 2147483647;
long long f1 = 9223372036854775807;
double f2 = 222.222;
unsigned int f3 = 4294967295;
char f4[50] = "1234567890";
char f5[50] = "333.333";
unsigned short len[6] = { 4, 8, 8, 4, 10, 7 };
// Create a connection. (The example is excerpted: the bind and insert steps are omitted.)
gsc_conn_t gsc_conn = NULL;
CM_RETURN_IF_FALSE(gsc_alloc_conn(&gsc_conn), GSC_SUCCESS);
return GSC_SUCCESS;
}
JDBC Package
JDBC package name: com.huawei.gauss.jdbc.ZenithDriver.jar
The JDBC package is in the GAUSSDB100-V300R001C00-CLIENT-JDBC folder in
the installation package directory.
Driver Class
Before establishing a database connection, load the
com.huawei.gauss.jdbc.ZenithDriver database driver class.
Function Prototype
To use a JDBC to create a database connection, use the following function:
DriverManager.getConnection(String url, String user, String password);
Parameters
● Currently, GaussDB 100 JDBC supports two connection modes: common TCP and SSL.
SSL mode is classified into unidirectional authentication and bidirectional
authentication.
● If the SSL function is enabled but no certificate information is configured on the JDBC,
the JDBC uses unidirectional authentication to connect to the database.
● If a certificate file is configured on the JDBC and the SSL switch is enabled, the JDBC
uses more secure bidirectional authentication to connect to the database.
● When the application JVM is started, the Java command line parameters
(recommended) are used, for example, -
Djavax.net.ssl.trustStore=path_to_truststore_file.
● Set system parameters in the code, for example,
System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file").
Table 2-13 describes the environment variables used for SSL bidirectional
authentication:
javax.net.ssl.keyStore: Location of the SSL keyStore file. The keystore file can
be regarded as a key library. It contains the public key, private key, and
digital signature.
try {
// Create a database connection.
//getConnection(String url, String user, String password)
conn = DriverManager.getConnection(sourceURL,username,passwd);
System.out.println("Connection succeed!");
} catch (Exception e) {
e.printStackTrace();
return null;
}
return conn;
};
3. Use the keytool to convert the file generated in the previous step to java
keystore.
shell> keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 -srcstorepass
mypassword -destkeystore keystore -deststoretype JKS -deststorepass mypassword
----End
Step 2 Execute the SQL statement by triggering the executeUpdate method in Statement.
int rc = stmt.executeUpdate("CREATE TABLE tab1(id INTEGER, name VARCHAR(32))");
----End
Step 3 Execute the precompiled SQL statement by triggering the executeUpdate method
in PreparedStatement.
int rowcount = pstmt.executeUpdate();
Step 4 Close the precompiled statement object by calling the close method in
PreparedStatement.
pstmt.close();
----End
Batch Processing
When a prepared statement batch-processes multiple pieces of similar data, the
database creates only one query plan, which improves compilation and
optimization efficiency. Perform the following procedure:
Step 2 Call setShort for each piece of data, and call addBatch to confirm
that the setting is complete.
pstmt.setShort(1, (short)2);
pstmt.addBatch();
Step 4 Close the precompiled statement object by calling the close method in
PreparedStatement.
pstmt.close();
Do not terminate a batch processing action while it is ongoing; otherwise, database
performance will deteriorate. Therefore, disable automatic commit during batch
processing, and manually commit every few lines. The statement for disabling
automatic commit is conn.setAutoCommit(false);.
----End
Step 3 Invoke the execute method of CallableStatement to run the precompiled stored
procedure.
callStmt.execute();
Step 5 Invoke the close method of CallableStatement to close the precompiled stored
procedure.
callStmt.close();
----End
Table 2-15 Common methods for obtaining data from a result set
Method Description
Method Description
void setServerName(String) No
void setDatabaseName(String) No
void setUser(String) No
void setPassword(String); No
void setPortNumber(int) No
void initializeFrom(BaseDataSource) No
com.huawei.gauss.datasource.GSSimpleDataSource
This is a simple DataSource that provides non-pooling connections. To use this
DataSource, you must set databaseName. Other parameters (serverName,
portNumber, user, and password) are optional. For details about these
parameters, see the description of the parent class. The following table describes
the exclusive or rewritten interfaces of this class.
T unWrap(Class<T>) Yes
com.huawei.gauss.datasource.GSConnectionPoolDataSource
This class is the DataSource that implements the
javax.sql.ConnectionPoolDataSource interface. When the DataSource needs to be
configured for the application or middleware, this class can be set. This class can
be used when you configure the connection pool. To use this DataSource, you
must set databaseName. Other parameters (serverName, portNumber, user,
and password) are optional. For details about these parameters, see the
description of the parent class. The following table describes the exclusive or
rewritten interfaces of this class.
void setDefaultAutoCommit(boolean) No
com.huawei.gauss.datasource.GSPoolingDataSource
This is a DataSource that provides pooling connections, which contain the
implementation of a connection pool. Do not use this class if the application or
middleware uses a connection pool. To use this DataSource, you must set
dataSourceName, databaseName, user, and password. Other parameters
(serverName, portNumber, initialConnections, and maxConnections) are
optional. For details about these parameters, see the description of the parent
class. The following table describes the exclusive or rewritten interfaces of this
class.
T unwrap(Class<T>) Yes
2.3.9 Examples
This example illustrates how to develop applications based on GaussDB 100 JDBC
interfaces.
Before performing this example, ensure that the user for connecting to the
database exists in the database. If the user does not exist, see CREATE USER and
GRANT to create the user and grant permissions to it.
If whitelist checking is enabled, you need to configure the IP address whitelist. For
details, see Database Usage > Configuring Client Access Authentication in
GaussDB 100 V300R001C00 User Guide (Standalone).
Examples
//DBtest.java
// This example illustrates the main processes of JDBC-based development, covering database connection
creation, table creation, and data insertion.
package com.huawei.demo;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
try {
// Create a database connection.
//getConnection(String url, String user, String password)
conn = DriverManager.getConnection(sourceURL,username,passwd);
System.out.println("Connection succeed!");
} catch (Exception e) {
e.printStackTrace();
return null;
}
return conn;
};
stmt.close();
} catch (SQLException e) {
if (stmt != null) {
try {
stmt.close();
} catch (SQLException e1) {
e1.printStackTrace();
}
}
e.printStackTrace();
}
}
try {
// Generate a prepared statement.
pst = conn.prepareStatement("INSERT INTO jdbc_test1 VALUES (?,?)");
for (int i = 0; i < 3; i++) {
// Add parameters.
pst.setInt(1, i);
pst.setString(2, "data " + i);
pst.addBatch();
}
// Run batch processing.
pst.executeBatch();
System.out.println("INSERT INTO succeed!");
pst.close();
} catch (SQLException e) {
if (pst != null) {
try {
pst.close();
} catch (SQLException e1) {
e1.printStackTrace();
}
}
e.printStackTrace();
}
}
System.out.println("UPDATE succeed!");
pstmt.close();
} catch (SQLException e) {
if (pstmt != null) {
try {
pstmt.close();
} catch (SQLException e1) {
e1.printStackTrace();
}
}
e.printStackTrace();
}
}
/**
* Main process. Call static methods one by one.
* @param args
*/
public static void main(String[] args) {
String userName = "gaussdba";
String password = "gaussdb_123";
// Create a database connection.
Connection conn = GetConnection(userName, password);
// Create a table.
CreateTable(conn);
ExecPreparedSQL(conn);
2.3.10.1 java.sql.Connection
This section describes the support for java.sql.Connection, the database connection
interface.
void close()
boolean isReadOnly()
boolean isClosed()
boolean isValid(int)
DatabaseMetaData getMetaData()
void clearWarnings()
void commit()
Blob createBlob()
Clob createClob()
boolean getAutoCommit()
String getCatalog()
Properties getClientInfo()
String getClientInfo(String)
int getTransactionIsolation()
Map getTypeMap()
CallableStatement prepareCall(String)
PreparedStatement prepareStatement(String)
void rollback()
void setAutoCommit(boolean)
void setCatalog(String)
void setClientInfo(Properties)
void setSchema(String)
Statement createStatement()
void setReadOnly(boolean)
int getHoldability()
void setHoldability(int)
void setTransactionIsolation(int)
NOTICE
● The interface uses auto-commit mode by default. If auto-commit is disabled via
setAutoCommit(false), all subsequently executed statements are wrapped in an
explicit transaction (one whose start is explicitly specified), and you must
manually call the commit() method to commit the transaction.
● Savepoints are not supported.
2.3.10.2 java.sql.DatabaseMetaData
This section describes the support for java.sql.DatabaseMetaData, a database
object definition interface.
None of the methods in this interface are thread-safe.
String getURL()
boolean isReadOnly()
String getUserName()
int getResultSetHoldability()
Connection getConnection()
boolean dataDefinitionCausesTransactionCommit()
boolean dataDefinitionIgnoredInTransactions()
boolean deletesAreDetected(int)
boolean doesMaxRowSizeIncludeBlobs()
boolean generatedKeyAlwaysReturned()
String getCatalogSeparator()
String getCatalogTerm()
ResultSet getCatalogs()
int getDatabaseMajorVersion()
int getDatabaseMinorVersion()
String getDatabaseProductName()
String getDatabaseProductVersion()
int getDefaultTransactionIsolation()
int getDriverMajorVersion()
int getDriverMinorVersion()
String getDriverName()
String getDriverVersion()
String getExtraNameCharacters()
String getIdentifierQuoteString()
int getJDBCMajorVersion()
int getJDBCMinorVersion()
int getMaxBinaryLiteralLength()
int getMaxCatalogNameLength()
int getMaxCharLiteralLength()
int getMaxColumnNameLength()
int getMaxColumnsInGroupBy()
int getMaxColumnsInIndex()
int getMaxColumnsInOrderBy()
int getMaxColumnsInSelect()
int getMaxColumnsInTable()
int getMaxConnections()
int getMaxCursorNameLength()
int getMaxIndexLength()
int getMaxProcedureNameLength()
int getMaxRowSize()
int getMaxSchemaNameLength()
int getMaxStatementLength()
int getMaxStatements()
int getMaxTableNameLength()
int getMaxTablesInSelect()
int getMaxUserNameLength()
String getNumericFunctions()
String getProcedureTerm()
RowIdLifetime getRowIdLifetime()
String getSQLKeywords()
int getSQLStateType()
String getSchemaTerm()
ResultSet getSchemas()
String getSearchStringEscape()
String getStringFunctions()
ResultSet getTableTypes()
ResultSet getTypeInfo()
boolean insertsAreDetected(int)
boolean isCatalogAtStart()
boolean locatorsUpdateCopy()
boolean nullPlusNonNullIsNull()
boolean nullsAreSortedAtEnd()
boolean nullsAreSortedAtStart()
boolean nullsAreSortedHigh()
boolean nullsAreSortedLow()
boolean othersDeletesAreVisible(int)
boolean othersInsertsAreVisible(int)
boolean othersUpdatesAreVisible(int)
boolean ownDeletesAreVisible(int)
boolean ownInsertsAreVisible(int)
boolean ownUpdatesAreVisible(int)
boolean storesLowerCaseIdentifiers()
boolean storesLowerCaseQuotedIdentifiers()
boolean storesMixedCaseIdentifiers()
boolean storesMixedCaseQuotedIdentifiers()
boolean storesUpperCaseIdentifiers()
boolean storesUpperCaseQuotedIdentifiers()
boolean supportsANSI92EntryLevelSQL()
boolean supportsANSI92FullSQL()
boolean supportsANSI92IntermediateSQL()
boolean supportsAlterTableWithAddColumn()
boolean supportsAlterTableWithDropColumn()
boolean supportsBatchUpdates()
boolean supportsCatalogsInDataManipulation()
boolean supportsCatalogsInIndexDefinitions()
boolean supportsCatalogsInPrivilegeDefinitions()
boolean supportsCatalogsInProcedureCalls()
boolean supportsCatalogsInTableDefinitions()
boolean supportsColumnAliasing()
boolean supportsConvert()
boolean supportsCoreSQLGrammar()
boolean supportsCorrelatedSubqueries()
boolean supportsDataDefinitionAndDataMani-
pulationTransactions()
boolean supportsDataManipulationTransactionsOnly()
boolean supportsDifferentTableCorrelationNames()
boolean supportsExpressionsInOrderBy()
boolean supportsExtendedSQLGrammar()
boolean supportsFullOuterJoins()
boolean supportsGetGeneratedKeys()
boolean supportsGroupBy()
boolean supportsGroupByUnrelated()
boolean supportsIntegrityEnhancementFacility()
boolean supportsLikeEscapeClause()
boolean supportsLimitedOuterJoins()
boolean supportsMinimumSQLGrammar()
boolean supportsMixedCaseIdentifiers()
boolean supportsMixedCaseQuotedIdentifiers()
boolean supportsMultipleOpenResults()
boolean supportsMultipleResultSets()
boolean supportsMultipleTransactions()
boolean supportsNamedParameters()
boolean supportsNonNullableColumns()
boolean supportsOpenCursorsAcrossCommit()
boolean supportsOpenCursorsAcrossRollback()
boolean supportsOpenStatementsAcrossCommit()
boolean supportsOpenStatementsAcrossRollback()
boolean supportsOrderByUnrelated()
boolean supportsOuterJoins()
boolean supportsPositionedDelete()
boolean supportsPositionedUpdate()
boolean supportsResultSetHoldability(int)
boolean supportsSavepoints()
boolean supportsSchemasInDataManipulation()
boolean supportsSchemasInIndexDefinitions()
boolean supportsSchemasInPrivilegeDefinitions()
boolean supportsSchemasInProcedureCalls()
boolean supportsSchemasInTableDefinitions()
boolean supportsSelectForUpdate()
boolean supportsStatementPooling()
boolean supportsStoredFunctionsUsingCallSyntax()
boolean supportsStoredProcedures()
boolean supportsSubqueriesInComparisons()
boolean supportsSubqueriesInExists()
boolean supportsSubqueriesInIns()
boolean supportsSubqueriesInQuantifieds()
boolean supportsTableCorrelationNames()
boolean supportsTransactions()
boolean supportsUnion()
boolean supportsUnionAll()
boolean updatesAreDetected(int)
boolean usesLocalFilePerTable()
boolean usesLocalFiles()
boolean supportsResultSetType(int)
boolean supportsTransactionIsolationLevel(int)
2.3.10.3 java.sql.Driver
This section describes the support for java.sql.Driver, the database driver interface.
boolean jdbcCompliant()
int getMajorVersion()
int getMinorVersion()
2.3.10.4 java.sql.PreparedStatement
This section describes the support for java.sql.PreparedStatement, the prepared
statement interface.
boolean execute()
void addBatch()
void clearParameters()
ResultSet executeQuery()
int executeUpdate()
ResultSetMetaData getMetaData()
boolean execute(String)
boolean isClosed()
ResultSet executeQuery(String)
int executeUpdate(String)
void cancel()
void clearBatch()
void clearWarnings()
int[] executeBatch()
int getFetchSize()
ResultSet getGeneratedKeys()
int getMaxRows()
boolean getMoreResults(int)
boolean getMoreResults()
int getQueryTimeout()
ResultSet getResultSet()
int getUpdateCount()
SQLWarning getWarnings()
void setFetchSize(int)
void setMaxRows(int)
void setQueryTimeout(int)
Connection getConnection()
int getFetchDirection()
int getResultSetConcurrency()
int getResultSetHoldability()
int getResultSetType()
void setFetchDirection(int)
2.3.10.5 java.sql.ResultSet
This section describes the support for java.sql.ResultSet, the execution result set
interface.
Object getObject(String)
Object getObject(int)
boolean getBoolean(int)
boolean getBoolean(String)
byte getByte(String)
byte getByte(int)
short getShort(int)
short getShort(String)
int getInt(int)
int getInt(String)
long getLong(String)
long getLong(int)
float getFloat(String)
float getFloat(int)
double getDouble(String)
double getDouble(int)
byte[] getBytes(int)
byte[] getBytes(String)
boolean next()
void close()
int getType()
Date getDate(String)
Date getDate(int)
boolean isClosed()
String getString(String)
String getString(int)
Time getTime(int)
Time getTime(String)
Timestamp getTimestamp(String)
Timestamp getTimestamp(int)
ResultSetMetaData getMetaData()
void clearWarnings()
int getFetchDirection()
int getFetchSize()
SQLWarning getWarnings()
void setFetchSize(int)
void afterLast()
void beforeFirst()
int findColumn(String)
BigDecimal getBigDecimal(String)
BigDecimal getBigDecimal(int)
InputStream getBinaryStream(String)
InputStream getBinaryStream(int)
Blob getBlob(String)
Blob getBlob(int)
Reader getCharacterStream(int)
Reader getCharacterStream(String)
Clob getClob(String)
Clob getClob(int)
int getConcurrency()
int getHoldability()
int getRow()
Statement getStatement()
boolean isAfterLast()
boolean isBeforeFirst()
boolean isFirst()
boolean isLast()
boolean wasNull()
void setFetchDirection(int)
2.3.10.6 java.sql.ResultSetMetaData
This section describes the support for java.sql.ResultSetMetaData, the ResultSet
object information interface.
boolean isReadOnly(int)
String getCatalogName(int)
int getColumnDisplaySize(int)
String getColumnLabel(int)
String getSchemaName(int)
String getTableName(int)
boolean isCaseSensitive(int)
boolean isCurrency(int)
boolean isDefinitelyWritable(int)
boolean isSearchable(int)
boolean isSigned(int)
boolean isWritable(int)
int getColumnCount()
String getColumnName(int)
int getPrecision(int)
int getScale(int)
int getColumnType(int)
boolean isAutoIncrement(int)
String getColumnTypeName(int)
String getColumnClassName(int)
boolean isWrapperFor(Class)
2.3.10.7 java.sql.Statement
This section describes the support status for java.sql.Statement, the SQL statement
interface.
boolean execute(String)
void close()
boolean isClosed()
ResultSet executeQuery(String)
int executeUpdate(String)
void cancel()
void clearWarnings()
int getFetchSize()
ResultSet getGeneratedKeys()
int getMaxRows()
boolean getMoreResults(int)
boolean getMoreResults()
int getQueryTimeout()
ResultSet getResultSet()
int getUpdateCount()
SQLWarning getWarnings()
void setFetchSize(int)
void setMaxRows(int)
void setQueryTimeout(int)
Connection getConnection()
int getFetchDirection()
int getResultSetConcurrency()
int getResultSetHoldability()
int getResultSetType()
void setFetchDirection(int)
2.3.10.8 java.sql.CallableStatement
java.sql.CallableStatement is the SQL statement interface used mainly to execute stored procedures. This section describes the support for this interface.
The methods in this interface are not thread safe.
Object getObject(int)
Object getObject(String)
boolean getBoolean(String)
boolean getBoolean(int)
byte getByte(String)
byte getByte(int)
short getShort(String)
short getShort(int)
int getInt(String)
int getInt(int)
long getLong(String)
long getLong(int)
float getFloat(int)
float getFloat(String)
double getDouble(int)
double getDouble(String)
byte[] getBytes(int)
byte[] getBytes(String)
String getString(String)
String getString(int)
Time getTime(int)
Time getTime(String)
Timestamp getTimestamp(int)
Timestamp getTimestamp(String)
Date getDate(String)
Date getDate(int)
BigDecimal getBigDecimal(String)
BigDecimal getBigDecimal(int)
Blob getBlob(int)
Blob getBlob(String)
Reader getCharacterStream(String)
Reader getCharacterStream(int)
Clob getClob(String)
Clob getClob(int)
boolean wasNull()
boolean execute()
void addBatch()
void clearParameters()
ResultSet executeQuery()
int executeUpdate()
ResultSetMetaData getMetaData()
boolean execute(String)
boolean isClosed()
Connection getConnection()
ResultSet executeQuery(String)
int executeUpdate(String)
void cancel()
void clearBatch()
void clearWarnings()
int[] executeBatch()
int getFetchSize()
ResultSet getGeneratedKeys()
int getMaxRows()
boolean getMoreResults(int)
boolean getMoreResults()
int getQueryTimeout()
ResultSet getResultSet()
int getUpdateCount()
SQLWarning getWarnings()
void setFetchSize(int)
void setMaxRows(int)
void setQueryTimeout(int)
int getFetchDirection()
int getResultSetConcurrency()
int getResultSetHoldability()
int getResultSetType()
void setFetchDirection(int)
2.3.10.9 java.sql.Blob
This section describes the support for java.sql.Blob, a BLOB interface used for binding or obtaining database BLOB columns.
long length()
OutputStream setBinaryStream(long)
InputStream getBinaryStream()
void free()
2.3.10.10 java.sql.Clob
This section describes the support for java.sql.Clob, a CLOB interface used for
binding or obtaining CLOB columns.
long length()
Writer setCharacterStream(long)
Reader getCharacterStream()
void free()
2.3.11.1 com.huawei.gauss.jdbc.GaussConnection
GaussConnection is a Zenith connection interface. This section describes the non-
standard interfaces provided by this interface.
The methods in this interface are not thread safe.
2.3.11.2 com.huawei.gauss.jdbc.GaussPrepareStatement
GaussPrepareStatement is a Zenith prepared statement interface. This section
describes the non-standard interfaces provided by this interface.
The methods in this interface are not thread safe.
Function Prototype
● SSL connections are not used.
– Parameters in args format
connect('IP','username','password','port')
Alternatively, with SSL certificate parameters:
connect('IP','username','password','port','/gs_regress/ssl/ca.crt','/gs_regress/ssl/client-cert.crt','/gs_regress/ssl/client-key.crt','','','P')
For details about the parameters supported by the connect method, see Table
2-33.
Examples
Connect to a database without using SSL connections.
# Load the pyzenith module.
import pyzenith
# Connect to a database.
host='192.168.0.1'
username='gaussdba'
password='gaussdb_123'
port='1888'
conn=pyzenith.connect(host,username,password,port)
To obtain the result set, you can use one of the fetch methods in Cursor Class.
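The fetch methods follow the standard Python DB-API pattern (fetchone, fetchmany, fetchall). As a sketch of that pattern, the example below uses the standard sqlite3 module as a stand-in for pyzenith (the connect arguments differ between the two drivers, but the cursor-level fetch calls are the same):

```python
import sqlite3

# sqlite3 stands in for pyzenith here; only the cursor-level
# fetch calls are being illustrated, not the connection setup.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("create table t(a int)")
c.executemany("insert into t values (?)", [(1,), (2,), (3,), (4,)])

c.execute("select a from t order by a")
first = c.fetchone()        # one row as a tuple
next_two = c.fetchmany(2)   # a list of the next two rows
rest = c.fetchall()         # a list of all remaining rows

c.close()
conn.close()
print(first, next_two, rest)
```

fetchone returns None once the result set is exhausted, while fetchmany and fetchall return empty lists, so a loop can use any of the three to drain a cursor.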
Function Prototype
● Run an SQL statement and automatically commit it.
# Create a cursor.
c=conn.cursor()
# Set the automatic committing.
conn.autocommit(True)
# Invoke the execute method to create a table.
c.execute("create table tablename(relational_properties)")
# Invoke the execute method to insert data.
c.execute("insert into tablename values(expression [ , ... ])")
# Close the cursor.
c.close()
Examples
● Run an SQL statement and automatically commit it.
# Create a cursor.
c=conn.cursor()
# Set the automatic committing.
conn.autocommit(True)
# Invoke the execute method to create a table.
c.execute("create table testexecute(a int,b char(10),c date)")
# Invoke the execute method to insert data.
c.execute("insert into testexecute values(1,'s','2012-12-13')")
# Close the cursor.
c.close()
# Close a database connection.
conn.close()
● Run the SQL statement and roll back or manually commit it.
# Load the pyzenith module.
import pyzenith
# Connect to the database as user gaussdba. The password is gaussdb_123 and the port number is
1888.
host='192.168.0.1'
username='gaussdba'
password='gaussdb_123'
port='1888'
conn=pyzenith.connect(host,username,password,port)
# Create a cursor.
c=conn.cursor()
# Invoke the execute method to create a table.
c.execute("create table testexecute(a int,b char(10),c date)")
# Invoke the execute method to insert data.
c.execute("insert into testexecute values(1,'s','2012-12-13')")
# Roll back or commit a transaction.
conn.commit()  # or conn.rollback() to discard the changes
# Close the cursor.
c.close()
# Close a database connection.
conn.close()
conn=pyzenith.connect(host,username,password,port)
# Create a cursor.
c=conn.cursor()
# Invoke the execute method to create a table.
c.execute("create table testexecute(a int,b char(10),c date)")
# Invoke the execute method to insert data.
c.execute("insert into testexecute values(1,'s','2012-12-13')")
# Invoke the execute method to obtain the query result.
c.execute("select * from testexecute")
# Obtain all results using the fetchall method.
row =c.fetchall()
# Close the cursor.
c.close()
# Close the database connection.
conn.close()
Function Prototype
conn.close()
Global Variables
Connect Method
Connection Class
Cursor Class
Time Objects
2.5 Go-based Development
2.5.1 Go Driver
The Go driver is released as source code. An upper-layer application imports the
code to an application project and compiles it together with the application. The
Zenith Go driver is developed from the Zenith C driver and is encapsulated using
the cgo tool.
The Go driver has three types of files: Go API files, C driver library files, and C
header files. The lib subdirectory stores the C driver dynamic library, and the
include subdirectory stores the header files used by the C driver through cgo.
NOTICE
(zenith) and description string. For details about the parameters, see the open
method.
Function Prototype
● Syntax reference
func open(driverName, dataSourceName string) (*DB, error)
● Parameter description: driverName is fixed to zenith. In dataSourceName, a
question mark (?) separates the connection string from the parameter list, and
parameters can be separated from one another by an ampersand (&), a
semicolon (;), or a combination of the two. The format of dataSourceName
can be:
user/passwd@ip:port?parameter=value1&;parameter=value2...
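To make the dataSourceName layout concrete, here is a minimal Python sketch (an illustration only; the helper name is ours and it is not part of any driver) that splits a DSN of this form into its parts, accepting '&', ';', or a combination of the two between parameters as described above:

```python
import re

def parse_dsn(dsn):
    """Split 'user/passwd@ip:port?k1=v1&;k2=v2' into its components.

    Illustrative helper for the format described above; not a driver API.
    """
    cred, _, rest = dsn.partition("@")
    user, _, passwd = cred.partition("/")
    hostport, _, params = rest.partition("?")
    ip, _, port = hostport.partition(":")
    # Parameters may be separated by '&', ';', or a combination of the two.
    kv = {}
    for item in re.split(r"[&;]+", params):
        if item:
            k, _, v = item.partition("=")
            kv[k] = v
    return user, passwd, ip, port, kv

print(parse_dsn("user/passwd@127.0.0.1:1611?ssl_mode=required&;connect_timeout=10"))
```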
Examples
● Connect to a database using SSL.
db, err = sql.Open("zenith", "user/password@127.0.0.1:1611;ssl_ca=ca.pem;ssl_cert=client-
cert.pem;ssl_key=client-key.pem;ssl_mode=required")
if err != nil {
return err
}
// Start a transaction.
tx, err := db.Begin()
if err != nil {
return err
}
// Execute the SQL statements.
if _,err := db.Exec("create table tst_autocommit (id bigint, name char(30))"); err != nil {
return err
}
if _,err := tx.Exec("insert into tst_autocommit values (3, 'Golus')"); err != nil {
return err
}
if _,err := tx.Exec("insert into tst_autocommit values (4, 'Hellen')"); err != nil {
return err
}
// Commit the transaction.
if err := tx.Commit(); err != nil {
return err
}
● When multiple SQL statements are concatenated into one long SQL statement
for execution, the following restrictions apply:
– SQL statements are separated by semicolons (;).
– SQL statements cannot contain bound parameters.
– PL/SQL statements, such as stored procedures and anonymous blocks, are
not supported.
– The SELECT statement is not supported. If the SELECT statement is
included, the long SQL statement can be executed properly but no value
will be returned.
if _,err := db.Exec("create table tstMultisql(f1 int)"); err != nil {
return err
}
if _, err := db.Exec("insert into tstMultisql values(1);insert into tstMultisql values(2);insert into tstMultisql
values(4);select * from tstMultisql;update tstMultisql set f1=3 where f1=2;delete from tstMultisql
where f1=4;"); err != nil {
return err
}
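The restrictions above can be checked client-side before a concatenated statement is submitted. The sketch below is an illustration only (the helper name and checks are ours, not part of the driver): it splits on semicolons and flags SELECT statements, bind-parameter placeholders, and PL/SQL keywords.

```python
def check_multisql(sql):
    """Return warnings for a concatenated multi-statement string,
    following the restrictions described above (illustrative helper only)."""
    warnings = []
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    for i, stmt in enumerate(statements, 1):
        head = stmt.split(None, 1)[0].lower()
        if head == "select":
            warnings.append(f"statement {i}: SELECT returns no value in multi-SQL")
        if ":" in stmt or "?" in stmt:
            warnings.append(f"statement {i}: bound parameters are not allowed")
        if head in ("begin", "declare", "call"):
            warnings.append(f"statement {i}: PL/SQL is not supported")
    return warnings

print(check_multisql("insert into t values(1);select * from t;update t set f1=2 where f1=1"))
```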
Function Prototype
func (db *DB) Close() error
2.5.7 Go Interfaces
● The Zenith Go driver is executed based on the SQL interfaces defined in the
Go language. An application imports the Zenith Go driver and executes the
init method to register the driver. For details about the standard SQL
interfaces of the Go language, visit https://golang.org/pkg/database/sql/.
Parameter Description

(rs *Rows) Next(): Prepares the next result row for the Scan method to read. If there is a further result row, true is returned; if there is no further row or an error occurs while advancing, false is returned.
2.5.8 Examples
● Connect to a database through the Go driver, run SQL statements, and close
the connection.
package zenithdriver
import (
"database/sql"
"database/sql/driver"
"fmt"
"os"
"reflect"
"strconv"
"testing"
"time"
"unsafe"
)
type (
testValue struct {
direction int
data driver.Value
}
)
// Create a connection.
db, err = sql.Open("zenith", "zenithdriver/Gauss_234@127.0.0.1:1611")
if err != nil {
return err
}
// Run the SQL statement to create a table.
_,err = db.Exec("create table tst_batchbind (id real, name varchar(20))")
if err != nil {
return err
}
// Create a statement.
stmt, err = db.Prepare("insert into tst_batchbind values (:1,:2)")
if err != nil {
return err
}
// Execute the SQL statement through bind parameters.
var input = [5][2]driver.Value{{1.2, "Golus"}, {2.3, "Bonus"}, {3.5, "Franj"}, {4.6, "Wliian"}, {5.7, "Dous"}}
for _, value := range input {
_, err = stmt.Exec(value[0], value[1])
if err != nil {
return err
}
}
// Close the statement.
stmt.Close()
// Close the connection.
db.Close()
Prerequisites
Use unixODBC-2.3.6 or later.
Procedure
Step 1 Obtain the unixODBC source code package from http://www.unixodbc.org/.
Step 2 Install unixODBC.
unixODBC is installed in the /usr/local directory by default. The data source file is
generated in the /usr/local/etc directory, and the library file is generated in
the /usr/local/lib directory.
tar zxvf unixODBC-2.3.6.tar.gz
cd unixODBC-2.3.6
./configure --enable-gui=no
make
make install
For descriptions of the parameters in the odbc.ini file, see Table 2-46.
Currently, GaussDB 100 ODBC supports two connection modes: Common TCP and SSL. SSL
modes are classified into unidirectional authentication and bidirectional authentication.
● If the SSL switch is turned on but no certificate information is configured on ODBC,
ODBC connects to the database in unidirectional authentication mode.
● If a certificate file is configured on ODBC and the SSL switch is turned on, ODBC
connects to the database through bidirectional authentication, which is more secure.
----End
Step 1 Add the following compilation and link parameters for compilation using libraries
and header files in the local environment:
-I${include_path} -lzeodbc -lodbc -lodbcinst
----End
Function API
2.6.4 Example
This section provides an example to illustrate how to develop applications based
on GaussDB 100.
#if WIN32
#include <windows.h>
#endif
#include <stdlib.h>
#include <stdio.h>
#include "sql.h"
#include "sqlext.h"
int main()
{
SQLHANDLE h_env = NULL;
SQLHANDLE h_conn = NULL;
SQLHANDLE h_stmt = NULL;
SQLINTEGER ret;
SQLCHAR *dsn = (SQLCHAR *)"myzenith";/*Data source name*/
SQLCHAR *username = (SQLCHAR *)"gaussdba";/*User name*/
SQLCHAR *password = (SQLCHAR *)"gaussdb_123";/*Password*/
SQLSMALLINT dsn_len = (SQLSMALLINT)strlen((const char *)dsn);
SQLSMALLINT username_len = (SQLSMALLINT)strlen((const char *)username);
SQLSMALLINT password_len = (SQLSMALLINT)strlen((const char *)password);
{
SQLFreeHandle(SQL_HANDLE_DBC, h_conn);
SQLFreeHandle(SQL_HANDLE_ENV, h_env);
return SQL_ERROR;
}
do
{
ret = SQLFetch(h_stmt);
if (ret != SQL_SUCCESS && ret != SQL_NO_DATA)
{
break;
}
if (ret == SQL_SUCCESS)
{
printf("get %d from table 'test'.\n", colvalue);
}
} while (ret != SQL_NO_DATA);
SQLLEN row = 0;
SQLRowCount(h_stmt, &row);
printf("get %ld rows from table 'test'.\n", (long)row);
// Release handles.
SQLFreeHandle(SQL_HANDLE_STMT, h_stmt);
SQLFreeHandle(SQL_HANDLE_DBC, h_conn);
SQLFreeHandle(SQL_HANDLE_ENV, h_env);
return SQL_SUCCESS;
}
SQLAllocHandle
Description: Allocates ODBC handles.
API:
SQLRETURN SQL_API SQLAllocHandle(SQLSMALLINT HandleType,
SQLHANDLE InputHandle,
SQLHANDLE *OutputHandle)
Input parameter:
Output parameter:
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLAllocEnv
Description: Allocates ODBC environment handles.
API:
SQLRETURN SQL_API SQLAllocEnv(SQLHENV *EnvironmentHandle)
Output parameter:
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLAllocConnect
Description: Allocates ODBC connection handles.
API:
SQLRETURN SQL_API SQLAllocConnect(SQLHENV EnvironmentHandle,
SQLHDBC *ConnectionHandle)
Input parameter:
Output parameter:
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLAllocStmt
Description: Allocates ODBC execution handles.
API:
SQLRETURN SQL_API SQLAllocStmt(SQLHDBC ConnectionHandle,
SQLHSTMT *StatementHandle)
Input parameter:
ConnectionHandle: connection handle
Output parameter:
StatementHandle: allocated execution handle
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLFreeHandle
Description: Releases ODBC handles.
API:
SQLRETURN SQL_API SQLFreeHandle(SQLSMALLINT HandleType, SQLHANDLE Handle)
Input parameter:
● HandleType: type of the handle to be released (SQL_HANDLE_ENV,
SQL_HANDLE_DBC, or SQL_HANDLE_STMT)
● Handle: handle
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLFreeEnv
Description: Releases ODBC environment handles.
API:
SQLRETURN SQL_API SQLFreeEnv(SQLHENV EnvironmentHandle)
Input parameter:
EnvironmentHandle: environment handle to be released
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLFreeConnect
Description: Releases ODBC connection handles.
API:
SQLRETURN SQL_API SQLFreeConnect(SQLHDBC ConnectionHandle)
Input parameter:
ConnectionHandle: connection handle to be released
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLFreeStmt
Description: Releases ODBC execution handles.
API:
SQLRETURN SQL_API SQLFreeStmt(SQLHSTMT StatementHandle,
SQLUSMALLINT Option)
Input parameter:
● StatementHandle: execution handle to be released
● Option: how a handle is released (SQL_DROP)
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLSetEnvAttr
Description: Sets ODBC environment handle attributes.
API:
SQLRETURN SQL_API SQLSetEnvAttr(SQLHENV EnvironmentHandle,
SQLINTEGER Attribute,
SQLPOINTER Value,
SQLINTEGER StringLength)
Input parameter:
● EnvironmentHandle: environment handle to be set
● Attribute: name of the attribute to be set (SQL_ATTR_ODBC_VERSION or
SQL_ATTR_OUTPUT_NTS)
SQLSetConnectAttr
Description: Sets ODBC connection handle attributes.
API:
SQLRETURN SQL_API SQLSetConnectAttr(SQLHDBC ConnectionHandle,
SQLINTEGER Attribute,
SQLPOINTER Value,
SQLINTEGER StringLength)
Input parameter:
● ConnectionHandle: connection handle to be set
● Attribute: name of the attribute to be set (SQL_ATTR_AUTOCOMMIT,
SQL_ATTR_LOGIN_TIMEOUT, or SQL_ATTR_CONNECTION_TIMEOUT)
● Value: value of the attribute to be set
● StringLength: length of an attribute value
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLSetStmtAttr
Description: Sets ODBC execution handle attributes.
API:
SQLRETURN SQL_API SQLSetStmtAttr(SQLHSTMT StatementHandle,
SQLINTEGER Attribute,
SQLPOINTER Value,
SQLINTEGER StringLength)
Input parameter:
● StatementHandle: execution handle to be set
● Attribute: name of the attribute to be set (such as
SQL_ATTR_PARAMSET_SIZE, SQL_ATTR_ROW_ARRAY_SIZE,
SQL_ATTR_ROW_BIND_TYPE, SQL_ATTR_ROWS_FETCHED_PTR, and
SQL_ATTR_ROW_STATUS_PTR)
● Value: value of the attribute to be set
● StringLength: length of an attribute value
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLConnect
Description: Uses a connection handle to connect to a data source.
API:
SQLRETURN SQL_API SQLConnect(SQLHDBC ConnectionHandle,
SQLCHAR *ServerName,
SQLSMALLINT NameLength1,
SQLCHAR *UserName,
SQLSMALLINT NameLength2,
SQLCHAR *Authentication,
SQLSMALLINT NameLength3)
Input parameter:
● ConnectionHandle: connection handle
● ServerName: name of the data source configured for ODBC
● NameLength1: length of the data source name
● UserName: username for connecting to the data source
● NameLength2: username length
● Authentication: password for connecting to the data source
● NameLength3: password length
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLDisconnect
Description: Disconnects from a data source.
API:
SQLRETURN SQL_API SQLDisconnect(SQLHDBC ConnectionHandle)
Input parameter:
ConnectionHandle: connection handle
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLDriverConnect
Description: Uses a connection handle to connect to a data source.
API:
SQLRETURN SQL_API SQLDriverConnect(
SQLHDBC hdbc,
SQLHWND hwnd,
SQLCHAR *szConnStrIn,
SQLSMALLINT cbConnStrIn,
SQLCHAR *szConnStrOut,
SQLSMALLINT cbConnStrOutMax,
SQLSMALLINT *pcbConnStrOut,
SQLUSMALLINT fDriverCompletion)
Input parameter:
Output parameter:
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLPrepare
Description: Prepares an SQL statement to be executed.
API:
SQLRETURN SQL_API SQLPrepare(SQLHSTMT StatementHandle,
SQLCHAR *StatementText,
SQLINTEGER TextLength)
Input parameter:
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLBindParameter
Description: Binds parameters to existing SQL execution handles.
API:
SQLRETURN SQL_API SQLBindParameter(
SQLHSTMT StatementHandle,
SQLUSMALLINT ParameterNumber,
SQLSMALLINT InputOutputType,
SQLSMALLINT ValueType,
SQLSMALLINT ParameterType,
SQLULEN ColumnSize,
SQLSMALLINT DecimalDigits,
SQLPOINTER ParameterValuePtr,
SQLLEN BufferLength,
SQLLEN * StrLen_or_IndPtr)
Input parameter:
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLBindCol
Description: Binds to the buffer for storing result set columns.
API:
SQLRETURN SQL_API SQLBindCol(SQLHSTMT StatementHandle,
SQLUSMALLINT ColumnNumber, SQLSMALLINT TargetType,
SQLPOINTER TargetValue, SQLLEN BufferLength,
SQLLEN *StrLen_or_Ind)
Input parameter:
SQLExecute
Description: Runs an SQL statement.
API:
SQLRETURN SQL_API SQLExecute(
SQLHSTMT StatementHandle)
SQLExecDirect
Description: Directly runs an SQL statement.
API:
SQLRETURN SQL_API SQLExecDirect(
SQLHSTMT StatementHandle,
SQLCHAR * StatementText,
SQLINTEGER TextLength)
Input parameter:
● StatementHandle: execution handle
● StatementText: SQL statement
● TextLength: SQL statement length
Return value:
● SQL_SUCCESS: successful
● SQL_NEED_DATA: unbound parameters exist
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLColAttribute
Description: Describes column attributes.
API:
SQLRETURN SQL_API SQLColAttribute(
SQLHSTMT StatementHandle,
SQLUSMALLINT ColumnNumber,
SQLUSMALLINT FieldIdentifier,
SQLPOINTER CharacterAttributePtr,
SQLSMALLINT BufferLength,
SQLSMALLINT * StringLengthPtr,
SQLLEN * NumericAttributePtr)
Input parameter:
● StatementHandle: execution handle
● ColumnNumber: column sequence number, starting from 1
● FieldIdentifier: description type (SQL_DESC_NAME, SQL_DESC_LENGTH,
SQL_DESC_NULLABLE, SQL_DESC_TYPE, or SQL_DESC_TYPE_NAME)
● BufferLength: length of character buffer
Output parameter:
● CharacterAttributePtr: character buffer
● StringLengthPtr: length of the obtained string
● NumericAttributePtr: digit buffer
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLGetDiagRec
Description: Diagnoses faults.
API:
SQLRETURN SQL_API SQLGetDiagRec(
SQLSMALLINT HandleType,
SQLHANDLE Handle,
SQLSMALLINT RecNumber,
SQLCHAR * SQLState,
SQLINTEGER * NativeErrorPtr,
SQLCHAR * MessageText,
SQLSMALLINT BufferLength,
SQLSMALLINT * TextLengthPtr)
Input parameter:
● HandleType: handle type
● Handle: handle
● RecNumber: sequence number of a diagnosed error. Currently, only one error
is cached.
● BufferLength: length of the error information buffer
Output parameter:
● SQLState: SQL status (currently not supported)
● NativeErrorPtr: error code on the data source side
● MessageText: error information buffer
● TextLengthPtr: error information length
Return value:
● SQL_SUCCESS: successful
● SQL_NO_DATA: no more diagnosis information available
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLFetch
Description: Obtains the next row in the results.
API:
SQLRETURN SQL_API SQLFetch(
SQLHSTMT StatementHandle)
SQLGetData
Description: Obtains the data of a column from result rows.
API:
SQLRETURN SQL_API SQLGetData(
SQLHSTMT StatementHandle,
SQLUSMALLINT Col_or_Param_Num,
SQLSMALLINT TargetType,
SQLPOINTER TargetValuePtr,
SQLLEN BufferLength,
SQLLEN * StrLen_or_IndPtr)
Input parameter:
● StatementHandle: execution handle
● Col_or_Param_Num: column sequence number, starting from 1
● TargetType: C target buffer type
● TargetValuePtr: target buffer
● BufferLength: buffer length
● StrLen_or_IndPtr: fetched data length
Return value:
● SQL_SUCCESS: successful
● SQL_SUCCESS_WITH_INFO: The column is truncated. Try again to obtain the
remaining data.
● SQL_NULL_DATA: The column is empty.
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLCloseCursor
Description: Closes a cursor.
API:
SQLRETURN SQL_API SQLCloseCursor(
SQLHSTMT StatementHandle)
Input parameter:
StatementHandle: execution handle
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLRowCount
Description: Obtains the number of rows. For a query, the number of obtained
rows is returned. For a modification SQL statement, the number of modified rows
is returned.
API:
SQLRETURN SQL_API SQLRowCount(
SQLHSTMT StatementHandle,
SQLLEN * RowCountPtr)
Input parameter:
StatementHandle: execution handle
Output parameter:
RowCountPtr: number of modified rows
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLEndTran
Description: Ends a transaction.
API:
SQLRETURN SQL_API SQLEndTran(SQLSMALLINT HandleType,
SQLHANDLE Handle,
SQLSMALLINT CompletionType)
Input parameter:
● HandleType: handle type
● Handle: handle
● CompletionType: how a transaction is ended (SQL_COMMIT or
SQL_ROLLBACK)
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLError
Description: Obtains error or diagnostic information associated with an
environment, connection, or statement handle.
API:
SQLRETURN SQL_API SQLError(SQLHENV EnvironmentHandle,
SQLHDBC ConnectionHandle,
SQLHSTMT StatementHandle,
SQLCHAR *Sqlstate,
SQLINTEGER *NativeError,
SQLCHAR *MessageText,
SQLSMALLINT BufferLength,
SQLSMALLINT *TextLength)
Input parameter:
● EnvironmentHandle: environment handle
● ConnectionHandle: connection handle
● StatementHandle: execution handle
● BufferLength: length of the error information buffer
Output parameter:
● Sqlstate: The output is empty.
● NativeError: local error code of a data source
● MessageText: error information buffer
● TextLength: error information length
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLFetchScroll
Description: Fetches specified rows.
API:
SQLRETURN SQL_API SQLFetchScroll(
SQLHSTMT StatementHandle,
SQLSMALLINT FetchOrientation,
SQLLEN FetchOffset)
Input parameter:
SQLParamData
Description: Checks parameter status.
API:
SQLRETURN SQL_API SQLParamData(
SQLHSTMT StatementHandle,
SQLPOINTER * ValuePtrPtr)
Input parameter:
StatementHandle: execution handle
Output parameter:
ValuePtrPtr: For a prepared SQL statement, the pointer to the parameter that is
not yet fully bound is returned. For an executed SQL statement, the result column
token (currently the column ID) is returned.
Return value:
● SQL_SUCCESS: successful
● SQL_NEED_DATA: Parameters that need to be bound to more data exist.
● SQL_PARAM_DATA_AVAILABLE: Some columns are not completely obtained.
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLPutData
Description: Inputs data to be bound to execution handles. If SQLParamData
returns readable data, you do not need to perform SQLExecute again.
API:
SQLRETURN SQL_API SQLPutData(
SQLHSTMT StatementHandle,
SQLPOINTER DataPtr,
SQLLEN StrLen_or_Ind)
Input parameter:
● StatementHandle: execution handle
● DataPtr: first bound address of the input parameter
● StrLen_or_Ind: input parameter length
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
SQLNumResultCols
Description: Returns the number of columns in the result set.
Interface:
SQLRETURN SQLNumResultCols(
SQLHSTMT StatementHandle,
SQLSMALLINT * ColumnCountPtr)
Input parameter:
● StatementHandle: execution handle
Output parameter:
● ColumnCountPtr: where to return the number of columns in the result set
Return value:
● SQL_SUCCESS: successful
● !=SQL_SUCCESS: failed
Thread-safe: no
3.1 Conventions
Variable Naming Conventions
GaussDB 100 supports user-defined names, called identifiers, including database
names, table names, column names, view names, function names, procedure
names, variable names, and usernames.
● Identifiers start with a letter or an underscore (_) and can contain letters,
digits, underscores (_), and other characters.
● A database name can contain a maximum of 30 characters. Other identifiers
can contain a maximum of 64 characters.
● If an identifier is not enclosed in quotation marks and contains spaces or any
of the following characters: !%&^(,)*+,-./, it is truncated at the first such
character. The characters before the truncation point are used as the
identifier, and the remaining characters are parsed as another identifier or a
keyword.
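A small sketch of the unquoted-identifier truncation rule (our illustration, not the server's parser; the character set is taken from the list above): the identifier is cut at the first space or special character, and anything after the cut is parsed separately.

```python
# Characters from the list above that end an unquoted identifier,
# plus the space character (illustrative sketch only).
SPECIALS = set(' !%&^(,)*+-./')

def truncate_identifier(name):
    """Return (identifier, remainder) per the unquoted-identifier rule."""
    for i, ch in enumerate(name):
        if ch in SPECIALS:
            return name[:i], name[i:]
    return name, ""

print(truncate_identifier("my table"))   # ('my', ' table')
print(truncate_identifier("col_1"))      # ('col_1', '')
```

Identifiers enclosed in quotation marks bypass this rule entirely.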
SQL Standards
● The length of the SQL statement cannot exceed 1 MB. Otherwise, an error is
reported.
● A constant string supports a maximum of 8000 bytes.
● The maximum length of a row (total length of all column values in the row)
in a table is 64000 bytes.
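These limits can be validated on the client before a statement is sent. The sketch below is an illustration (the helper name and the simple single-quote literal matching are ours); it checks the 1 MB statement limit and the 8000-byte constant-string limit:

```python
import re

MAX_STMT_BYTES = 1 * 1024 * 1024   # SQL statement limit: 1 MB
MAX_LITERAL_BYTES = 8000           # constant string limit: 8000 bytes

def check_limits(sql):
    """Return a list of limit violations (illustrative client-side check)."""
    problems = []
    if len(sql.encode("utf-8")) > MAX_STMT_BYTES:
        problems.append("statement exceeds 1 MB")
    # Naive scan for single-quoted string constants.
    for lit in re.findall(r"'([^']*)'", sql):
        if len(lit.encode("utf-8")) > MAX_LITERAL_BYTES:
            problems.append("string constant exceeds 8000 bytes")
    return problems

print(check_limits("insert into t values ('" + "x" * 9000 + "')"))
```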
Comment Conventions
The SQL scripts of GaussDB 100 support two comment formats:
● Single-line comment
Format: -- Comment
● Multi-line comment
Format: /*Comment*/
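Both comment formats can be mixed in one SQL script, for example:

```sql
-- Single-line comment: text after the two hyphens is ignored until end of line.
SELECT 1 FROM SYS_DUMMY;
/* Multi-line comment:
   everything between the markers is ignored,
   even across several lines. */
SELECT 2 FROM SYS_DUMMY;
```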
3.2 Example
A human resource (HR) database is provided for users to learn and verify GaussDB
100. For details about how to install the database, see the Installation Guide.
3.2.1 Sample Database Description
If the sample database is installed by executing the sample database script, the
system automatically creates a user named hr. If the sample database is not
installed, you can manually create the sample tables by using the SQL statements
in this chapter.
● employment_history table
-- Create the employment_history table.
create table hr.employment_history
(
staff_id NUMBER(6),
start_date DATE,
end_date DATE,
employment_id VARCHAR2(10),
section_id NUMBER(4)
);
-- Insert data.
insert into hr.employment_history (staff_id, start_date, end_date, employment_id, section_id)
values (102, to_date('13-01-1993', 'dd-mm-yyyy'), to_date('24-07-1998', 'dd-mm-yyyy'), 'IT_PROG',
60);
● sections table
-- Create the sections table.
create table hr.sections
(
section_id NUMBER(4) not null,
section_name VARCHAR2(30),
manager_id NUMBER(6),
place_id NUMBER(4)
);
-- Insert data.
insert into hr.sections (section_id, section_name, manager_id, place_id)
values (10, 'Administration', 200, 1700);
● places table
-- Create the places table.
create table hr.places
(
place_id NUMBER(4) not null,
street_address VARCHAR2(40),
postal_code VARCHAR2(12),
city VARCHAR2(30),
state_province VARCHAR2(25),
state_id CHAR(2)
);
-- Insert data.
insert into hr.places (place_id, street_address, postal_code, city, state_province, state_id)
values (1000, '1297 Via Cola di Rie', '00989', 'Roma', '', 'IT');
● areas table
-- Create the areas table.
create table hr.areas
(
area_id NUMBER,
area_name VARCHAR2(25)
);
-- Insert data.
insert into hr.areas (area_id, area_name)
values (1, 'Europe');
● college table
-- Create the college table.
create table hr.college
(
college_id NUMBER,
college_name VARCHAR2(40)
);
-- Insert data.
insert into hr.college (college_id, college_name)values (1001, 'The University of Melbourne');
● employments table
-- Create the employments table.
create table hr.employments
(
employment_id VARCHAR2(10) not null,
employment_title VARCHAR2(35),
min_salary NUMBER(6),
max_salary NUMBER(6)
);
-- Insert data.
insert into hr.employments (employment_id, employment_title, min_salary, max_salary)
values ('AD_PRES', 'President', 20000, 40000);
● states table
-- Create the states table.
create table hr.states
(
state_id CHAR(2),
state_name VARCHAR2(40),
area_id NUMBER,
constraint state_c_id_pk primary key (state_ID)
);
-- Insert data.
insert into hr.states (state_id, state_name, area_id)
values ('AR', 'Argentina', 2);
● staffs table
-- Create the staffs table.
CREATE TABLE hr.staffs
(
staff_id NUMBER(6) not null,
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
employment_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
section_id NUMBER(4),
graduated_name VARCHAR2(60)
);
-- Insert data.
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, null, 124, 50);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (199, 'Douglas', 'Grant', 'DGRANT', '650.507.9844', to_date('13-01-2000', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, null, 124, 50);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (200, 'Jennifer', 'Whalen', 'JWHALEN', '515.123.4444', to_date('17-09-1987', 'dd-mm-yyyy'),
'AD_ASST', 4400.00, null, 101, 10);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (201, 'Michael', 'Hartstein', 'MHARTSTE', '515.123.5555', to_date('17-02-1996', 'dd-mm-yyyy'),
'MK_MAN', 13000.00, null, 100, 20);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (202, 'Pat', 'Fay', 'PFAY', '603.123.6666', to_date('17-08-1997', 'dd-mm-yyyy'), 'MK_REP',
6000.00, null, 201, 20);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (203, 'Susan', 'Mavris', 'SMAVRIS', '515.123.7777', to_date('07-06-1994', 'dd-mm-yyyy'),
'HR_REP', 6500.00, null, 101, 40);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (204, 'Hermann', 'Baer', 'HBAER', '515.123.8888', to_date('07-06-1994', 'dd-mm-yyyy'), 'PR_REP',
10000.00, null, 101, 70);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (205, 'Shelley', 'Higgins', 'SHIGGINS', '515.123.8080', to_date('07-06-1994', 'dd-mm-yyyy'),
'AC_MGR', 12000.00, null, 101, 110);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (206, 'William', 'Gietz', 'WGIETZ', '515.123.8181', to_date('07-06-1994', 'dd-mm-yyyy'),
'AC_ACCOUNT', 8300.00, null, 205, 110);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (100, 'Steven', 'King', 'SKING', '515.123.4567', to_date('17-06-1987', 'dd-mm-yyyy'), 'AD_PRES',
24000.00, null, null, 90);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (101, 'Neena', 'Kochhar', 'NKOCHHAR', '515.123.4568', to_date('21-09-1989', 'dd-mm-yyyy'),
'AD_VP', 17000.00, null, 100, 90);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (102, 'Lex', 'De Haan', 'LDEHAAN', '515.123.4569', to_date('13-01-1993', 'dd-mm-yyyy'),
'AD_VP', 17000.00, null, 100, 90);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (103, 'Alexander', 'Hunold', 'AHUNOLD', '590.423.4567', to_date('03-01-1990', 'dd-mm-yyyy'),
'IT_PROG', 9000.00, null, 102, 60);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (104, 'Bruce', 'Ernst', 'BERNST', '590.423.4568', to_date('21-05-1991', 'dd-mm-yyyy'), 'IT_PROG',
6000.00, null, 103, 60);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (105, 'David', 'Austin', 'DAUSTIN', '590.423.4569', to_date('25-06-1997', 'dd-mm-yyyy'),
'IT_PROG', 4800.00, null, 103, 60);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (106, 'Valli', 'Pataballa', 'VPATABAL', '590.423.4560', to_date('05-02-1998', 'dd-mm-yyyy'),
'IT_PROG', 4800.00, null, 103, 60);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (107, 'Diana', 'Lorentz', 'DLORENTZ', '590.423.5567', to_date('07-02-1999', 'dd-mm-yyyy'),
'IT_PROG', 4200.00, null, 103, 60);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (108, 'Nancy', 'Greenberg', 'NGREENBE', '515.124.4569', to_date('17-08-1994', 'dd-mm-yyyy'),
'FI_MGR', 12000.00, null, 101, 100);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (109, 'Daniel', 'Faviet', 'DFAVIET', '515.124.4169', to_date('16-08-1994', 'dd-mm-yyyy'),
'FI_ACCOUNT', 9000.00, null, 108, 100);
insert into hr.staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id)
values (110, 'John', 'Chen', 'JCHEN', '515.124.4269', to_date('28-09-1997', 'dd-mm-yyyy'),
'FI_ACCOUNT', 8200.00, null, 108, 100);
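After the sample tables are created and populated, they can be used to verify simple queries. For example, a sketch (not part of the sample script) that lists the staff of the Administration section together with the section name:

```sql
SELECT s.first_name, s.last_name, sec.section_name
FROM hr.staffs s, hr.sections sec
WHERE s.section_id = sec.section_id
  AND sec.section_name = 'Administration';
```

With the data inserted above, this returns the row for Jennifer Whalen (section_id 10).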
3.3 Keyword
Keywords are meaningful words in SQL statements. This topic describes SQL
standard keywords and special GaussDB 100 keywords.
ABORT Non-reserved - -
ABS - Non-reserved -
ACCESS - - -
ACCOUNT Non-reserved - -
ADMIN - Reserved -
AGGREGATE - Reserved -
ALIAS - Reserved -
ALSO - - -
ALWAYS - - -
ANALYSE - - -
ANALYZE Reserved - -
APP - - -
ARRAY - Reserved -
ASENSITIVE - Non-reserved -
ASSIGNMENT - Non-reserved -
ASYMMETRIC - Non-reserved -
AT - Reserved Reserved
ATOMIC - Non-reserved -
ATTRIBUTE - - -
AUDIT Reserved - -
AUTHID - - -
AUTOEXTEND Non-reserved - -
AUTOMAPPED - - -
BACKWARD - - -
BARRIER - - -
BIGINT Non-reserved - -
BINARY_DOUBLE Non-reserved - -
BINARY_INTEGER Non-reserved - -
BITVAR - Non-reserved -
BUCKETS - - -
BREADTH - Reserved -
C - Non-reserved Non-reserved
CACHE Non-reserved - -
CALLED - Non-reserved -
CARDINALITY - Non-reserved -
CHAIN - Non-reserved -
CHARACTERISTICS - - -
CHECKED - Non-reserved -
CHECKPOINT Non-reserved - -
CLASS - Reserved -
CLEAN - - -
CLUSTER - - -
COLUMNS Non-reserved - -
COMMAND_FUNCTION_CODE - Non-reserved -
COMMENT Non-reserved - -
COMMENTS - - -
COMPRESS Reserved - -
COMPLETION - Reserved -
CONCURRENTLY - - -
CONDITION - - -
CONFIGURATION - - -
CONSTRUCTOR - Reserved -
CONTAINS - Non-reserved -
CONTENT Non-reserved - -
CONVERSION - - -
COORDINATOR - - -
COPY - - -
COST - - -
CSV - - -
CUBE - Reserved -
CUMULATIVE Reserved - -
CURRENT_CATALOG - - -
CURRENT_PATH - Reserved -
CURRENT_ROLE - Reserved -
CURRENT_SCHEMA - - -
DATABASE Non-reserved - -
DATAFILE Non-reserved - -
DBCOMPATIBILITY - - -
DECODE - - -
DEFAULTS - - -
DEFINED - Non-reserved -
DEFINER - Non-reserved -
DELIMITER - - -
DELIMITERS - - -
DELTA - - -
DEPTH - Reserved -
DEREF - Reserved -
DESTROY - Reserved -
DESTRUCTOR - Reserved -
DETERMINISTIC - Reserved -
DIRECT - - -
DISABLE Non-reserved - -
DISCARD Non-reserved - -
DISPATCH - Non-reserved -
DISTRIBUTE Non-reserved - -
DISTRIBUTION - - -
DO Non-reserved - -
DOCUMENT - - -
DYNAMIC - Reserved -
DYNAMIC_FUNCTION_CODE - Non-reserved -
EACH - Reserved -
ENABLE Non-reserved - -
ENCODING - - -
ENCRYPTED - - -
ENFORCED - - -
ENUM - - -
EOL - - -
EQUALS - Reserved -
ESCAPING - - -
EVERY - Reserved -
EXCHANGE - - -
EXCLUDE - - -
EXCLUDING - - -
EXCLUSIVE - - -
EXISTING - Non-reserved -
EXPLAIN Non-reserved - -
EXTENSION - - -
FAMILY Non-reserved - -
FILEHEADER Non-reserved - -
FINAL - Non-reserved -
FINISH Reserved - -
FOLLOWING Non-reserved - -
FORALL Reserved - -
FORCE Non-reserved - -
FORMATTER Non-reserved - -
FORWARD Non-reserved - -
FREE - Reserved -
FREEZE Reserved - -
FUNCTIONS Non-reserved - -
G - Non-reserved -
GENERAL - Reserved -
GENERATED - Non-reserved -
GO - Reserved Reserved
GREATEST - - -
GROUPID Non-reserved - -
GROUPING - Reserved -
HANDLER Non-reserved - -
HEADER Non-reserved - -
HIERARCHY - Non-reserved -
HOST - Reserved -
IDENTIFIED Reserved - -
IF Non-reserved - -
IGNORE - Reserved -
ILIKE Reserved - -
IMMUTABLE Non-reserved - -
IMPLEMENTATION - Non-reserved -
IMPLICIT Non-reserved - -
INCLUDING Non-reserved - -
INCREMENT Reserved - -
INDEX Reserved - -
INDEXES Non-reserved - -
INFIX - Non-reserved -
INHERIT Non-reserved - -
INHERITS Non-reserved - -
INITIAL Non-reserved - -
INITIALIZE - Reserved -
INITRANS Non-reserved - -
INLINE Non-reserved - -
INOUT - Reserved -
INSTANCE - Non-reserved -
INSTANTIABLE - Non-reserved -
INSTEAD Non-reserved - -
ISNULL Reserved - -
ITERATE - Reserved -
K - Non-reserved -
KEY_MEMBER - Non-reserved -
KEY_TYPE - Non-reserved -
LABEL Non-reserved - -
LATERAL - Reserved -
LC_COLLATE Non-reserved - -
LC_CTYPE Non-reserved - -
LEAKPROOF Non-reserved - -
LEAST - - -
LIST Reserved - -
LISTEN Non-reserved - -
LOAD Non-reserved - -
LOCATION Non-reserved - -
LOCATOR - Reserved -
LOCK Reserved - -
LOG Non-reserved - -
LOGGING Non-reserved - -
LOGIN Non-reserved - -
LOOP Non-reserved - -
M - Non-reserved -
MAP - Reserved -
MAPPING Non-reserved - -
MATCHED - - -
MAXEXTENTS Non-reserved - -
MAXSIZE Non-reserved - -
MAXTRANS Non-reserved - -
MAXVALUE Non-reserved - -
MERGE Non-reserved - -
METHOD - Non-reserved -
MINEXTENTS Non-reserved - -
MINUS Reserved - -
MINVALUE Non-reserved - -
MOD - Non-reserved -
MODE Non-reserved - -
MODIFIES - Reserved -
MOVE Non-reserved - -
MOVEMENT Non-reserved - -
NCHAR Reserved - -
NCLOB - Reserved -
NEW - Reserved -
NLSSORT Non-reserved - -
NOCOMPRESS Non-reserved - -
NOCYCLE Non-reserved - -
NODE Non-reserved - -
NOLOGGING Non-reserved - -
NOLOGIN Non-reserved - -
NOMAXVALUE Non-reserved - -
NOMINVALUE Non-reserved - -
NONE - Reserved -
NOTHING Non-reserved - -
NOTIFY Non-reserved - -
NOTNULL Reserved - -
NOWAIT Reserved - -
NULLS Non-reserved - -
NUMSTR Non-reserved - -
NVARCHAR2 Non-reserved - -
NVL - - -
OFFLINE Reserved - -
OFFSET Non-reserved - -
OIDS Non-reserved - -
OLD - Reserved -
ONLINE Reserved - -
OPERATION - Reserved -
OPERATOR Non-reserved - -
OPTIMIZATION Non-reserved - -
ORDINALITY - Reserved -
OUT - Reserved -
OVER - - -
OVERLAY - Non-reserved -
OVERRIDING - Non-reserved -
OWNED Non-reserved - -
OWNER Non-reserved - -
PARAMETER - Reserved -
PARAMETERS - Reserved -
PARAMETER_MODE - Non-reserved -
PARAMETER_NAME - Non-reserved -
PARAMETER_ORDINAL_POSITION - Non-reserved -
PARAMETER_SPECIFIC_CATALOG - Non-reserved -
PARAMETER_SPECIFIC_NAME - Non-reserved -
PARAMETER_SPECIFIC_SCHEMA - Non-reserved -
PARSER Non-reserved - -
PARTITION Non-reserved - -
PARTITIONS Non-reserved - -
PASSING Non-reserved - -
PASSWORD Non-reserved - -
PATH - Reserved -
PCTFREE Non-reserved - -
PER Non-reserved - -
PERCENT Non-reserved - -
PERFORMANCE Non-reserved - -
PLACING Non-reserved - -
PLANS Non-reserved - -
POOL Non-reserved - -
POSTFIX - Reserved -
PRECEDING Non-reserved - -
PREFERRED Non-reserved - -
PREORDER - Reserved -
PREPARED Non-reserved - -
PRIVILEGE Non-reserved - -
PROCEDURAL Non-reserved - -
PROFILE Non-reserved - -
QUERY Non-reserved - -
QUOTE Non-reserved - -
RANGE Non-reserved - -
RAW Reserved - -
READS - Reserved -
REASSIGN Non-reserved - -
REBUILD Non-reserved - -
RECHECK Non-reserved - -
REFERENCING - Reserved -
REINDEX Non-reserved - -
REJECT Non-reserved - -
RELEASE Non-reserved - -
RELOPTIONS Non-reserved - -
REMOTE Non-reserved - -
RENAME Reserved - -
REPLACE Non-reserved - -
REPLICA Non-reserved - -
REPLICATION Non-reserved - -
RESET Non-reserved - -
RESIZE Reserved - -
RESOURCE Non-reserved - -
RESTART Non-reserved - -
RESULT - Reserved -
RETURNING Non-reserved - -
REUSE Non-reserved - -
ROLLUP - Reserved -
ROUTINE - Reserved -
ROUTINE_CATALOG - Non-reserved -
ROUTINE_NAME - Non-reserved -
ROUTINE_SCHEMA - Non-reserved -
ROW - Reserved -
ROWID Reserved - -
ROWNUM Reserved - -
ROWSCN Reserved - -
RULE Non-reserved - -
SCOPE - Reserved -
SELF - Non-reserved -
SENSITIVE - Non-reserved -
SEQUENCES Non-reserved - -
SERVER Non-reserved - -
SESSIONTIMEZONE Non-reserved - -
SETOF - - -
SETS - Reserved -
SHARE Non-reserved - -
SHOW Non-reserved - -
SIMILAR - Non-reserved -
SMALLDATETIME - - -
SNAPSHOT Non-reserved - -
SOURCE - Non-reserved -
SPECIFIC - Reserved -
SPECIFICTYPE - Reserved -
SPECIFIC_NAME - Non-reserved -
SPLIT Non-reserved - -
SQLCODE - - Reserved
SQLERROR - - Reserved
SQLEXCEPTION - Reserved -
SQL_MAP Reserved - -
SQLWARNING - Reserved -
STABLE Non-reserved - -
STANDALONE Non-reserved - -
STATE - Reserved -
STATIC - Reserved -
STATISTICS Non-reserved - -
STDIN Non-reserved - -
STDOUT Non-reserved - -
STORAGE Non-reserved - -
STORE Non-reserved - -
STRICT Non-reserved - -
STRIP Non-reserved - -
STRUCTURE - Reserved -
STYLE - Non-reserved -
SUBLIST - Non-reserved -
SUPERUSER Non-reserved - -
SYNONYM Reserved - -
SYS_REFCURSOR Non-reserved - -
SYSDATE Reserved - -
SYSID Non-reserved - -
TABLES Non-reserved - -
TABLESPACE Non-reserved - -
TEMP Non-reserved - -
TEMPLATE Non-reserved - -
TERMINATE - Reserved -
TEXT Non-reserved - -
TINYINT Non-reserved - -
TRANSACTIONS_COMMITTED - Non-reserved -
TRANSACTIONS_ROLLED_BACK - Non-reserved -
TRANSACTION_ACTIVE - Non-reserved -
TRANSFORM - Non-reserved -
TRANSFORMS - Non-reserved -
TREAT - Reserved -
TRIGGER_CATALOG - Non-reserved -
TRIGGER_NAME - Non-reserved -
TRIGGER_SCHEMA - Non-reserved -
TRUNCATE Non-reserved - -
TRUSTED Non-reserved - -
TYPES Non-reserved - -
UESCAPE - - -
UNBOUNDED Non-reserved - -
UNDER - Reserved -
UNENCRYPTED Non-reserved - -
UNLIMITED Non-reserved - -
UNLISTEN Non-reserved - -
UNLOCK Non-reserved - -
UNLOGGED Non-reserved - -
UNNEST - Reserved -
UNTIL Reserved - -
UNUSABLE Non-reserved - -
USER_DEFINED_TYPE_CATALOG - Non-reserved -
USER_DEFINED_TYPE_NAME - Non-reserved -
USER_DEFINED_TYPE_SCHEMA - Non-reserved -
VACUUM Non-reserved - -
VALID Non-reserved - -
VALIDATE Non-reserved - -
VALIDATION Non-reserved - -
VALIDATOR Non-reserved - -
VARCHAR2 Reserved - -
VARIABLE - Reserved -
VARIADIC Non-reserved - -
VERBOSE - - -
VERSION Non-reserved - -
VOLATILE Non-reserved - -
WHITESPACE Non-reserved - -
WINDOW Non-reserved - -
WORKLOAD Non-reserved - -
WRAPPER Non-reserved - -
XML Non-reserved - -
XMLATTRIBUTES - - -
XMLCONCAT - - -
XMLELEMENT - - -
XMLEXISTS - - -
XMLFOREST - - -
XMLPARSE - - -
XMLPI - - -
XMLROOT - - -
XMLSERIALIZE - - -
YES Non-reserved - -
3.4.1.1 Integer
GaussDB 100 supports basic (native) 32-bit integers and 64-bit integers.
BINARY_INTEGER
Syntax:
BINARY_INTEGER
INTEGER
Syntax:
INTEGER
Purpose:
Keywords:
● INT
● INT SIGNED
● INTEGER SIGNED
● SHORT
● SMALLINT
● TINYINT
BINARY_UINT32
Syntax:
BINARY_UINT32
INTEGER UNSIGNED
Syntax:
INTEGER UNSIGNED
Purpose:
Keywords:
● UINT
● BINARY_UINT32
● INTEGER UNSIGNED
For the INTEGER UNSIGNED data type, the error message "GS-00659 %s out of range" is
displayed if overflow occurs.
BINARY_BIGINT
Syntax:
BINARY_BIGINT
Keywords:
● BINARY_BIGINT
● BIGINT SIGNED
BIGINT
Syntax:
BIGINT
Purpose:
Keywords:
● BIGINT
● BINARY_BIGINT
For the BIGINT data type, error GS-00659 "%s out of range" will be displayed if there is
overflow.
BINARY_DOUBLE
Syntax:
BINARY_DOUBLE
DOUBLE
Syntax:
DOUBLE
Purpose:
● USE_NATIVE_DATATYPE=TRUE maps to the BINARY_DOUBLE type.
● USE_NATIVE_DATATYPE = FALSE maps to the NUMBER type.
Keywords:
● REAL
● DOUBLE
● FLOAT
● BINARY_DOUBLE
FLOAT
Syntax:
FLOAT
Purpose:
● USE_NATIVE_DATATYPE=TRUE maps to the BINARY_DOUBLE type.
● USE_NATIVE_DATATYPE = FALSE maps to the NUMBER type.
Keywords:
● REAL
● DOUBLE
● FLOAT
● BINARY_DOUBLE
REAL
Syntax:
REAL
Purpose:
● USE_NATIVE_DATATYPE=TRUE maps to the BINARY_DOUBLE type.
● USE_NATIVE_DATATYPE = FALSE maps to the NUMBER type.
Occupied space: 8 bytes
Keywords:
● REAL
● DOUBLE
● FLOAT
● BINARY_DOUBLE
● Floating-point numbers are not exact, and their upper and lower boundaries are not
exact values. Note this when testing boundary values. The statement select
cast(1E308 as real) + 1 from SYS_DUMMY; cannot be directly used for such tests.
This behavior results from how floating-point numbers are defined at the bottom
layer of computers and is irrelevant to the specific implementation of GaussDB 100.
● For the DOUBLE data type, error GS-00659 "%s out of range" will be displayed if there
is overflow.
● The DECIMAL or NUMBER data type can store a maximum of 40 valid digits.
● For the DECIMAL or NUMBER data type, error GS-00659 "%s out of range"
will be displayed if there is overflow.
DECIMAL/NUMBER
Syntax:
NUMBER/DECIMAL
NUMBER/DECIMAL(p)
NUMBER/DECIMAL(p,s)
● The value range of p is [1, 38], indicating the maximum storage precision.
● The value range of s is [–84, 127], indicating the number of valid digits after
the decimal point.
● If p and s are not specified, the value after the decimal point is not limited. A
maximum of 40 valid digits can be stored.
● If s is not specified or is 0, the NUMBER type has no decimal part.
Keywords:
● DECIMAL
● NUMBER
● NUMERIC
– precision indicates the number of valid digits, and its value range is [1,
38]. scale indicates the number of digits to the right of the decimal point,
and its value range is [–84, 127]. If the value of scale is a negative
number, the number of digits on the left of the decimal point is reduced.
– If precision is specified but scale is not specified, the default value of
scale is 0. That is, in column_name NUMBER(precision), the default
value of scale is 0, indicating that there is no decimal part.
The following table lists how the settings of precision and scale affect
the storage of the Number type.
The DECIMAL and NUMBER types use variable-length storage; the length depends on
the number of valid digits.
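A brief sketch of how precision and scale interact (the table name is illustrative; under the NUMBER(p,s) semantics described above, inserted values are rounded to the declared scale):

```sql
CREATE TABLE hr.num_demo (val NUMBER(5,2));
-- 123.456 is rounded to scale 2 and stored as 123.46.
INSERT INTO hr.num_demo VALUES (123.456);
-- A value such as 1234.5 would need 4 digits before the decimal point,
-- exceeding precision 5 with scale 2, and is expected to raise an
-- out-of-range error.
SELECT val FROM hr.num_demo;
```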
CHAR
Syntax:
CHAR(size [BYTE | CHAR])
● For a fixed-length string, size BYTE or size CHAR indicates the maximum number of
bytes or characters that can be contained.
● If neither CHAR nor BYTE is specified, BYTE is used by default.
● If the input length is less than the value specified by size, spaces are used to right-pad
the value.
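The right-padding behavior can be observed with a small sketch (table name illustrative):

```sql
CREATE TABLE hr.char_demo (code CHAR(5));
-- 'ab' is shorter than the declared size and is right-padded with spaces.
INSERT INTO hr.char_demo VALUES ('ab');
-- The stored value has the full declared length of 5.
SELECT LENGTH(code) FROM hr.char_demo;
```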
NCHAR
Syntax:
NCHAR(size)
CLOB
Syntax:
CLOB
VARCHAR
Syntax:
VARCHAR(size [BYTE | CHAR])
NVARCHAR
Syntax:
NVARCHAR(size)
BINARY
Syntax:
BINARY(size)
VARBINARY
Syntax:
VARBINARY(size)
IMAGE
Syntax:
IMAGE
Purpose: Stores large object data. It is the large object type of VARBINARY.
Occupied space: 0 to 4 GB
Keywords:
● IMAGE
● LONGBLOB
● MEDIUMBLOB
BLOB
Syntax:
BLOB
Purpose: Stores binary data of variable-length objects. It is the large object type of
RAW.
Occupied space: 0 to 4 GB
Keywords:
● BLOB
● BYTEA
DATETIME/DATE
Syntax:
DATETIME
Purpose: Stores data of the date type that does not contain the time zone.
The value includes year, month, day, hour, minute, and second.
Keywords:
● DATE
● DATETIME
TIMESTAMP
Syntax:
TIMESTAMP[(n)]
Purpose: Stores the data of the timestamp type without the time zone.
● The value includes year, month, day, hour, minute, second, and microsecond.
● The value range of n is [0, 6], indicating the precision after second.
TIMESTAMP(n) can also be set to TIMESTAMP without any parameters. In this
case, the number of decimal digits after the second is 6 by default.
Value range: [0001-01-01 00:00:00.000000, 9999-12-31 23:59:59.999999]
Occupied space: 8 bytes
Keywords: TIMESTAMP
Purpose: Stores the data of the timestamp type with the time zone.
● The value includes year, month, day, hour, minute, second, and microsecond.
● The value range of n is [0, 6], indicating the precision after second.
TIMESTAMP(n) can also be set to TIMESTAMP without any parameters. In this
case, the number of decimal digits after the second is 6 by default.
Value range: [0001-01-01 00:00:00.000000, 9999-12-31 23:59:59.999999]
Occupied space: 12 bytes
Keywords: TIMESTAMP(n) WITH TIME ZONE
Purpose: Stores data of the timestamp type with the local time zone. The time
zone itself is not stored. When data is stored, its timestamp is converted to the
timestamp of the database time zone. When users query the data, the timestamp is
converted to the time zone of the current session.
TIMESTAMP(n) can also be set to TIMESTAMP without any parameters. In this
case, the number of decimal digits after the second is 6 by default.
Occupied space: 8 bytes
Keywords: TIMESTAMP(n) WITH LOCAL TIME ZONE
" " (space), "-" (hyphen), "\", "/", ":", "," (comma), "." (period), ";" (semicolon), X    Delimiter    Yes
-- Use the delimiter "X" to separate seconds and milliseconds in data of the date type.
select to_timestamp('2017-09-11 23:45:59.44', 'YYYY-MM-DD HH24:MI:SSXFF6') from SYS_DUMMY;
-- Use the delimiters "-" (hyphen), "/", ":", and "." to separate data of the date type.
select to_char(systimestamp, 'YYYY-MM/DD HH24.MI:SS.FF') from SYS_DUMMY;
-- Use the delimiters " " (space), "\", ",", and ";" to separate data of the date type.
select to_char(systimestamp, 'YYYY MM\DD HH24.MI;SS,FF') from SYS_DUMMY;
HH    Hour (HH12 by default)    -    -
MI    Minute (0 to 59)    Yes    -
MM    Month number (1 to 12) of a date    Yes    -
Q    Quarter (1 to 4) of a date    No    -
SS    Second (0 to 59)    Yes    -
WW    Week of the year that the current date is in. The first week starts from the first day of the current year, and 7 days are regarded as a week.    No    -
W    Week of the month that the current date is in. The first week starts from the first day of the current month, and 7 days are regarded as a week.    No    -
YY    2-digit year (for example, 2018 can be written as 18)    No    -
The system also provides the default output format for the date type, as shown in
Table 3-5.
Examples
● Based on the description about format control characters, you can use the
TO_CHAR function to specify the output format of the date type. For
example:
SELECT to_char(sysdate, 'MON-YY-DD') FROM SYS_DUMMY;
TO_CHAR(SYSDATE, 'MON-YY-DD')
-----------------------------
JAN-18-07
1 rows fetched.
TO_CHAR(SYSDATE, 'MON-YY-DD HH
-------------------------------
JAN-18-07 05:01:15 AM
1 rows fetched.
SYSDATE SYSTIMESTAMP
---------------------- ----------------------------------------
2018-01-07 17:18:18 2018-01-07 17:18:18.230000 +08:00
1 rows fetched.
TO_DATE('07-JAN-2018', 'DD-MON-YYYY')
-------------------------------------
2018-01-07 00:00:00
1 rows fetched.
BOOLEAN
Syntax:
BOOLEAN
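No usage example is given for BOOLEAN above; the following is a minimal sketch, assuming the TRUE and FALSE literals are accepted (table name illustrative):

```sql
CREATE TABLE hr.bool_demo (id INT, flag BOOLEAN);
INSERT INTO hr.bool_demo VALUES (1, TRUE);
INSERT INTO hr.bool_demo VALUES (2, FALSE);
SELECT id, flag FROM hr.bool_demo;
```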
Examples:
● year_precision indicates the precision of the YEAR field, that is, the maximum
number of valid digits in the YEAR column.
● If year_precision is set to 2, the maximum value of the YEAR field is 99. If the
value specified by the user exceeds 99, an error is reported.
● The minimum value of year_precision is 0, and the maximum value is 4. The
default value is 2. If the input exceeds the specified precision, an error is
reported.
Purpose: Stores an interval in the unit of days, hours, minutes, seconds, and
microseconds.
● The value range of n1 is [0,7], which indicates the precision of the day. The
default value is 2.
● The value range of n2 is [0,6], which indicates the precision after the second.
If the value is not specified, the default value 6 is used.
● The minimum value of INTERVAL DAY TO SECOND is -9999999
23:59:59.999999, indicating a time difference of negative 9999999 days,
23 hours, 59 minutes, and 59.999999 seconds.
Examples:
● day_precision indicates the precision of the DAY field. Its effect is the same
as that of year_precision.
● The minimum value is 0, the maximum value is 7, and the default value is 2.
● fractional_seconds_precision indicates the precision after the decimal point
in the specified SECOND field. The minimum value is 0, the maximum value is
6, and the default value is 6.
● The decimal part of the SECOND field can exceed the specified precision.
However, the excess digits are rounded off so that the result meets the
specified precision.
Examples
CREATE TABLE PFA_dsitvl(id int, dsval interval day(7) to second);
INSERT INTO PFA_dsitvl VALUES(1, '1231 12:3:4.1234');
INSERT INTO PFA_dsitvl VALUES(2, 'P1231DT16H3.3333333S');
INSERT INTO PFA_dsitvl VALUES(3, 'PT12H');
INSERT INTO PFA_dsitvl VALUES(4, '-P99DT655M999.99999S');
INSERT INTO PFA_dsitvl VALUES(5, '-0 00:19:7.7777777777');
INSERT INTO PFA_dsitvl VALUES(6, '-1234 0:0:0.0004');
ID DSVAL
--------------------------------------
6 -0001234 00:00:00.000400
4 -0000099 11:11:39.999990
5 -0000000 00:19:07.777778
3 +0000000 12:00:00.000000
1 +0001231 12:03:04.123400
2 +0001231 16:00:03.333333
6 rows fetched.
Compare whether two dsval values are equivalent. In the following SQL
statement, the right operand used for comparison is a string. Before comparison,
the system converts the string to the INTERVAL DAY TO SECOND type.
SELECT * FROM PFA_dsitvl WHERE dsval = '0000 12:0000:0.000000';
ID DSVAL
--------------------------------------
3 +0000000 12:00:00.000000
1 rows fetched.
ID DSVAL
--------------------------------------
1 +0001231 12:03:04.123400
2 +0001231 16:00:03.333333
2 rows fetched.
MIN(DSVAL) MAX(DSVAL)
----------------------------------------------------
-0001234 00:00:00.000400 +0001231 16:00:03.333333
1 rows fetched.
+0000002 00:00:00.000000
1 rows fetched.
Similar situations may occur in a leap year. February 2016 has 29 days. After
one year is added, the result falls in February 2017, which is not a leap year
and has only 28 days. In this case, an error occurs.
SELECT TO_DATE('2016-02-29', 'YYYY-MM-DD') + TO_YMINTERVAL('P1Y') FROM SYS_DUMMY;
Another example:
SELECT TO_DATE('2018-01-31', 'YYYY-MM-DD') + NUMTOYMINTERVAL(1, 'year') FROM SYS_DUMMY;
TO_TIMESTAMP('9999-05-23 11',
------------------------------
+2915142 11:00:00.000000
1 rows fetched.
3.5 Functions
Functions encapsulate service logic to implement specific functionality, and
return a result after execution. Users can modify system functions in
GaussDB 100; however, doing so may change the meaning of the functions and
disrupt system behavior.
Absolute
----------------------------------------
100
1 rows fetched.
ACOS
Syntax:
ACOS(n)
ACOS
----------------------------------------
3.1415926535897932384626433832795028842
1 rows fetched.
ASIN
Syntax:
ASIN(n)
Example:
Return the arc sine of 0.5.
SELECT ASIN(0.5) AS "ASIN" from SYS_DUMMY;
ASIN
----------------------------------------
.523598775598298873077107230546583814033
1 rows fetched.
BITAND
Syntax:
BITAND(exp1,exp2)
BITAND
--------------------
1
1 rows fetched.
BITOR
Syntax:
BITOR(exp1,exp2)
BITOR
--------------------
31
1 rows fetched.
BITXOR
Syntax:
BITXOR(exp1,exp2)
Example:
● Example 1:
SELECT BITXOR (1,1) AS "BITXOR" from SYS_DUMMY;
BITXOR
--------------------
0
1 rows fetched.
● Example 2
SELECT BITXOR (1,0)AS "BITXOR" from SYS_DUMMY;
BITXOR
--------------------
1
1 rows fetched.
● Example 3
SELECT BITXOR (11,3) AS "BITXOR" from SYS_DUMMY;
BITXOR
--------------------
8
1 rows fetched.
CEIL
Syntax:
CEIL(n)
Example:
CEIL
----------------------------------------
16
1 rows fetched.
COS
Syntax:
COS(n)
COS
----------------------------------------
-.50000000000011937382925089877706420632
1 rows fetched.
EXP
Syntax:
EXP(n)
Purpose: Returns the nth power of e (the base number of the natural logarithm).
● The return type is NUMBER.
● The input parameter n is a numeric data type or any non-numeric data type
that can be implicitly converted into a numeric data type.
Example:
Return the value for the third power of e.
SELECT EXP(3);
EXP(3)
----------------------------------------
20.085536923187667740928529654581717897
FLOOR
Syntax:
FLOOR(exp)
FLOOR
----------------------------------------
12
1 rows fetched.
INET_NTOA
Syntax:
INET_NTOA(exp)
Example:
INET_NTOA(4294967295)
---------------------
255.255.255.255
1 rows fetched.
LN
Syntax:
LN(exp)
Note: The exp parameter can be set only to a NUMBER value greater than 0.
Example:
LN
----------------------------------------
4.24849524204935898912334419812754393724
1 rows fetched.
LOG
Syntax:
log(exp1[,exp2])
Note:
Example:
LOG
----------------------------------------
4
1 rows fetched.
MOD
Syntax:
MOD(exp1,exp2)
MOD
----------------------------------------
2
1 rows fetched.
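As an additional sketch of MOD, take the remainder of 17 divided by 5:

```sql
-- 17 = 3 * 5 + 2, so the remainder is 2.
SELECT MOD(17, 5) AS "MOD" FROM SYS_DUMMY;
```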
POWER
Syntax:
POWER(base,expn)
POWER
----------------------------------------
125
1 rows fetched.
RAWTOHEX
Syntax:
RAWTOHEX(exp)
1 rows fetched.
ROUND
Syntax:
ROUND (number[, decimals])
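A hedged sketch of both forms of the decimals argument:

```sql
-- Round to one decimal place: 15.79 becomes 15.8.
SELECT ROUND(15.79, 1) AS "ROUND" FROM SYS_DUMMY;
-- A negative decimals value rounds on the left of the decimal point:
-- 15.79 becomes 20.
SELECT ROUND(15.79, -1) AS "ROUND" FROM SYS_DUMMY;
```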
SIGN
Syntax:
SIGN (exp)
Purpose: Obtains the sign of the exp result. If the value is greater than 0, 1 is
returned. If the value is less than 0, -1 is returned. If the value is 0, 0 is returned.
The input parameter exp is a numeric value.
Example:
Obtain the sign of 5-6.
SELECT SIGN(5-6) FROM SYS_DUMMY;
SIGN(5-6)
----------------------------------------
-1
1 rows fetched.
SIN
Syntax:
SIN(n)
SIN
----------------------------------------
.707106781186584075022132715997995378626
1 rows fetched.
SQRT
Syntax:
SQRT(n)
SQRT
----------------------------------------
7
1 rows fetched.
TRUNC
Syntax:
TRUNC(number,scale)
TRUNC
----------------------------------------
15.7
1 rows fetched.
SELECT TRUNC(15.79,-1)AS "TRUNC" from SYS_DUMMY;
TRUNC
----------------------------------------
10
1 rows fetched.
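As an illustration, the truncation shown in the two examples above can be sketched in Python. The trunc_number helper below is hypothetical (not a GaussDB API); it mirrors the described behavior, in which a positive scale keeps decimal digits and a negative scale zeroes digits to the left of the decimal point:

```python
import math

def trunc_number(value, scale=0):
    """Truncate value to `scale` decimal digits without rounding.

    A negative scale truncates digits to the left of the decimal
    point, mirroring TRUNC(15.79, -1) -> 10.
    """
    factor = 10 ** scale
    return math.trunc(value * factor) / factor

print(trunc_number(15.79, 1))   # 15.7
print(trunc_number(15.79, -1))  # 10.0
```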
● Expressions that can be converted to STRING values are those whose results are of the
numeric, date, Boolean, BINARY, CHAR, VARCHAR, or VARCHAR2 type.
● Expressions that can be converted to INT values are those whose results are of the
numeric, Boolean, BINARY, CHAR, VARCHAR, or VARCHAR2 type.
● The function supports CLOB and BLOB data, with a maximum of 65534 bytes.
(The data length of the CLOB and BLOB types is not restricted by the LENGTH
and LENGTHB functions.)
CONCAT
Syntax:
CONCAT(str[,...])
Purpose: Concatenates two or more strings. The input parameters are strings or
expressions that can be converted to strings.
Example:
● Concatenate three strings. If one string is NULL, it is ignored.
SELECT CONCAT('11',NULL,'22');
CONCAT('11',NULL,'22')
----------------------
1122
1 rows fetched.
● Concatenate three strings. If NULL is enclosed in single quotation marks, it is
processed as a string.
SELECT CONCAT('11','NULL','22');
CONCAT('11','NULL','22')
------------------------
11NULL22
1 rows fetched.
CONCAT_WS
Syntax:
CONCAT_WS(separator, str1, str2,...)
Purpose: Concatenates two or more strings, inserting the specified separator
between them. The return value is the concatenation of the parameter values
separated by separator.
● If a parameter is NULL, the CONCAT_WS function ignores this parameter.
However, if NULL is enclosed in single quotes, the CONCAT_WS function will
process NULL as a string.
● This function can be nested.
● The input parameter is a string or an expression that can be converted to a
string. The return value is a string.
Note: The return value supports a maximum of 8000 bytes. If the value exceeds
8000 bytes, an error is reported.
Example:
Concatenate three strings. If one parameter is NULL, it is ignored.
SELECT CONCAT_WS('-','11',NULL,'22');
CONCAT_WS('-','11',NULL,'22')
------------------------------
11-22
1 rows fetched.
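For illustration, the NULL-handling rule above can be sketched in Python, with None standing in for NULL. The concat_ws function here is a hypothetical helper, not GaussDB code:

```python
def concat_ws(separator, *parts):
    """Join the non-NULL (non-None) parts with separator."""
    return separator.join(str(p) for p in parts if p is not None)

print(concat_ws('-', '11', None, '22'))    # 11-22
print(concat_ws('-', '11', 'NULL', '22'))  # 11-NULL-22: quoted NULL is a string
```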
DBMS_LOB.SUBSTR
Syntax:
DBMS_LOB.SUBSTR(str[,len[,start]])
Purpose: Truncates a string. This function is used to truncate and return the
substring of len bytes starting from |start| in str.
● The start parameter indicates the index position. The value is a positive
integer, indicating the position of the |start| byte from left to right.
● len indicates the number of bytes truncated from the index position to the
right. When len is less than or equal to 0, the return value of the
DBMS_LOB.SUBSTR function is empty.
FIND_IN_SET
Syntax:
FIND_IN_SET(sub,src)
Purpose: Returns the position of the sub string within src, a list of strings
separated by commas (,). If sub is not in src, 0 is returned.
Example:
SELECT FIND_IN_SET('B','A,B,C,D');
FIND_IN_SET('B','A,B,C,D')
--------------------------
2
1 rows fetched.
HEX
Syntax:
HEX(p1)
Purpose: Returns the hexadecimal representation of p1. A string is converted byte
by byte, and a number is converted as a numeric value.
Example:
SELECT HEX('ABC'), HEX(255) FROM SYS_DUMMY;
HEX('ABC') HEX(255)
---------- --------
616263 FF
1 rows fetched.
HEX2BIN
Syntax:
HEX2BIN(str)
Purpose: Converts a hexadecimal string into the binary (raw) value it represents.
Example:
SELECT HEX2BIN('0X39');
HEX2BIN('0X39')
----------------------------------------------------------------
9
1 rows fetched.
HEXTORAW
Syntax:
HEXTORAW(str)
Example:
HEXTORAW('ABCDEF')
----------------------------------------------------------------
ABCDEF
1 rows fetched.
INSERT
Syntax:
INSERT(str,pos,len,newstr)
Purpose: Returns the string str and replaces the string starting from the pos
position with length of len with newstr.
● If pos is not within the length of the string str, the original string is returned.
● If the value of len is greater than the lengths of the other string starting from
pos, replace all starting from pos with newstr.
● The input parameters str and newstr are expressions that can be converted
to STRING values. The input parameters pos and len are expressions that can
be converted to INTEGER values. The return type is STRING.
Note:
This function processes characters. The return value supports a maximum of 8000
bytes.
Example:
● Return the string Quadratic and replace the four characters starting from the
third character with the new string What.
SELECT INSERT('Quadratic', 3, 4, 'What');
INSERT('QUADRATIC', 3, 4, 'WHAT')
---------------------------------
QuWhattic
1 rows fetched.
● Return the string Quadratic and replace the four characters starting from the
tenth character with the new string What. In this example, the start position
pos is not within the length of the string Quadratic, so the original string is
returned.
SELECT INSERT('Quadratic', 10, 4, 'What');
1 rows fetched.
● Return the string Quadratic and replace the 100 characters starting from the
third character with the new string What. In this example, the value of the
parameter len is greater than the length of the remaining string following the
start position pos. Therefore, all characters following the start position pos
are replaced with the string newstr.
SELECT INSERT('Quadratic', 3, 100, 'What');
1 rows fetched.
INSTR
Syntax:
INSTR(str1,str2[,pos[,n]])
Purpose: Searches for a string. This function returns the position of the string to
be searched for in the source string and computes the position by character.
● str1 is the source string. str2 is the string to be searched for. pos is the index
position, indicating the starting position in str1 for searching for str2. pos is
optional and the value is an integer that is not 0. If this parameter is omitted,
the default value is 1. n indicates the number of times that str2 occurs and is
optional. The value is a positive integer. If the parameter is omitted, the
default value is 1.
● In the source string str1, the INSTR function searches for the string str2 from
the |pos| position according to the sequence from left to right (pos is a
positive integer) or from right to left (pos is a negative integer). When str2
occurs for the nth time, the system returns the position of the first character
of str2 in the source string str1. If no str2 is found for the nth time of its
occurrence, the system returns 0. Regardless of whether the search sequence
is from left to right or from right to left, the position of the string to be
searched for in the source string is calculated from left to right.
● The input parameters str1 and str2 are expressions that can be converted to
STRING values. The input parameters pos and n are expressions that can be
converted to INT values. The return type is INT.
Example:
● Search a string gaussdb for a string au from the first character from left to
right to determine the first occurrence of au.
SELECT INSTR('gaussdb','au', 1, 1) POSITION FROM SYS_DUMMY;
POSITION
------------
2
1 rows fetched.
● Search a string gaussdb for a string db from the last character to determine
the first occurrence of db from right to left.
SELECT INSTR('gaussdb','db', -1, 1) POSITION FROM SYS_DUMMY;
POSITION
------------
6
1 rows fetched.
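The search rules above can be sketched in Python for illustration. The instr function below is a hypothetical re-implementation of the described character-based semantics (1-based positions, a negative pos scans right to left, and 0 is returned when the nth occurrence does not exist); it is not GaussDB code:

```python
def instr(source, target, pos=1, n=1):
    """Return the 1-based position of the nth occurrence of target.

    A positive pos searches left to right from that character; a
    negative pos searches right to left starting |pos| characters
    from the end. Returns 0 if the nth occurrence is not found.
    """
    if pos > 0:
        idx = pos - 1
        for _ in range(n):
            idx = source.find(target, idx)
            if idx == -1:
                return 0
            idx += 1          # next scan starts one past this match
        return idx            # already converted to 1-based
    # Negative pos: matches may start no later than len(source)+pos.
    idx = len(source) + pos
    for _ in range(n):
        idx = source.rfind(target, 0, idx + len(target))
        if idx == -1:
            return 0
        idx -= 1              # next scan must end before this match
    return idx + 2            # undo the decrement, make it 1-based

print(instr('gaussdb', 'au', 1, 1))   # 2
print(instr('gaussdb', 'db', -1, 1))  # 6
```

Either way, the returned position is counted from the left, matching the note above.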
INSTRB
Syntax:
INSTRB(str1,str2[,pos[,n]])
Purpose: Searches for a string. This function returns the position of the string to
be searched for in the source string and computes the position by byte.
● str1 is the source string, in which the target string is searched for.
● str2 is the string to be searched for in str1.
● The input parameters str1 and str2 are expressions that can be converted to
STRING values. The input parameters pos and n are expressions that can be
converted to INT values. The return type is INT.
● pos is the index position, indicating the starting position in str1 for searching
for str2. pos is optional and the value is an integer that is not 0. If this
parameter is omitted, the default value is 1.
● n indicates the number of times that str2 occurs and is optional. The value is
a positive integer. If the parameter is omitted, the default value is 1.
Note:
● In the source string str1, the INSTRB function searches for the string str2
from the |pos| position according to the sequence from left to right (pos is a
positive integer) or from right to left (pos is a negative integer). When str2
occurs for the nth time, the system returns the position of the first character
of str2 in the source string str1. If no str2 is found for the nth time of its
occurrence, the system returns 0.
● Regardless of whether the search sequence is from left to right or from right
to left, the position of the string to be searched for in the source string is
calculated from left to right.
Example:
● Search for the position of or starting from the third byte in oracleor from left
to right when or occurs for the first time. The first search specifies n, and the
second search does not.
SELECT INSTRB('oracleor','or', 3, 1) POSITION_WITH_n, INSTRB('oracleor','or', 3)
POSITION_WITHOUT_n FROM SYS_DUMMY;
POSITION_WITH_N POSITION_WITHOUT_N
--------------- ------------------
7 7
1 rows fetched.
● Search the 我A string for the string A upon its first occurrence following the
first byte, from left to right, by byte and by character.
SELECT INSTRB('我A','A',1,1)AS BYTE_POSITION,INSTR('我A','A',1,1) AS WORD_POSITION FROM
SYS_DUMMY;
BYTE_POSITION WORD_POSITION
------------- -------------
4 2
1 rows fetched.
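The byte-versus-character distinction in the example above can be reproduced in Python by searching the UTF-8 encoding of the string. The instrb function below is a hypothetical sketch covering only the simple left-to-right case:

```python
def instrb(source, target, pos=1, n=1):
    """1-based byte position of the nth occurrence (left-to-right only)."""
    data, needle = source.encode('utf-8'), target.encode('utf-8')
    idx = pos - 1
    for _ in range(n):
        idx = data.find(needle, idx)
        if idx == -1:
            return 0
        idx += 1
    return idx

# 我 occupies 3 bytes in UTF-8, so A is the 4th byte but the 2nd character.
print(instrb('我A', 'A'))       # 4
print('我A'.find('A') + 1)      # 2  (character position)
```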
INET_ATON
Syntax:
INET_ATON(str)
Purpose: Converts a dotted-decimal IPv4 address string into its integer
representation.
Example:
SELECT INET_ATON('192.168.1.1');
INET_ATON('192.168.1.1')
------------------------
3232235777
1 rows fetched.
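Both address conversions (INET_ATON here and INET_NTOA earlier) agree with Python's standard ipaddress module, which can be used to cross-check results:

```python
import ipaddress

# INET_ATON direction: dotted-decimal string -> 32-bit integer
print(int(ipaddress.IPv4Address('192.168.1.1')))  # 3232235777

# INET_NTOA direction: 32-bit integer -> dotted-decimal string
print(str(ipaddress.IPv4Address(4294967295)))     # 255.255.255.255
```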
LEFT
Syntax:
LEFT(str,length)
Purpose: Returns a certain number of characters from the left of a given string.
● str is a string from which substrings are extracted. Strings of the CLOB type
are not supported.
● length is a positive integer that specifies the number of characters returned
from the left.
Example:
Return the three characters from the left of the abcdefg string.
select left('abcdefg', 3) from SYS_DUMMY;
LEFT('ABCDEFG', 3)
------------------
abc
1 rows fetched.
LENGTH
Syntax:
LENGTH(str)
Purpose: Obtains the length of a string. This function returns the number of
characters of str.
The input parameter is an expression that can be converted to a STRING value.
The return type is INT.
Note: If the input parameter is of the UTF-8 type and the error "Nls internal error,
invalid utf-8 buffer" is reported, use the LENGTHB function.
Example:
Determine the character length of the my score is 90 string.
SELECT LENGTH('my score is 90') AS BYTE_LENGTH FROM SYS_DUMMY;
BYTE_LENGTH
------------
14
1 rows fetched.
LENGTHB
Syntax:
LENGTHB(str)
Purpose: Obtains the length of a string. This function returns the number of bytes
of str.
The input parameter is an expression that can be converted to a STRING value.
The return type is INT.
Note: If the input parameter is of the CHAR type, this function equals the LENGTH
function.
Example:
WORD_LENGTH
------------
20
1 rows fetched.
LOCATE
Syntax:
LOCATE(substr,str[,pos])
Purpose: Returns the position of the substr substring upon its first occurrence in
the str string. The start position is pos. When the pos parameter is omitted, the
start position is the first character. If substr is not in str after the start position
pos, 0 is returned.
The input parameters substr and str are expressions that can be converted to
STRING values. The input parameter pos is an expression that can be converted to
an INTEGER value. The return type is INTEGER.
Note:
Example:
● Return the position of the bar substring upon its first occurrence in the
foobarbar string. The start position is the fifth character.
SELECT LOCATE('bar', 'foobarbar', 5);
LOCATE('BAR', 'FOOBARBAR', 5)
-----------------------------
7
1 rows fetched.
LOCATE('XBAR', 'FOOBAR')
------------------------
0
1 rows fetched.
LOWER
Syntax:
LOWER(str)
Purpose: Converts all uppercase letters in str to lowercase.
The input parameter is an expression that can be converted to a string. The return
type is STRING.
Example:
Lower
------------------
abcdefg
1 rows fetched.
LPAD
Syntax:
LPAD(str,pad_len[,pad_str])
Purpose: Left-pads str with pad_str (spaces by default) until the result reaches a
length of pad_len.
EMPLOYEE_ID LAST_NAME
------------ ----------------------------------------------------------------
1001 ...............BROWN
13 ...............Jones
102 ...............Smith
3 rows fetched.
LTRIM
Syntax:
LTRIM(str[,set])
Purpose: Deletes spaces or other predefined characters on the left of a string. This
function can be used to format the output of a query.
● This function deletes all characters in set from the left of str. If set is not
specified, spaces are deleted by default.
● If str is character data, it must be enclosed in single quotation marks. The
LTRIM function checks whether the leftmost character of str is contained in
set. If it is, the function deletes the character and repeats the check until the
leftmost character of str is not included in set.
● The input parameter is an expression that can be converted to a STRING
value. The return type is STRING.
Note:
Currently, the CLOB and BLOB data cannot be processed.
Example:
● Delete the less than sign (<), greater than sign (>), and equal to sign (=) from
the leftmost of the <=====>GAUSSDB <=====> string.
SELECT LTRIM('<=====>GAUSSDB <=====>', '<>=') "LTRIM Example" FROM SYS_DUMMY;
LTRIM Example
------------------
GAUSSDB <=====>
1 rows fetched.
● If set is not specified, spaces are deleted from the leftmost part of the
GAUSSDB string.
SELECT LTRIM(' GAUSSDB') "LTRIM Example" FROM SYS_DUMMY;
LTRIM Example
-------------
GAUSSDB
1 rows fetched.
● Delete the G and A letters from the leftmost part of the GAUSSDB string.
SELECT LTRIM('GAUSSDB', 'GA') "LTRIM Example" FROM SYS_DUMMY;
LTRIM Example
---------------
USSDB
1 rows fetched.
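For illustration, the set parameter above behaves like the character set accepted by Python's str.lstrip, which removes any leading character contained in the set (not a leading prefix):

```python
# Every leading <, > or = is removed; trimming stops at the first
# character not in the set ('G').
print('<=====>GAUSSDB <=====>'.lstrip('<>='))  # GAUSSDB <=====>

# With no set, leading whitespace is removed.
print('   GAUSSDB'.lstrip())                   # GAUSSDB

# 'GA' is a character set, not a prefix: G, A, then U stops the trim.
print('GAUSSDB'.lstrip('GA'))                  # USSDB
```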
REGEXP_INSTR
Syntax:
REGEXP_INSTR(str,pattern[,position[,occurrence[,return_opt[,match_param[,subexpr]]]]])
Purpose: Returns the start or end position of the string that complies with the
regular expression.
● The input parameter str is a string that requires regular processing. It
supports the STRING and NUMBER types.
● The input parameter pattern is the regular expression for matching.
● The input parameter position indicates the start position of the string where
the matching will start. The default value is 1.
● The input parameter occurrence indicates the sequence time of the
occurrence of the group matching the regular expression. The default value is
1.
● The input parameter return_opt indicates the return mode, in which 0
indicates the start position is returned, and 1 indicates the end position is
returned.
● The input parameter match_param indicates the search mode (i indicates
case-insensitive search, and c indicates case-sensitive search. The default
value is c).
● The input parameter subexpr indicates that for pattern with subexpressions,
the function returns the string matching the subexprth subexpression (the
default value is 0).
● The return type is INTEGER.
Note: Currently, the CLOB and BLOB data cannot be processed.
Example 1:
Return the start position of the string complying with the regular expression [^,]+
in the 17,20,23 string. The start position is the first character, and the group
appears for the third time. The search is case-insensitive.
SELECT REGEXP_INSTR('17,20,23','[^,]+',1,3,0,'i') AS STR FROM SYS_DUMMY;
STR
---
7
1 rows fetched.
Example 2:
Return the end position of the string complying with the regular expression [^,]+
in the 17,20,23 string. The start position is the first character, and the group
appears for the third time. The search is case-insensitive.
SELECT REGEXP_INSTR('17,20,23','[^,]+',1,3,1,'i') AS STR FROM SYS_DUMMY;
STR
---
9
1 rows fetched.
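The two examples above can be mimicked with Python's re module: in 1-based terms, the nth match's start position (return_opt 0) is m.start()+1 and its end position (return_opt 1) is m.end()+1. The regexp_instr function below is a hypothetical sketch, not GaussDB code:

```python
import re

def regexp_instr(text, pattern, position=1, occurrence=1,
                 return_opt=0, flags=0):
    """1-based start or end position of the nth regex match, else 0."""
    matches = list(re.finditer(pattern, text[position - 1:], flags))
    if len(matches) < occurrence:
        return 0
    m = matches[occurrence - 1]
    offset = position - 1
    return offset + (m.end() + 1 if return_opt else m.start() + 1)

# re.IGNORECASE plays the role of match_param 'i' in the examples.
print(regexp_instr('17,20,23', '[^,]+', 1, 3, 0, re.IGNORECASE))  # 7
print(regexp_instr('17,20,23', '[^,]+', 1, 3, 1, re.IGNORECASE))  # 9
```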
REGEXP_SUBSTR
Syntax:
REGEXP_SUBSTR(str,pattern[,position[,occurrence[,match_param[,subexpr]]]])
Purpose: Returns the substring of str that matches the regular expression
pattern. The parameters have the same meanings as those of REGEXP_INSTR.
STR
---
23
1 rows fetched.
REPLACE
Syntax:
REPLACE(str,src,dst)
Purpose: Replaces the src substring in the str string with the dst substring.
The input parameter str indicates the original string, src indicates the substring to
be replaced, and dst indicates the replacement substring. The return type is
STRING.
Note:
● Currently, the CLOB and BLOB data cannot be processed.
● The returned value supports a maximum of 8000 bytes. If the value exceeds
8000 bytes, an error is reported.
Example:
Replace sg in the fgsgswsgs string with eeerrrttt.
SELECT REPLACE('fgsgswsgs', 'sg' ,'eeerrrttt') FROM SYS_DUMMY;
---------------------------------------
fgeeerrrtttsweeerrrttts
1 rows fetched.
REVERSE
Syntax:
REVERSE(str)
Purpose: Returns the string str with the order of its characters reversed.
STR
---
DCBA
1 rows fetched.
RIGHT
Syntax:
RIGHT(str,length)
Purpose: Returns a certain number of characters from the right of a given string.
● str is a string from which substrings are extracted. Strings of the CLOB type
are not supported.
● length is a positive integer that specifies the number of characters returned
from the right.
Note:
● If length is 0 or negative, the RIGHT function returns an empty string.
● If length is greater than the length of str, the RIGHT function returns the
entire str string.
● Currently, the client supports a string of up to 32767 bytes. Therefore, the
function returns a maximum of 32767 bytes.
Example:
Return the three characters from the right of the abcdefg string.
select right('abcdefg', 3) from SYS_DUMMY;
RIGHT('ABCDEFG', 3)
-------------------
efg
1 rows fetched.
RPAD
Syntax:
RPAD(str,pad_len[,pad_str])
Purpose: Right-pads str with pad_str (spaces by default) until the result reaches a
length of pad_len.
3 rows fetched.
RTRIM
Syntax:
RTRIM(str[,set])
Purpose: Deletes spaces or the characters in set from the right of a string. If set is
not specified, spaces are deleted by default.
RTRIM Example
---------------
<=====>GAUSSDB
1 rows fetched.
● If set is not specified, spaces are deleted from the rightmost part of the
GAUSSDB string.
SELECT RTRIM(' GAUSSDB ') "RTRIM Example" FROM SYS_DUMMY;
RTRIM Example
-------------
GAUSSDB
1 rows fetched.
● Delete the D and B letters from the rightmost of the GAUSSDB string.
SELECT RTRIM('GAUSSDB', 'DB') "RTRIM Example" FROM SYS_DUMMY;
RTRIM Example
---------------
GAUSS
1 rows fetched.
SPACE
Syntax:
SPACE(n)
Purpose: Returns a string consisting of n space characters.
Example:
SELECT CONCAT('TOTAL NUMBER:',SPACE(1),'59') FROM SYS_DUMMY;
CONCAT('TOTAL NUMBER:',SPACE(1),'59')
----------------------------------------------------------------
TOTAL NUMBER: 59
1 rows fetched.
SUBSTR
Syntax:
SUBSTR(str, start[,len])
SUBSTR(str FROM start [FOR len])
Purpose: Truncates a string. This function is used to truncate and return the
substring of len characters starting from |start| in str.
● The start parameter indicates the index position. The value is positive,
indicating the position of the |start| character from left to right. The value is
negative indicating the position of the |start| character from right to left.
● When start is 0, the index position is the first character from left to right.
● len indicates the number of characters truncated from the index position to
the right. When len is less than or equal to 0, the return value of the SUBSTR
function is empty. len is an optional parameter. If this parameter is omitted,
all characters starting from the index position to the end of str are returned.
● The input parameter str must be an expression that can be converted to a
STRING value. The start and len parameters must be expressions that can be
converted to INT values.
● The return type is STRING.
Example:
● Truncate a string of 6 characters starting from the fifth character from left to
right.
SELECT SUBSTR('Quadratically',5,6) EXMAPLE1, SUBSTR('Quadratically' FROM 5 FOR 6) EXAMPLE2
FROM SYS_DUMMY;
EXMAPLE1 EXAMPLE2
-------- --------
ratica ratica
1 rows fetched.
● Truncate a string of 0 characters from left to right starting from the fifth
character. The return value is empty.
SELECT SUBSTR('Quadratically',5,0) EXMAPLE1, SUBSTR('Quadratically' FROM 5 FOR 0) EXAMPLE2
FROM SYS_DUMMY;
EXMAPLE1 EXAMPLE2
-------- --------
1 rows fetched.
● Truncates a string from right to left starting from the fifth character. len is
omitted. All characters starting from the index position to the end of str are
returned.
SELECT SUBSTR('Quadratically',-5) EXMAPLE1, SUBSTR('Quadratically' FROM -5) EXAMPLE2 FROM
SYS_DUMMY;
EXMAPLE1 EXAMPLE2
-------- --------
cally cally
1 rows fetched.
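For illustration, the start/len rules above can be sketched with Python slicing. The substr function below is a hypothetical character-based re-implementation, not GaussDB code:

```python
def substr(s, start, length=None):
    """Character-based substring with 1-based, possibly negative start."""
    if start < 0:
        begin = len(s) + start        # count |start| from the right
    else:
        begin = max(start - 1, 0)     # start 0 behaves like start 1
    if length is None:
        return s[begin:]              # len omitted: take to end of str
    if length <= 0:
        return ''                     # non-positive len yields empty
    return s[begin:begin + length]

print(substr('Quadratically', 5, 6))  # ratica
print(substr('Quadratically', 5, 0))  # '' (empty)
print(substr('Quadratically', -5))    # cally
```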
SUBSTRB
Syntax:
SUBSTRB(str, start[,len])
Purpose: Truncates a string. This function is used to truncate and return the
substring of len bytes starting from |start| in str.
● The start parameter indicates the index position. The value is positive,
indicating the position of the |start| byte from left to right. The value is
negative indicating the position of the |start| byte from right to left.
● When start is 0, the index position is the first byte from left to right.
● len indicates the number of bytes truncated from the index position to the
right. When len is less than or equal to 0, the return value of the SUBSTRB
function is empty. len is an optional parameter. If this parameter is omitted,
all bytes starting from the index position to the end of str are returned.
● The input parameter str must be an expression that can be converted to a
STRING value. The start and len parameters must be expressions that can be
converted to INT values. The return type is STRING.
Example:
● Truncate a string of 6 bytes starting from the fifth byte from left to right.
SELECT SUBSTRB('Quadratically',5,6) EXMAPLE FROM SYS_DUMMY;
EXMAPLE
--------
ratica
1 rows fetched.
● Truncate a string of 0 bytes from left to right starting from the fifth byte. The
return value is empty.
SELECT SUBSTRB('Quadratically',5,0) EXMAPLE FROM SYS_DUMMY;
EXMAPLE
-------
1 rows fetched.
● Truncates a string from right to left starting from the fifth byte. len is
omitted. All bytes starting from the index position to the end of str are
returned.
SELECT SUBSTRB('Quadratically',-5) EXMAPLE FROM SYS_DUMMY;
EXMAPLE
-------
cally
1 rows fetched.
SUBSTRING
Syntax:
SUBSTRING(str, start[,len])
SUBSTRING(str FROM start [FOR len])
Purpose: Truncates a string. SUBSTRING has the same function as SUBSTR and is
used to truncate and return the substring of len characters starting from |start| in
str.
● The start parameter indicates the index position. The value is positive,
indicating the position of the |start| character from left to right. The value is
negative, indicating the position of the |start| character from right to left.
● When start is 0, the index position is the first character from left to right.
● len indicates the number of characters truncated from the index position to
the right. When len is less than or equal to 0, the return value of the
SUBSTRING function is empty. len is an optional parameter. If this parameter
is omitted, all characters starting from the index position to the end of str are
returned.
● The input parameter str must be an expression that can be converted to a
STRING value. The start and len parameters must be expressions that can be
converted to INT values. The return type is STRING.
Example:
● Truncate a string of 6 characters starting from the fifth character from left to
right.
SELECT SUBSTRING('Quadratically',5,6) EXMAPLE1, SUBSTRING('Quadratically' FROM 5 FOR 6)
EXAMPLE2 FROM SYS_DUMMY;
EXMAPLE1 EXAMPLE2
-------- --------
ratica ratica
1 rows fetched.
● Truncate a string of 0 characters from left to right starting from the fifth
character. The return value is empty.
SELECT SUBSTRING('Quadratically',5,0) EXMAPLE1, SUBSTRING('Quadratically' FROM 5 FOR 0)
EXAMPLE2 FROM SYS_DUMMY;
EXMAPLE1 EXAMPLE2
-------- --------
1 rows fetched.
● Truncates a string from right to left starting from the fifth character. len is
omitted. All characters starting from the index position to the end of str are
returned.
SELECT SUBSTRING('Quadratically',-5) EXMAPLE1, SUBSTRING('Quadratically' FROM -5) EXAMPLE2
FROM SYS_DUMMY;
EXMAPLE1 EXAMPLE2
-------- --------
cally cally
1 rows fetched.
SUBSTRING_INDEX
Syntax:
SUBSTRING_INDEX(str,delim,count)
Purpose: Returns the substring of str that precedes the countth occurrence of the
delimiter delim.
Example:
SELECT SUBSTRING_INDEX('192.168.0.1','.',1);
SUBSTRING_INDEX('192.168.0.1','.',1)
----------------------------------
192
1 rows fetched.
TO_NCHAR
Syntax:
TO_NCHAR(text_exp)
TO_NCHAR(datetime_exp[, datetime_fmt])
TRIM
Syntax:
TRIM ( [ LEADING | TRAILING | BOTH ] [ set ] [ FROM ] str )
Purpose: Deletes spaces or other predefined characters from the input string in
the specified direction. This function can be used to format the output of a query.
● TRIM has the following values:
– LEADING: Data is deleted from the beginning of a string.
– TRAILING: Data is deleted from the end of a string.
– BOTH: Data is deleted from both ends. If LEADING, TRAILING, and
BOTH are not specified, characters are deleted from both ends by default.
● The set parameter indicates a character set. If any character in the character
set is contained in the beginning or end of str, the trim operation is
performed. If set is not specified, spaces are deleted by default.
● The input parameter str is an expression that can be converted to a STRING
value, and set is a character of the SQL syntax. The return type is STRING.
Note:
● This function can also be invoked in the form of a common function
parameter. The invoking method is TRIM(str [, set]). When this method is
used, characters in set are deleted from both ends of str.
Example:
1 rows fetched.
UPPER
Syntax:
UPPER(str)
Purpose: Converts all lowercase letters in str to uppercase.
The input parameter is an expression that can be converted to a string. The return
type is STRING.
Example:
UPPER
----------------------------------------------------------------
ΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩ
1 rows fetched.
Upper
------------------
ABCDEFG
1 rows fetched.
ADD_MONTHS
Syntax:
ADD_MONTHS(date, n)
ADD_MONTHS(datetime_string, n)
Purpose: Returns the value of date or datetime_string plus (n>0) or minus (n<0)
n months.
● The first parameter can be the DATE or TIMESTAMP type or a date string that
complies with NLS_DATE_FORMAT.
● The second parameter is an integer of the Int32 type. The accepted input
value is of the numeric type or can be converted to a numeric string. If the
input value is a floating point number, the value will be converted to Int32
integer by discarding the decimal part; if the value is out of the range
specified by Int32, an error is reported.
Note:
● If date is the last day of the month, then the result is the last day of the
resulting month. Example:
SELECT ADD_MONTHS(to_date('2016-02-29','yyyy-mm-dd'),1) from SYS_DUMMY;
ADD_MONTHS(TO_DATE('2016-02-29','YYYY-MM-DD'),1)
------------------------------------------------
2016-03-31 00:00:00
1 rows fetched.
● If the resulting month has fewer days than the day component of date, the
result is adjusted to the last day of the resulting month. Example:
SELECT ADD_MONTHS(to_date('2016-01-30','yyyy-mm-dd'),1) from SYS_DUMMY;
ADD_MONTHS(TO_DATE('2016-01-30','YYYY-MM-DD'),1)
------------------------------------------------
2016-02-29 00:00:00
1 rows fetched.
Example:
SELECT ADD_MONTHS(TO_DATE('2018-03-02','YYYY-MM-DD'),1) FROM SYS_DUMMY;
ADD_MONTHS(TO_DATE('2018-03-02','YYYY-MM-DD'),1)
------------------------------------------------
2018-04-02 00:00:00
1 rows fetched.
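The two end-of-month rules above can be sketched in Python with the standard calendar module. The add_months helper below is hypothetical (not GaussDB code) and mirrors the behavior shown in the examples:

```python
import calendar
from datetime import date

def add_months(d, n):
    """Shift d by n months, preserving end-of-month semantics."""
    year, month0 = divmod(d.month - 1 + n, 12)
    year += d.year
    month = month0 + 1
    last_src = calendar.monthrange(d.year, d.month)[1]
    last_dst = calendar.monthrange(year, month)[1]
    # Last day in -> last day out; otherwise clamp to the month's end.
    day = last_dst if d.day == last_src else min(d.day, last_dst)
    return date(year, month, day)

print(add_months(date(2016, 2, 29), 1))  # 2016-03-31
print(add_months(date(2016, 1, 30), 1))  # 2016-02-29
print(add_months(date(2018, 3, 2), 1))   # 2018-04-02
```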
CURRENT_TIMESTAMP
Syntax:
CURRENT_TIMESTAMP(fractional_second_precision)
Purpose: Obtains the current system time and time zone. The return value is of
the TIMESTAMP WITH TIME ZONE type, and its time zone is the session time
zone (sessiontimezone).
Example:
-- Obtain the current system time and time zone.
SELECT CURRENT_TIMESTAMP() FROM SYS_DUMMY;
CURRENT_TIMESTAMP()
----------------------------------------
2019-04-12 16:53:37.160018 +08:00
1 rows fetched.
-- Obtain the current system time, with the precision of the decimal digits after the second set to 4.
SELECT CURRENT_TIMESTAMP(4) FROM SYS_DUMMY;
CURRENT_TIMESTAMP(4)
----------------------------------------
2019-04-12 17:18:37.3949 +08:00
1 rows fetched.
-- Modify the current time zone.
ALTER SESSION SET TIME_ZONE = '+6:00';
Succeed.
-- Obtain the current time and time zone.
SELECT CURRENT_TIMESTAMP () FROM SYS_DUMMY;
CURRENT_TIMESTAMP ()
----------------------------------------
2019-04-12 15:45:26.050131 +06:00
1 rows fetched.
-- Obtain the current time and time zone, with the precision of the decimal digits after the second set to 4.
SELECT CURRENT_TIMESTAMP(4) FROM SYS_DUMMY;
CURRENT_TIMESTAMP(4)
----------------------------------------
2019-04-12 16:47:22.9578 +06:00
1 rows fetched.
EXTRACT
Syntax:
EXTRACT(field FROM datetime)
Purpose: Extracts the value of a specified time field (field) from the specified date
(datetime).
field can be YEAR, MONTH, DAY, HOUR, MINUTE, or SECOND. The return type
is NUMBER.
Note:
● If field is SECOND, the return value is a floating point number, in which the
integer part is second and the decimal part is microsecond.
● Any numeric data type, or any non-numeric data type that can be implicitly
converted to a numeric data type, can be used as a parameter. This function
returns the same data type as that of the parameter.
Example:
Extract a month from a specified date.
SELECT EXTRACT (MONTH from date '2018-10-04');
--------------------------------------
10
1 rows fetched.
FROM_UNIXTIME
Syntax:
FROM_UNIXTIME(unix_timestamp)
FROM_UNIXTIME(unix_timestamp,format)
Purpose: Returns the datetime based on the Unix timestamp in GaussDB 100.
Note:
● FROM_UNIXTIME(unix_timestamp)
Input parameter is a BIGINT number. Numeric strings are supported. Without
format strings, the default format is YYYY-MM-DD HH:MM:SS. There are 6
decimal places behind SS.
● FROM_UNIXTIME(unix_timestamp,format)
Input parameter is date in the specified format, which is case insensitive. The
supported formats are as follows:
– %Y: four-digit year.
– %D: a date in a month. The English suffix is not supported.
– %M: English name of a month.
– %h: hour, in 24-hour format.
– %i: minute.
– %s: second.
– %x: four-digit year (Monday is the first day of each week.).
Example:
Return the date corresponding to the time stamp 1111885200.
SELECT FROM_UNIXTIME(1111885200);
FROM_UNIXTIME(1111885200)
--------------------------------
2005-03-27 09:00:00.000000
1 rows fetched.
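For a cross-check, the same conversion can be done with Python's datetime module; a +08:00 offset is assumed here because the documented result appears to be in GMT+8:

```python
from datetime import datetime, timezone, timedelta

beijing = timezone(timedelta(hours=8))  # assumed session time zone
print(datetime.fromtimestamp(1111885200, tz=beijing))
# 2005-03-27 09:00:00+08:00
```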
GETUTCDATE
Syntax:
GETUTCDATE()
Purpose: Returns the current date and time in UTC (Coordinated Universal Time).
Note:
● By default, the return format is as follows: YYYY-MM-DD HH:MM:SS
TIMEZONE. There are six decimals after the second SS, and the value of
TIMEZONE is in the TZH:TZM format.
● If the GETUTCDATE() function has no input parameter, the return value type
is timestamp with time zone, which contains the time zone information.
Example:
Assume that the current Beijing time (GMT+8) is 2019-04-09 17:11:01. Query for
the current UTC time.
SELECT GETUTCDATE();
GETUTCDATE()
----------------------------------------
2019-04-09 09:11:01.838655 +00:00
1 rows fetched.
MONTHS_BETWEEN
Syntax:
MONTHS_BETWEEN(date1,date2)
Purpose: Enables GaussDB 100 to calculate the month differences between two
dates (date1 and date2).
Note:
● If date1 and date2 fall on the same day of the month, or are both the last
days of their respective months, the return value is an integer. Otherwise, the
return value contains a decimal part, which equals the remaining number of
days divided by 31.
● If date1 is greater than date2, the return value is a positive number. If date1
is smaller than date2, the return value is a negative number.
● The input parameter is of the DATE or TIMESTAMP type. The return value is
of the NUMBER type.
Example:
Calculate the months between two dates.
SELECT MONTHS_BETWEEN
(TO_DATE('10-12-2018','MM-DD-YYYY'),
TO_DATE('07-25-2018','MM-DD-YYYY') ) "Months"
FROM SYS_DUMMY;
Months
----------------------------------------
2.58064516129032258064516129032258064516
1 rows fetched.
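The rules above (an integer result for matching days or two month-ends, otherwise a day fraction over 31) can be sketched in Python. The months_between helper below is hypothetical, covers dates only (no time-of-day part), and reproduces the documented example:

```python
import calendar
from datetime import date

def months_between(d1, d2):
    """Month difference per the rules above (date-only sketch)."""
    months = (d1.year - d2.year) * 12 + (d1.month - d2.month)
    last1 = d1.day == calendar.monthrange(d1.year, d1.month)[1]
    last2 = d2.day == calendar.monthrange(d2.year, d2.month)[1]
    if d1.day == d2.day or (last1 and last2):
        return float(months)
    return months + (d1.day - d2.day) / 31

print(months_between(date(2018, 10, 12), date(2018, 7, 25)))
# about 2.5806, matching the documented 2.58064516...
```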
NOW
Syntax:
now(fractional_second_precision)
Purpose: Obtains the current system time. The fractional_second_precision
parameter specifies the number of decimal digits retained after the second. The
value ranges from 0 to 6, and the value must be an integer constant. Parameters
are optional for now(prec). That is, you can use now(). In this case, the default
parameter value 6 is used.
Example:
Return the current system time. The value is accurate to six digits after the second.
SELECT NOW() FROM SYS_DUMMY;
NOW()
----------------------------------------
2019-04-12 18:49:33.060337 +08:00
1 rows fetched.
SLEEP
Syntax:
SLEEP(n_second)
Purpose: Suspends execution for a specified period, in seconds.
Examples:
Pause execution for 3 seconds.
Example 1:
SELECT SLEEP(3) FROM SYS_DUMMY;
SLEEP(3)
--------
1 rows fetched.
Example 2: Query the system time immediately before and after SLEEP(3). The
two timestamps differ by 3 seconds:
2019-04-23 14:49:20.187932 +08:00
2019-04-23 14:49:23.187932 +08:00
SYSTIMESTAMP
Syntax:
SYSTIMESTAMP
Purpose: Returns the current timestamp. The return value type is timestamp with
time zone, and the return format is the same as that of
NLS_TIMESTAMP_TZ_FORMAT.
Note:
Examples:
SELECT SYSTIMESTAMP FROM SYS_DUMMY;
SYSTIMESTAMP
----------------------------------------
2019-04-23 14:50:35.175553 +08:00
1 rows fetched.
TIMESTAMPADD
Syntax:
TIMESTAMPADD(unit, interval,datetime)
Purpose: Adds interval units of time (specified by unit) to the date or time
expression datetime and returns the result.
Example:
Return the time two weeks after the specified date.
SELECT TIMESTAMPADD(WEEK,2,'2018-10-04');
TIMESTAMPADD(WEEK,2,'2018-10-04')
---------------------------------
2018-10-18 00:00:00.000000
1 rows fetched.
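For fixed-length units such as WEEK, the addition above is equivalent to a timedelta shift in Python, which can be used as a cross-check:

```python
from datetime import datetime, timedelta

start = datetime(2018, 10, 4)
print(start + timedelta(weeks=2))  # 2018-10-18 00:00:00
```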
TIMESTAMPDIFF
Syntax:
TIMESTAMPDIFF(unit,begin,end)
Purpose: Returns the interval between the dates specified by begin and end. The
unit of the interval is specified by unit. The return value type is NUMBER.
Note:
● The data type of begin and end can be DATE or TIMESTAMP, that is, a date or
time expression. Value range: [0001-01-01 00:00:00, 9999-12-31 23:59:59]
● unit specifies the time interval. The value can be YEAR, QUARTER, MONTH,
WEEK, DAY, HOUR, MINUTE, SECOND, MICROSECOND, SQL_TSI_DAY,
SQL_TSI_FRAC_SECOND, SQL_TSI_HOUR, SQL_TSI_MINUTE, SQL_TSI_MONTH,
SQL_TSI_QUARTER, SQL_TSI_SECOND, SQL_TSI_WEEK, or SQL_TSI_YEAR.
TRUNC
Syntax:
TRUNC(date[,fmt])
Purpose: Returns date truncated to the unit specified by the format model fmt. If
fmt is omitted, date is truncated to the nearest day.
Example:
SELECT TRUNC(SYSDATE,'YY') FROM SYS_DUMMY;
TRUNC(SYSDATE,'YY')
----------------------
2018-01-01 00:00:00
1 rows fetched.
UNIX_TIMESTAMP
Syntax:
UNIX_TIMESTAMP()
UNIX_TIMESTAMP(datetime)
UNIX_TIMESTAMP(datetime_string)
Purpose: Obtains the Unix timestamp in GaussDB 100, that is, the number of
seconds from 1970-01-01 00:00:00 UTC to the specified time.
The syntax format of this function is as follows:
● unix_timestamp(): If the input parameter is not specified, the Unix timestamp
of the current time is obtained.
● unix_timestamp(datetime): If the input parameter is of the Datetime type, the
Unix timestamp of that time is obtained.
● unix_timestamp(datetime_string): If the input parameter is a time string, the
Unix timestamp of that time is obtained. The string must comply with the
common time format. The default format is YYYY-MM-DD.
SELECT UNIX_TIMESTAMP('2015-11-13 10:20:19') FROM SYS_DUMMY;
UNIX_TIMESTAMP('2015-11-13 10:20:19')
-------------------------------------
1447381219
1 rows fetched.
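Sketches for the other two forms of the function (the result of the no-argument
form varies with the current time, so no output is shown):

```sql
-- No argument: Unix timestamp of the current time.
SELECT UNIX_TIMESTAMP() FROM SYS_DUMMY;
-- Datetime-typed argument.
SELECT UNIX_TIMESTAMP(SYSDATE) FROM SYS_DUMMY;
```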
NUMTODSINTERVAL
Syntax:
NUMTODSINTERVAL(num, 'interval_unit')
Purpose: Takes a numeric value and an interval unit and outputs a value of the
INTERVAL DAY TO SECOND type.
The num parameter can be:
● A value type, such as integer, large integer, floating point number, or high-
precision NUMBER
● An expression that can be implicitly converted to a numeric value
For details about the input and output relationships, see Table 1.
Input                                                 Output
numtodsinterval(3.1425926535897932384626, 'DAY')      +0000003 03:25:20.005270
numtodsinterval(3.1425926535897932384626, 'MINUTE')   +0000000 00:03:08.555559
numtodsinterval(999999999.99999, 'second')            +0011574 01:46:39.999990
Note: The interval_unit parameter is a string that indicates the INTERVAL field
corresponding to num. For the NUMTODSINTERVAL function, interval_unit can be
DAY, HOUR, MINUTE, or SECOND. Note that interval_unit is case-insensitive and
that spaces at its beginning and end are ignored.
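A minimal example sketch, in the pattern of the NUMTOYMINTERVAL example
below (the unit and value are chosen for illustration):

```sql
-- Return the time one day later than the current time.
SELECT SYSDATE + NUMTODSINTERVAL(1, 'DAY') FROM SYS_DUMMY;
```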
NUMTOYMINTERVAL
Syntax:
NUMTOYMINTERVAL(num, 'interval_unit')
Purpose: Takes a numeric value and an interval unit and outputs a value of the
INTERVAL YEAR TO MONTH type.
Note:
● The interval_unit parameter is a string that indicates the INTERVAL field
corresponding to num. For the NUMTOYMINTERVAL function, interval_unit
can be YEAR or MONTH. Note that interval_unit is case-insensitive and that
spaces at its beginning and end are ignored.
● The num parameter can be:
– A value type, such as integer, large integer, floating point number, or
high-precision NUMBER
– An expression that can be implicitly converted to a numeric value
For details about the input and output relationships, see Table 2.
Input                                               Output
numtoyminterval(9999.9, 'year')                     +9999-11
numtoyminterval(99999.9, 'month')                   +8333-04
numtoyminterval(+3.1425926535897932384626, 'year')  +0003-02
Example:
Return the time that is one month later than the current time.
SELECT SYSDATE + NUMTOYMINTERVAL(1, 'MONTH') from SYS_DUMMY;
1 rows fetched.
TO_DSINTERVAL
Syntax:
TO_DSINTERVAL(str_exp)
Purpose: Enters an INTERVAL string and outputs the value of INTERVAL DAY TO
SECOND type.
The result represents the day (DAY) and time (hour, minute, second, and
microsecond) parts of the interval, allowing a time span to be described precisely.
Note:
● The ISO format of TO_DSINTERVAL must start from DAY, and YEAR or
MONTH cannot be specified. Otherwise, a syntax error is reported.
● Spaces between format elements are allowed in the SQL interval format, but
not allowed in the ISO interval format.
● The ISO format of TO_DSINTERVAL must comply with the ISO format of the
TO_YMINTERVAL function.
The value of each field is shown in Table 3.
Input                      Output
'9999999 23:59:59.999999'  +9999999 23:59:59.999999
Example:
Return the IDs and names of employees who have worked for the company for
180 days by December 31, 2018.
--Delete the employee table.
DROP TABLE IF EXISTS employee;
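A sketch of the remaining steps, reusing the employee table pattern from the
TO_YMINTERVAL example below (the data values are assumptions consistent with
the reported three-row result):

```sql
-- Create the employee table.
CREATE TABLE employee(employee_id INT NOT NULL,first_name VARCHAR(10),last_name VARCHAR(10),
hire_date DATETIME);
-- Insert several data records (values assumed).
INSERT INTO employee(employee_id,first_name,last_name,hire_date)
VALUES(1001,'Alice','BROWN','2017-06-20 12:00:00');
INSERT INTO employee(employee_id,first_name,last_name,hire_date)
VALUES(1002,'BOB','Smith','2017-10-20 12:00:00');
INSERT INTO employee(employee_id,first_name,last_name,hire_date)
VALUES(1003,'ALAN','Jones','2017-05-10 12:00:00');
-- Commit the transaction.
COMMIT;
-- Query for the employees whose hire date plus 180 days falls on or before December 31, 2018.
SELECT employee_id, first_name, last_name FROM employee
WHERE hire_date + TO_DSINTERVAL('180 00:00:00') <= DATE '2018-12-31' ORDER BY employee_id;
```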
3 rows fetched.
TO_YMINTERVAL
Syntax:
TO_YMINTERVAL(str_exp)
Purpose: Enters an INTERVAL string and outputs the value of INTERVAL YEAR TO
MONTH type.
The result represents the year (YEAR) and month (MONTH) parts of the interval,
that is, how many years and months the interval contains.
TO_YMINTERVAL supports two text formats:
● SQL time interval format, compatible with the SQL standard (ISO/IEC
9075:2003)
● ISO time interval format, compatible with the ISO 8601:2004 standard
For details about the value ranges of different domains in the two standards,
see Table 5.
Input       Output
'P01Y02M'   +01-02
'P24M'      +02-00
'-P0Y123M'  -10-03
'1233-0'    +1233-00
Note:
● The value of each field should be a non-negative integer.
● In the TO_YMINTERVAL function, even if the values of fields such as DAY,
HOUR, and MINUTE are specified, they are still ignored. The
TO_YMINTERVAL function focuses only on the values specified in the YEAR
and MONTH fields.
● The values of the fields in ISO format must be specified in the required
sequence. For example, the YEAR field cannot follow the MONTH field.
● If the time field indicator T is specified, it should be followed by at least one
time field.
● If the FRAC_SEC field is specified, the SECOND field must be specified, and no
spaces are allowed between the fields.
Example 1:
Return the time two years and five months later than the current time.
SELECT (SYSDATE) + TO_YMINTERVAL('02-05') from SYS_DUMMY;
(SYSDATE) + TO_YMINTERVAL('02-05')
----------------------------------
2021-08-13 15:45:53
1 rows fetched.
Example 2:
Return the IDs and names of employees who have worked for the company for
one year and three months by December 31, 2018.
--Create the employee table.
CREATE TABLE employee(employee_id INT NOT NULL,first_name VARCHAR(10),last_name VARCHAR(10),
hire_date DATETIME);
-- Insert several data records.
INSERT INTO employee(employee_id,first_name,last_name,hire_date)
VALUES(1001,'Alice','BROWN','2017-06-20 12:00:00');
INSERT INTO employee(employee_id,first_name,last_name,hire_date)
VALUES(1002,'BOB','Smith','2017-10-20 12:00:00');
INSERT INTO employee(employee_id,first_name,last_name,hire_date)
VALUES(1003,'ALAN','Jones','2017-05-10 12:00:00');
-- Commit the transaction.
COMMIT;
-- Query for the employees who have worked for the company for one year and three months by December
31, 2018.
SELECT employee_id, first_name, last_name FROM employee WHERE hire_date + TO_YMINTERVAL('01-03')
<= DATE '2018-12-31' ORDER BY employee_id;
2 rows fetched.
ASCII
Syntax:
ASCII(str)
Purpose: Returns the ASCII code corresponding to the first character of str.
Example:
Return the ASCII code corresponding to the first character (h) of the string
helloword.
SELECT ASCII('helloword') FROM SYS_DUMMY;
ASCII('HELLOWORD')
--------------------
104
1 rows fetched.
CAST
Syntax:
CAST(expr as datatype)
Purpose: Converts the column name or value expr to the specified data type
datatype. An expression can always be converted to its own type.
The CAST function can be used to convert data types in the following scenarios;
in other scenarios, an error is reported:
● The two data types can be implicitly converted to each other.
● The source data type can be explicitly converted to the target type.
Example:
Convert the string '10' to the int type.
SELECT CAST('10' AS INT) FROM SYS_DUMMY;
CAST('10' AS INT)
-----------------
10
1 rows fetched.
CHAR
Syntax:
CHAR(n)
Purpose: Returns the character whose ASCII code is n. The value range of n is
[0,127].
The input parameter is an expression that can be converted to a numeric value.
The character whose ASCII code is n is returned.
Example:
Return the character whose ASCII code is 67.
SELECT CHAR(67) FROM SYS_DUMMY;
CHAR(67)
--------
C
1 rows fetched.
CHR
Syntax:
CHR(n)
Purpose: Returns the character whose ASCII code is n. The value range of n is
[0,127].
The input parameter is an expression that can be converted to a numeric value.
The character whose ASCII code is n is returned.
Example:
Return the character whose ASCII code is 97.
SELECT CHR(97) FROM SYS_DUMMY;
CHR(97)
-------
a
1 rows fetched.
CONVERT
Syntax:
CONVERT(expr,data_type)
Purpose: Converts expr to the data type specified by data_type.
Example:
DATETIME
---------------------------------------------------------------
2018-06-28 13:14:15.000000
1 rows fetched.
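A query of the following form would produce output like that shown above (the
string literal is an assumption inferred from the result):

```sql
-- Convert a time string to the DATETIME type.
SELECT CONVERT('2018-06-28 13:14:15', DATETIME) FROM SYS_DUMMY;
```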
DECODE
Syntax:
DECODE(expr,{search,result} [,...] [default])
Purpose:
● Compares expr with each search one by one. If expr is equal to a search, the
corresponding result is returned.
● If no match is found, default is returned.
● If default is omitted, NULL is returned.
● Multiple pairs of search and result are separated with commas (,).
● The input parameters support the following data types: INTEGER UNSIGNED,
INT, BIGINT, REAL, STRING, NUMBER, DATE, TIMESTAMP, and BINARY.
● The return type is STRING.
Example:
Decode the value of staff_ID in the staffS_xian table.
STAFF_ID GROUP
---------------------------------------- -------
198 Group A
199 Group B
200 unknown
3 rows fetched.
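A query consistent with the output above (a sketch; it assumes the staffS_xian
table defined in the IF example below, with staff IDs 198, 199, and 200):

```sql
-- Map staff IDs to group labels; unmatched IDs fall back to 'unknown'.
SELECT staff_ID,
       DECODE(staff_ID, 198, 'Group A', 199, 'Group B', 'unknown') "GROUP"
FROM staffS_xian ORDER BY staff_ID;
```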
IF
Syntax:
IF(cond,exp1,exp2)
Purpose: Calculates based on the condition cond. If the condition is true, exp1 is
returned. Otherwise, exp2 is returned.
Example:
Return the employee ID and salary list. Display salaries below 4000 as they are;
for other salaries (including NULL), return secret.
-- Delete tables named staffS_xian.
DROP TABLE IF EXISTS staffS_xian;
-- Create the staffS_xian table.
CREATE TABLE staffS_xian
(
staff_ID NUMBER(6) not null,
NAME VARCHAR2(20),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
-- Insert record 1 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
Values (198, 'Wang Ying', 'wangying@126.com', '18095605632', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', NULL, 124, 50);
-- Insert record 2 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (199, 'He Kaiping', 'hekaipng02@126.com', '18095605532', to_date('13-01-2000', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, 124, 50);
-- Insert record 3 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (200, 'Li Rui', 'lirui03@126.com', '18095565632', to_date('17-09-1987', 'dd-mm-yyyy'), 'AD_ASST',
4400.00, 101, 10);
-- Insert record 4 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
Values (206, 'Li Ruiyun', 'liruiyun03@126.com', '18095565892', to_date('17-09-1988', 'dd-mm-yyyy'),
'AD_ASST', 3900.00, 101, 10);
-- Return the employee ID and salary list. Display salaries below 4000 as they are; otherwise return secret.
SELECT staff_ID, IF(SALARY < 4000, SALARY, 'secret') "SALARY" FROM staffS_xian WHERE staff_ID IS NOT
NULL ORDER BY staff_ID;
STAFF_ID SALARY
---------------------------------------- ----------------------------------------------------
198 secret
199 2600
200 secret
206 3900
4 rows fetched.
IFNULL
Syntax:
IFNULL(expr1, expr2)
Purpose:
● If expr1 is not NULL, expr1 is returned.
● If expr1 is NULL, expr2 is returned.
Example:
Return the employee ID and salary list. If the salary is NULL, unknown is
returned.
-- Delete tables named staffS_xian.
DROP TABLE IF EXISTS staffS_xian;
-- Create the staffS_xian table.
CREATE TABLE staffS_xian
(
staff_ID NUMBER(6) not null,
NAME VARCHAR2(20),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
-- Insert record 1 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
Values (198, 'Wang Ying', 'wangying@126.com', '18095605632', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', NULL, 124, 50);
-- Insert record 2 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (199, 'He Kaiping', 'hekaipng02@126.com', '18095605532', to_date('13-01-2000', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, 124, 50);
-- Insert record 3 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (200, 'Li Rui', 'lirui03@126.com', '18095565632', to_date('17-09-1987', 'dd-mm-yyyy'), 'AD_ASST',
4400.00, 101, 10);
-- Return the employee ID and salary list. If the salary is NULL, unknown is returned.
SELECT staff_ID, IFNULL(SALARY, 'unknown') "SALARY" FROM staffS_xian WHERE staff_ID IS NOT NULL
ORDER BY staff_ID;
STAFF_ID SALARY
---------------------------------------- ----------------------------------------------------
198 unknown
199 2600
200 4400
3 rows fetched.
NULLIF
Syntax:
NULLIF(expr1, expr2)
Purpose:
● If expr1 equals expr2, NULL is returned.
● If expr1 does not equal expr2, expr1 is returned.
Note:
● expr1 cannot be NULL. Otherwise, an error is reported during verification.
● expr1 and expr2 must be of the same data type. Otherwise, an error will be
reported during verification.
● expr1 and expr2 cannot be of the CLOB or BLOB type at the same time.
Example:
Return a list containing employee IDs and salaries. Display salaries not equal to
2600.00 as they are and the salary equal to 2600.00 as NULL.
-- Delete tables named staffS_xian.
DROP TABLE IF EXISTS staffS_xian;
-- Create the staffS_xian table.
CREATE TABLE staffS_xian
(
staff_ID NUMBER(6) not null,
NAME VARCHAR2(20),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
-- Insert record 1 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
Values (198, 'Wang Ying', 'wangying@126.com', '18095605632', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 1000.15, 124, 50);
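Records 2 and 3 and the query follow the pattern of the other staffS_xian
examples (a sketch; the salary values are inferred from the result set below):

```sql
-- Insert record 2 (salary 2600.00, which NULLIF maps to NULL).
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (199, 'He Kaiping', 'hekaipng02@126.com', '18095605532', to_date('13-01-2000', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, 124, 50);
-- Insert record 3 (salary 4400.00, displayed as is).
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (200, 'Li Rui', 'lirui03@126.com', '18095565632', to_date('17-09-1987', 'dd-mm-yyyy'), 'AD_ASST',
4400.00, 101, 10);
-- Display the salary equal to 2600.00 as NULL and all others as they are.
SELECT staff_ID, NULLIF(SALARY, 2600.00) "SALARY" FROM staffS_xian WHERE staff_ID IS NOT NULL
ORDER BY staff_ID;
```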
STAFF_ID SALARY
---------------------------------------- ----------------------------------------
198 1000.15
199
200 4400
3 rows fetched.
NVL
Syntax:
NVL(expr1, expr2)
Purpose:
● If expr1 is not NULL, expr1 is returned.
● If expr1 is NULL, expr2 is returned.
Example:
Return the employee ID and salary list. If the salary is NULL, 0 is returned.
-- Delete tables named staffS_xian.
DROP TABLE IF EXISTS staffS_xian;
-- Create the staffS_xian table.
CREATE TABLE staffS_xian
(
staff_ID NUMBER(6) not null,
NAME VARCHAR2(20),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
-- Insert record 1 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
Values (198, 'Wang Ying', 'wangying@126.com', '18095605632', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', NULL, 124, 50);
-- Insert record 2 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (199, 'He Kaiping', 'hekaipng02@126.com', '18095605532', to_date('13-01-2000', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, 124, 50);
-- Insert record 3 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (200, 'Li Rui', 'lirui03@126.com', '18095565632', to_date('17-09-1987', 'dd-mm-yyyy'), 'AD_ASST',
4400.00, 101, 10);
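The query that produces the output below (a sketch consistent with the result
set):

```sql
-- Return the employee ID and salary list, replacing NULL salaries with 0.
SELECT staff_ID, NVL(SALARY, 0) "SALARY" FROM staffS_xian WHERE staff_ID IS NOT NULL
ORDER BY staff_ID;
```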
STAFF_ID SALARY
---------------------------------------- ----------------------------------------
198 0
199 2600
200 4400
3 rows fetched.
NVL2
Syntax:
NVL2(expr1,expr2,expr3)
Purpose:
● If expr1 is not NULL, expr2 is returned.
● If expr1 is NULL, expr3 is returned.
Example:
Return the employee ID and salary list. If the salary is NULL, 0 is returned. If the
salary is not NULL, 1 is returned.
-- Delete tables named staffS_xian.
DROP TABLE IF EXISTS staffS_xian;
-- Create the staffS_xian table.
CREATE TABLE staffS_xian
(
staff_ID NUMBER(6) not null,
NAME VARCHAR2(20),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
-- Insert record 1 to the staffS_xian table.
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
Values (198, 'Wang Ying', 'wangying@126.com', '18095605632', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', NULL, 124, 50);
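The remaining records and the query follow the pattern of the other staffS_xian
examples (a sketch consistent with the output below):

```sql
-- Insert records 2 and 3 (values assumed from the sibling examples).
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (199, 'He Kaiping', 'hekaipng02@126.com', '18095605532', to_date('13-01-2000', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, 124, 50);
INSERT INTO staffS_xian (staff_ID, NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, employment_ID, SALARY,
MANAGER_ID, section_ID)
values (200, 'Li Rui', 'lirui03@126.com', '18095565632', to_date('17-09-1987', 'dd-mm-yyyy'), 'AD_ASST',
4400.00, 101, 10);
-- Return 0 for NULL salaries and 1 for non-NULL salaries.
SELECT staff_ID, NVL2(SALARY, 1, 0) "SALARY" FROM staffS_xian WHERE staff_ID IS NOT NULL
ORDER BY staff_ID;
```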
STAFF_ID SALARY
---------------------------------------- ------------
198 0
199 1
200 1
3 rows fetched.
TO_CHAR
Syntax:
TO_CHAR(expr[,fmt])
Note:
Currently, the function supports only the format control character of the date type
(date, timestamp).
Example:
● Specify the format control character and return the current system time.
SELECT to_char(sysdate, 'MON-YY-DD HH:MM:SS AM') FROM SYS_DUMMY;
TO_CHAR(SYSDATE, 'MON-YY-DD HH
-------------------------------
JAN-18-07 05:01:15 AM
1 rows fetched.
● Set the time format of the current session and return the current system time.
-- Set the time format through NLS_DATE_FORMAT.
ALTER SESSION SET NLS_DATE_FORMAT='YYYYMMDD HH24:MI:SS';
-- Query the system time.
SELECT TO_CHAR(sysdate) FROM SYS_DUMMY;
TO_CHAR(SYSDATE)
------------------------------------------------
20181207 15:12:24
1 rows fetched.
● Return the character form of the string '56698'.
SELECT TO_CHAR('56698') FROM SYS_DUMMY;
TO_CHAR('56698')
----------------
56698
1 rows fetched.
TO_CLOB
Syntax:
TO_CLOB(str)
Purpose: Converts the string str to a value of the CLOB type.
Example:
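A minimal usage sketch (the string literal is an assumption):

```sql
-- Convert a string to the CLOB type.
SELECT TO_CLOB('a long text value') FROM SYS_DUMMY;
```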
TO_DATE
Syntax:
TO_DATE(expr[,fmt])
Purpose: Converts the string expr to a value of the DATE type according to the
format fmt.
Example:
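A minimal usage sketch (the literal and format are assumptions):

```sql
-- Convert a string to the DATE type.
SELECT TO_DATE('2018-12-07','YYYY-MM-DD') FROM SYS_DUMMY;
```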
TO_NUMBER
Syntax:
TO_NUMBER(n[, fmt])
Purpose: Converts n to a value of the NUMBER type according to the format fmt.
Example 1:
Convert the hexadecimal string '123E500' to a number.
SELECT TO_NUMBER('123E500', 'XXXXXXX') FROM SYS_DUMMY;
TO_NUMBER('123E500', 'XXXXXXX')
----------------------------------------
19129600
1 rows fetched.
Example 2:
Convert the string '123.500' to a number in the format '000.0000'.
SELECT TO_NUMBER('123.500', '000.0000') FROM SYS_DUMMY;
TO_NUMBER('123.500', '000.0000')
----------------------------------------
123.5
1 rows fetched.
TO_TIMESTAMP
Syntax:
TO_TIMESTAMP(expr[,fmt])
Purpose: Converts the string expr to a value of the TIMESTAMP type according
to the format fmt.
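A minimal usage sketch (the literal and format are assumptions):

```sql
-- Convert a string to the TIMESTAMP type.
SELECT TO_TIMESTAMP('2019-04-23 14:50:35','YYYY-MM-DD HH24:MI:SS') FROM SYS_DUMMY;
```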
UNHEX
Syntax:
UNHEX(expr1)
Purpose: Converts the hexadecimal string expr1 back to the string it encodes.
Example:
SELECT UNHEX('746869732069732061207465737420737472') FROM SYS_DUMMY;
UNHEX('746869732069732061207465737420737472')
----------------------------------------------------------------
this is a test str
1 rows fetched.
AVG
Syntax:
AVG(expr)
Purpose: Returns the average value of expr.
Example:
AVG
----------------------------------------
116.333333333333333333333333333333333333
1 rows fetched.
COUNT
Syntax:
COUNT(expr)
Purpose: Returns the number of records by column. If this function is executed for
a single column, a not-NULL column value counts 1 and a NULL column value
counts 0. If this function is executed for all columns, that is, count(*), a record
counts 1 even if there are NULL values in the record.
Example:
COUNT
--------------------
3
1 rows fetched.
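A sketch illustrating the counting rule above (it assumes a staffS_xian table in
which one of three rows has a NULL SALARY):

```sql
-- COUNT(SALARY) skips the NULL salary; COUNT(*) counts every record.
SELECT COUNT(SALARY), COUNT(*) FROM staffS_xian;
-- With one NULL salary among three rows, COUNT(SALARY) is 2 and COUNT(*) is 3.
```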
MAX
Syntax:
MAX(expr)
Purpose: Returns the maximum value. It can be used for data in the numeric,
date, or character type. If it is used for the character type, letters are ordered from
Z to A. If it is used for the date type, the latest date will be returned.
Example:
Return the maximum value of section_ID in the staffS table.
CREATE TABLE staffS
(
staff_ID NUMBER(6) not null,
FIRST_NAME VARCHAR2(20),
LAST_NAME VARCHAR2(25),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
COMMISSION_PCT NUMBER(2,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
INSERT INTO staffs (staff_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE,
employment_ID, SALARY, COMMISSION_PCT, MANAGER_ID, section_ID)
VALUES (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, null, 124, 50);
SELECT MAX(section_ID) FROM staffS;
MAX
----------------------------------------
50
1 rows fetched.
MEDIAN
Syntax:
MEDIAN(expr)
Purpose: Returns the median. The median is the value separating the higher half
from the lower half of a sorted result set. If the query result set contains an even
number of records, the median is the mean of the middle two values.
Example:
MEDIAN
----------------------------------------
199
1 rows fetched.
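A query of this form would yield the output above (a sketch; the table is
assumed to hold the staff_ID values 198, 199, and 200):

```sql
-- The median of 198, 199, and 200 is 199.
SELECT MEDIAN(staff_ID) FROM staffS_xian;
```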
MIN
Syntax:
MIN(expr)
Purpose: Returns the minimum value. It can be used for data in the numeric, date,
or character type. If it is used for the character type, letters are ordered from A to
Z. If it is used for the date type, the earliest date will be returned.
Example:
Return the minimum value of staff_ID in the staffS table.
CREATE TABLE staffS
(
staff_ID NUMBER(6) not null,
FIRST_NAME VARCHAR2(20),
LAST_NAME VARCHAR2(25),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
COMMISSION_PCT NUMBER(2,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
INSERT INTO staffs (staff_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE,
employment_ID, SALARY, COMMISSION_PCT, MANAGER_ID, section_ID)
VALUES (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, null, 124, 50);
SELECT MIN(staff_ID) FROM staffS;
MIN
----------------------------------------
198
1 rows fetched.
SUM
Syntax:
SUM(expr)
Purpose: Returns the sum of expr.
Example:
Return the sum of salaries in the staffS table.
CREATE TABLE staffS
(
staff_ID NUMBER(6) not null,
FIRST_NAME VARCHAR2(20),
LAST_NAME VARCHAR2(25),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
COMMISSION_PCT NUMBER(2,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
INSERT INTO staffs (staff_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE,
employment_ID, SALARY, COMMISSION_PCT, MANAGER_ID, section_ID)
VALUES (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, null, 124, 50);
SELECT SUM(SALARY) FROM staffS;
SUM
----------------------------------------
9600
1 rows fetched.
STDDEV
Syntax:
STDDEV(expr)
Purpose: Returns the standard deviation of expr.
Example:
STDDEV
----------------------------------------
1039.23048454132637611646780490352342017
1 rows fetched.
STDDEV_SAMP
Syntax:
STDDEV_SAMP(expr)
Purpose: Returns the sample standard deviation of expr.
STDDEV_POP
Syntax:
STDDEV_POP(expr)
Purpose: Returns the population standard deviation of expr.
Example:
Return the overall standard deviation and sample standard deviation of the
salaries in the staffS table.
CREATE TABLE staffS
(
staff_ID NUMBER(6) not null,
FIRST_NAME VARCHAR2(20),
LAST_NAME VARCHAR2(25),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
COMMISSION_PCT NUMBER(2,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
);
INSERT INTO staffs (staff_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE,
employment_ID, SALARY, COMMISSION_PCT, MANAGER_ID, section_ID)
VALUES (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, null, 124, 50);
SELECT STDDEV_POP(SALARY) "Pop", STDDEV_SAMP(SALARY) "Samp" FROM staffS;
Pop Samp
---------------------------------------- ----------------------------------------
779.422863405994782087350853677642565124 900
1 rows fetched.
GROUP_CONCAT
Syntax:
GROUP_CONCAT([DISTINCT] expr1 [, expr2...] [ORDER BY {unsigned_integer | col_name | expr} [ASC |
DESC] [, col_name...]] [SEPARATOR str_val])
Purpose: Concatenates the non-NULL values of expr in each group into one
string. ORDER BY sorts the values before concatenation, and SEPARATOR
specifies the separator (a comma by default).
Example:
STAFF_ID GROUP_CONCAT(SALARY)
---------------------------------------- ----------------------------------------------------------------
200 4400
198 2600,2600
199 2600
3 rows fetched.
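A query of this form would produce grouped output like that shown above (a
sketch; the staffs table and its data are assumptions):

```sql
-- Concatenate the salaries recorded for each staff ID.
SELECT staff_ID, GROUP_CONCAT(SALARY) FROM staffs GROUP BY staff_ID;
```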
LAG
Syntax:
LAG(expr,n ,m) over([ partition by [expr1][ ,... ] ] [order by [expr2][ ,... ] [NULLS FIRST | LAST] ])
Purpose: Returns data in the nth row of the same column before the current
record. Data is returned as an independent column. If the required data does not
exist, the default value is returned.
Note:
● If there is no value in the expr column in the nth row before the current
record, the default value m is returned.
● If the function has neither the parameter n nor m, the value of the expr
column in the previous row of the current row is returned. If the value does
not exist, NULL is returned.
Examples:
Return the staff salary two months ago in the staffs table.
-- Delete the staffs table.
DROP TABLE IF EXISTS staffs;
-- Create the staffs table.
CREATE TABLE staffs
(
staff_id NUMBER(6) not null,
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
employment_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
section_id NUMBER(4),
graduated_name VARCHAR2(60)
);
-- Insert data.
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2200.00, null, 124, 50);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2400.00, null, 124, 50);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, null, 124, 50);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (199, 'Douglas', 'Grant', 'DGRANT', '650.507.9844', to_date('13-01-2000', 'dd-mm-yyyy'), 'SH_CLERK',
4000.00, null, 124, 50);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (199, 'Douglas', 'Grant', 'DGRANT', '650.507.9844', to_date('13-01-2000', 'dd-mm-yyyy'), 'SH_CLERK',
4400.00, null, 124, 50);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (105, 'David', 'Austin', 'DAUSTIN', '590.423.4569', to_date('25-06-1997', 'dd-mm-yyyy'), 'IT_PROG',
4400.00, null, 103, 60);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (105, 'David', 'Austin', 'DAUSTIN', '590.423.4569', to_date('25-06-1997', 'dd-mm-yyyy'), 'IT_PROG',
4600.00, null, 103, 60);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (105, 'David', 'Austin', 'DAUSTIN', '590.423.4569', to_date('25-06-1997', 'dd-mm-yyyy'), 'IT_PROG',
4800.00, null, 103, 60);
-- Return the staff salary two months ago.
select staff_ID, lag(salary, 2,null)over(partition by staff_ID order by staff_ID) from staffs;
STAFF_ID LAG(SALARY,2,NULL)
---------------------------------------- ------------
105
105
105 4400
198
198
198 2200
199
199
199 4000
9 rows fetched.
DBA_ANALYZE_TABLE
Syntax:
select * from table(dba_analyze_table(param1, param2));
Purpose: Returns the analysis result of the table, including the total number of
occupied pages, number of extents, total number of rows, number of row
connections, number of migrated rows, and average length of each row.
Note:
● param1: username string
● param2: table name string
● STAT_ITEM: names of the items involved in the analysis
● VALUE: value of the corresponding item
Example:
select * from table(dba_analyze_table('gaussdba', 'test_t1'));
STAT_ITEM VALUE
---------------------------------------------------------------- --------------------
total pages 8
total extents 1
total rows 8
linked rows 0
mirgated rows 0
average row size 133
6 rows fetched.
DBA_PROC_DECODE
Syntax:
select * from table(dba_proc_decode(param1, param2, param3));
Example:
select * from table(dba_proc_decode('gaussdba', 'GATHER_CHANGE_STATS', 'PROCEDURE'));
29 rows fetched.
DBA_PROC_LINE
Syntax:
select * from table(dba_proc_line(param1, param2));
Purpose: Returns the source code of the stored procedure. The result is displayed
in lines.
Note: This function allows users having the DBA role to query the procedures,
customized functions, and triggers of other users.
Example:
select * from table(dba_proc_line('gaussdba', 'GATHER_CHANGE_STATS'));
LOC_LINE SOURCE_LINE
------------ ----------------------------------------------------------------
1 CREATE OR REPLACE PROCEDURE GATHER_CHANGE_STATS (
2 estimate_percent NUMBER DEFAULT 30,
3 change_percent NUMBER DEFAULT 10,
4 force BOOLEAN DEFAULT TRUE
5 )
6 --force false: don't gather when cbo is disable
7 IS
8 cbo_enable VARCHAR(3);
9 BEGIN
10 --check cbo flag
11 IF force = FALSE THEN
12 SELECT VALUE INTO cbo_enable FROM SYS.DV_PARAMETERS WHERE NAME='CBO';
13 IF UPPER(cbo_enable) = 'OFF' THEN
14 RETURN;
15 END IF;
16 END IF;
17
18 --flush modification to table
19 DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO();
20
21 --gather the table changed
22 FOR ITEM IN (SELECT A.OWNER, A.TABLE_NAME
23 FROM DBA_TABLES A, DBA_TAB_MODIFICATIONS B
24 WHERE A.PARTITIONED = 0 AND A.OWNER = B.TABLE_OWNER AND
A.TABLE_NAME=B.TABLE_NAME
25 AND( A.NUM_ROWS is null or
26 ((NVL(B.INSERTS, 0) + NVL(B.UPDATES, 0) + NVL(B.DELETES, 0))>= (CHANGE_PERCENT *
A.NUM_ROWS/100))))
27 LOOP
28 BEGIN
29 DBMS_STATS.GATHER_TABLE_STATS(ITEM.OWNER, ITEM.TABLE_NAME, NULL,
estimate_percent);
30 EXCEPTION
31 WHEN OTHERS THEN
32 NULL;
33 END;
34 END LOOP;
35
36 --temp table without statistic will gather at the first time
37 FOR ITEM IN (SELECT OWNER, TABLE_NAME FROM DBA_TABLES WHERE PARTITIONED = 0
AND TABLE_TYPE <> 'HEAP' AND LAST_ANALYZED IS NULL) LOOP
38 BEGIN
39 DBMS_STATS.GATHER_TABLE_STATS(ITEM.OWNER, ITEM.TABLE_NAME, null,
estimate_percent);
40 EXCEPTION
41 WHEN OTHERS THEN
42 NULL;
43 END;
44 END LOOP;
45
45 rows fetched.
GET_TAB_PARALLEL
Syntax:
select * from table(get_tab_parallel(param1, param2))
4 rows fetched.
GET_TAB_ROWS
Syntax:
select * from table(get_tab_rows(param1, param2,param3,param4,param5,param6));
Purpose: Directly obtains original row records from the storage engine.
Note:
● param1: table name
● param2: table partition number. The value -1 is used for an ordinary table.
● param3: SCN ID
● param4: Matching condition. If no conditions are specified, the value NULL is
used.
● param5: start pageid
● param6: end pageid
Example:
select * from table(get_tab_rows('tab1', -1, 'NULL', 1641400608411649, 12884903244,4393751543808));
C1 C2 NAME
------------ ------------ ------------------------------
0 23454 12334546
1 23454 12334546
2 23454 12334546
3 23454 12334546
4 23454 12334546
5 23454 12334546
6 23454 12334546
7 23454 12334546
8 23454 12334546
9 23454 12334546
10 23454 12334546
11 23454 12334546
12 23454 12334546
13 23454 12334546
14 23454 12334546
15 23454 12334546
...
PARALLEL_SCAN
Syntax:
select * from table(parallel_scan(param1,param2,param3,param4,param5));
C1 C2 NAME
------------ ------------ ------------------------------
213 23454 12334546
214 23454 12334546
215 23454 12334546
216 23454 12334546
217 23454 12334546
218 23454 12334546
219 23454 12334546
220 23454 12334546
221 23454 12334546
222 23454 12334546
223 23454 12334546
0 23454 12334546
1 23454 12334546
2 23454 12334546
3 23454 12334546
4 23454 12334546
5 23454 12334546
6 23454 12334546
7 23454 12334546
...
BIN2HEX
Syntax:
BIN2HEX(expr)
Purpose: Converts data to a hexadecimal number (prefixed with 0x).
Example:
Convert the binary string 'A123' to a hexadecimal number.
SELECT BIN2HEX('A123') from SYS_DUMMY;
BIN2HEX('A123')
---------------
0x41313233
1 rows fetched.
CHAR_LENGTH
Syntax:
CHAR_LENGTH(str)
Purpose: Returns the number of characters in str.
Example:
Return the length of the string 'CHARACTER'.
SELECT CHAR_LENGTH('CHARACTER') FROM SYS_DUMMY;
CHAR_LENGTH('CHARACTER')
------------------------
9
1 rows fetched.
COALESCE
Syntax:
COALESCE ( expression, expression [ , ...] )
Purpose: Returns the first non-NULL expression in the list. If all expressions are
NULL, NULL is returned.
Example:
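A minimal example sketch (the literals are chosen for illustration):

```sql
-- Returns the first non-NULL argument, in this case the string 'abc'.
SELECT COALESCE(NULL, NULL, 'abc') FROM SYS_DUMMY;
```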
CONNECTION_ID
Syntax:
CONNECTION_ID()
Purpose: Returns the session ID of the current connection.
Note:
● The session IDs of all concurrent connections at the same time are different.
● Note that if a connection is disconnected and the corresponding session is
reused by a new connection, the corresponding session ID is also reused.
Example:
SELECT CONNECTION_ID() FROM SYS_DUMMY;
CONNECTION_ID()
---------------
48
1 rows fetched.
TYPE_ID2NAME
Syntax:
TYPE_ID2NAME(data_type_id)
Purpose: Returns the data type name corresponding to the data type ID.
Note:
● This function is a diagnosis function. If the data type ID does not exist,
UNKNOWN_TYPE is returned.
● data_type_id is the data type ID. The mapping is as follows.
data_type_id data_type_name
20001 BINARY_INTEGER
20002 BINARY_BIGINT
20003 BINARY_DOUBLE
20004 NUMBER
20005 DECIMAL
20006 DATE
20007 TIMESTAMP
20008 CHAR
20009 VARCHAR
20010 VARCHAR
20011 BINARY
20012 VARBINARY
20013 CLOB
20014 BLOB
20015 CURSOR
20016 COLUMN
20017 BOOLEAN
20018 TIMESTAMP_TZ
20019 TIMESTAMP_LTZ
20020 INTERVAL
20023 RAW
20024 IMAGE
20027 SMALLINT
20029 TINYINT
Example:
Return the data type name corresponding to the data type ID.
select TYPE_ID2NAME(20029);
TYPE_ID2NAME(20029)
----------------------------------------------------------------
TINYINT
1 rows fetched.
DECODE_NAME
Syntax:
DECODE_NAME(INDEX_NAME)
Purpose: Removes the OID part from the names of identical indexes or
constraints. This function is currently not enabled.
FOUND_ROWS
Syntax:
FOUND_ROWS()
Purpose: Used after a SELECT statement containing a LIMIT clause to obtain
the number of rows in the complete result set, that is, including the rows
filtered out by LIMIT.
Note:
● If FOUND_ROWS() is invoked after a SELECT statement that does not specify
SQL_CALC_FOUND_ROWS, the number of rows returned by FOUND_ROWS()
is the number of records in the result set returned by that SELECT statement.
The rows filtered out by the LIMIT clause are not included.
● For UNION, UNION ALL, or MINUS statements, FOUND_ROWS() affects only
the global LIMIT clause. The LIMIT statements in SELECT subsets are not
affected.
● If the preceding SELECT statement has not been executed or failed to
execute, or if an update statement has been executed since, FOUND_ROWS()
returns an undefined value, generally 0.
Example:
Return the number of employees whose staff_id is greater than 1.
-- Delete the employee table.
DROP TABLE IF EXISTS employee;
-- Create the employee table.
CREATE TABLE employee(staff_id INT NOT NULL, first_name VARCHAR(64));
-- Insert data.
INSERT INTO employee(staff_id,first_name) values ('1', 'Alice');
INSERT INTO employee(staff_id,first_name) values ('2', 'Jack');
INSERT INTO employee(staff_id,first_name) values ('3', 'Brown');
-- Commit the transaction.
COMMIT;
-- Query employees whose staff_id is greater than 1.
SELECT * FROM employee WHERE staff_id > 1;
STAFF_ID FIRST_NAME
------------ ----------------------------------------------------------------
2 Jack
3 Brown
2 rows fetched.
-- Return the number of rows that meet the condition.
SELECT FOUND_ROWS();
FOUND_ROWS()
--------------------
2
1 rows fetched.
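To contrast with the example above, a sketch assuming SQL_CALC_FOUND_ROWS is combined with LIMIT on the same employee table; FOUND_ROWS() then reports the count before LIMIT trimming:

```sql
-- With SQL_CALC_FOUND_ROWS, LIMIT trims the returned rows,
-- but FOUND_ROWS() still reports the full match count.
SELECT SQL_CALC_FOUND_ROWS * FROM employee WHERE staff_id > 0 LIMIT 2;
-- The query above returns 2 rows; FOUND_ROWS() is expected to report 3.
SELECT FOUND_ROWS();
```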
GET_DISTRIBUTE_STR
This function is used only in distributed scenarios. In a standalone scenario, the
returned result is empty.
GET_LOCK
Syntax:
GET_LOCK(name_expr [, timeout_expr])
Example:
SELECT GET_LOCK('STAFF_ID',5);
GET_LOCK('STAFF_ID',5)
--------------------
1
1 rows fetched.
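A minimal sketch pairing GET_LOCK with RELEASE_LOCK (described later in this section); the lock name my_lock is arbitrary:

```sql
-- Acquire a named lock, waiting up to 5 seconds, then release it
-- in the same session.
SELECT GET_LOCK('my_lock', 5);    -- 1 indicates the lock was acquired
SELECT RELEASE_LOCK('my_lock');   -- release the lock by name
```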
TRY_GET_LOCK
Syntax:
TRY_GET_LOCK(name_expr)
Example:
SELECT TRY_GET_LOCK('STAFF_ID');
TRY_GET_LOCK('STAFF_ID')
--------------------
TRUE
1 rows fetched.
GET_SHARED_LOCK
Syntax:
GET_SHARED_LOCK(name_expr [, timeout_expr])
Example:
Lock a column in the table.
-- Delete the employee table.
DROP TABLE IF EXISTS employee;
-- Create the employee table.
CREATE TABLE employee(staff_id INT NOT NULL, first_name VARCHAR(64));
-- Insert data.
INSERT INTO employee(staff_id,first_name) values ('1', 'Alice');
INSERT INTO employee(staff_id,first_name) values ('2', 'Jack');
INSERT INTO employee(staff_id,first_name) values ('3', 'Brown');
-- Commit the transaction.
COMMIT;
-- Lock the staff_id column.
SELECT GET_SHARED_LOCK('staff_id',5);
GET_SHARED_LOCK('STAFF_ID',5)
--------------------
TRUE
1 rows fetched.
GET_XACT_LOCK
Syntax:
GET_XACT_LOCK(name_expr)
Example:
SELECT GET_XACT_LOCK('STAFF_ID');
GET_XACT_LOCK('STAFF_ID')
--------------------
TRUE
1 rows fetched.
TRY_GET_XACT_LOCK
Syntax:
TRY_GET_XACT_LOCK(name_expr)
Example:
-- Insert data.
INSERT INTO employee(staff_id,first_name) values ('1', 'Alice');
INSERT INTO employee(staff_id,first_name) values ('2', 'Jack');
INSERT INTO employee(staff_id,first_name) values ('3', 'Brown');
-- Commit the transaction.
COMMIT;
-- Lock the staff_id column.
SELECT TRY_GET_XACT_LOCK('staff_id');
TRY_GET_XACT_LOCK('STAFF_ID')
--------------------
TRUE
1 rows fetched.
GET_XACT_SHARED_LOCK
Syntax:
GET_XACT_SHARED_LOCK(name_expr [, timeout_expr])
Example:
SELECT GET_XACT_SHARED_LOCK('STAFF_ID',5);
GET_XACT_SHARED_LOCK('STAFF_ID',5)
--------------------
TRUE
1 rows fetched.
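A sketch of a typical transaction-level lock flow. Assumption (suggested by the XACT prefix, not stated above): these locks are bound to the current transaction and released automatically on COMMIT or ROLLBACK, so no explicit release call appears:

```sql
SELECT GET_XACT_LOCK('my_lock');  -- bind the named lock to the transaction
-- ... statements protected by the lock ...
COMMIT;                           -- assumption: the lock is released here
```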
GREATEST
Syntax:
GREATEST( expr1 [, expr2, ... expr_n] )
Purpose: Returns the greatest value in the list of expressions.
Example 1:
SELECT GREATEST(2, 5, 12, 3);
GREATEST(2, 5, 12, 3)
---------------------
12
1 rows fetched.
Example 2:
SELECT GREATEST('2', '5', '12', '3');
1 rows fetched.
Example 3:
SELECT GREATEST('apples', 'oranges', 'bananas');
1 rows fetched.
Example 4:
SELECT GREATEST('apples', 'applis', 'applas');
GREATEST('APPLES', 'APPLIS', 'APPLAS')
--------------------------------------
applis
1 rows fetched.
ISNUMERIC
Syntax:
ISNUMERIC(str)
Purpose: Checks whether the input parameter str can be converted to a number. If
it can, 1 is returned. If it cannot, 0 is returned.
Note:
● The input parameter is a numeric string or a character string.
● The input parameter cannot be $.
Example 1:
SELECT ISNUMERIC('1' + 0) from SYS_DUMMY;
ISNUMERIC('1' + 0)
---------------------
1
1 rows fetched.
Example 2:
SELECT ISNUMERIC('a' || '1') from SYS_DUMMY;
ISNUMERIC('A' || '1')
---------------------
0
1 rows fetched.
LAST_INSERT_ID
Syntax:
LAST_INSERT_ID([expr])
Purpose:
1. If the parameter is empty, the function returns the value automatically
generated in the AUTO_INCREMENT column of the last INSERT statement in
the current session.
2. If the expr parameter is specified, the function returns the value of the
parameter and uses it as the next value returned by LAST_INSERT_ID().
Example:
Return the value automatically generated in the AUTO_INCREMENT column of
the last INSERT statement.
-- Delete the employee table.
DROP TABLE IF EXISTS employee;
-- Create the employee table.
CREATE TABLE employee(staff_id INT AUTO_INCREMENT NOT NULL PRIMARY KEY,first_name
VARCHAR(10));
-- Insert data.
INSERT INTO employee VALUES (NULL, 'Bob');
INSERT INTO employee VALUES (NULL, 'BROWN');
INSERT INTO employee VALUES (NULL, 'ALICE');
-- Return the value automatically generated in the AUTO_INCREMENT column of the last INSERT
statement.
SELECT LAST_INSERT_ID();
LAST_INSERT_ID()
--------------------
3
1 rows fetched.
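The expr form described in item 2 of the purpose can be sketched as follows; the value 100 is arbitrary:

```sql
-- Seed the value returned by the next LAST_INSERT_ID() call.
SELECT LAST_INSERT_ID(100);
SELECT LAST_INSERT_ID();  -- now returns 100
```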
LEAST
Syntax:
LEAST( expr1 [, expr2, ... expr_n] )
Purpose: Returns the smallest value in the list of expressions.
Example 1:
SELECT LEAST(2, 5, 12, 3);
LEAST(2, 5, 12, 3)
------------------
2
1 rows fetched.
Example 2:
SELECT LEAST('apples', 'oranges', 'bananas');
1 rows fetched.
Example 3:
SELECT LEAST('apples', 'applis', 'applas');
1 rows fetched.
MD5
Syntax:
md5([expr])
Purpose: The MD5 function computes the MD5 hash of the input parameter expr
and returns the hash as a 32-character hexadecimal string.
Example:
SELECT MD5('GAUSS');
MD5('GAUSS')
--------------------------------
710a4950250286365cf841f765a790f1
1 rows fetched.
-- Create the t_md5_test table.
create table t_md5_test(f1 int,f2 real,f3 blob,f4 numeric(4,1),f5 varchar(10));
Succeed.
-- Insert data into the t_md5_test table.
insert into t_md5_test values(2147483648-1,2.345,'100100111111',2.345,'aabbcc');
1 rows affected.
-- Encrypt data columns f1, f2, f3, f4, and f5 in the table.
select md5(f1),md5(f2),md5(f3),md5(f4),md5(f5) from t_md5_test;
MD5(F1) MD5(F2) MD5(F3) MD5(F4)
MD5(F5)
-------------------------------- -------------------------------- --------------------------------
-------------------------------- --------------------------------
c588c0a459f4ccc6f3dd26518d24707a 972da5c9c62440f43c8ad9c672e8bf36
3c96ce254c3e76d02e6959f19609c6dc 1a18da63cbbfb49cb9616e6bfd35f662
61a60170273e74a5be90355ffe8e86ad
1 rows fetched.
OBJECT_ID
Syntax:
OBJECT_ID(expr[, object_type [, object_owner]])
Purpose: Based on the specified database object name (the first parameter),
database object type, and object owner, returns the OBJECT_ID of the database
object that meets specified conditions in the USER_OBJECTS view. If no owners
are specified, the function searches for the database objects owned by the user of
the current session. If no required database objects are found, NULL is returned.
Note:
In the current version, the following database objects can be specified:
● TABLE (default)
● VIEW
● DYNAMIC VIEW
● PROCEDURE
● TRIGGER
● FUNCTION
In addition, because database objects in GaussDB 100 do not have globally
unique identifiers, the returned OBJECT_ID is not globally unique; it is unique
only within the specified database object type.
Example:
SELECT OBJECT_ID('EMPLOYEE','TABLE');
OBJECT_ID('EMPLOYEE','TABLE')
-----------------------------
2070
1 rows fetched.
RELEASE_LOCK
Syntax:
RELEASE_LOCK(name_expr)
Purpose: Releases the lock, specified by name, that was obtained in the current
session by the GET_LOCK() function.
Example:
-- Lock the staff_id column.
SELECT GET_LOCK('staff_id',5);
GET_LOCK('STAFF_ID',5)
----------------------
1
1 rows fetched.
-- Unlock the staff_id column.
SELECT RELEASE_LOCK('staff_id');
RELEASE_LOCK('STAFF_ID')
----------------------
1
1 rows fetched.
ROW_NUMBER() OVER
Syntax:
ROW_NUMBER() OVER (partition by expr order by expr)
Purpose: The function can be used only in a column list. It groups and sorts the
data returned by the query, and then numbers the rows within each group.
Example:
The salary level of each department is displayed according to the department
group.
-- Delete the employee table.
DROP TABLE IF EXISTS employee;
7 rows fetched.
SCN2DATE
Syntax:
SCN2DATE(scn)
Purpose: Converts an SCN value to a time value.
Note:
● The value of scn must be valid. You can obtain the value by querying related
views of the object.
Example:
Return the time value corresponding to the value of scn.
select scn2date(t.org_scn) from sys.SYS_TABLES t where t.name = 'TEST';
SCN2DATE(T.ORG_SCN)
----------------------
2019-01-10 20:35:38
1 rows fetched.
SERIAL_LASTVAL
Syntax:
SERIAL_LASTVAL('OWNER','TABLE_NAME')
Purpose: Returns the cache value of the auto-increment column of the table.
● OWNER is the owner of the table.
● TABLE_NAME is the table name.
Note:
● OWNER and TABLE_NAME must be uppercase and enclosed in single
quotation marks.
● If the table does not contain any auto-increment columns, the function
returns the following error:
GS-00866, the table has no auto increment column.
Example:
Returns the cache value of the AUTO_INCREMENT column of the employee
table.
-- Delete the employee table.
DROP TABLE IF EXISTS employee;
-- Create the employee table.
CREATE TABLE employee (staff_id INT AUTO_INCREMENT primary key ,section_id INT,max_salary
NUMBER(10,2)) AUTO_INCREMENT 1000;
-- Return the cache value of the auto-increment column in the employee table.
SELECT SERIAL_LASTVAL('SYS','EMPLOYEE');
SERIAL_LASTVAL('SYS','EMPLOYEE')
--------------------
1000
1 rows fetched.
SHA
Syntax:
SHA(str_expr)
Purpose: Generates a fixed-length hash value for the input parameter by using
the SHA algorithm and returns the hash value in string format (40 bytes). This
function is an alias of the SHA1 function; their usage is identical.
Note:
● str_expr is a string expression. If the length exceeds 8000, an error is reported.
● If the input value is NULL, NULL is returned.
Example:
Return the hash value of abc.
SELECT SHA('abc');
SHA('ABC')
--------------------
A9993E364706816ABA3E25717850C26C9CD0D89D
1 rows fetched.
SHA1
Syntax:
SHA1(str_expr)
Purpose: Generates a fixed-length hash value for the input parameter by using
the SHA1 algorithm and returns the hash value in string format (40 bytes).
Note:
● str_expr is a string expression. If the length exceeds 8000, an error is reported.
● If the input value is NULL, NULL is returned.
Example:
Return the hash value of abc.
SELECT SHA1('abc');
SHA1('ABC')
--------------------
A9993E364706816ABA3E25717850C26C9CD0D89D
1 rows fetched.
SOUNDEX
Syntax:
SOUNDEX(expr)
Purpose: Returns a character string representing the phonetic form of expr, so
that words that are pronounced similarly yield the same value.
Example:
-- Search for the name of the employee whose surname is pronounced as SMYTHE.
SELECT last_name, first_name
FROM employee
WHERE SOUNDEX(last_name)
= SOUNDEX('SMYTHE')
ORDER BY last_name, first_name;
LAST_NAME FIRST_NAME
-------------------- --------------------
Smith Lindsey
Smith William
2 rows fetched.
SYS_CONTEXT
Syntax:
SYS_CONTEXT(namespace_expr, parameter_expr [, length])
Example 1:
Return the session ID of the current session.
SELECT SYS_CONTEXT('USERENV', 'SID') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'SID')
----------------------------------------------------------------
49
1 rows fetched.
Example 2:
Return the host name of the client machine of the current session.
SELECT SYS_CONTEXT('USERENV', 'TERMINAL') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'TERMINAL')
----------------------------------------------------------------
127.0.0.1
1 rows fetched.
Example 3:
Return the default schema name of the current query.
SELECT SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')
----------------------------------------------------------------
gaussdba
1 rows fetched.
Example 4:
Return the default schema ID of the current query.
SELECT SYS_CONTEXT('USERENV', 'CURRENT_SCHEMAID') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'CURRENT_SCHEMAID')
----------------------------------------------------------------
2
1 rows fetched.
Example 5:
Return the name of the current database.
SELECT SYS_CONTEXT('USERENV', 'DB_NAME') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'DB_NAME')
----------------------------------------------------------------
GaussDB
1 rows fetched.
Example 6:
Return the OS username of the client that is currently connected.
SELECT SYS_CONTEXT('USERENV', 'OS_USER') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'OS_USER')
----------------------------------------------------------------
gaussdba
1 rows fetched.
SYS_GUID
Syntax:
SYS_GUID()
Purpose: Generates a 16-byte globally unique identifier. The return value type is
BINARY.
Note: To store the SYS_GUID() result in a string-type column, define the column
as 32 bytes or larger, because the 16 binary bytes are converted to a
32-character hexadecimal string.
The features of the SYS_GUID function are the same as UUID. The differences are
as follows:
Example:
Create a table and use the globally unique ID as the primary key.
-- Delete the employee table.
DROP TABLE IF EXISTS employee;
-- Create the employee table.
CREATE TABLE employee(staff_id raw(16) default sys_guid() primary key, first_name VARCHAR(32));
-- Insert data.
INSERT INTO employee(first_name) values ( 'GREECE ');
INSERT INTO employee(first_name) values ( 'ALAN');
INSERT INTO employee(first_name) values ( 'FRANK ');
-- Commit the transaction.
COMMIT;
-- Query the table.
SELECT * FROM employee;
STAFF_ID FIRST_NAME
---------------------------------------------------------------- --------------------------------
B5E49C6E665846A9BF7F00794748AA50 GREECE
2698BC648AC247968DC987555DEAB179 ALAN
46540BE9789E4BF49D0EA87705A444CC FRANK
3 rows fetched.
UUID
Syntax:
UUID()
Purpose: Generates a 16-byte globally unique identifier. The return value type is
BINARY.
Note: To store the UUID() result in a string-type column, define the column as
32 bytes or larger, because the 16 binary bytes are converted to a 32-character
hexadecimal string.
The features of the UUID function are the same as SYS_GUID. The differences are
as follows:
Example:
Create a table and use the globally unique ID as the primary key.
-- Delete the employee table.
DROP TABLE IF EXISTS employee;
STAFF_ID FIRST_NAME
---------------------------------------------------------------- --------------------------------
B5E49C6E665846A9BF7F00794748AA50 GREECE
2698BC648AC247968DC987555DEAB179 ALAN
46540BE9789E4BF49D0EA87705A444CC FRANK
3 rows fetched.
UPDATING
Syntax:
UPDATING(col_name)
Example:
USERENV
Syntax:
USERENV(parameter_expr)
SYS_CONTEXT('USERENV', 'SID')
----------------------------------------------------------------
49
1 rows fetched.
Example 2:
Return the host name of the client machine of the current session.
SELECT SYS_CONTEXT('USERENV', 'TERMINAL') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'TERMINAL')
----------------------------------------------------------------
127.0.0.1
1 rows fetched.
Example 3:
Return the default schema name of the current query.
SELECT SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA')
----------------------------------------------------------------
gaussdba
1 rows fetched.
Example 4:
Return the default schema ID of the current query.
SELECT SYS_CONTEXT('USERENV', 'CURRENT_SCHEMAID') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'CURRENT_SCHEMAID')
----------------------------------------------------------------
2
1 rows fetched.
Example 5:
Return the name of the current database.
SELECT SYS_CONTEXT('USERENV', 'DB_NAME') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'DB_NAME')
----------------------------------------------------------------
GaussDB
1 rows fetched.
Example 6:
Return the OS username of the client that is currently connected.
SELECT SYS_CONTEXT('USERENV', 'OS_USER') FROM SYS_DUMMY;
SYS_CONTEXT('USERENV', 'OS_USER')
----------------------------------------------------------------
gaussdba
1 rows fetched.
VERSION
Syntax:
VERSION()
Purpose: Returns the version information of the current database.
Example:
SELECT VERSION();
VERSION()
--------------------------------------------------
GaussDB-100-V300R001C00B300 Release c87fe47
1 rows fetched.
VSIZE
Syntax:
VSIZE(expr)
Purpose: Returns the number of bytes occupied by the value of the specified
expression in GaussDB 100 storage.
Note:
● If the expression is NULL, the function returns NULL.
● This function does not support input parameter expressions of the CLOB type.
Example:
Return the number of bytes occupied by Alice in GaussDB 100 storage.
SELECT VSIZE('Alice') FROM SYS_DUMMY;
VSIZE('ALICE')
--------------------
5
1 rows fetched.
Purpose: Performs a calculation over a subset of the result rows that is related
to the current row.
function_name is a window function name. Currently, the following functions are
supported: LAG, MAX, MIN, ROW_NUMBER, STDDEV, STDDEV_POP,
STDDEV_SAMP, and SUM.
● partition by indicates data grouping. expr1 is the name of a column to be
grouped. order by indicates data sorting. expr2 is the name of a column to be
ordered.
● If function_name is ROW_NUMBER(), the partition by and order by clauses
cannot be omitted. For other functions, these clauses can be omitted.
● For details about [NULLS FIRST | LAST], see ORDER BY.
Examples:
Return the maximum salary in the staff table.
-- Delete the staffs table.
DROP TABLE IF EXISTS staffs;
-- Create the staffs table.
CREATE TABLE staffs
(
staff_id NUMBER(6) not null,
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
employment_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
section_id NUMBER(4),
graduated_name VARCHAR2(60)
);
-- Insert data.
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2200.00, null, 124, 50);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2400.00, null, 124, 50);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (198, 'Donald', 'OConnell', 'DOCONNEL', '650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'),
'SH_CLERK', 2600.00, null, 124, 50);
insert into staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id, salary,
commission_pct, manager_id, section_id)
values (199, 'Douglas', 'Grant', 'DGRANT', '650.507.9844', to_date('13-01-2000', 'dd-mm-yyyy'), 'SH_CLERK',
4000.00, null, 124, 50);
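The example stops before the query itself; a sketch of the windowed SELECT it builds toward, using the staffs columns defined above (partitioning by section_id is an illustrative choice):

```sql
-- For each section, show the maximum salary and number the rows
-- from highest to lowest salary.
SELECT section_id, salary,
       MAX(salary) OVER (PARTITION BY section_id) AS max_salary,
       ROW_NUMBER() OVER (PARTITION BY section_id ORDER BY salary DESC) AS rn
FROM staffs;
```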
3.6 Operators
During the compiling of SQL statements or the stored procedure, operators can be
used to process columns. Operators can be used to process one or more operands
and can be placed before, after, or between operands. Results are returned after
the processing.
● AND: The logical AND operation, which can be used in query conditions, such
as WHERE, ON, and HAVING.
● OR: The logical OR operation, which can be used in query conditions, such as
WHERE, ON, and HAVING.
● NOT: The NOT keyword can be added before the condition expression in the
WHERE or HAVING clause to negate the condition result. This keyword is
also used together with relational operators, as in NOT IN and NOT
EXISTS. The syntax is as follows:
select * from table where/having not {condition};
Table 3-12 lists operation rules, where a and b represent logical expressions.
All comparison operators are binary operators. Only data types that are the same
or can be implicitly converted can be compared using comparison operators.
Operators Function
= Equal to
<> or != Not equal to
Comparison operators are available for all relevant data types. All comparison
operators are binary operators that return values of the Boolean type. Expressions
such as 1 < 2 < 3 are invalid, because the Boolean result of 1 < 2 cannot be
compared with 3.
SELECT 4/3 AS RESULT FROM SYS_DUMMY;
RESULT
--------------------
1.33333333333333
1 rows fetched.
| Bitwise OR SELECT '17' | '13' AS RESULT FROM SYS_DUMMY;
RESULT
--------------------
29
1 rows fetched.
Note: 17 (binary: 10001), 13 (binary:
01101), and the bitwise OR result is
11101 (29).
& Bitwise AND SELECT '17' & '13' AS RESULT FROM SYS_DUMMY;
RESULT
--------------------
1
1 rows fetched.
Note: 17 (binary: 10001), 13 (binary:
01101), and the bitwise AND result is
00001 (1).
^ Bitwise XOR SELECT '17' ^ '13' AS RESULT FROM SYS_DUMMY;
RESULT
--------------------
28
1 rows fetched.
Note: 17 (binary: 10001), 13 (binary:
01101), and the bitwise XOR result is
11100 (28).
<< Left shift SELECT 10 << 2 AS RESULT FROM SYS_DUMMY;
RESULT
--------------------
40
1 rows fetched.
Note: Shifting 10 left by two bits is
equivalent to 10*4.
>> Right shift SELECT 10 >> 2 AS RESULT FROM SYS_DUMMY;
RESULT
--------------------
2
1 rows fetched.
Note: Shifting 10 right by two bits is
equivalent to the integer part of 10/4.
● The precedence order is: the four arithmetic operations > left/right shift >
bitwise AND > bitwise XOR > bitwise OR.
● When bitwise operators are executed, input parameter values with decimal digits
are first rounded off before participating in the bitwise operation. In the same
scenario, the BITOR, BITAND, and BITXOR functions round down the input
parameter values before performing the bitwise operation.
● When the above operators are executed, NULL is returned if any input
parameter value is NULL.
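A sketch of the stated precedence: the arithmetic operations bind tighter than the shifts, so the addition below is evaluated first (the operands are arbitrary):

```sql
-- Per the precedence rule above, this is evaluated as (1 + 2) << 1 = 6,
-- not 1 + (2 << 1).
SELECT 1 + 2 << 1 AS RESULT FROM SYS_DUMMY;
```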
Operator Description
NOT LIKE...[ESCAPE char] Searches for rows that do not match the specified
pattern. Only character types are supported.
Example:
-- Create a table:
drop table if exists T_TEST_OPERATOR;
create table T_TEST_OPERATOR(ID int, NAME varchar(36));
-- Insert four records to the table:
insert into T_TEST_OPERATOR(ID,NAME) VALUES (1,'zhangsan');
insert into T_TEST_OPERATOR(ID,NAME) VALUES (2,'lisi');
insert into T_TEST_OPERATOR(ID,NAME) VALUES (3,'wangwu');
insert into T_TEST_OPERATOR(ID,NAME) VALUES (999,null);
commit;
-- IN operator
select * from T_TEST_OPERATOR where ID IN(1,2);
select * from T_TEST_OPERATOR where NAME IN('zhangsan');
-- NOT IN operator
select * from T_TEST_OPERATOR where ID NOT IN(1,2);
select * from T_TEST_OPERATOR where NAME NOT IN('zhangsan');
-- EXISTS operator
select count(1) from SYS_DUMMY where EXISTS(select ID from T_TEST_OPERATOR where
NAME='zhangsan');
-- NOT EXISTS operator
select count(1) from SYS_DUMMY where NOT EXISTS(select ID from T_TEST_OPERATOR where
NAME='zhangsan');
-- BETWEEN...AND... operator
select * from T_TEST_OPERATOR where ID BETWEEN 1 AND 2;
-- NOT BETWEEN...AND... operator
select * from T_TEST_OPERATOR where ID NOT BETWEEN 1 AND 2;
-- IS NULL operator
select * from T_TEST_OPERATOR where NAME IS NULL;
-- IS NOT NULL operator
select * from T_TEST_OPERATOR where NAME IS NOT NULL;
-- LIKE ...[ESCAPE char] operator
select * from T_TEST_OPERATOR where NAME LIKE '%an%';
select * from T_TEST_OPERATOR where NAME LIKE '\%an%' ESCAPE '\';
-- NOT LIKE...[ESCAPE char] operator
select * from T_TEST_OPERATOR where NAME NOT LIKE '%an%';
select * from T_TEST_OPERATOR where NAME NOT LIKE '\%an%' ESCAPE '\';
-- REGEXP operator
select * from T_TEST_OPERATOR where NAME REGEXP '[a-z]*';
-- REGEXP_LIKE operator
select * from T_TEST_OPERATOR where REGEXP_LIKE (NAME ,'[a-z]*');
-- ANY operator
select * from T_TEST_OPERATOR where ID = ANY(1,3,5);
select * from T_TEST_OPERATOR where NAME = ANY('zhangsan');
However, not all data types can be converted to each other. For example, the
following SQL statement uses an integer column to update the value of the date
column. Such operations cannot be performed. SQL statements performing such
operations will be filtered out in validation or before execution.
UPDATE T_TEST_CAST SET SECTION_DATE = 1000;
GS-00606, [1:39]inconsistent datatypes, expected DATE - got BINARY_INTEGER
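By contrast, a conversion that the rules allow (string to date) passes validation; a sketch against the same T_TEST_CAST table referenced above:

```sql
-- String-to-date conversion is permitted, so this statement is accepted.
UPDATE T_TEST_CAST SET SECTION_DATE = '2019-01-01';
```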
The following table lists the conversion relationships between different data types.
The first row in the table indicates the target conversion type, and the first column
indicates the source type. If the two types of conversion do not meet the
relationship in the table, an error indicating that the type does not match is
returned.
Source type    Numeric              String                   Date  Binary  Boolean                Time interval
Numeric        √                    √                        ×     ×       √ (integers only;      ×
                                                                           0 indicates false,
                                                                           other values
                                                                           indicate true)
String         √                    √                        √     √       √                      √
Date           ×                    √                        √     ×       ×                      ×
Binary         ×                    √                        ×     √       ×                      ×
Boolean        √ (integer 0 or 1)   √ (string TRUE or FALSE) ×     ×       √                      ×
Time interval  ×                    √                        ×     ×       ×                      √, but INTERVAL YEAR
                                                                                                  TO MONTH and
                                                                                                  INTERVAL DAY TO
                                                                                                  SECOND cannot be
                                                                                                  converted to each
                                                                                                  other.
For details about the conversion rules among preset data types and Boolean, see "Data
Dictionary and Views" in GaussDB 100 V300R001C00 Database Reference.
Precautions
● The type mapping function is used only when DDL statements are executed to
create tables or modify table definitions.
● The type mapping function takes effect only when USE_NATIVE_DATATYPE is
set to TRUE.
● After the mapping file is added or modified, the database must be restarted
for the modification to take effect.
Procedure
Step 1 Add the TYPE_MAP_FILE=<filename> configuration item to the zengine.ini file.
Step 2 In the type mapping file, add a type mapping rule based on the user type.
The user name supports simple fuzzy match. The format of the type mapping file
is as follows:
[username]
old_datatype=map_datatype
● Integer data types: automatic type mapping is supported. The boundary
values must be explicitly configured in the mapping rule file based on the
actual application scenario to enable the specified type mapping capability.
● NUMBER(p,s) (s>0) data types: because precision is involved, the mapping
must be configured by explicitly specifying the type mapping.
The following table describes the rule.
----End
Examples
Step 1 Add the TYPE_MAP_FILE configuration item to the zengine.ini file, and set the
mapping file address to /export/app/data/cfg/type_map_file.ini.
vim zengine.ini
TYPE_MAP_FILE = /export/app/data/cfg/type_map_file.ini
Step 2 Modify the mapping file type_map_file.ini to specify the mapping type.
Configure a mapping plan for user1. Set NUMBER(3) to INTEGER and
NUMBER(11) to BIGINT.
Configure a mapping plan for user2. Set NUMBER(5) to INTEGER and
NUMBER(25,5) to DOUBLE.
vim /export/app/data/cfg/type_map_file.ini
[user1]
number(3)=integer
number(11)=bigint
[user2]
number(5)=integer
number(25,5)=double
Step 3 Restart the database for the mapping rule to take effect.
----End
In transaction management, you can start, commit, and roll back transactions,
prepare two-phase commit, set the transaction isolation level, and create a save
point for a transaction.
Starting a Transaction
GaussDB 100 provides no statement to start a transaction. The first executable
SQL statement (except the login statement) indicates the start of a transaction.
Committing a Transaction
The COMMIT statement changes all operations in the work units of the current
transaction to be permanent and ends the transaction.
The following DDL statements implicitly commit the current transaction:
● CREATE
● ALTER
● TRUNCATE
● DROP
● GRANT
● REVOKE
3.10 Expressions
GaussDB 100 expressions include simple expressions and compound expressions.
Scenario
You can use expressions in:
● A select list of the SELECT statement, for example, SELECT expr from object.
● A condition specified by the WHERE or HAVING clause, for example, ...
WHERE expr1 = expr2 or WHERE expr IN (expr1, expr2, ...).
● The ORDER BY clause, for example, ...ORDER BY expr.
● The VALUES clause in the INSERT statement, for example, INSERT INTO
table VALUES(expr1, expr2, ...).
● The SET clause in the UPDATE statement, for example, UPDATE table SET
table_column1 = expr1, table_column2 = expr2,....
Use the following complex expression as another example. The expression adds
one day to the current date, converts the new timestamp into the CHAR data type
by invoking the TO_CHAR function, and finally returns a portion of the character
string by invoking the SUBSTR function. The return value is a string.
SUBSTR(TO_CHAR(SYSTIMESTAMP+1), 3)
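Wrapped in a query, the expression above can be sketched as:

```sql
-- Add one day to the current timestamp, convert it to CHAR,
-- and return the substring starting at the third character.
SELECT SUBSTR(TO_CHAR(SYSTIMESTAMP + 1), 3) FROM SYS_DUMMY;
```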
Syntax
{ [ query_name.
| [schema.] { table. | view. | materialized view. }
| t_alias.
] { column | ROWID }
| ROWNUM
| string
| number
| sequence. { CURRVAL | NEXTVAL }
| NULL
}
Syntax
{ (expr)
| { + | - | PRIOR } expr
| expr { * | / | + | - | || | & | | | ^ } expr
}
● || is not a Backus-Naur Form (BNF) syntax notation, but a part of the syntax. It
indicates string concatenation.
● When || is used for concatenation, a maximum of 32767 bytes can be processed;
excess bytes are truncated.
● A constant expression (an expression whose operands are all constants), for example,
1 + 3, is verified and calculated during SQL parsing. If the expression does not satisfy
the calculation rules, an error is reported immediately. For example, in the statement
select 0/0 from SYS_DUMMY;, the expression 0/0 violates the rule that the divisor
cannot be 0. For higher performance, the error is reported during SQL parsing rather
than during SQL execution.
Syntax
CASE { simple_case_expression
| searched_case_expression
}
[else_clause]
END
● simple_case_expression:
expr { WHEN comparison_expr THEN return_expr } [ ... ]
● searched_case_expression:
{ WHEN condition THEN return_expr } [ ... ]
● else_clause:
ELSE else_expr
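A sketch showing both CASE forms side by side (the employee table and its staff_id values are assumed from the earlier examples in this chapter):

```sql
SELECT staff_id,
       -- simple_case_expression: compares staff_id against each WHEN value
       CASE staff_id WHEN 1 THEN 'one' WHEN 2 THEN 'two'
            ELSE 'many' END AS simple_form,
       -- searched_case_expression: evaluates each WHEN condition in turn
       CASE WHEN staff_id < 3 THEN 'low' ELSE 'high' END AS searched_form
FROM employee;
```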
3.11.1 Overview
A query is an operation that retrieves data from one or more tables or views.
Query is one of the basic applications of a database. GaussDB 100 provides
various query modes to meet application requirements.
This section describes different types of queries and subqueries and how to use
them. For the full syntax of all the clauses and the semantics of SELECT, see
SELECT.
Prerequisites
To view data from a table or view, ensure that the table or view is in your
own schema or that you have the READ or SELECT permission on the table or view.
Syntax
SELECT [hint_info] [SQL_CALC_FOUND_ROWS] [ DISTINCT ] { expression
[ [ AS ] name ] } [ , ... ]
[ FROM { table_reference [ AS OF {SCN(scn_number) | TIMESTAMP(date)} ] [ [AS] alias ] } [ , ... ] ]
[ WHERE { condition | [ NOT ] EXISTS ( correlated subquery ) } ]
[ [START WITH condition ] CONNECT BY [ NOCYCLE ] [ PRIOR ] condition ]
[ GROUP BY { column_name | expression } [ , ... ] ]
[ HAVING condition [ , ... ] ]
[ { UNION [ ALL ] | MINUS } select ]
[ ORDER [SIBLINGS] BY { column_name | number | expression } [ ASC | DESC ][ NULLS FIRST | NULLS
LAST ] [ , ... ] ]
[ LIMIT [ start, ] count | LIMIT count OFFSET start | OFFSET start[ LIMIT count ] ]
[ FOR UPDATE ]
● hint_info:
{/*+ {access_method_hint | join_order_hint | join_method_hint | parallel_hint }[...] */}
– access_method_hint:
{ FULL(table_name [...])
| INDEX(table_name index_name[...])
| NO_INDEX(table_name index_name[...])
| INDEX_ASC(table_name index_name[...])
| INDEX_DESC(table_name index_name[...])
| INDEX_FFS(table_name index_name[...])
| NO_INDEX_FFS(table_name index_name[...])
}
– join_order_hint:
{ ORDERED
| LEADING(table_name[...])
}
– join_method_hint:
{ USE_NL(table_name[...])
| USE_MERGE(table_name[...])
| USE_HASH(table_name[...])
}
– parallel_hint
{ parallel(degree)
}
● table_reference:
{ [ schema_name. ]table_name [partition(partition_name)][ [AS] alias ]
| [ schema_name. ]view_name [ [AS] alias]
| ( select query ) [ [AS] alias ]
| join_table
}
– join_table:
INNER join by default
▪ (+) can only be used in the WHERE clause, and the condition that
contains (+) does not belong to the OR clause.
▪ In a comparison condition, (+) allows for only six operators: =, <>, >,
<, >=, <=.
Parameter Description
● hint_info
– Specifies special comments in a SQL statement that pass instructions to
the database optimizer. The optimizer uses these hints to choose an
execution plan for the statement, unless there are some conditions that
prevent the optimizer from doing so.
– Exercise caution when using hint_info. You are advised to use hints for a
table query only when you have collected statistics about the table and
evaluated the execution plan without hints by using EXPLAIN PLAN.
– In later database versions, database conditions may change and query
performance may be enhanced, which can affect the behavior of hints.
– PRIOR (used with the equal sign (=) in the CONNECT BY condition): if
PRIOR is placed on the side of the parent ID, the query traverses data in
the direction of parent nodes; if it is placed on the side of the child ID,
the query traverses data in the direction of child nodes.
– CONNECT_BY_ISCYCLE pseudocolumn
The CONNECT_BY_ISCYCLE pseudocolumn indicates whether the current
tuple will form the tree-structured data into a loop. It is valid only when
the NOCYCLE keyword is used in a hierarchical query clause. The
CONNECT_BY_ISCYCLE pseudocolumn returns 1 if the current row has a
child which is also its ancestor. Otherwise, it returns 0.
– CONNECT_BY_ISLEAF pseudocolumn
The CONNECT_BY_ISLEAF pseudocolumn returns 1 if the current row is a
leaf of the tree defined by the CONNECT BY condition. Otherwise, it
returns 0. This information indicates whether a given row can be further
expanded to show more of the hierarchy.
– LEVEL pseudocolumn
For each row returned by a hierarchical query, the LEVEL pseudocolumn
returns 1 for a root row, 2 for a child of a root, and so on. A root row is
the highest row within an inverted tree. A child row is any nonroot row. A
parent row is any row that has children. A leaf row is any row without
children.
● expression
Specifies a field or field expression to be queried.
● table_reference
Specifies a table or view to be queried, or a subquery.
[partition(partition_name)]
Specifies the partition of a table for query. partition_name indicates the
partition name.
● condition
Restricts the rows selected to those that satisfy one or more conditions.
Query conditions are defined by expressions and operators. Multiple
conditions can be associated by AND or OR. In GaussDB 100, conditions can
be defined by using:
– Comparison operators, such as >, <, >=, <=, !=, <>, and =.
– Test operators, such as LIKE, NOT LIKE, BETWEEN, NOT BETWEEN,
NULL, NOT NULL, IN, and NOT IN.
– EXISTS (select), which is true if the subquery returns at least one row.
– NOT EXISTS (select), which is true if the subquery returns no rows.
● GROUP BY
Specifies a column based on which a result set is grouped.
● HAVING
Specifies the filter conditions for restricting the results of a GROUP BY.
● ORDER BY
Specifies a column based on which a result set is sorted.
● ORDER SIBLINGS BY
Specifies the columns used for sorting sibling nodes. This parameter can be
used only when CONNECT BY is specified.
● ASC | DESC
Specifies whether the ordering sequence is ascending or descending. The
default value is ASC.
● NULLS FIRST | NULLS LAST
Specifies the position of NULL values in the ORDER BY sorting. FIRST
indicates that NULL values are placed before non-NULL values and LAST
indicates that NULL values are placed after non-NULL values. If this
parameter is not specified, NULLS LAST is used in ASC mode and NULLS
FIRST is used in DESC mode by default.
● FOR UPDATE
Locks the selected rows for UPDATE.
● offset_expr, count_expr
offset_expr limits the offset of a result set and count_expr limits the number
of rows in a result set.
● start,count
count specifies the maximum number of rows to return, while start specifies
the number of rows to skip before the first row is returned. When both are
specified, rows specified by start will be skipped before rows specified by
count are returned.
● UNION [ALL]
Combines the result sets of multiple SELECT statements into one. With ALL,
duplicate rows are retained; without ALL, duplicate rows are removed.
● FULL(table_name [...])
Specifies a full-table scan.
● INDEX(table_name index_name[...])
Specifies an index scan.
● NO_INDEX(table_name index_name[...])
Specifies a non-index scan.
● INDEX_ASC(table_name index_name[...])
Specifies an ascending index scan.
● INDEX_DESC(table_name index_name[...])
Specifies a descending index scan.
● INDEX_FFS(table_name index_name[...])
Specifies a fast full index scan.
● NO_INDEX_FFS(table_name index_name[...])
Excludes a fast full index scan of the specified indexes on the specified table.
● ORDERED
Joins tables in the order in which they appear in the FROM clause.
● LEADING(table_name[...])
Joins tables in the specified order.
● USE_NL(table_name[...])
Joins each specified table to another row source using a nested-loop join.
● USE_MERGE(table_name[...])
Joins each specified table with another row source using a sort-merge join.
● USE_HASH(table_name[...])
Joins each specified table with another row source using a hash join.
● join_table
Specifies a set of tables for join query.
– [INNER] JOIN returns records that have matching values in both tables.
In this case, the subsequent ON condition can be omitted.
– LEFT [OUTER] JOIN returns all records from the left table and the
matched records from the right table. The result is NULL from the right
side, if there is no match.
– RIGHT [OUTER] JOIN returns all records from the right table and the
matched records from the left table. The result is NULL from the left side,
when there is no match.
– FULL [OUTER] JOIN returns all records when there is a match in either
the left or right table. It is equivalent to the union of a left join and a
right join.
● {predicate } [ { AND | OR } condition]
Specifies the conditions that the query result set must satisfy.
– AND
Both of the two conditions must be satisfied.
– OR
Either of the two conditions must be satisfied.
● predicate:
Specifies the conditions that the query result set must satisfy.
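Several of the parameters above can be combined in one statement. The following sketch, written against the training table defined in the examples below, shows a hint, NULLS ordering, row limiting, and FOR UPDATE together. The /*+ ... */ hint comment syntax and the LIMIT start,count form are assumptions based on the parameter descriptions above, not verified GaussDB 100 syntax.

```sql
-- Illustrative sketch only; the /*+ ... */ hint comment form is assumed.
-- Force a full-table scan of training with the FULL hint:
SELECT /*+ FULL(training) */ staff_id, score FROM training;

-- Sort by score in descending order, placing NULL scores last
-- (overriding the default NULLS FIRST used in DESC mode):
SELECT staff_id, score FROM training ORDER BY score DESC NULLS LAST;

-- start,count form (assumed): skip 1 row, then return at most 2 rows:
SELECT staff_id, course_name FROM training LIMIT 1,2;

-- Lock the selected rows against concurrent updates:
SELECT * FROM training WHERE staff_id = 10 FOR UPDATE;
```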
Syntax
SELECT expression [ , ... ] FROM table_reference [ , ... ]
Usage
● The expression between the SELECT keyword and the FROM clause is called a
SELECT item. SELECT items are used to specify the columns to be queried,
and the FROM clause specifies the table where the columns are located. To
query all columns, use an asterisk (*) after SELECT. To query specific columns,
specify the column names after SELECT and separate the names with
commas (,). For details, see Examples 1 and 2.
● If two or more tables have the same column names, you are advised to
specify both table names and column names. Query results can be returned
without specifying table names, but extra workload is required. Therefore,
you are advised to specify both table names and column names to reduce
the workload.
Examples
● Example 1: Create a training table and insert three rows of data into the
table. Query all columns in the training table.
-- Create the training table.
CREATE TABLE training(staff_id INT NOT NULL,course_name CHAR(50),course_start_date DATETIME,
course_end_date DATETIME,exam_date DATETIME,score INT);
-- Insert three rows of data into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'SQL majorization','2017-06-15 12:00:00','2017-06-20 12:00:00','2017-06-25
12:00:00',90);
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'information safety','2017-06-20 12:00:00','2017-06-25 12:00:00','2017-06-26
12:00:00',95);
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'master all kinds of thinking methods','2017-07-15 12:00:00','2017-07-20
12:00:00','2017-07-25 12:00:00',97);
Use an asterisk (*) after SELECT to query all columns in the training table.
SELECT * FROM training;
STAFF_ID COURSE_NAME COURSE_START_DATE
COURSE_END_DATE EXAM_DATE SCORE
------------ -------------------------------------------------- ---------------------- ----------------------
---------------------- ------------
10 SQL majorization 2017-06-15 12:00:00 2017-06-20 12:00:00
2017-06-25 12:00:00 90
10 information safety 2017-06-20 12:00:00 2017-06-25 12:00:00
2017-06-26 12:00:00 95
10 master all kinds of thinking methods 2017-07-15 12:00:00 2017-07-20
12:00:00 2017-07-25 12:00:00 97
● Example 2: Query for staff IDs and course names in the training table.
SELECT staff_id,course_name FROM training;
STAFF_ID COURSE_NAME
------------ --------------------------------------------------
10 SQL majorization
10 information safety
10 master all kinds of thinking methods
3 rows fetched.
For details about how to define query conditions in GaussDB 100, see condition.
Query conditions can be used in the WHERE clause, the HAVING clause, and the
START WITH condition CONNECT BY [NOCYCLE] [PRIOR] condition clause.
Syntax
condition:
select_statement { predicate } [ { AND | OR } condition ] [ , ... n ]
● predicate:
{ expression { = | <> | != | > | >= | < | <= } { expression | [ ALL | ANY ] ( select ) }
| string_expression [ NOT ] LIKE string_expression
| expression [ NOT ] BETWEEN expression AND expression
| expression IS [ NOT ] NULL
| expression [ NOT ] IN ( select | expression [ , ... n ] )
| [ NOT ] EXISTS ( select )
}
Usage
Query conditions are defined by using expressions and operators. In GaussDB 100,
conditions can be defined by using:
● Comparison operators, such as >, <, >=, <=, !=, <>, and =.
In the expression, single quotation marks are optional for the numeric type
but mandatory for character and date types. For details, see Example 1.
● Test operators, which specify query ranges. For details, see the examples in
Test Operators.
To specify multiple conditions that must all hold, connect them with the AND
logical operator. To specify that any one of multiple conditions may hold, connect
them with the OR logical operator. In Example 2, you can query the staffs table
for information about the staff who were hired after 2000 and have a salary
greater than 5000.
Examples
● Example 1: Use the comparison operator >, <, >=, <=, !=, <>, or = to specify a
query condition. For example, query the training table for information about
the students who learn SQL majorization.
SELECT * FROM training WHERE course_name = 'SQL majorization';
STAFF_ID COURSE_NAME COURSE_START_DATE
COURSE_END_DATE EXAM_DATE SCORE
------------ -------------------------------------------------- ---------------------- ----------------------
---------------------- ------------
10 SQL majorization 2017-06-15 12:00:00 2017-06-20 12:00:00
2017-06-25 12:00:00 90
1 rows fetched.
● Example 2: Query the staffs table for information about the staff who meet
the following conditions: hired later than 2000 and salary >5000.
SELECT * FROM staffs WHERE HIRE_DATE>'2000-01-01 00:00:00' AND SALARY>'5000';
STAFF_ID FIRST_NAME LAST_NAME EMAIL
PHONE_NUMBER HIRE_DATE EMPLOYMENT_ID SALARY
COMMISSION_PCT MANAGER_ID SECTION_ID
---------------------------------------- -------------------- ------------------------- -------------------------
-------------------- ---------------------- ------------- ----------------------------------------
---------------------------------------- ----------------------------------------
----------------------------------------
149 Eleni Zlotkey EZLOTKEY
011.44.1344.429018 2000-01-29 00:00:00 SA_MAN 10500 .
2 100 80
164 Mattea Marvins MMARVINS
011.44.1346.329268 2000-01-24 00:00:00 SA_REP 7200 .
1 147 80
165 David Lee DLEE
011.44.1346.529268 2000-02-23 00:00:00 SA_REP 6800 .
1 147 80
166 Sundar Ande SANDE
011.44.1346.629268 2000-03-24 00:00:00 SA_REP 6400 .
1 147 80
167 Amit Banda ABANDA
011.44.1346.729268 2000-04-21 00:00:00 SA_REP 6200 .
1 147 80
173 Sundita Kumar SKUMAR
011.44.1343.329268 2000-04-21 00:00:00 SA_REP 6100 .
1 148 80
179 Charles Johnson CJOHNSON
011.44.1644.429262 2000-01-04 00:00:00 SA_REP 6200 .
1 149 80
7 rows fetched.
● Example 3: Query the staffs and employment_history tables for staff's name
and hiring time.
SELECT e.start_date,s.first_name,s.last_name FROM employment_history e, staffs s WHERE e.staff_id =
s.staff_id;
START_DATE FIRST_NAME LAST_NAME
---------------------- -------------------- -------------------------
1989-09-21 00:00:00 Neena Kochhar
1993-10-28 00:00:00 Neena Kochhar
1993-01-13 00:00:00 Lex De Haan
1998-03-24 00:00:00 Den Raphaely
1999-01-01 00:00:00 Payam Kaufling
1998-03-24 00:00:00 Jonathon Taylor
1999-01-01 00:00:00 Jonathon Taylor
1987-09-17 00:00:00 Jennifer Whalen
1994-07-01 00:00:00 Jennifer Whalen
1996-02-17 00:00:00 Michael Hartstein
10 rows fetched.
Related Concepts
A Cartesian product is also called a direct product. For sets X and Y, the Cartesian
product of the two sets is presented as X × Y. Therefore, if X has I tuples and N
attributes and Y has J tuples and M attributes, their Cartesian product will have
I × J tuples and N + M attributes. The two relations may have attributes with the
same name.
For example, perform Cartesian product operation for the areas and
employments tables.
-- There are four records in the areas table.
SELECT COUNT(*) FROM areas;
COUNT(*)
--------------------
4
1 rows fetched.
-- If the areas and employments tables are jointly queried without specifying conditions, 76 (4x19) records
are returned.
SELECT areas.area_name, employments.employment_id FROM areas, employments;
Syntax
SELECT expression [ , ... ] FROM table_reference
[LEFT [OUTER] | RIGHT [OUTER] | FULL [OUTER] | INNER]
JOIN table_reference
[ON { predicate } [ { AND | OR } condition ] [ , ... n ]]
● table_reference:
{ table_name [ [AS] alias ]
| view_name [ [AS] alias]
| ( select query ) [ [AS] alias ]
}
● Outer join operator (+)
An outer join can be described either by the LEFT/RIGHT keyword or the
operator (+). A condition that contains (+) in the WHERE clause is used for
an outer join. Specifically, the table containing (+) is the right node of a left
join, and the table without (+) is the left node of the left join.
– The outer join operator can be used as follows:
▪ (+) can only be used in the WHERE clause, and the condition that
contains (+) does not belong to the OR clause.
▪ In a comparison condition, (+) allows for only six operators: =, <>, >,
<, >=, <=.
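As an illustrative sketch of the (+) notation, the left join over the education and training tables shown later in this section can equivalently be written with the operator in the WHERE clause. Per the rules above, the table carrying (+) (here, training) becomes the right node of the left join. This rewriting assumes the table definitions given under Inner Join.

```sql
-- Equivalent of:
--   SELECT e.staff_id, e.highest_degree, t.score
--   FROM education e LEFT JOIN training t ON (e.staff_id = t.staff_id);
-- The table marked with (+) is the one allowed to produce NULLs.
SELECT e.staff_id, e.highest_degree, t.score
FROM education e, training t
WHERE e.staff_id = t.staff_id(+);
```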
Usage
If multiple tables are included in the FROM clause for query, the database
executes a join query. The SELECT items of the query can be any columns in these
tables. For example:
If any two of these tables have the same column name, the table name must be
used to avoid reference ambiguity in the query.
Most join queries contain at least one query condition, which can be either in the
FROM or WHERE clause. For example:
SELECT table1.column, table2.column FROM table1 JOIN table2 ON(table1.column1 = table2.column2);
SELECT table1.column, table2.column FROM table1, table2 WHERE table1.column1 = table2.column2;
Inner Join
The keyword INNER JOIN is provided where INNER can be omitted. If an inner
join is used, the join execution sequence will follow the sequence of tables in the
statement.
Example: Use an inner join to query tables.
-- Delete the training table.
DROP TABLE IF EXISTS training;
-- Create the training table.
CREATE TABLE training(staff_id INT NOT NULL,course_name CHAR(50),course_start_date DATETIME,
course_end_date DATETIME,exam_date DATETIME,score INT);
-- Insert record 1 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'SQL majorization','2017-06-15 12:00:00','2017-06-20 12:00:00','2017-06-25 12:00:00',90);
-- Insert record 2 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date) VALUES(11,'BIG
DATA','2018-06-15 12:00:00','2018-06-20 12:00:00','2018-06-25 12:00:00');
-- Insert record 3 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(12,'Performance Turning','2018-06-25 12:00:00','2018-06-27 12:00:00','2018-06-29 12:00:00',95);
-- Delete the education table.
DROP TABLE IF EXISTS education;
-- Create the education table.
CREATE TABLE education(staff_id INT, highest_degree CHAR(8), graduate_school VARCHAR(64),
graduate_date DATETIME, education_note VARCHAR(70));
-- Insert record 1 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(10,'doctor','Xidian University','2017-07-06 12:00:00','211');
-- Insert record 2 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(11,'master','Northwestern Polytechnical University','2017-07-06 12:00:00','211&985');
-- Insert record 3 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(12,'','Peking University','2017-07-06 12:00:00','211&985');
-- Insert record 4 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(13,'scholar','Peking University','2017-07-06 12:00:00','211&985');
-- Query for employee IDs, highest educational qualifications, and exam scores, joining on the staff_id
column of the training and education tables.
SELECT e.staff_id, e.highest_degree, t.score FROM education e JOIN training t ON (e.staff_id = t.staff_id);
3 rows fetched.
Outer Join
In an inner join, the two data sources are treated equally. In an outer join, one
data source serves as the base against which the other source is matched
according to the join conditions. An inner join returns only the records that have
matching values in both tables. If the remaining records also need to be returned,
use an outer join. An outer join additionally returns the records from the base
table that have no match in the other table. Outer joins include left outer joins,
right outer joins, and full outer joins, which are also called left joins, right joins,
and full joins, respectively.
● Left join: As shown in Figure 3-1, the query is driven by the left table. It is
joined with the right table according to the specified join condition; all rows
from the left (base) table are returned, together with the rows from the right
table that satisfy the condition.
For example, change the query in Inner Join to a left join.
SELECT e.staff_id, e.highest_degree, t.score FROM education e LEFT JOIN training t ON (e.staff_id =
t.staff_id);
STAFF_ID HIGHEST_DEGREE SCORE
------------ ------------- ------------
10 doctor 90
11 master
12 95
13 scholar
4 rows fetched.
● Right join: As shown in Figure 3-2, the query is driven by the right table. On
the basis of the inner join, the data recorded in the right table but not in the
left table is also queried.
For example, change the query in Inner Join to a right join.
SELECT e.staff_id, e.highest_degree, t.score FROM education e RIGHT JOIN training t ON (e.staff_id =
t.staff_id);
STAFF_ID HIGHEST_DEGREE SCORE
------------ ------------- ------------
10 doctor 90
11 master
12 95
3 rows fetched.
● Full join: As shown in Figure 3-3, all records satisfying or not satisfying the
condition from either left or right table are returned. That is, a full join is the
sum of the left join and right join. If a full join is used on two tables, a left
join and then a right join will be performed, and the union set of the two
temporary result sets will be returned.
For example, use a full join on the education and training tables.
SELECT e.staff_id, e.highest_degree, t.score FROM education e FULL JOIN training t ON (e.staff_id =
t.staff_id);
STAFF_ID HIGHEST_DEGREE SCORE
------------ ------------- ------------
10 doctor 90
11 master
12 95
13 scholar
4 rows fetched.
Semi-Join
A semi-join is a special type of join, and there is no keyword specified in the SQL
statement. It is implemented by a subquery with IN or EXISTS following WHERE.
If multiple rows in the IN or EXISTS subquery match the conditions, the query
returns only one row instead of all the matched rows.
For example, view education information about the employees who have
participated in the training. Even if multiple rows in the training table match the
subquery conditions (that is, multiple rows share the same staff_id),
only one row is returned from the training table. For details about the definitions
of the education and training tables, see Inner Join.
SELECT staff_id, highest_degree, education_note FROM education WHERE EXISTS (SELECT * FROM training
WHERE education.staff_id = training.staff_id);
STAFF_ID HIGHEST_DEGREE EDUCATION_NOTE
------------ ------------- ----------------------------------------------------------------
10 doctor 211
11 master 211&985
12 211&985
3 rows fetched.
Anti-Join
An anti-join is a special type of join, and there is no keyword specified in the SQL
statement. It is implemented by a subquery with NOT IN or NOT EXISTS
following WHERE. It returns all rows that do not meet the condition. The concept
of an anti-join is opposite to that of a semi-join.
For example, query for education information about the employees who have not
participated in the training. For details about the definitions of the education and
training tables, see Inner Join.
SELECT staff_id, highest_degree, education_note FROM education WHERE staff_id NOT IN (SELECT staff_id
FROM training);
1 rows fetched.
If the subquery is in the OR branch of the WHERE clause, semi-joins and anti-joins cannot
be performed.
3.11.5 Subquery
A subquery is a query embedded into a statement for querying, table creation, or
data insertion, aiming to obtain a temporary result set. Subqueries are classified
into correlated subqueries and non-correlated subqueries.
● Correlated subquery: The subquery is executed once for each row of the
outer query, based on that row's attribute values. That is:
– A subquery contains a reference to a table in the outer query.
– The values of a subquery depend on the table column values in the outer
query.
– Each subquery is executed once for every row of the outer query.
For examples of correlated subqueries, see example 1 and example 2 in
Examples.
The parent statement of a subquery can be a SELECT, UPDATE, or DELETE
statement.
● Non-correlated subquery: A subquery is independent from the outer query.
The subquery is executed before the outer query and does not need to obtain
values from the outer query. The execution result of the subquery is returned
to the outer query.
Syntax
Subquery syntax:
select_statement
{ query_block
| subquery {{ UNION [ALL] | MINUS } subquery } [...]
}
[ order_by_clause ]
[ row_limiting_clause ]
Query_block syntax:
select_statement
[ with_clause ] SELECT [ hint ] [ DISTINCT ] select_list
FROM {
[ table_reference
| join_clause
| inline_analytic_view ]
}
[,...] [where_clause] [group_by_clause]
Usage
● Subqueries can be used in the FROM and WHERE clauses. A subquery in the
FROM clause is also called an in-line view. You can nest any number of
subqueries within an in-line view. A subquery in the WHERE clause is also
called a nested subquery. A maximum of 127 subquery levels can be nested.
● If a column in a subquery has the same name as the column in the statement
contained in the subquery, the table name or alias must be used as the prefix
of the column. To make statements easier to read, specify table or view
names or aliases for columns.
Examples
● Example 1: Use a correlated subquery to query for the staff whose salary is
higher than the average salary in each department.
SELECT s1.last_name, s1.section_id, s1.salary
FROM staffs s1
WHERE salary >(SELECT avg(salary) FROM staffs s2 WHERE s2.section_id = s1.section_id)
ORDER BY s1.section_id;
For each row in the staffs table, the outer query uses a correlated subquery
to calculate the average salary in a department. The correlated subquery
performs the following steps for each row in the staffs table:
a. Determine section_id of the row.
b. Evaluate the subquery based on that section_id.
c. If the salary in a row is higher than the department average salary, return
this row.
The subquery is executed once for each row in the staffs table.
● Example 2: For details about correlated subqueries in join queries, see Semi-
Join in JOIN Query.
● Example 3: Use a non-correlated subquery to query for the staff whose salary
is higher than the department average salary in the department with the ID
80.
SELECT staff_id, last_name, salary
FROM staffs
WHERE section_id = '80'
AND salary >(SELECT avg(salary)
FROM staffs
WHERE section_id= '80');
● Example 5: Insert all data in the staffs table to the staffs_new table.
INSERT INTO staffs_new SELECT * FROM staffs;
Usage
● Query for the current user.
SELECT USER FROM SYS_DUMMY;
● Invoke a system function.
SELECT function_name FROM SYS_DUMMY;
● Obtain the current or next value of a sequence.
SELECT sequence_name.{ NEXTVAL | CURRVAL } FROM SYS_DUMMY;
● Use the table to evaluate an expression.
SELECT expression FROM SYS_DUMMY;
Examples
● Example 1: Invoke a system function to obtain the current system time.
SELECT to_char(sysdate,'yyyy-mm-dd hh24:mi:ss') FROM SYS_DUMMY;
TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS')
------------------------------------------------
2019-02-01 16:57:58
1 rows fetched.
● Example 2: Calculate the result of 7 plus 1.
SELECT 7+1 FROM SYS_DUMMY;
7+1
--------------------
8
1 rows fetched.
3.11.7 UNION
GaussDB 100 provides the UNION operator to combine the result sets of multiple
query blocks into one.
Syntax
select_statement UNION [ALL] select_subquery
Usage
● The number of columns in each query block must be the same.
● The corresponding columns of the query blocks must have the same data
type or belong to the same data type group, and the data types must support
UNION. Otherwise, UNION is not allowed.
Table 3-20 describes the data type groups and their combination rules.
● The keyword ALL indicates that all duplicate rows are retained. If ALL is
omitted, duplicate rows are removed.
Examples
Query for information about the employees whose bonuses exceed 7000.
-- Delete the bonuses_depa1 table, if any.
DROP TABLE IF EXISTS bonuses_depa1;
-- Delete the bonuses_depa2 table, if any.
DROP TABLE IF EXISTS bonuses_depa2;
-- Create the bonuses_depa1 table.
CREATE TABLE bonuses_depa1(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30), bonus
NUMBER);
-- Create the bonuses_depa2 table.
CREATE TABLE bonuses_depa2(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30), bonus
NUMBER);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(23,'wangxia','developer',5000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(24,'limingying','tester',7000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(25,'liulili','quality control',8000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(29,'liuxue','tester',8000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(21,'caoming','document developer',
11000);
-- Commit the transaction.
COMMIT;
-- Insert record 1 into the bonuses_depa2 table.
INSERT INTO bonuses_depa2(staff_id, staff_name, job, bonus) VALUES(30,'wangxin','developer',9000);
-- Insert record 2 into the bonuses_depa2 table.
INSERT INTO bonuses_depa2(staff_id, staff_name, job, bonus) VALUES(31,'wangxufeng','document
developer',6000);
-- Insert record 3 into the bonuses_depa2 table.
INSERT INTO bonuses_depa2(staff_id, staff_name, job, bonus) VALUES(34,'denggui','quality control',5000);
-- Insert record 4 into the bonuses_depa2 table.
INSERT INTO bonuses_depa2(staff_id, staff_name, job, bonus) VALUES(33,'liuying','quality control',10000);
-- Insert record 5 into the bonuses_depa2 table.
INSERT INTO bonuses_depa2(staff_id, staff_name, job, bonus) VALUES(35,'caojiongming','document
developer',12000);
-- Commit the transaction.
COMMIT;
-- Query for information about the employees whose bonuses exceed 7000.
SELECT staff_id, staff_name, bonus FROM bonuses_depa1 WHERE bonus > 7000 UNION ALL SELECT
staff_id, staff_name, bonus FROM bonuses_depa2 WHERE bonus > 7000 ;
6 rows fetched.
3.11.8 MINUS
GaussDB 100 provides the MINUS operator to subtract one result set from another.
For example, A MINUS B MINUS C returns the result set of A minus the result sets
of B and C. That is, only the unique rows returned by A but not by B or C are
returned.
Syntax
select_statement1 MINUS select_statement2 [ ... ]
Parameters
● select_statement1
Specifies the SELECT statement that generates the first result set.
● select_statement2
Specifies the SELECT statement that generates the second result set.
Examples
Query data using MINUS.
-- Delete the education table, if any.
DROP TABLE IF EXISTS education;
-- Create the education table.
CREATE TABLE education(staff_id INT, highest_degree CHAR(8) NOT NULL, graduate_school VARCHAR(64),
graduate_date DATETIME, education_note VARCHAR(70));
-- Insert record 1 into the education table.
INSERT INTO education(staff_id, highest_degree, graduate_school, graduate_date, education_note) VALUES
(10, 'Doctor', 'Xidian University', '2017-07-06 12:00:00', '211');
-- Insert record 2 into the education table.
INSERT INTO education(staff_id, highest_degree, graduate_school, graduate_date, education_note) VALUES
(11, 'Master', 'Northwestern Polytechnical University', '2017-07-06 12:00:00', '211&985');
-- Insert record 3 into the education table.
INSERT INTO education(staff_id, highest_degree, graduate_school, graduate_date, education_note) VALUES
(12, 'Bachelor', 'Xi''an University of Architecture and Technology', '2017-07-06 12:00:00', 'not 211 or 985');
-- Commit the transaction.
COMMIT;
-- Delete the education_disable table.
DROP TABLE IF EXISTS education_disable;
-- Create the education_disable table.
CREATE TABLE education_disable(staff_id INT, highest_degree CHAR(8) NOT NULL, graduate_school
VARCHAR(64), graduate_date DATETIME, education_note VARCHAR(70));
-- Insert a record into the education_disable table.
INSERT INTO education_disable(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(10,'doctor','Xidian University','2017-07-06 12:00:00','211');
-- Commit the transaction.
COMMIT;
-- Query data using MINUS.
SELECT * FROM education MINUS SELECT * FROM education_disable WHERE staff_id=10;
Syntax
select_statement [START WITH condition ] CONNECT BY [ NOCYCLE ] [ PRIOR ] condition
Usage
● START WITH
Specifies one or more root rows of the hierarchy, that is, the start row of a
traverse. If multiple rows are specified, the query will traverse multiple root
rows.
● CONNECT BY
Specifies the relationship between parent rows and child rows of a tree-
structured data query. It is used in conjunction with PRIOR. For example,
CONNECT BY PRIOR staff_id=manager_id indicates that manager_id of the
next record equals staff_id of the previous record.
● NOCYCLE
Instructs the database to return rows from a query even if a loop exists in
the CONNECT BY data.
A loop occurs if a row is both an ancestor and a descendant of another row.
For example, it is found that parent_id for region_id of 1 is 24684 and that
parent_id for region_id of 24684 is also 1. As a result, an infinite loop occurs.
After the data is corrected, the query will be recovered.
● PRIOR
Is a unary operator and has the same precedence as the unary + and -
arithmetic operators.
The PRIOR keyword can be on either side of the equal sign (=) and applies
to the column that immediately follows it. For example, in CONNECT BY
PRIOR area_id=country_id, PRIOR is placed with the child ID area_id, so
the query traverses data in the direction of child nodes, that is, a top-down
query. If the statement is changed to CONNECT BY area_id=PRIOR
country_id, where PRIOR is placed with the parent ID country_id, the query
traverses data in the direction of parent nodes, that is, a bottom-up query.
If multiple join conditions are specified after CONNECT BY, the PRIOR
keyword must be specified for each condition.
● CONNECT_BY_ISCYCLE pseudocolumn
The CONNECT_BY_ISCYCLE pseudocolumn returns 1 if the current row has a
child which is also its ancestor. Otherwise, it returns 0.
● CONNECT_BY_ISLEAF pseudocolumn
The CONNECT_BY_ISLEAF pseudocolumn returns 1 if the current row is a leaf
of the tree defined by the CONNECT BY condition. Otherwise, it returns 0.
Examples
Query the country table for the countries where all branches are located in
Europe. For details, see Figure 3-4.
-- Delete the country table, if any.
DROP TABLE IF EXISTS country;
-- Create the country table.
CREATE TABLE country (
area_id INT NOT NULL,
area_name VARCHAR(30),
country_id INT NOT NULL,
country_name VARCHAR(30)
);
-- Insert data.
INSERT INTO country VALUES(1, 'NSA', 21, 'Mexico');
INSERT INTO country VALUES(1, 'NSA', 22, 'USA');
INSERT INTO country VALUES(1, 'NSA', 23, 'Brazil');
INSERT INTO country VALUES(2, 'EU', 24, 'Canada');
INSERT INTO country VALUES(2, 'EU', 25, 'Italy');
INSERT INTO country VALUES(2, 'EU', 26, 'Britain');
INSERT INTO country VALUES(3, 'AS', 27, 'Japan');
INSERT INTO country VALUES(3, 'AS', 28, 'China');
INSERT INTO country VALUES(3, 'AS', 29, 'Singapore');
COMMIT;
-- Use the START WITH clause to query for the countries where all branches are located in Europe.
SELECT area_name, country_name FROM country START WITH area_id=2 CONNECT BY PRIOR
area_id=country_id;
AREA_NAME COUNTRY_NAME
------------------------------ ------------------------------
EU Canada
EU Italy
EU Britain
3 rows fetched.
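The pseudocolumns described under Usage can be added to the same query on the country table. The following sketch combines LEVEL, CONNECT_BY_ISLEAF, and NOCYCLE; output is not shown because it depends on the inserted data.

```sql
-- LEVEL gives the depth of each row (1 for the START WITH roots);
-- CONNECT_BY_ISLEAF is 1 for rows with no children under the
-- CONNECT BY condition. NOCYCLE guards against loops in the data.
SELECT LEVEL, CONNECT_BY_ISLEAF, area_name, country_name
FROM country
START WITH area_id = 2
CONNECT BY NOCYCLE PRIOR area_id = country_id;
```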
3.11.10 GROUP BY
The GROUP BY clause is very important for database queries. It collects data
across multiple records and groups the results by one or more columns.
Syntax
GROUP BY { column_name | expression } [ , ... ]
Usage
Expressions in the GROUP BY clause can be any columns from the table or view in
the FROM clause, regardless of whether the columns appear in the SELECT list.
The GROUP BY clause groups rows, but does not guarantee the order in result
sets. To sort groups, use the ORDER BY clause.
Examples
Query for the total number of employees in each department by using section_id
for grouping.
-- Delete the staffs table, if any.
DROP TABLE IF EXISTS staffs;
-- Create the staffs table.
CREATE TABLE staffs
(
staff_id NUMBER(6) not null,
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
employment_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
section_id NUMBER(4),
graduated_name VARCHAR2(60)
);
-- Insert data.
INSERT INTO staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id) VALUES (198, 'Donald', 'OConnell', 'DOCONNEL',
'650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'), 'SH_CLERK', 2200.00, null, 124, 50);
INSERT INTO staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id) VALUES (198, 'Donald', 'OConnell', 'DOCONNEL',
'650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'), 'SH_CLERK', 2400.00, null, 124, 50);
INSERT INTO staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id) VALUES (198, 'Donald', 'OConnell', 'DOCONNEL',
'650.507.9833', to_date('21-06-1999', 'dd-mm-yyyy'), 'SH_CLERK', 2600.00, null, 124, 50);
INSERT INTO staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id) VALUES (199, 'Douglas', 'Grant', 'DGRANT', '650.507.9844',
to_date('13-01-2000', 'dd-mm-yyyy'), 'SH_CLERK', 4000.00, null, 124, 50);
INSERT INTO staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id) VALUES (199, 'Douglas', 'Grant', 'DGRANT', '650.507.9844',
to_date('13-01-2000', 'dd-mm-yyyy'), 'SH_CLERK', 4200.00, null, 124, 50);
INSERT INTO staffs (staff_id, first_name, last_name, email, phone_number, hire_date, employment_id,
salary, commission_pct, manager_id, section_id) VALUES (199, 'Douglas', 'Grant', 'DGRANT', '650.507.9844',
to_date('13-01-2000', 'dd-mm-yyyy'), 'SH_CLERK', 4400.00, null, 124, 50);
COMMIT;
-- Query for the total number of employees in each department by using section_id for grouping.
SELECT section_id, COUNT(staff_id) FROM staffs GROUP BY section_id ORDER BY section_id;
SECTION_ID COUNT(STAFF_ID)
---------------------------------------- --------------------
50 6
1 rows fetched.
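The grouping behavior can be sketched with Python's sqlite3 module (an illustrative sketch, not GaussDB code). The six section-50 rows mirror the inserts above; the section 60 and 100 rows are extra, hypothetical rows added so that several groups appear.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staffs (staff_id INT, section_id INT)")
# Six section-50 rows as in the example above, plus hypothetical rows
# in sections 60 and 100 so that grouping produces several groups.
conn.executemany("INSERT INTO staffs VALUES (?, ?)", [
    (198, 50), (198, 50), (198, 50), (199, 50), (199, 50), (199, 50),
    (201, 60), (202, 60), (203, 100),
])
# GROUP BY collapses the rows into one result row per section_id;
# the ORDER BY makes the group order deterministic, as GROUP BY alone
# does not guarantee any order.
counts = conn.execute(
    "SELECT section_id, COUNT(staff_id) FROM staffs "
    "GROUP BY section_id ORDER BY section_id").fetchall()
```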
3.11.11 HAVING
The HAVING clause is used to filter data after grouping. HAVING returns groups
that satisfy the specified condition. The HAVING clause relies on GROUP BY.
Syntax
HAVING condition
How to Use
Specify GROUP BY and HAVING after the WHERE clause and any hierarchical
query clause. If both GROUP BY and HAVING are specified, they can appear in
either order.
Examples
Query for the total number of employees in departments with more than three
employees.
-- Delete the bonuses_depa1 table, if any.
DROP TABLE IF EXISTS bonuses_depa1;
-- Create the bonuses_depa1 table.
CREATE TABLE bonuses_depa1(section_id INT NOT NULL, staff_id INT NOT NULL, staff_name CHAR(50),
job VARCHAR(30), bonus NUMBER);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus)
VALUES(2,23,'wangxiayu','developer',9000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus) VALUES(2,23,'wangxia','developer',
5000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus) VALUES(2,24,'limingying','tester',
9000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus) VALUES(2,25,'liulili','quality
control',8000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus) VALUES(4,29,'liuxue','tester',7000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus) VALUES(4,21,'caoming','document
developer',7400);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus) VALUES(4,28,'caochun','tester',
7300);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus) VALUES(4,29,'caoxi','tester',7700);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, staff_id, staff_name, job, bonus) VALUES(1,30,'caoxixi','tester',7200);
-- Commit the transaction.
COMMIT;
-- Query for the total number of employees in departments with more than three employees.
SELECT section_id, COUNT(staff_id) FROM bonuses_depa1 GROUP BY section_id HAVING COUNT(staff_id)
> 3;
SECTION_ID COUNT(STAFF_ID)
------------ --------------------
4 4
2 4
2 rows fetched.
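The distinction that HAVING filters groups (while WHERE filters rows before grouping) can be reproduced with Python's sqlite3 module as an illustrative sketch; the section/staff pairs match the inserts above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bonuses_depa1 (section_id INT, staff_id INT)")
# The same section/staff pairs as in the example above.
conn.executemany("INSERT INTO bonuses_depa1 VALUES (?, ?)", [
    (2, 23), (2, 23), (2, 24), (2, 25),
    (4, 29), (4, 21), (4, 28), (4, 29),
    (1, 30),
])
# HAVING is evaluated after grouping: section 1, with only one employee,
# forms a group but is then dropped by the HAVING condition.
big_sections = conn.execute(
    "SELECT section_id, COUNT(staff_id) FROM bonuses_depa1 "
    "GROUP BY section_id HAVING COUNT(staff_id) > 3 "
    "ORDER BY section_id").fetchall()
```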
3.11.12 ORDER BY
The ORDER BY clause is used to sort rows returned by a query according to a
specified column. If there is no ORDER BY, a repeated query may retrieve data in
different orders.
Syntax
ORDER [SIBLINGS] BY { column_name | number | expression } [ ASC | DESC ][ NULLS FIRST | NULLS LAST ]
[ , ... ]
How to Use
By default, the ORDER BY clause sorts records in ascending order. If a descending
order is needed, use the keyword DESC.
ORDER SIBLINGS BY specifies the columns used for sorting sibling nodes.
The NULLS FIRST | NULLS LAST keyword specifies the position of NULL values in
the ORDER BY sorting. FIRST indicates that NULL values are placed before non-
NULL values and LAST indicates that NULL values are placed after non-NULL
values. If this keyword is not specified, NULLS FIRST is used in ASC mode and
NULLS LAST is used in DESC mode by default.
Examples
● Query for the bonus information of different jobs in each department. Ensure
that the query results are first sorted by job in ascending order and then by
section_name in descending order.
-- Delete the bonuses_depa1 table, if any.
DROP TABLE IF EXISTS bonuses_depa1;
-- Create the bonuses_depa1 table.
CREATE TABLE bonuses_depa1(section_id INT NOT NULL, section_name VARCHAR(50), staff_id INT NOT
NULL, staff_name CHAR(50), job VARCHAR(30), bonus NUMBER);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, section_name,staff_id, staff_name, job, bonus)
VALUES(1,'development',23,'wangxia','developer',5000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, section_name,staff_id, staff_name, job, bonus) VALUES(2,'test',
24,'limingying','tester',7000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, section_name,staff_id, staff_name, job, bonus) VALUES(3,'quality',
25,'liulili','quality control',8000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, section_name,staff_id, staff_name, job, bonus) VALUES(2,'test',
29,'liuxue','tester',8000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(section_id, section_name,staff_id, staff_name, job, bonus) VALUES(4,'products',
21,'caoming','document developer',11000);
-- Commit the transaction.
COMMIT;
-- Query for the bonus information of different jobs in each department. Ensure that the query results are
first sorted by job in ascending order and then by section_name in descending order.
SELECT section_name, job, bonus FROM bonuses_depa1 ORDER BY job, section_name DESC;
5 rows fetched.
-- Query for the bonus information of different jobs in each department. Ensure that the query results are
first sorted by job in ascending order and then by section_name in descending order. In the ORDER BY
clause, use numbers to specify the positions of columns for sorting. A number indicates the corresponding
position in the query column list.
SELECT section_name, job, bonus FROM bonuses_depa1 ORDER BY 2, 1 DESC;
5 rows fetched.
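The positional form of ORDER BY, and one way to obtain NULLS LAST behavior in dialects without that keyword, can be sketched with Python's sqlite3 module (illustrative only; the data mirrors the example above, and the extra 'intern' row with a NULL bonus is hypothetical).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bonuses (section_name TEXT, job TEXT, bonus REAL)")
conn.executemany("INSERT INTO bonuses VALUES (?, ?, ?)", [
    ("development", "developer", 5000), ("test", "tester", 7000),
    ("quality", "quality control", 8000), ("test", "tester", 8000),
    ("products", "document developer", 11000),
])

# ORDER BY 2, 1 DESC: the numbers refer to positions in the select list
# (2 = job ascending, 1 = section_name descending), so the two queries agree.
by_name = conn.execute("SELECT section_name, job, bonus FROM bonuses "
                       "ORDER BY job, section_name DESC").fetchall()
by_position = conn.execute("SELECT section_name, job, bonus FROM bonuses "
                           "ORDER BY 2, 1 DESC").fetchall()

# Older SQLite versions lack the NULLS LAST keyword; an extra sort key
# (bonus IS NULL) pushes NULL bonuses to the end, emulating NULLS LAST.
conn.execute("INSERT INTO bonuses VALUES ('temp', 'intern', NULL)")
nulls_last = conn.execute("SELECT job, bonus FROM bonuses "
                          "ORDER BY (bonus IS NULL), bonus").fetchall()
```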
3.11.13 WITH AS
Description
The WITH AS clause defines a named SQL fragment that the rest of the SQL
statement can reference.
This makes SQL statements more readable. The table that stores the SQL
fragment differs from a base table: it is a virtual table, similar to a view. Its
definition and data are not stored in the database; the data remains in the
base table, so if data in the base table changes, the data returned for the
virtual table changes accordingly.
Syntax
WITH { table_name AS select_statement1 }[ , ...] select_statement2
Parameter Description
● table_name
Specifies the name of a user-defined table that stores SQL fragments.
● select_statement1
Specifies the SELECT statement that queries data from a base table.
● select_statement2
Specifies the SELECT statement that queries data from a user-defined table
that stores SQL fragments.
Examples
Use WITH AS to query data.
-- Delete the education table.
DROP TABLE IF EXISTS education;
-- Create the education table.
CREATE TABLE education(staff_id INT, highest_degree CHAR(8) NOT NULL, graduate_school
VARCHAR(64),graduate_date DATETIME, education_note VARCHAR(70));
-- Insert record 1 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(10,'Doctor','Xidian University','2017-07-06 12:00:00','211');
-- Insert record 2 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(11,'Master','Northwestern Polytechnical University','2017-07-06 12:00:00','211&985');
-- Insert record 3 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(12,'Scholar','Xi''an University of Architecture and Technology','2017-07-06 12:00:00','not 211 or 985');
-- Commit the transaction.
COMMIT;
-- Use WITH AS to query data.
WITH tmp AS (SELECT staff_id, highest_degree FROM education) SELECT * FROM tmp;
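The claim above, that the virtual table reflects changes to the base table, can be checked with Python's sqlite3 module as an illustrative sketch (the staff_id 13 row is a hypothetical addition, not part of the example data).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE education (staff_id INT, highest_degree TEXT)")
conn.executemany("INSERT INTO education VALUES (?, ?)",
                 [(10, "Doctor"), (11, "Master"), (12, "Scholar")])

# The WITH clause names a query fragment; it stores no data of its own.
query = ("WITH tmp AS (SELECT staff_id, highest_degree FROM education) "
         "SELECT * FROM tmp")
before = conn.execute(query).fetchall()

# Because tmp is evaluated against the base table on each execution,
# a newly inserted base-table row appears the next time the statement runs.
conn.execute("INSERT INTO education VALUES (13, 'Bachelor')")
after = conn.execute(query).fetchall()
```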
3.11.14 LIMIT
The LIMIT clause allows users to limit the rows returned by a query. Users can
also specify the offset and the number or percent of rows to be returned. This
clause helps generate top N reports. In addition, the ORDER BY clause can be
used to ensure ordered data in the result set to be returned.
Syntax
[ LIMIT [ start, ] count | LIMIT count OFFSET start | OFFSET start[ LIMIT count ] ]
How to Use
● start: Specifies the number of rows to skip before the first row is returned.
● count: Specifies the maximum number of rows to return.
If both start and count are specified, the rows specified by start are skipped
before the rows specified by count are returned, as the first example below
illustrates.
The keyword OFFSET can be used instead of the comma form, as the second
example below illustrates. For instance, LIMIT 5,20 is equivalent to LIMIT 20
OFFSET 5 and to OFFSET 5 LIMIT 20.
Examples
Make two queries, the first excluding LIMIT and the second including LIMIT, and
then compare the query results. Both query for information about the employees
whose bonuses exceed 7000.
-- Query for information about the employees whose bonuses exceed 7000, without LIMIT.
SELECT staff_name, job, bonus FROM bonuses_depa1 WHERE bonus > 7000;
5 rows fetched.
-- Query for information about the employees whose bonuses exceed 7000. Use LIMIT 2,2 to skip the first 2
rows and return the next 2 rows.
SELECT staff_name, job, bonus FROM bonuses_depa1 WHERE bonus > 7000 LIMIT 2,2;
2 rows fetched.
-- Query for information about the employees whose bonuses exceed 7000. Use LIMIT 2 OFFSET 2 to skip
the first 2 rows and return the next 2 rows. The result is the same as that of LIMIT 2,2.
SELECT staff_name, job, bonus FROM bonuses_depa1 WHERE bonus > 7000 LIMIT 2 OFFSET 2;
2 rows fetched.
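The equivalence between the comma form and the OFFSET form can be verified with Python's sqlite3 module as an illustrative sketch (SQLite accepts LIMIT offset,count and LIMIT count OFFSET offset, like GaussDB; the OFFSET-first spelling is GaussDB-specific and not shown here).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(30)])

# LIMIT 5,20 (offset first, count second) and LIMIT 20 OFFSET 5 return the
# same window: rows 5 through 24 of the ordered result.
comma_form = conn.execute(
    "SELECT n FROM t ORDER BY n LIMIT 5, 20").fetchall()
offset_form = conn.execute(
    "SELECT n FROM t ORDER BY n LIMIT 20 OFFSET 5").fetchall()
```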
3.11.15 FOR UPDATE
Syntax
FOR UPDATE
How to Use
The FOR UPDATE clause can only be specified in the top-level SELECT statement,
instead of a subquery.
Examples
In the following example, use the FOR UPDATE clause to lock the records that
meet the query condition.
-- Delete the education table, if any.
DROP TABLE IF EXISTS education;
-- Create the education table.
CREATE TABLE education(staff_id INT, highest_degree CHAR(8) NOT NULL, graduate_school VARCHAR(64),
graduate_date DATETIME, education_note VARCHAR(70));
-- Insert record 1 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(10,'doctor','Xidian University','2017-07-06 12:00:00','211');
-- Insert record 2 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(11,'master','Northwestern Polytechnical University','2017-07-06 12:00:00','211&985');
-- Insert record 3 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(12,'scholar','Peking University','2017-07-06 12:00:00','211&985');
-- Commit the transaction.
COMMIT;
-- Use the FOR UPDATE clause to lock the records whose highest_degree is doctor in the education table.
SELECT staff_id, highest_degree FROM education WHERE highest_degree='doctor' FOR UPDATE;
STAFF_ID HIGHEST_DEGREE
------------ -------------
10 doctor
1 rows fetched.
3.12.1 BACKUP
Description
BACKUP physically backs up a database.
Precautions
● This statement can be executed only in the OPEN database state.
● Start the database in the NOMOUNT state and then manually switch to the
OPEN state to run BACKUP.
-- After the database is started in the NOMOUNT state, log in to the database as user SYS.
zsql / as SYSDBA;
-- Switch to the OPEN database mode. test is the database name.
ALTER DATABASE MOUNT;
ALTER DATABASE test archivelog;
ALTER DATABASE OPEN;
Syntax
BACKUP DATABASE { FULL | INCREMENTAL LEVEL level [CUMULATIVE]}
{ FORMAT 'dest_format' }
[ AS [ZLIB | ZSTD | LZ4] COMPRESSED BACKUPSET [LEVEL compress_level]]
[ TAG 'tag' ] [ PARALLELISM count ] [ SECTION THRESHOLD size ]
[ EXCLUDE FOR TABLESPACE space_list]
Parameter Description
● FULL
Specifies a full backup.
● INCREMENTAL LEVEL level
Specifies an incremental backup level.
The value can be 0 or 1.
0 indicates a full backup. 1 indicates an incremental backup based on the
previous backup.
● CUMULATIVE
Specifies an accumulative incremental backup.
● FORMAT
Specifies the path of a backup set.
● dest_format
Specifies a backup path format.
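The difference between differential level-1 backups and a CUMULATIVE level-1 backup can be sketched with a toy model (plain Python, not GaussDB code; the backup names and chain structure are purely illustrative): a differential backup depends on every backup since the last level-0, while a cumulative backup collapses that chain.

```python
# Toy model of a backup chain: (name, level, cumulative) tuples, oldest first.
backups = [
    ("full0", 0, False),  # level 0: full backup
    ("incr1", 1, False),  # level 1, differential: changes since previous backup
    ("incr2", 1, False),
    ("cum3",  1, True),   # level 1, CUMULATIVE: changes since the level-0 backup
]

def restore_set(chain):
    """Walk backwards through the chain collecting what a restore needs."""
    needed = []
    for pos in range(len(chain) - 1, -1, -1):
        name, level, cumulative = chain[pos]
        needed.append(name)
        if level == 0:
            break  # reached the full backup; nothing older is needed
        if cumulative:
            # A cumulative backup replaces every differential taken since the
            # level-0 base, so jump straight to that base.
            base = next(n for n, lvl, _ in reversed(chain[:pos]) if lvl == 0)
            needed.append(base)
            break
    return list(reversed(needed))
```

Restoring after cum3 needs only the level-0 backup plus cum3, while restoring after incr2 needs the whole differential chain.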
Examples
● Fully back up data to disks.
BACKUP DATABASE FULL FORMAT '?/full0822.bak';
● Incrementally back up data to disks.
--Set LEVEL to 0 to perform a full backup.
BACKUP DATABASE INCREMENTAL LEVEL 0 FORMAT '?/incr.bak' tag 'incr0822_bak';
-- Set LEVEL to 1 to perform an incremental backup based on the previous backup.
BACKUP DATABASE INCREMENTAL LEVEL 1 FORMAT '?/incr0823.bak' tag 'incr0823_bak';
● Perform an accumulative incremental backup.
-- Perform an accumulative incremental backup based on the previous full backup.
BACKUP DATABASE INCREMENTAL LEVEL 1 CUMULATIVE FORMAT '?/incr0824.bak' tag
'incr0824_bak';
● Perform a backup by using a compression algorithm.
-- Perform a full backup by using a compression algorithm.
BACKUP DATABASE FULL FORMAT '?/full001.bak' tag 'full001_bak' as compressed backupset;
-- Perform an incremental backup by using a compression algorithm.
BACKUP DATABASE INCREMENTAL LEVEL 0 FORMAT '?/incr001.bak' tag 'incr001_bak' as compressed
backupset;
-- Perform a full backup by using the ZSTD compression algorithm.
BACKUP DATABASE FULL FORMAT '?/fullzstd.bak' as zstd compressed backupset;
-- Perform a full backup by using the LZ4 compression algorithm.
BACKUP DATABASE FULL FORMAT '?/fulllz4.bak' as lz4 compressed backupset;
● Perform parallel backup.
-- The database automatically calculates the maximum size of each backup file. The number of
concurrent threads is 6.
BACKUP DATABASE FULL FORMAT '?/full011.bak' tag 'full011_bak' PARALLELISM 6;
-- The maximum size of each backup file is set to 1 GB and the number of concurrent threads is 2.
BACKUP DATABASE FULL FORMAT '?/full012.bak' tag 'full012_bak' PARALLELISM 2 SECTION
THRESHOLD 1G;
● Back up tablespaces excluding the specified ones.
-- Back up tablespaces excluding spc1.
BACKUP DATABASE FORMAT '?/exclude_bak1' EXCLUDE FOR TABLESPACE spc1;
-- Back up tablespaces excluding spc1 and spc2.
BACKUP DATABASE FORMAT '?/exclude_bak2' EXCLUDE FOR TABLESPACE spc1,spc2;
3.12.2 BUILD
Syntax
BUILD [CASCADED STANDBY | STANDBY] DATABASE
[COMPRESS [ ZLIB | ZSTD |LZ4 ] [ LEVEL n ] ]
Parameter Description
● CASCADED STANDBY
Rebuilds a cascaded standby database.
● STANDBY
Rebuilds a standby database.
● COMPRESS [ ZLIB | ZSTD |LZ4 ]
Compresses the logs and data sent from the primary database when a
standby database is rebuilt.
[ZLIB | ZSTD | LZ4] indicates the compression algorithm. If this parameter is
not specified, the default value ZSTD is used. ZSTD is recommended.
– ZLIB: The compression ratio of ZLIB is slightly lower than that of ZSTD
and higher than that of LZ4, but its compression speed is far lower than
that of ZSTD.
– ZSTD: The compression ratio of ZSTD is higher than that of LZ4, but its
compression speed is slightly lower.
– LZ4: The compression ratio of LZ4 is lower than that of ZSTD, but its
compression speed is higher.
● LEVEL n
Specifies a compression level within the value range [1, 9].
If the compression level is not specified, the default level 1 is used.
If CASCADED STANDBY and STANDBY are not specified, the role of the rebuilt database is
determined by the role of the peer database. If the peer is a primary database, the role of
the rebuilt database is standby. If the peer is a standby database, the role of the rebuilt
database is cascaded-standby.
Examples
● Rebuild a database with the compression algorithm set to ZSTD and
compression level set to 1.
build database compress zstd level 1;
Succeed.
3.12.3 DUMP
Description
During database migration or data backup, you need to import and export data.
GaussDB 100 allows you to run the DUMP statement to export data.
Syntax
DUMP { TABLE table_name | QUERY "select_query" }
INTO FILE 'file_name'
[ FILE SIZE 'uint64_file_size' ]
[ { FIELDS | COLUMNS } ENCLOSED BY 'ascii_char' [ OPTIONALLY ] ]
[ { FIELDS | COLUMNS } TERMINATED BY 'string' ]
[ { LINES | ROWS } TERMINATED BY 'string' ]
[ CHARSET 'string' ];
Parameter Description
● table_name
Specifies the name of a table whose data is to be exported.
● select_query
Specifies the records to be exported. select_query is a SELECT clause.
● file_name
Specifies the name of a file to store exported data.
● uint64_file_size
Specifies the size of each file storing exported data. If a file is full, a new file
will be created to store data. The default value is 0, indicating that no new
file will be created.
● FIELDS
Specifies the format of each column.
● COLUMNS
Specifies the format of each column. It is equal to FIELDS.
● ENCLOSED BY
Encloses column values with a pair of characters.
● ascii_char
Specifies the characters used to enclose each column value. For example, in
"ABC", the value is ABC and the characters used to enclose the value are
double quotation marks (""). By default, the characters are not specified.
Value range: a single ASCII character or an empty string (''), which
indicates that no characters are specified.
– Decimal ASCII characters range from 0 to 127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 3-22.
● OPTIONALLY
Encloses only character and binary data. By default, OPTIONALLY is not used.
● TERMINATED BY
Separates columns with delimiters.
● string
Specifies a column delimiter. The default value is a comma (,).
Value range: a single ASCII character
– Decimal ASCII characters range from 0 to 127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 3-22.
● LINES
Separates rows with delimiters if a record contains multiple rows.
● ROWS
It is a synonym of LINES.
string
Specifies a row delimiter. The default value is \n.
Value range: a single ASCII character
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 3-22.
● CHARSET
Specifies the character set to be exported.
string
Currently, only the UTF8 (without BOM) character set (CHARSET = UTF8)
and GBK character set (CHARSET = GBK) are supported. The former is used
by default.
The single quotation mark (') is an escape character of SQL. If you need to use it as a
delimiter, use two single quotation marks ('') to represent it. For example:
.... enclosed by ''''....
The outer two single quotation marks are used to enclose the inner ones ('') that represent
a single quotation mark (').
If the character specified by ascii_char is included in the column value, the character will be
escaped again when the column value is exported. For example, if the specified character is
the double quotation mark (") which is included in the column value 1"1, the value will be
exported as "1""1".
Examples
● Export the training table, separating columns with vertical bars (|).
-- Delete the training table.
DROP TABLE IF EXISTS training;
-- Create the training table.
CREATE TABLE training(staff_id INT NOT NULL,course_name CHAR(50),course_start_date DATETIME,
course_end_date DATETIME,exam_date DATETIME,score INT);
-- Insert record 1 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'SQL majorization','2017-06-15 12:00:00','2017-06-20 12:00:00','2017-06-25 12:00:00',90);
-- Insert record 2 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'information safety','2017-06-20 12:00:00','2017-06-25 12:00:00','2017-06-26 12:00:00',95);
-- Insert record 3 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'master all kinds of thinking methonds','2017-07-15 12:00:00','2017-07-20
12:00:00','2017-07-25 12:00:00',97);
-- Commit the transaction.
COMMIT;
-- Export the training table, separating columns with vertical bars (|).
DUMP TABLE training INTO FILE '/home/gaussdba/data/training_backup' FIELDS ENCLOSED BY '|';
● Export columns from training table with a condition being specified on the
course_name column, enclosing each column value with a pair of single
quotation marks ('') and separating columns with vertical bars (|).
DUMP QUERY "SELECT course_name,score,exam_date FROM training WHERE course_name = 'SQL
majorization'"
INTO FILE '/home/gaussdba/data/training_query_backup'
COLUMNS ENCLOSED BY ''''
COLUMNS TERMINATED BY '|';
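The ENCLOSED BY / TERMINATED BY behavior described above, including the doubling of an enclosure character that appears inside a value (1"1 exported as "1""1"), can be reproduced with Python's csv module as an illustrative sketch; the rows below are made up for the demonstration, not the actual training table.

```python
import csv
import io

# Illustrative rows; the second row's first value contains the enclosure
# character itself, to show the doubling behavior.
rows = [
    ("SQL majorization", 90, "2017-06-25 12:00:00"),
    ('1"1', 95, "2017-06-26 12:00:00"),
]

buf = io.StringIO()
# delimiter="|"  plays the role of COLUMNS TERMINATED BY '|'.
# quotechar='"' with doublequote=True mirrors ENCLOSED BY '"': an embedded
# double quotation mark is escaped by doubling it, so 1"1 becomes "1""1".
writer = csv.writer(buf, delimiter="|", quotechar='"',
                    doublequote=True, quoting=csv.QUOTE_ALL,
                    lineterminator="\n")  # LINES TERMINATED BY '\n'
writer.writerows(rows)
dumped = buf.getvalue()
```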
3.12.4 EXP
Description
EXP logically exports data from a database.
Precautions
● When EXP is used to export data, SQL statements are assembled on the client
before being sent to a server, which then receives complete, specific DDL and
DML statements.
● The client operation log is specified by the LOG parameter and records the
EXP command.
● The server audit log {GSDB_DATA}/log/audit records the DDL and DML
statements.
Syntax
{EXP | EXPORT} [ keyword=param [ , ... ] ] [ ... ];
Parameters
● EXP
Specifies the command for logical export. It is equivalent to EXPORT.
● keyword
Specifies the keyword for logical export.
– USERS
Specifies users whose data is to be exported. Multiple users are separated
by commas (,), and % indicates all users.
No user can export data of user SYS, but user SYS can export the data of
other users. Common users must have the DBA role to export data of
specified users. Common users can export their own data only when they
have the SELECT ANY TABLE or READ ANY TABLE permission.
– TABLES
Specifies the tables to be exported. Multiple tables are separated by
commas (,), and % indicates all tables.
– DIST_RULES
Specifies distribution rules of exported data. Multiple rules are separated
by commas (,), and % indicates all rules.
This parameter is used only when GaussDB 100 is deployed in distributed
mode.
– FILE
Specifies the file that stores exported data. The value is a file name and
its full path, enclosed with double quotation marks (""). If the path is not
specified, the file will be stored in the current directory where the
command is executed.
– FILETYPE
Specifies the type of the files that store exported data.
– SKIP_ADD_DROP_TABLE
Specifies whether to add DROP to the statement before exporting a
table.
Valid value:
▪ Y: Do not add.
▪ N: Add.
Default value: N
– SKIP_TRIGGERS
Specifies whether to export triggers.
Valid value:
▪ Y: Do not export.
▪ N: Export.
Default value: N
– QUOTE_NAMES
Specifies whether to enclose exported objects with double quotation
marks ("").
Valid value:
▪ Y: Enclose.
▪ N: Do not enclose.
Default value: N
– COMMIT_BATCH
Specifies the amount of data to be batch submitted.
Value range: natural number. 0 indicates that all the data in a table is
committed at one time.
Default value: 1000
– INSERT_BATCH
Specifies the amount of data inserted by a single INSERT statement.
Value range: natural number
Default value: 1
– FEEDBACK
Specifies how many records need to be exported to trigger the display of
the export progress.
Value range: natural number. 0 indicates that the progress is displayed
once for a table.
Default value: 10000, indicating the progress is displayed when 10000
records are exported.
– PARALLEL
Specifies the number of concurrent threads.
Value range: natural number
– CREATE_USER
Specifies whether to export user definition statements, that is, DDL
statements used for creating users. This keyword must be used in
conjunction with USERS.
Valid value:
▪ Y: Export.
▪ N: Do not export.
Default value: N
– ROLE
Specifies whether to export role (non-SYS) definition statements, that is,
DDL statements used for creating roles. This keyword must be used in
conjunction with USERS.
Valid value:
▪ Y: Export.
▪ N: Do not export.
Default value: N
– GRANT
Specifies whether to export GRANT statements of users or roles. This
keyword must be used in conjunction with USERS and ROLE.
Valid value:
▪ Y: Export.
▪ N: Do not export.
Default value: N
– TABLESPACE
Specifies whether to export all tablespaces. Currently, exporting all
tablespaces allows for only those created by users, excluding system-
reserved ones. The file storage directory is the same as the default
tablespace path of the system.
Valid value:
▪ Y: Export.
▪ N: Do not export.
Default value: N
– TABLESPACE_FILTER
Specifies the filter for specifying tablespaces. Multiple tablespaces are
separated by commas (,). The specified tablespace is used only for
filtering. No creation statement is generated. The symbol % is not
supported, that is, filtering all tablespaces is not supported.
– WITH_CR_MODE
Specifies whether to add CR_MODE for exporting tables and index
scripts.
Valid value:
▪ Y: Add.
▪ N: Do not add.
Default value: N
Examples
● Export data from tab1 and tab2 of the current user.
-- Delete the tab1 and tab2 tables.
DROP TABLE IF EXISTS tab1;
DROP TABLE IF EXISTS tab2;
-- Create the tab1 table.
CREATE TABLE tab1(ID INT NOT NULL,score INT,COMMENT1 VARCHAR(2000));
-- Insert data into the tab1 table.
INSERT INTO tab1 VALUES (1,92,'Test');
INSERT INTO tab1 VALUES (2,98,'Security');
INSERT INTO tab1 VALUES (3,95,'Development');
INSERT INTO tab1 VALUES (4,97,'O&M');
-- Commit the transaction.
COMMIT;
-- Create the tab2 table.
CREATE TABLE tab2(ID INT NOT NULL,score INT,COMMENT2 VARCHAR(2000));
-- Insert data into the tab2 table.
INSERT INTO tab2 VALUES (11,93,'Test suggestions');
INSERT INTO tab2 VALUES (12,98,'Security specifications');
INSERT INTO tab2 VALUES (13,93,'Development and maintenance');
INSERT INTO tab2 VALUES (14,96,'O&M');
-- Commit the transaction.
COMMIT;
-- Export data from the tab1 and tab2 tables.
EXP TABLES=tab1,tab2 FILE="file1.dmp";
-- Export user test_user, role test_role, and table structure information of the user:
EXP USERS = TEST_USER CONTENT = METADATA_ONLY CREATE_USER = Y ROLE = Y GRANT = Y
FILE = "file1.dmp";
● Export tablespaces.
EXP USERS = TEST_USER CONTENT = METADATA_ONLY TABLESPACE= Y FILE = "file1.dmp";
3.12.5 IMP
Description
IMP logically imports data to a database.
Precautions
● When IMP is used to import data, SQL statements are assembled on the
client before being sent to a server, which then receives complete, specific
DDL and DML statements.
● The client operation log is specified by the LOG parameter and records the
IMP command.
● The server audit log {GSDB_DATA}/log/audit records the DDL and DML
statements.
● When you import a .bin file and set content to DATA_ONLY or
METADATA_ONLY, the exported file must use the same content value.
● If IMP uses the -h, help, or option parameter and ends with a semicolon (;)
or slash (/), the help information about IMP will be displayed.
● User SYS cannot be used to logically import data.
● When FILETYPE is TXT, a maximum of 8 KB data of the CLOB, BLOB, TEXT, or
IMAGE type can be imported.
● If FILETYPE is BIN, user, table, and remap cannot be selected. You can only
fully import an exported file.
● IMP can import a file from the database of an earlier version to the current
database.
● If a file with the same name exists in the target directory when data is
logically imported, the system overwrites the existing file without any prompt.
Syntax
{IMP | IMPORT} [ keyword=param [ , ... ] ] [ ... ];
Parameters
● IMP
Specifies the command for logical import. It is equivalent to IMPORT.
● keyword
Specifies the keyword for logical import.
– USERS
Specifies the users whose data is to be imported. Multiple users are
separated by commas (,).
– CONTENT
Specifies whether to import table data, table definitions, or both.
Valid value:
▪ DATA_ONLY: Import table data only.
▪ METADATA_ONLY: Import table definitions only.
– REMAP_TABLESPACE
Specifies tablespace mapping. For example, to import data from
tablespace A to tablespace B, REMAP_TABLESPACE will be A: B. Use
commas (,) to separate multiple mapping relationships.
– CREATE_USER
Specifies whether to import user definition statements, that is, DDL
statements for creating users.
Valid value:
▪ Y: Import.
▪ N: Do not import.
Default value: N
– PARALLEL
Specifies the number of parallel DML statements.
Value range: [1,32]
Default value: 1
– DDL_PARALLEL
Specifies the number of parallel DDL statements.
Value range: [1,32]
Default value: 1
– NOLOGGING
Specifies whether to enable nologging of redo logs. Only the bin format
is supported.
Valid value:
▪ Y: Enable.
▪ N: Disable.
Default value: N
– TIMING
Specifies whether to print the timing statistics about the import.
Valid value:
▪ ON: Print.
▪ OFF: Do not print.
Examples
● Import data from tab1 and tab2 of the current user.
IMP TABLES=tab1,tab2 FILE="file1.dmp";
3.12.6 LOAD
Description
During database migration or data backup, you need to import and export data.
GaussDB 100 allows you to run the LOAD statement to import data.
Syntax
LOAD DATA INFILE "file_name" INTO TABLE table_name
[{ FIELDS | COLUMNS } ENCLOSED BY 'ascii_char' [ OPTIONALLY ]]
[{ FIELDS | COLUMNS } TERMINATED BY 'string']
[{ LINES | ROWS } TERMINATED BY 'string']
[ TRAILING COLUMNS( COLUMN1[ , COLUMN2, ... ] ) ]
[ IGNORE uint64_num { LINES | ROWS }]
[ CHARSET string ]
[ THREADS uint32_threads ]
[ ERRORS uint32_num ]
[ NOLOGGING ]
[ NULL2SPACE ]
[ DEBUG ];
Parameter Description
● file_name
Specifies the path and name of a file to be imported.
● table_name
Specifies the name of a table to store imported data.
● FIELDS
Specifies the format of each column.
● COLUMNS
Specifies the format of each column. It is equal to FIELDS.
● ENCLOSED BY
Encloses column values with a pair of characters.
● ascii_char
Specifies the characters used to enclose each column value. By default, no
characters are specified.
Value range: a single ASCII character, or an empty string ('') which indicates
that no characters are specified.
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 3-23.
● OPTIONALLY
Encloses only character and binary data. By default, they are enclosed with a
pair of single quotation marks ('').
● TERMINATED BY
Separates columns with delimiters.
string
Specifies a column delimiter. The default value is a comma (,).
Value range: one or more ASCII characters. A maximum of 10 characters are
allowed.
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 3-23.
● LINES
Separates rows with delimiters if a record contains multiple rows.
● ROWS
It is a synonym of LINES.
string
Specifies a row delimiter. The default value is \n.
Value range: a single ASCII character
– The value range of ASCII characters in decimal notation is 0-127.
– Hexadecimal ASCII characters range from \x00 to \x7F.
– Common escape characters are listed in Table 3-23.
● IGNORE
Specifies the number of lines to be ignored.
● uint64_num
Ignores the first uint64_num lines. The default value is 0.
● THREADS
Specifies the number of threads for concurrent data import.
● uint32_threads
Specifies the number of threads for concurrent import. The default value is 1.
Multi-threaded import improves efficiency. When multiple threads are used,
the reported error count may deviate slightly. Detailed information about
records that cause errors is still recorded, and these errors do not affect the
subsequent import.
Value range: [1, 128]
● ERRORS
Specifies the number of SQL statements that are allowed to cause errors.
● uint32_num
Specifies the number of SQL statements that are allowed to cause errors. The
default value is 0.
● NOLOGGING
Does not record redo or undo logs for imported data. This parameter is
available only when the target table is set to append only.
● DEBUG
Prints debugging information generated during tool running to a screen.
● CHARSET
Specifies the character set to be imported.
string
Currently, only the UTF8 (without BOM) character set (CHARSET = UTF8)
and GBK character set (CHARSET = GBK) are supported. The former is used
by default.
● TRAILING COLUMNS( COLUMN1[ , COLUMN2, ... ] )
Specifies the columns to which data is to be imported. COLUMN1[,
COLUMN2, ...] specifies column names and at least one name must be
specified.
● NULL2SPACE
Inserts a space to replace NULL, if an empty value of the CHAR or LOB type is
to be imported and NOT NULL is specified.
The single quotation mark (') is an escape character of SQL. If you need to use it as a
delimiter, use two single quotation marks ('') to represent it. For example:
.... enclosed by ''''....
The outer two single quotation marks are used to enclose the inner ones ('') that represent
a single quotation mark (').
If the character specified by ascii_char is included in the column value, the character will be
escaped again when the column value is exported. For example, if the specified character is
the double quotation mark (") which is included in the column value 1"1, the value will be
exported as "1""1".
Examples
Import data from the training_backup table file to the training_new table.
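A minimal statement for this example could be as follows (the file name and delimiters are illustrative):
LOAD DATA INFILE "training_backup.txt" INTO TABLE training_new FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';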
Description
RECOVER DATABASE replays all logs of the database or replays logs to a specified
time point, restoring database data.
Precautions
● RECOVER DATABASE (excluding RECOVER DATABASE UNTIL CANCEL) can
be executed only after RESTORE DATABASE is executed successfully. If the
RECOVER DATABASE execution fails, the database will fail to be opened.
● RECOVER DATABASE UNTIL CANCEL is executed only when the database
cannot be started due to log damages, and can be executed only in the
MOUNT database state. After running RECOVER DATABASE UNTIL CANCEL,
you must run ALTER DATABASE OPEN RESETLOGS or ALTER DATABASE OPEN
IGNORE LOGS to start the database.
Syntax
RECOVER DATABASE [ UNTIL [TIME 'time_string' | CANCEL] ]
Parameter Description
● UNTIL TIME
Restores the database to the time point specified by time_string.
● time_string
Specifies a time point for replaying database logs.
The format is YYYY-MM-DD HH:MM:SS, accurate to seconds.
● UNTIL CANCEL
Restores the database to the last available log point. It is used to ignore
damaged logs and forcibly start the database when the database cannot be
started due to log damages. This operation may incur data loss or damage
data consistency and can be operated only when the database cannot be
started due to log damages.
Examples
● Fully restore the database (by replaying all logs).
-- Fully back up the database to a disk in the OPEN database state.
BACKUP DATABASE FULL FORMAT '?/full0824.bak';
-- Clear the data directory $GSDB_DATA/data and restore data files in the NOMOUNT database state.
RESTORE DATABASE FROM '?/full0824.bak';
-- Fully restore the database (by replaying all logs).
RECOVER DATABASE;
● Restore the database to the last available log point in the MOUNT database
state, ignoring the subsequent log damages.
● If the database can be restored to CONSISTENT POINT, the data consistency can
be ensured and ALTER DATABASE OPEN RESETLOGS can be executed successfully.
● Otherwise, the data consistency cannot be ensured. In this case, you can start the
database only by running ALTER DATABASE OPEN IGNORE LOGS.
RECOVER DATABASE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
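● Restore the database to a specified time point after RESTORE DATABASE succeeds, using the UNTIL TIME clause described above (the timestamp is illustrative).
RECOVER DATABASE UNTIL TIME '2019-06-06 12:00:00';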
Description
RESTORE DATABASE restores physical files and restores data files from backup
sets.
Precautions
To restore the database, the database must be in the NOMOUNT state, the
required data directory exists, and data files have been cleaned.
Syntax
RESTORE DATABASE FROM path [ DISCONNECT FROM SESSION ] [ PARALLELISM count ] [TABLESPACE
tablespace_name]
Parameter Description
● path
Specifies a backup file path. The value is the same as that of format in the
BACKUP statement.
● DISCONNECT FROM SESSION
Specifies asynchronous execution. The database returns a response
immediately after receiving the RESTORE request. To check whether the
execution is successful, query the status column in DV_BACKUP_PROCESSES.
If this parameter is not specified, synchronous execution is performed by
default. The database returns the result after the execution is complete.
● PARALLELISM count
If the backup media is a disk, concurrent threads can be enabled to restore
data in parallel, improving the restoration performance.
The value of count is an integer within the range of [1, 8]. If the number of
concurrent threads is not specified, four concurrent threads are started by
default.
● TABLESPACE tablespace_name
Specifies a tablespace whose data is to be restored based on full backup.
Examples
● Restore the database from a disk.
-- Fully back up the database to a disk in the OPEN database state.
BACKUP DATABASE FULL FORMAT '?/full0824.bak';
-- Restore the database in the NOMOUNT database state.
RESTORE DATABASE FROM '?/full0824.bak';
● Restore the database from a disk and specify the number of concurrent
threads.
-- Fully back up the database to a disk in the OPEN database state.
BACKUP DATABASE FULL FORMAT '?/full0826.bak';
-- Restore the database in the NOMOUNT database state and set the number of concurrent threads
to 2.
RESTORE DATABASE FROM '?/full0826.bak' PARALLELISM 2;
● Restore the data of a specified tablespace from the full backup file.
-- Fully back up the database to a disk in the OPEN database state.
BACKUP DATABASE FULL FORMAT '?/full0826.bak';
-- Restore the data of the existing tsp tablespace in the NOMOUNT database state.
RESTORE DATABASE FROM '?/full0826.bak' TABLESPACE tsp;
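● Restore the database asynchronously. DISCONNECT FROM SESSION returns immediately, as described above (the backup file name is illustrative).
-- Restore the database in the NOMOUNT database state without waiting for completion.
RESTORE DATABASE FROM '?/full0826.bak' DISCONNECT FROM SESSION;
-- Query the status column in DV_BACKUP_PROCESSES to check whether the restoration is complete.
SELECT * FROM DV_BACKUP_PROCESSES;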
Precautions
● This command is invoked only by ztrst and cannot be executed by users.
● The database must contain all redo logs generated from the time when the
backup started to the time when the disk data page became faulty.
● Data pages of temporary and nologging tablespaces cannot be restored.
● Data page of tablespace description cannot be restored.
Syntax
RESTORE BLOCKRECOVER DATAFILE file_id PAGE page_id FROM backup_path
Parameter Description
● file_id
Specifies the serial number of the data file where the damaged page is
located. The value range is [0,1022].
When a page of the database is damaged, the serial number of the
corresponding data file is recorded in run logs. Therefore, you can obtain the
serial number from run logs.
● page_id
Specifies the serial number of the damaged page in the data file. The
minimum value is 1, and the maximum value is the maximum number of
pages in the data file.
When a page of the database is damaged, the serial number of the damaged
page in the data file is recorded in run logs. Therefore, you can obtain the
serial number from run logs.
● backup_path
Specifies the path of backup files. The value is the same as that of
dest_format in the BACKUP statement.
Examples
Restore a damaged page using a backup set in the MOUNT database state.
RESTORE BLOCKRECOVER DATAFILE 3 PAGE 2 FROM '?/full0824.bak';
3.12.10 SHUTDOWN
Description
SHUTDOWN gracefully shuts down the database. After SHUTDOWN is executed,
TCP listening is stopped. After all session transactions are complete, the main
process is stopped.
Precautions
● An error is returned if the SHUTDOWN statement is executed during
transaction execution.
● An error is returned if the SHUTDOWN statement is executed when the
primary server is demoted to the standby.
● If IMMEDIATE and ABORT are not specified, the NORMAL mode is used by
default. In NORMAL mode, no new connections are allowed after
SHUTDOWN is executed, and the main process is stopped after all
transactions are complete.
Syntax
SHUTDOWN [ IMMEDIATE | ABORT ]
Parameter Description
● IMMEDIATE
Stops receiving connection requests from clients, rolls back incomplete
transactions, and finally stops the main process.
● ABORT
Stops receiving connection requests from clients and immediately stops the
main process.
Examples
Shut down the database immediately.
SHUTDOWN ABORT;
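To roll back incomplete transactions before the main process stops, use the IMMEDIATE mode instead.
SHUTDOWN IMMEDIATE;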
3.12.11 VALIDATE
Description
VALIDATE performs checksum verification on a data page to check whether the
page is damaged. If a success message is returned, the page is undamaged. This
command is invoked only by ztrst and cannot be executed by users.
Precautions
● This command is invoked only by ztrst and cannot be executed by users.
● The database status must be MOUNT or OPEN.
● The value of PAGE_CHECKSUM cannot be OFF.
Syntax
VALIDATE DATAFILE file_id PAGE page_id
Parameter Description
● file_id
Specifies the ID of the data file where the damaged page is located. The value
range is [0, 1022].
● page_id
Specifies the ID of the damaged page in the data file. The minimum value is 1
and the maximum value is the maximum number of pages in the data file.
Examples
Validate page 2 in data file 3.
VALIDATE DATAFILE 3 PAGE 2;
Transaction Management
Savepoint Operations
Savepoint operations include creating and deleting savepoints. For details, see
Table 3-25.
Event Isolation
Granting Permissions
In permission management, you can grant permissions to users or roles, revoke
permissions, create roles, and delete roles.
Table Locking
The SHARE and EXCLUSIVE table locking modes are supported. The following
table lists the related SQL statements.
Shutdown
Defining a Database
A database is a warehouse for organizing, storing, and managing data. Defining a
database includes creating a database and modifying database attributes. The
following table lists the related SQL statements.
Defining a Tablespace
A tablespace is used to manage data objects and corresponds to a catalog on a
disk. The following table lists the related SQL statements.
Defining a Table
A table is a special data structure in a database and is used to store data objects
and the relationships between data objects. The following table lists the related SQL
statements.
Table Flashback
The time in the past to which a table can be flashed back depends on the amount
of undo data in the system. The following table lists the related SQL statements.
Defining an Index
An index indicates the sequence of values in one or more columns in a database
table. It is a data structure that improves the speed of data access to specific
information in a database table. The following table lists the related SQL
statements.
Defining a Role
A role is used to manage permissions. For database security, management and
operation permissions can be granted to different roles. The following table lists
the related SQL statements.
Defining a User
A user is used to log in to a database. Different permissions can be granted to
users for managing data accesses and operations of the users. The following table
lists the related SQL statements.
Defining a View
A view is a virtual table exported from one or more basic tables. It is used to
control data accesses of users. The following table lists the related SQL
statements.
Defining a Sequence
A sequence generates numbers at a fixed interval, which are typically used as
primary key values. Each time a sequence value is generated, the sequence is
incremented. The following table lists the related SQL statements.
Defining a Synonym
A synonym is an alias or alternate name for a schema object. It allows users to
easily access database objects owned by other users and saves database space.
The following table lists the related SQL statements.
Defining a Comment
You can use the COMMENT statement to add a comment about a table, a table
column, or a view to the data dictionary. The following table lists the related SQL
statements.
Recycle Bin
A recycle bin is temporary storage for objects, such as indexes, tablespaces, and
tables deleted by running the DROP statement. To permanently remove them, run
the PURGE statement. To roll back the DROP operation, run the FLASHBACK
statement. The following table lists the related SQL statements.
Defining a Profile
A profile is a set of limits on database resources available to users. The following
table lists the related SQL statements.
Session Management
A session is a connection established between a user and the database. The
following table lists the related SQL statements.
Others
The following table lists other DDL statements.
Data Operations
Others
Description
ALTER DATABASE modifies a database.
Precautions
● To run this statement, you must have the ALTER DATABASE system
permission.
● Files cannot be automatically managed on the standby database server.
● Log files cannot be added to or deleted from the standby database server.
Syntax
ALTER DATABASE [ database_name ]
{ startup_clauses
| logfile_clauses
| archlogfile_clauses
| standby_database_clauses
| alter_datafile_clauses
| clear_logfile_clauses
| SWITCHOVER
| FAILOVER
| CANCEL RESTRICT
| CONVERT TO { READONLY | READWRITE } |{[ CASCADED ] PHYSICAL STANDBY [ MOUNT ] }
}
● startup_clauses:
{ MOUNT | OPEN [ RESETLOGS | READ ONLY | READ WRITE | RESTRICT | FORCE IGNORE LOGS] }
● logfile_clauses:
{ [ ARCHIVELOG | NOARCHIVELOG ] add_logfile_clauses | drop_logfile_clauses }
– add_logfile_clauses:
ADD LOGFILE redo_log_file_spec
redo_log_file_spec:
( { 'file_name' SIZE integer [ K | M | G | T | P | E ]
} [ , ... ]
)
– drop_logfile_clauses:
DROP LOGFILE ( 'file_name' )
● archlogfile_clauses:
DELETE ARCHIVELOG { ALL | UNTIL TIME 'date_string' } [ FORCE ]
● standby_database_clauses:
SET STANDBY DATABASE TO MAXIMIZE { PROTECTION
| AVAILABILITY
| PERFORMANCE
}
● alter_datafile_clauses:
DATAFILE { { 'file_name' | file_number
} [, ...]
}
{ autoextend_clause | resize_clause }
– autoextend_clause:
AUTOEXTEND { OFF
| ON [ NEXT integer [ K | M | G] ]
[ MAXSIZE { integer [ K | M | G]
| UNLIMITED
}
]
}
– resize_clause:
RESIZE integer [ K | M | G ]
● clear_logfile_clauses:
CLEAR LOGFILE file_id
Parameters
● database_name
Specifies the name of a database to be modified. If the database name is not
specified, the database in the MOUNT state is used.
● startup_clauses
Specifies the database status (MOUNT or OPEN).
– MOUNT: The database is mounted but not started. In this state, only
database administrators can modify the database and users cannot
establish connections or sessions with the database.
An error will be reported if you change the status of a database in the
MOUNT state to MOUNT again.
– OPEN: The database is successfully started.
When the database status is set to OPEN, the data dictionary is initialized
and heap file metadata is loaded to the memory.
▪ READ WRITE enables read and write. It is the default state after the
database status becomes OPEN.
▪ READ ONLY enables only read. In this case, the database supports
only query.
▪ RESTRICT loads only core system catalogs and allows connections
from the SYS user only.
Before upgrading the database, switch the database status to
RESTRICT.
In RESTRICT mode, indexes can be recreated on system catalogs by
strictly following the provided instructions.
▪ FORCE IGNORE LOGS forcibly ignores log files when the database is
in the OPEN state.
It is used to forcibly ignore damaged logs when database log files are
damaged and cannot be restored to CONSISTENT POINT.
This operation cannot ensure database consistency. If the database is
not restored to CONSISTENT POINT, a message is displayed in run
logs and OPEN_INCONSISTENCY in the DV_DATABASE view
changes to TRUE.
● You can change the database status from MOUNT to OPEN, but cannot change it
from OPEN to MOUNT.
● You can change the database from the OPEN state to an OPEN substate, or
change between OPEN substates (except between READ WRITE and READ
ONLY), only by performing the following steps. Otherwise, an error occurs.
1. Run the python zctl.py -t stop command to stop the database.
2. Start the database in NOMOUNT or MOUNT mode and connect it.
3. Switch the database status to a substate of OPEN.
● The RESTRICT mode is dedicated for upgrade. This mode needs to be used
together with the upgrade script upgrade.py. For details, see Installation and
Deployment > Installation Preparation > Upgrading a Database in GaussDB 100
V300R001C00 User Guide (Standalone). Other operations will cause database
exceptions, for example, abnormal exit.
● In RESTRICT mode, indexes can be recreated on system catalogs. You are advised
to perform this operation only when absolutely necessary. If the index structure of
system catalogs is sparse, occupying much space and affecting the service running
speed, you are advised to recreate indexes in RESTRICT mode. In RESTRICT mode,
run ALTER SYSTEM LOAD DICTIONARY FOR xxx; to load all system catalogs
before recreating indexes. You are not allowed to create system catalog indexes
online or change tablespaces of the system catalogs. In primary/standby mode,
when you rebuild indexes of the system catalog in the primary database, the
standby database must be in RESTRICT mode.
● logfile_clauses
Adds or deletes log files.
Enables and disables redo log archiving in the MOUNT database state. You
can add and delete redo logs only when the database is in the OPEN state.
– ARCHIVELOG
Enables redo log archiving.
– NOARCHIVELOG
Disables redo log archiving.
In primary/standby deployment of a database, you can set the redo log mode
only to ARCHIVELOG.
– add_logfile_clauses
Adds one or more redo log files to the primary server. If no directory is
specified, files are added to the $GSDB_HOME/data directory by default.
redo_log_file_spec
Specifies one or more redo log files. The parameter value must contain
the file size, and the file name or absolute path. The file size is unlimited.
▪ The total number of log files on the primary and standby servers
cannot exceed 256. If the number exceeds 256, an error will be
reported.
▪ SIZE integer [ K | M | G | T | P | E ]
Specifies the file size. The default unit is byte. K indicates KB, M
indicates MB, G indicates GB, T indicates TB, P indicates PB, and E indicates EB.
– drop_logfile_clauses
Deletes one redo log file from the primary server.
● archlogfile_clauses
Deletes archived log files.
– ALL
Deletes all archived logs that meet the deletion policy.
The deletion policy specifies whether the space occupied by archived logs
has reached 85% of MAX_ARCH_FILES_SIZE, whether archived logs have
been backed up, and whether archived logs have been replayed by the
standby node. You can configure the deletion policy through the FORCE
keyword and the ARCH_CLEAN_IGNORE_STANDBY parameter. For
details about the parameters, see Parameters > Databases > Archive
Logs in GaussDB 100 V300R001C00 Database Reference.
– UNTIL TIME 'date_string'
Deletes the archived logs that meet the deletion policy and are generated
before the specified time.
● alter_datafile_clauses
Modifies attributes of one or more data files in the database. Currently, only
automatic extension of data files can be changed.
Data files can be specified by file name or file number.
– file_name: file name. The value can be a file name or an absolute path. If
the value is a file name, the database generates a full path based on the
file name and the data directory in the database instance path specified
by the ALTER DATABASE statement. If the value is an absolute path, the
database uses it without additional processing. The maximum length of
file_name is 256 bytes.
– file_number: ID of a data file in the database. For details, see the ID
column in Data Dictionary and Views > Dynamic Performance Views >
DV_DATA_FILES in GaussDB 100 V300R001C00 Database Reference.
– autoextend_clause
Enables or disables automatic extension, and specifies the extension size
and upper limit.
▪ OFF
Disables automatic extension.
▪ ON
Enables automatic extension.
If automatic extension is enabled (ON), you can configure the
following parameters:
○ NEXT: extension size. If this parameter is not set, the default
value 16MB is used.
○ MAXSIZE: extension upper limit. The upper limit cannot be smaller
than the size of the current file. If this parameter is set to UNLIMITED
or not set, the upper limit 8TB is used. The maximum value of
the upper limit must be less than 8TB. If both MAXSIZE and
NEXT are set, the value of MAXSIZE must be no less than that
of NEXT.
UNLIMITED
The automatic extension has no upper limit.
– resize_clause
Changes the size of a data file. You can obtain the current size of a data
file based on the file name or ID identified by the FILE_NAME or ID
column in Data Dictionary and Views > Dynamic Performance Views >
DV_DATA_FILES in GaussDB 100 V300R001C00 Database Reference.
This statement cannot be executed in the READ ONLY mode under the
OPEN database state. It can be executed in other modes under the OPEN
state.
When reducing the file size, ensure it is not smaller than the minimum
size required by the database system. The minimum size required by the
SYSTEM tablespace and UNDO tablespace is 128 MB, and that of other
data files is 1 MB.
When reducing the file size, ensure that the storage area of valid data is
not damaged. Otherwise, the statement execution will fail.
To obtain the number of pages occupied by valid data, see the value of
the HIGH_WATER_MARK column in Data Dictionary and Views >
Dynamic Performance Views > DV_DATA_FILES in GaussDB 100
V300R001C00 Database Reference. To calculate the required file size,
multiply the obtained value by the size of each page.
▪ integer [ K | M | G ]
Specifies the size of each data file.
K: The unit is KB.
M: The unit is MB.
G: The unit is GB.
● SWITCHOVER
Switches between the primary and standby database servers.
● FAILOVER
Promotes the standby database server to the primary.
● CANCEL RESTRICT
Cancels the RESTRICT state. After the database is upgraded, you need to
cancel the RESTRICT state.
Perform the cancellation after the execution of the ALTER SYSTEM INIT
DICTIONARY statement is complete.
● CONVERT TO
– [CASCADED] PHYSICAL STANDBY [MOUNT]
Changes the database role to standby (PHYSICAL STANDBY) or cascaded
standby (CASCADED PHYSICAL STANDBY). This operation can be
performed only in the MOUNT database state.
If MOUNT is specified, only the role is changed and the database status
is not changed. If MOUNT is not specified, the database automatically
changes to the READ ONLY mode under the OPEN state after the role is
changed.
– READONLY| READWRITE
Changes the database status. READWRITE and READONLY can be
switched online. In primary/standby deployment, such a switchover can
be performed only on the primary database.
Examples
● Change the database status.
-- Change the database status to MOUNT.
ALTER DATABASE MOUNT;
-- Change the database status from MOUNT to OPEN.
ALTER DATABASE OPEN;
-- Reset the database log sequence number to 1 in the MOUNT database state.
ALTER DATABASE OPEN RESETLOGS;
-- Change the database to read-only in the MOUNT database state.
ALTER DATABASE OPEN READ ONLY;
-- Change the database to readable/writable in the MOUNT database state.
ALTER DATABASE OPEN READ WRITE;
-- Change the database status from MOUNT to RESTRICT.
ALTER DATABASE OPEN RESTRICT;
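● Modify data file attributes and clean up archived logs, using the alter_datafile_clauses and archlogfile_clauses described above (the file name and sizes are illustrative).
-- Enable automatic extension for a data file, growing 32 MB at a time up to 10 GB.
ALTER DATABASE DATAFILE 'user01.dat' AUTOEXTEND ON NEXT 32M MAXSIZE 10G;
-- Resize data file 3 to 2 GB.
ALTER DATABASE DATAFILE 3 RESIZE 2G;
-- Delete all archived logs that meet the deletion policy.
ALTER DATABASE DELETE ARCHIVELOG ALL;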
Precautions
● You can modify your own indexes without additional permissions. To modify
indexes of other users, the ALTER ANY INDEX system permission is required.
Common users cannot modify objects of system users.
● The specified index name must exist. Otherwise, an error will be reported.
● Indexes cannot be modified during database restart or rollback.
Syntax
ALTER INDEX [ schema_name. ]index_name
{ rebuild_clauses
| rename_clauses
}
Parameter Description
● [schema_name.]index_name
Specifies the name of an index to be modified.
● rebuild_clauses
– REBUILD ONLINE
Creates or rebuilds an index online. The advantage of creating or
rebuilding an index online is that the time for adding exclusive locks to
tables is greatly reduced, thereby preventing online service blocking.
– REBUILD TABLESPACE tablespace_name
Copies index data to other tablespaces.
● rename_clauses
– RENAME TO [schema_name.] index_name_new
Specifies the name of an index to be renamed.
Examples
● Rebuild an index on the posts table online and rename the index.
-- Delete the posts table.
DROP TABLE IF EXISTS posts;
-- Create the posts table.
CREATE TABLE posts(post_id CHAR(2) NOT NULL, post_name CHAR(6) PRIMARY KEY, basic_wage INT,
basic_bonus INT);
-- Delete the idx_posts index.
DROP INDEX IF EXISTS idx_posts;
-- Create the idx_posts index.
CREATE INDEX idx_posts ON posts(post_id ASC, post_name) ONLINE;
-- Rebuild the idx_posts index online.
ALTER INDEX idx_posts REBUILD ONLINE;
-- Rename the idx_posts index.
ALTER INDEX idx_posts RENAME TO idx_posts_new;
● Rebuild the partitioned index online for the partitioned table education and
rename the index.
-- Delete the education table.
DROP TABLE IF EXISTS education;
-- Create the partitioned table education.
CREATE TABLE education(staff_id INT NOT NULL, highest_degree CHAR(8), graduate_school
VARCHAR(64), graduate_date DATETIME, education_note VARCHAR(70))
PARTITION BY LIST(highest_degree)
(
PARTITION doctor VALUES ('DOCTOR'),
PARTITION master VALUES ('MASTER'),
PARTITION undergraduate VALUES ('SCHOLAR')
);
-- Insert record 1 into the partitioned table education.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(10,'DOCTOR','Xidian University','2017-07-06 12:00:00','211');
-- Insert record 2 into the partitioned table education.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(11,'DOCTOR','Northwest A&F University','2017-07-06 12:00:00','211&985');
-- Insert record 3 into the partitioned table education.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(12,'MASTER','Northwestern Polytechnical University','2017-07-06 12:00:00','211&985');
-- Insert record 4 into the partitioned table education.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(15,'SCHOLAR','Xi''an University of Architecture and Technology','2017-07-06 12:00:00','NOT
211 OR 985');
-- Insert record 5 into the partitioned table education.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(18,'MASTER','Xi''an University of Technology','2017-07-06 12:00:00','not 211 or 985');
-- Insert record 6 into the partitioned table education.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(20,'SCHOLAR','Capital Normal University','2017-07-06 12:00:00','211&985');
COMMIT;
-- Delete the idx_training index.
DROP INDEX IF EXISTS idx_training;
-- Create a partitioned index.
CREATE INDEX idx_training ON education(staff_id ASC, highest_degree) LOCAL (PARTITION doctor,
PARTITION master, PARTITION undergraduate);
-- Rebuild the partitioned index.
ALTER INDEX idx_training REBUILD ONLINE;
-- Rename the partitioned index.
ALTER INDEX idx_training RENAME TO idx_training_temp;
Description
ALTER PROFILE modifies a profile.
Precautions
● To run this statement, you must have the ALTER PROFILE system permission.
● The default profile can also be modified.
● PASSWORD_LIFE_TIME, PASSWORD_LOCK_TIME,
PASSWORD_GRACE_TIME, and PASSWORD_REUSE_TIME can be set to
fractions of a day (1 minute = 1/1440 days, and 1 second = 1/86400 days).
● Profiles cannot be created during database restart or rollback.
Syntax
ALTER PROFILE profile_name LIMIT password_parameters [ ... ]
password_parameters:
{ { FAILED_LOGIN_ATTEMPTS
| PASSWORD_LIFE_TIME
| PASSWORD_LOCK_TIME
| PASSWORD_GRACE_TIME
| PASSWORD_REUSE_TIME
| PASSWORD_REUSE_MAX
| SESSIONS_PER_USER
}
{ expr | UNLIMITED | DEFAULT }
}
Parameter Description
● profile_name
Specifies the name of a profile to be modified. If the profile name contains
spaces or special characters other than _#$, enclose the name with double
quotation marks ("") or backquotes (``).
● FAILED_LOGIN_ATTEMPTS
Specifies the maximum number of login attempts allowed before an account
is locked.
Default value: 10
● PASSWORD_LIFE_TIME
Specifies the maximum number of days that a password can be used.
Default value: 180
● PASSWORD_LOCK_TIME
Specifies the number of days an account will be locked after the specified
number of consecutive failed login attempts.
Default value: 1
● PASSWORD_GRACE_TIME
Specifies the number of days after the grace period begins during which a
warning is issued and login is allowed. If the database password is not
changed during this period, the password becomes invalid after the grace
period expires.
Default value: 7
● PASSWORD_REUSE_TIME
Specifies the number of days during which a password cannot be reused.
The value is a positive number. The integral part indicates the number of days,
and the decimal part can be converted into hours, minutes, and seconds.
If the parameter value is changed to a smaller one, new passwords will be
checked based on the new value.
If the parameter value is changed to a larger one (for example, from a to b),
historical passwords older than b days may still be reusable because those
records may already have been deleted. New passwords will be checked based
on the new value. Absolute time is used: historical passwords are recorded
with absolute timestamps and are not affected by system time changes.
● PASSWORD_REUSE_MAX
Specifies the number of password changes required before the current
password can be reused. If the parameter value is changed to a smaller one,
new passwords will be checked based on the new parameter value. If the
parameter value is changed to a larger one (for example, changed from a to
b), the historical passwords before the last b passwords probably can be
reused because these historical passwords may have been deleted. New
passwords will be checked based on the new parameter value.
PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME must be set in
conjunction with each other.
Set the two parameters as follows:
– If PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME are both set
to UNLIMITED, the password can be reused without any restrictions.
– If PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME are set to
specified values, the password can be reused only when the conditions
specified by both the parameters are met.
– If either of PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME is
set to a specified value and the other is set to UNLIMITED, the password
cannot be reused. The value is a positive integer.
● SESSIONS_PER_USER
Specifies the number of connections allowed per user. The value must be less than
the maximum number of connections in the connection pool. This parameter
takes effect only after RESOURCE_LIMIT is enabled. To enable
RESOURCE_LIMIT, run ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;.
● UNLIMITED
Specifies no limitations.
● DEFAULT
Uses default values of the parameters.
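As a sketch of how the limits above combine (the profile name pro_reuse and the values are illustrative):
-- Allow a password to be reused only after at least 5 changes and 60 days, and limit each user to 10 connections.
CREATE PROFILE pro_reuse LIMIT PASSWORD_REUSE_MAX 5 PASSWORD_REUSE_TIME 60 SESSIONS_PER_USER 10;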
Examples
Modify the pro_common profile.
-- Delete the pro_common profile.
DROP PROFILE pro_common CASCADE;
-- Create the pro_common profile.
CREATE PROFILE pro_common LIMIT PASSWORD_GRACE_TIME 10 PASSWORD_LOCK_TIME DEFAULT
PASSWORD_LIFE_TIME UNLIMITED;
-- Modify the pro_common profile to set PASSWORD_LIFE_TIME to 30 days.
ALTER PROFILE pro_common LIMIT PASSWORD_LIFE_TIME 30;
-- Delete the pro_common profile.
DROP PROFILE pro_common CASCADE;
Precautions
Users do not need system permissions to execute ALTER SESSION.
Syntax
ALTER SESSION
{ SET
{ COMMIT_WAIT_LOGGING = { WAIT | NOWAIT }
| COMMIT_MODE = { IMMEDIATE | BATCH }
| TIME_ZONE ='[ + | - ]hh:mm'
| nls_param ='nls_param_value'
| current_schema = schema_value}
}|
{ { ENABLE | DISABLE } { TRIGGERS | INTERACTIVE TIMEOUT | NOLOGGING | OPTINFO_LOG }
}
Parameter Description
● COMMIT_WAIT_LOGGING = { WAIT | NOWAIT }
Specifies whether the server process committing the transaction waits for the
log writer (LGWR) process to write redo logs into files.
Default value: WAIT
– WAIT
The server process waits for the redo logs to be written. In most cases,
this value is recommended.
– NOWAIT
The server process does not wait for the redo logs to be written. That is,
the server process commits a transaction regardless of whether the redo
logs have been written into files. This value may cause data loss, but it
improves the transaction processing speed.
● COMMIT_MODE = {IMMEDIATE | BATCH}
Specifies whether the LGWR process writes redo logs in batches.
Default value: IMMEDIATE
– IMMEDIATE
Writes redo logs immediately for each transaction commit. In this case,
transaction throughput may be reduced due to forcible disk I/O.
– BATCH
Buffers the redo logs and writes them in batches into files when the
number of buffered redo logs reaches a specified value. In this case, buffered redo
logs may be lost when instance failures occur.
● TIME_ZONE = '[+|–]{hh}:{mm}'
Specifies the time zone of the current session. A character string in the format
'[+|–]{hh}:{mm}' is used to set the difference between the local time zone of
a session and UTC (GMT). The plus sign (+) indicates that the local
time zone is ahead of GMT, and the minus sign (–) indicates that the local
time zone is behind GMT. For example, Beijing time is GMT+8,
so you can use +08:00 to indicate the difference.
You can use the SESSIONTIMEZONE keyword to view the time zone of the
current session.
The default value is the system time zone of the client where the session is
initiated.
The valid time zone ranges from –12:00 to [+]14:00.
● nls_param = 'nls_param_value'
Specifies the language in which month and day names and abbreviations are
returned.
The possible values are:
– NLS_DATE_FORMAT with default format of YYYY-MM-DD HH24:MI:SS
– NLS_TIMESTAMP_FORMAT with the default format of YYYY-MM-DD
HH24:MI:SS.FF
– NLS_TIMESTAMP_TZ_FORMAT with the default format of YYYY-MM-DD
HH24:MI:SS.FF TZH:TZM
– NLS_TIME_FORMAT with the default format of HH:MI:SS.FF AM
– NLS_TIME_TZ_FORMAT with the default format of HH:MI:SS.FF AM TZR
● current_schema = schema_value
Changes the schema for the current session. The default value is the schema of
the login user.
● { ENABLE | DISABLE } { TRIGGERS | INTERACTIVE TIMEOUT | NOLOGGING |
OPTINFO_LOG }
– ENABLE TRIGGERS: Triggers are valid for the SQL statements executed in
the current session.
– DISABLE TRIGGERS: Triggers are invalid for the SQL statements executed
in the current session.
– ENABLE INTERACTIVE TIMEOUT: Interactive timeout is enabled. By
default, a session is closed if this session has no SQL request within 30
minutes.
– DISABLE INTERACTIVE TIMEOUT: Interactive timeout is disabled.
– ENABLE NOLOGGING: In the current session, redo logs and undo logs
are not recorded during data insertion.
– DISABLE NOLOGGING: In the current session, redo logs and undo logs
are recorded during data insertion.
– ENABLE OPTINFO_LOG: In the current session, the optimizer log is
enabled, and logs generated by the execution plan are written to log/opt/
zengine.opt. By default, the optimizer log is automatically disabled 2
minutes later. To enable it again, run the command again.
– DISABLE OPTINFO_LOG: In the current session, the optimizer log is
disabled.
Examples
● Wait for redo information to be written into redo log files and then commit
the transaction.
ALTER SESSION SET COMMIT_WAIT_LOGGING=WAIT;
● The LGWR process writes redo logs immediately for each transaction commit.
ALTER SESSION SET COMMIT_MODE=IMMEDIATE;
● Make triggers valid for the SQL statements executed in the current session.
ALTER SESSION ENABLE TRIGGERS;
● Make triggers invalid for the SQL statements executed in the current session.
ALTER SESSION DISABLE TRIGGERS;
● Disable the recording of redo logs and undo logs during data insertion in the
current session.
ALTER SESSION ENABLE NOLOGGING;
● Enable the recording of redo logs and undo logs during data insertion in the
current session.
ALTER SESSION DISABLE NOLOGGING;
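Following the same pattern, the other session settings described above can be exercised as follows (the NLS format value and the schema name gaussdba are illustrative):
● Set the session time zone to Beijing time (GMT+8).
ALTER SESSION SET TIME_ZONE='+08:00';
● Set the date format for the current session.
ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD';
● Switch the current session to the gaussdba schema.
ALTER SESSION SET CURRENT_SCHEMA=gaussdba;
● Disable interactive timeout so that an idle session is not closed after 30 minutes.
ALTER SESSION DISABLE INTERACTIVE TIMEOUT;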
Description
ALTER SQL_MAP creates a SQL mapping.
Precautions
● Only mappings between DML statements can be created.
● Source and target SQL statements are matched by using hash values,
which are sensitive to letter case (uppercase or lowercase) and spaces. Therefore,
an entered statement must be completely identical to the registered source
statement. Otherwise, the mapping will not be triggered.
● Users do not need system permissions for executing ALTER SQL_MAP.
Syntax
ALTER SQL_MAP {src_sql_id | (src_select)} REWRITE TO (dest_select)
Parameter Description
● src_select
Specifies a source SQL statement.
● src_sql_id
Specifies the unique ID of the source SQL. The value is the hash value of the
SQL text. For details, see Data Dictionary and Views > Dynamic
Performance Views > DV_SQLSV$SQLAREA in GaussDB 100 V300R001C00
Database Reference.
● dest_select
Specifies a target SQL statement.
Examples
-- Enable the SQL mapping function.
ALTER SYSTEM SET enable_sql_map = true;
-- Create a SQL mapping.
ALTER SQL_MAP (select count(*) from SYS_DUMMY) REWRITE TO (select count(1) as cnt from
SYS_DUMMY);
-- Enter a source SQL statement, which will actually be mapped to the target SQL statement for execution.
select count(*) from SYS_DUMMY;
CNT
--------------------
1
1 rows fetched.
-- If another SQL statement of the same source is entered but has a different letter case, it will not be
mapped to the target statement.
select COUNT(*) from SYS_DUMMY;
COUNT(*)
--------------------
1
1 rows fetched.
Precautions
● You can modify your own sequences without additional permissions.
● To modify sequences of other users, the ALTER ANY SEQUENCE system
permission is required. Common users cannot modify objects of system users.
● Sequences cannot be modified during database restart or rollback.
Syntax
ALTER SEQUENCE [ schema_name. ]sequence_name
{ INCREMENT BY bigint
| { MAXVALUE bigint | NOMAXVALUE }
| { MINVALUE bigint | NOMINVALUE }
| { CYCLE | NOCYCLE }
| { CACHE bigint | NOCACHE }
| { ORDER | NOORDER }
} [ ... ]
Parameter Description
● [ schema_name. ]
Specifies the name of a user whose sequence is to be modified. If this
parameter is not specified, the current login user is used by default.
● sequence_name
Specifies the name of a sequence to be modified. It is optionally schema-
qualified.
● INCREMENT BY bigint
Specifies a sequence step.
The value is an integer other than 0. The default value is 1.
Examples
Modify the seq_auto_extend sequence.
-- Delete the seq_auto_extend sequence.
DROP SEQUENCE IF EXISTS seq_auto_extend;
-- Create the seq_auto_extend sequence starting with 10, and with INCREMENT BY set to 2, MAXVALUE
set to 200, and CYCLE specified.
CREATE SEQUENCE seq_auto_extend START WITH 10 MAXVALUE 200 INCREMENT BY 2 CYCLE;
-- Change INCREMENT BY to 4 and MAXVALUE to 400.
ALTER SEQUENCE seq_auto_extend MAXVALUE 400 INCREMENT BY 4 CYCLE;
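Other attributes from the syntax above can be changed in the same way; for example (a sketch, with an illustrative cache size):
-- Cache 20 sequence values in memory and stop cycling.
ALTER SEQUENCE seq_auto_extend CACHE 20 NOCYCLE;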
Description
ALTER SYSTEM modifies database system parameters.
Precautions
To run this statement, you must have the ALTER SYSTEM system permission.
Syntax
ALTER SYSTEM
{ DUMP DATAFILE file_id PAGE page_id
| SWITCH LOGFILE
| SET parameter_name = parameter_value [ SCOPE = { MEMORY | PFILE | BOTH } ]
| LOAD DICTIONARY FOR [ schema_name.]object_name
| INIT DICTIONARY
| RELOAD HBA CONFIG
| REFRESH SYSDBA PRIVILEGE
| KILL SESSION 'session_id,serial'
| RESET STATISTIC
| CHECKPOINT
| { ADD | DELETE } LSNR_ADDR LISTENING_IP
| FLUSH {BUFFER | SQLPOOL}
}
Parameter Description
● DUMP DATAFILE file_id PAGE page_id
Specifies the data file page to be dumped.
– file_id
Specifies the file ID. The value is a non-negative integer within the range
[0, 2147483648).
– page_id
Specifies the page number. The value is a non-negative integer within the
range [0, 2147483648).
● SWITCH LOGFILE
Switches log files.
● SET parameter_name = parameter_value [ SCOPE = { MEMORY | PFILE |
BOTH } ]
Sets system parameters. SCOPE is an optional parameter. It specifies where
parameter settings are to be written. PFILE or BOTH indicates that parameter
settings are written into the Zenith.ini configuration file. If SCOPE is not set,
the default value BOTH is used.
– MEMORY: Parameter settings are written into only memory and take
effect immediately but become invalid after a restart. MEMORY is
applicable to only dynamic system parameters.
– PFILE: Parameter settings are written into initial parameter files and take
effect after a restart. PFILE is applicable to both dynamic and static
system parameters. The settings of static system parameters can be
written into only initial parameter files.
– BOTH: Parameter settings are written into both initial parameter files and
memory, and take effect immediately. BOTH is applicable to only
dynamic system parameters.
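For example, the following writes a setting into memory only, so it takes effect immediately but does not survive a restart (this is a sketch that assumes enable_sql_map is a dynamic parameter, as its use with ALTER SYSTEM SET elsewhere in this manual suggests):
ALTER SYSTEM SET enable_sql_map = true SCOPE = MEMORY;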
● LOAD DICTIONARY FOR [schema_name].object_name
Loads objects to the data dictionary.
● INIT DICTIONARY
Loads objects (such as system views, dynamic views, sequences, and roles)
except the system catalogs.
The prerequisites are that the database is in the RESTRICT mode and all
system catalogs are loaded by running the ALTER SYSTEM LOAD
DICTIONARY FOR [schema_name].object_name statement.
● RELOAD HBA CONFIG
Loads the zhba.conf file online.
● REFRESH SYSDBA PRIVILEGE
Updates the ciphertext and encryption key used for password-free login of
user SYSDBA online. The update does not affect the current client. The new
key is used for authenticating the password-free login of other clients.
● KILL SESSION 'session_id,serial'
Kills a session. session_id specifies the session ID, and serial specifies the
session serial number.
● RESET STATISTIC
Clears the statistics in the dynamic view DV_SYS_STATS.
● CHECKPOINT
Executes checkpoints for the current instance to ensure that all changes made
to the committed transactions are written into data files on the disk.
● { ADD | DELETE } LSNR_ADDR LISTENING_IP
Adds or deletes a listening IP address. The parameter value takes effect
immediately after being configured. Currently, zenith supports a maximum of
eight listening IP addresses.
An error is reported if the parameter is set to an NIC IP address that does not
exist.
When a listening IP address that is being used is deleted, the session
established through the IP address is disconnected and services are rolled
back.
● FLUSH BUFFER
Clears the database buffer.
● FLUSH SQLPOOL
Clears the SQLPOOL buffer.
Examples
● Switch log files.
ALTER SYSTEM SWITCH LOGFILE;
● Load objects (such as system views, dynamic views, sequences, and roles)
except the system catalogs.
This operation can be performed only when the database is in the RESTRICT
mode and all system catalogs are loaded by running the ALTER SYSTEM
LOAD DICTIONARY FOR [schema_name].object_name statement.
ALTER SYSTEM INIT DICTIONARY;
● Updates the ciphertext and encryption key used for password-free login of
user SYSDBA online.
ALTER SYSTEM REFRESH SYSDBA PRIVILEGE;
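● Clear the database buffer and the SQLPOOL buffer (a sketch based on the syntax above).
ALTER SYSTEM FLUSH BUFFER;
ALTER SYSTEM FLUSH SQLPOOL;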
Precautions
● To run this statement, you must have the ALTER SYSTEM system permission.
● The current session and reserved sessions cannot be killed.
Syntax
ALTER SYSTEM KILL SESSION 'session_id,serial#'
Parameter Description
● session_id
Specifies the ID of a session to be killed.
● serial#
Specifies the serial ID of a session to be killed.
Examples
-- Query the ID of the session to be killed.
SELECT * FROM DV_SESSIONS WHERE USERNAME='JIM';
1 rows fetched.
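The session can then be killed (the session ID and serial number below are illustrative; use the values returned by the query above):
-- Kill the session.
ALTER SYSTEM KILL SESSION '12,34';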
Description
ALTER TABLESPACE modifies a tablespace.
Precautions
● To run this statement, you must have the ALTER TABLESPACE system
permission.
● To add a data file, delete a data file, modify automatic extension, or rename a
tablespace, the database must be in the OPEN state.
● To modify a data file name, the database must be in the MOUNT state.
● AUTOOFFLINE can be set only for user tablespaces.
Syntax
ALTER TABLESPACE tablespace_name
{ datafile_tempfile_clauses
| RENAME TO new_tablespace_name
| SHRINK SPACE KEEP integer [ K | M | G | T ]
| AUTOOFFLINE [ ON | OFF ]
}
● datafile_tempfile_clauses:
{ ADD DATAFILE {datafile_tempfile_spec [ , ... ]}
| DROP DATAFILE 'file_name'
| RENAME DATAFILE 'old_file_name' TO 'new_file_name'
| autoextend_clause
| OFFLINE DATAFILE 'file_name' [ , ... ]
}
– datafile_tempfile_spec:
file_name SIZE integer [ K | M | G ] [ autoextend_clause ]
▪ autoextend_clause:
AUTOEXTEND { OFF
| ON [ NEXT integer [ K | M | G] ]
}
[ MAXSIZE { integer [ K | M | G]
| UNLIMITED
}
]
Parameter Description
● tablespace_name
Specifies the name of a tablespace to be modified. If the tablespace does not
exist, an error will be reported.
● ADD DATAFILE
Adds a data file to a tablespace.
● DROP DATAFILE
Deletes data files from a tablespace. The data files must have never been
used (hwms is 0).
● autoextend_clause
Enables or disables automatic extension for a tablespace.
● RENAME DATAFILE
Changes a data file name in a tablespace. The name can be changed only in
the MOUNT database state. To change a data file name, the data file must
be closed. Currently, files will be closed without waiting for the database to be
in the restoration state.
● AUTOOFFLINE
Specifies whether automatic offline is enabled for tablespaces. If
AUTOOFFLINE is set to ON, automatic offline is enabled for user tablespaces.
When a file fails to be opened during database startup, the user tablespace is
automatically brought offline. If a user tablespace is faulty after the database
is started, the tablespace is not automatically brought offline.
If ALTER TABLESPACE tablespace_name AUTOOFFLINE ON has been set for
a user tablespace before it is damaged or faulty, the database can be started
in the MOUNT state. Otherwise, an error is reported and the database start
will fail.
● OFFLINE DATAFILE
Takes offline damaged data files. Files can be taken offline only in the
MOUNT database state. The data files can be taken offline as long as they
are not empty.
● datafile_tempfile_spec
Multiple data files can be separated by commas (,). Currently, data files
cannot contain Chinese characters.
file_name
Specifies the absolute path (path and file name) of a new data file. If a
relative path is specified, the file is stored in the data directory of the
database by default.
SIZE integer[ K | M | G ]
Specifies the size of each data file.
K: The unit is KB.
M: The unit is MB.
G: The unit is GB.
The value range for the undo tablespace is [1 MB, 32 GB) and that for other
tablespaces is [1 MB, 8 TB].
autoextend_clause
If AUTOEXTEND is set to on, you can manually specify the extension size.
– If AUTOEXTEND is not set, extension is disabled by default.
– If AUTOEXTEND OFF is set, automatic extension is disabled.
– If AUTOEXTEND ON is set, you can set the following parameters:
▪ NEXT: extension size. If this parameter is not set, the default value
16 MB is used.
Examples
● Add a data file to the tbs_human tablespace.
-- Create the tbs_human tablespace:
CREATE TABLESPACE tbs_human DATAFILE 'dfile_tbs_01' SIZE 32M AUTOEXTEND ON NEXT 10M;
-- Add the data files privilege_dfile (32 MB), manager_dfile (32 MB), and section_dfile (32 MB) to
the tbs_human tablespace.
ALTER TABLESPACE tbs_human ADD DATAFILE 'privilege_dfile' SIZE 32M, 'manager_dfile' SIZE 32M,
'section_dfile' SIZE 32M;
● Delete the manager_dfile data file from the tbs_human tablespace.
ALTER TABLESPACE tbs_human DROP DATAFILE 'manager_dfile';
● Rename the privilege_dfile data file in the tbs_human tablespace to
new_privilege_dfile in the MOUNT database state.
-- Rename the privilege_dfile data file to new_privilege_dfile in the MOUNT database state.
ALTER TABLESPACE tbs_human RENAME DATAFILE 'privilege_dfile' TO 'new_privilege_dfile';
-- Change the database status to OPEN.
ALTER DATABASE OPEN;
● Take offline the damaged data file section_dfile in the tbs_human
tablespace in the MOUNT database state.
-- Take offline the damaged data file section_dfile in the tbs_human tablespace in the MOUNT
database state.
ALTER TABLESPACE tbs_human OFFLINE DATAFILE 'section_dfile';
-- Change the database status to OPEN.
ALTER DATABASE OPEN;
● Enable automatic extension for the tbs_human tablespace so that the
tablespace can be automatically expanded when it is full. You can specify the
extension size.
ALTER TABLESPACE tbs_human AUTOEXTEND ON NEXT 5M;
● Change the tablespace name tbs_human to data_tbs_human.
ALTER TABLESPACE tbs_human RENAME TO data_tbs_human;
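The remaining clauses from the syntax above follow the same pattern; for example (a sketch; the KEEP value is illustrative):
● Shrink the tbs_human tablespace, keeping 2 GB of space.
ALTER TABLESPACE tbs_human SHRINK SPACE KEEP 2G;
● Enable automatic offline for the tbs_human user tablespace.
ALTER TABLESPACE tbs_human AUTOOFFLINE ON;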
Precautions
● To run this statement, you must have the ALTER ANY TABLE system
permission. Common users cannot modify objects of system users.
● An error message containing the error details is returned if a table name,
column name, or constraint name conflict occurs, a name is invalid, or
data fails the verification.
● When you add a column to a table, ensure that there is no record in the table.
● When you modify a column, ensure that all the values in this column are
NULL. If the values of a column are not NULL, you can only increase the
CHAR or VARCHAR size and cannot perform any operations for other data
types.
● The UNIQUE INDEX, PRIMARY KEY, and FOREIGN KEY inline constraints
cannot be contained in a statement for adding or modifying a column.
● shrink_clause, row_movement_clause, and character sets support only SQL
parsing.
● ALTER TABLE does not support foreign tables.
● enable_disable_clause supports only the FOREIGN KEY and CHECK constraints.
● Tables cannot be modified during database restart or rollback.
Syntax
ALTER TABLE [ schema_name. ]table_name
{ alter_table_properties
| column_clauses
| references_clause
| constraint_clauses
| partition_clauses
| enable_disable_clause
| set_interval_clause
}
● alter_table_properties:
{ physical_attributes_clause
| RENAME TO new_table_name
| shrink_clause
| row_movement_clause
| AUTO_INCREMENT [ = ] value
| {ENABLE | DISABLE} ALL TRIGGERS
}
– physical_attributes_clause:
{ PCTFREE integer
| APPENDONLY { ON | OFF }
}
– shrink_clause:
SHRINK SPACE [ COMPACT ]
– row_movement_clause:
{ ENABLE | DISABLE } ROW MOVEMENT
● column_clauses:
{ add_column_clause
| modify_column_clause
| drop_column_clause
| rename_column_clause
}
– add_column_clause:
-- Add one column.
ADD [ COLUMN ] column_name datatype_name [ DEFAULT expr [ON UPDATE expr ] ]
[ COMMENT 'string' ] [ COLLATE collation_name ] [AUTO_INCREMENT] [ inline_constraint ]
-- Add multiple columns.
ADD ( [ COLUMN ] { column_name datatype_name [ DEFAULT expr [ON UPDATE expr ] ]
[ COMMENT 'string' ] [ COLLATE collation_name ] [AUTO_INCREMENT] [ inline_constraint ] }
[ , ... ] )
▪ inline_constraint:
{ [ NOT ] NULL
| CHECK( expr )
| WITH [ LOCAL ] TIME ZONE
| PRIMARY KEY
| UNIQUE
}
[ ... ]
– modify_column_clause:
-- Modify a column.
MODIFY ( { column_name [ new_datatype_name ] [ DEFAULT expr [ ON UPDATE expr ] ]
[ COMMENT string ]
[ COLLATE collation_name ]
[ inline_constraint ] } [ , ... ]
)
-- Shrink the space occupied by a LOB column.
MODIFY LOB(column_name) (SHRINK SPACE)
– drop_column_clause:
DROP [ COLUMN ] column_name
– rename_column_clause:
RENAME COLUMN old_name TO new_name
● references_clause:
If no column is specified for a parent table, the primary key of the parent
table is used by default. If the primary key of the parent table does not exist,
an error will be reported.
REFERENCES [ schema_name. ]object_table [( column_name )]
● constraint_clauses:
{ ADD out_of_line_constraint
| DROP CONSTRAINT [ IF EXISTS ] constraint_name
| RENAME CONSTRAINT old_constraint_name TO new_constraint_name
}
– out_of_line_constraint:
CONSTRAINT [ IF NOT EXISTS ] constraint_name
{ UNIQUE( column_name [ , ... ] ) [ constraint_state_clause ]
| PRIMARY KEY( column_name [ , ... ] ) [ constraint_state_clause ]
| CHECK( expr )
| FOREIGN KEY( column_name [ , ... ] ) references_clause_ex
}
▪ constraint_state_clause:
In constraint_state_clause, all the parameters except
using_index_clause are used for syntax compatibility and do not take
effect in the current version. In addition, you can set these
parameters in a random order. If you set a parameter more than
once, only the last setting takes effect.
[ NOT DEFERRABLE | DEFERRABLE ]
[ INITIALLY { IMMEDIATE | DEFERRED } ]
[ RELY | NORELY ]
[ VALIDATE | NOVALIDATE ]
[ ENABLE | DISABLE ]
[ using_index_clause ]
○ using_index_clause:
USING INDEX
[ INITRANS integer
| TABLESPACE tablespace_name
| LOCAL [ ( { PARTITION partition_name [ TABLESPACE tablespace_name
| INITRANS integer
| PCTFREE integer
]
} [ , ... ]
)
]
] [ ... ]
▪ references_clause_ex:
If no column is specified for a parent table, the primary key of the
parent table is used by default. If the primary key of the parent table
does not exist, an error will be reported.
REFERENCES [ schema_name. ]object_table_name [( column_name [ , ... ] )]
[ ON DELETE { CASCADE
| SET NULL
}
]
● partition_clauses:
{ add_partition_clause
| drop_partition_clause
| truncate_partition_clause
| coalesce_partition_clause
}
– add_partition_clause:
ADD PARTITION partition_name
{ VALUES LESS THAN ( { partition_value
| MAXVALUE
}[ , ... ]
)
| VALUES ( partition_value [ , ... ]
| DEFAULT )
}
[ TABLESPACE tablespace_name ]
[ PCTFREE integer ]
– drop_partition_clause:
DROP PARTITION partition_name
– truncate_partition_clause:
TRUNCATE PARTITION partition_name [ DROP STORAGE
| REUSE STORAGE
| PURGE
]
– coalesce_partition_clause:
COALESCE PARTITION
● enable_disable_clause:
{ ENABLE | DISABLE } [VALIDATE | NOVALIDATE] CONSTRAINT constraint_name
● set_interval:
SET INTERVAL([interval_value])
Parameter Description
● [ schema_name.]
Specifies the name of a user whose table is to be modified. If this parameter
is not specified, the current login user is used by default.
● table_name
Specifies the name of a table to be modified. The table must exist.
● alter_table_properties
Modifies table storage. For example, LOB_storage_clause specifies that LOB
columns are stored in a separate segment. You can specify inline or out-of-
line storage for a table. Currently, only out-of-line storage is supported.
– physical_attributes_clause
Specifies the physical attribute of a table.
PCTFREE integer
Specifies the percentage of space reserved for a block. If the percentage
of available space of a data block is less than this value, you can only
update data of this block and cannot insert data into it. The value range
is [8, 80] and the default value is 10.
APPENDONLY { ON | OFF }
▪ Conversion between the VARCHAR type and the CHAR type (the
length must be no less than the length before the conversion).
▪ Changes of the NUMBER and DECIMAL types to a larger scale (the
values of scale, precision, and precision - scale must be no less than
those before the modification)
– drop_column_clause
Deletes a column.
– rename_column_clause
Renames a column.
● references_clause
Adds a FOREIGN KEY constraint. schema_name indicates the owner of the
referenced table, and object_table indicates the name of the referenced table.
column_name indicates the referenced column.
● constraint_clauses
Modifies a table constraint, including adding and deleting an inline or out-of-
line constraint.
– ADD out_of_line_constraint
Adds an out-of-line constraint.
IF NOT EXISTS
Does not throw an error if the out-of-line constraint already exists.
using_index_clause
Specifies an external index clause.
INITRANS integer
Initially allocated storage block space. integer specifies the space size.
TABLESPACE tablespace_name
Specifies a tablespace. tablespace_name specifies a tablespace name.
LOCAL
Creates a local index for a partitioned table.
ONLINE
Adds a constraint online.
CHECK( expr )
Specifies rules for checking values in a column. If NULL is inserted, TRUE
is returned.
references_clause_ex
Specifies a foreign key constraint clause.
ON DELETE { CASCADE | SET NULL }
Specifies how foreign key values in a child table are handled when
primary or unique values in the parent table are deleted.
▪ CASCADE
The foreign key values will be deleted.
▪ SET NULL
The foreign key values will be converted to NULL.
Inserts the data of the last partition into its previous partition and deletes
the last partition.
▪ The COALESCE PARTITION statement is valid only for hash tables. You
do not need to specify the partition name in this statement.
● set_interval_clause
Sets the interval partition. This parameter is valid only for partitioned tables.
– SET INTERVAL() changes a range partitioned table to an interval
partitioned table.
– SET INTERVAL(interval_value) specifies the interval for an interval
partitioned table.
● RENAME TO
Changes a table name. You can change the name of only the tables in your
own schemas and cannot modify names of tables in the system tablespace.
Examples
● Add a column.
-- Delete the training table.
DROP TABLE IF EXISTS training;
-- Create the training table.
CREATE TABLE training(staff_id INT NOT NULL, course_name VARCHAR(50), course_start_date
DATETIME, course_end_date DATETIME, exam_date DATETIME, score INT);
-- Add the full_masks column.
ALTER TABLE training ADD full_masks INT;
● Delete a column.
ALTER TABLE training DROP course_period;
● Add a constraint.
ALTER TABLE training ADD CONSTRAINT ck_training CHECK(staff_id>0);
ALTER TABLE training ADD CONSTRAINT uk_training UNIQUE(course_name);
● Rename a constraint.
ALTER TABLE training RENAME CONSTRAINT ck_training TO ck_new_training;
ALTER TABLE training RENAME CONSTRAINT uk_training TO uk_new_training;
● Delete a constraint.
ALTER TABLE training DROP CONSTRAINT uk_new_training;
● Rename a table.
ALTER TABLE training RENAME TO training_2018;
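● Increase the size of a VARCHAR column. Per the precautions above, only the
CHAR or VARCHAR size can be increased when a column contains data (a
sketch; the new size is illustrative).
ALTER TABLE training_2018 MODIFY (course_name VARCHAR(100));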
Precautions
● To run this statement, you must have the ALTER USER system permission.
● If the specified user does not exist, the error message "user name does not
exist" is displayed.
● Users cannot be modified during database restart or rollback.
Syntax
ALTER USER user_name
{ IDENTIFIED BY new_password REPLACE old_password
| PASSWORD EXPIRE
| ACCOUNT { LOCK | UNLOCK }
| PROFILE profile_name
| DEFAULT TABLESPACE tablespace_name
} [ ... ]
Parameter Description
● user_name
Specifies the name of a user to be modified.
● IDENTIFIED BY
Specifies a new password.
● new_password
Specifies the new password of a user.
The password must comply with the following requirements:
– Contain 8 to 64 characters.
– Start with a letter, number sign (#), or an underscore (_) if the password
is not enclosed in single quotation marks ('').
– Cannot be the same as the username or the username spelled backwards
(case-insensitive in verification).
– Contain only the following four character types and at least three of
them:
▪ Digits
▪ Lowercase letters
▪ Uppercase letters
▪ Special characters (listed below)
● ACCOUNT { LOCK | UNLOCK }
– LOCK
Locks a user and prevents this user from logging in.
– UNLOCK
Unlocks a user and allows this user to log in.
● profile_name
Specifies a profile referenced by a user.
The profile must be configured in advance. If no profile is specified, the
default profile is referenced by default.
● DEFAULT TABLESPACE
Specifies a user tablespace.
● tablespace_name
Specifies a tablespace name.
The following special characters are supported:
` ~ ! @ # $ % ^ & * ( ) - _ = + \ | [ { } ] : ' " , < . > / ?
Examples
● Create user user_test with the password gauss_123.
CREATE USER user_test IDENTIFIED BY gauss_123;
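Following the syntax above, the user can then be modified; for example (the new password gauss_456 is illustrative):
● Change the password of user user_test.
ALTER USER user_test IDENTIFIED BY gauss_456 REPLACE gauss_123;
● Lock and then unlock user user_test.
ALTER USER user_test ACCOUNT LOCK;
ALTER USER user_test ACCOUNT UNLOCK;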
3.13.16 ANALYZE
Description
ANALYZE collects statistics about tables and indexes.
Precautions
● This statement can be executed only in the OPEN database state.
● Users SYS and DBA can collect and delete the statistics about all users or
objects. Common users can collect statistics about themselves or their own
tables. The ANALYZE ANY permission can be used to collect statistics about
all users except SYS.
Syntax
ANALYZE { TABLE [ schema_name. ] table_name COMPUTE STATISTICS }
Parameter Description
● [schema_name.]table_name
Specifies the name of the table whose statistics are to be collected. The table
name cannot be the same as the names of tables of the current user.
● COMPUTE STATISTICS
Instructs the database to compute statistics and store them in the data
dictionary.
Examples
Collect statistics about the education table of user gaussdba.
-- Delete the gaussdba.education table.
DROP TABLE IF EXISTS gaussdba.education;
-- Create the gaussdba.education table.
CREATE TABLE gaussdba.education(staff_id INT, highest_degree CHAR(8) NOT NULL, graduate_school
VARCHAR(64), graduate_date DATETIME, education_note VARCHAR(70));
-- Analyze the collected statistics about the gaussdba.education table.
ANALYZE TABLE gaussdba.education COMPUTE STATISTICS;
3.13.17 COMMENT ON
Description
COMMENT ON adds comments about a table, view, or column to the data
dictionary.
Precautions
● You can add comments to your own tables without additional permissions.
To add comments to tables of other users, you must have the COMMENT
ANY TABLE permission.
● You can add comments about a table during table creation.
● Comments cannot be added during database restart or rollback.
Syntax
COMMENT ON { TABLE [ schema_name. ] { table_name | view_name }
| COLUMN [ schema_name. ] { table_name. | view_name. } column_name
} IS 'string'
Parameter Description
● [ schema_name. ]
Specifies a username. If this parameter is not specified, the current login user
is used by default.
● { table_name | view_name }
Specifies the name of a table or view to be commented.
● [schema_name.] { table_name. | view_name. } column_name
Specifies the name of a column to be commented.
● IS
Specifies comment content.
● string
Specifies comment content.
The content contains a maximum of 4000 bytes.
Examples
● Create a table and add comments for the table.
-- Delete the training table.
DROP TABLE IF EXISTS training;
-- Create the training table.
CREATE TABLE training(staff_id INT NOT NULL, course_name VARCHAR(50), course_start_date
DATETIME, course_end_date DATETIME, exam_date DATETIME, score INT);
-- Add a comment about the staff_id column of the training table.
COMMENT ON COLUMN training.staff_id IS 'id of staffs taking training courses';
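A comment on the table itself follows the same pattern (the comment text is illustrative):
-- Add a comment about the training table.
COMMENT ON TABLE training IS 'staff training records';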
3.13.18 COMMIT
Description
COMMIT changes all operations in the work units of the current transaction to be
permanent and ends the transaction.
Precautions
Automatic commit of data operations (INSERT, DELETE, and UPDATE) in GaussDB
100 is disabled by default. Before a session exits, COMMIT must be explicitly
executed. Otherwise, the records will be lost.
Syntax
COMMIT [ TRANSACTION | PREPARED transaction_id | FORCE xid ]
Parameter Description
● TRANSACTION
Increases readability of the statement. This is an optional keyword.
● PREPARED
Prepares for a two-phase transaction commit. This is an optional keyword.
● transaction_id
Specifies the identifier of a transaction to be committed.
● FORCE
Forcibly commits residual transactions in RESTRICT mode. This keyword is
optional. Forcible commission of residual two-phase transactions is not
supported.
● xid
Specifies the identifier of the residual transaction to be forcibly committed.
The identifier can be obtained from the DV_TRANSACTIONS view. The value is
in form of SEG_ID.SLOT.XNUM enclosed with single quotation marks ('').
Examples
Create the training table, insert data, and update the data. Then, commit the
transaction.
-- Delete the training table.
DROP TABLE IF EXISTS training;
-- Create the training table.
CREATE TABLE training(staff_id INT NOT NULL, staff_name VARCHAR(16), course_name CHAR(20),
course_start_date DATETIME, course_end_date DATETIME, exam_date DATETIME, score INT);
-- Insert record 1 into the training table.
INSERT INTO training(staff_id,staff_name,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'LIPENG','JAVA','2017-06-15 12:00:00','2017-06-20 12:00:00','2017-06-25 12:00:00',90);
-- Insert record 2 into the training table.
INSERT INTO training(staff_id,staff_name,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(11,'CAOM','JAVA','2017-06-20 12:00:00','2017-06-25 12:00:00','2017-06-26 12:00:00',95);
-- Update the staff_name and course_name columns in record 2.
UPDATE training SET staff_name='WANGPAN', course_name='INFORMATION SAFETY' WHERE staff_id=11;
-- Commit the transaction.
COMMIT;
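● Forcibly commit a residual transaction (sketch; the xid value below is
hypothetical and must first be obtained from the DV_TRANSACTIONS view).
-- Query residual transactions to obtain the xid ('SEG_ID.SLOT.XNUM').
SELECT * FROM DV_TRANSACTIONS;
-- Forcibly commit the residual transaction by its xid (illustrative value).
COMMIT FORCE '0.12.1024';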
Description
CREATE DATABASE creates a database.
Precautions
● To run this statement, you must have the CREATE DATABASE system
permission.
● This statement is invoked by the system during database installation.
● This statement can be executed only in the NOMOUNT database state.
● If a database fails to be created, restart the system before you recreate it after
rectifying the faults.
● During database creation, you can run the post-processing SQL script
initdb_customized.sql created in the $GSDB_HOME/admin/scripts directory.
● The post-processing SQL script must be created by users and the script name
must be initdb_customized.sql.
● If the post-processing script exists in DATADIR/admin/scripts/ (created by
users) when the database is started by running python zctl.py -t start [-D
DATADIR] NOMOUNT, only the script in this path is executed. Otherwise, the
post-processing script in $GSDB_HOME/admin/scripts is executed.
● Generally, DATADIR is $GSDB_DATA.
Syntax
CREATE DATABASE database_name
{ USER SYS IDENTIFIED BY password
| CONTROLFILE ( file_name [ , ... ] )
| database_logging_clauses
| tablespace_clauses
} [ ...]
● database_logging_clauses:
{ [ ARCHIVELOG | NOARCHIVELOG ] LOGFILE ( { 'file_name' SIZE integer [ K | M | G | T | P | E ]
[ BLOCKSIZE { 512 | 4096 } ]
} [ , ... ]
)
}
● tablespace_clauses:
{ default_tablespace
| temp_tablespace
| undo_tablespace
| system_tablespace
| nologging_tablespace
| nologging_undo_tablespace
} [ ...]
– default_tablespace:
DEFAULT TABLESPACE DATAFILE { datafile_tempfile_spec [ , ... ] }
– temp_tablespace:
TEMPORARY TABLESPACE TEMPFILE { datafile_tempfile_spec [ , ... ] }
– undo_tablespace:
UNDO TABLESPACE DATAFILE { datafile_tempfile_spec [ , ... ] }
– system_tablespace:
SYSTEM TABLESPACE DATAFILE { datafile_tempfile_spec [ , ... ] }
– nologging_tablespace:
NOLOGGING TABLESPACE TEMPFILE { datafile_tempfile_spec [ , ... ] }
– nologging_undo_tablespace:
NOLOGGING UNDO TABLESPACE TEMPFILE { datafile_tempfile_spec [ , ... ] }
▪ datafile_tempfile_spec:
{ 'file_name' SIZE integer [ K | M | G | T | P | E ]
[ autoextend_clause ]
}
○ autoextend_clause:
AUTOEXTEND { OFF
| ON [ NEXT integer [ K | M | G | T | P | E ] ]
}
[ MAXSIZE { integer [ K | M | G ]
| UNLIMITED
}
]
Parameter Description
● database_name
Specifies the name of a database to be created. The database name must be
unique on the server and comply with the identifier rule.
● USER SYS IDENTIFIED BY password
Specifies the password of user SYS for accessing the new database.
● CONTROLFILE ( file_name [, ...] )
Specifies control file names. At least two files must be specified, and the file
size is fixed at 10 MB.
● database_logging_clauses:
Creates a log group and members in the group and specifies whether logs will
be archived.
The size of a log file block can only be 512 or 4096.
– ARCHIVELOG
Enables log archiving.
– NOARCHIVELOG
Disables log archiving.
– LOGFILE
Specifies a log file.
– SIZE integer [ K | M | G | T | P | E ]
Specifies the file size. The default unit is byte. K indicates KB, M indicates
MB, G indicates GB, T indicates TB, P indicates PB, and E indicates EB.
At least three redo log files must be specified. The minimum file size is 56
MB + 16 KB + LOG_BUFFER_SIZE.
– BLOCKSIZE { 512 | 4096 }
Specifies the block size. The unit is byte. Currently, only 512 bytes and
4096 bytes are supported.
– tablespace_clauses
Specifies the path and size of a SYSTEM, UNDO, TEMP, or default
tablespace and specifies how segments and extents are managed. If no
tablespace is specified when you create a table, the default tablespace is
used.
▪ default_tablespace
Specifies a default tablespace. The size of a data file in the USER
tablespace ranges from 1 MB to 8 TB.
▪ temp_tablespace
Specifies a temporary tablespace. The size of a data file in the TEMP
tablespace ranges from 5 MB to 8 TB.
▪ undo_tablespace
Specifies an undo tablespace. The size of a data file in the UNDO
tablespace ranges from 128 MB to 32 GB.
▪ system_tablespace
Specifies a system tablespace. The size of a data file in the SYSTEM
tablespace ranges from 128 MB to 8 TB.
▪ nologging_tablespace
Specifies a nologging tablespace. The size of a data file in the
TEMP2 tablespace ranges from 1 MB to 8 TB.
▪ nologging_undo_tablespace
Specifies a nologging undo tablespace. The size of a data file in the
TEMP2_UNDO tablespace ranges from 128 MB to 32 GB.
▪ datafile_tempfile_spec
Multiple data files can be separated by commas (,). Currently, data
files cannot contain Chinese characters.
▪ file_name
Specifies the absolute path (path and file name) of a new data file. If
a relative path is specified, the file is stored in the data directory by
default.
▪ SIZE integer[ K | M | G ]
Specifies the size of each data file.
▪ autoextend_clause
If AUTOEXTEND is set to ON, you can manually specify the
extension size.
If AUTOEXTEND is not set, automatic extension is disabled by
default.
If AUTOEXTEND OFF is set, automatic extension is disabled.
If AUTOEXTEND ON is set, you can set the following parameters:
○ NEXT: extension size. If this parameter is not set, the default
value 16MB is used.
○ MAXSIZE: extension upper limit. If this parameter is not set or is
set to UNLIMITED, the extension upper limit for the undo
tablespace is 32 GB and that for other tablespaces is 8 TB. If this
parameter is set, the value for the undo tablespace cannot be
greater than 32 GB and that for other tablespaces cannot be
greater than 8 TB. If both MAXSIZE and NEXT are set, the value
of MAXSIZE must be no less than that of NEXT.
Examples
● Create the human database.
CREATE DATABASE human CONTROLFILE
('cntl1', 'cntl2', 'cntl3')
LOGFILE
('log1' size 2G, 'log2' size 2G, 'log3' size 2G, 'log4' size 2G, 'log5' size 2G, 'log6' size 2G)
SYSTEM TABLESPACE DATAFILE 'system' size 1G
UNDO TABLESPACE DATAFILE 'undo' size 1G
DEFAULT TABLESPACE DATAFILE 'user1' size 1G autoextend on next 32M, 'user2' size 1G autoextend on
next 32M, 'user3' size 1G autoextend on next 32M, 'user4' size 1G autoextend on next 32M, 'user5' size 1G
autoextend on next 32M
TEMPORARY TABLESPACE TEMPFILE 'temp1' size 160M autoextend on next 32M, 'temp2' size 160M
autoextend on next 32M ARCHIVELOG;
Description
CREATE INDEX creates an index on a specified table. Indexes are primarily used to
enhance database query performance (though inappropriate use may compromise
performance).
Precautions
● Indexes cannot be created on columns of the CLOB, BLOB, or IMAGE type.
● To run this statement, you must have the CREATE INDEX or CREATE ANY
INDEX system permission. Common users cannot create objects of system
users.
● A maximum of 16 columns are allowed for a combination index and the total
length cannot exceed 3900 bytes. It is calculated based on data types with the
maximum length.
● Partitioned indexes can be created only on partitioned tables. The number of
index partitions must be the same as the number of table partitions.
Otherwise, an error is reported.
● Function-based indexes can be created for the UPPER and TO_CHAR
functions. The function parameter can only be one column, and the function-
based indexes cannot be converted into constraints.
● Indexes cannot be created during database restart or rollback.
Syntax
CREATE [ UNIQUE ] INDEX [IF NOT EXISTS ] [ schema_name. ]index_name ON table_index_clause
[ CRMODE { PAGE | ROW } ]
● table_index_clause:
[ schema_name. ]table_name ( { [function_name()]column_name [ ASC | DESC ] } [ ,... ] )
index_attributes
– index_attributes:
[
[ physical_attributes_clause ]
[ TABLESPACE {tablespace_name} ]
[index_partitioning_clauses]
[ ONLINE ]
]
▪ physical_attributes_clause:
INITRANS integer
▪ index_partitioning_clauses:
LOCAL [ ( { PARTITION partition_name [ TABLESPACE tablespace_name ]
[ INITRANS integer ]
[ PCTFREE integer ]
} [ , ... ]
)
]
Parameter Description
● UNIQUE
Creates a UNIQUE index. In this way, the system checks whether new values
are unique in the index column. Attempts to insert or update data which
would result in duplicate values in the index column will generate an error.
Currently, only B-tree supports UNIQUE.
● IF NOT EXISTS
Does not throw an error if an index with the same name already exists; the
statement is skipped instead.
● [schema_name.]
Specifies a schema name. This parameter can be omitted if the schema name
is the same as the schema name of the table.
● index_name
Specifies the name of an index to be created.
● table_name
Specifies the name of the table (optionally schema-qualified) where an index
is to be created.
● function_name()
Specifies the name of a function based on which an index is created.
● column_name
Specifies the name of a column on which an index is to be created.
● ASC
Specifies an ascending (default) sort order.
● DESC
Specifies a descending sort order.
Currently, only an ascending order is supported even if DESC is specified.
● INITRANS
Specifies the initial size of an index transaction.
The value ranges from 1 to 255.
● TABLESPACE tablespace_name
Specifies the tablespace for an index. If no tablespace is specified, the default
tablespace is used.
● index_partitioning_clauses
Specifies the partial index of a partitioned table.
● LOCAL
Specifies a local partitioned index. The index is equipartitioned with the table
and the index partitioning is automatically maintained when partitions are
dropped or truncated. This ensures that the index always remains
equipartitioned with the table.
● PCTFREE
Specifies how much space should be left in a database block for inserting
indexes. The unit is % and the value range is [8, 80].
● ONLINE
Creates an index online.
Generally, exclusive locks are added to a table for index creation (DDL) on this
table, preventing concurrent UPDATE, DELETE, and INSERT operations and
thereby decreasing throughput of system catalog transactions. Therefore,
online index creation and rebuilding are used, during which share locks are
added to the table (exclusive locks are temporarily used only at the beginning
and end of the creation or rebuilding), allowing concurrent UPDATE, DELETE,
and INSERT operations and thereby ensuring the proper running of online
services.
● CRMODE { PAGE | ROW }
Specifies the CR mode of an index. If this parameter is not specified, the CR
mode of the table is used.
Examples
● Create an index online on the posts table.
-- Delete the posts table.
DROP TABLE IF EXISTS posts;
-- Create the common table posts.
CREATE TABLE posts(post_id CHAR(2) NOT NULL, post_name CHAR(6) PRIMARY KEY, basic_wage INT,
basic_bonus INT);
-- Create the idx_posts index.
CREATE INDEX idx_posts ON posts(post_id ASC, post_name) ONLINE;
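● Create a function-based index (sketch). UPPER is one of the functions noted
above as supported for function-based indexes; idx_posts_upper is an
illustrative name.
-- Create a function-based index on the uppercase form of post_id.
CREATE INDEX idx_posts_upper ON posts(UPPER(post_id));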
Description
CREATE PROFILE creates a profile to associate with a user.
Precautions
● To run this statement, you must have the CREATE PROFILE system
permission.
Syntax
CREATE PROFILE profile_name LIMIT password_parameters [ ... ]
password_parameters:
{ { FAILED_LOGIN_ATTEMPTS
| PASSWORD_LIFE_TIME
| PASSWORD_LOCK_TIME
| PASSWORD_GRACE_TIME
| PASSWORD_REUSE_TIME
| PASSWORD_REUSE_MAX
| SESSIONS_PER_USER
}
{ expr | UNLIMITED | DEFAULT }
}
Parameter Description
● profile_name
Specifies a profile name.
If the profile name contains spaces or special characters other than _#$,
enclose the name with double quotation marks ("") or backquotes (``).
● LIMIT
Specifies resource limitations in a profile for a user.
● FAILED_LOGIN_ATTEMPTS
Specifies the maximum number of login attempts allowed before an account
is locked.
Default value: 10
● PASSWORD_LIFE_TIME
Specifies the maximum number of days that a password can be used.
The default value is 180 (unit: day).
● PASSWORD_LOCK_TIME
Specifies the number of days an account will be locked after the specified
number of consecutive failed login attempts.
The default value is 1 (unit: day).
● PASSWORD_GRACE_TIME
Specifies the grace period (days), that is, the duration from the time when the
database sends a warning to the time when the password becomes invalid. If
the database password is not changed during this period, the password
becomes invalid after the grace period expires.
Examples
Create the pro_common profile.
-- Create the pro_common profile.
CREATE PROFILE pro_common LIMIT PASSWORD_GRACE_TIME 10 PASSWORD_LOCK_TIME DEFAULT
PASSWORD_LIFE_TIME UNLIMITED;
-- Delete the profile.
DROP PROFILE pro_common CASCADE;
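● Create a profile that limits login attempts and concurrent sessions (sketch;
pro_strict and the limit values are illustrative).
-- Create the pro_strict profile.
CREATE PROFILE pro_strict LIMIT FAILED_LOGIN_ATTEMPTS 3 SESSIONS_PER_USER 5;
-- Delete the profile.
DROP PROFILE pro_strict CASCADE;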
Description
CREATE ROLE creates a database role.
STATISTICS: Has the permission to create, delete, and view WSR snapshots and
generate WSR reports, but does not have the permission to set WSR parameters.
Precautions
● To run this statement, you must have the CREATE ROLE system permission.
● A role name cannot be the same as any existing user name or role name in
the database. Otherwise, an error message will be displayed.
Syntax
CREATE ROLE role_name [ IDENTIFIED BY password [ ENCRYPTED ]]
Parameter Description
● role_name
Specifies the name of a role to be created.
If the role name contains spaces or special characters other than _#$, enclose
the name with double quotation marks ("") or backquotes (``).
● IDENTIFIED BY
Specifies the password for a role to be created.
● password
This is a reserved parameter.
● ENCRYPTED
Specifies that the password is ciphertext. In this case, the password will not be
verified.
Roles created in this mode use plaintext passwords for login. Therefore, you
are not advised to specify ENCRYPTED when creating a role.
Examples
Create the developers role.
-- Delete the developers role.
DROP ROLE developers;
-- Create the developers role.
CREATE ROLE developers;
Description
CREATE SEQUENCE creates a sequence.
Precautions
● Sequence values are generated based on BIGINT. Therefore, the value of a
sequence is an integer with a maximum of eight bytes (-2^63 to 2^63-1). If the
minimum or maximum value specified for a sequence exceeds the range, the
system uses the upper or lower boundary of the range as the minimum or
maximum of this sequence.
● Sequences do not apply to concurrent-session scenarios because sequences
may not be generated in order due to concurrency.
● To run this statement, you must have the CREATE SEQUENCE, CREATE ANY
SEQUENCE, or ALL PRIVILEGES system permission.
● After a sequence is created, you can use the NEXTVAL and CURRVAL functions
to obtain values of the sequence. Common users cannot create objects of
system users.
Syntax
CREATE SEQUENCE [ schema_name. ]sequence_name
[ INCREMENT BY bigint
| START WITH bigint
| { MAXVALUE bigint | NOMAXVALUE }
| { MINVALUE bigint | NOMINVALUE }
| { CYCLE | NOCYCLE }
| { CACHE bigint | NOCACHE }
| { ORDER | NOORDER }
] [ ... ]
Parameter Description
● [ schema_name. ]
Specifies a username. If this parameter is not specified, the current login user is
used by default.
● sequence_name
Specifies the name of a sequence to be created. It is optionally schema-
qualified.
● INCREMENT BY bigint
Specifies a sequence step.
The value is an integer other than 0. The default value is 1.
– If the value is a positive integer, an incremental sequence is generated.
– If the value is a negative integer, a decremental sequence is generated.
● START WITH bigint
Specifies the start value of a sequence.
start: start value of a sequence.
– The default start value of an incremental sequence is its MINVALUE.
– The default start value of a decremental sequence is its MAXVALUE.
● MAXVALUE bigint | NOMAXVALUE
MAXVALUE: maximum value of a sequence.
NOMAXVALUE: A sequence does not have a maximum value.
If this clause is not declared or NOMAXVALUE is declared in the clause, the
default value is used.
– The default maximum value for an incremental sequence is 2^63-1.
– The default maximum value for a decremental sequence is -1.
● MINVALUE bigint | NOMINVALUE
MINVALUE: minimum value of a sequence.
NOMINVALUE: A sequence does not have a minimum value.
If this clause is not declared or NOMINVALUE is declared in the clause, the
default value is used.
Examples
Create the seq_auto_extend sequence.
-- Delete the seq_auto_extend sequence.
DROP SEQUENCE IF EXISTS seq_auto_extend;
-- Create the seq_auto_extend sequence starting with 10, and with INCREMENT BY set to 2, MAXVALUE
set to 200, and CYCLE specified.
CREATE SEQUENCE seq_auto_extend START WITH 10 MAXVALUE 200 INCREMENT BY 2 CYCLE;
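● Obtain values of the sequence (sketch). As noted in the precautions,
NEXTVAL and CURRVAL return sequence values; depending on the client, a
dummy table such as SYS_DUMMY may be required in a FROM clause.
-- Obtain the next value of the sequence (the start value 10 on the first call).
SELECT seq_auto_extend.NEXTVAL;
-- Obtain the current value of the sequence.
SELECT seq_auto_extend.CURRVAL;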
Description
CREATE SYNONYM creates a synonym.
Precautions
● When creating a synonym, specify the object for which the synonym is to be
created.
● The specified object must exist.
● You can create a synonym only for a table or view. When you create a
synonym for other objects, the error message "synonym object %s.%s is not
table or view type" is displayed.
● If an object is modified or deleted, an error will be reported when its synonym
is used.
● You can query the MY_SYNONYMS and ADM_SYNONYMS views to view
synonyms.
● To run this statement, you must have the CREATE SYNONYM, CREATE ANY
SYNONYM, or ALL PRIVILEGES permission. Common users cannot create
objects of system users.
● If you specify a schema name when creating a public synonym, an error will
be displayed indicating the inconsistency.
● To create a public synonym, you must have the CREATE PUBLIC SYNONYM
system permission. Otherwise, an error will be reported.
Syntax
CREATE [ OR REPLACE ] [ PUBLIC ] SYNONYM [ schema_name. ]synonym_name FOR
[ schema_name. ]object_name
Parameter Description
● OR REPLACE
Replaces a synonym if it exists when you create it.
● [ schema_name. ]
Specifies a user name. If this parameter is not specified, the current login user
is used by default.
● PUBLIC
Creates a public synonym. Other users can access the synonym without being
authorized but the system still checks the specified object.
You can create a synonym with the same name as a public synonym.
● synonym_name
Specifies the name of a synonym to be created. Specify the name by following
the database object naming convention.
● object_name
Specifies the name of an object for which a synonym is to be created. Specify
the name by following the database object naming convention. The number
of synonyms that can be created for a table or view is not limited.
The value is a table or view name.
Examples
● Create a public synonym and a private synonym for the privilege_view view.
-- Delete the privilege table.
DROP TABLE IF EXISTS privilege;
-- Create the privilege table.
CREATE TABLE privilege(staff_id INT PRIMARY KEY, privilege_name VARCHAR(64) NOT NULL,
privilege_description VARCHAR(64), privilege_approver VARCHAR(10));
-- Create the view privilege_view.
CREATE OR REPLACE VIEW privilege_view AS SELECT staff_id, privilege_name from privilege;
-- Create a public synonym for the privilege_view view.
CREATE OR REPLACE PUBLIC SYNONYM pri_vi for privilege_view;
● Create a public synonym and a private synonym for the privilege table.
-- Create a public synonym for the privilege table.
CREATE OR REPLACE PUBLIC SYNONYM pri for privilege;
-- Create a private synonym for the privilege table.
CREATE OR REPLACE SYNONYM pri for privilege;
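● Query a table through its synonym (sketch). A synonym can be used
wherever the underlying object name is expected.
-- Query the privilege table through the pri synonym.
SELECT * FROM pri;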
Description
CREATE TABLE creates a table.
Precautions
● To create a table for the current user, you must have the CREATE TABLE
system permission. To create a table for other users, you must have the
CREATE ANY TABLE system permission. Common users cannot create objects
of system users.
● Ensure that the storage space is sufficient.
● You must specify a table name and column names (including the data type
and size of each column).
● An incremental sequence supports only the INT and BIGINT data types. Only
one incremental sequence can be created in a table, and the sequence
column must be the primary key or unique index of this table.
● Currently, character sets and row_movement_clause support only SQL parsing.
● If no column is specified when a foreign key is created, the primary key of the
parent table is used by default. If the parent table has no primary key, an
error will be reported.
● In a temporary table, a BLOB column is defined as RAW(8000) and a CLOB
column is defined as VARCHAR(8000 BYTE).
● A local temporary table supports only ON COMMIT PRESERVE ROWS and
the table name must start with a number sign (#). In addition,
LOCAL_TEMPORARY_TABLE_ENABLED=TRUE must be set to specify whether
to enable local temporary tables.
● A global temporary table can be a transaction- or session-level temporary
table.
– ON COMMIT PRESERVE ROWS: If a session ends, the temporary table
data is deleted but the table structure remains.
– ON COMMIT DELETE ROWS: If a transaction ends, the temporary table
data is deleted but the table structure remains.
– If the ON COMMIT {DELETE | PRESERVE} ROWS clause is not specified,
a transaction-level temporary table is created by default.
Syntax
[ON COMMIT {DELETE | PRESERVE} ROWS] can be used only for temporary
tables.
TABLESPACE tablespace_name cannot be used for temporary tables.
● relational_properties:
AUTO_INCREMENT and DEFAULT cannot be used together.
( {column_name datatype_name [ DEFAULT expr [ ON UPDATE expr ] ] [ AUTO_INCREMENT ]
[ COMMENT 'string' ] [ COLLATE collation_name ] [ inline_constraint ]} [, ... ] )
– inline_constraint:
references_clause cannot be used for temporary tables.
PRIMARY KEY and UNIQUE cannot be used together.
[ CONSTRAINT constraint_name ] { [ NOT ] NULL
| UNIQUE
| PRIMARY KEY
| CHECK( expr )
| references_clause
}[...]
▪ references_clause:
REFERENCES [ schema_name. ]object_table [( column_name )] [ON DELETE { CASCADE |
SET NULL } ]
– out_of_line_constraint:
[ CONSTRAINT constraint_name ] { UNIQUE( column_name [ , ... ] ) [ using_index_clause ]
| PRIMARY KEY( column_name [ , ... ] ) [ using_index_clause ]
| CHECK( expr )
| FOREIGN KEY( column_name [ , ... ] ) references_clause_ex
}[ ,...]
▪ using_index_clause:
USING INDEX [ INITRANS integer
| TABLESPACE tablespace_name
| LOCAL [ ( { PARTITION partition_name [ TABLESPACE tablespace_name
| INITRANS integer
| PCTFREE integer
]
} [ , ... ]
)
]
] [ ...]
▪ references_clause_ex:
REFERENCES [ schema_name. ]object_table_name [( column_name [ , ... ] )]
[ ON DELETE { CASCADE | SET NULL } ]
● AS query:
SELECT [SQL_CALC_FOUND_ROWS] [ DISTINCT ] expression
[ [ AS ] name ] [ , ... ]
[ FROM table_reference [ [AS] alias ] [ , ... ] ]
[ WHERE { condition | [ NOT ] EXISTS ( correlated subquery ) } ]
[ [START WITH condition ] CONNECT BY [ NOCYCLE ] [ PRIOR ] condition ]
[ GROUP BY { column_name | number } [ , ... ] ]
[ HAVING condition [ , ... ] ]
[ { UNION [ ALL ] } select ]
[ ORDER BY { column_name | number } [ ASC | DESC ] [ NULLS FIRST | NULLS LAST ] [ , ... ] ]
[ LIMIT [ offset_expr, ] count_expr | LIMIT count_expr OFFSET offset_expr | OFFSET offset_expr
[ LIMIT count_expr ] ]
● physical_properties:
{ segment_attributes_clause
}
– segment_attributes_clause:
TABLESPACE tablespace_name cannot be used for temporary tables.
{ physical_attributes_clause
| TABLESPACE tablespace_name
} [ ... ]
▪ physical_attributes_clause:
{ PCTFREE integer
| INITRANS integer
| MAXTRANS integer
} [ ...]
● table_properties:
[ column_properties ]
[ AUTO_INCREMENT [ = ] value ]
[ AS subquery]
– column_properties:
[ LOB_storage_clause ]
[ APPENDONLY { ON | OFF } ]
▪ LOB_storage_clause:
LOB ( LOB_item ) STORE AS { [ ( LOB_parameters ) ] }
○ LOB_parameters:
[ TABLESPACE tablespace_name
| { ENABLE | DISABLE } STORAGE IN ROW
][ ... ]
Parameter Description
● GLOBAL
Creates a global table.
● TEMPORARY
Creates a temporary table.
● IF NOT EXISTS
Does not throw an error if a table already exists.
● [schema_name.]table_name
Specifies the name of a table to be created. The table name must be unique
for a user. The name of a local temporary table must start with a number sign
(#). When creating a local temporary table, set
LOCAL_TEMPORARY_TABLE_ENABLED to TRUE and do not specify GLOBAL.
● relational_properties
Specifies table properties, including column names, data types, inline
constraints, and out-of-line constraints.
● DEFAULT expr [ON UPDATE expr]
Specifies an expression used to calculate the default value of a column. In
DDL, DEFAULT specifies a constant expression; the data type of this constant
will be checked for data compatibility.
– ON UPDATE expr specifies the default update value to be used if
DEFAULT is not set. It is compatible with the mainstream syntax in the
industry.
● AUTO_INCREMENT
Specifies an auto-increment column.
– If the inserted value is greater than the existing numbers, the value is
inserted into the column and the next number increments from this new
value. That is, a sequence can increment while skipping some numbers.
– If an increment sequence is updated by using the UPDATE statement and
a generated value duplicates an existing number, an error will be
reported. If the value is greater than the existing number, the existing
number is replaced with the value and the sequence proceeds with
incrementing from this value.
– To prevent overflow, you are advised to set the maximum value of a
sequence to 0x7FFFFFFFFFFFFFFF.
● COMMENT 'string'
Adds a comment for a column. You can view comments by querying the
USER_COL_COMMENTS system view.
● COLLATE collation_name
Specifies a collation rule for a column. The rule specifies how data is sorted
and compared.
collation_name can be set to the following parameters:
– UTF8_BIN: applicable to the UTF8 character set. All characters are
considered as binary strings and are compared from the most significant
bit to the least significant bit. The characters to be compared are case-
sensitive.
– UTF8_GENERAL_CI: applicable to the UTF8 character set. The characters
to be compared are case-insensitive.
– UTF8_UNICODE_CI: applicable to the UTF8 character set. The characters
to be compared are case-insensitive.
– GBK_BIN: applicable to the GBK character set. The characters to be
compared are case-sensitive.
● DISTINCT
Deduplicates column data. Single- or multi-column deduplication is
supported.
● [AS] name
Specifies the alias of a column to be printed.
● FROM table_reference [ [AS] alias ] [ , ... ]
Specifies a table to be queried.
– table_reference
Specifies the referenced table in a query. If it is a temporary table that
contains LOB columns, inline storage and out-of-line storage cannot be
used.
– [AS] alias
Specifies the alias of a table to be queried, facilitating join queries.
● WHERE { condition | [ NOT ] EXISTS ( correlated subquery )
Specifies conditions used for filtering rows.
correlated subquery
Specifies a correlated subquery.
● START WITH condition CONNECT BY [ NOCYCLE ] [ PRIOR ] condition
Specifies a clause for querying tree-structured data. If a table contains tree-
structured data, you can use this clause to query data.
– START WITH
Specifies the row that is the root of a tree-structured data query.
– CONNECT BY
Specifies the relationship between parent rows and child rows of a tree-
structured data query. It is used in conjunction with PRIOR.
– NOCYCLE
Instructs the database to return rows from a query even if CONNECT BY
LOOP exists in the data.
– PRIOR
PRIOR is a unary operator and has the same precedence as the unary +
and - arithmetic operators. The PRIOR keyword can be on either side of
the equal sign (=). If PRIOR is placed together with the parent ID, the
query traverses data in the direction of parent nodes. If it is placed
together with the child ID, the query traverses data in the direction of
child nodes.
● GROUP BY { column_name | number } [ , ... ]
Groups data based on attributes. Data is sorted before being grouped.
– column_name
Specifies the column based on which data is grouped.
– number
Specifies the sequence number of column_name in the table.
● HAVING condition [ , ... ]
Specifies the conditions used for filtering the result set returned by GROUP
BY.
▪ ENABLE
Inline storage is used.
▪ DISABLE
Out-of-line storage is used.
● CRMODE { PAGE | ROW }
Specifies the CR mode of a table. If this parameter is not specified, the CR
mode of the current instance is used.
– CRMODE PAGE indicates the page-level MVCC.
– CRMODE ROW specifies the row-level MVCC.
● NOLOGGING
Specifies that a table is a nologging table. Different from a common table, a
nologging table does not record logs for better performance. As a result, data
in a nologging table cannot be recovered if a fault occurs.
Examples
● Create a global session-level temporary table sections.
-- Delete the sections table.
DROP TABLE IF EXISTS sections;
-- Create the sections table.
CREATE GLOBAL TEMPORARY TABLE sections
(
section_id NUMBER(4) not null,
section_name VARCHAR2(30),
manager_id NUMBER(6),
place_id NUMBER(4)
) ON COMMIT PRESERVE ROWS;
-- Insert record 1.
insert into sections (section_id, section_name, manager_id, place_id)
values (10, 'Administration', 200, 1700);
-- Insert record 2.
insert into sections (section_id, section_name, manager_id, place_id)
values (20, 'Marketing', 201, 1800);
-- Insert record 3.
insert into sections (section_id, section_name, manager_id, place_id)
values (30, 'Purchasing', 114, 1700);
-- Commit the transaction.
COMMIT;
-- Display place_id in DISTINCT mode.
SELECT DISTINCT place_id FROM sections;
● Create the education table.
-- Delete the education table.
DROP TABLE IF EXISTS education;
-- Create the education table.
CREATE TABLE education(staff_id INT, highest_degree CHAR(8) NOT NULL, graduate_school
VARCHAR(64), graduate_date DATETIME, education_note VARCHAR(70));
● Create the training table.
-- Delete the training table.
DROP TABLE IF EXISTS training;
-- Create the training table.
CREATE TABLE training
(
staff_id INT NOT NULL,
course_name VARCHAR(50),
course_start_date DATETIME,
course_end_date DATETIME,
exam_date DATETIME,
score INT
);
● Create a temporary table that contains BLOB and CLOB columns defined as
RAW(8000) and VARCHAR(8000), respectively.
-- Delete the STAFFS table.
DROP TABLE IF EXISTS STAFFS;
-- Create the STAFFS table.
CREATE GLOBAL TEMPORARY TABLE STAFFS
(
staff_id INT NOT NULL,
course_name BLOB,
COMMENT CLOB
);
-- Query the STAFFS table.
DESC STAFFS;
Name Null? Type
----------------------------------- -------- ------------------------------------
STAFF_ID NOT NULL BINARY_INTEGER
COURSE_NAME RAW(8000)
COMMENT VARCHAR(8000 BYTE)
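● Create a table with an incremental sequence (sketch; tickets and its columns
are illustrative names). As described in the precautions, only one
AUTO_INCREMENT column of the INT or BIGINT type is allowed, and it must
be the primary key or a unique index of the table.
-- Delete the tickets table.
DROP TABLE IF EXISTS tickets;
-- Create the tickets table with an auto-increment primary key.
CREATE TABLE tickets(ticket_id BIGINT AUTO_INCREMENT PRIMARY KEY, owner_name VARCHAR(16));
-- ticket_id is generated automatically when it is omitted.
INSERT INTO tickets(owner_name) VALUES('LIPENG');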
Description
CREATE TABLESPACE creates a tablespace.
Precautions
● To run this statement, you must have the CREATE TABLESPACE system
permission.
● Do not set the data file path to the run log directory, log archiving directory,
or any directory that may be cleared.
● AUTOOFFLINE can be set only for user tablespaces.
Syntax
CREATE TABLESPACE tablespace_name [ EXTENTS integer ]
DATAFILE { datafile_tempfile_spec [, ... ] } [NOLOGGING] [autooffline_clause]
● datafile_tempfile_spec:
'file_name' SIZE integer [ K | M | G ] [ autoextend_clause ]
– autoextend_clause:
AUTOEXTEND { OFF
| ON [ NEXT integer [ K | M | G ] ]
[ MAXSIZE integer [ K | M | G ] | UNLIMITED ]
}
● autooffline_clause clause:
AUTOOFFLINE [ ON | OFF ]
Parameter Description
● tablespace_name
Specifies a tablespace name. The value must be different from existing
tablespace names. Otherwise, an error is reported.
● EXTENTS
Specifies the number of pages in an extent.
The value must be an integral power of 2 within the range [8, 8192]. If
EXTENTS is not specified, an extent contains 8 pages by default.
Increasing the number of pages in a single extent can improve I/O
performance. However, if there are small tables in the tablespace and the
table data volume does not reach the size of an extent, space will be wasted.
● DATAFILE
Specifies data files.
● datafile_tempfile_spec
Multiple data files are separated by commas (,). Data file names cannot contain
Chinese characters.
– file_name
Specifies the absolute path (path and file name) of a new data file. If a
relative path is specified, the file is stored in the database data directory
by default.
– SIZE integer[ K | M | G ]
Specifies the size of each data file.
K: The unit of the file size is KB.
M: The unit of the file size is MB.
G: The unit of the file size is GB.
The value range for the undo tablespace is [1 MB, 32 GB) and that for
other tablespaces is [1 MB, 8 TB].
– autoextend_clause
If AUTOEXTEND is set to ON, you can manually specify the extension size.
If both MAXSIZE and NEXT are set, the value of MAXSIZE must be
no less than that of NEXT.
● NOLOGGING
Specifies that a tablespace is a nologging tablespace. Tables in this tablespace
are nologging tables. A nologging table can be stored only in a nologging
tablespace.
– CREATE DATABASE creates tablespaces temp2 and temp2_undo by
default to store common data and undo data of nologging tables,
respectively.
– Globally, temp2_undo is the only tablespace that stores undo data of
nologging tables. This tablespace cannot be deleted or renamed.
– You can create multiple tablespaces for storing data of nologging tables.
– You can use DV_TABLESPACES to display nologging tablespaces. The
value in the temporary column for nologging tablespaces is true.
● autooffline_clause
Specifies whether automatic offline is enabled for tablespaces. If
AUTOOFFLINE is set to ON, automatic offline is enabled for user tablespaces.
When a file fails to be opened during database startup, the user tablespace is
automatically brought offline. If a user tablespace is faulty after the database
is started, the tablespace is not automatically brought offline.
Examples
● Create the vedio_space tablespace with the size of 32 MB.
CREATE TABLESPACE vedio_space DATAFILE 'vedio_dfile1' SIZE 32M;
● Create the image_space tablespace with a size of 32 MB, where an extent
contains 128 pages. When the tablespace is full, it automatically extends by
the specified size.
CREATE TABLESPACE image_space EXTENTS 128 DATAFILE 'image_dfile1' SIZE 32M AUTOEXTEND ON
NEXT 10M;
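● Create a user tablespace with automatic offline enabled. (A minimal sketch of the autooffline_clause above; the tablespace and file names are illustrative, not taken from this manual.)
CREATE TABLESPACE user_space DATAFILE 'user_dfile1' SIZE 32M AUTOOFFLINE ON;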
Context
GaussDB 100 supports range partitioning, hash partitioning, list partitioning, and
interval partitioning.
● In range partitioning, a table is partitioned based on ranges defined by values
in one or more columns, with no overlap between the ranges of values
assigned to different partitions. Each range has a dedicated partition for data
storage.
Precautions
● To create a table for the current user, you must have the CREATE TABLE
system permission. To create a table for other users, you must have the
CREATE ANY TABLE system permission. Common users cannot create objects
of system users.
Restrictions on creating a partitioned table are as follows:
● A partition key contains a maximum of 16 columns.
● Partition keys support the following data types: INT, INTEGER, BIGINT,
NUMBER, DECIMAL, REAL, DOUBLE, NUMERIC, VARCHAR, VARCHAR2, CHAR,
BINARY, RAW, VARBINARY, DATE, DATETIME, and TIMESTAMP.
● Currently, range partitioning, list partitioning, hash partitioning, and interval
partitioning are supported.
● A maximum of 4,194,304 interval partitions are supported. If the total
number of interval partitions exceeds 4,194,304, an error is reported.
Syntax
CREATE TABLE [ IF NOT EXISTS ][ schema_name. ]table_name
[ relational_properties ]
[ physical_properties ]
[ TABLESPACE tablespace_name]
[ table_properties ]
● relational_properties:
AUTO_INCREMENT and DEFAULT cannot be used together.
– inline_constraint:
PRIMARY KEY and UNIQUE cannot be used together.
[ CONSTRAINT constraint_name ] { [ NOT ] NULL
| UNIQUE
| PRIMARY KEY
| CHECK( expr )
| references_clause
}[...]
▪ references_clause:
REFERENCES [ schema_name. ]object_table_name ( column_name )
[ ON DELETE { CASCADE | SET NULL } ]
– out_of_line_constraint:
[ CONSTRAINT constraint_name ] { UNIQUE( column_name [ , ... ] ) [ using_index_clause ]
| PRIMARY KEY( column_name [ , ... ] ) [ using_index_clause ]
| CHECK( expr )
| FOREIGN KEY( column_name [ , ... ] ) references_clause_ex
}
▪ using_index_clause:
USING INDEX
[ INITRANS integer
| TABLESPACE tablespace_name
| LOCAL [ ( { PARTITION partition_name [ TABLESPACE tablespace_name
| INITRANS integer
| PCTFREE integer
]
} [ , ... ]
)
]
] [ ...]
▪ references_clause_ex:
REFERENCES [ schema_name. ]object_table_name ( column_name [ , ... ] ) [ ON DELETE
{ CASCADE | SET NULL } ]
● physical_properties:
{ segment_attributes_clause }
– segment_attributes_clause:
{ physical_attributes_clause
| TABLESPACE tablespace_name
} [ ... ]
▪ physical_attributes_clause:
[ { PCTFREE integer
| INITRANS integer
} [ ...]
]
● table_properties:
[ column_properties ]
[ table_partitioning_clauses ]
[ AUTO_INCREMENT [ = ] value ]
– column_properties:
[ LOB_storage_clause ]
[ APPENDONLY { ON | OFF } ]
▪ LOB_storage_clause:
LOB ( LOB_item ) STORE AS { LOB_segname [ ( LOB_parameters ) ] }
○ LOB_parameters:
[ TABLESPACE tablespace_name
| { ENABLE | DISABLE } STORAGE IN ROW
] [ ... ]
– table_partitioning_clauses:
{ range_partitioning
| list_partitioning
| hash_partitioning
| interval_partitioning
}
▪ range_partitioning:
PARTITION BY RANGE ( partition_key [ , ... ] )
( { PARTITION partition_name VALUES LESS THAN ( { partition_value
| MAXVALUE
} [ , ... ]
)
[ TABLESPACE tablespace_name ]
[physical_attributes_clause]
} [ , ... ]
)
▪ list_partitioning:
PARTITION BY LIST ( partition_key [ , ... ] )
( { PARTITION partition_name VALUES ( partition_value [ , ... ]
|[ DEFAULT ]
)
[ TABLESPACE tablespace_name ]
[ physical_attributes_clause ]
} [ , ... ]
)
▪ hash_partitioning:
PARTITION BY HASH ( partition_key [ , ... ] )
{ ( { PARTITION partition_name
[ TABLESPACE tablespace_name ]
[ physical_attributes_clause ]
} [ , ... ]
)
| PARTITIONS partition_count [ STORE IN (tablespace_name [ , ... ]) ]
}
▪ interval_partitioning:
PARTITION BY RANGE ( partition_key ) INTERVAL ( interval_value )
( { PARTITION partition_name VALUES LESS THAN
( partition_value )
[ TABLESPACE tablespace_name ]
[ physical_attributes_clause ]
} [ , ... ]
)
Parameter Description
● IF NOT EXISTS
Does not throw an error if a table with the same name already exists. In this
case, the existing table is kept unchanged and no new table is created.
● [schema_name.]table_name
Specifies the name of a table to be partitioned. The table name must be
unique for a user.
● tablespace_name
Specifies the tablespace of a range partition.
– UTF8_BIN: applicable to the UTF8 character set. Characters are compared
by binary value from the most significant bit to the least significant bit.
The characters to be compared are case-sensitive.
– UTF8_GENERAL_CI: applicable to the UTF8 character set. The characters
to be compared are case-insensitive.
– UTF8_UNICODE_CI: applicable to the UTF8 character set. The characters
to be compared are case-insensitive.
– GBK_BIN: applicable to the GBK character set. The characters to be
compared are case-sensitive.
– GBK_CHINESE_CI: applicable to the GBK character set. The characters to
be compared are case-insensitive.
● inline_constraint
Adds a column constraint. It is included in the column definition. Currently,
the NULL, NOT NULL, UNIQUE, PRIMARY KEY, UNIQUE INDEX, FOREIGN KEY,
and CHECK constraints are supported.
● [ NOT ] NULL
Specifies whether a column can hold NULL values.
– NOT NULL: The column cannot hold NULL values.
– NULL: The column can hold NULL values.
● UNIQUE
Specifies that values in a column must be unique. NULL values are allowed.
The UNIQUE constraint can be added to multiple columns in a table.
● PRIMARY KEY
Specifies a primary key including one or more columns that uniquely identify
a row in the table. NULL values are not allowed. Only one primary key can be
created for a table.
● CHECK( expr )
Specifies rules for checking values in a column.
● references_clause
Adds a FOREIGN KEY constraint. schema_name indicates the owner of the
referenced table, and object_table indicates the name of the referenced table.
column_name indicates the referenced column.
If no column is specified for a parent table, the primary key of the parent
table is used by default. If the primary key of the parent table does not exist,
an error will be reported.
● ON DELETE { CASCADE | SET NULL }
Specifies how foreign key values in a child table are handled when primary or
unique values in the parent table are deleted.
– CASCADE
The foreign key values will be deleted.
– SET NULL
The foreign key values will be converted to NULL.
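The references_clause and ON DELETE options above can be sketched as follows. (The dept and emp tables and the constraint name are illustrative, not taken from this manual.)
-- Create the parent table with a primary key.
CREATE TABLE dept(dept_id INT PRIMARY KEY, dept_name VARCHAR(30));
-- Create the child table. Its rows are deleted when the referenced dept row is deleted.
CREATE TABLE emp
(
emp_id INT PRIMARY KEY,
dept_id INT,
CONSTRAINT fk_emp_dept FOREIGN KEY(dept_id) REFERENCES dept(dept_id) ON DELETE CASCADE
);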
● out_of_line_constraint
Adds a table constraint. It is included as a separate line in the table
definition. Currently, the UNIQUE, PRIMARY KEY, FOREIGN KEY, and CHECK
constraints are supported.
● FOREIGN KEY
Specifies a foreign key.
● TABLESPACE tablespace_name
Specifies a tablespace.
● LOCAL
Creates local indexes. It is a default attribute.
● PCTFREE integer
Specifies the percentage of space reserved for a block. If the percentage of
available space of a data block is less than this value, you can only update
data of this block and cannot insert data into it. The value range is [8, 80]
and the default value is 10.
● INITRANS
Specifies the number of transaction slots in an initial data block.
● table_properties
Specifies table attributes.
● AUTO_INCREMENT [=] value
Specifies a start value for an incremental sequence.
If no value is specified, the sequence increments from 1.
● APPENDONLY { ON | OFF }
– APPENDONLY ON indicates that a new page will be used when different
threads insert data into the same table even if there are pages that are
not full. In this case, the page lock waiting duration is shortened but
page space is wasted.
● { ENABLE | DISABLE } STORAGE IN ROW
– ENABLE: Inline storage is used for LOB data.
– DISABLE: Out-of-line storage is used for LOB data.
● range_partitioning
Creates range partitions.
● partition_key
Specifies the column or columns that make up a partition key. The length of
each column cannot exceed 2000 bytes.
Partition keys support the following data types:
INT, INTEGER, BIGINT, NUMBER, DECIMAL, REAL, DOUBLE, NUMERIC,
VARCHAR, VARCHAR2, CHAR, BINARY, RAW, VARBINARY, DATE, DATETIME,
and TIMESTAMP
● PARTITION
Specifies a partition of the table.
● partition_name
Specifies the name of a range partition to be created.
● VALUES LESS THAN
Specifies the upper boundary of a range partition.
● partition_value
Specifies the upper boundary of a range partition.
This parameter is mandatory for each partition.
The data type of an upper boundary must be the same as that of the
partition key.
In a partition list, partitions are arranged in ascending order of upper
boundary values. Therefore, a partition with a certain upper boundary value is
placed before another partition with a larger upper boundary value.
● MAXVALUE
A keyword that represents a value greater than any other value of the partition key.
This parameter is used for range partitioning. MAXVALUE specifies the upper
boundary of the last range partition.
In interval partitioning, MAXVALUE cannot be used to specify the upper
boundary of the last range partition.
● list_partitioning
Creates list partitions based on a partition key. A partition can contain a
maximum of 500 list values.
● DEFAULT
Specifies a default partition used for storing default values.
● hash_partitioning
Creates hash partitions. Hash partitions are created on a specified column.
● STORE IN
Specifies a tablespace for storing hash partitions.
● interval_partitioning
Creates interval partitions.
Examples
● Create the hash partition table employment_history.
-- Delete the employment_history table.
DROP TABLE IF EXISTS employment_history;
-- Create the hash partition table employment_history.
CREATE TABLE employment_history
(
staff_id NUMBER(6),
start_date DATE,
end_date DATE,
employment_id VARCHAR2(10),
section_id NUMBER(4)
) PARTITION BY HASH(start_date) PARTITIONS 2;
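● Create a range partition table. (A minimal sketch of the range_partitioning syntax above; the table, column, and partition names and the date literals are illustrative, not taken from this manual.)
-- Create the range partition table salary_history.
CREATE TABLE salary_history
(
staff_id INT NOT NULL,
pay_date DATE,
amount NUMBER
) PARTITION BY RANGE(pay_date)
(
PARTITION p_2018 VALUES LESS THAN ('2019-01-01'),
PARTITION p_2019 VALUES LESS THAN ('2020-01-01'),
PARTITION p_max VALUES LESS THAN (MAXVALUE)
);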
Precautions
● To run this statement, you must have the CREATE USER system permission.
● The user name cannot be the same as an existing user name or role name in
the database. Otherwise, the error message "user *name* already exists" is
displayed.
● When creating a user, you need to specify the user name and password.
● SYS and PUBLIC are preset users and cannot be created.
Syntax
CREATE USER user_name
IDENTIFIED BY password
[ DEFAULT TABLESPACE tablespace_name
| PASSWORD EXPIRE
| ACCOUNT { LOCK | UNLOCK }
| PROFILE profile_name
| ENCRYPTED
] [ ... ]
Parameter Description
● user_name
Username
– A user name cannot contain spaces or the following special characters:
semicolon (;), vertical bar (|), backquote (`), dollar sign ($), ampersand
(&), greater-than sign (>), less-than sign (<), double quotation mark ("),
single quotation mark ('), exclamation mark (!), and copyright symbol (©).
In addition, enclosing these characters in double quotation marks or
backquotes is also forbidden.
– If a user name contains special characters other than the preceding
special characters, enclose the name with double quotation marks ("") or
backquotes (``).
– Since SYSDBA and CLSMGR are database keywords, users with these names
cannot log in to the database. Creating users with these names is not
recommended.
● IDENTIFIED BY
Specifies a password for the user to be created.
● password
Specifies the password of the user to be created.
The password must comply with the following requirements:
– Contain 8 to 64 characters.
– Start with a letter, number sign (#), or an underscore (_) if the password
is not enclosed in single quotation marks ('').
– Cannot be the same as the username or the username spelled backwards
(case-insensitive in verification).
– Contain only the following four character types and at least three of
them:
▪ Digits
▪ Lowercase letters
▪ Uppercase letters
▪ Special characters:
` ~ ! @ # $ % ^ & * ( ) - _ = + \ | [ { } ] : ' " , < . > / ?
Examples
Create user jessica and specify PASSWORD EXPIRE.
CREATE USER jessica IDENTIFIED BY database_123 PASSWORD EXPIRE;
Function
CREATE VIEW creates a view.
A view is a virtual table, not a base table. Only the view definition is stored in the
database; the view data is not. The data is stored in base tables, so if data in a
base table changes, the data queried from the view changes accordingly. In this
sense, a view is like a window through which users can see the data they are
interested in and observe data changes in the database.
Precautions
To run this statement, you must have the CREATE VIEW or CREATE ANY VIEW
system permission. Common users cannot create objects of system users.
Syntax
CREATE [ OR REPLACE ] VIEW [ schema_name. ]view_name [ ( alias [ ,... ] ) ] AS subquery
Parameter Description
● [OR REPLACE]
Replaces the view if a view with the same name already exists.
● [schema_name.] view_name
Specifies the name of a view to be created.
● [( alias [ ,... ])]
Specifies aliases of columns in a view. If no alias is specified, the column
aliases are automatically derived from the subquery result.
● AS subquery
Specifies a subquery.
Examples
● Create the view privilege_view.
-- Delete the privilege table.
DROP TABLE IF EXISTS privilege;
-- Create the privilege table.
CREATE TABLE privilege(staff_id INT PRIMARY KEY, privilege_name VARCHAR(64) NOT NULL,
privilege_description VARCHAR(64), privilege_approver VARCHAR(10));
-- Create the view privilege_view.
CREATE OR REPLACE VIEW privilege_view AS SELECT staff_id, privilege_name from privilege;
3.13.30 DELETE
Description
DELETE deletes records from a table.
Precautions
● To run this statement, you must have the DELETE permission for the table or
have the DELETE ANY TABLE system permission.
● DELETE transactions are not committed automatically by default. You need to
explicitly commit the transaction before the session exits. Otherwise, the
changes will be lost.
Syntax
Delete records from a table.
DELETE FROM [ schema_name. ]table_name
[ WHERE condition ]
[ ORDER BY { column_name [ ASC | DESC ] [ NULLS FIRST | NULLS LAST ] } [ , ... ] ]
[ LIMIT [ start, ] count
| LIMIT count OFFSET start
| OFFSET start[ LIMIT count ] ]
or
DELETE FROM table_ref_list USING join_table
● table_ref_list:
[ schema_name.]table_name
● join_table:
table_reference [LEFT [OUTER] | RIGHT [OUTER] | INNER ] JOIN table_reference ON conditional_expr
table_reference:
{ [ schema_name. ]table_name [ [AS] alias ]
| [ schema_name. ]view_name [ [AS] alias]
| ( select query ) [ [AS] alias ]
| join_table
}
Parameter Description
● [ schema_name. ]table_name
Specifies the name of a table whose data is to be deleted.
● condition
Specifies the condition for deleting data.
● ORDER BY
Specifies a column based on which a result set is sorted.
● ASC | DESC
Specifies whether the ordering sequence is ascending or descending.
● NULLS FIRST | NULLS LAST
Specifies the position of NULL values in the ORDER BY column. FIRST
indicates that NULL values are placed before non-NULL values and LAST
indicates that NULL values are placed after non-NULL values. If this
parameter is not specified, NULLS LAST is used in ASC mode and NULLS
FIRST is used in DESC mode by default.
● start,count
count specifies the maximum number of rows to return, while start specifies
the number of rows to skip before the first row is returned. When both are
specified, rows specified by start will be skipped before rows specified by
count are returned.
● table_ref_list
Specifies tables whose data is to be deleted. Temporary tables are not
supported in the list.
● table_reference
Specifies a table or view to be queried, or a subquery.
● join_table
Specifies a set of tables for join query.
– LEFT [OUTER] JOIN returns all records from the left table and the
matched records from the right table. If there is no match, the result is
NULL on the right side.
– RIGHT [OUTER] JOIN returns all records from the right table and the
matched records from the left table. If there is no match, the result is
NULL on the left side.
– INNER JOIN returns records that have matching values in both tables.
– conditional_expr
Specifies the conditions for joining two tables.
– table_reference
Specifies tables whose data is to be deleted.
– table_name
Specifies the name of a table whose data is to be deleted.
– view_name
Specifies the name of a view whose data is to be deleted.
– select query
Specifies a subquery whose returned result is to be deleted.
Examples
● Batch delete the records with the same staff_id values from the training and
education tables.
-- Delete the education and training tables.
DROP TABLE IF EXISTS education;
DROP TABLE IF EXISTS training;
-- Create the education and training tables.
CREATE TABLE education(staff_id INT, first_name VARCHAR(20));
CREATE TABLE training(staff_id INT, first_name VARCHAR(20));
-- Insert data.
INSERT INTO education VALUES(1, 'ALICE');
INSERT INTO education VALUES(2, 'BROWN');
INSERT INTO training VALUES(1, 'ALICE');
INSERT INTO training VALUES(1, 'ALICE');
INSERT INTO training VALUES(1, 'ALICE');
INSERT INTO training VALUES(3, 'BOB');
-- Batch delete the records with the same staff_id values from the training and education tables.
DELETE training FROM education JOIN training ON education.staff_id = training.staff_id;
Alternatively,
DELETE FROM training USING education JOIN training ON training.staff_id = education.staff_id;
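● Delete only part of the matching rows by combining the WHERE, ORDER BY, and LIMIT clauses described above, using the training table from this example. (A minimal sketch; the row count is illustrative.)
-- Delete at most two of the rows with staff_id 1, in ascending staff_id order.
DELETE FROM training WHERE staff_id = 1 ORDER BY staff_id LIMIT 2;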
Precautions
● You can delete your own indexes without additional permissions.
● To delete indexes of other users, the DROP ANY INDEX system permission is
required. Common users cannot delete objects of system users.
● This operation is not supported during database restart or rollback.
Syntax
DROP INDEX [ IF EXISTS ] [ schema_name. ]index_name [ ON [schema_name.]table_name ]
Parameter Description
● IF EXISTS
Does not throw an error if the index does not exist.
● [schema_name.] index_name
Specifies the name of an index to be deleted.
● ON [ schema_name. ] table_name
Specifies a table whose index is to be deleted.
Examples
Delete an index.
DROP INDEX IF EXISTS idx_training;
Precautions
● To run this statement, you must have the DROP PROFILE system permission.
● If a profile is referenced by a user, the profile cannot be deleted. In this case,
you need to specify CASCADE in the statement to delete the profile.
● If the profile to be deleted does not exist, the deletion fails.
● After the profile referenced by a user is deleted by running the statement with
CASCADE specified, the default profile is automatically referenced by the user.
● The default profile cannot be deleted.
● This operation is not supported during database restart or rollback.
Syntax
DROP PROFILE profile_name [ CASCADE ]
Parameter Description
● profile_name
Specifies the name of a profile to be deleted.
● CASCADE
Deletes a profile that has been referenced by a user. If you do not specify
CASCADE for such a profile, this profile will fail to be deleted. After the
profile is deleted, GaussDB 100 assigns DEFAULT PROFILE to the user.
Examples
Delete the pro_common profile.
-- Create the pro_common profile.
CREATE PROFILE pro_common LIMIT PASSWORD_GRACE_TIME 10 PASSWORD_LOCK_TIME DEFAULT
PASSWORD_LIFE_TIME UNLIMITED;
-- Delete the pro_common profile.
DROP PROFILE pro_common CASCADE;
Precautions
● To delete a role by running this statement, you must meet one of the
following conditions: you have the DROP ANY ROLE system permission; you
are the creator of this role; or this role has been granted to you, with WITH
ADMIN OPTION specified.
● The role name must exist. Otherwise, an error will be reported.
● This operation is not supported during database restart or rollback.
Syntax
DROP ROLE role_name
Parameter Description
role_name
Specifies the name of a role to be deleted.
Examples
Delete the developers role.
-- Create the developers role.
CREATE ROLE developers;
-- Delete the developers role.
DROP ROLE developers;
Precautions
● You can delete your own sequences without additional permissions.
● To delete sequences of other users, the DROP ANY SEQUENCE system
permission is required. Common users cannot delete objects of system users.
● This operation is not supported during database restart or rollback.
Syntax
DROP SEQUENCE [ IF EXISTS ] [ schema_name. ]sequence_name
Parameter Description
● IF EXISTS
Does not throw an error if a sequence to be deleted does not exist.
● [schema_name.] sequence_name
Specifies the name of a sequence to be deleted.
Examples
Delete the seq_auto_extend sequence.
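Following the DROP SEQUENCE syntax above, the deletion can be sketched as:
DROP SEQUENCE IF EXISTS seq_auto_extend;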
Description
DROP SQL_MAP deletes an SQL mapping.
Precautions
No system permission is required to execute DROP SQL_MAP.
Syntax
DROP SQL_MAP [ IF EXISTS ] (src_select)
Parameter Description
● IF EXISTS
Does not throw an error if the SQL mapping to be deleted does not exist.
● src_select
Specifies a source SQL statement.
Examples
Enable the SQL mapping function.
alter system set enable_sql_map = true;
Enter a source SQL statement, which will actually be mapped to the target SQL
statement for execution.
select count(*) from SYS_DUMMY;
CNT
--------------------
1
1 rows fetched.
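Delete the SQL mapping. (A minimal sketch, assuming a mapping was previously created for this source statement.)
DROP SQL_MAP IF EXISTS ('select count(*) from SYS_DUMMY');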
Precautions
● You can delete synonyms under your own schemas without additional
permissions. To delete synonyms under a schema of other users, the DROP
ANY SYNONYM system permission is required. To delete a public synonym,
the DROP PUBLIC SYNONYM permission is required. Currently, you can
delete only schemas under your own and PUBLIC schemas. Common users
cannot delete objects of system users.
● When deleting a synonym, specify its name. If IF EXISTS is not specified in the
statement, an error will be reported if the synonym to be deleted does not
exist.
● Deleting a synonym has no impact on the associated object of this synonym.
If a user is deleted, all the synonyms of this user are also deleted.
● This operation is not supported during database restart or rollback.
Syntax
DROP [ PUBLIC ] SYNONYM [ IF EXISTS ] [ schema_name. ]synonym_name [ FORCE ]
Parameter Description
● PUBLIC
Deletes a public synonym. For details about public synonyms, see the
description of creating a synonym.
● [IF EXISTS]
Does not throw an error if the synonym to be deleted does not exist.
● [schema_name.]synonym_name
Specifies the name of a synonym to be deleted. Specify the name by following
the database object naming convention.
● FORCE
Forcibly deletes synonyms.
Examples
● Delete a public synonym pri_vi.
DROP PUBLIC SYNONYM IF EXISTS pri_vi FORCE;
Precautions
● You can delete your own tables without additional permissions. To delete
tables of other users, the DROP ANY TABLE system permission is required.
Common users cannot delete objects of system users.
● If the recycle bin is enabled, the table is moved to the recycle bin rather than
being removed immediately from the database after this statement is
executed. In this case, you can run the FLASHBACK statement to roll back the
table.
Tables in the SYSTEM tablespace are immediately removed from the database
rather than being moved to the recycle bin. In addition, such tables cannot be
rolled back by running the FLASHBACK statement.
By default, the recycle bin is enabled in GaussDB 100. If the recycle bin is not
enabled, run the ALTER SYSTEM SET RECYCLEBIN = TRUE statement to
enable it.
● This operation is not supported during database restart or rollback.
Syntax
DROP [TEMPORARY] TABLE [ IF EXISTS ] [ schema_name. ]table_name [CASCADE CONSTRAINTS][ PURGE ]
Parameter Description
● TEMPORARY
Deletes a temporary table. The specified table must be a temporary table.
● IF EXISTS
Does not throw an error if the table to be deleted does not exist.
● [schema_name.]table_name
Specifies the name of a table to be deleted.
● CASCADE CONSTRAINTS
If the table to be deleted is referenced by a foreign key of another table, an
error will be reported when you delete this parent table. In this case, you can
specify CASCADE CONSTRAINTS in the statement to delete the parent table
and the foreign key.
● PURGE
Removes a table from the recycle bin.
Examples
● Delete the privilege table, moving it to the recycle bin.
DROP TABLE IF EXISTS privilege;
● Delete the privilege table, removing it from the recycle bin.
DROP TABLE IF EXISTS privilege PURGE;
● Delete the privilege temporary table, moving it to the recycle bin.
DROP TEMPORARY TABLE privilege;
Precautions
● To run this statement, you must have the DROP TABLESPACE system
permission.
● To delete offline tablespaces, the database must be in the OPEN state.
● The SYSTEM, undo, and temporary tablespaces cannot be deleted.
● In the MOUNT database state, you need to manually delete a tablespace on
the primary and standby nodes because the primary node does not
synchronize the deletion to other nodes.
Syntax
DROP TABLESPACE tablespace_name [ INCLUDING CONTENTS [ { AND | KEEP } DATAFILES ] ]
Parameter Description
● tablespace_name
Tablespace name
● INCLUDING CONTENTS {AND | KEEP} DATAFILES
Specifies whether to delete data files when a tablespace is deleted.
– AND
Deletes data files when a tablespace is deleted.
– KEEP
Does not delete data files when a tablespace is deleted.
● DATAFILES
Specifies data files.
● If this parameter is not specified, an error will be reported when you delete a
tablespace containing objects such as tables, indexes, and LOBs, or when you
delete the default tablespace.
● If INCLUDING CONTENTS AND DATAFILES is specified, a tablespace is deleted
together with objects in it. However, an error message will be reported when you
delete the default tablespace or a tablespace whose objects are associated with
objects in another tablespace. For example, the foreign key, some partitions, or
index of another tablespace is in the tablespace to be deleted.
Examples
● Delete a tablespace and its data files in the OPEN database state.
-- Create the human_space3 tablespace in the OPEN database state.
CREATE TABLESPACE human_space3 DATAFILE 'human_dfile3' SIZE 32M AUTOEXTEND ON NEXT 10M;
-- Delete the human_space3 tablespace and its data files in the OPEN database state.
DROP TABLESPACE human_space3 INCLUDING CONTENTS AND DATAFILES;
● Delete a tablespace and retain its data files in the OPEN database state.
-- Create the human_space4 tablespace in the OPEN database state.
CREATE TABLESPACE human_space4 DATAFILE 'human_dfile4' SIZE 32M;
-- Delete the human_space4 tablespace and retain its data files in the OPEN database state.
DROP TABLESPACE human_space4 INCLUDING CONTENTS KEEP DATAFILES;
Description
DROP USER deletes an existing database user.
Precautions
● To run this statement, you must have the DROP USER system permission.
● If the specified user does not exist and if exists is not specified, the following
error message is displayed: "user name does not exist".
● This operation is not supported during database restart or rollback.
● A user can have a maximum of 50,000 objects. Before deleting a user, you are
advised to delete the objects of the user first.
● Perform DROP USER only when absolutely necessary. Deleted objects cannot
be restored even if the execution of DROP USER is interrupted.
Syntax
DROP USER [ if exists ] user_name [ CASCADE ]
Parameter Description
● user_name
Specifies the name of a user to be deleted.
● if exists
Does not throw an error if the user to be deleted does not exist. If the
specified user does not exist, the system returns a message indicating that the
operation is successful. If the user exists, the system deletes the user.
● CASCADE
– If CASCADE is specified, the user is deleted together with all of its
database objects.
– If CASCADE is not specified and the user still owns database objects, the
following error message is displayed when the user is deleted:
GS-00815, user objects is being used, can not drop
Examples
Delete user zwx003 and forcibly delete the database objects of this user.
-- Create user zwx003 with the password database_123 specified.
CREATE USER zwx003 IDENTIFIED BY database_123;
-- Delete user zwx003 and forcibly delete the database objects of this user.
DROP USER zwx003 CASCADE;
Precautions
● You can delete your own views without additional permissions. To delete views
of other users, the DROP ANY VIEW system permission is required. Common
users cannot delete objects of system users.
● This operation is not supported during database restart or rollback.
Syntax
DROP VIEW [ IF EXISTS ] [ schema_name. ]view_name
Parameter Description
● IF EXISTS
Does not throw an error if the view to be deleted does not exist.
● [schema_name.] view_name
Specifies the name of a view to be deleted.
Examples
Delete the privilege_view view.
-- Delete the privilege table.
DROP TABLE IF EXISTS privilege;
-- Create the privilege table.
CREATE TABLE privilege(staff_id INT PRIMARY KEY, privilege_name VARCHAR(64) NOT NULL,
privilege_description VARCHAR(64), privilege_approver VARCHAR(10));
-- Create the privilege_view view.
CREATE OR REPLACE VIEW privilege_view AS SELECT staff_id, privilege_name from privilege;
-- Delete the privilege_view view.
DROP VIEW IF EXISTS privilege_view;
Precautions
EXPLAIN displays only DML execution plans.
Syntax
EXPLAIN [ PLAN FOR ] statement
Parameter Description
● PLAN FOR
Examples
View a DML execution plan.
-- Delete the posts table.
DROP TABLE IF EXISTS posts;
-- Create the posts table.
CREATE TABLE posts(post_id CHAR(2) NOT NULL, post_name CHAR(16) NOT NULL, basic_wage INT,
basic_bonus INT);
-- Insert record 1 into the posts table.
INSERT INTO posts(post_id,post_name,basic_wage,basic_bonus) VALUES('A','GENERAL MANAGER',
50000,5000);
-- Insert record 2 into the posts table.
INSERT INTO posts(post_id,post_name,basic_wage,basic_bonus) VALUES('B','PROJECT MANAGER',
10000,5000);
-- Insert record 3 into the posts table.
INSERT INTO posts(post_id,post_name,basic_wage,basic_bonus) VALUES('C','STAFF',3000,1000);
-- Commit the transaction.
COMMIT;
-- View the execution plan.
EXPLAIN SELECT * FROM posts WHERE post_id='A';
Description
FLASHBACK TABLE restores an earlier state of a table in the event of human or
application error.
The time in the past to which the table can be flashed back depends on the
amount of undo data in the system. In addition, GaussDB 100 cannot restore a
table to an earlier state across any DDL operations that change the structure of
the table.
Precautions
● To run this statement, you must have the FLASHBACK permission. To flash
back tables of other users, the FLASH ANY TABLE permission is required.
● You can flash back tables from the undo data or recycle bin.
– The undo data records the new and updated data objects. TO SCN expr
and TO TIMESTAMP expr flash back objects from undo data.
– The recycle bin records the objects deleted by DROP. TO BEFORE DROP
flashes back objects from the recycle bin.
● By default, GaussDB 100 does not implicitly convert values to the DATE type;
convert them explicitly by using conversion functions.
● Profiles cannot be created during database restart or rollback.
Syntax
FLASHBACK TABLE [ schema_name. ]table_name TO { SCN expr | TIMESTAMP expr | BEFORE { DROP
[ RENAME TO table_name ] | TRUNCATE FORCE } }
Parameter Description
● schema_name
Specifies a schema containing the table to be flashed back. If this parameter
is not specified, the current schema is used.
● table_name
Specifies one or more tables to be flashed back.
This statement is subject to the following restrictions:
– Table flashback does not apply to the following objects: materialized views,
system catalogs, foreign tables, and individual table partitions or
subpartitions.
– DDL operations that change the table structure invalidate flashback with TO
SCN or TO TIMESTAMP. Do not flash back across such operations.
▪ For TO BEFORE DROP, you can specify the system-generated recycle bin
name of the table you want to retrieve.
Examples
-- Create the human_bonus tablespace.
CREATE TABLESPACE human_bonus DATAFILE 'bonus2018' SIZE 32M;
-- Delete the bonus_2018 table.
DROP TABLE IF EXISTS bonus_2018;
-- Create the bonus_2018 table.
CREATE TABLE bonus_2018(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30), bonus
NUMBER) TABLESPACE human_bonus;
-- Insert record 1 into the bonus_2018 table.
INSERT INTO bonus_2018(staff_id, staff_name, job, bonus) VALUES(20,'LIMING','developer',5000);
-- Insert record 2 into the bonus_2018 table.
INSERT INTO bonus_2018(staff_id, staff_name, job, bonus) VALUES(21,'LIYU','tester',7000);
-- Insert record 3 into the bonus_2018 table.
INSERT INTO bonus_2018(staff_id, staff_name, job, bonus) VALUES(22,'WANGQIMING','developer',8000);
-- Commit the transaction.
COMMIT;
-- Delete data from the bonus_2018 table.
TRUNCATE TABLE bonus_2018;
-- Flash back to one minute ago (the time point must be later than the time when the table was created.
Otherwise, an error is reported).
FLASHBACK TABLE bonus_2018 TO TIMESTAMP SYSTIMESTAMP-1/1440;
-- Query data in the bonus_2018 table.
SELECT * FROM bonus_2018;
-- Delete the bonus_2018 table.
DROP TABLE IF EXISTS bonus_2018;
-- Query data in the bonus_2018 table.
SELECT * FROM bonus_2018;
-- Flash back the bonus_2018 table to the time point before the DROP operation.
FLASHBACK TABLE bonus_2018 TO BEFORE DROP;
-- Query data in the bonus_2018 table.
SELECT * FROM bonus_2018;
3.13.43 GRANT
Description
GRANT grants system permissions or roles to users or other roles.
A user cannot grant system permissions or roles to other users or roles unless
those permissions or roles have been granted to the user.
Precautions
● To grant a system permission, you must meet one of the following
requirements:
– This system permission has been granted to you, with WITH ADMIN
OPTION specified.
– You have the GRANT ANY PRIVILEGE system permission.
Syntax
GRANT { ALL [ PRIVILEGES ]
|{ system_privilege_name | role_name } [ , ... ] } TO grantee [ WITH ADMIN OPTION ]
● grantee:
{ user_name | role_name } [ , ... ] }
GRANT { object_privilege_name | ALL [PRIVILEGES] } [, ...] ON [schema_name.]object_name TO grantee
[ WITH GRANT OPTION ]
● object_privilege_name:
{ SELECT | UPDATE | DELETE | INSERT | ALTER | INDEX | EXECUTE | READ | REFERENCES } [, ... ]
● grantee:
{ user_name | role_name } [ , ... ]
Parameter Description
● system_privilege_name
Specifies a system permission to be granted. The following table lists the
supported roles and system permissions. Y indicates that a role or user has
the permission, and - indicates that a role or user does not have the
permission.
NOTICE
The following permissions have a significant impact on the system. Grant them
only when absolutely necessary.
CREATE ANY TABLE
CREATE ANY INDEX
CREATE ANY SEQUENCE
CREATE ANY VIEW
CREATE ANY SYNONYM
CREATE ANY PROCEDURE
DROP ANY TABLE
DROP ANY INDEX
DROP ANY SEQUENCE
DROP ANY VIEW
DROP ANY SYNONYM
LOCK ANY TABLE
DROP ANY PROCEDURE
ALTER ANY TABLE
ALTER ANY INDEX
ALTER ANY SEQUENCE
UPDATE ANY TABLE
INSERT ANY TABLE
DELETE ANY TABLE
ANALYZE ANY
ALTER ANY TRIGGER
CREATE ANY TRIGGER
DROP ANY TRIGGER
EXECUTE ANY PROCEDURE
● role_name
Specifies a role name. For details, see the description of the ROLE
statement. If a role is granted to a user or another role, the grantee gains all
the system permissions of the role. Circular role grants are not allowed; for
example, you cannot grant role1 to role2, role2 to role3, and then role3 back
to role1.
There are four default roles in the system: DBA, RESOURCE, CONNECT, and
STATISTICS. For details about permissions of the roles, see Data Dictionary
and Views > User Views > ROLE_SYS_PRIVS in GaussDB 100 V300R001C00
Database Reference. The DBA role has all system permissions. Grant the DBA
role to a common user only when absolutely necessary.
● ALL [ PRIVILEGES ]
Specifies all system permissions. PRIVILEGES can be omitted.
● [ schema_name. ]
Specifies a user name. If this parameter is not specified, the current login user
is used by default.
● WITH ADMIN OPTION
If WITH ADMIN OPTION is specified in the statement, then:
– grantee has the permission to grant the system permission or role to
other users or roles (that is, transfer the grant permission). If a
permission is revoked, the permission to grant this permission to other
users is not revoked.
– grantee has the permission to revoke system permissions or roles from
itself.
– Once being specified, WITH ADMIN OPTION can be revoked only by
running the REVOKE statement. If WITH ADMIN OPTION is revoked, the
permissions in the same statement as WITH ADMIN OPTION are also
revoked.
● object_privilege_name: Specifies an object permission name. Each object has
an independent permission set. Currently, the following object types are
supported: tables, views, sequences, stored procedures, functions, triggers, and
advanced system packages.
The following table lists the objects and object permissions. Y indicates that
an object permission is supported; - indicates that an object permission is not
supported; Reserved indicates a reserved object permission. An owner has all
permissions for its objects. System administrators (user SYS and role DBA)
have all the object permissions in the table.
Table                    Y Y Y Y Y Y Y Y -
Sequence                 Y - - - - - - Y -
Stored procedure         - - - - - - - - Y
Trigger                  - - - - - - - - -
Function                 - - - - - - - - Y
Advanced system package  - - - - - - - - Y
Examples
● Create user joe and grant the CREATE SESSION permission to joe.
-- Delete user joe.
DROP USER joe CASCADE;
-- Create user joe with the password database_123 specified.
CREATE USER joe IDENTIFIED BY database_123;
-- Grant the CREATE SESSION permission to user joe.
GRANT CREATE SESSION TO joe;
● Create user jim, user glow, and role testers, and grant the testers role to
users jim and glow.
-- Delete the testers role.
DROP ROLE testers;
-- Create the testers role.
CREATE ROLE testers;
-- Grant the permissions CREATE SESSION, CREATE USER, CREATE ROLE, and CREATE TABLE to the
testers role.
GRANT CREATE SESSION, CREATE USER, CREATE ROLE, CREATE TABLE TO testers;
-- Delete user jim.
DROP USER jim CASCADE;
-- Create user jim with the password database_123 specified.
CREATE USER jim IDENTIFIED BY database_123;
-- Delete user glow.
DROP USER glow CASCADE;
-- Create user glow with the password database_123 specified.
CREATE USER glow IDENTIFIED BY database_123;
-- Grant the testers role to users jim and glow.
GRANT testers TO jim, glow;
● Create the employees table and user jim, and grant the permission for the
table to jim.
-- Delete the employees table.
DROP TABLE IF EXISTS employees;
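The example breaks off after the DROP statement. A minimal completion might look as follows (the column definitions and the SELECT permission are illustrative choices, not taken from the original example):

```sql
-- Create the employees table (columns are illustrative).
CREATE TABLE employees(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30));
-- Delete user jim.
DROP USER jim CASCADE;
-- Create user jim with the password database_123 specified.
CREATE USER jim IDENTIFIED BY database_123;
-- Grant the SELECT permission for the employees table to user jim.
GRANT SELECT ON employees TO jim;
```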
3.13.44 INSERT
Description
INSERT inserts data into a table.
Precautions
● To run this statement, you must have the INSERT permission for the table or
have the INSERT ANY TABLE system permission. Common users are not
allowed to insert objects of user SYS.
● Autocommit is disabled for INSERT transactions by default. Explicitly commit
the transaction before the session exits. Otherwise, the inserted records will
be lost.
● If INSERT...SELECT is used, the number of columns in select_list must be the
same as that of columns to be inserted.
● The data size of a single record must be less than 64000 bytes.
Syntax
● Create a record and insert it into a table.
INSERT [hint_info] [IGNORE] [ INTO ] [ schema_name. ]table_name [ ( column_name [ , ... ] ) ]
VALUES ( expression [ , ... ] )
● Insert records generated by the return result of the SELECT clause into a
table.
INSERT [IGNORE] [ INTO ] [ schema_name. ]table_name [ table_alias ] [ ( column_name
[ , ... ] ) ] select_clause
– select_clause
SELECT [ DISTINCT ] select_list FROM table_list [ where_clause ] [ group_by_clause ]
[ order_by_clause ] [ limit_clause ]
● Insert a record and update the value that would cause a duplicate value in the
primary key column.
INSERT [ INTO ] [ schema_name. ]table_name [ ( column_name [ , ... ] ) ] VALUES ( expression
[ , ... ] ) ON DUPLICATE KEY UPDATE {column_name = expression} [ , ... ]
Parameter Description
● IGNORE
Ignores records that would cause duplicate key errors. This parameter
cannot be used together with ON DUPLICATE KEY UPDATE.
● table_name
Specifies the name of a table into which data is inserted.
● column_name
Specifies the names of columns into which data is inserted.
If data is to be inserted into all columns of a table, column names can be
omitted in the INSERT statement.
Value range: an existing column name
● expression
Specifies a value to be inserted or an expression evaluating to values to be
inserted.
● select_clause
Specifies the SELECT clause that generates the records to be inserted. For
details, see SELECT parameters in this document.
● select_list
Specifies the column to be queried.
● table_list
Specifies the tables to be queried, which can be tables, views, or a subquery.
● DISTINCT
Returns only one copy of each set of duplicate rows selected.
Value range: existing column name or column expression
● where_clause
Specifies the conditions that the query result set must satisfy.
● group_by_clause
Specifies the grouping rules that the query result set must satisfy.
● order_by_clause
Specifies the sorting rules that the query result set must satisfy.
● limit_clause
Specifies the boundary for the query result set.
● ON DUPLICATE KEY UPDATE
Updates the value that would cause a duplicate value in the primary key
column. The system traverses each column based on the index creation
sequence to search for duplicate data. For example, if indexes are created on
the f3, f2, and f1 columns of the t1 table in sequence, GaussDB 100 traverses
f3, f2, and f1 in sequence to search for duplicate data.
Examples
Insert data into the training table.
-- Delete the training table.
DROP TABLE IF EXISTS training;
-- Create the training table.
CREATE TABLE training(staff_id INT NOT NULL PRIMARY KEY,course_name CHAR(50),course_start_date
DATETIME, course_end_date DATETIME,exam_date DATETIME,score INT);
-- Insert record 1 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'SQL majorization','2017-06-15 12:00:00','2017-06-20 12:00:00','2017-06-25 12:00:00',90);
-- Insert record 2 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(11,'information safety','2017-06-20 12:00:00','2017-06-25 12:00:00','2017-06-26 12:00:00',95);
-- Insert record 3 into the training table.
INSERT INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(12,'master all kinds of thinking methods','2017-07-15 12:00:00','2017-07-20 12:00:00','2017-07-25
12:00:00',97);
-- Commit the transaction.
COMMIT;
-- Insert a record whose primary key duplicates record 3, with ON DUPLICATE KEY UPDATE specified. In this
case, values that would cause a duplicate value in the primary key column are updated instead.
INSERT INTO TRAINING
VALUES (12,'INFORMATION234','2018-06-20 12:00:00','2018-06-25 12:00:00','2018-06-26 12:00:00',94)
ON DUPLICATE KEY UPDATE
STAFF_ID=STAFF_ID,COURSE_NAME='INFORMATION234',COURSE_START_DATE='2018-06-20 12:00:00',
COURSE_END_DATE='2018-06-25 12:00:00',
EXAM_DATE='2018-06-23 12:00:00',SCORE=88;
-- Commit the transaction.
COMMIT;
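The INSERT...SELECT form described in the syntax above can be sketched as follows (training_backup is an illustrative table, not part of the original example):

```sql
-- Delete the training_backup table.
DROP TABLE IF EXISTS training_backup;
-- Create a backup table (columns are illustrative).
CREATE TABLE training_backup(staff_id INT NOT NULL PRIMARY KEY, course_name CHAR(50), score INT);
-- Insert the result set of a SELECT clause into the backup table; the number of
-- columns in select_list matches the number of columns being inserted.
INSERT INTO training_backup(staff_id, course_name, score)
SELECT staff_id, course_name, score FROM training WHERE score >= 90;
-- Commit the transaction.
COMMIT;
```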
3.13.45 LOCK TABLE
Precautions
● The user who executes the statement must have the LOCK ANY TABLE
permission.
● The LOCK TABLE statement must be executed in a transaction. Locks are
automatically released when the transaction is committed or rolled back. If
[NOWAIT | WAIT integer] is not specified, the transaction waits until locks
are released.
Syntax Structure
LOCK TABLE { [ schema_name. ]table_name } [ , ... ]
IN lockmode MODE [ NOWAIT | WAIT integer ]
lockmode:
{ SHARE | EXCLUSIVE }
Parameter Description
● [ schema_name. ]
Username. If this parameter is not specified, the current login user is used by
default.
● table_name
Specifies the name (optionally schema-qualified) of a table to be locked.
Tables are locked one-by-one in the order specified in the LOCK TABLE
statement.
Value range: an existing table name
● NOWAIT
Does not wait for a lock to be released. If the lock cannot be acquired
immediately, the LOCK TABLE statement exits and throws an error.
● WAIT
Waits for a lock to be released.
● integer
Specifies the wait timeout duration. 0 indicates that the timeout duration is
unlimited. That is, a transaction waits until the required lock is released.
● SHARE
Permits concurrent queries and DML operations on the locked table but
prohibits DDL operations on it.
SHARE in a statement explicitly requests a share lock. DML statements
implicitly request share locks.
● EXCLUSIVE
Permits concurrent queries on the locked table but prohibits any other activity
on it.
In this mode, only reads from the table can proceed in parallel with a
transaction holding this lock. EXCLUSIVE in a statement explicitly requests
an exclusive lock. DDL statements implicitly request exclusive locks, and
certain operations implicitly request exclusive locks on system catalogs.
Examples
Lock the bonus_2017 table.
-- Delete the bonus_2017 table.
DROP TABLE IF EXISTS bonus_2017;
-- Create the bonus_2017 table.
CREATE TABLE bonus_2017(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30), bonus
NUMBER);
-- Lock the bonus_2017 table.
LOCK TABLE bonus_2017 IN SHARE MODE;
-- Insert data into the bonus_2017 table.
INSERT INTO bonus_2017(staff_id, staff_name, job, bonus) VALUES(23,'limingwang','developer',5000);
-- Insert data into the bonus_2017 table.
INSERT INTO bonus_2017(staff_id, staff_name, job, bonus) VALUES(24,'liyuyu','tester',7000);
-- Insert data into the bonus_2017 table.
INSERT INTO bonus_2017(staff_id, staff_name, job, bonus) VALUES(25,'wangqizhi','developer',8000);
-- Commit the transaction.
COMMIT;
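The NOWAIT and WAIT options described above can be sketched as follows. If another session holds a conflicting lock, the first statement throws an error immediately, and the second waits up to the specified duration (the unit of integer is an assumption here; the original text does not state it):

```sql
-- Request an exclusive lock without waiting; an error is thrown if the lock is unavailable.
LOCK TABLE bonus_2017 IN EXCLUSIVE MODE NOWAIT;
-- Request an exclusive lock, waiting up to 5 (assumed seconds) for it to become available.
LOCK TABLE bonus_2017 IN EXCLUSIVE MODE WAIT 5;
-- Release the locks by ending the transaction.
COMMIT;
```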
3.13.46 MERGE
Description
MERGE selects rows from one or more sources for update or insertion into a table
or view.
You can specify conditions to determine whether to update or insert into the
target table or view.
Precautions
● If an ambiguous column exists in the ON, INSERT WHERE, or UPDATE
WHERE clause, specify which table this column belongs to.
● ROWNUM cannot be used in the ON, INSERT WHERE, or UPDATE WHERE
clause.
Syntax Structure
MERGE INTO [ schema_name. ] table_name
USING { [ schema_name. ] table_name
| [ schema_name. ] view_name
| select_query } [ alias ]
ON ( condition )
{ WHEN MATCHED THEN UPDATE SET column_name = expression [ , ... ] [ WHERE ( condition ) ]
|WHEN NOT MATCHED THEN INSERT ( column_name [ , ... ] ) VALUES ( expression [ , ... ] ) [ WHERE
( condition )]
}[ ... ]
Parameter Description
● INTO table_name
Specifies the target table or view to be updated or inserted into.
● USING
Specifies the source of the data to be updated or inserted. The source can be
a table, view, or the result of a subquery.
view_name
Specifies the name of a view.
select_query
Specifies a subquery.
● alias
Specifies a temporary table alias for the target table so that it can be
referenced by other queries. An alias is used for brevity or to eliminate
ambiguity for self-joins. When an alias is provided, it completely hides the
actual name of the table or function.
● ON (condition)
Specifies a condition upon which the MERGE operation either updates or
inserts. For each row in the target table for which the search condition is true,
the database updates the row with corresponding data from the source table.
If the condition is not true for any rows, the database inserts into the target
table based on the corresponding source table row.
condition
Specifies the search condition used to match each source row against the
target table.
Examples
Select rows from the new_bonuses_depa1 table to update data in the
bonuses_depa1 table.
-- Delete the bonuses_depa1 table.
DROP TABLE IF EXISTS bonuses_depa1;
-- Delete the new_bonuses_depa1 table.
DROP TABLE IF EXISTS new_bonuses_depa1;
-- Create the bonuses_depa1 table.
CREATE TABLE bonuses_depa1(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30), bonus
NUMBER);
-- Create the new_bonuses_depa1 table.
CREATE TABLE new_bonuses_depa1(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30),
bonus NUMBER);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(23,'wangxia','developer',5000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(24,'limingying','tester',7000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(25,'liulili','quality control',8000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(29,'liuxue','tester',8000);
-- Insert data into the bonuses_depa1 table.
INSERT INTO bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(21,'caoming','document developer',
11000);
-- Commit the transaction.
COMMIT;
-- Query data in the bonuses_depa1 table.
SELECT * FROM bonuses_depa1;
5 rows fetched.
-- Insert record 1 into the new_bonuses_depa1 table.
INSERT INTO new_bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(23,'wangxia','developer',7000);
-- Insert record 2 into the new_bonuses_depa1 table.
INSERT INTO new_bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(27,'wangxuefen','document
developer',7000);
-- Insert record 3 into the new_bonuses_depa1 table.
INSERT INTO new_bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(28,'denghui','quality control',
8000);
-- Insert record 4 into the new_bonuses_depa1 table.
INSERT INTO new_bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(25,'liulili','quality control',
10000);
-- Insert record 5 into the new_bonuses_depa1 table.
INSERT INTO new_bonuses_depa1(staff_id, staff_name, job, bonus) VALUES(21,'caoming','document
developer',12000);
-- Commit the transaction.
COMMIT;
-- Query data in the new_bonuses_depa1 table.
SELECT * FROM new_bonuses_depa1;
5 rows fetched.
-- Select rows from the new_bonuses_depa1 table to update data in the bonuses_depa1 table.
MERGE INTO bonuses_depa1 BD1 USING new_bonuses_depa1 NBD1 ON (BD1.staff_id = NBD1.staff_id)
WHEN MATCHED THEN UPDATE SET BD1.bonus = NBD1.bonus
WHEN NOT MATCHED THEN INSERT (staff_id, staff_name, job, bonus) VALUES (NBD1.staff_id,
NBD1.staff_name, NBD1.job, NBD1.bonus);
-- Query data in the bonuses_depa1 table.
SELECT * FROM bonuses_depa1;
7 rows fetched.
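The optional WHERE clauses on the UPDATE and INSERT branches shown in the syntax can be sketched as follows (the thresholds are illustrative, not part of the original example):

```sql
-- Merge, but only update rows whose new bonus is higher, and only insert rows with a bonus above 6000.
MERGE INTO bonuses_depa1 BD1 USING new_bonuses_depa1 NBD1 ON (BD1.staff_id = NBD1.staff_id)
WHEN MATCHED THEN UPDATE SET BD1.bonus = NBD1.bonus WHERE (NBD1.bonus > BD1.bonus)
WHEN NOT MATCHED THEN INSERT (staff_id, staff_name, job, bonus)
VALUES (NBD1.staff_id, NBD1.staff_name, NBD1.job, NBD1.bonus) WHERE (NBD1.bonus > 6000);
```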
3.13.47 PURGE
Description
PURGE removes a table, index, or tablespace from the recycle bin.
Precautions
● You can purge tables (PURGE TABLE), indexes (PURGE INDEX), tablespaces
(PURGE TABLESPACE), and recycle bins (PURGE RECYCLEBIN).
● You can purge your own objects (table/index) without additional permissions.
● To purge objects of other users, the following permissions are required:
– To purge tables of other users, you must have the DROP ANY TABLE
permission.
– To purge indexes of other users, you must have the DROP ANY INDEX
permission.
– To purge the recycle bin, you must have the PURGE DBA_RECYCLEBIN
permission.
– To purge tablespaces, you must have the DROP TABLESPACE permission.
● Profiles cannot be created during database restart or rollback.
Syntax
PURGE { TABLE [schema_name.]table_name
| INDEX index_name
| TABLESPACE tablespace_name
| RECYCLEBIN
}
Parameter Description
● [ schema_name. ]
Username. If this parameter is not specified, the current login user is used by
default.
● TABLE [schema_name.]table_name
Specifies the name of a table to be removed from the recycle bin.
● INDEX index_name
Specifies the name of an index to be removed from the recycle bin.
● TABLESPACE tablespace_name
Specifies the name of a tablespace to be removed from the recycle bin.
Value range: an existing tablespace name
● RECYCLEBIN
Removes all objects from the recycle bin.
Examples
Remove the subsidies_dep1_2018 table from the recycle bin.
-- Delete the human_subsidies_2018 tablespace in the MOUNT database state.
DROP TABLESPACE human_subsidies_2018;
-- Create the human_subsidies_2018 tablespace in the OPEN database state.
CREATE TABLESPACE human_subsidies_2018 DATAFILE 'subsidies2018' SIZE 32M;
-- Delete the subsidies_dep1_2018 table.
DROP TABLE IF EXISTS subsidies_dep1_2018;
-- Create the subsidies_dep1_2018 table.
CREATE TABLE subsidies_dep1_2018(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30),
subsidies NUMBER) TABLESPACE human_subsidies_2018;
-- Delete the subsidies_dep1_2018 table.
DROP TABLE IF EXISTS subsidies_dep1_2018;
-- Query the recycle bin.
SELECT * FROM SYS.SYS_RECYCLEBIN;
1 rows fetched.
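The example stops at inspecting the recycle bin; the purge itself would be along these lines:

```sql
-- Remove the subsidies_dep1_2018 table from the recycle bin.
PURGE TABLE subsidies_dep1_2018;
-- Alternatively, remove all objects from the recycle bin.
PURGE RECYCLEBIN;
-- Query the recycle bin to confirm that the object is gone.
SELECT * FROM SYS.SYS_RECYCLEBIN;
```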
3.13.48 RELEASE SAVEPOINT
Precautions
● An error is reported if you remove a nonexistent savepoint.
● When a savepoint is removed, savepoints later than this savepoint are also
removed.
Syntax
RELEASE SAVEPOINT savepoint_name;
Parameter Description
● savepoint_name
Specifies the name of a savepoint to be removed.
Examples
-- Define a savepoint sp1.
SAVEPOINT sp1;
-- Delete the TEST table.
DROP TABLE IF EXISTS TEST;
-- Create the TEST table.
CREATE TABLE TEST(ID INT NOT NULL);
INSERT INTO TEST VALUES(1);
-- Define a savepoint sp2.
SAVEPOINT sp2;
INSERT INTO TEST VALUES(2);
-- Remove the savepoint sp1.
RELEASE SAVEPOINT sp1;
After RELEASE SAVEPOINT sp1 is executed, sp1 and sp2 are removed.
3.13.49 REPLACE
Description
REPLACE inserts data into a table or replaces existing data in a table. If the data
to be inserted conflicts with existing data on a primary key or unique key, the
REPLACE statement deletes the existing data and then inserts the new data.
Precautions
● To execute this statement, you must have the DELETE and INSERT
permissions for the table.
● If no primary key conflict occurs, the data is inserted directly and the number
of affected rows is 1. If a conflict occurs, the existing row is deleted and the
new row is inserted, and the number of affected rows is 2.
● Autocommit is disabled for REPLACE transactions by default. Explicitly
commit the transaction before the session exits. Otherwise, the records will
be lost.
● If REPLACE...SELECT is used, the number of columns in select_list must be the
same as that of columns to be inserted.
● In the REPLACE...SET statement, if col_name does not have a default value,
SET col_name = col_name + 1 equals col_name = NULL. If col_name has a
default value, SET col_name = col_name + 1 equals SET col_name = Default
value of col_name + 1.
Syntax
● Directly replace a record.
REPLACE [hint_info] [ INTO ] [ schema_name. ]table_name [ ( column_name [ , ... ] ) ] VALUES
( expression [ , ... ] )
● Replace a record by using SELECT.
REPLACE [ INTO ] [ schema_name. ]table_name [ table_alias ] [ ( column_name [ , ... ] ) ] select_clause
– select_clause
SELECT [ DISTINCT ] select_list FROM table_list [ where_clause ] [ group_by_clause ]
[ order_by_clause ] [ limit_clause ]
● Replace a record by using an expression.
REPLACE [ INTO ] [ schema_name. ]table_name SET {column_name = expression} [ , ... ]
Parameter Description
● table_name
Specifies the name of a table into which data is inserted.
● column_name
Specifies the names of columns into which data is inserted.
If data is to be inserted into all columns of a table, column names can be
omitted in the REPLACE statement.
Value range: an existing column name
● expression
Specifies a value to be inserted or an expression evaluating to values to be
inserted.
● select_clause
Specifies the SELECT clause that generates the records to be inserted. For
details, see SELECT parameters in this document.
● select_list
Specifies the columns to be queried.
● table_list
Specifies the tables to be queried, which can be tables, views, or a subquery.
● DISTINCT
Returns only one copy of each set of duplicate rows selected.
Value range: existing column name or column expression
● where_clause
Specifies the conditions that the query result set must satisfy.
● group_by_clause
Specifies the grouping rules that the query result set must satisfy.
● order_by_clause
Specifies the sorting rules that the query result set must satisfy.
● limit_clause
Specifies the boundary for the query result set.
Examples
-- Delete the training table.
DROP TABLE IF EXISTS training;
-- Create the training table.
CREATE TABLE training(staff_id INT PRIMARY KEY,course_name CHAR(50),course_start_date DATETIME,
course_end_date DATETIME,exam_date DATETIME,score INT);
-- Insert record 1 into the training table.
REPLACE INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'SQL majorization','2017-06-15 12:00:00','2017-06-20 12:00:00','2017-06-25 12:00:00',90);
-- Replace record 1 in the training table.
REPLACE INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
VALUES(10,'information safety','2017-06-20 12:00:00','2017-06-25 12:00:00','2017-06-26 12:00:00',95);
-- Use REPLACE...SELECT to replace record 1 in the training table.
REPLACE INTO training(staff_id,course_name,course_start_date,course_end_date,exam_date,score)
select 10,'master all kinds of thinking methods','2017-07-15 12:00:00','2017-07-20 12:00:00','2017-07-25
12:00:00',97 from SYS_DUMMY;
-- Use SET to insert record 2 into the training table.
REPLACE /*+ nologging*/ INTO training SET staff_id = 11,course_name = 'information technology',
course_start_date = '2017-07-20 12:00:00', course_end_date = '2017-07-25 12:00:00',exam_date =
'2017-07-26 12:00:00',score = 95;
-- Commit the transaction.
COMMIT;
3.13.50 REVOKE
Description
REVOKE revokes system permissions or roles from a user.
Precautions
● To revoke a system permission, you must meet one of the following
requirements:
– You have this system permission with WITH ADMIN OPTION specified.
– You have the GRANT ANY PRIVILEGE system permission.
● To revoke a role, you must meet one of the following requirements:
– You have been granted this role, with WITH ADMIN OPTION specified.
– You have the GRANT ANY ROLE system permission.
– You are the creator of this role.
● If you revoke a permission from a user but this user does not have this
permission, an error message is displayed.
● The permissions of the DBA role cannot be revoked. The initial rights of the
DBA role are determined when the database is created. Permissions can be
granted to the DBA role but cannot be revoked.
Syntax
● Revoke system permissions.
REVOKE { ALL [ PRIVILEGES ]
|{ system_privilege_name | role_name } [ , ... ] } FROM revokee
revokee:
{ user_name | role_name } [ , ... ]
● Revoke object permissions.
REVOKE { object_privilege_name | ALL [ PRIVILEGES ] } [ , ... ] ON [ schema_name. ]object_name FROM revokee
object_privilege_name:
{ SELECT | UPDATE | DELETE | INSERT | ALTER | INDEX | EXECUTE | READ | REFERENCES } [, ... ]
Parameter Description
● system_privilege_name
Specifies the name of a system permission to be revoked.
System permissions supported by the database are listed in Table 3-53.
● role_name
Specifies the name of a role to be revoked. After a role is revoked from users
or other roles, permissions of this role are revoked from these users or roles.
● ALL [ PRIVILEGES ]
All system permissions. PRIVILEGES can be omitted.
● object_privilege_name
Object permission name.
● ALL [ PRIVILEGES ]
Permission of all objects. PRIVILEGES can be omitted.
● [ schema_name. ]
Username. If this parameter is not specified, the current login user is used by
default.
● revokee
A user or role whose permissions are revoked. A user can specify a maximum
of 63 users or roles at a time.
● user_name
Name of the user whose permissions are revoked.
Examples
● Revoke a system permission from user joe.
-- Delete user joe.
DROP USER joe CASCADE;
-- Create user joe.
CREATE USER joe IDENTIFIED BY database_123;
-- Grant system permissions CREATE SESSION, CREATE TABLE, CREATE ANY INDEX, and CREATE
USER to user joe.
GRANT CREATE SESSION, CREATE TABLE, CREATE ANY INDEX, CREATE USER TO joe;
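The example grants permissions but breaks off before the revocation itself. The missing step would look as follows (which permission to revoke is an illustrative choice):

```sql
-- Revoke the CREATE ANY INDEX system permission from user joe.
REVOKE CREATE ANY INDEX FROM joe;
```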
3.13.51 ROLLBACK
Description
ROLLBACK rolls back (undoes) work done in the current transaction and
terminates the transaction.
Precautions
● You are advised to explicitly end transactions in application programs using
either a COMMIT or ROLLBACK statement. If you do not explicitly commit
the transaction and the program terminates abnormally, the database rolls
back the last uncommitted transaction.
● The CREATE TABLESPACE and ALTER DATABASE DDL statements cannot be
rolled back.
Syntax
ROLLBACK [ TO SAVEPOINT savepoint_name ]
Parameter Description
● TO SAVEPOINT
Rolls back the current transaction to a specified savepoint.
● savepoint_name
Specifies the name of the savepoint to roll back to.
Examples
Create the posts table and insert data into the table. Roll back all the operations
and terminate the transaction.
-- Delete the posts table.
DROP TABLE IF EXISTS posts;
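The example breaks off after the DROP statement. A minimal completion, reusing the posts table definition from the EXPLAIN example, might be:

```sql
-- Create the posts table.
CREATE TABLE posts(post_id CHAR(2) NOT NULL, post_name CHAR(16) NOT NULL, basic_wage INT, basic_bonus INT);
-- Insert a record into the posts table.
INSERT INTO posts(post_id, post_name, basic_wage, basic_bonus) VALUES('A','GENERAL MANAGER',50000,5000);
-- Roll back all the operations and terminate the transaction.
ROLLBACK;
```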
3.13.52 SAVEPOINT
Function
SAVEPOINT names and marks the current point in the processing of a transaction.
Then, you can roll back a transaction being executed to a specified savepoint.
Operations performed before the savepoint remain in effect; those performed after it are undone.
You can create multiple savepoints in a transaction.
Precautions
After the rollback to a savepoint, the transaction status is the same as that when
the savepoint is created. All the work after the savepoint is undone.
Syntax
SAVEPOINT savepoint_name
Parameter Description
savepoint_name
Specifies the name of a savepoint to be created.
Examples
Roll back a transaction to a savepoint.
-- Delete the bonus_2017 table.
DROP TABLE IF EXISTS bonus_2017;
-- Create the bonus_2017 table.
CREATE TABLE bonus_2017(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30), bonus
NUMBER);
-- Insert record 1 into the bonus_2017 table.
INSERT INTO bonus_2017(staff_id, staff_name, job, bonus) VALUES(23,'limingwang','developer',5000);
-- Commit the transaction.
COMMIT;
-- Create a savepoint s1.
SAVEPOINT s1;
-- Insert record 2 into the bonus_2017 table.
INSERT INTO bonus_2017(staff_id, staff_name, job, bonus) VALUES(24,'liyuyu','tester',7000);
-- Create a savepoint s2.
SAVEPOINT s2;
-- Query data in the bonus_2017 table.
SELECT * FROM bonus_2017;
2 rows fetched.
-- Roll back to the savepoint s1.
ROLLBACK TO SAVEPOINT s1;
-- Query data in the bonus_2017 table.
SELECT * FROM bonus_2017;
1 rows fetched.
3.13.53 SELECT
Description
SELECT retrieves data from tables or views.
Precautions
● To access a table of another user, the user who runs the statement must have
the READ ANY TABLE or SELECT ANY TABLE system permission, or the READ
or SELECT object permission for the table.
● In a SELECT statement, an outer query does not support GROUP BY if a
subquery references a column of the outer query, for example, SELECT a,b,
(SELECT t2.c FROM t2 WHERE t2.a=t1.a) FROM t1 GROUP BY a,b;.
● The number of levels in a hierarchical query cannot exceed 256. Otherwise, an
error is returned.
● To use ORDER BY or LIMIT in a SELECT clause in a statement containing set
operators, such as UNION and MINUS, wrap the SELECT clause together with
its ORDER BY or LIMIT clause in a pair of parentheses.
Syntax
SELECT [hint_info] [SQL_CALC_FOUND_ROWS] [ DISTINCT ] { expression
[ [ AS ] name ] } [ , ... ]
[ FROM { table_reference [ AS OF {SCN(scn_number) | TIMESTAMP(date)} ] [ [AS] alias ] } [ , ... ] ]
[ WHERE { condition | [ NOT ] EXISTS ( correlated subquery ) } ]
[ [START WITH condition ] CONNECT BY [ NOCYCLE ] [ PRIOR ] condition ]
[ GROUP BY { column_name | expression } [ , ... ] ]
[ HAVING condition [ , ... ] ]
[ { UNION [ ALL ] | MINUS } select ]
[ ORDER [SIBLINGS] BY { column_name | number | expression } [ ASC | DESC ][ NULLS FIRST | NULLS
LAST ] [ , ... ] ]
[ LIMIT [ start, ] count | LIMIT count OFFSET start | OFFSET start[ LIMIT count ] ]
[ FOR UPDATE ]
● hint_info:
{/*+ {access_method_hint | join_order_hint | join_method_hint | parallel_hint }[...] */}
– access_method_hint:
{ FULL(table_name [...])
| INDEX(table_name index_name[...])
| NO_INDEX(table_name index_name[...])
| INDEX_ASC(table_name index_name[...])
| INDEX_DESC(table_name index_name[...])
| INDEX_FFS(table_name index_name[...])
| NO_INDEX_FFS(table_name index_name[...])
}
– join_order_hint:
{ ORDERED
| LEADING(table_name[...])
}
– join_method_hint:
{ USE_NL(table_name[...])
| USE_MERGE(table_name[...])
| USE_HASH(table_name[...])
}
– parallel_hint:
{ parallel(degree)
}
● table_reference:
{ [ schema_name. ]table_name [partition(partition_name)][ [AS] alias ]
| [ schema_name. ]view_name [ [AS] alias]
| ( select query ) [ [AS] alias ]
| join_table
}
– join_table:
A join is an INNER join by default. When INNER is specified, the ON condition can be omitted.
table_reference [LEFT [OUTER] | RIGHT [OUTER] | FULL [OUTER] | INNER] JOIN table_reference
ON conditional_expr
▪ (+) can only be used in the WHERE clause, and the condition that
contains (+) does not belong to the OR clause.
▪ In a comparison condition, (+) allows for only six operators: =, <>, >,
<, >=, <=.
– predicate:
{ expression { = | <> | != | > | >= | < | <= } { ALL | ANY } expression | ( select )
| string_expression [ NOT ] LIKE string_expression
| expression [ NOT ] BETWEEN expression AND expression
| expression IS [ NOT ] NULL
| ( select | expression [,...n] ) [ NOT ] IN ( select | expression [ , ... n ] )
| [ NOT ] EXISTS ( select )
}
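The join operator and predicate grammar above can be sketched with the t1 and t2 tables mentioned earlier (column names are assumptions for illustration):
```sql
-- (+) marks the optional side: all rows of t1 survive even without a match in t2.
SELECT t1.a, t2.c FROM t1, t2 WHERE t1.a = t2.a(+);

-- A correlated EXISTS predicate in the WHERE clause.
SELECT a, b FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.a = t1.a);
```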
Parameter Description
● hint_info
– Specifies special comments in an SQL statement that pass instructions to
the database optimizer. The optimizer uses these hints to choose an
execution plan for the statement, unless there are some conditions that
prevent the optimizer from doing so.
– Exercise caution when using hint_info. You are advised to use hints for a
table query only when you have collected statistics about the table and
evaluated the execution plan without hints by using EXPLAIN PLAN.
– In later database versions, database conditions may change and query
performance will be enhanced, which will affect the use of hints. Note
that short-term benefits generated by hints do not necessarily lead to
long-term improvement.
– Currently, parallel hints support only full table scanning of a single table.
Aggregate functions, ORDER BY, GROUP BY, and building of hash tables
are supported.
● SQL_CALC_FOUND_ROWS
It is a reserved word. SQL_CALC_FOUND_ROWS records the number of rows
the SELECT statement would have returned without LIMIT specified. Then,
you can use the FOUND_ROWS() function to obtain the number.
SQL_CALC_FOUND_ROWS can be specified only after the first SELECT
keyword in the SELECT statement. If SELECT is a clause of a UNION, UNION
ALL, or MINUS statement, SQL_CALC_FOUND_ROWS can be specified only in
the first SELECT clause. This reserved word is valid only when the
FOUND_ROWS() function is used.
For details, see FOUND_ROWS() in Other Functions.
● DISTINCT
Returns only one copy of each set of duplicate rows selected.
Value range: existing column name or column expression
● AS OF {SCN(scn_number) | TIMESTAMP(date)}
Queries the table data of a specified SCN or at a specified time point.
Currently, temporary tables and system views cannot be queried.
– AS OF
Performs a flashback query.
– SCN(scn_number)
Queries the result set of the table with the SCN specified by scn_number.
– TIMESTAMP(date)
Queries the result set at the time point specified by date. date must be a
valid past timestamp (convert a string to a time type using the
TO_TIMESTAMP function).
Note: The specified time point must be later than the table creation
time. Otherwise, an error is reported.
● START WITH condition CONNECT BY [ NOCYCLE ] [ PRIOR ] condition
Specifies a clause for querying tree-structured data. If a table contains tree-
structured data, you can use this clause to query data.
– START WITH
Specifies the row that is the root of a tree-structured data query.
– CONNECT BY
Specifies the relationship between parent rows and child rows of a tree-
structured data query. It is used in conjunction with PRIOR.
– NOCYCLE
Instructs the database to return rows from a query even if a loop exists
in the CONNECT BY hierarchy.
– PRIOR
PRIOR is a unary operator and has the same precedence as the unary +
and - arithmetic operators. The PRIOR keyword can be on either side of
the equal sign (=). If PRIOR is placed together with the parent ID, the
query traverses data in the direction of parent nodes. If it is placed
together with the child ID, the query traverses data in the direction of
child nodes.
– CONNECT_BY_ISCYCLE pseudocolumn
The CONNECT_BY_ISCYCLE pseudocolumn indicates whether the current
row would create a loop in the tree-structured data. It is valid only when
the NOCYCLE keyword is used in a hierarchical query clause. The
CONNECT_BY_ISCYCLE pseudocolumn returns 1 if the current row has a
child which is also its ancestor. Otherwise, it returns 0.
– CONNECT_BY_ISLEAF pseudocolumn
The CONNECT_BY_ISLEAF pseudocolumn returns 1 if the current row is a
leaf of the tree defined by the CONNECT BY condition. Otherwise, it
returns 0. This information indicates whether a given row can be further
expanded to show more of the hierarchy.
– LEVEL pseudocolumn
For each row returned by a hierarchical query, the LEVEL pseudocolumn
returns 1 for a root row, 2 for a child of a root, and so on. A root row is
the highest row within an inverted tree. A child row is any nonroot row. A
parent row is any row that has children. A leaf row is any row without
children.
● expression
Specifies a field or field expression to be queried.
● table_reference
Specifies a table or view to be queried, or a subquery.
[partition(partition_name)]
Specifies the partition of a table for query. partition_name indicates the
partition name.
● condition
Restricts the rows selected to those that satisfy one or more conditions.
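A few of the parameters above can be sketched in one place. The index idx_edu_staff and the staff table with a manager_id self-reference are assumptions made for illustration, not objects defined in this document:
```sql
-- Hint: ask the optimizer for an index scan (index name assumed to exist).
SELECT /*+ INDEX(education idx_edu_staff) */ * FROM education WHERE staff_id = 10;

-- SQL_CALC_FOUND_ROWS: record how many rows the query would return without LIMIT.
SELECT SQL_CALC_FOUND_ROWS staff_id FROM education LIMIT 2;
SELECT FOUND_ROWS();

-- Hierarchical query: walk from the root rows (no manager) toward child rows.
SELECT LEVEL, staff_id FROM staff
START WITH manager_id IS NULL
CONNECT BY PRIOR staff_id = manager_id;
```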
Examples
● Perform a join query between the education and training_beijing_branch
tables.
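A minimal sketch of such a join query; the column names of training_beijing_branch are illustrative assumptions:
```sql
-- Inner join on the shared staff_id column.
SELECT e.staff_id, e.highest_degree
FROM education e
INNER JOIN training_beijing_branch t ON e.staff_id = t.staff_id;
```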
3.13.54 SET TRANSACTION
Description
SET TRANSACTION sets the isolation level of the current transaction.
Precautions
The isolation level can only be set before the transaction is executed, and the
isolation level cannot be changed during the transaction execution.
Syntax
SET TRANSACTION ISOLATION LEVEL { SERIALIZABLE | READ COMMITTED | CURRENT COMMITTED }
Parameter Description
● SERIALIZABLE
Completely isolates a transaction from others. It is the strictest level.
● READ COMMITTED
Reads only data committed before the query begins, preventing dirty reads. It is
the default level. At this level, data read by an SQL statement comes from
the same snapshot.
● CURRENT COMMITTED
Data read by an SQL statement is the latest data committed at read time.
The data read is no longer from a single snapshot.
Examples
Set the isolation level.
-- Set the isolation level to READ COMMITTED before the transaction is executed.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Run statement 1 in the transaction.
DROP TABLE IF EXISTS bonus_2017;
-- Run statement 2 in the transaction.
CREATE TABLE bonus_2017(staff_id INT NOT NULL, staff_name CHAR(50), job VARCHAR(30), bonus NUMBER);
-- Run statement 3 in the transaction.
INSERT INTO bonus_2017(staff_id, staff_name, job, bonus) VALUES(23,'limingwang','developer',5000);
-- Run statement 4 in the transaction.
INSERT INTO bonus_2017(staff_id, staff_name, job, bonus) VALUES(24,'liyuyu','tester',7000);
-- Run statement 5 in the transaction.
INSERT INTO bonus_2017(staff_id, staff_name, job, bonus) VALUES(25,'wangqizhi','developer',8000);
3.13.55 TRUNCATE TABLE
Description
TRUNCATE TABLE deletes all data from a table.
Precautions
● You can run the TRUNCATE TABLE statement to delete all records from a
table. You can truncate your own table without additional permissions. To
truncate tables of other users, you must have the DROP ANY TABLE
permission.
● The TRUNCATE statement cannot be rolled back.
● You can also use the DELETE statement to delete data.
● A table cannot be truncated during database restart or rollback.
Syntax
TRUNCATE TABLE [ schema_name. ]table_name [ PURGE ] [ { DROP | REUSE } STORAGE ]
Parameter Description
● [schema_name.] table_name
Specifies the name of a table to be truncated.
● Not specifying PURGE, DROP STORAGE, or REUSE STORAGE
After you run the TRUNCATE statement for a table, the table is moved to the
recycle bin. You can run the FLASHBACK statement to restore the table.
● PURGE
Removes a table from the recycle bin and releases the tablespace of this
table. It is an equivalent of DROP STORAGE.
● DROP STORAGE
Releases tablespaces from truncated tables. Then, the released tablespaces
are returned to the system and can be used by other segments. DROP
STORAGE is the default.
● REUSE STORAGE
Does not release tablespaces from truncated tables.
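The storage options can be sketched as follows, using the education table from the example below:
```sql
-- Default: the data is removed and the table contents go to the recycle bin.
TRUNCATE TABLE education;
-- PURGE: also remove the table from the recycle bin and release its tablespace.
TRUNCATE TABLE education PURGE;
-- REUSE STORAGE: keep the allocated tablespace for future inserts.
TRUNCATE TABLE education REUSE STORAGE;
```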
Examples
Delete all data from the education table and release the tablespace.
-- Delete the education table.
DROP TABLE IF EXISTS education;
-- Create the education table.
CREATE TABLE education(staff_id INT, highest_degree CHAR(8) NOT NULL, graduate_school VARCHAR(64),
graduate_date DATETIME, education_note VARCHAR(70));
-- Insert record 1 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(10,'doctor','Xidian University','2017-07-06 12:00:00','211');
-- Insert record 2 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
3.13.56 UPDATE
Description
UPDATE updates row values in a table.
Precautions
● UPDATE does not automatically commit the transaction. Before the
session exits, you must explicitly commit the transaction. Otherwise, the
updated records will be lost.
● To run this statement, you must have the UPDATE permission for the table or
have the UPDATE ANY TABLE system permission. Common users are not
allowed to update objects of user SYS.
● Multiple temporary tables cannot be batch updated.
Syntax
(col_name[,...]) = (expression[,...]) can be used only when the join_table clause is
used.
UPDATE table_reference SET { [col_name = expression] [ , ... ] | (col_name[,...]) = (SELECT expression[,...]) }
[ WHERE condition ]
● table_reference:
{ [ schema_name. ] table_name
| join_table
}
● join_table:
table_reference [LEFT [OUTER] | RIGHT [OUTER] | INNER ] JOIN table_reference ON conditional_expr
Parameter Description
● table_reference
Specifies tables to be updated.
Value range: existing tables
● table_name
Specifies names of tables to be updated.
Value range: existing table names
● col_name
Specifies names of columns to be updated.
Value range: existing column names
● expression
Specifies a value assigned to a column or an expression that assigns the value.
● condition
Specifies an expression that returns a Boolean value. Only rows for which this
expression returns true are updated.
● join_table
Specifies a set of tables for join query.
– INNER JOIN returns records that have matching values in both tables.
– LEFT [OUTER] JOIN returns all records from the left table and the
matched records from the right table. If there is no match, the right-side
columns are NULL.
– RIGHT [OUTER] JOIN returns all records from the right table and the
matched records from the left table. If there is no match, the left-side
columns are NULL.
Examples
In the training table, update the first_name column for the records whose
staff_id is the same as staff_id in the education table.
-- Delete the education and training tables.
DROP TABLE IF EXISTS education;
DROP TABLE IF EXISTS training;
-- Create the education and training tables.
CREATE TABLE education(staff_id INT, first_name VARCHAR(20));
CREATE TABLE training(staff_id INT, first_name VARCHAR(20));
-- Insert data.
INSERT INTO education VALUES(1, 'ALICE');
INSERT INTO education VALUES(2, 'BROWN');
INSERT INTO training VALUES(1, 'ALICE');
INSERT INTO training VALUES(1, 'ALICE');
INSERT INTO training VALUES(1, 'ALICE');
INSERT INTO training VALUES(3, 'BOB');
-- In the training table, update the first_name column for the records whose staff_id is the same as
staff_id in the education table.
UPDATE training INNER JOIN education ON training.staff_id = education.staff_id SET training.first_name =
'ALAN';
Update the partition key score of the partitioned table student_score so that a
record moves from one partition to another.
-- Delete the student_score table.
DROP TABLE IF EXISTS student_score;
-- Create a partitioned table student_score.
CREATE TABLE student_score(id INT NOT NULL, score INT) PARTITION BY RANGE(score)
(
PARTITION P1 VALUES LESS THAN(60),
PARTITION P2 VALUES LESS THAN(100)
);
-- Insert record 1 into the student_score table.
INSERT INTO student_score(id, score) VALUES(20180102, 50);
-- Insert record 2 into the student_score table.
INSERT INTO student_score(id, score) VALUES(20180121, 80);
-- Update record 1 in the student_score table.
UPDATE student_score SET score=70 WHERE id =20180102;
-- Commit the transaction.
COMMIT;
3.13.57 WITH AS
Description
The WITH AS clause defines an SQL fragment that can be referenced elsewhere in
the same SQL statement.
This makes SQL statements more readable. The named fragment is not a base
table but a virtual table, similar to a view: its definition and data are not stored
in the database; the data remains in the base tables. If data in a base table
changes, the data returned by the fragment changes accordingly.
Syntax
WITH { table_name AS ( select_statement1 ) } [ , ... ] select_statement2
Parameter Description
● table_name
Specifies the name of a user-defined table that stores SQL fragments.
● select_statement1
Specifies the SELECT statement that queries data from a base table.
● select_statement2
Specifies the SELECT statement that queries data from a user-defined table
that stores SQL fragments.
Examples
Use WITH AS to query data.
-- Delete the education table.
DROP TABLE IF EXISTS education;
-- Create the education table.
CREATE TABLE education(staff_id INT, highest_degree CHAR(8) NOT NULL, graduate_school
VARCHAR(64),graduate_date DATETIME, education_note VARCHAR(70));
-- Insert record 1 into the education table.
INSERT INTO education(staff_id,highest_degree,graduate_school,graduate_date,education_note)
VALUES(10,'Doctor','Xidian University','2017-07-06 12:00:00','211');
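Built on the education table above, a minimal WITH AS query might look as follows; the fragment name doctor_edu is illustrative:
```sql
-- Define a named fragment and query it like a table.
WITH doctor_edu AS (SELECT staff_id, graduate_school FROM education WHERE highest_degree = 'Doctor')
SELECT * FROM doctor_edu;
```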
3.14.1 Examples
This example demonstrates the entire process of using a stored procedure,
including creating, calling, and deleting a stored procedure.
Statements
● Use a stored procedure without parameters.
-- Prepare a basic table for a stored procedure.
-- Delete the duplicate temporary table if any.
DROP TABLE IF EXISTS table_temp;
-- Create a temporary table as a basic table.
CREATE TABLE table_temp(f1 INT, f2 VARCHAR2(20));
NOTICE
Stored procedures and functions are stored in the same system catalog. If a
stored procedure to be created has the same name as an existing user-
defined function, creating the stored procedure will fail. Therefore, before
creating a stored procedure, you need to delete the user-defined function with
the same name.
-- Delete the user-defined function with the same name as the stored procedure.
DROP FUNCTION IF EXISTS p_no_param;
-- Delete the existing stored procedure with the same name.
DROP PROCEDURE IF EXISTS p_no_param;
NOTICE
In the declaration statement of a stored procedure, the slash (/) indicates the
end of the statement and cannot be omitted. In addition, the slash (/) must
be in a separate line.
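A minimal sketch of the parameterless procedure that produces the output below; the body is an assumption matching the two rows shown:
```sql
-- Create a stored procedure without parameters (body illustrative).
CREATE OR REPLACE PROCEDURE p_no_param IS
BEGIN
INSERT INTO table_temp VALUES(1, 'xxx');
INSERT INTO table_temp VALUES(1, 'xxx');
COMMIT;
END;
/
-- Execute the stored procedure and query the basic table.
CALL p_no_param;
SELECT * FROM table_temp;
```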
F1 F2
------------ --------------------
1 xxx
1 xxx
2 rows fetched.
-- Delete the stored procedure.
DROP PROCEDURE p_no_param;
● Use a stored procedure with IN parameters.
-- Prepare a basic table for a stored procedure.
-- Delete the duplicate temporary table if any.
DROP TABLE IF EXISTS table_temp;
-- Create a temporary table as a basic table.
CREATE TABLE table_temp(f1 INT, f2 INT, f3 VARCHAR2(20));
NOTICE
Stored procedures and functions are stored in the same system catalog. If a
stored procedure to be created has the same name as an existing user-
defined function, creating the stored procedure will fail. Therefore, before
creating a stored procedure, you need to delete the user-defined function with
the same name.
-- Delete the user-defined function with the same name as the stored procedure.
DROP FUNCTION IF EXISTS p_with_param;
-- Delete the existing stored procedure with the same name.
DROP PROCEDURE IF EXISTS p_with_param;
NOTICE
In the declaration statement of a stored procedure, the slash (/) indicates the
end of the statement and cannot be omitted. In addition, the slash (/) must
be in a separate line.
-- Create a stored procedure. The first and second parameters have the default value 0. The third
parameter does not have a default value.
CREATE OR REPLACE PROCEDURE p_with_param(param1 INT := 0, param2 INT DEFAULT 0,param3
VARCHAR2) IS
BEGIN
INSERT INTO table_temp VALUES(param1,param2,param3);
COMMIT;
END;
/
-- Specify the values of all input parameters when executing the stored procedure.
-- Run CALL to execute the stored procedure.
CALL p_with_param(1,1,'xxx');
-- Run EXEC to execute the stored procedure.
EXEC p_with_param(1,1,'xxxx');
NOTICE
You must specify values for parameters that do not have a default value. If a
parameter has neither a default value nor a specified value, an error is
returned.
-- When executing a stored procedure, specify a value only for parameters that do not have a default
value.
-- Run CALL to execute the stored procedure.
CALL p_with_param(param3=>'yyy');
-- Run EXEC to execute the stored procedure.
EXEC p_with_param(param3=>'yyyy');
-- Query data in the temporary table.
SELECT * FROM table_temp;
F1 F2 F3
------------ ------------ --------------------
1 1 xxx
1 1 xxxx
0 0 yyy
0 0 yyyy
4 rows fetched.
-- Delete the stored procedure.
DROP PROCEDURE p_with_param;
Precautions
● If the name of a user-defined stored procedure is the same as that of a
system function, the database preferentially invokes the system function. To
make the stored procedure preferential, configure it in the $GSDB_DATA/cfg/
udf.ini file in the format of user_name.procedure_name. The configuration
takes effect only after the database is restarted.
● The permission for the udf.ini file must be limited to users in the database
user group dbgrp. The permission is 600.
● You can specify values for all parameters in the parameter list or use => to
specify values for some parameters. If a parameter has neither a default value
nor a specified value, an error is returned.
● You are not allowed to assign constants to an IN OUT or OUT parameter.
● You are advised to query the SYS_PROCS view for the ID of the record that
causes an error.
● To execute a stored procedure (including a customized function) without
parameters, you can directly specify the stored procedure name or customized
function name without parentheses.
● You can use a semicolon (;) or a slash (/) as a terminator. However, the two
terminators cannot be used together. If they are used together, an error is
reported.
Syntax
{ CALL | EXEC } [schema_name.]procedure_name[(param[,...])];
Parameter Description
● CALL
Invokes a stored procedure.
● EXEC
Executes a stored procedure.
● schema_name
Specifies the schema to which a stored procedure belongs.
● procedure_name
Specifies the name of a stored procedure to be executed.
Examples
● Use a stored procedure without parameters.
-- Prepare a basic table for a stored procedure.
-- Delete the duplicate temporary table if any.
DROP TABLE IF EXISTS table_temp;
-- Create a temporary table as a basic table.
CREATE TABLE table_temp(f1 INT, f2 VARCHAR2(20));
NOTICE
Stored procedures and functions are stored in the same system catalog. If a
stored procedure to be created has the same name as an existing user-
defined function, creating the stored procedure will fail. Therefore, before
creating a stored procedure, you need to delete the user-defined function with
the same name.
-- Delete the user-defined function with the same name as the stored procedure.
DROP FUNCTION IF EXISTS p_no_param;
-- Delete the existing stored procedure with the same name.
DROP PROCEDURE IF EXISTS p_no_param;
NOTICE
In the declaration statement of a stored procedure, the slash (/) indicates the
end of the statement and cannot be omitted. In addition, the slash (/) must
be in a separate line.
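A minimal sketch of the parameterless procedure that produces the output below; the body is an assumption matching the two rows shown:
```sql
-- Create a stored procedure without parameters (body illustrative).
CREATE OR REPLACE PROCEDURE p_no_param IS
BEGIN
INSERT INTO table_temp VALUES(1, 'xxx');
INSERT INTO table_temp VALUES(1, 'xxx');
COMMIT;
END;
/
-- Execute the stored procedure and query the basic table.
CALL p_no_param;
SELECT * FROM table_temp;
```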
F1 F2
------------ --------------------
1 xxx
1 xxx
2 rows fetched.
NOTICE
Stored procedures and functions are stored in the same system catalog. If a
stored procedure to be created has the same name as an existing user-
defined function, creating the stored procedure will fail. Therefore, before
creating a stored procedure, you need to delete the user-defined function with
the same name.
-- Delete the user-defined function with the same name as the stored procedure.
DROP FUNCTION IF EXISTS p_with_param;
-- Delete the existing stored procedure with the same name.
DROP PROCEDURE IF EXISTS p_with_param;
NOTICE
In the declaration statement of a stored procedure, the slash (/) indicates the
end of the statement and cannot be omitted. In addition, the slash (/) must
be in a separate line.
-- Create a stored procedure. The first and second parameters have the default value 0. The third
parameter does not have a default value.
CREATE OR REPLACE PROCEDURE p_with_param(param1 INT := 0, param2 INT DEFAULT 0,param3
VARCHAR2) IS
BEGIN
INSERT INTO table_temp VALUES(param1,param2,param3);
COMMIT;
END;
/
-- Specify the values of all input parameters when executing the stored procedure.
-- Run CALL to execute the stored procedure.
CALL p_with_param(1,1,'xxx');
-- Run EXEC to execute the stored procedure.
EXEC p_with_param(1,1,'xxxx');
NOTICE
You must specify values for parameters that do not have a default value. If a
parameter has neither a default value nor a specified value, an error is
returned.
-- When executing a stored procedure, specify a value only for parameters that do not have a default
value.
-- Run CALL to execute the stored procedure.
CALL p_with_param(param3=>'yyy');
-- Run EXEC to execute the stored procedure.
EXEC p_with_param(param3=>'yyyy');
-- Query data in the temporary table.
SELECT * FROM table_temp;
F1 F2 F3
------------ ------------ --------------------
1 1 xxx
1 1 xxxx
0 0 yyy
0 0 yyyy
4 rows fetched.
-- Delete the stored procedure.
DROP PROCEDURE p_with_param;
Precautions
● If you are sure that the stored procedure to be deleted exists, IF EXISTS is not
required. Otherwise, you are advised to use DROP PROCEDURE IF EXISTS
procedure_name; to avoid a nonexistence error. Common users cannot delete
the objects of system users.
● You can use a semicolon (;) or a slash (/) as a terminator. However, the two
terminators cannot be used together. If they are used together, an error is
reported.
Syntax
DROP PROCEDURE [ IF EXISTS ] [schema_name.]procedure_name;
Parameter Description
● IF EXISTS
Does not throw an error if a stored procedure to be deleted does not exist.
● schema_name
Specifies the schema to which a stored procedure belongs.
● procedure_name
Specifies the name of a stored procedure to be deleted.
Examples
DROP PROCEDURE IF EXISTS p_no_param;
Precautions
Stored procedures and user-defined functions share the same system catalog.
Therefore, do not use the same name for a stored procedure and a user-defined
function. Common users cannot create objects of system users.
Syntax
CREATE [ OR REPLACE ] PROCEDURE [ IF NOT EXISTS ] [schema_name.]procedure_name(args_list)
{ IS | AS }
[ param_list ]
BEGIN
statement;
END;
/
Parameter Description
● OR REPLACE
Replaces an existing stored procedure.
● IF NOT EXISTS
Does not throw an error if a stored procedure with the same name already exists.
● procedure_name
Specifies the name of a stored procedure to be created.
● schema_name
Specifies the schema to which a stored procedure belongs.
● args_list
Specifies the list of parameters in a stored procedure. A parameter is declared
in one of three modes: input (IN), output (OUT), or input and output
(IN OUT). A default value can be specified for an input parameter.
– IN is the default mode of a parameter. In this mode, a parameter already
has a value when the procedure is running and the value does not
change in the procedure body.
– A parameter in OUT mode is assigned with a value only within a
procedure body. The parameter passes a value back to the procedure that
invokes it.
– A parameter in IN OUT mode passes a value into the procedure body
and also passes a value back to the caller.
● param_list
Specifies local variable declarations and their default values. The list can be
empty. For details about the variable declaration syntax, see DECLARE Syntax.
● statement
Specifies the body of a stored procedure. It cannot be empty; otherwise, an
error is reported. You can use basic, dynamic, control, exception, or other
statements. For details about basic statements, see Basic Statements; dynamic
statements, see Dynamic Statements; control statements, see Control
Statements; other statements, see Other Statements; user-defined functions,
see User-defined Functions; and stored procedures, see Creating a Stored
Procedure.
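The three parameter modes can be sketched as follows; the procedure and parameter names are illustrative:
```sql
-- v_in is read-only inside the body; v_out and v_io pass values back to the caller.
CREATE OR REPLACE PROCEDURE p_modes(v_in IN INT, v_out OUT INT, v_io IN OUT INT) IS
BEGIN
v_out := v_in * 2;
v_io := v_io + v_in;
END;
/
```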
Examples
-- Delete the existing stored procedure with the same name.
DROP PROCEDURE IF EXISTS Zenith_Test_003;
-- Delete the user-defined function with the same name as the stored procedure.
DROP FUNCTION IF EXISTS Zenith_Test_003;
-- Create the stored procedure.
CREATE OR REPLACE PROCEDURE Zenith_Test_003(param1 IN VARCHAR2,param2 IN VARCHAR2)
IS
BEGIN
DBMS_OUTPUT.PUT_LINE('Hello Zenith:'||param1||','||param2);
END Zenith_Test_003;
/
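The procedure just created can then be invoked with CALL or EXEC, for example:
```sql
-- Invoke the stored procedure with two input parameters.
CALL Zenith_Test_003('GaussDB', '100');
```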
Precautions
● Variables in stored procedures must be declared before being used, with one
exception: if a FOR loop is used to traverse a cursor, the index_name variable
of the FOR loop can be used without being declared.
● Currently, DECLARE in GaussDB 100 is used to define variables. In the
implementation blocks of functions, stored procedures, anonymous blocks,
and triggers, variables declared in stored procedures have the highest priority.
● Names of variables declared in a stored procedure must be valid identifiers
and cannot be keywords or numbers.
● For details about the data types supported by common variables, see Data
Type. You can specify a default value for a common variable.
● To declare a system cursor variable, use the SYS_REFCURSOR keyword. To
declare a common cursor variable, run TYPE type_name1 IS REF CURSOR to
declare a cursor type, and then declare a cursor variable of this type.
● Currently, cursor variables are implemented based on weak data types. That
is, the RETURN return_type clause cannot be used to specify the data type of
the returned result in the definition statement of a cursor type. Instead, the
data type of the row opened by the cursor is used.
● When a record type is declared, the data type of each column can be a basic
data type or a RECORD type. The SYS_REFCURSOR type is not supported.
● The TYPE statement only declares a cursor or record type and does not
generate variables. A variable is generated only after a variable of the
declared type is declared.
Syntax
● Declare a common variable and its default value.
DECLARE variant_name data_type [ { { := } | DEFAULT } default_expr];
● Declare a system cursor variable.
DECLARE cursor_name SYS_REFCURSOR;
● Declare a cursor type and a variable of this type.
DECLARE TYPE type_name1 IS REF CURSOR;
ref_cursor_name type_name1;
Parameter Description
● variant_name
Specifies the name of a variable to be declared.
● data_type
Specifies the data type of a variable. For details about the available types, see
Data Type.
● DEFAULT default_expr
Specifies the default value of a variable. The default value can be a constant
or an expression.
● cursor_name
Specifies the name of a system cursor variable to be declared.
● SYS_REFCURSOR
Specifies that a cursor variable to be declared is a system cursor variable.
● TYPE type_name1 IS REF CURSOR
Declares a cursor type. type_name1 indicates the name of the type to be
declared.
● ref_cursor_name type_name1
Declares a cursor variable of the declared type. This parameter is used
together with TYPE type_name1 IS REF CURSOR.
● TYPE type_name2 IS RECORD (field_name data_type [,...])
Declares a record type. type_name2 indicates the name of the type to be
declared.
● rec_name type_name2
Declares a record variable of the declared type. This parameter is used
together with TYPE type_name2 IS RECORD (field_name data_type [,...]).
Examples
● Prepare data (create the test table and insert a record into it).
-- Delete the existing test table.
DROP TABLE IF EXISTS test;
-- Create the test table.
CREATE TABLE test(f_int1 INTEGER,f_int2 INTEGER, f_int3 INTEGER, f_bigint1 BIGINT, f_bigint2
BIGINT, f_bigint3 BIGINT, f_bool1 INTEGER, f_bool2 INTEGER, f_num1 NUMBER(38, 0),f_num2
NUMBER(38, 0), f_dec1 DECIMAL(38, 0), f_dec2 DECIMAL(38, 0), f_num10 NUMBER(38, 10), f_dec10
DECIMAL(38, 10), f_float FLOAT,f_double DOUBLE, f_real REAL, f_char1 CHAR(128),f_char2
CHAR(128), f_varchar1 VARCHAR(512),f_varchar2 VARCHAR2(512), f_date1 DATE, f_date2 DATE,
f_time DATE, f_timestamp TIMESTAMP);
-- Insert a record into the test table.
INSERT INTO test
VALUES(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,'a','b','c','d','2017-01-01','2017-01-01','2017-01-01','2017-01-01');
-- Commit the transaction.
COMMIT;
DECLARE
f_int1 INTEGER;
f_int2 INTEGER;
f_int3 INTEGER;
f_bigint1 BIGINT;
f_bigint2 BIGINT;
f_bigint3 BIGINT;
f_bool1 INTEGER;
f_bool2 INTEGER;
f_num1 NUMBER(38, 0);
f_num2 NUMBER(38, 0);
f_dec1 DECIMAL(38, 0);
f_dec2 DECIMAL(38, 0);
f_num10 NUMBER(38, 10);
f_dec10 DECIMAL(38, 10);
f_float FLOAT;
f_double DOUBLE;
f_real REAL;
f_char1 CHAR(128);
f_char2 CHAR(128);
f_varchar1 VARCHAR(512);
f_varchar2 VARCHAR2(512);
f_date1 DATE;
f_date2 DATE;
f_time DATE;
f_timestamp TIMESTAMP;
BEGIN
SELECT * INTO
f_int1,f_int2,f_int3,f_bigint1,f_bigint2,f_bigint3,f_bool1,f_bool2,f_num1,f_num2,f_dec1,f_dec2,f_num10,f_dec10,f_float,f_double,f_real,f_char1,f_char2,f_varchar1,f_varchar2,f_date1,f_date2,f_time,f_timestamp
FROM test;
DBMS_OUTPUT.PUT_LINE('f_int1 is ' || f_int1 );
DBMS_OUTPUT.PUT_LINE('f_int2 is ' || f_int2 );
DBMS_OUTPUT.PUT_LINE('f_int3 is ' || f_int3 );
DBMS_OUTPUT.PUT_LINE('f_bigint1 is ' || f_bigint1 );
DBMS_OUTPUT.PUT_LINE('f_bigint2 is ' || f_bigint2 );
DBMS_OUTPUT.PUT_LINE('f_bigint3 is ' || f_bigint3 );
DBMS_OUTPUT.PUT_LINE('f_bool1 is ' || f_bool1 );
DBMS_OUTPUT.PUT_LINE('f_bool2 is ' || f_bool2 );
DBMS_OUTPUT.PUT_LINE('f_num1 is ' || f_num1 );
DBMS_OUTPUT.PUT_LINE('f_num2 is ' || f_num2 );
DBMS_OUTPUT.PUT_LINE('f_dec1 is ' || f_dec1 );
DBMS_OUTPUT.PUT_LINE('f_dec2 is ' || f_dec2 );
DBMS_OUTPUT.PUT_LINE('f_num10 is ' || f_num10 );
DBMS_OUTPUT.PUT_LINE('f_dec10 is ' || f_dec10 );
DBMS_OUTPUT.PUT_LINE('f_float is ' || f_float );
DBMS_OUTPUT.PUT_LINE('f_double is ' || f_double );
DBMS_OUTPUT.PUT_LINE('f_real is ' || f_real );
DBMS_OUTPUT.PUT_LINE('f_char1 is ' || f_char1 );
DBMS_OUTPUT.PUT_LINE('f_char2 is ' || f_char2 );
DBMS_OUTPUT.PUT_LINE('f_varchar1 is ' || f_varchar1 );
DBMS_OUTPUT.PUT_LINE('f_varchar2 is ' || f_varchar2 );
DBMS_OUTPUT.PUT_LINE('f_date1 is ' || f_date1 );
DBMS_OUTPUT.PUT_LINE('f_date2 is ' || f_date2 );
DBMS_OUTPUT.PUT_LINE('f_time is ' || f_time );
DBMS_OUTPUT.PUT_LINE('f_timestamp is ' || f_timestamp);
END;
/
● Declare a cursor type tcur and a cursor variable cursor_k of this type.
DECLARE
TYPE tcur IS REF CURSOR;
cursor_k tcur;
rec test%rowtype;
BEGIN
OPEN cursor_k FOR SELECT * FROM test;
FETCH cursor_k INTO rec;
CLOSE cursor_k;
END;
/
● Declare a record type item_def and a record variable item of this type.
DECLARE
TYPE item_def IS RECORD (
f_int1 integer,
f_int2 integer,
f_int3 integer,
f_bigint1 bigint,
f_bigint2 bigint,
f_bigint3 bigint,
f_bool1 integer,
f_bool2 integer,
f_num1 number(38, 0),
f_num2 number(38, 0),
f_dec1 DECIMAL(38, 0),
f_dec2 DECIMAL(38, 0),
f_num10 number(38, 10),
f_dec10 decimal(38, 10),
f_float float,
f_double double,
f_real real,
f_char1 char(128),
f_char2 char(128),
f_varchar1 varchar(512),
f_varchar2 varchar2(512),
f_date1 date,
f_date2 date,
f_time date,
f_timestamp timestamp
);
item item_def;
BEGIN
SELECT * INTO item FROM test;
DBMS_OUTPUT.PUT_LINE('item.f_int1 is ' || item.f_int1 );
DBMS_OUTPUT.PUT_LINE('item.f_int2 is ' || item.f_int2 );
DBMS_OUTPUT.PUT_LINE('item.f_int3 is ' || item.f_int3 );
DBMS_OUTPUT.PUT_LINE('item.f_bigint1 is ' || item.f_bigint1 );
DBMS_OUTPUT.PUT_LINE('item.f_bigint2 is ' || item.f_bigint2 );
DBMS_OUTPUT.PUT_LINE('item.f_bigint3 is ' || item.f_bigint3 );
DBMS_OUTPUT.PUT_LINE('item.f_bool1 is ' || item.f_bool1 );
DBMS_OUTPUT.PUT_LINE('item.f_bool2 is ' || item.f_bool2 );
DBMS_OUTPUT.PUT_LINE('item.f_num1 is ' || item.f_num1 );
DBMS_OUTPUT.PUT_LINE('item.f_num2 is ' || item.f_num2 );
DBMS_OUTPUT.PUT_LINE('item.f_dec1 is ' || item.f_dec1 );
DBMS_OUTPUT.PUT_LINE('item.f_dec2 is ' || item.f_dec2 );
DBMS_OUTPUT.PUT_LINE('item.f_num10 is ' || item.f_num10 );
DBMS_OUTPUT.PUT_LINE('item.f_dec10 is ' || item.f_dec10 );
DBMS_OUTPUT.PUT_LINE('item.f_float is ' || item.f_float );
DBMS_OUTPUT.PUT_LINE('item.f_double is ' || item.f_double );
DBMS_OUTPUT.PUT_LINE('item.f_real is ' || item.f_real );
DBMS_OUTPUT.PUT_LINE('item.f_char1 is ' || item.f_char1 );
DBMS_OUTPUT.PUT_LINE('item.f_char2 is ' || item.f_char2 );
DBMS_OUTPUT.PUT_LINE('item.f_varchar1 is ' || item.f_varchar1 );
DBMS_OUTPUT.PUT_LINE('item.f_varchar2 is ' || item.f_varchar2 );
DBMS_OUTPUT.PUT_LINE('item.f_date1 is ' || item.f_date1 );
DBMS_OUTPUT.PUT_LINE('item.f_date2 is ' || item.f_date2 );
Assignment Statements
● Syntax
-- Assign a value to a declared variable.
variant_name := variant_expr;
● Parameter Description
– variant_name
Specifies the name of a declared variable. The value must be a variable or
input parameter declared in a stored procedure.
– variant_expr
Specifies an expression used to assign a value to a declared variable. The
expression can be a common expression, an expression involving
functions and variables, or a CASE or WHEN expression.
Variables in an expression match the variables declared in stored
procedures first, and then match table or column names. Therefore,
ensure that names of variables declared in the stored procedure are
different from the table and column names. Functions in an expression
match built-in functions first, then the functions in the advanced
package, and finally the user-defined functions. Therefore, ensure that
names of user-defined functions are different from those of built-in
functions and functions in the advanced package.
● Examples
– Assign a value to a variable by using a common expression (in bold).
CREATE OR REPLACE PROCEDURE Zenith_Test_004(param1 in out varchar2)
IS
tmp varchar2(20) := '12345678';
BEGIN
param1 := param1 || tmp;
END Zenith_Test_004;
/
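As noted above, variant_expr can also be a CASE expression. A minimal sketch (the variable names score and grade are illustrative, not from the manual):

```sql
-- Assign a value to a variable by using a CASE expression.
DECLARE
    score INT := 85;
    grade VARCHAR2(10);
BEGIN
    grade := CASE
                 WHEN score >= 90 THEN 'A'
                 WHEN score >= 80 THEN 'B'
                 ELSE 'C'
             END;
    DBMS_OUTPUT.PUT_LINE('grade is ' || grade);
END;
/
```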
SQL Statements
Currently, only the UPDATE, INSERT, DELETE, MERGE, SELECT ... INTO, COMMIT,
and ROLLBACK statements in stored procedures can be executed directly. Other
SQL statements can be executed only through the EXECUTE IMMEDIATE
statement. Otherwise, an error occurs during stored procedure
compilation. For details about dynamic SQL statements, see Dynamic Statements.
● Precautions
– Variables in DML statements match the variables declared in stored
procedures first and then match column names. Therefore, ensure that
variable names are different from the column names.
– When you run the SELECT ... INTO statement, if variant_list is used to
store data after the INTO keyword, the number of columns in column_list
after the SELECT keyword must be the same as the number of variables
in variant_list. If a record variable instead of variant_list is used to store
data after the INTO keyword, this restriction does not apply.
In addition, if more than one record is returned, the
"ERR_TOO_MANY_ROWS" error message is displayed when you assign
values using INTO {variant_list | record_variant}; if no record is returned,
the "NO_DATA_FOUND" error message is displayed.
– If the variable name in a DML statement is the same as a table or
column name, variables except the following ones in the DML statement
are replaced with the variables declared in stored procedures:
● Examples
– Prepare data (create the T_PROC_1 table and insert data into it).
-- Delete the existing T_PROC_1 table.
DROP TABLE IF EXISTS T_PROC_1;
-- Create the T_PROC_1 table.
CREATE TABLE T_PROC_1 (f_int1 INTEGER,f_int2 INTEGER, f_int3 INTEGER, f_bigint1 BIGINT,
f_bigint2 BIGINT, f_bigint3 BIGINT, f_bool1 INTEGER, f_bool2 INTEGER, f_num1 NUMBER(38,
0),f_num2 NUMBER(38, 0), f_dec1 DECIMAL(38, 0), f_dec2 DECIMAL(38, 0), f_num10
NUMBER(38, 10), f_dec10 DECIMAL(38, 10), f_float FLOAT,f_double DOUBLE, f_real REAL,
f_char1 CHAR(128),f_char2 CHAR(128), f_varchar1 VARCHAR(512),f_varchar2 VARCHAR2(512),
f_date1 DATE, f_date2 DATE, f_time DATE, f_timestamp TIMESTAMP);
-- Insert data into the T_PROC_1 table.
INSERT INTO T_PROC_1
VALUES(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,'a','b','c','d','2017-01-01','2017-01-01','2017-01-01','2017-01-01');
-- Commit the transaction.
COMMIT;
DECLARE
a INT;
BEGIN
MERGE INTO T_PROC_1 USING SYS_DUMMY ON (f_int1 = 2) WHEN NOT MATCHED THEN
INSERT (f_int1) VALUES(2);
SELECT f_int1 INTO a FROM T_PROC_1 LIMIT 1;
IF (a = 2) THEN
COMMIT;
ELSE
ROLLBACK;
END IF;
END;
/
Syntax
EXECUTE IMMEDIATE sql_statement [ INTO variant_name[, ...] ] [ USING { [IN] [OUT] variant_name }
[, ...] ]
Parameter Description
● EXECUTE IMMEDIATE
Dynamically issues SQL statements.
● sql_statement
Specifies the SQL statements to be dynamically issued. Currently, only the
UPDATE, INSERT, DELETE, MERGE, SELECT ... INTO, COMMIT, and
ROLLBACK statements in stored procedures can be executed directly.
Other SQL statements can be executed only through the EXECUTE
IMMEDIATE statement. Otherwise, an error occurs during stored
procedure compilation.
● INTO
Saves the query result to a variable. If INTO is used, sql_statement must be a
SELECT statement and the result set can contain only one record. The number
of columns in the SELECT clause must be the same as that of variables in the
INTO clause.
● USING
The number and data types of variables in the USING clause must be the
same as those of parameters in sql_statement.
Examples
SQL> SET serveroutput ON;
ON
SQL> DECLARE
SQL> a INT;
SQL> b CHAR(16);
SQL> c VARCHAR(16);
SQL> BEGIN
SQL> a := 10;
SQL> b := 'abc';
SQL> c := 'efc';
SQL> EXECUTE IMMEDIATE 'BEGIN
dbms_output.put_line(''a=''||:x);dbms_output.put_line(''b=''||:y);dbms_output.put_line(''c=''||:z); :x :=
11; :y := ''aaa''; :z := ''bbb'';END;' USING OUT a, OUT b, OUT c;
SQL> DBMS_OUTPUT.PUT_LINE('a='||a);
SQL> DBMS_OUTPUT.PUT_LINE('b='||b);
SQL> DBMS_OUTPUT.PUT_LINE('c='||c);
SQL> END;
SQL> /
a=
b=
c=
a=11
b=aaa
c=bbb
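The INTO clause described above can be sketched as follows; the result set must contain exactly one record (the T_PROC_1 table from the earlier example is assumed to exist):

```sql
DECLARE
    v_cnt INT;
BEGIN
    -- Dynamically issue a query and save the single-row result into a variable.
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM T_PROC_1' INTO v_cnt;
    DBMS_OUTPUT.PUT_LINE('count=' || v_cnt);
END;
/
```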
Precautions
● BEGIN-END blocks are basic statement blocks and cannot be empty.
● FOR LOOP and WHILE LOOP statement blocks cannot be empty.
● LOOP statement blocks must contain an EXIT statement (optionally together
with CONTINUE). Otherwise, an infinite loop occurs.
● If a statement block falls into an infinite loop and does not respond, create a
new connection on the client, query the DV_SESSIONS system catalog for
the corresponding session ID, and run the ALTER SYSTEM KILL session_id;
statement to terminate the session.
GOTO Statement
● Syntax
GOTO label_name;
● Parameter Description
label_name
Specifies the name of a label to be declared. The value cannot conflict with a
variable name or user name. A label can be defined before or after the GOTO
statement.
● Example (GOTO statement in bold)
<<main>>
DECLARE
x NUMBER;
BEGIN
x := 0;
GOTO outer_loop;
x := 100;
<<outer_loop>>
LOOP
DBMS_OUTPUT.PUT_LINE(' outer in ');
<<inner_loop>>
LOOP
DBMS_OUTPUT.PUT_LINE('Inside loop: x = ' || x);
x := x + 1;
IF x > 3 THEN
DBMS_OUTPUT.PUT_LINE(' BEGIN EXIT ');
EXIT inner_loop WHEN x > 4;
DBMS_OUTPUT.PUT_LINE(' AFTER EXIT ');
ELSE
CONTINUE inner_loop WHEN x > 1;
END IF;
DBMS_OUTPUT.PUT_LINE(' after continue ');
END LOOP;
DBMS_OUTPUT.PUT_LINE(' outer_loop ');
EXIT outer_loop;
DBMS_OUTPUT.PUT_LINE(' After loop: x = ' || x);
IF x < 6 THEN
GOTO inner_loop;
END IF;
END LOOP;
END;
/
● Parameter Description
– index_name
Specifies a loop index. A loop variable is a local variable defined in the
FOR LOOP statement. If the name of a local variable is the same as that
of an external variable, the FOR LOOP statement uses the local variable.
– REVERSE
Specifies that a reverse loop is used. If REVERSE is specified, the iteration
proceeds downward from upper_bound to lower_bound. If REVERSE is
not specified, the iteration proceeds upward from lower_bound to
upper_bound.
– lower_bound..upper_bound
Specifies the lower and upper boundary values of a loop variable. The
values can be numbers or expressions that evaluate to numbers.
lower_bound indicates the lower boundary and upper_bound indicates
the upper boundary. The value of lower_bound must be smaller than
that of upper_bound. The double dots (..) function as a range operator.
– statements
Specifies a loop body. The loop body cannot be empty, but it can be a
NULL statement. Otherwise, an error will be reported.
● Example (FOR loop in bold)
Declare
x bool;
BEGIN
x := FALSE;
FOR i IN 1..3 LOOP
DBMS_OUTPUT.PUT_LINE('here:' || i);
END LOOP;
DBMS_OUTPUT.PUT_LINE('x:' || x);
END;
/
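The REVERSE keyword described above can be sketched as follows; the range is still written as lower_bound..upper_bound, but the iteration proceeds downward:

```sql
BEGIN
    FOR i IN REVERSE 1..3 LOOP
        DBMS_OUTPUT.PUT_LINE('here:' || i);  -- Prints 3, 2, 1.
    END LOOP;
END;
/
```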
LOOP Statement
● Syntax
LOOP
statements;
EXIT [ WHEN condition1 ];
CONTINUE [ WHEN condition2 ];
END LOOP;
● Parameter Description
– LOOP/END LOOP
Specifies the start and end of a loop. Generally, LOOP is used together
with IF.
– statements
Specifies a loop body. The loop body cannot be empty, but it can be a
NULL statement. Otherwise, an error will be reported.
– EXIT [ WHEN condition1 ]
Specifies the conditions for exiting a loop.
condition1 indicates the condition expression for exiting the loop. If this
condition is TRUE, the loop exits.
– CONTINUE [ WHEN condition2 ]
Specifies the conditions for a loop to continue.
condition2 indicates the condition expression for the loop to continue. If
this condition is TRUE, the loop continues.
● Example (LOOP statement in bold)
DECLARE
x NUMBER;
BEGIN
x := 0;
LOOP
DBMS_OUTPUT.PUT_LINE('Inside loop: x = ' || x);
x := x + 1;
IF x > 10 THEN
DBMS_OUTPUT.PUT_LINE(' BEGIN EXIT ');
EXIT WHEN x > 20;
DBMS_OUTPUT.PUT_LINE(' AFTER EXIT ');
END IF;
END LOOP;
DBMS_OUTPUT.PUT_LINE(' After loop: x = ' || x);
END;
/
● Parameter Description
– condition
● Parameter Description
– Basic CASE functions
▪ variant_name
Specifies the name of a variable used to check a condition.
▪ ELSE statement3
If the variable value is neither expr1 nor expr2, statement3 is
executed.
– Searched CASE functions
▪ ELSE statement3
If none of the WHEN conditions is TRUE, statement3 is executed.
● Examples
– Basic CASE function (in bold)
DECLARE
class CHAR(1) := 'S';
age VARCHAR2(15);
BEGIN
CASE class
WHEN 'S' THEN age := '3-4 years';
WHEN 'M' THEN age := '4-5 years';
WHEN 'P' THEN age := '5-6 years';
ELSE age := 'No such class';
END CASE;
DBMS_OUTPUT.PUT_LINE(age);
END;
/
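A searched CASE function, as described above, evaluates WHEN conditions instead of comparing a single variable against expressions. A minimal sketch (the variable names are illustrative):

```sql
DECLARE
    age_num INT := 4;
    age VARCHAR2(15);
BEGIN
    CASE
        WHEN age_num < 4 THEN age := '3-4 years';
        WHEN age_num < 5 THEN age := '4-5 years';
        WHEN age_num < 6 THEN age := '5-6 years';
        ELSE age := 'No such class';
    END CASE;
    DBMS_OUTPUT.PUT_LINE(age);  -- Prints "4-5 years".
END;
/
```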
IF Statement
● Syntax
IF condition THEN
statement
ELSIF condition THEN
statement
ELSIF
...
ELSE
statement
END IF;
● Parameter Description
– condition
Specifies a condition to be evaluated by the IF statement.
– statement
Specifies the statements that will be executed if a specified condition is
TRUE.
● Example (IF statement in bold)
DECLARE
v_sal INT;
BEGIN
v_sal := 1;
v_sal := v_sal + 1;
DBMS_OUTPUT.PUT_LINE('value1:'||v_sal);
IF v_sal < 2 THEN
DBMS_OUTPUT.PUT_LINE('value2:'||v_sal);
ELSIF v_sal = 2 THEN
IF v_sal != 2 THEN
DBMS_OUTPUT.PUT_LINE('value3:'||v_sal);
ELSE
DBMS_OUTPUT.PUT_LINE('value3x:'||v_sal);
END IF;
ELSIF v_sal = 4 THEN
DBMS_OUTPUT.PUT_LINE('value4:'||v_sal);
ELSE
DBMS_OUTPUT.PUT_LINE('value5:'||v_sal);
END IF;
DBMS_OUTPUT.PUT_LINE('value6:'||v_sal+2);
END;
/
● Parameter Description
– WHEN expr THEN sql
expr specifies an exception expression. sql specifies the statement for
handling the exception specified by expr. An exception can be handled
only once.
– WHEN OTHERS THEN sql1
Specifies the statement for handling an exception that matches no
specified exception expressions. WHEN OTHERS THEN sql1 must be
placed at the end of an EXCEPTION statement.
● Example (EXCEPTION statement in bold)
-- Delete the existing test_pl_excpt1 stored procedure.
DROP PROCEDURE IF EXISTS test_pl_excpt1;
-- Create the test_pl_excpt1 stored procedure.
CREATE OR REPLACE PROCEDURE test_pl_excpt1
AS
v_age INTEGER;
v_name VARCHAR(30);
BEGIN
v_age:=89;v_age:= v_age/0;
DBMS_OUTPUT.PUT_LINE('correct');
EXCEPTION
WHEN Zero_divide THEN SYS.DBMS_OUTPUT.PUT_LINE('Zero divide');
SYS.DBMS_OUTPUT.PUT_LINE(SQLCODE || 'error ' || sqlerrm);
WHEN value_error THEN SYS.DBMS_OUTPUT.PUT_LINE('value error');
SYS.DBMS_OUTPUT.PUT_LINE(SQLCODE || 'error ' || sqlerrm);
WHEN OTHERS THEN SYS.DBMS_OUTPUT.PUT_LINE('other error');
SYS.DBMS_OUTPUT.PUT_LINE(sqlcode||'error'||sqlerrm);
END;
/
– SQLERRM Syntax
SQLERRM([ errorcode ]);
No. Exception Error Code
1 CASE_NOT_FOUND 902
2 CURSOR_ALREADY_OPEN 904
3 DUP_VAL_ON_INDEX 729
4 INVALID_CURSOR 905
5 INVALID_NUMBER 636
6 NO_DATA_FOUND 906
7 PROGRAM_ERROR 908
8 ROWTYPE_MISMATCH 926
9 STORAGE_ERROR 911
10 SYS_INVALID_ROWID 639
11 TIMEOUT_ON_RESOURCE 723
12 TOO_MANY_ROWS 915
13 VALUE_ERROR 635
14 ZERO_DIVIDE 637
User-defined Exceptions
● Syntax
exception_name EXCEPTION;
A declared exception is thrown by the RAISE statement.
● Parameter Description
exception_name
Specifies the name of an exception to be declared.
● Example (user-defined exception in bold)
CREATE OR REPLACE PROCEDURE employee_income ( salary number )
IS
low_income EXCEPTION; // Declare an exception.
BEGIN
IF salary < 30000 THEN
RAISE low_income; // Throws the exception.
END IF;
EXCEPTION
WHEN low_income THEN // Handle the exception.
DBMS_OUTPUT.PUT_LINE ('low_income:'||salary);
END;
/
BEGIN
employee_income (10000);
END;
/
Output:
low_income:10000
Internal Exceptions
● Description
An internal exception is the one declared in GaussDB 100. The corresponding
error code and message are automatically raised when the defined exception
is matched.
● Syntax
-- Define an exception.
exception_name EXCEPTION;
-- Associate the exception with an internal error code.
PRAGMA EXCEPTION_INIT (exception_name, error_code);
● Parameter Description
– exception_name
Specifies the name of an exception to be declared.
– error_code
Specifies the error code to be associated with an exception. The code
range is (0, 100000) or [-20999, -20000].
● Examples
DECLARE
my_except EXCEPTION;
PRAGMA EXCEPTION_INIT(my_except, 932);
BEGIN
...
EXCEPTION
WHEN my_except THEN
...
WHEN OTHERS THEN
...
END;
/
NULL Statement
The NULL statement is a "no-op" (no operation) statement and is usually used in
an empty function or a loop body. The syntax is as follows:
NULL;
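A minimal sketch of a NULL statement used as a placeholder in a branch:

```sql
DECLARE
    v INT := 1;
BEGIN
    IF v = 1 THEN
        NULL;  -- No action is required in this branch.
    ELSE
        DBMS_OUTPUT.PUT_LINE('v is not 1');
    END IF;
END;
/
```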
● Parameter Description
label_name
Specifies the name of a label to be declared.
● Example (LABEL statement in bold)
<<main>>
DECLARE
aaa NUMBER;
j NUMBER;
bbb NUMBER;
BEGIN
aaa := 2;
j := 1;
bbb := 0;
<<outer_loop>>
bbb := bbb + 1;
DBMS_OUTPUT.PUT_LINE('bbb:' || bbb);
FOR i IN 1..5 LOOP
DBMS_OUTPUT.PUT_LINE('i:' || i);
<<inner_loop>>
WHILE j < 6 LOOP
DBMS_OUTPUT.PUT_LINE('j:' || j);
j := j + 1;
FOR k IN 1..5 LOOP
DBMS_OUTPUT.PUT_LINE('k:' || k);
FOR l IN 1..5 LOOP
DBMS_OUTPUT.PUT_LINE('l:' || l);
EXIT inner_loop WHEN bbb = 2;
GOTO outer_loop;
END LOOP;
END LOOP;
END LOOP;
END LOOP;
END;
/
Incorrect case 1:
DECLARE
bbb NUMBER;
BEGIN
bbb := 1;
DBMS_OUTPUT.PUT_LINE('bbb:' || bbb);
GOTO my_label;
FOR i IN 1..5 LOOP
<<my_label>>
bbb := bbb + 1;
DBMS_OUTPUT.PUT_LINE('bbb:' || bbb);
END LOOP;
END;
/
Incorrect case 2:
DECLARE
bbb NUMBER;
BEGIN
bbb := 1;
DBMS_OUTPUT.PUT_LINE('bbb:' || bbb);
GOTO my_label;
IF bbb = 1 THEN
<<my_label>>
DBMS_OUTPUT.PUT_LINE('bbb = 1');
bbb := bbb + 1;
ELSIF bbb = 2 THEN
DBMS_OUTPUT.PUT_LINE('bbb = 2');
END IF;
END;
/
Incorrect case 3:
DECLARE
bbb NUMBER;
BEGIN
bbb := 1;
DBMS_OUTPUT.PUT_LINE('bbb:' || bbb);
IF bbb = 1 THEN
<<my_label>>
DBMS_OUTPUT.PUT_LINE('bbb = 1');
bbb := bbb + 1;
ELSIF bbb = 2 THEN
GOTO my_label;
DBMS_OUTPUT.PUT_LINE('bbb = 2');
END IF;
END;
/
Incorrect case 4:
DECLARE
bbb NUMBER;
BEGIN
bbb := 3;
DBMS_OUTPUT.PUT_LINE('bbb:' || bbb);
CASE bbb
WHEN 1 THEN
<<my_label>>
DBMS_OUTPUT.PUT_LINE('bbb = 1');
bbb := bbb + 1;
WHEN 2 THEN
DBMS_OUTPUT.PUT_LINE('bbb = 2');
bbb := bbb + 1;
GOTO my_label;
ELSE
DBMS_OUTPUT.PUT_LINE('bbb = 3');
END CASE;
END;
/
Incorrect case 5:
DECLARE
aaa NUMBER;
BEGIN
aaa := 2;
GOTO my_label;
BEGIN
<<my_label>>
DBMS_OUTPUT.PUT_LINE('aaa:' || aaa);
END;
aaa := 3;
END;
/
Incorrect case 6:
DECLARE
aaa NUMBER;
BEGIN
aaa := 2;
BEGIN
FOR l IN 1..3 LOOP
GOTO my_label;
DBMS_OUTPUT.PUT_LINE('l:' || l);
END LOOP;
END;
aaa := 3;
EXCEPTION
when no_data_found then
<<my_label>>
dbms_output.put_line(aaa);
END;
/
Incorrect case 7:
DECLARE
aaa NUMBER;
BEGIN
aaa := 2;
<<my_label>>
FOR l IN 1..3 LOOP
GOTO my_label;
DBMS_OUTPUT.PUT_LINE('l:' || l);
END LOOP;
<<my_label>>
aaa := 3;
END;
/
DBMS_OUTPUT.PUT_LINE( '-------------------------------------' );
CLOSE cv;
END;
/
3.14.12 Cursors
Description
GaussDB 100 supports the following types of cursors: explicit cursors, reference
cursors, FOR loop cursors, and implicit cursors.
Precautions
● Cursors do not support the RETURN clause.
● Reference cursors can be returned as output parameters, but explicit and
implicit cursors cannot.
● Implicit cursors do not need to be declared. Explicit cursors are bound to the
SELECT statement when declared, and the statement is verified in the
compilation phase. Reference cursors can be dynamically associated to a
specific SELECT statement, and the statement is verified in the execution
phase.
● Currently, cursor variables are implemented based on weak data types. That
is, the RETURN return_type clause cannot be used to specify the data type of
the returned result in the definition statement of a cursor type. Instead, the
data type of the row opened by the cursor is used.
Declaring a Cursor
● Syntax
– Declare an explicit cursor.
CURSOR cursor_name [(param_list)] IS (select_statement);
– Declare a reference cursor. (Reference cursors declared using the
following two methods are equivalent).
▪ Method 1:
cursor_name SYS_REFCURSOR;
▪ Method 2:
-- Define a REF CURSOR type (which is equivalent to the SYS_REFCURSOR type):
TYPE type_name IS REF CURSOR;
-- Declare a cursor of the REF CURSOR type:
cursor_name type_name;
● Parameter Description
– cursor_name
Specifies the name of the cursor that you are declaring.
– param_list
Specifies a list of parameters for an explicit cursor.
When declaring an explicit cursor, you can use the listed parameters in
the WHERE clause of the SELECT statement following the keyword IS.
This parameter is optional for explicit cursors. If an explicit cursor has
parameters defined, the actual parameter values must be transferred
when the cursor is opened.
– select_statement
Specifies the SELECT statement for declaring an explicit cursor.
When declaring an explicit cursor, specify the SELECT statement
following the keyword IS. The SELECT statement can be a table or view
query, or even a join query. It can concatenate WHERE, ORDER BY, and
GROUP BY clauses, but cannot concatenate the INTO clause. For details
about the SELECT statement, see SELECT.
– SYS_REFCURSOR
Specifies a system cursor, which is used to declare a reference cursor.
When cursor_name SYS_REFCURSOR is used to declare a cursor, no
SELECT statement is required. A declared cursor variable can be used as
an output parameter and bound to the SQL statement in the cursor
opening phase.
– type_name
Specifies the name of the REF CURSOR type that you are defining.
● Examples
-- Delete the existing test table.
DROP TABLE IF EXISTS test;
-- Create the test table.
CREATE TABLE test(a int,b int);
-- Declare a cursor.
DECLARE
TYPE type_name IS RECORD (
a INT,
b INT
);
CURSOR c1 IS SELECT * FROM test ORDER BY a; // Declare an explicit cursor.
c2 sys_refcursor; // Declare a reference cursor (method 1).
abc type_name;
TYPE tcur IS REF CURSOR;
cursor_k tcur; // Declare a reference cursor (method 2).
rec test%rowtype;
BEGIN
OPEN c2 FOR SELECT a FROM test ORDER BY a;
CLOSE c2;
OPEN c2 FOR SELECT a,b FROM test ORDER BY a;
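A cursor with param_list, as described above, can be sketched as follows (the test table from the preceding example is assumed; c3 and p_a are illustrative names):

```sql
DECLARE
    -- The parameter p_a is used in the WHERE clause of the SELECT statement.
    CURSOR c3(p_a INT) IS SELECT b FROM test WHERE a = p_a;
    v_b INT;
BEGIN
    OPEN c3(1);  -- The actual parameter value is passed when the cursor is opened.
    FETCH c3 INTO v_b;
    CLOSE c3;
END;
/
```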
● Parameter Description
– cursor_name
Specifies the name of the target cursor.
– variant_list
Specifies a variable list, which is used to store data fetched from a cursor.
– record_variant
Specifies a record variable, which is used to store data fetched from a
cursor.
● Examples (The statements that use a cursor are displayed in bold.)
DECLARE
c2 SYS_REFCURSOR;
abc test%rowtype;
BEGIN
OPEN c2 FOR SELECT a FROM test ORDER BY a;
CLOSE c2;
OPEN c2 FOR SELECT a,b FROM test ORDER BY a;
FETCH c2 INTO abc;
CLOSE c2;
DBMS_OUTPUT.PUT_LINE('result is ' || abc.a);
DBMS_OUTPUT.PUT_LINE('result is ' || abc.b);
END;
/
● Parameter Description
– index_name
Specifies a loop index.
– cursor_name
Specifies the name of a cursor for FOR LOOP.
– select_statement
Specifies an implicit cursor for traversing.
– statement
Specifies the loop body. This parameter cannot be empty.
● Examples
– Use an explicit cursor FOR LOOP (in bold).
DECLARE
CURSOR c1 IS
SELECT a,b FROM test ORDER BY a;
BEGIN
DELETE FROM test;
INSERT INTO test(a,b) VALUES(1,100);
INSERT INTO test(a,b) VALUES(1,100);
FOR item IN c1
LOOP
DBMS_OUTPUT.PUT_LINE('A = ' || item.a || ',B = ' || item.b);
DBMS_OUTPUT.PUT_LINE('CURSOR%ISOPEN is ' || c1%ISOPEN);
DBMS_OUTPUT.PUT_LINE('CURSOR%FOUND is ' || c1%FOUND);
DBMS_OUTPUT.PUT_LINE('CURSOR%NOTFOUND is ' || c1%NOTFOUND);
DBMS_OUTPUT.PUT_LINE('CURSOR%ROWCOUNT is ' || c1%ROWCOUNT);
END LOOP;
DBMS_OUTPUT.PUT_LINE('after for loop');
DBMS_OUTPUT.PUT_LINE('CURSOR%ISOPEN is ' || c1%ISOPEN);
END;
/
Cursor Attributes
Cursor attributes are classified into %ISOPEN, %FOUND, %NOTFOUND, and
%ROWCOUNT.
● %ISOPEN checks whether a cursor is open.
● %FOUND and %NOTFOUND check whether the last fetch is successful. Their
logic is opposite to each other.
● %ROWCOUNT returns the number of records read from a cursor.
For details about the attribute return values of explicit cursors, see Table 3-56. For
details about the attribute return values of implicit cursors, see Table 3-57.
Attribute Description
EXEC proc1;
EXEC proc2;
PL/SQL procedure successfully completed.
ResultSet #1
1
----------
1
Description
An anonymous block is a statement that can be directly executed. Specifically, it
will be immediately compiled and executed once defined. Anonymous blocks are
not stored in databases, and need to be defined each time they are used.
Precautions
Parameter names must be different from column names in tables because
anonymous blocks give parameters precedence. If the names are the same, the
column values cannot be obtained.
Syntax
DECLARE
[param-list]
BEGIN
statement;
END;
Parameters
● param-list
Specifies a list of parameters. Default parameter values can be used. The
format is as follows: variant_name data_type [ { := | DEFAULT }
default_expr ];
● statement
Specifies an anonymous block statement. This parameter cannot be empty.
Otherwise, an error will be reported.
Such a statement can contain basic statements, dynamic statements, control
statements, exception statements, other statements, functions, or stored
procedures. For details about basic statements, see Basic Statements. For
details about dynamic statements, see Dynamic Statements. For details
about control statements, see Control Statements. For details about other
statements, see Other Statements. For details about user-defined functions,
see User-defined Functions. For details about user-defined stored
procedures, see Creating a Stored Procedure.
Examples
Create an anonymous block.
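A minimal sketch of such a block, using a declared variable with a default value (the names are illustrative):

```sql
DECLARE
    v_msg VARCHAR2(32) := 'hello';
BEGIN
    DBMS_OUTPUT.PUT_LINE(v_msg || ', database!');
END;
/
```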
NOTICE
Stored procedures and functions are stored in the same system catalog. If a stored
procedure to be created has the same name as an existing user-defined function,
creating the stored procedure will fail. Therefore, before creating a stored
procedure, you need to delete the user-defined function with the same name.
NOTICE
In a statement for creating a stored procedure or anonymous block, the last slash
(/) indicates the end of the definition statement. It cannot be omitted and must
be placed on a separate line.
Table 3-58 lists the advanced packages provided by GaussDB 100. Common users
can use the DBMS_JOB package only after being granted the EXECUTE
permission. All users have permission to use the other advanced packages.
Package Description
Package Description
3.15.1 DBMS_LOB
Description
The DBMS_LOB package is used to process data of the LOB type.
Interfaces
● GETLENGTH
Description: Obtains the length information of the LOB type.
Interface:
DBMS_LOB.GETLENGTH (
lob_loc IN LOB)
RETURN INTEGER;
Parameters:
lob_loc: Specifies a LOB whose length is to be calculated.
Return value: length of the LOB
● SUBSTR
Description: Obtains the substring of the LOB type.
Interface:
DBMS_LOB.SUBSTR (
lob_loc IN LOB,
amount IN INTEGER,
offset IN INTEGER)
RETURN VARCHAR2;
Parameters:
– lob_loc: Specifies the LOB from which the substring is obtained.
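The two interfaces above can be sketched together as follows (the table t_lob and column c1 are illustrative names, not from the manual):

```sql
DROP TABLE IF EXISTS t_lob;
CREATE TABLE t_lob(c1 CLOB);
INSERT INTO t_lob VALUES('hello, database!');
COMMIT;
DECLARE
    v_len INTEGER;
    v_sub VARCHAR2(32);
BEGIN
    SELECT DBMS_LOB.GETLENGTH(c1) INTO v_len FROM t_lob;
    DBMS_OUTPUT.PUT_LINE('length is ' || v_len);   -- 16 characters
    -- Read 5 characters starting at offset 1.
    SELECT DBMS_LOB.SUBSTR(c1, 5, 1) INTO v_sub FROM t_lob;
    DBMS_OUTPUT.PUT_LINE('substr is ' || v_sub);   -- "hello"
END;
/
```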
3.15.2 DBMS_JOB
Description
The DBMS_JOB package is used to execute scheduled jobs.
Interfaces
● BROKEN
Description: Changes job status to blocked.
Interface:
DBMS_JOB.BROKEN (
job IN BINARY_BIGINT,
broken IN BOOLEAN,
next_date IN DATE DEFAULT SYSDATE);
Parameters:
job: ID of the job to be modified
broken: block flag. The value can be TRUE or FALSE.
next_date: next update time
● REMOVE
Description: Deletes a specified job.
Interface:
DBMS_JOB.REMOVE (
job IN BINARY_BIGINT );
Parameters:
job: ID of the job to be deleted
Note that you can use the REMOVE interface to remove an existing job. If the
job is ongoing, its current execution will not be affected, but it will not be
executed next time.
● RUN
Description: Runs a specified job.
Interface:
DBMS_JOB.RUN (
job IN BINARY_BIGINT,
force IN BOOLEAN DEFAULT FALSE);
Parameters:
job: ID of the job to be executed
force: This parameter is only for syntax compatibility, and does not take effect
currently.
● SUBMIT
Description: Creates a job.
Interface:
DBMS_JOB.SUBMIT (
job OUT BINARY_BIGINT,
what IN VARCHAR2,
next_date IN DATE DEFAULT sysdate,
interval IN VARCHAR2 DEFAULT 'null',
no_parse IN BOOLEAN DEFAULT FALSE,
instance IN BINARY_INTEGER DEFAULT 0,
force IN BOOLEAN DEFAULT FALSE);
Parameters:
– job: ID of the job to be created
– what: PL/SQL block involved in job execution
– next_date: next execution time
– interval: execution interval expression
– no_parse: If it is set to false, what and interval will be parsed during
creation. If it is set to true, the two will not be parsed during creation.
– instance: This parameter is only for syntax compatibility, and does not
take effect currently.
– force: This parameter is only for syntax compatibility, and does not take
effect currently.
Example 1
create table test_job(
id varchar2(30),
dt varchar2(30)
);
declare
jobno number;
begin
dbms_job.submit(jobno,'job_proce_t();', sysdate, 'sysdate+1/24/60');
commit;
end;
/
declare
jobno int;
begin
select job into jobno from user_jobs where what='job_proce_t();';
dbms_job.broken(jobno, true, sysdate);
commit;
end;
/
declare
jobno int;
begin
select job into jobno from user_jobs where what='job_proce_t();';
dbms_job.run(jobno);
commit;
end;
/
declare
jobno int;
begin
select job into jobno from user_jobs where what='job_proce_t();';
dbms_job.remove(jobno);
commit;
end;
/
Example 2
By default, the system creates a full statistics collection job and a data change
collection job when it starts.
CREATE OR REPLACE PROCEDURE GATHER_DB_STATS(
estimate_percent NUMBER DEFAULT 30,
force BOOLEAN DEFAULT TRUE
)
--force false: don't gather when cbo is disable
IS
cbo_enable VARCHAR(3);
BEGIN
--check cbo flag
IF force = FALSE THEN
SELECT VALUE INTO cbo_enable FROM SYS.DV_PARAMETERS WHERE NAME='CBO';
IF UPPER(cbo_enable) = 'OFF' THEN
RETURN;
END IF;
END IF;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
END LOOP;
END;
/
END;
/
DECLARE
JOBNO NUMBER;
BEGIN
DBMS_JOB.SUBMIT(JOBNO,'GATHER_DB_STATS(estimate_percent=>30, force=>FALSE);', TRUNC(SYSDATE
+1) + 1/24, 'TRUNC(sysdate+1) +1/24');
COMMIT;
END;
/
DECLARE
JOBNO NUMBER;
BEGIN
DBMS_JOB.SUBMIT(JOBNO,'GATHER_CHANGE_STATS(estimate_percent=>30, change_percent=>10,
force=>FALSE);', SYSDATE, 'SYSDATE+15/24/60');
COMMIT;
END;
/
3.15.3 DBMS_OUTPUT
Description
The DBMS_OUTPUT package is used to debug stored procedures and functions or
to display information in zsql.
To view the output of DBMS_OUTPUT in zsql, set serveroutput to ON. The default
parameter value is OFF.
SET serveroutput ON
Interfaces
● PUT_LINE
Description: Outputs characters.
Interface:
DBMS_OUTPUT.PUT_LINE(varchar2);
Examples
BEGIN
DBMS_OUTPUT.PUT ('hello, ');
DBMS_OUTPUT.PUT_LINE('database!');-- Outputs "hello, database!".
END;
/
3.15.4 DBMS_RAFT
Description
This package is used only in GS-Paxos clusters. If it is used in standalone scenarios,
the error GS-00133 "RAFT: raft is not enabled, or raft module is not inited" will be
reported.
Interfaces
● RAFT_ADD_MEMBER
Description: Adds a standby node.
● RAFT_DEL_MEMBER
Description: Deletes a standby node.
● RAFT_MONITOR_INFO
Description: Monitors node status. If node status is abnormal, an error will be
returned.
● RAFT_QUERY_INFO
Description: Queries for node status.
● RAFT_SET_PARAM
Description: Sets parameters.
● RAFT_VERSION
Description: Returns version information.
3.15.5 DBMS_RANDOM
Description
The DBMS_RANDOM package is a built-in random number generator provided by
GaussDB 100. It is used to generate random numbers and characters.
Interfaces
● STRING
Description: Generates a random string with the number of characters and
pattern specified.
Interface:
DBMS_RANDOM.STRING (
opt IN CHAR,
len IN INTEGER )
RETURN VARCHAR2;
Parameters:
– opt: Specifies the format of the returned string. By default, the string is
returned in uppercase letters.
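A minimal sketch of STRING, assuming an Oracle-compatible opt value ('U' for uppercase letters; verify the opt values supported by your version):

```sql
-- Generate a random 8-character uppercase string.
SELECT DBMS_RANDOM.STRING('U', 8) FROM SYS_DUMMY;
```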
● VALUE
Description: Generates a random number. If no range is specified, the random
number will be greater than or equal to 0 and less than 1. If a range is
specified, the random number will be greater than or equal to low and less
than high.
Interface:
– No range is specified.
DBMS_RANDOM.VALUE
RETURN NUMBER;
– A range is specified.
DBMS_RANDOM.VALUE(
low IN NUMBER,
high IN NUMBER)
RETURN NUMBER;
Parameters:
– low: Specifies the smallest number in a range from which to generate a
random number.
The number generated may be equal to low.
– high: Specifies the largest number below which to generate a random
number.
The number generated will be less than high.
Return value: A number will be returned.
Examples:
-- No range is specified.
SELECT DBMS_RANDOM.VALUE FROM SYS_DUMMY;
VALUE
----------------------------------------
.2515
1 rows fetched.
-- A range is specified.
SELECT DBMS_RANDOM.VALUE(100,200) FROM SYS_DUMMY;
DBMS_RANDOM.VALUE(100,200)
----------------------------------------
156.2419
1 rows fetched.
3.15.6 DBMS_SQL
Description
The DBMS_SQL package provided by GaussDB 100 is used to execute SQL
statements and return execution results.
Interfaces
DBMS_SQL.RETURN_RESULT
Description: Returns execution result sets.
Precautions:
● Currently, only SQL query results can be returned, and they are not allowed to
be returned through a remote procedure call (RPC).
● Once a statement is returned, it is accessible only to the client or the direct
caller that returns it.
● If a statement executed by a client or any recursively executed statement is a
SQL query and throws an error, the statement results cannot be returned.
● If an error is raised in executing a stored procedure after the procedure
returns results, the statement results cannot be returned.
● This interface can be used only for stored procedures and anonymous blocks.
Currently, RETURN_RESULT returns cursors only to clients, rather than upper-
layer callers.
● A maximum of 2000 cursors can be returned by a SQL statement at a time.
Interface:
DBMS_SQL.RETURN_RESULT(rc IN OUT SYS_REFCURSOR);
Examples:
EXEC proc_return(2);
PL/SQL procedure successfully completed.
ResultSet #1
1
------------
1
1 rows fetched.
ResultSet #2
2
------------
2
1 rows fetched.
ResultSet #3
3
------------
3
1 rows fetched.
3.15.7 DBMS_STANDARD
Description
The DBMS_STANDARD package is a standard package provided by GaussDB 100.
It is used for transaction management and exception handling.
Interfaces
● SQLCODE
Description: Obtains the current error code.
Interface:
SQLCODE
Parameters:
error_code: error code. The value is an integer.
Remarks: SQLCODE can be used only in a stored procedure body.
● SQLERRM
Description: Obtains the current error information.
Interface:
SQLERRM
Parameters:
message: error information. The value is a string consisting of up to 2048
bytes.
Remarks:
– SQLERRM can be used only in a stored procedure body.
– If there is no error, the return value is null.
● SLEEP
Description: Specifies sleep time for waiting (in seconds).
Interface:
SLEEP(SECONDS IN INTEGER);
Parameters:
SECONDS: sleep time
The value is an integer in the range [1, 2147483647].
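For example, to pause execution for two seconds inside an anonymous block (this sketch uses the unqualified name SLEEP; whether the qualified name DBMS_STANDARD.SLEEP is also accepted is an assumption):
BEGIN
  -- Wait for 2 seconds.
  SLEEP(2);
END;
/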
● RAISE_APPLICATION_ERROR
Description: Throws a specified exception (including error code and error
information).
Interface:
RAISE_APPLICATION_ERROR (error_code, message[, { TRUE | FALSE } ]);
Parameters:
– error_code: error code.
– message: error information.
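The interfaces above can be combined in an anonymous block. The following sketch assumes an Oracle-style EXCEPTION handler and that DBMS_OUTPUT.PUT_LINE is available for printing:
BEGIN
  -- Throw a custom exception (the error code -20001 is illustrative).
  RAISE_APPLICATION_ERROR(-20001, 'demo error');
EXCEPTION
  WHEN OTHERS THEN
    -- SQLCODE and SQLERRM report the current error code and message.
    DBMS_OUTPUT.PUT_LINE('code: ' || SQLCODE || ', message: ' || SQLERRM);
END;
/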
3.15.8 DBMS_STATS
Description
The DBMS_STATS package provided by GaussDB 100 is used for optimizing
statistics.
Interfaces
● DBMS_STATS.AUTO_SAMPLE_SIZE
Description: Returns the default sample size.
Interface:
DBMS_STATS.AUTO_SAMPLE_SIZE
● DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
Description: Flushes system monitoring information.
Interface:
DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO();
Permission control
The SYS user and DBA can invoke this interface. Users granted the ANALYZE
ANY system permission can also invoke this interface.
● DBMS_STATS.GATHER_TABLE_STATS
Description: Collects statistics about a specified table.
Interface:
DBMS_STATS.GATHER_TABLE_STATS (
ownname VARCHAR2,
tabname VARCHAR2,
partname VARCHAR2 DEFAULT NULL,
estimate_percent NUMBER DEFAULT 10,
block_sample BOOLEAN DEFAULT TRUE,
method_opt VARCHAR2 DEFAULT NULL,
degree NUMBER DEFAULT NULL,
granularity VARCHAR2 DEFAULT NULL,
cascade BOOLEAN DEFAULT NULL,
stattab VARCHAR2 DEFAULT NULL,
statid VARCHAR2 DEFAULT NULL,
statown VARCHAR2 DEFAULT NULL,
no_invalidate BOOLEAN DEFAULT NULL,
stattype VARCHAR2 DEFAULT NULL,
force BOOLEAN DEFAULT NULL);
Parameters:
– ownname: name of the user who will collect statistics
– tabname: name of the table whose statistics will be collected
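For example, to collect statistics about a table TEST owned by user GAUSSDBA with the default settings (the user and table names are illustrative):
EXEC DBMS_STATS.GATHER_TABLE_STATS('GAUSSDBA', 'TEST');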
● DBMS_STATS.PURGE_STATS
Description: Purges old versions of statistics saved in the dictionary before a
specified time.
Interface:
DBMS_STATS.PURGE_STATS(before_timestamp TIMESTAMP );
Parameters:
before_timestamp: versions of statistics saved before this timestamp are
purged.
Permission control
– The SYS user and DBA can collect and delete the statistics of all objects.
Common users can only collect and delete the statistics of their own tables.
– The ANALYZE ANY permission allows collecting statistics on the objects of
all users except the SYS user.
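For example, to purge all statistics versions saved before the current time (use of SYSTIMESTAMP as the argument is an assumption):
EXEC DBMS_STATS.PURGE_STATS(SYSTIMESTAMP);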
● DBMS_STATS.DELETE_SCHEMA_STATS
Description: Deletes statistics about a specified schema.
Interface:
DBMS_STATS.DELETE_SCHEMA_STATS (
ownname VARCHAR2,
stattab VARCHAR2 DEFAULT NULL,
statid VARCHAR2 DEFAULT NULL,
statown VARCHAR2 DEFAULT NULL,
no_invalidate BOOLEAN DEFAULT FALSE,
force BOOLEAN DEFAULT FALSE
);
Parameters:
ownname: username
Other parameters are optional. They are only for syntax compatibility and do
not take effect currently.
Permission control
– The SYS user and DBA can collect and delete the statistics of all objects.
Common users can only collect and delete the statistics of their own tables.
– The ANALYZE ANY permission allows collecting statistics on the objects of
all users except the SYS user.
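For example, to delete the statistics of the GAUSSDBA schema (the schema name is illustrative):
EXEC DBMS_STATS.DELETE_SCHEMA_STATS('GAUSSDBA');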
3.15.9 DBMS_UTILITY
Description
The DBMS_UTILITY package provided by GaussDB 100 is used for data type
processing and calculation.
Interfaces
DBMS_UTILITY.GET_TIME
Description: Returns time (in units of 10 ms), which is used to calculate the
time consumed by a program block. The type of the return value is uint64.
The time values are only meaningful as differences; a single value obtained on
its own is meaningless.
Interface:
DBMS_UTILITY.GET_TIME();
Parameters: none
Examples:
SELECT DBMS_UTILITY.GET_TIME() FROM SYS_DUMMY;
DBMS_UTILITY.GET_TIME()
-----------------------
156419546633
1 rows fetched.
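Because the values are in units of 10 ms, the time consumed by a block can be computed from the difference of two calls; a minimal sketch:
DECLARE
  t1 NUMBER;
  t2 NUMBER;
BEGIN
  t1 := DBMS_UTILITY.GET_TIME();
  -- ... statements to be measured ...
  t2 := DBMS_UTILITY.GET_TIME();
  -- (t2 - t1) * 10 is the elapsed time in milliseconds.
END;
/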
DBMS_UTILITY.COMPILE_SCHEMA
Description: Recompiles the objects in a specified schema.
Interface:
DBMS_UTILITY.COMPILE_SCHEMA(
schema IN VARCHAR2,
compile_all IN BOOLEAN DEFAULT TRUE,
reuse_settings IN BOOLEAN DEFAULT FALSE
);
Parameters:
schema: name of the schema to be compiled.
compile_all: specifies whether to compile all objects (TRUE) or only invalid
objects (FALSE).
reuse_settings: specifies whether the compilation settings of each object are
reused.
Note:
Common users can compile only their own objects, and users SYS and DBA can
recompile the objects in all schemas.
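For example, to recompile all objects in the GAUSSDBA schema (the schema name is illustrative):
EXEC DBMS_UTILITY.COMPILE_SCHEMA('GAUSSDBA');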
3.15.10 DBMS_DIAGNOSE
Description
The DBMS_DIAGNOSE package provides diagnosis functions, and access to these
functions is permission-controlled. Non-SYS users can access the functions only
after the EXECUTE ON DBMS_DIAGNOSE permission is granted.
Interfaces
● DBMS_DIAGNOSE.DBA_IND_POS
Syntax:
DBMS_DIAGNOSE.DBA_IND_POS('column_list','column_id')
Purpose: It is a diagnosis function and is used to return the 1-based position of
the column whose ID is column_id within column_list.
Examples:
SELECT DBMS_DIAGNOSE.DBA_IND_POS('2,3,1,0,4','4');
DBMS_DIAGNOSE.DBA_IND_POS('2,3,1,0,4','4')
-------------------------------------------
5
1 rows fetched.
● DBMS_DIAGNOSE.DBA_LISTCOLS
Syntax:
DBMS_DIAGNOSE.DBA_LISTCOLS(user_name,table_name|view_name, column_list)
Purpose: It is a diagnosis function and is used to return the names of the
columns whose IDs are listed in column_list.
Examples:
SELECT DBMS_DIAGNOSE.DBA_LISTCOLS('GAUSSDBA','TEST','0,1');
DBMS_DIAGNOSE.DBA_LISTCOLS('GAUSSDBA','TEST','0,1')
----------------------------------------------------------------
A, B
1 rows fetched.
● DBMS_DIAGNOSE.DBA_PARTITIONED_INDSIZE
Syntax:
DBMS_DIAGNOSE.DBA_PARTITIONED_INDSIZE(size_type,user_name,table_name,index_name)
Purpose: It is a diagnosis function and is used to return the size of the index
in the partitioned table.
Note:
– This function is a diagnosis function and cannot be directly invoked by
non-SYS users.
– size_type: size type
▪ 0: number of bytes
▪ 1: number of pages
▪ 2: number of extents
– user_name is a string. It must be an existing username in the database.
– table_name is a string. It must be an existing table name of the
corresponding user in the database. Note that the table must be a
partitioned table.
– index_name is a string. This parameter is optional. If this parameter is not
specified, the total size of all indexes in the partitioned table is returned.
– Only local indexes in the partitioned table are supported. The size of
global indexes is 0.
Examples:
Return the total size of indexes in the partitioned table.
-- Create a partitioned table.
create table test_part_t1(f1 int, f2 real, f3 number, f4 char(30), f5 varchar(30), f6 date, f7 timestamp)
PARTITION BY RANGE(f1)
(
PARTITION p1 values less than(10),
PARTITION p2 values less than(20),
PARTITION p3 values less than(30),
PARTITION p4 values less than(MAXVALUE)
);
-- Create indexes on the partitioned table.
create index idx_t1_1 on test_part_t1(f2,f3);
create index idx_t1_2 on test_part_t1(f4,f5) local;
-- Insert data into the partitioned table.
insert into test_part_t1 values(5, 15, 28, 'abcd', 'abcd', to_date('2018/01/24', 'YYYY/MM/DD'),
to_timestamp('2018-01-24 16:00:00.00', 'YYYY-MM-DD HH24:MI:SS.FF3'));
insert into test_part_t1 values(6, 16, 29, '16', '29', to_date('2018/01/24', 'YYYY/MM/DD'),
to_timestamp('2018-01-24 16:00:00.00', 'YYYY-MM-DD HH24:MI:SS.FF3'));
DBMS_DIAGNOSE.DBA_PARTITIONED_INDSIZE(0,'GAUSSDBA','TEST_PART_T1')
------------------------------------------------------------------
65536
1 rows fetched.
● DBMS_DIAGNOSE.DBA_PARTITIONED_LOBSIZE
Syntax:
DBMS_DIAGNOSE.DBA_PARTITIONED_LOBSIZE(size_type,user_name,table_name,column_id)
Purpose: It is a diagnosis function and is used to return the segment size of
the LOB column in the partitioned table.
Note:
– This function is a diagnosis function and cannot be directly invoked by
non-SYS users.
– size_type: size type
▪ 0: number of bytes
▪ 1: number of pages
▪ 2: number of extents
– user_name is a string. It must be an existing username in the database.
– table_name is a string. It must be an existing table name of the
corresponding user in the database. The table must be a partitioned
table.
– column_id is an integer. It is optional. If it is not specified, the total size
of all LOB columns in the partitioned table is returned. Its value must be
the ID of the column where the LOB column is located.
– By default, if the storage space consumed by LOB data does not exceed
4000 bytes (including the extra overhead of 12 to 14 bytes), the LOB data
adopts the inline mode and data is stored in the heap segment instead of
the LOB segment. Therefore, the result of this function does not contain
the LOB data stored in inline mode.
Examples:
Return the total size of the LOB column in the partitioned table.
-- Create a partitioned table.
create table test_part_t1(f1 int, f2 real, f3 number, f4 char(30), f5 varchar(30), f6 date, f7
timestamp,f8 clob)
PARTITION BY RANGE(f1)
(
PARTITION p1 values less than(10),
PARTITION p2 values less than(20),
PARTITION p3 values less than(30),
PARTITION p4 values less than(MAXVALUE)
);
-- Create indexes on the partitioned table.
create index idx_t1_1 on test_part_t1(f2,f3);
create index idx_t1_2 on test_part_t1(f4,f5) local;
-- Insert data into the partitioned table.
insert into test_part_t1 values(5, 15, 28, 'abcd', 'abcd', to_date('2018/01/24', 'YYYY/MM/DD'),
to_timestamp('2018-01-24 16:00:00.00', 'YYYY-MM-DD HH24:MI:SS.FF3'),'xxx');
insert into test_part_t1 values(6, 16, 29, '16', '29', to_date('2018/01/24', 'YYYY/MM/DD'),
to_timestamp('2018-01-24 16:00:00.00', 'YYYY-MM-DD HH24:MI:SS.FF3'),'yyy');
-- Query the total size of the LOB column in the partitioned table.
SELECT DBMS_DIAGNOSE.DBA_PARTITIONED_LOBSIZE(0,'GAUSSDBA','TEST_PART_T1');
DBMS_DIAGNOSE.DBA_PARTITIONED_LOBSIZE(0,'GAUSSDBA','TEST_PART_T1')
------------------------------------------------------------------
0
1 rows fetched.
● DBMS_DIAGNOSE.DBA_PARTITIONED_TABSIZE
Syntax:
DBMS_DIAGNOSE.DBA_PARTITIONED_TABSIZE(size_type,user_name,table_name)
Purpose: It is a diagnosis function and is used to return the size of the
partitioned table.
Note:
– This function is a diagnosis function and cannot be directly invoked by
non-SYS users.
– size_type: size type
▪ 0: number of bytes
▪ 1: number of pages
▪ 2: number of extents
– user_name is a string. It must be an existing username in the database.
– table_name is a string. It must be an existing table name of the
corresponding user in the database. The table must be a partitioned
table.
Examples:
Return the size of the partitioned table.
-- Create a partitioned table.
create table test_part_t1(f1 int, f2 real, f3 number, f4 char(30), f5 varchar(30), f6 date, f7 timestamp)
PARTITION BY RANGE(f1)
(
PARTITION p1 values less than(10),
PARTITION p2 values less than(20),
PARTITION p3 values less than(30),
PARTITION p4 values less than(MAXVALUE)
);
-- Create indexes on the partitioned table.
create index idx_t1_1 on test_part_t1(f2,f3);
create index idx_t1_2 on test_part_t1(f4,f5) local;
-- Insert data into the partitioned table.
insert into test_part_t1 values(5, 15, 28, 'abcd', 'abcd', to_date('2018/01/24', 'YYYY/MM/DD'),
to_timestamp('2018-01-24 16:00:00.00', 'YYYY-MM-DD HH24:MI:SS.FF3'));
insert into test_part_t1 values(6, 16, 29, '16', '29', to_date('2018/01/24', 'YYYY/MM/DD'),
to_timestamp('2018-01-24 16:00:00.00', 'YYYY-MM-DD HH24:MI:SS.FF3'));
-- Return the size of the partitioned table.
select DBMS_DIAGNOSE.DBA_PARTITIONED_TABSIZE(0,'GAUSSDBA','TEST_PART_T1');
DBMS_DIAGNOSE.DBA_PARTITIONED_TABSIZE(0,'GAUSSDBA','TEST_PART_T1')
-------------------------------------------------------------------
65536
1 rows fetched.
● DBMS_DIAGNOSE.DBA_SEGSIZE
Syntax:
DBMS_DIAGNOSE.DBA_SEGSIZE(size_type,table_entry)
Purpose: It is a diagnosis function and is used to return the size of an
ordinary table.
Note:
– This function is a diagnosis function and cannot be directly invoked by
non-SYS users.
– size_type: size type
▪ 0: number of bytes
▪ 1: number of pages
▪ 2: number of extents
– table_entry: entry of an ordinary table, which can be obtained by
querying the system catalog SYS_TABLES.
Examples:
Return the size of an ordinary table.
-- Create an ordinary table.
CREATE TABLE TEST(A INT, B INT);
-- Insert data.
INSERT INTO TEST VALUES(1,1);
-- Show the table size.
SELECT DBMS_DIAGNOSE.DBA_SEGSIZE(0, T.ENTRY) FROM SYS_TABLES T WHERE T.NAME = 'TEST';
DBMS_DIAGNOSE.DBA_SEGSIZE(0, T.ENTRY)
--------------------------------------
65536
1 rows fetched.
● DBMS_DIAGNOSE.DBA_SPACE_NAME
Syntax:
DBMS_DIAGNOSE.DBA_SPACE_NAME(space_id)
Purpose: It is a diagnosis function and is used to return the name of the
tablespace corresponding to the tablespace ID.
Note:
– This function is a diagnosis function and cannot be directly invoked by
non-SYS users.
– This function returns the name of the tablespace corresponding to the
tablespace ID. If the tablespace ID does not exist, an error is reported.
– space_id: tablespace ID
Examples:
Return the name of the tablespace corresponding to the tablespace ID.
SELECT DBMS_DIAGNOSE.DBA_SPACE_NAME(0);
DBMS_DIAGNOSE.DBA_SPACE_NAME(0)
----------------------------------------------------------------
SYSTEM
1 rows fetched.
● DBMS_DIAGNOSE.DBA_SPCSIZE
Syntax:
DBMS_DIAGNOSE.DBA_SPCSIZE(space_id,size_type_name)
Purpose: It is a diagnosis function and is used to return the tablespace size.
Note:
– This function is a diagnosis function and cannot be directly invoked by
non-SYS users.
Examples:
SELECT DBMS_DIAGNOSE.DBA_SPCSIZE(0,'TOTAL');
DBMS_DIAGNOSE.DBA_SPCSIZE(0,'TOTAL')
--------------------------------------
134217728
1 rows fetched.
● DBMS_DIAGNOSE.DBA_TABTYPE
Syntax:
DBMS_DIAGNOSE.DBA_TABTYPE(table_type_id)
Purpose: It is a diagnosis function and is used to return the table type name
corresponding to the table type ID.
Note:
– This function is a diagnosis function and cannot be directly invoked by
non-SYS users.
– If the table type ID does not exist and is not NULL, UNKNOWN_TYPE is
returned.
– table_type_id: table type ID
▪ 0: HEAP, ordinary table (heap table)
Examples:
SELECT DBMS_DIAGNOSE.DBA_TABTYPE(3);
DBMS_DIAGNOSE.DBA_TABTYPE(3)
----------------------------------------------------------------
SESSION_TEMP
1 rows fetched.
● DBMS_DIAGNOSE.DBA_USER_NAME
Syntax:
DBMS_DIAGNOSE.DBA_USER_NAME(user_id)
Purpose: It is a diagnosis function and is used to return the username
corresponding to the user ID.
Examples:
SELECT DBMS_DIAGNOSE.DBA_USER_NAME(1);
DBMS_DIAGNOSE.DBA_USER_NAME(1)
----------------------------------------------------------------
PUBLIC
1 rows fetched.
Precautions
● If the name of a UDF is the same as that of a system function, the database
preferentially invokes the system function. To make the UDF preferential,
configure it in the $GSDB_DATA/cfg/udf.ini file in the format of
user_name.function_name. Only one UDF can be written in a line, and no
comments are allowed. The UDF name is case-sensitive during the
configuration in udf.ini. The configuration takes effect only after the database
is restarted.
● If a compilation error occurs in the UDF specified in udf.ini, GaussDB 100
determines that the UDF cannot be matched and tries to use the system
function. In this case, the compilation error information of the UDF will be
overwritten.
● The permission for the udf.ini file must be limited to users in the database
user group dbgrp. The permission is 600.
● When running a UDF without parameters, you can directly specify the UDF
name without parentheses.
● Exercise caution when using statements that affect transaction commit or
rollback in a UDF body. If such statements exist and the UDF is contained in a
DML operation, the error GS-00973 will be reported, indicating that the
operation is not executed. However, such statements can be invoked normally
in stored procedures or anonymous blocks.
Syntax
CREATE [OR REPLACE] [IF [NOT] EXISTS] FUNCTION [schema_name.]function_name [(args_list)] RETURN
data_type
{ IS | AS }
[param-list]
BEGIN
statement;
RETURN expression;
...
END;
Parameter Description
● schema_name
Specifies the owner of the function.
● function_name
Specifies the name of the function.
● args_list
Specifies a list of input parameters. Default parameter values can be used.
● data_type
Specifies the data type of return values.
● param-list
Specifies a list of variables for declaration. Default variable values can be
used. The format is as follows: variant_name data_type [:= default_expr];
● statement
Specifies the statement where the function appears.
You can use basic, dynamic, control, exception, or other statements. For
details about basic statements, see Basic Statements. For details about
dynamic statements, see Dynamic Statements. For details about control
statements, see Control Statements. For details about other statements, see
Other Statements.
● expression
Specifies an expression for return values. The value can be a common variable
or expression. Subquery results cannot be directly returned.
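Putting the syntax together, a minimal UDF might look like the following sketch (the function name, parameter, and body are illustrative):
-- An illustrative function that adds one to its argument.
CREATE OR REPLACE FUNCTION add_one(i INT) RETURN INT
AS
BEGIN
  RETURN i + 1;
END;
/
-- Invoke the function.
SELECT add_one(1) FROM SYS_DUMMY;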
Examples
● Create a UDF ztest_f1.
NOTICE
Stored procedures and functions are stored in the same system catalog. If the
UDF to be created has the same name as an existing stored procedure,
creating the UDF will fail. Therefore, before creating a UDF, you need to
delete the stored procedure with the same name.
NOTICE
In a statement for creating a UDF, the last slash (/) is used to indicate the end
of the definition statement. It cannot be omitted and must be placed in a
different line.
ID
------------
1
1 rows fetched.
-- The outer SQL statement for the UDF USER_FUNC_QUERY_TEMP is an INSERT statement. The
table used by the outer SQL statement is the same as that used by the SQL statement in the UDF. The
execution results of the INSERT statement are returned.
INSERT INTO USER_FUNC_TEMP VALUES(USER_FUNC_QUERY_TEMP(1));
1 rows affected.
-- The outer SQL statement of the UDF USER_FUNC_QUERY_TEMP is an UPDATE statement. The
table used by the outer SQL statement is the same as that used by the SQL in the UDF. The error
message "GS-00927: The trigger or user-defined function used by a SQL statement which is adjusting
a table %s.%s did not find the table." is returned.
UPDATE USER_FUNC_TEMP SET ID=100 WHERE ID = USER_FUNC_QUERY_TEMP(1);
GS-00927, [7:1]The trigger or user-defined function used by a SQL statement which is adjusting a
table %s.%s did not find the table..
-- The outer SQL statement for the UDF USER_FUNC_QUERY_TEMP is a DELETE statement. The table
used by the outer SQL statement is the same as that used by the SQL statement in the UDF. The error
message "GS-00927: The trigger or user-defined function used by a SQL statement which is adjusting
a table %s.%s did not find the table." is returned.
DELETE FROM USER_FUNC_TEMP WHERE ID = USER_FUNC_QUERY_TEMP(1);
GS-00927, [7:1]The trigger or user-defined function used by a SQL statement which is adjusting a
table %s.%s did not find the table..
3.17 Triggers
A trigger is a special type of stored procedure that is triggered by a specified
event. It is generally used for auditing and backing up data.
3.17.1 Examples
This example demonstrates the entire process of using a trigger, including creating
and deleting the trigger.
Statements
-- Delete a table T_TRIG, if any:
DROP TABLE IF EXISTS T_TRIG;
-- Delete a table T_TRIG_LOG, if any:
DROP TABLE IF EXISTS T_TRIG_LOG;
-- Delete a sequence TRIG_LOG_SEQ, if any:
DROP SEQUENCE IF EXISTS TRIG_LOG_SEQ;
-- Delete a trigger TRIG_AFTER_INSERT, if any:
DROP TRIGGER IF EXISTS TRIG_AFTER_INSERT;
-- Create table 1:
CREATE TABLE T_TRIG_1 (F_INT1 INT, F_INT2 INT, F_CHAR1 CHAR(16), F_DATE DATE);
Succeed.
-- Insert a piece of data to table 1:
INSERT INTO T_TRIG_1 VALUES(1,2,'A','2017-12-11 14:08:00');
1 rows affected.
-- Create a row trigger of Table 1, which is triggered each time a statement is inserted into Table 1, and a
statement for updating Table 1 exists in the trigger.
CREATE OR REPLACE TRIGGER TEST_TRIG AFTER INSERT ON T_TRIG_1
FOR EACH ROW
BEGIN
UPDATE T_TRIG_1 SET F_INT1 = 1;
END;
/
Succeed.
-- Insert a piece of data to table 1, which fires the trigger to update the same table and causes an error:
INSERT INTO T_TRIG_1 VALUES(1,2,'A','2017-12-11 14:08:00');
NOTICE
In a statement for creating a trigger, the last slash (/) is used to indicate the end
of the definition statement. It cannot be omitted and must be placed in a different
line.
-- Create the TRIG_AFTER_INSERT trigger. After a record is inserted into the T_TRIG table, a record with
description "after insert" will be written into the T_TRIG_LOG table.
CREATE OR REPLACE TRIGGER TRIG_AFTER_INSERT AFTER INSERT ON T_TRIG
BEGIN
INSERT INTO T_TRIG_LOG VALUES(TRIG_LOG_SEQ.NEXTVAL,'after insert',systimestamp);
END;
/
-- Create the TRIG_BEFORE_INSERT trigger. Before a record is inserted into the T_TRIG table, a record with
description "before insert" will be written into the T_TRIG_LOG table.
CREATE OR REPLACE TRIGGER TRIG_BEFORE_INSERT BEFORE INSERT ON T_TRIG
BEGIN
INSERT INTO T_TRIG_LOG VALUES(TRIG_LOG_SEQ.NEXTVAL,'before insert',systimestamp);
END;
/
-- Insert a record into the T_TRIG table:
INSERT INTO T_TRIG VALUES (1,systimestamp);
-- Query the T_TRIG table:
SELECT * FROM T_TRIG;
ID CREATE_DATE
------------ --------------------------------
1 2018-09-11 15:59:36.970759
1 rows fetched.
-- Query the T_TRIG_LOG table:
SELECT * FROM T_TRIG_LOG;
2 rows fetched.
-- Delete a trigger:
DROP TRIGGER IF EXISTS TRIG_AFTER_INSERT;
Description
Create a trigger.
Precautions
● OF column_name is supported only in row triggers, and the column data
type cannot be LOB.
● DDL and DCL operations are not allowed inside triggers. Common users
cannot create objects for system users.
● If a row trigger uses an insert operation (BEFORE | AFTER INSERT), an update
operation (BEFORE | AFTER UPDATE), or a delete operation (BEFORE | AFTER
DELETE) as the trigger event, that same operation on the table on which the
trigger is created is not allowed inside the trigger.
● A maximum of eight triggers can be created on a table.
● Triggers cannot be created on local temporary tables.
● Triggers of a common user cannot be created on the table of user SYS.
Syntax
CREATE [ OR REPLACE ] TRIGGER [ schema_name. ]trigger_name
{ BEFORE | AFTER } { DELETE | INSERT | UPDATE [ OF column_name[,...] ] } [ OR ... ] ON table_name
[FOR EACH ROW]
[ param_list ]
BEGIN
statements;
END;
Parameter Description
● OR REPLACE
Replaces a trigger if it already exists.
● schema_name
Specifies the owner of the trigger to be created.
● trigger_name
Specifies the name of the trigger to be created.
● { BEFORE | AFTER }
Specifies the timing of a trigger. BEFORE indicates that the trigger runs before
the specified database operation, and AFTER indicates that the trigger runs
after the specified database operation.
● { DELETE | INSERT | UPDATE [ OF column_name[,...] ] } [ OR ... ]
Specifies a trigger event, that is, an operation upon which a trigger fires.
– DELETE: The trigger fires when there is a delete operation in the
database.
– INSERT: The trigger fires when there is an insert operation in the
database.
– UPDATE: The trigger fires when there is an update operation in the
database.
– MERGE: The trigger fires when there is an update or insert operation
from other data sources.
– [ OR ... ] indicates that multiple trigger events specified are connected by
OR. For example, INSERT OR DELETE indicates that the trigger event is
an insert or delete operation.
● table_name
Specifies the table on which a trigger will be created.
● [FOR EACH ROW]
Specifies a row trigger. If FOR EACH ROW is not specified, a statement trigger
will be created.
● param_list
Declares a list of parameters. For details about declaration syntax, see
DECLARE Syntax.
● statements
Specifies statements inside a trigger. You are not allowed to leave this
parameter empty because an error will be reported if it is empty. You can use
basic, dynamic, control, exception, or other statements. For details about basic
statements, see Basic Statements; dynamic statements, see Dynamic
Statements; control statements, see Control Statements; other statements,
see Other Statements; user-defined functions, see User-defined Functions;
and stored procedures, see Creating a Stored Procedure.
Examples
-- Delete a table T_TRIG, if any:
DROP TABLE IF EXISTS T_TRIG;
-- Delete a table T_TRIG_LOG, if any:
DROP TABLE IF EXISTS T_TRIG_LOG;
-- Delete a sequence TRIG_LOG_SEQ, if any:
DROP SEQUENCE IF EXISTS TRIG_LOG_SEQ;
-- Delete a trigger TRIG_AFTER_INSERT, if any:
DROP TRIGGER IF EXISTS TRIG_AFTER_INSERT;
-- Delete a trigger TRIG_BEFORE_INSERT, if any:
DROP TRIGGER IF EXISTS TRIG_BEFORE_INSERT;
-- Create the T_TRIG table:
CREATE TABLE T_TRIG (ID INT, CREATE_DATE TIMESTAMP);
NOTICE
In a statement for creating a trigger, the last slash (/) is used to indicate the end
of the definition statement. It cannot be omitted and must be placed in a different
line.
-- Create the TRIG_AFTER_INSERT trigger. After a record is inserted into the T_TRIG table, a record with
description "after insert" will be written into the T_TRIG_LOG table.
CREATE OR REPLACE TRIGGER TRIG_AFTER_INSERT AFTER INSERT ON T_TRIG
BEGIN
INSERT INTO T_TRIG_LOG VALUES(TRIG_LOG_SEQ.NEXTVAL,'after insert',systimestamp);
END;
/
-- Create the TRIG_BEFORE_INSERT trigger. Before a record is inserted into the T_TRIG table, a record with
description "before insert" will be written into the T_TRIG_LOG table.
CREATE OR REPLACE TRIGGER TRIG_BEFORE_INSERT BEFORE INSERT ON T_TRIG
BEGIN
INSERT INTO T_TRIG_LOG VALUES(TRIG_LOG_SEQ.NEXTVAL,'before insert',systimestamp);
END;
/
-- Insert a record into the T_TRIG table:
INSERT INTO T_TRIG VALUES (1,systimestamp);
-- Query the T_TRIG table:
SELECT * FROM T_TRIG;
ID CREATE_DATE
------------ --------------------------------
1 2018-09-11 15:59:36.970759
1 rows fetched.
-- Query the T_TRIG_LOG table:
SELECT * FROM T_TRIG_LOG;
2 rows fetched.
Description
Delete a trigger.
Precautions
If the trigger to be deleted exists, the IF EXISTS keyword is optional. If you are
not sure whether the trigger exists, DROP TRIGGER IF EXISTS trigger_name; is
recommended, which prevents an error from being returned when the trigger
does not exist. Common users cannot delete the objects of system users.
Syntax
DROP TRIGGER [ IF EXISTS ] [ schema_name. ]trigger_name;
Parameters
● IF EXISTS
Indicates that no error will be reported and the delete operation will be
displayed as successful if the trigger to be deleted does not exist.
● schema_name
Specifies the owner of the trigger to be deleted.
● trigger_name
Specifies the name of the trigger to be deleted.
Examples
-- Delete a trigger:
DROP TRIGGER IF EXISTS TRIG_BEFORE_INSERT1;
Description
Modify a trigger.
Syntax
ALTER TRIGGER [ schema_name. ]trigger_name { ENABLE | DISABLE };
Parameters
● schema_name
Specifies the owner of the trigger to be modified.
● trigger_name
Specifies the name of the trigger to be modified.
● ENABLE
The trigger takes effect.
● DISABLE
The trigger does not take effect.
Examples
-- Enable the trigger:
ALTER TRIGGER trigger_name ENABLE;
SYS_BACKUP_SETS BACKUP_SET$
SYS_COLUMNS COLUMN$
SYS_COMMENTS COMMENT$
SYS_CONSTRAINT_DEFS CONSDEF$
SYS_DATA_NODES DATA_NODES$
EXP_TAB_ORDERS DBA_EXP$TBL_ORDER
EXP_TAB_RELATIONS DBA_EXP$TBL_RELATIONS
SYS_DEPENDENCIES DEPENDENCY$
SYS_DISTRIBUTE_RULES DISTRIBUTE_RULE$
SYS_DISTRIBUTE_STRATEGIES DISTRIBUTE_STRATEGY$
SYS_DUMMY DUAL
SYS_EXTERNAL_TABLES EXTERNAL$
SYS_GARBAGE_SEGMENTS GARBAGE_SEGMENT$
SYS_HISTGRAM_ABSTR HIST_HEAD$
SYS_HISTGRAM HISTGRAM$
SYS_INDEXES INDEX$
SYS_INDEX_PARTS INDEXPART$
SYS_JOBS JOB$
SYS_LINKS LINK$
SYS_LOBS LOB$
SYS_LOB_PARTS LOBPART$
SYS_LOGIC_REPL LOGIC_REP$
SYS_DML_STATS MON_MODS_ALL$
SYS_OBJECT_PRIVS OBJECT_PRIVS$
SYS_PART_COLUMNS PARTCOLUMN$
SYS_PART_OBJECTS PARTOBJECT$
SYS_PART_STORES PARTSTORE$
SYS_PENDING_DIST_TRANS PENDING_DISTRIBUTED_TRANS$
SYS_PENDING_TRANS PENDING_TRANS$
SYS_PROCS PROC$
SYS_PROC_ARGS PROC_ARGS$
SYS_PROFILE PROFILE$
SYS_RECYCLEBIN RECYCLEBIN$
SYS_ROLES ROLES$
SYS_SEQUENCES SEQUENCE$
SYS_SHADOW_INDEXES SHADOW_INDEX$
SYS_SHADOW_INDEX_PARTS SHADOW_INDEXPART$
SYS_SYNONYMS SYNONYM$
SYS_PRIVS SYS_PRIVS$
SYS_TABLES TABLE$
SYS_TABLE_PARTS TABLEPART$
SYS_TMP_SEG_STATS TMP_SEG_STAT$
SYS_USERS USER$
SYS_USER_HISTORY USER_HISTORY$
SYS_USER_ROLES USER_ROLES$
SYS_VIEWS VIEW$
SYS_VIEW_COLS VIEWCOL$
SYS_SQL_MAPS SQL_MAP$
WSR_PARAMETER WRH$_PARAMETER
WSR_SQLAREA WRH$_SQLAREA
WSR_SYS_STAT WRH$_SYSSTAT
WSR_SYSTEM WRH$_SYSTEM
WSR_SYSTEM_EVENT WRH$_SYSTEM_EVENT
WSR_SNAPSHOT WRM$_SNAPSHOT
WSR_CONTROL WRM$_WR_CONTROL
WSR_DBA_SEGMENTS WSR$_DBA_SEGMENTS
WSR_LATCH WSR$_LATCH
WSR_LIBRARYCACHE WSR$_LIBRARYCACHE
WSR_SEGMENT WSR$_SEGMENT
WSR_SQL_LIST WSR$SQL_LIST
WSR_WAITSTAT WSR$_WAITSTAT
DB_DB_LINKS ALL_DB_LINKS
DB_IND_STATISTICS ALL_IND_STATISTICS
DB_JOBS ALL_JOBS
DB_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS
DB_USERS ALL_USERS
DB_USER_SYS_PRIVS ALL_USER_SYS_PRIVS
ADM_ARGUMENTS DBA_ARGUMENTS
ADM_BACKUP_SET DBA_BACKUP_SET
ADM_COL_COMMENTS DBA_COL_COMMENTS
ADM_CONSTRAINTS DBA_CONSTRAINTS
ADM_DATA_FILES DBA_DATA_FILES
ADM_DBLINK_TABLES DBA_DBLINK_TABLES
ADM_DBLINK_TAB_COLUMNS DBA_DBLINK_TAB_COLUMNS
ADM_DEPENDENCIES DBA_DEPENDENCIES
ADM_FREE_SPACE DBA_FREE_SPACE
ADM_HISTOGRAMS DBA_HISTOGRAMS
ADM_HIST_DBASEGMENTS DBA_HIST_DBASEGMENTS
ADM_HIST_LATCH DBA_HIST_LATCH
ADM_HIST_LIBRARYCACHE DBA_HIST_LIBRARYCACHE
ADM_HIST_LONGSQL DBA_HIST_LONGSQL
ADM_HIST_PARAMETER DBA_HIST_PARAMETER
ADM_HIST_SEGMENT DBA_HIST_SEGMENT
ADM_HIST_SNAPSHOT DBA_HIST_SNAPSHOT
ADM_HIST_SQLAREA DBA_HIST_SQLAREA
ADM_HIST_SYSSTAT DBA_HIST_SYSSTAT
ADM_HIST_SYSTEM DBA_HIST_SYSTEM
ADM_HIST_SYSTEM_EVENT DBA_HIST_SYSTEM_EVENT
ADM_HIST_WAITSTAT DBA_HIST_WAITSTAT
ADM_HIST_WR_CONTROL DBA_HIST_WR_CONTROL
ADM_INDEXES DBA_INDEXES
ADM_IND_COLUMNS DBA_IND_COLUMNS
ADM_IND_PARTITIONS DBA_IND_PARTITIONS
ADM_IND_STATISTICS DBA_IND_STATISTICS
ADM_JOBS DBA_JOBS
ADM_JOBS_RUNNING DBA_JOBS_RUNNING
ADM_OBJECTS DBA_OBJECTS
ADM_PART_COL_STATISTICS DBA_PART_COL_STATISTICS
ADM_PART_KEY_COLUMNS DBA_PART_KEY_COLUMNS
ADM_PART_STORE DBA_PART_STORE
ADM_PART_TABLES DBA_PART_TABLES
ADM_PROCEDURES DBA_PROCEDURES
ADM_PROFILES DBA_PROFILES
ADM_ROLES DBA_ROLES
ADM_ROLE_PRIVS DBA_ROLE_PRIVS
ADM_SEGMENTS DBA_SEGMENTS
ADM_SEQUENCES DBA_SEQUENCES
ADM_SOURCE DBA_SOURCE
ADM_SYNONYMS DBA_SYNONYMS
ADM_SYS_PRIVS DBA_SYS_PRIVS
ADM_TABLES DBA_TABLES
ADM_TABLESPACES DBA_TABLESPACES
ADM_TAB_COLS DBA_TAB_COLS
ADM_TAB_COLUMNS DBA_TAB_COLUMNS
ADM_TAB_COL_STATISTICS DBA_TAB_COL_STATISTICS
ADM_TAB_COMMENTS DBA_TAB_COMMENTS
ADM_TAB_DISTRIBUTE DBA_TAB_DISTRIBUTE
ADM_TAB_MODIFICATIONS DBA_TAB_MODIFICATIONS
ADM_TAB_PARTITIONS DBA_TAB_PARTITIONS
ADM_TAB_PRIVS DBA_TAB_PRIVS
ADM_TAB_STATISTICS DBA_TAB_STATISTICS
ADM_TRIGGERS DBA_TRIGGERS
ADM_USERS DBA_USERS
ADM_VIEWS DBA_VIEWS
ADM_VIEW_COLUMNS DBA_VIEW_COLUMNS
DB_ARGUMENTS ALL_ARGUMENTS
DB_COL_COMMENTS ALL_COL_COMMENTS
DB_CONSTRAINTS ALL_CONSTRAINTS
DB_DBLINK_TABLES ALL_DBLINK_TABLES
DB_DBLINK_TAB_COLUMNS ALL_DBLINK_TAB_COLUMNS
DB_DEPENDENCIES ALL_DEPENDENCIES
DB_DISTRIBUTE_RULES ALL_DISTRIBUTE_RULES
DB_DIST_RULE_COLS ALL_DIST_RULE_COLS
DB_HISTOGRAMS ALL_HISTOGRAMS
DB_INDEXES ALL_INDEXES
DB_IND_COLUMNS ALL_IND_COLUMNS
DB_IND_PARTITIONS ALL_IND_PARTITIONS
DB_OBJECTS ALL_OBJECTS
DB_PART_COL_STATISTICS ALL_PART_COL_STATISTICS
DB_PART_KEY_COLUMNS ALL_PART_KEY_COLUMNS
DB_PART_STORE ALL_PART_STORE
DB_PART_TABLES ALL_PART_TABLES
DB_PROCEDURES ALL_PROCEDURES
DB_SEQUENCES ALL_SEQUENCES
DB_SOURCE ALL_SOURCE
DB_SYNONYMS ALL_SYNONYMS
DB_TABLES ALL_TABLES
DB_TAB_COLS ALL_TAB_COLS
DB_TAB_COLUMNS ALL_TAB_COLUMNS
DB_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS
DB_TAB_COMMENTS ALL_TAB_COMMENTS
DB_TAB_DISTRIBUTE ALL_TAB_DISTRIBUTE
DB_TAB_PARTITIONS ALL_TAB_PARTITIONS
DB_TAB_STATISTICS ALL_TAB_STATISTICS
DB_TRIGGERS ALL_TRIGGERS
DB_VIEWS ALL_VIEWS
DB_VIEW_COLUMNS ALL_VIEW_COLUMNS
ROLE_SYS_PRIVS ROLE_SYS_PRIVS
MY_ARGUMENTS USER_ARGUMENTS
MY_COL_COMMENTS USER_COL_COMMENTS
MY_CONSTRAINTS USER_CONSTRAINTS
MY_CONS_COLUMNS USER_CONS_COLUMNS
MY_DEPENDENCIES USER_DEPENDENCIES
MY_FREE_SPACE USER_FREE_SPACE
MY_HISTOGRAMS USER_HISTOGRAMS
MY_INDEXES USER_INDEXES
MY_IND_COLUMNS USER_IND_COLUMNS
MY_IND_PARTITIONS USER_IND_PARTITIONS
MY_IND_STATISTICS USER_IND_STATISTICS
MY_JOBS USER_JOBS
MY_OBJECTS USER_OBJECTS
MY_PART_COL_STATISTICS USER_PART_COL_STATISTICS
MY_PART_KEY_COLUMNS USER_PART_KEY_COLUMNS
MY_PART_STORE USER_PART_STORE
MY_PART_TABLES USER_PART_TABLES
MY_PROCEDURES USER_PROCEDURES
MY_ROLE_PRIVS USER_ROLE_PRIVS
MY_SEGMENTS USER_SEGMENTS
MY_SEQUENCES USER_SEQUENCES
MY_SOURCE USER_SOURCE
MY_SQL_MAPS USER_SQL_MAPS
MY_SYNONYMS USER_SYNONYMS
MY_SYS_PRIVS USER_SYS_PRIVS
MY_TABLES USER_TABLES
MY_TAB_COLS USER_TAB_COLS
MY_TAB_COLUMNS USER_TAB_COLUMNS
MY_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS
MY_TAB_COMMENTS USER_TAB_COMMENTS
MY_TAB_DISTRIBUTE USER_TAB_DISTRIBUTE
MY_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS
MY_TAB_PARTITIONS USER_TAB_PARTITIONS
MY_TAB_PRIVS USER_TAB_PRIVS
MY_TAB_STATISTICS USER_TAB_STATISTICS
MY_TRIGGERS USER_TRIGGERS
MY_USERS USER_USERS
MY_VIEWS USER_VIEWS
MY_VIEW_COLUMNS USER_VIEW_COLUMNS
NLS_SESSION_PARAMETERS NLS_SESSION_PARAMETERS
DV_ALL_TRANS V$ALL_TRANSACTION
DV_ARCHIVED_LOGS V$ARCHIVED_LOG
DV_ARCHIVE_DEST_STATUS V$ARCHIVE_DEST_STATUS
DV_ARCHIVE_GAPS V$ARCHIVE_GAP
DV_ARCHIVE_THREADS V$ARCHIVE_PROCESSES
DV_BACKUP_PROCESSES V$BACKUP_PROCESS
DV_BUFFER_POOLS V$BUFFER_POOL
DV_BUFFER_POOL_STATS V$BUFFER_POOL_STATISTICS
DV_CONTROL_FILES V$CONTROLFILE
DV_DATABASE V$DATABASE
DV_DATA_FILES V$DATAFILE
DV_OBJECT_CACHE V$DB_OBJECT_CACHE
DV_DC_POOLS V$DC_POOL
DV_DYNAMIC_VIEWS V$DYNAMIC_VIEW
DV_DYNAMIC_VIEW_COLS V$DYNAMIC_VIEW_COLUMN
DV_FREE_SPACE V$FREE_SPACE
DV_HA_SYNC_INFO V$HA_SYNC_INFO
DV_HBA V$HBA
DV_INSTANCE V$INSTANCE
DV_RUNNING_JOBS V$JOBS_RUNNING
DV_LATCHS V$LATCH
DV_LIBRARY_CACHE V$LIBRARYCACHE
DV_LOCKS V$LOCK
DV_LOCKED_OBJECTS V$LOCKED_OBJECT
DV_LOG_FILES V$LOGFILE
DV_LONG_SQL V$LONGSQL
DV_STANDBYS V$MANAGED_STANDBY
DV_ME V$ME
DV_OPEN_CURSORS V$OPEN_CURSOR
DV_PARAMETERS V$PARAMETER
DV_PL_MANAGER V$PL_MANAGER
DV_PL_REFSQLS V$PL_REFSQLS
DV_REACTOR_POOLS V$REACTOR_POOL
DV_REPL_STATUS V$REPL_STATUS
DV_RESOURCE_MAP V$RESOURCE_MAP
DV_SEGMENT_STATS V$SEGMENT_STATISTICS
DV_SESSIONS V$SESSION
DV_SESSION_EVENTS V$SESSION_EVENT
DV_SESSION_WAITS V$SESSION_WAIT
DV_GMA V$SGA
DV_GMA_STATS V$SGASTAT
DV_SPINLOCKS V$SPINLOCK
DV_SQLS V$SQLAREA
DV_SQL_POOL V$SQLPOOL
DV_SYS_STATS V$SYSSTAT
DV_SYSTEM V$SYSTEM
DV_SYS_EVENTS V$SYSTEM_EVENT
DV_TABLESPACES V$TABLESPACE
DV_TEMP_POOLS V$TEMP_POOL
DV_TEMP_UNDO_SEGMENT V$TEMP_UNDO_SEGMENT
DV_TRANSACTIONS V$TRANSACTION
DV_UNDO_SEGMENTS V$UNDO_SEGMENT
DV_USER_ADVISORY_LOCKS V$USER_ADVISORY_LOCKS
DV_USER_ASTATUS_MAP V$USER_ASTATUS_MAP
DV_USER_PARAMETERS V$USER_PARAMETER
DV_VERSION V$VERSION
DV_VM_FUNC_STACK V$VM_FUNC_STACK
DV_WAIT_STATS V$WAITSTAT
DV_XACT_LOCKS V$XACT_LOCK
JOB_THREADS JOB_QUEUE_PROCESSES
COMMIT_MODE COMMIT_LOGGING
COMMIT_WAIT_LOGGING COMMIT_WAIT
PAGE_CHECKSUM DB_BLOCK_CHECKSUM
ARCHIVE_CONFIG LOG_ARCHIVE_CONFIG
ARCHIVE_DEST_N LOG_ARCHIVE_DEST_n
ARCHIVE_DEST_STATE_N LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_FORMAT LOG_ARCHIVE_FORMAT
ARCHIVE_MAX_THREADS LOG_ARCHIVE_MAX_PROCESSES
ARCHIVE_MIN_SUCCEED_DEST LOG_ARCHIVE_MIN_SUCCEED_DEST
ARCHIVE_TRACE LOG_ARCHIVE_TRACE
CHECKPOINT_PERIOD CHECKPOINT_TIMEOUT
CHECKPOINT_PAGES CHECKPOINT_INTERVAL
TIMED_STATS TIMED_STATISTICS
STATS_LEVEL STATISTICS_LEVEL
FILE_OPTIONS FILESYSTEMIO_OPTIONS
5 Glossary
Term Description
A–E
backup A backup, or the process of backing up, refers to the copying and
archiving of computer data. Backup data can be used for
restoration in case of data loss.
CLI Command-line interface (CLI). Users interact with applications through the CLI. Its input and output are text-based: commands are entered through a keyboard or similar device and are interpreted and executed by the application, and the results are displayed in text or graphic form on the terminal interface.
core dump When a program terminates abnormally, a core dump (also called a memory dump or system dump) records the state of the program's working memory at that point in time. Other key pieces of program state are usually dumped at the same time, including the processor registers (such as the program counter and stack pointer), memory management information, and other processor and OS flags. A core dump is often used to assist in diagnosing and debugging computer programs.
core file A file that is created when memory overwriting, assertion failures,
or access to invalid memory occurs in a process, causing it to fail.
This file is then used for further analysis.
A core file stores memory dump data in binary format. The name of a core file consists of the word "core" and the OS process ID.
Core files are available regardless of the platform type.
data flow operator An operator that exchanges data among query fragments. By their input/output relationships, data flows are categorized into Gather, Broadcast, and Redistribution flows. Gather combines the data of multiple query fragments into one. Broadcast forwards the data of one query fragment to multiple query fragments. Redistribution reorganizes the data of multiple query fragments and then redistributes the reorganized data to multiple query fragments.
database file A binary file that stores user data and the internal data of a database system.
dirty page A page that has been modified and is not written to a permanent
device.
dump file A specific type of trace file. A dump file contains diagnostic data
during an event response, whereas a trace file contains
continuously generated diagnostic data.
F–J
free space management A mechanism for managing free space in a table. This mechanism enables a database system to record the free space in each table and establish an easy-to-search data structure, accelerating operations (such as INSERT) performed on the free space.
GNU The GNU Project was publicly announced on September 27, 1983
by Richard Stallman, aiming at building an OS composed wholly
of free software. GNU is a recursive acronym for "GNU's Not
Unix!". Stallman announced that GNU should be pronounced as
Guh-NOO. Technically, GNU is similar to Unix in design, a widely
used commercial OS. However, GNU is free software and contains
no Unix code.
GTS Global Time Server (GTS). It is used to provide a logical clock for
each node in the case of strong consistency.
incremental backup An incremental backup stores all file changes since the last valid backup.
junk tuple A tuple that has been deleted using a DELETE or UPDATE statement. When deleting a tuple, GaussDB 100 only marks it for clearing. The VACUUM thread then periodically clears these junk tuples.
K–O
log file A file to which a computer system writes a record of its activities.
P–T
page Smallest memory unit for row storage in the relational object
structure in GaussDB 100. The default size of a page is 8 KB.
primary server A node that receives data read and write requests in the GaussDB 100 HA system and works with all standby servers. At any time, only one node in the HA system is identified as the primary server.
QPS Queries per second (QPS) is the number of queries that a server can respond to per second.
query fragment Each query job can be split into one or more query fragments. Each query fragment consists of one or more query operators and can run independently on a node. Query fragments exchange data through data flow operators.
query operator An iterator or a query tree node, which is the basic unit of query execution. The execution of a query can be split into one or more query operators. Common query operators include scan, join, and aggregation.
RPO Recovery point objective (RPO) is the most recent point in time to which a database system and its data can be restored after a disaster; it is usually expressed as a time value.
RTO Recovery time objective (RTO) is the duration between a disaster-caused database system failure and the restoration of proper operation.
schema A database object set that includes the logical structure, such as
tables, views, sequences, stored procedures, synonyms, clusters,
and database links.
SSL Secure Sockets Layer (SSL) is a network security protocol first used
by Netscape. It is based on the TCP/IP protocol and uses public key
technology. SSL supports a wide range of networks and provides
three basic security services, all of which use the public key
technology. SSL ensures the security of service communication
through a network by establishing a secure connection between a
client and a server and then sending data through this connection.
stop word In computing, stop words are words which are filtered out before
or after processing of natural language data (text), saving storage
space and improving search efficiency.
U–Z
Xlog A transaction log. A logical node can have only one Xlog file.
Issue 04
Date 2019-12-28
Contents
3 Database Configuration.......................................................................................................12
3.1 Configuring the Database Connection.......................................................................................................................... 12
3.1.1 Disabling the Use of 0.0.0.0 and :: for Listening..................................................................................................... 12
3.1.2 Changing the Default Listening Port.......................................................................................................................... 13
3.1.3 Setting the Maximum Number of Connections...................................................................................................... 13
3.1.4 Configuring Remote Connection Control.................................................................................................................. 13
3.1.4.1 Configuring the User Whitelist.................................................................................................................................. 14
3.1.4.2 Configuring the IP Address Whitelist and Blacklist............................................................................................ 15
3.1.4.2.1 Enabling IP Address Whitelist/Blacklist Checking............................................................................................15
3.1.4.2.2 Configuring the IP Address Whitelist................................................................................................................... 16
3.1.4.2.3 Configuring the IP Address Blacklist.................................................................................................................... 16
3.1.5 Configuring the SSL Private Key................................................................................................................................... 17
3.1.6 Configuring the Aging Time of Non-Authentication Sessions...........................................................................18
3.1.7 Disabling Local Trust Authentication..........................................................................................................................18
3.1.8 Establishing TCP/IP Connections in SSL Mode........................................................................................................ 19
3.2 Managing Users, Roles, and Permissions..................................................................................................................... 33
3.2.1 Checking for Unknown Users........................................................................................................................................ 33
3.2.2 Checking the DBA Role....................................................................................................................................................33
Overview
GaussDB 100 is a high-performance, high-reliability distributed relational database developed by Huawei Technologies Co., Ltd. It supports automatic horizontal sharding, overcoming the storage and performance bottlenecks of a single server, and is suitable for massive data storage and processing.
The framework of GaussDB 100 is component-based and can be used for a
standalone database or a cluster. To enhance the security of GaussDB 100, a series
of security rules are formulated based on Huawei security requirements. This
document describes how to perform security hardening on GaussDB 100 and Linux
where the database is running. Security hardening is performed after GaussDB 100
is installed.
GaussDB 100 is compatible with the usage conventions of mainstream databases. You can use native GaussDB 100 interface names or their counterparts in mainstream databases. For details, see Interface Mapping (GaussDB 100 Native Interface Names vs. Mainstream Database Interface Names). The interfaces mentioned in this document use their native GaussDB 100 names.
Applicable Scope
This document is applicable to all Huawei products that use GaussDB 100.
Intended Audience
This document is intended for all GaussDB 100 users.
For details about the items in security configuration rules, see Table 1-1.
Item Description
Risk level Level of the risk posed by a check item: high, medium, or low
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Example Conventions
The following table describes some example information in this document. You
can replace the example information as needed.
Information Description
Format Description
Change History
Version Change Description Date
2 OS Configuration
You can perform security hardening on the OS where GaussDB 100 is running.
Parameter Description
Parameters used for GaussDB 100 OS configuration are described in Table 2-1. Set
these parameters as needed.
Recommended value: 2
Check method
grep -P '^[^#]*Protocol\s*' /etc/ssh/sshd_config
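As a sanity check, the grep above can be wrapped in a small helper. This is a sketch only: the sample input is inlined for illustration, and in practice you would pipe in /etc/ssh/sshd_config instead.

```shell
# Sketch: extract the Protocol value from sshd_config-style input.
# On recent OpenSSH the directive may be absent, since protocol 2
# is the only supported protocol; "unset" then means the default.
get_ssh_protocol() {
  awk '$1 == "Protocol" { print $2; found = 1 }
       END { if (!found) print "unset" }'
}
# Sample input for illustration; in practice: get_ssh_protocol < /etc/ssh/sshd_config
printf '# hardened config\nProtocol 2\nPort 22\n' | get_ssh_protocol
```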
Setting rp_filter to 1 enables the server's reverse path filtering mechanism: the server checks whether the reverse path of each incoming packet is the optimal one and discards the packet if it is not, protecting the server from IP spoofing attacks.
Configuration method
Step 1 Run the sysctl -a command as user root to check the rp_filter values of all
interfaces on the server.
sysctl -a | grep rp_filter | grep -v arp_filter
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.rp_filter = 1
Step 2 Open the /etc/sysctl.conf file, change the value of rp_filter to 1 for all interfaces, save the file, and run sysctl -p as user root for the change to take effect.
vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 1
sysctl -p
----End
NOTICE
After the reverse path filtering mechanism is enabled, packet loss may occur on
NICs. In this case, check the system route settings.
Recommended value: 1
Check method
Run the sysctl -a command as user root to check the rp_filter values of all
interfaces on the server.
sysctl -a | grep rp_filter | grep -v arp_filter
Expected result: 1
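To spot offending interfaces quickly, the sysctl output can be filtered. The helper below is a sketch; the awk field handling assumes the net.ipv4.conf.<interface>.rp_filter key naming shown above, and the sample lines stand in for real `sysctl -a` output.

```shell
# Sketch: print the interfaces whose rp_filter is still 0, reading
# "key = value" lines (as produced by `sysctl -a`) on stdin.
rp_filter_disabled() {
  awk -F'[ =.]+' '/\.rp_filter = / && $NF == 0 { print $(NF-2) }'
}
# Sample input for illustration; in practice: sysctl -a | rp_filter_disabled
printf 'net.ipv4.conf.all.rp_filter = 0\nnet.ipv4.conf.lo.rp_filter = 1\n' | rp_filter_disabled
# prints: all
```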
The database OS account has the permission to access all database files. Once the
password of this account is disclosed, the database is seriously threatened.
Therefore, removing the remote login permission from this account improves
database security.
Configuration method
Change the shell of the database OS account (for example, gaussdba) to /sbin/nologin.
usermod gaussdba -s /sbin/nologin
Check method
grep -P '^gaussdba:.*?:/sbin/nologin$' /etc/passwd
If the maximum number of files that can be opened in processes is too small, SQL
operations will fail once the maximum number is exceeded.
Configuration method
Run the following command as user root to change the maximum number of files
that can be opened in processes. The configuration takes effect upon the next
login to the OS.
echo "* soft nofile 1000000" >> /etc/security/limits.conf
echo "* hard nofile 1000000" >> /etc/security/limits.conf
Check method
ulimit -a|grep "open files"
Expected result:
open files (-n) 1000000
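A login script can verify that the limit actually took effect. The helper below is a minimal sketch; the 1000000 threshold mirrors the recommendation above.

```shell
# Sketch: compare the current soft "open files" limit against a minimum.
check_nofile() {
  # $1: current limit, $2: required minimum
  if [ "$1" -ge "$2" ]; then echo "OK"; else echo "TOO_LOW"; fi
}
check_nofile "$(ulimit -Sn)" 1000000
```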
3 Database Configuration
You can perform security hardening on GaussDB 100. Do not run the database as
user root.
If the database is deployed in HA mode, perform security hardening on both the primary
and standby nodes.
0.0.0.0 indicates that the database listens on all available IPv4 addresses on the local host, and :: indicates that it listens on all available IPv6 addresses on the local host.
Configuration method
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'LSNR_ADDR' AND VALUE IN ('0.0.0.0', '::');
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------
0 rows fetched.
Expected result:
NAME VALUE
---------------------------------------------------------------- ----------------
0 rows fetched.
Expected result:
NAME VALUE
---------------------------------------------------------------- --------------------------------------------
SESSIONS 200
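When tuning SESSIONS it helps to alarm before the limit is hit. The sketch below uses example numbers; real values would come from DV_PARAMETERS and the session views, and the 80% threshold is an assumption, not a GaussDB default.

```shell
# Sketch: warn when active sessions reach 80% of the configured maximum.
session_headroom() {
  # $1: active sessions, $2: configured SESSIONS maximum
  threshold=$(( $2 * 80 / 100 ))
  if [ "$1" -ge "$threshold" ]; then echo "WARN"; else echo "OK"; fi
}
session_headroom 150 200   # example numbers only
```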
● User whitelist: You can add users to zhba.conf so that these users access the
database only through the IP addresses specified in zhba.conf.
● IP address whitelist: Only the IP addresses specified by TCP_INVITED_NODES
can be used to access the database.
● IP address blacklist: The IP addresses specified by TCP_EXCLUDED_NODES
cannot be used to access the database.
The IP address blacklist has the highest priority. If an IP address is configured in all
the three lists, it cannot be used for remote access.
When the user whitelist, IP address whitelist, and IP address blacklist are all
enabled:
● Users in the user whitelist can remotely connect to databases from the IP addresses specified in the user whitelist and the IP address whitelist, provided those addresses are not in the IP address blacklist.
● If the IP address of a client is in the user whitelist (zhba.conf) or the IP address whitelist and not in the IP address blacklist, the client passes login verification regardless of whether the user is in the user whitelist.
If user SYS locally logs in to a database in password-free mode, the login will not be limited
by the user whitelist, IP address whitelist, or IP address blacklist.
If user SYS logs in to a database using an encrypted password, the login will be limited by
the IP address blacklist.
● During authentication, the system checks connection requests against the records in the zhba.conf file in sequence, so the order of records matters. You are advised to place records with a weaker connection mode (host) but stricter connection parameters ahead of records with a stricter connection mode (hostssl) but weaker connection parameters. For example, TCP/IP connections on the local or trusted network are usually set up in host mode, while connections from remote or untrusted networks are set up in hostssl mode. In this case, the record with the host connection mode and the IP address 127.0.0.1 (or the IP address of a trusted network) must precede the record with the hostssl connection mode and an address range covering a wider set of clients.
● To ensure that login still works after a primary/standby switchover, add the whitelist to both the primary and standby DNs. For details, see Configuring the IP Address Whitelist.
Configuration method
Step 1 Add an HBA entry (TYPE, USER, and ADDRESS) to the zhba.conf file, which is stored in $GSDB_DATA/cfg/zhba.conf. host indicates a TCP or SSL connection, and hostssl indicates an SSL-only connection. If SSL is enabled on the server but not configured on the client, the server rejects the SSL connection.
host user 127.0.0.1,192.168.3.222,20AB::9217:acff:feab:fcd0/64
hostssl user 192.168.2.223
Step 2 Run the following statement to load the user whitelist. The whitelist takes effect
immediately after the statement is executed.
ALTER SYSTEM RELOAD HBA CONFIG;
----End
Recommended value: not empty
Check method
SELECT * FROM DV_HBA;
Expected result:
SQL> SELECT * FROM DV_HBA;
TYPE USER_NAME
ADDRESS
----------------------------------------------------------------
----------------------------------------------------------------
----------------------------------------------------------------
host USER
127.0.0.1/32,192.168.3.222/32,20ab::9217:acff:feab:fcd0/64
hostssl USER
192.168.2.223/32,20ab::9217:acff:feab:fcd0/64
2 rows fetched.
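Before running ALTER SYSTEM RELOAD HBA CONFIG, entries can be sanity-checked offline. This sketch assumes the whitespace-separated TYPE/USER/ADDRESS layout shown above; the inlined sample mirrors the example entries.

```shell
# Sketch: list TYPE and USER of each non-comment zhba.conf entry.
hba_entries() {
  awk '!/^[[:space:]]*#/ && NF >= 3 { print $1, $2 }'
}
# In practice: hba_entries < "$GSDB_DATA/cfg/zhba.conf"
hba_entries <<'EOF'
host user 127.0.0.1,192.168.3.222
hostssl user 192.168.2.223
EOF
```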
Configuration method
Run the ALTER SYSTEM statement to enable IP address whitelist/blacklist
checking. The configuration takes effect immediately after the statement is
executed.
ALTER SYSTEM SET TCP_VALID_NODE_CHECKING = TRUE;
Expected result:
NAME VALUE
---------------------------------------------------------------- --------------------
TCP_VALID_NODE_CHECKING TRUE
The value of TCP_INVITED_NODES cannot exceed 1024 bytes. Otherwise, an error will be
reported.
Configuration method
Run the ALTER SYSTEM statement to configure an IP address whitelist. The
configuration takes effect immediately after the statement is executed.
-- Add 127.0.0.1, 192.168.3.222, and 20ab::9217:acff:feab:fcd0 to the IP address whitelist.
ALTER SYSTEM SET TCP_INVITED_NODES = '(127.0.0.1/32,192.168.3.222/32,20ab::9217:acff:feab:fcd0/64)';
-- Add the 192.168.3.0/24 network segment to the IP address whitelist.
ALTER SYSTEM SET TCP_INVITED_NODES = '(192.168.3.0/24)';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------
TCP_INVITED_NODES (127.0.0.1/32,192.168.3.222/32,20ab::9217:acff:feab:fcd0/64)
The value of TCP_EXCLUDED_NODES cannot exceed 1024 bytes. Otherwise, an error will be reported.
Configuration method
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------
TCP_EXCLUDED_NODES (192.168.3.222/32,20ab::9217:acff:feab:fcd0/64)
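Because both node lists are capped at 1024 bytes, the value can be measured before running ALTER SYSTEM. A minimal sketch, assuming ASCII addresses (so character count equals byte count):

```shell
# Sketch: reject a whitelist/blacklist value that exceeds 1024 bytes.
check_node_list() {
  if [ "${#1}" -le 1024 ]; then echo "OK"; else echo "TOO_LONG"; fi
}
check_node_list '(127.0.0.1/32,192.168.3.0/24)'
# prints: OK
```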
The _FACTOR_KEY parameter can be modified only by running the ALTER SYSTEM SET _FACTOR_KEY command. If you modify it in the zengine.ini file, the database will fail to start.
Run the ALTER SYSTEM statement to configure the local login key and restart the
database for the configuration to take effect.
ALTER SYSTEM SET _FACTOR_KEY = 'jQ4IAgxiJR1ezCPrvtZLUQ==';
ALTER SYSTEM SET LOCAL_KEY = '8Vw4Gm2Ktu7B8XIzTlVRK9EOH+lpSNbIlhfVbaJ0RDdbgyUyHsYT6UxYGSEbZg7BXki6gSP8slEU8haWxiUgNg==';
Check method
Run the following statement to check whether the values of _FACTOR_KEY and
LOCAL_KEY are different from the default values configured in the database:
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE (NAME = '_FACTOR_KEY' AND VALUE = 'dc4hoQWGQs7/Uv3AiherFw==') OR (NAME = 'LOCAL_KEY' AND VALUE = 'UTiYlBoTC71MvTyBvWhVDodc0VAop1GMe135ZCov8Pv4xsnlEHn9Bs/pjRo7ZNM1BXq8Z4XuyRjfaNpY/7McEQ==');
Expected result:
NAME VALUE
---------------------------------------------------------------- ----------------
0 rows fetched.
Configuration method
You can configure the aging time in either of the following ways:
● Set the UNAUTH_SESSION_EXPIRE_TIME parameter in the zengine.ini
configuration file and restart the database for the setting to take effect. The
path of the zengine.ini file is {GSDB_DATA}/cfg/zengine.ini.
● Run the ALTER SYSTEM statement to set UNAUTH_SESSION_EXPIRE_TIME.
ALTER SYSTEM SET UNAUTH_SESSION_EXPIRE_TIME = 60 SCOPE = BOTH;
Recommended value: 60
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'UNAUTH_SESSION_EXPIRE_TIME';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------
UNAUTH_SESSION_EXPIRE_TIME 60
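For the configuration-file route, the edit can be scripted. The sketch below works on a temporary copy and assumes zengine.ini uses KEY = VALUE lines as described above; back up the real file before editing it, and note that `sed -i` here is the GNU form.

```shell
# Sketch: set UNAUTH_SESSION_EXPIRE_TIME in a zengine.ini-style file.
cfg=$(mktemp)
printf 'UNAUTH_SESSION_EXPIRE_TIME = 300\n' > "$cfg"   # stand-in for {GSDB_DATA}/cfg/zengine.ini
sed -i 's/^UNAUTH_SESSION_EXPIRE_TIME *=.*/UNAUTH_SESSION_EXPIRE_TIME = 60/' "$cfg"
cat "$cfg"
rm -f "$cfg"
```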
Configuration method
You can disable local trust authentication in either of the following ways:
● Run the python zctl.py -t kill command to stop the database, change the
value of ENABLE_SYSDBA_LOGIN to FALSE in the zengine.ini configuration
file, and finally run the python zctl.py -t start command to start the
database. The path of the zengine.ini file is {GSDB_DATA}/cfg/zengine.ini.
● When the database is running, run the ALTER SYSTEM statement to set
ENABLE_SYSDBA_LOGIN to FALSE, and then run the python zctl.py -t kill
&& python zctl.py -t start command to restart the database for the
modification to take effect.
ALTER SYSTEM SET ENABLE_SYSDBA_LOGIN = FALSE;
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------
ENABLE_SYSDBA_LOGIN FALSE
Prerequisites
The formal certificates and keys for the server and client have been obtained from
the Certificate Authority (CA). Currently, only SSL certificates in PEM format are
supported. Assume the private key and certificate for the server are server.key and
server.crt, the private key and certificate for the client are client.key and
client.crt, and the CA root certificate is cacert.pem.
Related Concepts
GaussDB 100 supports the SSL protocol (TLSv1.2). As a highly secure protocol, TLSv1.2 performs bidirectional identity authentication between the server and client using digital signatures and digital certificates, enhancing data transmission security.
Precautions
In HA mode, enable the SSL function on both primary and standby nodes to
ensure the security of bidirectional data transmission.
Procedure
Assume that the path is /home/gaussdba/app. Replace it with the actual path.
● Set server parameters. The parameter settings take effect after the database
is restarted.
alter system set SSL_CA = '/home/gaussdba/app/cacert.pem';
alter system set SSL_CERT = '/home/gaussdba/app/server.crt';
alter system set SSL_KEY = '/home/gaussdba/app/server.key';
alter system set SSL_VERIFY_PEER = TRUE;
NOTICE
without requiring the client to provide a certificate, which is not recommended. Set ZSQL_SSL_KEY_PASSWD to the actual ciphertext. If you do not set this parameter, the system prompts you to enter the private key ciphertext when you set up a connection in interactive mode on the client.
Unidirectional authentication: The client verifies the server's certificate, whereas the server does not verify the client's certificate. The server loads the certificate information and sends it to the client, and the client verifies the server's certificate against the root certificate.
● Server configuration: copy the certificate and private key of the server to the /home/gaussdba/app directory and set the SSL_CERT and SSL_KEY parameters.
● Client configuration: set the following environment variables: ZSQL_SSL_CA, (optional) ZSQL_SSL_CRL, and ZSQL_SSL_MODE.
● Remarks: to prevent TCP-based link spoofing, you are advised to use SSL certificate authentication. In addition to configuring the certificate and private key of the server, you are advised to set the ZSQL_SSL_MODE variable to VERIFY_CA or VERIFY_FULL.
Step 3 Log in to the server where GaussDB 100 is located as user root to change the
permissions of server and client keys.
Copy server.crt, server.key, and cacert.pem to the /home/gaussdba/app
directory. Ensure that the permission is 600, the owner is the database installation
user (for example, gaussdba), and the owner group is the group to which the
database installation user belongs (for example, dbgrp). Run the following
commands to modify the permissions:
cd /home/gaussdba/app
chown gaussdba:dbgrp server.crt server.key cacert.pem
chmod 600 server.crt server.key cacert.pem
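A follow-up check can confirm the mode is exactly 600. The sketch below uses GNU stat (use `stat -f '%Lp'` on BSD/macOS) and a temporary file for illustration; run it against the real key and certificate files in practice.

```shell
# Sketch: verify that a key file's permission mode is exactly 600.
check_mode_600() {
  mode=$(stat -c '%a' "$1")   # GNU stat assumed
  if [ "$mode" = "600" ]; then echo "OK"; else echo "BAD:$mode"; fi
}
f=$(mktemp); chmod 600 "$f"
check_mode_600 "$f"
rm -f "$f"
```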
Step 5 If an encrypted private key has been configured, use the zencrypt tool to generate
the password ciphertext (SSL_KEY_PASSWORD).
1. Run the following command to generate the password ciphertext
(SSL_KEY_PASSWORD):
zencrypt -e aes256 -f jQ4IAgxiJR1ezCPrvtZLUQ== -k 8Vw4Gm2Ktu7B8XIzTlVRK9EOH+lpSNbIlhfVbaJ0RDdbgyUyHsYT6UxYGSEbZg7BXki6gSP8slEU8haWxiUgNg==
-- Enter the plaintext password of the private key (for example, Aa123456) to generate password
ciphertext:
Please enter password to encrypt:
********
Please input password again:
********
Cipher: PLFJgfZwLSOJFRp6o7qsj604fFwPEu2MLxH7m/F/aMg=
– -e indicates that the private key is encrypted using the AES algorithm (256-bit key). -f indicates the value of _FACTOR_KEY, and -k indicates the value of LOCAL_KEY. For the values of -f and -k, see Configuring the SSL Private Key.
– The private key is sensitive data. It is strongly recommended that users use
encrypted private keys and periodically use Zenith to update the encrypted keys,
which ensures data security.
– The requirements for ciphertext complexity are consistent with the database
password requirements.
– It is recommended that the private key ciphertext be a string containing no less
than 2048 characters.
2. When the database instance is running, update the SSL private key ciphertext (generated from the plaintext password, for example, Aa123456), or update the corresponding item in the configuration file.
alter system set SSL_KEY_PASSWORD = 'PLFJgfZwLSOJFRp6o7qsj604fFwPEu2MLxH7m/F/aMg=';
Step 6 Restart the GaussDB 100 server to make the configuration take effect.
-- Stop the database service.
python $GSDB_HOME/bin/zctl.py -t stop
-- Start the database service.
python $GSDB_HOME/bin/zctl.py -t start
----End
NAME DATATYPE
VALUE
---------------------------------------------------------------- --------------------
----------------------------------------------------------------
HAVE_SSL GS_TYPE_BOOLEAN
TRUE
● Disable SSL.
Method 1: When the database instance is running, change the values of the SSL_CA, SSL_CERT, SSL_KEY, and SSL_KEY_PASSWORD parameters to null, and then restart the database. This disables SSL on the server.
Method 2: Delete SSL parameters from the configuration file, or delete the SSL
certificate from the disk. Then, restart the database.
Reference
Set parameters related to SSL authentication in the zengine.ini file on the
GaussDB 100 server. For details, see Table 3-2.
SSL_KEY Server private key file, used to decrypt digital signatures and data encrypted using the public key. Set this parameter to the name of the actual private key file; you are advised to use the absolute path of the server private key file, otherwise the private key may fail to be loaded. Default value: empty, indicating that no server private key is available.
SSL_CIPHER Encryption algorithms used for SSL communication. For details about the encryption algorithms supported by GaussDB 100, see Table 3-4. Default value: empty, indicating that the peer end can use all encryption algorithms supported by GaussDB 100, sorted by security strength.
ZSQL_SSL_KEY Client private key file, used to decrypt digital signatures and data encrypted using the public key. Value: absolute path of the key file, for example: export ZSQL_SSL_KEY='/home/gaussdba/app/client.key'. Default value: empty.
ZSQL_SSL_CA Root certificate file for issuing client certificates; the root certificate is used to verify the server certificate. Value: absolute path of the certificate file, for example: export ZSQL_SSL_CA='/home/gaussdba/app/cacert.pem'. Default value: empty.
ZSQL_SSL_CRL CRL file for checking whether the server certificate is in the certificate revocation list (CRL); if it is, the certificate is invalid. Value: absolute path of the CRL file, for example: export ZSQL_SSL_CRL='/home/gaussdba/app/root.crl'. Default value: empty, indicating no CRL.
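Putting the client-side variables together, a login shell for unidirectional authentication might export the following. The paths are the example paths used above, and VERIFY_CA follows the recommendation in the authentication-mode description; adjust both to your deployment.

```shell
# Sketch: client-side SSL environment for zsql (example values).
export ZSQL_SSL_CA='/home/gaussdba/app/cacert.pem'
export ZSQL_SSL_MODE='VERIFY_CA'   # or VERIFY_FULL for hostname checking
echo "ZSQL_SSL_MODE=$ZSQL_SSL_MODE"
```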
● Currently, GaussDB 100 SSL transmission supports only encryption algorithms
with an encryption strength higher than "strong".
● The default value of SSL_CIPHER is empty, which indicates that all encryption
algorithms listed in the preceding table are supported. You are advised to retain the
default value, unless there are other special requirements on the encryption algorithm.
● If multiple encryption algorithms are specified, separate them with colons (:). For
example, use SSL_CIPHER=DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256
in zengine.ini.
If the OpenSSL component is installed in the operating system by default, you can
directly create a self-signed certificate. Otherwise, you need to install the
component before creating a self-signed certificate.
The OpenSSL component is installed by default on SUSE 10/11. The following
describes how to create a server self-signed certificate on SUSE. The default
installation path of OpenSSL on SUSE is /usr/share/ssl.
● If creating a self-signed certificate fails, delete the index.txt file from /usr/
share/ssl/misc/demoCA and run the touch index.txt command to recreate it.
● To generate a client certificate, perform Step 9 through Step 15. In addition, replace
server with client in the commands of Step 9, Step 12, and Step 14.
Step 3 Go to the misc directory and view the files in the directory.
cd /usr/share/ssl/misc
ls
Step 4 Run the following command to create a CA. The system will create a demoCA
folder in the current directory.
./CA.sh -newca
Step 5 Enter a password for protecting the root certificate. The password must contain a
minimum of four characters, for example, 1234.
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
Step 7 Enter the password protecting the root certificate that was set in
Step 5, for example, 1234.
Enter pass phrase for ./demoCA/private/./cakey.pem:
Step 8 Run the following command to add a CA extension item for the root certificate:
openssl x509 -req -in ./demoCA/careq.pem -extfile /etc/ssl/openssl.cnf -extensions v3_ca -signkey ./demoCA/
private/cakey.pem -out ./demoCA/cacert.pem -CAcreateserial -days 365
Step 9 Run the following command to generate the server certificate request file
server.req and server private key server.key based on different encryption
algorithms:
b. Create a certificate signing request file based on the DSA private key file.
openssl req -new -key server.key -out server.req
Step 10 If only the RSA encryption algorithm is used, enter the protection password of the
server private key. The password must contain 8 to 64 characters, for example,
Aa123456.
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
Step 12 Run the following command to sign the generated server certificate request file.
Then, the official server certificate server.crt will be generated.
openssl ca -in server.req -out server.crt
Step 13 Determine whether to sign the certificate and whether to submit the certificate
request that has passed the authentication.
● Yes: Sign the certificate and submit the request.
● No: Do not sign the certificate or submit the request.
Step 14 Run the following command to remove the password protection for the server
private key:
● Based on the RSA encryption algorithm:
openssl rsa -in server.key -out server.key
If password protection for the server private key is not removed, use the zencrypt tool to
encrypt the password.
zencrypt -e aes256 -k work_key -f factor_key
The work_key and factor_key are the values of LOCAL_KEY and _FACTOR_KEY,
respectively, obtained from the server parameter view. Set SSL_KEY_PASSWORD
based on the generated ciphertext.
Step 15 If only the RSA encryption algorithm is used, enter the password of the server
private key, for example, Aa123456.
----End
Configuration method
Revoke the DBA role from a user or from the roles inherited by this user.
REVOKE DBA FROM user_name;
REVOKE DBA FROM role_name;
Step 2 Check whether the DBA role is granted to the inherited roles.
SELECT GRANTEE, GRANTED_ROLE FROM ADM_ROLE_PRIVS WHERE GRANTEE = 'role_name' AND
GRANTEE != 'SYS' AND GRANTED_ROLE = 'DBA';
----End
Expected result: roles of the user
Risk level: medium
Step 2 Check whether the CREATE USER permission is granted to the inherited roles.
SELECT ROLE, PRIVILEGE FROM ROLE_SYS_PRIVS WHERE ROLE = 'role_name' AND ROLE != 'DBA' AND
PRIVILEGE = 'CREATE USER';
----End
Expected result: none
Risk level: medium
Step 2 Check whether the ALTER USER permission is granted to the inherited roles.
SELECT ROLE, PRIVILEGE FROM ROLE_SYS_PRIVS WHERE ROLE = 'role_name' AND ROLE != 'DBA' AND
PRIVILEGE = 'ALTER USER';
----End
Expected result: none
Risk level: medium
Configuration method
Revoke the DROP USER permission from a user or from the roles inherited by this
user.
REVOKE DROP USER FROM user_name;
REVOKE DROP USER FROM role_name;
Step 2 Check whether the DROP USER permission is granted to the inherited roles.
SELECT ROLE, PRIVILEGE FROM ROLE_SYS_PRIVS WHERE ROLE = 'role_name' AND ROLE != 'DBA' AND
PRIVILEGE = 'DROP USER';
----End
Expected result: none
Risk level: medium
Step 2 Check whether the CREATE DATABASE permission is granted to the inherited
roles.
SELECT ROLE, PRIVILEGE FROM ROLE_SYS_PRIVS WHERE ROLE = 'role_name' AND ROLE != 'DBA' AND
PRIVILEGE = 'CREATE DATABASE';
----End
Expected result: none
Risk level: medium
Step 1 Revoke an object permission with WITH GRANT OPTION specified from a user.
REVOKE ALL ON object_name FROM user_name;
Step 2 Grant the object permission to this user again, without WITH GRANT OPTION
specified.
GRANT privilege_name ON object_name TO user_name;
----End
Check method
SELECT * FROM ADM_TAB_PRIVS WHERE GRANTABLE = 'YES';
Revoke the GRANT ANY PRIVILEGE permission from a user or from the roles
inherited by this user.
REVOKE GRANT ANY PRIVILEGE FROM user_name;
REVOKE GRANT ANY PRIVILEGE FROM role_name;
Step 2 Check whether the GRANT ANY PRIVILEGE permission is granted to the inherited
roles.
SELECT ROLE, PRIVILEGE FROM ROLE_SYS_PRIVS WHERE ROLE = 'role_name' AND ROLE != 'DBA' AND
PRIVILEGE = 'GRANT ANY PRIVILEGE';
----End
Expected result: none
Risk level: medium
Step 2 Check whether the GRANT ANY ROLE permission is granted to the inherited roles.
SELECT ROLE, PRIVILEGE FROM ROLE_SYS_PRIVS WHERE ROLE = 'role_name' AND ROLE != 'DBA' AND
PRIVILEGE = 'GRANT ANY ROLE';
----End
Users with the GRANT ANY OBJECT PRIVILEGE permission can grant any object
permission to any user. Therefore, grant this permission only when absolutely
necessary.
Configuration method
Revoke the GRANT ANY OBJECT PRIVILEGE permission from a user or from the
roles inherited by this user.
REVOKE GRANT ANY OBJECT PRIVILEGE FROM user_name;
REVOKE GRANT ANY OBJECT PRIVILEGE FROM role_name;
Check method
1. Check whether the user has the GRANT ANY OBJECT PRIVILEGE permission.
SELECT USERNAME, PRIVILEGE FROM DB_USER_SYS_PRIVS WHERE USERNAME = 'user_name' AND
PRIVILEGE = 'GRANT ANY OBJECT PRIVILEGE';
2. Check whether the roles inherited by this user have the GRANT ANY OBJECT
PRIVILEGE permission.
Step 2 Check whether the GRANT ANY OBJECT PRIVILEGE permission is granted to the
inherited roles.
SELECT ROLE, PRIVILEGE FROM ROLE_SYS_PRIVS WHERE ROLE = 'role_name' AND ROLE != 'DBA' AND
PRIVILEGE = 'GRANT ANY OBJECT PRIVILEGE';
----End
Every user automatically belongs to user PUBLIC. For database security, do not
grant object permissions to user PUBLIC.
Configuration method
REVOKE ALL ON object_name FROM public;
Check method
SELECT * FROM ADM_TAB_PRIVS WHERE GRANTEE='PUBLIC';
Configuration method
ALTER PROFILE profile_name LIMIT PASSWORD_REUSE_TIME 60;
Recommended value: 60
Check method
SELECT RESOURCE_NAME, THRESHOLD FROM ADM_PROFILES WHERE
RESOURCE_NAME='PASSWORD_REUSE_TIME';
Expected result:
RESOURCE_NAME THRESHOLD
---------------------------------------------------------------- --------------------
PASSWORD_REUSE_TIME 60
Configure the number of password changes required before the current password
can be reused. The configuration prevents a password from being cracked due to
repeated use. Before setting this parameter, you can run the SELECT Username,
PROFILE FROM ADM_USERS; statement to check the profile configuration of the
user.
NOTICE
Configuration method
ALTER PROFILE profile_name LIMIT PASSWORD_REUSE_MAX 3;
Recommended value: 3
Check method
SELECT RESOURCE_NAME, THRESHOLD FROM ADM_PROFILES WHERE
RESOURCE_NAME='PASSWORD_REUSE_MAX';
Expected result:
RESOURCE_NAME THRESHOLD
---------------------------------------------------------------- --------------------
PASSWORD_REUSE_MAX 3
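The PASSWORD_REUSE_MAX policy above means a previously used password may be set again only after at least that many other passwords have been used since. As an illustration only (this is not GaussDB internals; the function name and history representation are assumptions for the sketch), the check can be expressed as:

```python
def reuse_allowed(new_pw, history, reuse_max=3):
    """Illustrative PASSWORD_REUSE_MAX check.

    history is the list of previously used passwords, ordered oldest to
    newest. A password that appears in the history may be reused only if
    at least reuse_max other passwords came after its last use.
    """
    if new_pw not in history:
        return True  # never used before, always allowed
    # Position of the most recent use of new_pw in the history.
    last_use = len(history) - 1 - history[::-1].index(new_pw)
    # Number of password changes since that last use.
    return len(history) - 1 - last_use >= reuse_max
```

For example, with the recommended value 3, the oldest of four passwords may be reused, but the most recent one may not.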
Recommended value: 10
Check method
SELECT RESOURCE_NAME, THRESHOLD FROM ADM_PROFILES WHERE
RESOURCE_NAME='FAILED_LOGIN_ATTEMPTS';
Expected result:
RESOURCE_NAME THRESHOLD
---------------------------------------------------------------- --------------------
FAILED_LOGIN_ATTEMPTS 10
Expected result:
RESOURCE_NAME THRESHOLD
---------------------------------------------------------------- --------------------
PASSWORD_LOCK_TIME 1
The initial user SYS is a system administrator and has all system permissions. For
database security, change the password of SYS as soon as possible after the
database is installed.
Configuration method
ALTER USER user_name IDENTIFIED BY newpassword REPLACE oldpassword;
Check method
Use the new password to log in to the database. If the login is successful, the
password has been changed successfully.
Before setting this parameter, you can run the SELECT Username, PROFILE FROM
ADM_USERS; statement to check the profile configuration of the user.
Configuration method
ALTER PROFILE profile_name LIMIT PASSWORD_LIFE_TIME 60;
Check method
SELECT RESOURCE_NAME, THRESHOLD FROM ADM_PROFILES WHERE
RESOURCE_NAME='PASSWORD_LIFE_TIME';
Expected result:
RESOURCE_NAME THRESHOLD
---------------------------------------------------------------- --------------------
PASSWORD_LIFE_TIME 60
Recommended value: 7
Check method
SELECT RESOURCE_NAME, THRESHOLD FROM ADM_PROFILES WHERE
RESOURCE_NAME='PASSWORD_GRACE_TIME';
Expected result:
RESOURCE_NAME THRESHOLD
---------------------------------------------------------------- --------------------
PASSWORD_GRACE_TIME 7
Expected result:
RESOURCE_NAME THRESHOLD
---------------------------------------------------------------- --------------------
SESSIONS_PER_USER 100
NAME VALUE
----------------------------------------------------------------
----------------------------------------------------------------
RESOURCE_LIMIT TRUE
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 3
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME ='AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 1
Configuration method
ALTER SYSTEM SET AUDIT_LEVEL = 2;
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 2
Configuration method
ALTER SYSTEM SET AUDIT_LEVEL = 4;
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 4
If the audit level is set to 8, the parsing and execution of stored procedures are
audited, for example, EXECUTE (EXEC) and CALL. In addition, the definitions of
anonymous blocks in stored procedures are audited.
Configuration method
ALTER SYSTEM SET AUDIT_LEVEL = 8;
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 8
Configuration method
ALTER SYSTEM SET AUDIT_LEVEL = 3;
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 3
Configuration method
ALTER SYSTEM SET AUDIT_LEVEL = 5;
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 5
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 6
Recommended value: 3
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'AUDIT_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
AUDIT_LEVEL 15
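The AUDIT_LEVEL values shown in this section (1, 2, 4, 8, and sums such as 3, 5, 6, and 15) behave like a bitmask: each power of two enables one audit category, and a sum enables their combination. A minimal sketch of that interpretation (the bitmask reading is inferred from the values above, not stated as an official API):

```python
def audit_bits(level):
    """Return the individual audit category bits enabled by an
    AUDIT_LEVEL value, reading the value as a bitmask."""
    return [bit for bit in (1, 2, 4, 8) if level & bit]
```

Under this reading, the recommended value 3 enables categories 1 and 2, while 15 enables all four, including category 8 (auditing of stored procedure parsing and execution).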
Check method
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'LOG_HOME';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
LOG_HOME /home/gaussdba/data/log
Configuration method
You can configure the maximum capacity of an audit log file in either of the
following ways:
Check method
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = '_AUDIT_MAX_FILE_SIZE';
Expected result:
NAME VALUE
---------------------------------------------------------------- ------------------------
_AUDIT_MAX_FILE_SIZE 10M
Recommended value: 10
Check method
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = '_AUDIT_BACKUP_FILE_COUNT';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
_AUDIT_BACKUP_FILE_COUNT 10
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
LOG_HOME /home/gaussdb/data/log
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
_LOG_FILE_PERMISSIONS 600
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
_LOG_PATH_PERMISSIONS 700
Recommended value: 7
Check method
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = '_LOG_LEVEL';
Expected result:
NAME VALUE
---------------------------------------------------------------- -------------------------
_LOG_LEVEL 7
SYS_BACKUP_SETS BACKUP_SET$
SYS_COLUMNS COLUMN$
SYS_COMMENTS COMMENT$
SYS_CONSTRAINT_DEFS CONSDEF$
SYS_DATA_NODES DATA_NODES$
EXP_TAB_ORDERS DBA_EXP$TBL_ORDER
EXP_TAB_RELATIONS DBA_EXP$TBL_RELATIONS
SYS_DEPENDENCIES DEPENDENCY$
SYS_DISTRIBUTE_RULES DISTRIBUTE_RULE$
SYS_DISTRIBUTE_STRATEGIES DISTRIBUTE_STRATEGY$
SYS_DUMMY DUAL
SYS_EXTERNAL_TABLES EXTERNAL$
SYS_GARBAGE_SEGMENTS GARBAGE_SEGMENT$
SYS_HISTGRAM_ABSTR HIST_HEAD$
SYS_HISTGRAM HISTGRAM$
SYS_INDEXES INDEX$
SYS_INDEX_PARTS INDEXPART$
SYS_JOBS JOB$
SYS_LINKS LINK$
SYS_LOBS LOB$
SYS_LOB_PARTS LOBPART$
SYS_LOGIC_REPL LOGIC_REP$
SYS_DML_STATS MON_MODS_ALL$
SYS_OBJECT_PRIVS OBJECT_PRIVS$
SYS_PART_COLUMNS PARTCOLUMN$
SYS_PART_OBJECTS PARTOBJECT$
SYS_PART_STORES PARTSTORE$
SYS_PENDING_DIST_TRANS PENDING_DISTRIBUTED_TRANS$
SYS_PENDING_TRANS PENDING_TRANS$
SYS_PROCS PROC$
SYS_PROC_ARGS PROC_ARGS$
SYS_PROFILE PROFILE$
SYS_RECYCLEBIN RECYCLEBIN$
SYS_ROLES ROLES$
SYS_SEQUENCES SEQUENCE$
SYS_SHADOW_INDEXES SHADOW_INDEX$
SYS_SHADOW_INDEX_PARTS SHADOW_INDEXPART$
SYS_SYNONYMS SYNONYM$
SYS_PRIVS SYS_PRIVS$
SYS_TABLES TABLE$
SYS_TABLE_PARTS TABLEPART$
SYS_TMP_SEG_STATS TMP_SEG_STAT$
SYS_USERS USER$
SYS_USER_HISTORY USER_HISTORY$
SYS_USER_ROLES USER_ROLES$
SYS_VIEWS VIEW$
SYS_VIEW_COLS VIEWCOL$
SYS_SQL_MAPS SQL_MAP$
WSR_PARAMETER WRH$_PARAMETER
WSR_SQLAREA WRH$_SQLAREA
WSR_SYS_STAT WRH$_SYSSTAT
WSR_SYSTEM WRH$_SYSTEM
WSR_SYSTEM_EVENT WRH$_SYSTEM_EVENT
WSR_SNAPSHOT WRM$_SNAPSHOT
WSR_CONTROL WRM$_WR_CONTROL
WSR_DBA_SEGMENTS WSR$_DBA_SEGMENTS
WSR_LATCH WSR$_LATCH
WSR_LIBRARYCACHE WSR$_LIBRARYCACHE
WSR_SEGMENT WSR$_SEGMENT
WSR_SQL_LIST WSR$SQL_LIST
WSR_WAITSTAT WSR$_WAITSTAT
DB_DB_LINKS ALL_DB_LINKS
DB_IND_STATISTICS ALL_IND_STATISTICS
DB_JOBS ALL_JOBS
DB_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS
DB_USERS ALL_USERS
DB_USER_SYS_PRIVS ALL_USER_SYS_PRIVS
ADM_ARGUMENTS DBA_ARGUMENTS
ADM_BACKUP_SET DBA_BACKUP_SET
ADM_COL_COMMENTS DBA_COL_COMMENTS
ADM_CONSTRAINTS DBA_CONSTRAINTS
ADM_DATA_FILES DBA_DATA_FILES
ADM_DBLINK_TABLES DBA_DBLINK_TABLES
ADM_DBLINK_TAB_COLUMNS DBA_DBLINK_TAB_COLUMNS
ADM_DEPENDENCIES DBA_DEPENDENCIES
ADM_FREE_SPACE DBA_FREE_SPACE
ADM_HISTOGRAMS DBA_HISTOGRAMS
ADM_HIST_DBASEGMENTS DBA_HIST_DBASEGMENTS
ADM_HIST_LATCH DBA_HIST_LATCH
ADM_HIST_LIBRARYCACHE DBA_HIST_LIBRARYCACHE
ADM_HIST_LONGSQL DBA_HIST_LONGSQL
ADM_HIST_PARAMETER DBA_HIST_PARAMETER
ADM_HIST_SEGMENT DBA_HIST_SEGMENT
ADM_HIST_SNAPSHOT DBA_HIST_SNAPSHOT
ADM_HIST_SQLAREA DBA_HIST_SQLAREA
ADM_HIST_SYSSTAT DBA_HIST_SYSSTAT
ADM_HIST_SYSTEM DBA_HIST_SYSTEM
ADM_HIST_SYSTEM_EVENT DBA_HIST_SYSTEM_EVENT
ADM_HIST_WAITSTAT DBA_HIST_WAITSTAT
ADM_HIST_WR_CONTROL DBA_HIST_WR_CONTROL
ADM_INDEXES DBA_INDEXES
ADM_IND_COLUMNS DBA_IND_COLUMNS
ADM_IND_PARTITIONS DBA_IND_PARTITIONS
ADM_IND_STATISTICS DBA_IND_STATISTICS
ADM_JOBS DBA_JOBS
ADM_JOBS_RUNNING DBA_JOBS_RUNNING
ADM_OBJECTS DBA_OBJECTS
ADM_PART_COL_STATISTICS DBA_PART_COL_STATISTICS
ADM_PART_KEY_COLUMNS DBA_PART_KEY_COLUMNS
ADM_PART_STORE DBA_PART_STORE
ADM_PART_TABLES DBA_PART_TABLES
ADM_PROCEDURES DBA_PROCEDURES
ADM_PROFILES DBA_PROFILES
ADM_ROLES DBA_ROLES
ADM_ROLE_PRIVS DBA_ROLE_PRIVS
ADM_SEGMENTS DBA_SEGMENTS
ADM_SEQUENCES DBA_SEQUENCES
ADM_SOURCE DBA_SOURCE
ADM_SYNONYMS DBA_SYNONYMS
ADM_SYS_PRIVS DBA_SYS_PRIVS
ADM_TABLES DBA_TABLES
ADM_TABLESPACES DBA_TABLESPACES
ADM_TAB_COLS DBA_TAB_COLS
ADM_TAB_COLUMNS DBA_TAB_COLUMNS
ADM_TAB_COL_STATISTICS DBA_TAB_COL_STATISTICS
ADM_TAB_COMMENTS DBA_TAB_COMMENTS
ADM_TAB_DISTRIBUTE DBA_TAB_DISTRIBUTE
ADM_TAB_MODIFICATIONS DBA_TAB_MODIFICATIONS
ADM_TAB_PARTITIONS DBA_TAB_PARTITIONS
ADM_TAB_PRIVS DBA_TAB_PRIVS
ADM_TAB_STATISTICS DBA_TAB_STATISTICS
ADM_TRIGGERS DBA_TRIGGERS
ADM_USERS DBA_USERS
ADM_VIEWS DBA_VIEWS
ADM_VIEW_COLUMNS DBA_VIEW_COLUMNS
DB_ARGUMENTS ALL_ARGUMENTS
DB_COL_COMMENTS ALL_COL_COMMENTS
DB_CONSTRAINTS ALL_CONSTRAINTS
DB_DBLINK_TABLES ALL_DBLINK_TABLES
DB_DBLINK_TAB_COLUMNS ALL_DBLINK_TAB_COLUMNS
DB_DEPENDENCIES ALL_DEPENDENCIES
DB_DISTRIBUTE_RULES ALL_DISTRIBUTE_RULES
DB_DIST_RULE_COLS ALL_DIST_RULE_COLS
DB_HISTOGRAMS ALL_HISTOGRAMS
DB_INDEXES ALL_INDEXES
DB_IND_COLUMNS ALL_IND_COLUMNS
DB_IND_PARTITIONS ALL_IND_PARTITIONS
DB_OBJECTS ALL_OBJECTS
DB_PART_COL_STATISTICS ALL_PART_COL_STATISTICS
DB_PART_KEY_COLUMNS ALL_PART_KEY_COLUMNS
DB_PART_STORE ALL_PART_STORE
DB_PART_TABLES ALL_PART_TABLES
DB_PROCEDURES ALL_PROCEDURES
DB_SEQUENCES ALL_SEQUENCES
DB_SOURCE ALL_SOURCE
DB_SYNONYMS ALL_SYNONYMS
DB_TABLES ALL_TABLES
DB_TAB_COLS ALL_TAB_COLS
DB_TAB_COLUMNS ALL_TAB_COLUMNS
DB_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS
DB_TAB_COMMENTS ALL_TAB_COMMENTS
DB_TAB_DISTRIBUTE ALL_TAB_DISTRIBUTE
DB_TAB_PARTITIONS ALL_TAB_PARTITIONS
DB_TAB_STATISTICS ALL_TAB_STATISTICS
DB_TRIGGERS ALL_TRIGGERS
DB_VIEWS ALL_VIEWS
DB_VIEW_COLUMNS ALL_VIEW_COLUMNS
ROLE_SYS_PRIVS ROLE_SYS_PRIVS
MY_ARGUMENTS USER_ARGUMENTS
MY_COL_COMMENTS USER_COL_COMMENTS
MY_CONSTRAINTS USER_CONSTRAINTS
MY_CONS_COLUMNS USER_CONS_COLUMNS
MY_DEPENDENCIES USER_DEPENDENCIES
MY_FREE_SPACE USER_FREE_SPACE
MY_HISTOGRAMS USER_HISTOGRAMS
MY_INDEXES USER_INDEXES
MY_IND_COLUMNS USER_IND_COLUMNS
MY_IND_PARTITIONS USER_IND_PARTITIONS
MY_IND_STATISTICS USER_IND_STATISTICS
MY_JOBS USER_JOBS
MY_OBJECTS USER_OBJECTS
MY_PART_COL_STATISTICS USER_PART_COL_STATISTICS
MY_PART_KEY_COLUMNS USER_PART_KEY_COLUMNS
MY_PART_STORE USER_PART_STORE
MY_PART_TABLES USER_PART_TABLES
MY_PROCEDURES USER_PROCEDURES
MY_ROLE_PRIVS USER_ROLE_PRIVS
MY_SEGMENTS USER_SEGMENTS
MY_SEQUENCES USER_SEQUENCES
MY_SOURCE USER_SOURCE
MY_SQL_MAPS USER_SQL_MAPS
MY_SYNONYMS USER_SYNONYMS
MY_SYS_PRIVS USER_SYS_PRIVS
MY_TABLES USER_TABLES
MY_TAB_COLS USER_TAB_COLS
MY_TAB_COLUMNS USER_TAB_COLUMNS
MY_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS
MY_TAB_COMMENTS USER_TAB_COMMENTS
MY_TAB_DISTRIBUTE USER_TAB_DISTRIBUTE
MY_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS
MY_TAB_PARTITIONS USER_TAB_PARTITIONS
MY_TAB_PRIVS USER_TAB_PRIVS
MY_TAB_STATISTICS USER_TAB_STATISTICS
MY_TRIGGERS USER_TRIGGERS
MY_USERS USER_USERS
MY_VIEWS USER_VIEWS
MY_VIEW_COLUMNS USER_VIEW_COLUMNS
NLS_SESSION_PARAMETERS NLS_SESSION_PARAMETERS
DV_ALL_TRANS V$ALL_TRANSACTION
DV_ARCHIVED_LOGS V$ARCHIVED_LOG
DV_ARCHIVE_DEST_STATUS V$ARCHIVE_DEST_STATUS
DV_ARCHIVE_GAPS V$ARCHIVE_GAP
DV_ARCHIVE_THREADS V$ARCHIVE_PROCESSES
DV_BACKUP_PROCESSES V$BACKUP_PROCESS
DV_BUFFER_POOLS V$BUFFER_POOL
DV_BUFFER_POOL_STATS V$BUFFER_POOL_STATISTICS
DV_CONTROL_FILES V$CONTROLFILE
DV_DATABASE V$DATABASE
DV_DATA_FILES V$DATAFILE
DV_OBJECT_CACHE V$DB_OBJECT_CACHE
DV_DC_POOLS V$DC_POOL
DV_DYNAMIC_VIEWS V$DYNAMIC_VIEW
DV_DYNAMIC_VIEW_COLS V$DYNAMIC_VIEW_COLUMN
DV_FREE_SPACE V$FREE_SPACE
DV_HA_SYNC_INFO V$HA_SYNC_INFO
DV_HBA V$HBA
DV_INSTANCE V$INSTANCE
DV_RUNNING_JOBS V$JOBS_RUNNING
DV_LATCHS V$LATCH
DV_LIBRARY_CACHE V$LIBRARYCACHE
DV_LOCKS V$LOCK
DV_LOCKED_OBJECTS V$LOCKED_OBJECT
DV_LOG_FILES V$LOGFILE
DV_LONG_SQL V$LONGSQL
DV_STANDBYS V$MANAGED_STANDBY
DV_ME V$ME
DV_OPEN_CURSORS V$OPEN_CURSOR
DV_PARAMETERS V$PARAMETER
DV_PL_MANAGER V$PL_MANAGER
DV_PL_REFSQLS V$PL_REFSQLS
DV_REACTOR_POOLS V$REACTOR_POOL
DV_REPL_STATUS V$REPL_STATUS
DV_RESOURCE_MAP V$RESOURCE_MAP
DV_SEGMENT_STATS V$SEGMENT_STATISTICS
DV_SESSIONS V$SESSION
DV_SESSION_EVENTS V$SESSION_EVENT
DV_SESSION_WAITS V$SESSION_WAIT
DV_GMA V$SGA
DV_GMA_STATS V$SGASTAT
DV_SPINLOCKS V$SPINLOCK
DV_SQLS V$SQLAREA
DV_SQL_POOL V$SQLPOOL
DV_SYS_STATS V$SYSSTAT
DV_SYSTEM V$SYSTEM
DV_SYS_EVENTS V$SYSTEM_EVENT
DV_TABLESPACES V$TABLESPACE
DV_TEMP_POOLS V$TEMP_POOL
DV_TEMP_UNDO_SEGMENT V$TEMP_UNDO_SEGMENT
DV_TRANSACTIONS V$TRANSACTION
DV_UNDO_SEGMENTS V$UNDO_SEGMENT
DV_USER_ADVISORY_LOCKS V$USER_ADVISORY_LOCKS
DV_USER_ASTATUS_MAP V$USER_ASTATUS_MAP
DV_USER_PARAMETERS V$USER_PARAMETER
DV_VERSION V$VERSION
DV_VM_FUNC_STACK V$VM_FUNC_STACK
DV_WAIT_STATS V$WAITSTAT
DV_XACT_LOCKS V$XACT_LOCK
JOB_THREADS JOB_QUEUE_PROCESSES
COMMIT_MODE COMMIT_LOGGING
COMMIT_WAIT_LOGGING COMMIT_WAIT
PAGE_CHECKSUM DB_BLOCK_CHECKSUM
ARCHIVE_CONFIG LOG_ARCHIVE_CONFIG
ARCHIVE_DEST_N LOG_ARCHIVE_DEST_n
ARCHIVE_DEST_STATE_N LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_FORMAT LOG_ARCHIVE_FORMAT
ARCHIVE_MAX_THREADS LOG_ARCHIVE_MAX_PROCESSES
ARCHIVE_MIN_SUCCEED_DEST LOG_ARCHIVE_MIN_SUCCEED_DEST
ARCHIVE_TRACE LOG_ARCHIVE_TRACE
CHECKPOINT_PERIOD CHECKPOINT_TIMEOUT
CHECKPOINT_PAGES CHECKPOINT_INTERVAL
TIMED_STATS TIMED_STATISTICS
STATS_LEVEL STATISTICS_LEVEL
FILE_OPTIONS FILESYSTEMIO_OPTIONS
Issue 04
Date 2019-12-28
Contents
5 Machine-to-Machine Interfaces
5.1 Common.py
5.2 Common.pyc
5.3 GaussLog.py
5.4 GaussLog.pyc
Overview
GaussDB 100 is a high-performance and high-reliability distributed relational
database developed by Huawei Technologies Co., Ltd., breaking the storage and
performance bottlenecks of a single server.
GaussDB 100 is compatible with the user habits of mainstream databases. You can
use native GaussDB 100 interface names or their corresponding names in the
mainstream databases. For details, see Interface Mapping (GaussDB 100 Native
Interface Names vs. Mainstream Database Interface Names). The interfaces
mentioned in this document use their native GaussDB 100 names.
Applicable Scope
This document is designed for all Huawei products that use GaussDB 100.
Intended Audience
This document is intended for all GaussDB 100 users.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Change History
Version Change Description Date
02 Added: 2019-04-05
● Descriptions of users SYS and public
in Users and Roles
Modified:
● Role STATISTICS added to Users and
Roles
● Procedure in Viewing Permissions of
a User or Role
● Directory and file permissions in
Configuring File Permission Security
Policies
● Configuring Client Access
Authentication
● Descriptions about how to change a
password in Configuring Password
Security Policies
Security threats are diverse and cannot be countered by technology alone. You
need to set up a security management system based on these security
maintenance suggestions to safeguard your application systems.
Patch Management
A patch can fix system faults and extend system functions. Manage program
patches under dedicated regulations and designate personnel to check for
operating system and GaussDB 100 patches. To install a patch, you must
contact Huawei technical support.
Defect Reporting
When GaussDB 100 is attacked, Huawei uses different processing methods based
on the attack situation.
If a user reports a system attack, Huawei will use either of the following methods
to resolve the problem:
● If a security accident occurs on site, Huawei technical support engineers will
provide remote or onsite support and work together with the user to quickly
resolve the problem and minimize the attack impact on the system.
● If no security accident occurs, Huawei technical support engineers will report
the problem to the R&D team. After the R&D team provides a solution,
Huawei technical support engineers will analyze the solution impact on the
onsite services and propose corresponding suggestions.
This chapter mainly describes how administrators set account permissions, file
permissions, and audit policies to ensure the high reliability and stability of a
database.
● User whitelist: You can add users to zhba.conf so that these users access the
database only through the IP addresses specified in zhba.conf.
● IP address whitelist: Only the IP addresses specified by TCP_INVITED_NODES
can be used to access the database.
● IP address blacklist: The IP addresses specified by TCP_EXCLUDED_NODES
cannot be used to access the database.
The IP address blacklist has the highest priority. If an IP address is configured in all
the three lists, it cannot be used for remote access.
When the user whitelist, IP address whitelist, and IP address blacklist are all
enabled:
● Users in the user whitelist can use the IP addresses in the user whitelist and IP
address whitelist to remotely connect to databases (the IP addresses must not
be in the IP address blacklist).
● If the IP address of a client is in the user whitelist (zhba.conf) or IP address
whitelist and not in the IP address blacklist, it will pass the verification for
login regardless of whether the user is in the user whitelist.
If user SYS locally logs in to a database in password-free mode, the login will not be limited
by the user whitelist, IP address whitelist, or IP address blacklist.
If user SYS logs in to a database using an encrypted password, the login will be limited by
the IP address blacklist.
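The precedence described above (the blacklist always wins; otherwise either the user whitelist or the IP address whitelist grants access) can be sketched as follows. This is an illustration of the documented rules only, not GaussDB code, and the function name and data shapes are hypothetical:

```python
def may_connect(ip, hba_entries, invited, excluded):
    """Illustrative remote-access decision.

    hba_entries: mapping of username -> set of allowed IPs (zhba.conf)
    invited:     IP address whitelist (TCP_INVITED_NODES)
    excluded:    IP address blacklist (TCP_EXCLUDED_NODES)
    """
    # The IP address blacklist has the highest priority.
    if ip in excluded:
        return False
    # An IP found in the user whitelist (zhba.conf) or the IP address
    # whitelist passes verification.
    in_hba = any(ip in ips for ips in hba_entries.values())
    return in_hba or ip in invited
```

Note that an IP listed in all three lists is rejected, matching the statement that the blacklist takes precedence over both whitelists.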
Precautions
● Before enabling IP address whitelist checking, ensure that at least one of
TCP_INVITED_NODES and TCP_EXCLUDED_NODES is set. Otherwise, the
error message "GS-00254: For invited and excluded nodes is both empty, ip
whitelist function can't be enabled" will be displayed.
● User SYS can only locally log in to a database.
Prerequisites
Before configuring a user whitelist, IP address blacklist, or IP address whitelist,
ensure that LSNR_ADDR and LSNR_PORT have been configured. Otherwise, the
configuration will not take effect. Do as follows:
Method 1:
Step 1 Check whether the listening IP address and port have been configured on the
server.
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = 'LSNR_ADDR';
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = 'LSNR_PORT';
----End
Method 2:
Step 1 Check whether the listening IP address and port have been configured on the
server.
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = 'LSNR_ADDR';
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = 'LSNR_PORT';
Step 2 Restart the database for the configurations of the listening IP address and
listening port number to take effect.
cd ${GSDB_DATA}/bin
python zctl.py -t stop
python zctl.py -t start
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs
the GaussDB 100 database.
Step 3 Add an HBA entry (TYPE, USER, and ADDRESS) to the zhba.conf file.
cd ${GSDB_DATA}/cfg
vim zhba.conf
host user 127.0.0.1,192.168.3.222,20AB::9217:acff:feab:fcd0/64
● ADDRESS lists the IP addresses allowed for database connections. Separate multiple IP
addresses with commas (,). HBA entries are independent from each other and their
order in the whitelist does not affect the whitelist functionality.
● If a username contains special characters such as number sign (#) and tab characters,
enclose the name with double quotation marks (""). In host "#abc" 127.0.0.1 and host
"abc" 127.0.0.1, the strings enclosed in the double quotation marks are usernames.
● If the username string is "*" or *, the entry matches all users.
● The IP addresses can be IPv4 or IPv6 addresses, or a network segment with the subnet
mask or prefix length specified. All the following formats are valid:
– 192.168.3.222 indicates an IPv4 host.
– 192.168.3.0/24 indicates an IPv4 segment with the specified subnet mask length
24.
– 20AB::9217:acff:feab:fcd0 indicates an IPv6 host.
– 20AB::9217:acff:feab:fcd0/64 indicates an IPv6 segment with the specified subnet
prefix length 64.
● When editing the zhba.conf file, do not press Tab to enter a space. Otherwise, the error
message "GS-00220, hba line(20) format is not correct" will be displayed when you load
the user whitelist online.
2. Run the following statement to load the user whitelist online. The whitelist
takes effect immediately after the statement is executed.
ALTER SYSTEM RELOAD HBA CONFIG;
3. Query the DV_HBA view to check whether the user whitelist is configured
successfully.
SELECT * FROM SYS.DV_HBA;
----End
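All four ADDRESS formats accepted in zhba.conf (IPv4 and IPv6 hosts, and segments with a subnet mask or prefix length) can be checked with Python's standard ipaddress module. The sketch below shows how a client address might be matched against one entry; the function name is hypothetical and this is not how GaussDB implements the check:

```python
import ipaddress

def address_matches(client_ip, entry):
    """Return True if client_ip falls under one zhba.conf ADDRESS entry.
    The entry may be a plain IPv4/IPv6 host or a CIDR-style segment
    such as 192.168.3.0/24 or 20AB::9217:acff:feab:fcd0/64."""
    ip = ipaddress.ip_address(client_ip)
    if "/" in entry:
        # strict=False tolerates host bits set in the segment entry.
        return ip in ipaddress.ip_network(entry, strict=False)
    return ip == ipaddress.ip_address(entry)
```

A full check against an HBA entry would apply this to each comma-separated address, consistent with the note that entry order does not affect the result.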
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs
the GaussDB 100 database.
Step 2 Query for a configured IP address whitelist and a configured IP address blacklist.
zsql gaussdba/database_123@127.0.0.1:1888
SELECT VALUE FROM DV_PARAMETERS WHERE NAME = 'TCP_INVITED_NODES';
SELECT VALUE FROM DV_PARAMETERS WHERE NAME = 'TCP_EXCLUDED_NODES';
Step 3 Configure the IP address whitelist or blacklist online. The configuration takes
effect immediately, and you do not need to restart the database.
Step 4 Enable IP address whitelist checking online. The function takes effect immediately,
and you do not need to restart the database.
ALTER SYSTEM SET TCP_VALID_NODE_CHECKING = true;
Run the following command to check whether the function takes effect:
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'TCP_VALID_NODE_CHECKING';
NAME VALUE
---------------------------------------------------------------- --------------------
TCP_VALID_NODE_CHECKING TRUE
----End
Database Permissions
● System permissions
With system permissions, users can create, modify, delete, and query database
objects, log in to a database, and grant permissions to other users.
For details about system permissions, see the system permission matrix table
in SQL Syntax Reference > SQL Syntax > GRANT in GaussDB 100
V300R001C00 R&D Documentation (Standalone).
By default, only database administrators (DBA) have the system permissions.
They can run the GRANT or REVOKE statement to grant system permissions
to or revoke system permissions from other users. If ADMIN OPTION is
contained in the GRANT statement for granting a permission to a user, the
user can grant this permission to other users.
For security purposes, grant system permissions only to reliable users.
● Object permissions
With object permissions, users can perform corresponding operations, such as
SELECT, INSERT, UPDATE, DELETE, EXECUTE, DROP, LOCK, TRUNCATE, and
ALTER on database objects.
Only object owners, database administrators, and authorized users with
GRANT OPTION can run the GRANT or REVOKE statement to grant or
revoke object permissions.
For details about object permissions, see the system permission matrix table
in SQL Syntax Reference > SQL Syntax > GRANT in GaussDB 100
V300R001C00 R&D Documentation (Standalone).
Database Roles
A role is a collection of users with the same database permissions. GaussDB 100
has the following preset roles:
● DBA
Database administrator role, which has all system permissions. You are advised
not to grant the DBA role to other users.
● RESOURCE
Base object creation role, which has the CREATE PROCEDURE, CREATE
SEQUENCE, CREATE TABLE, and CREATE TRIGGER permissions.
● CONNECT
Connection role, which has the CREATE SESSION permission.
● STATISTICS
Role that has the permissions to create, delete, and view WSR snapshots and to
generate WSR reports.
Procedure
Step 1 View all users.
SELECT * FROM DB_USERS;
The new password must meet the following password security requirements:
● Contain 8 to 64 characters.
● Start with a letter, number sign (#), or an underscore (_) if the password is
not enclosed in single quotation marks ('').
● Cannot be the same as the username or the username spelled backwards
(case-insensitive in verification).
● Contain only the following four character types, and include at least three of them:
– Digits
– Lowercase letters
– Uppercase letters
– Spaces or special characters (For details about the list of special
characters supported by GaussDB 100, see the table below.)
● If the password contains spaces or special characters other than _, #, and $, enclose
the password in single quotation marks ('').
● If the password contains the special character $, use the escape character \
when connecting to the database through zsql. Otherwise, the login will fail.
The following 32 special characters are supported:
` ~ ! @ # $ % ^ & * ( ) - _ = + \ | [ { } ] ; : ' " , < . > / ?
----End
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/gaussdb_123@192.168.0.1:1888
● Contain 8 to 64 characters.
● Start with a letter, number sign (#), or an underscore (_) if the password is
not enclosed in single quotation marks ('').
● Cannot be the same as the username or the username spelled backwards
(case-insensitive in verification).
● Contain only the following four character types, and include at least three of them:
– Digits
– Lowercase letters
– Uppercase letters
– Spaces or special characters (For details about the list of special
characters supported by GaussDB 100, see the table below.)
● If the password contains spaces or special characters other than _, #, and $, enclose
the password in single quotation marks ('').
● If the password contains the special character $, use the escape character \
when connecting to the database through zsql. Otherwise, the login will fail.
The following 31 special characters are supported:
` ~ ! @ # $ % ^ & * ( ) - _ = + \ | [ { } ] : ' " , < . > / ?
For details about user permissions, see SQL Syntax Reference > SQL Syntax >
GRANT in GaussDB 100 V300R001C00 R&D Documentation (Standalone).
----End
Examples
● Create a security administrator Tom, and grant the CREATE USER permission
to the administrator.
CREATE USER Tom IDENTIFIED BY '1234@abc';
GRANT CREATE USER TO Tom;
● Create a role role_r that has permission to query the films table, create a
user user_read, and grant role_r to user_read.
Prerequisites
● You have read through Users and Roles.
● The user or role to be viewed exists.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/gaussdb_123@192.168.0.1:1888
For example, to view the system permissions of user joe, run the following
command:
SELECT * FROM ADM_SYS_PRIVS WHERE GRANTEE ='JOE';
4 rows fetched.
----End
Prerequisites
● You have read through Users and Roles.
● The user to be modified exists.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/gaussdb_123@192.168.0.1:1888
You can run the GRANT or REVOKE statement to grant or revoke user
permissions.
● Grant permissions.
GRANT CREATE SESSION TO joe;
● Revoke permissions.
REVOKE CREATE SESSION FROM joe;
For details about the permissions, see SQL Syntax Reference > SQL Syntax >
GRANT in GaussDB 100 V300R001C00 R&D Documentation (Standalone).
The new password must meet the following password security requirements:
● Contain 8 to 64 characters.
● Start with a letter, number sign (#), or an underscore (_) if the password is
not enclosed in single quotation marks ('').
● Cannot be the same as the username or the username spelled backwards
(case-insensitive in verification).
● Contain only the following four character types, and include at least three of them:
– Digits
– Lowercase letters
– Uppercase letters
– Spaces or special characters (For details about the list of special
characters supported by GaussDB 100, see the table below.)
● If the password contains spaces or special characters other than _, #, and $, enclose
the password in single quotation marks ('').
● If the password contains the special character $, use the escape character \
when connecting to the database through zsql. Otherwise, the login will fail.
The following 31 special characters are supported:
` ~ ! @ # $ % ^ & * ( ) - _ = + \ | [ { } ] : ' " , < . > / ?
----End
Prerequisites
The user to be deleted exists.
Related Concepts
● When running DROP USER to delete a user, you must use CASCADE to delete
the referenced objects (excluding databases) of the user. The locked
referenced objects of a user cannot be deleted until they are unlocked or the
processes that lock them are killed.
● When DROP USER is used to delete a user, the user database will not be
deleted.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/gaussdb_123@192.168.0.1:1888
----End
Password Complexity
You need to specify a password when creating a database, creating a user, or
modifying a user. The password complexity must meet requirements. Otherwise,
you will be prompted to enter the password again. The password complexity must
meet the following requirements:
● Contain 8 to 64 characters.
● Start with a letter, number sign (#), or an underscore (_) if the password is
not enclosed in single quotation marks ('').
● Cannot be the same as the username or the username spelled backwards
(case-insensitive in verification).
● Contain only the following four character types, and include at least three of them:
– Digits
– Lowercase letters
– Uppercase letters
– Spaces or special characters (For details about the list of special
characters supported by GaussDB 100, see the table below.)
● If the password contains spaces or special characters other than _, #, and $, enclose
the password in single quotation marks ('').
● If the password contains the special character $, use the escape character \
when connecting to the database through zsql. Otherwise, the login will fail.
The following 31 special characters are supported:
` ~ ! @ # $ % ^ & * ( ) - _ = + \ | [ { } ] : ' " , < . > / ?
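As a hedged illustration (a Python sketch, not GaussDB 100 code; the server-side checks may differ in detail, and the quoting and leading-character rules depend on how the password is passed to SQL, so they are omitted), the complexity rules above can be expressed as:

```python
# Special characters from the table above; this exact set is an
# assumption for illustration (some copies of the table also list ;).
SPECIALS = set("`~!@#$%^&*()-_=+\\|[{}];:'\",<.>/?")

def check_password(password: str, username: str) -> bool:
    """Sketch of the documented rules: 8-64 characters, not the
    username or its reverse (case-insensitive), and at least three
    of the four character types."""
    if not 8 <= len(password) <= 64:
        return False
    low, user = password.lower(), username.lower()
    if low == user or low == user[::-1]:
        return False
    types = [
        any(c.isdigit() for c in password),                # digits
        any(c.islower() for c in password),                # lowercase letters
        any(c.isupper() for c in password),                # uppercase letters
        any(c == " " or c in SPECIALS for c in password),  # spaces/specials
    ]
    return sum(types) >= 3
```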
Password Reuse
A password can be reused only when it meets the requirements of reuse days
(PASSWORD_REUSE_TIME) and reuse times (PASSWORD_REUSE_MAX).
Large values of the two parameters bring higher security. However, if the values of the
parameters are set too large, inconvenience may occur. The default values of the two
parameters meet the security requirements. You can change the parameter values as
needed for higher security.
● PASSWORD_REUSE_TIME
Specifies the number of days during which a password cannot be reused.
The value is a positive number. The integral part indicates the number of days, and
the decimal part represents hours, minutes, and seconds.
If the parameter value is changed to a smaller one, new passwords will be
checked based on the new parameter value.
If the parameter value is changed to a larger one (for example, from a to b),
historical passwords used before b days ago may still be reusable, because their
records may have already been deleted. New passwords will be checked based on
the new parameter value. Historical passwords are recorded using absolute time,
so changes to the system time are not recognized.
● PASSWORD_REUSE_MAX
Specifies the number of password changes required before the current password
can be reused. If the parameter value is changed to a smaller one, new passwords
will be checked based on the new parameter value. If the parameter value is
changed to a larger one (for example, from a to b), historical passwords beyond
the last b changes may still be reusable, because their records may have already
been deleted. New passwords will be checked based on the new parameter value.
PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME must be set in
conjunction with each other.
Set the two parameters as follows:
– If PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME are both set to
UNLIMITED, the password can be reused without any restrictions.
– If PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME are both set to
specified values, the password can be reused only when the conditions
specified by both parameters are met.
– If one of PASSWORD_REUSE_MAX and PASSWORD_REUSE_TIME is set to a
specified value (a positive integer) and the other is set to UNLIMITED, the
password cannot be reused.
For example, run the following commands:
-- View the configured PASSWORD_REUSE_TIME:
SELECT * FROM ADM_PROFILES WHERE RESOURCE_NAME = 'PASSWORD_REUSE_TIME';
-- Change the value of PASSWORD_REUSE_TIME to 60:
ALTER PROFILE DEFAULT LIMIT PASSWORD_REUSE_TIME 60;
-- View the configured PASSWORD_REUSE_MAX:
SELECT * FROM ADM_PROFILES WHERE RESOURCE_NAME='PASSWORD_REUSE_MAX';
-- Change the value of PASSWORD_REUSE_MAX to 3:
ALTER PROFILE DEFAULT LIMIT PASSWORD_REUSE_MAX 3;
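The way the two parameters combine can be sketched as follows (illustrative Python only, with None standing in for UNLIMITED; not GaussDB 100 code, and the exact boundary comparisons are assumptions):

```python
UNLIMITED = None  # stand-in for the UNLIMITED setting

def can_reuse(days_since_use, changes_since_use,
              reuse_time=60, reuse_max=3):
    """Sketch of the combined rule: both UNLIMITED -> always reusable;
    exactly one UNLIMITED -> never reusable; both set -> both the
    day condition and the change-count condition must be met."""
    if reuse_time is UNLIMITED and reuse_max is UNLIMITED:
        return True
    if reuse_time is UNLIMITED or reuse_max is UNLIMITED:
        return False
    return days_since_use >= reuse_time and changes_since_use >= reuse_max
```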
● PASSWORD_LIFE_TIME
Specifies the maximum number of days that a password can be used.
Default value: 180
Password Change
During database creation, the database administrator SYS is created. The
password of SYS needs to be periodically changed for account security. It is
recommended that the database administrators and common users periodically
change their own passwords to prevent password leakage. Database
administrators can change their own and common users' passwords. If common
users forget their passwords, they can ask the administrators to change their
passwords.
● A database administrator cannot change the password of another
administrator.
● A database administrator can change the password of a common user
without being required to provide the common user's old password.
● A database administrator can change its own password but is required to
provide the old password.
For example, to change the password of user Tom, run the following command:
ALTER USER Tom IDENTIFIED BY '1234@abc' REPLACE '5678@def';
1234@abc and 5678@def are the new password and old password of user Tom,
respectively. The new password must comply with the complexity requirements.
Otherwise, the password change will fail.
NOTICE
Active users cannot be deleted. Disconnect their sessions before deleting them.
When a user is deleted with CASCADE, all the objects belonging to the user are
also deleted.
Configuration Description
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/gaussdb_123@192.168.0.1:1888
Table 3-7 lists the audit objects and the corresponding open flags. To audit
multiple objects, set AUDIT_LEVEL to the sum of their flag values. For example, to
audit DDL, DCL, and DML operations at the same time, set AUDIT_LEVEL to 7 (1 + 2 + 4).
DDL: 1 (binary 00000001)
DCL: 2 (binary 00000010)
DML: 4 (binary 00000100)
PL: 8 (binary 00001000)
The change to the audit logging level takes effect immediately after the statement
is executed and you do not need to restart the database.
----End
Prerequisites
● Audit has been enabled.
● GaussDB 100 is running properly and operations such as addition,
modification, deletion, and queries have been performed in the database.
Otherwise, no audit results will be available.
Procedure
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs
the GaussDB 100 database.
Step 2 Go to the audit log directory.
cd $GSDB_DATA/log/audit
----End
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/gaussdb_123@192.168.0.1:1888
default value is 10. Users can increase the value as required, but the performance
will be affected.
● Run the following commands to set the disk space for each audit file:
-- View the currently configured space:
SELECT NAME, VALUE FROM SYS.DV_PARAMETERS WHERE NAME='_AUDIT_MAX_FILE_SIZE';
NAME VALUE
------------------------------ ----------------------------------------------------------------
_AUDIT_MAX_FILE_SIZE 1M
1 rows fetched.
-- Change the value of _AUDIT_MAX_FILE_SIZE to 10M:
ALTER SYSTEM SET _AUDIT_MAX_FILE_SIZE=10M;
● Run the following commands to set the maximum number of audit files:
-- View the currently configured number:
SELECT NAME, VALUE FROM SYS.DV_PARAMETERS WHERE NAME='_AUDIT_BACKUP_FILE_COUNT';
NAME VALUE
------------------------------ ----------------------------------------------------------------
_AUDIT_BACKUP_FILE_COUNT 2
1 rows fetched.
-- Change the value of _AUDIT_BACKUP_FILE_COUNT to 5:
ALTER SYSTEM SET _AUDIT_BACKUP_FILE_COUNT=5;
----End
Related Concepts
When being installed, GaussDB 100 automatically configures the permissions for
its files, including files (such as log files) generated during the running process.
File permissions are configured as follows:
Suggestions
When being installed, GaussDB 100 automatically configures the permissions for
its files, including files (such as log files) generated during the running process.
The specified permissions meet requirements in most scenarios. If you have any
special requirements for the related permissions, you are advised to periodically
check the permission settings to ensure that the permissions meet the product
requirements.
This chapter describes how administrators change the system password and check
system accounts, processes, services, and ports to ensure normal running of the
operating system and database.
You are advised to update the password periodically for the privileged user root of
the Linux operating system. A new password must comply with secure password
policies.
You are advised to check the running accounts in the operating system and
application system quarterly to detect unsuitable accounts or account permissions,
periodically update account passwords, and delete unnecessary accounts.
----End
5 Machine-to-Machine Interfaces
5.1 Common.py
Description
Common.py is used to check whether directories exist and delete temporary files.
It is a common function library for other tools and cannot be executed
independently.
5.2 Common.pyc
Description
Common.pyc is a bytecode file. Invoking Common.py generates a Common.pyc
file, which cannot be executed independently.
5.3 GaussLog.py
Description
GaussLog.py is used to define log levels, read and write logs, as well as enable
and disable logging. It is a common function library for other tools and cannot be
executed independently.
5.4 GaussLog.pyc
Description
GaussLog.pyc is a bytecode file. Invoking GaussLog.py generates a GaussLog.pyc
file, which cannot be executed independently.
SYS_BACKUP_SETS BACKUP_SET$
SYS_COLUMNS COLUMN$
SYS_COMMENTS COMMENT$
SYS_CONSTRAINT_DEFS CONSDEF$
SYS_DATA_NODES DATA_NODES$
EXP_TAB_ORDERS DBA_EXP$TBL_ORDER
EXP_TAB_RELATIONS DBA_EXP$TBL_RELATIONS
SYS_DEPENDENCIES DEPENDENCY$
SYS_DISTRIBUTE_RULES DISTRIBUTE_RULE$
SYS_DISTRIBUTE_STRATEGIES DISTRIBUTE_STRATEGY$
SYS_DUMMY DUAL
SYS_EXTERNAL_TABLES EXTERNAL$
SYS_GARBAGE_SEGMENTS GARBAGE_SEGMENT$
SYS_HISTGRAM_ABSTR HIST_HEAD$
SYS_HISTGRAM HISTGRAM$
SYS_INDEXES INDEX$
SYS_INDEX_PARTS INDEXPART$
SYS_JOBS JOB$
SYS_LINKS LINK$
SYS_LOBS LOB$
SYS_LOB_PARTS LOBPART$
SYS_LOGIC_REPL LOGIC_REP$
SYS_DML_STATS MON_MODS_ALL$
SYS_OBJECT_PRIVS OBJECT_PRIVS$
SYS_PART_COLUMNS PARTCOLUMN$
SYS_PART_OBJECTS PARTOBJECT$
SYS_PART_STORES PARTSTORE$
SYS_PENDING_DIST_TRANS PENDING_DISTRIBUTED_TRANS$
SYS_PENDING_TRANS PENDING_TRANS$
SYS_PROCS PROC$
SYS_PROC_ARGS PROC_ARGS$
SYS_PROFILE PROFILE$
SYS_RECYCLEBIN RECYCLEBIN$
SYS_ROLES ROLES$
SYS_SEQUENCES SEQUENCE$
SYS_SHADOW_INDEXES SHADOW_INDEX$
SYS_SHADOW_INDEX_PARTS SHADOW_INDEXPART$
SYS_SYNONYMS SYNONYM$
SYS_PRIVS SYS_PRIVS$
SYS_TABLES TABLE$
SYS_TABLE_PARTS TABLEPART$
SYS_TMP_SEG_STATS TMP_SEG_STAT$
SYS_USERS USER$
SYS_USER_HISTORY USER_HISTORY$
SYS_USER_ROLES USER_ROLES$
SYS_VIEWS VIEW$
SYS_VIEW_COLS VIEWCOL$
SYS_SQL_MAPS SQL_MAP$
WSR_PARAMETER WRH$_PARAMETER
WSR_SQLAREA WRH$_SQLAREA
WSR_SYS_STAT WRH$_SYSSTAT
WSR_SYSTEM WRH$_SYSTEM
WSR_SYSTEM_EVENT WRH$_SYSTEM_EVENT
WSR_SNAPSHOT WRM$_SNAPSHOT
WSR_CONTROL WRM$_WR_CONTROL
WSR_DBA_SEGMENTS WSR$_DBA_SEGMENTS
WSR_LATCH WSR$_LATCH
WSR_LIBRARYCACHE WSR$_LIBRARYCACHE
WSR_SEGMENT WSR$_SEGMENT
WSR_SQL_LIST WSR$SQL_LIST
WSR_WAITSTAT WSR$_WAITSTAT
DB_DB_LINKS ALL_DB_LINKS
DB_IND_STATISTICS ALL_IND_STATISTICS
DB_JOBS ALL_JOBS
DB_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS
DB_USERS ALL_USERS
DB_USER_SYS_PRIVS ALL_USER_SYS_PRIVS
ADM_ARGUMENTS DBA_ARGUMENTS
ADM_BACKUP_SET DBA_BACKUP_SET
ADM_COL_COMMENTS DBA_COL_COMMENTS
ADM_CONSTRAINTS DBA_CONSTRAINTS
ADM_DATA_FILES DBA_DATA_FILES
ADM_DBLINK_TABLES DBA_DBLINK_TABLES
ADM_DBLINK_TAB_COLUMNS DBA_DBLINK_TAB_COLUMNS
ADM_DEPENDENCIES DBA_DEPENDENCIES
ADM_FREE_SPACE DBA_FREE_SPACE
ADM_HISTOGRAMS DBA_HISTOGRAMS
ADM_HIST_DBASEGMENTS DBA_HIST_DBASEGMENTS
ADM_HIST_LATCH DBA_HIST_LATCH
ADM_HIST_LIBRARYCACHE DBA_HIST_LIBRARYCACHE
ADM_HIST_LONGSQL DBA_HIST_LONGSQL
ADM_HIST_PARAMETER DBA_HIST_PARAMETER
ADM_HIST_SEGMENT DBA_HIST_SEGMENT
ADM_HIST_SNAPSHOT DBA_HIST_SNAPSHOT
ADM_HIST_SQLAREA DBA_HIST_SQLAREA
ADM_HIST_SYSSTAT DBA_HIST_SYSSTAT
ADM_HIST_SYSTEM DBA_HIST_SYSTEM
ADM_HIST_SYSTEM_EVENT DBA_HIST_SYSTEM_EVENT
ADM_HIST_WAITSTAT DBA_HIST_WAITSTAT
ADM_HIST_WR_CONTROL DBA_HIST_WR_CONTROL
ADM_INDEXES DBA_INDEXES
ADM_IND_COLUMNS DBA_IND_COLUMNS
ADM_IND_PARTITIONS DBA_IND_PARTITIONS
ADM_IND_STATISTICS DBA_IND_STATISTICS
ADM_JOBS DBA_JOBS
ADM_JOBS_RUNNING DBA_JOBS_RUNNING
ADM_OBJECTS DBA_OBJECTS
ADM_PART_COL_STATISTICS DBA_PART_COL_STATISTICS
ADM_PART_KEY_COLUMNS DBA_PART_KEY_COLUMNS
ADM_PART_STORE DBA_PART_STORE
ADM_PART_TABLES DBA_PART_TABLES
ADM_PROCEDURES DBA_PROCEDURES
ADM_PROFILES DBA_PROFILES
ADM_ROLES DBA_ROLES
ADM_ROLE_PRIVS DBA_ROLE_PRIVS
ADM_SEGMENTS DBA_SEGMENTS
ADM_SEQUENCES DBA_SEQUENCES
ADM_SOURCE DBA_SOURCE
ADM_SYNONYMS DBA_SYNONYMS
ADM_SYS_PRIVS DBA_SYS_PRIVS
ADM_TABLES DBA_TABLES
ADM_TABLESPACES DBA_TABLESPACES
ADM_TAB_COLS DBA_TAB_COLS
ADM_TAB_COLUMNS DBA_TAB_COLUMNS
ADM_TAB_COL_STATISTICS DBA_TAB_COL_STATISTICS
ADM_TAB_COMMENTS DBA_TAB_COMMENTS
ADM_TAB_DISTRIBUTE DBA_TAB_DISTRIBUTE
ADM_TAB_MODIFICATIONS DBA_TAB_MODIFICATIONS
ADM_TAB_PARTITIONS DBA_TAB_PARTITIONS
ADM_TAB_PRIVS DBA_TAB_PRIVS
ADM_TAB_STATISTICS DBA_TAB_STATISTICS
ADM_TRIGGERS DBA_TRIGGERS
ADM_USERS DBA_USERS
ADM_VIEWS DBA_VIEWS
ADM_VIEW_COLUMNS DBA_VIEW_COLUMNS
DB_ARGUMENTS ALL_ARGUMENTS
DB_COL_COMMENTS ALL_COL_COMMENTS
DB_CONSTRAINTS ALL_CONSTRAINTS
DB_DBLINK_TABLES ALL_DBLINK_TABLES
DB_DBLINK_TAB_COLUMNS ALL_DBLINK_TAB_COLUMNS
DB_DEPENDENCIES ALL_DEPENDENCIES
DB_DISTRIBUTE_RULES ALL_DISTRIBUTE_RULES
DB_DIST_RULE_COLS ALL_DIST_RULE_COLS
DB_HISTOGRAMS ALL_HISTOGRAMS
DB_INDEXES ALL_INDEXES
DB_IND_COLUMNS ALL_IND_COLUMNS
DB_IND_PARTITIONS ALL_IND_PARTITIONS
DB_OBJECTS ALL_OBJECTS
DB_PART_COL_STATISTICS ALL_PART_COL_STATISTICS
DB_PART_KEY_COLUMNS ALL_PART_KEY_COLUMNS
DB_PART_STORE ALL_PART_STORE
DB_PART_TABLES ALL_PART_TABLES
DB_PROCEDURES ALL_PROCEDURES
DB_SEQUENCES ALL_SEQUENCES
DB_SOURCE ALL_SOURCE
DB_SYNONYMS ALL_SYNONYMS
DB_TABLES ALL_TABLES
DB_TAB_COLS ALL_TAB_COLS
DB_TAB_COLUMNS ALL_TAB_COLUMNS
DB_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS
DB_TAB_COMMENTS ALL_TAB_COMMENTS
DB_TAB_DISTRIBUTE ALL_TAB_DISTRIBUTE
DB_TAB_PARTITIONS ALL_TAB_PARTITIONS
DB_TAB_STATISTICS ALL_TAB_STATISTICS
DB_TRIGGERS ALL_TRIGGERS
DB_VIEWS ALL_VIEWS
DB_VIEW_COLUMNS ALL_VIEW_COLUMNS
ROLE_SYS_PRIVS ROLE_SYS_PRIVS
MY_ARGUMENTS USER_ARGUMENTS
MY_COL_COMMENTS USER_COL_COMMENTS
MY_CONSTRAINTS USER_CONSTRAINTS
MY_CONS_COLUMNS USER_CONS_COLUMNS
MY_DEPENDENCIES USER_DEPENDENCIES
MY_FREE_SPACE USER_FREE_SPACE
MY_HISTOGRAMS USER_HISTOGRAMS
MY_INDEXES USER_INDEXES
MY_IND_COLUMNS USER_IND_COLUMNS
MY_IND_PARTITIONS USER_IND_PARTITIONS
MY_IND_STATISTICS USER_IND_STATISTICS
MY_JOBS USER_JOBS
MY_OBJECTS USER_OBJECTS
MY_PART_COL_STATISTICS USER_PART_COL_STATISTICS
MY_PART_KEY_COLUMNS USER_PART_KEY_COLUMNS
MY_PART_STORE USER_PART_STORE
MY_PART_TABLES USER_PART_TABLES
MY_PROCEDURES USER_PROCEDURES
MY_ROLE_PRIVS USER_ROLE_PRIVS
MY_SEGMENTS USER_SEGMENTS
MY_SEQUENCES USER_SEQUENCES
MY_SOURCE USER_SOURCE
MY_SQL_MAPS USER_SQL_MAPS
MY_SYNONYMS USER_SYNONYMS
MY_SYS_PRIVS USER_SYS_PRIVS
MY_TABLES USER_TABLES
MY_TAB_COLS USER_TAB_COLS
MY_TAB_COLUMNS USER_TAB_COLUMNS
MY_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS
MY_TAB_COMMENTS USER_TAB_COMMENTS
MY_TAB_DISTRIBUTE USER_TAB_DISTRIBUTE
MY_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS
MY_TAB_PARTITIONS USER_TAB_PARTITIONS
MY_TAB_PRIVS USER_TAB_PRIVS
MY_TAB_STATISTICS USER_TAB_STATISTICS
MY_TRIGGERS USER_TRIGGERS
MY_USERS USER_USERS
MY_VIEWS USER_VIEWS
MY_VIEW_COLUMNS USER_VIEW_COLUMNS
NLS_SESSION_PARAMETERS NLS_SESSION_PARAMETERS
DV_ALL_TRANS V$ALL_TRANSACTION
DV_ARCHIVED_LOGS V$ARCHIVED_LOG
DV_ARCHIVE_DEST_STATUS V$ARCHIVE_DEST_STATUS
DV_ARCHIVE_GAPS V$ARCHIVE_GAP
DV_ARCHIVE_THREADS V$ARCHIVE_PROCESSES
DV_BACKUP_PROCESSES V$BACKUP_PROCESS
DV_BUFFER_POOLS V$BUFFER_POOL
DV_BUFFER_POOL_STATS V$BUFFER_POOL_STATISTICS
DV_CONTROL_FILES V$CONTROLFILE
DV_DATABASE V$DATABASE
DV_DATA_FILES V$DATAFILE
DV_OBJECT_CACHE V$DB_OBJECT_CACHE
DV_DC_POOLS V$DC_POOL
DV_DYNAMIC_VIEWS V$DYNAMIC_VIEW
DV_DYNAMIC_VIEW_COLS V$DYNAMIC_VIEW_COLUMN
DV_FREE_SPACE V$FREE_SPACE
DV_HA_SYNC_INFO V$HA_SYNC_INFO
DV_HBA V$HBA
DV_INSTANCE V$INSTANCE
DV_RUNNING_JOBS V$JOBS_RUNNING
DV_LATCHS V$LATCH
DV_LIBRARY_CACHE V$LIBRARYCACHE
DV_LOCKS V$LOCK
DV_LOCKED_OBJECTS V$LOCKED_OBJECT
DV_LOG_FILES V$LOGFILE
DV_LONG_SQL V$LONGSQL
DV_STANDBYS V$MANAGED_STANDBY
DV_ME V$ME
DV_OPEN_CURSORS V$OPEN_CURSOR
DV_PARAMETERS V$PARAMETER
DV_PL_MANAGER V$PL_MANAGER
DV_PL_REFSQLS V$PL_REFSQLS
DV_REACTOR_POOLS V$REACTOR_POOL
DV_REPL_STATUS V$REPL_STATUS
DV_RESOURCE_MAP V$RESOURCE_MAP
DV_SEGMENT_STATS V$SEGMENT_STATISTICS
DV_SESSIONS V$SESSION
DV_SESSION_EVENTS V$SESSION_EVENT
DV_SESSION_WAITS V$SESSION_WAIT
DV_GMA V$SGA
DV_GMA_STATS V$SGASTAT
DV_SPINLOCKS V$SPINLOCK
DV_SQLS V$SQLAREA
DV_SQL_POOL V$SQLPOOL
DV_SYS_STATS V$SYSSTAT
DV_SYSTEM V$SYSTEM
DV_SYS_EVENTS V$SYSTEM_EVENT
DV_TABLESPACES V$TABLESPACE
DV_TEMP_POOLS V$TEMP_POOL
DV_TEMP_UNDO_SEGMENT V$TEMP_UNDO_SEGMENT
DV_TRANSACTIONS V$TRANSACTION
DV_UNDO_SEGMENTS V$UNDO_SEGMENT
DV_USER_ADVISORY_LOCKS V$USER_ADVISORY_LOCKS
DV_USER_ASTATUS_MAP V$USER_ASTATUS_MAP
DV_USER_PARAMETERS V$USER_PARAMETER
DV_VERSION V$VERSION
DV_VM_FUNC_STACK V$VM_FUNC_STACK
DV_WAIT_STATS V$WAITSTAT
DV_XACT_LOCKS V$XACT_LOCK
JOB_THREADS JOB_QUEUE_PROCESSES
COMMIT_MODE COMMIT_LOGGING
COMMIT_WAIT_LOGGING COMMIT_WAIT
PAGE_CHECKSUM DB_BLOCK_CHECKSUM
ARCHIVE_CONFIG LOG_ARCHIVE_CONFIG
ARCHIVE_DEST_N LOG_ARCHIVE_DEST_n
ARCHIVE_DEST_STATE_N LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_FORMAT LOG_ARCHIVE_FORMAT
ARCHIVE_MAX_THREADS LOG_ARCHIVE_MAX_PROCESSES
ARCHIVE_MIN_SUCCEED_DEST LOG_ARCHIVE_MIN_SUCCEED_DEST
ARCHIVE_TRACE LOG_ARCHIVE_TRACE
CHECKPOINT_PERIOD CHECKPOINT_TIMEOUT
CHECKPOINT_PAGES CHECKPOINT_INTERVAL
TIMED_STATS TIMED_STATISTICS
STATS_LEVEL STATISTICS_LEVEL
FILE_OPTIONS FILESYSTEMIO_OPTIONS
Overview
GaussDB 100 is an enterprise-level relational database engine developed by Huawei. It
features high performance, high availability, high scalability, and easy O&M; and can run
stably and efficiently on the x86 open architecture. GaussDB 100 supports SQL standards and
the syntax of mainstream commercial databases, facilitating application development and
migration. Using the secure, reliable storage provided by GaussDB 100 for relational and
structured data, you can develop and manage highly available, high-performance service
applications in finance, telecom, cloud, and industry digitalization fields. The framework of
GaussDB 100 is component-based and can be used for a standalone database or a cluster.
This document applies to the following scenarios:
● The communication with the server is normal, but a serious database fault or service
exception occurs. For example:
– Database fault
– OS fault
– Hardware fault
● The communication with the server is abnormal, and the server cannot be remotely
connected. For example:
– Network fault
– Other hardware fault
GaussDB 100 software packages are classified into basic and compatible packages. They
differ in the names of various interfaces. A compatible package is used to offer compatibility
with the usage habits of mainstream databases in the industry. The interfaces mentioned in
this document use names from basic packages. If you have installed compatible packages, you
can use either the interface names of basic packages or those of compatible packages by
referring to Interface Mapping (Basic Packages vs. Compatible Packages). For details
about how to install the basic and compatible packages, see "Installation and Deployment" in
GaussDB 100 V300R001C00 User Guide (Standalone).
Intended Audience
The document is intended for GaussDB 100 database administrators and provides guidance
for emergency maintenance and troubleshooting.
The following information is mandatory for a database administrator:
● Knowledge about relational databases. This theory helps you get familiar with GaussDB
100 and its usage.
● Knowledge about OSs. You will need it when you install, run, and maintain GaussDB
100.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Example Conventions
The following table describes some example information in this document. You can replace
the example information as needed.
Information Description
Parameters of GaussDB 100 tools are parsed in sequence. If a parameter is specified
multiple times, the last value takes effect.
[ x | y | ... ] Indicates that one item is selected from two or more options or no
item is selected.
{ x | y | ... } Indicates that one item is selected from two or more options.
{ x | y | ... } [ ... ] Indicates that at least one parameter can be selected. If multiple
parameters are selected, separate them with spaces.
{ x | y | ... } [ ,... ] Indicates that at least one parameter can be selected. If multiple
parameters are selected, separate them with commas (,).
Change History
Issue Change Time
Description
03 2019-06-06 Added: Pre-check Fails During Automatic HA Upgrade
2 Troubleshooting Process
3 Fault Demarcation
When a fault occurs, identify the fault location (for example, hardware, network, OS, or
database) and take troubleshooting measures accordingly to restore database services. To
locate a fault, perform the following steps:
Step 1 Log in to the server where GaussDB 100 is deployed using SSH or other remote login tools.
If the OS encounters a kernel panic (an internal fatal error is detected), the system takes about
20 minutes to restart. Try to connect to the server again every 5 minutes. If the connection
fails for 20 minutes, it indicates that the server fails to respond due to a fault or the network
connectivity is abnormal. In this case, contact the administrator for further fault locating on
site.
If the connection fails, follow the instructions provided in Locating Network Faults.
Step 2 Locally log in to a server where GaussDB 100 is deployed as user root.
If you cannot log in to the server, follow the instructions provided in Locating OS Faults.
If the zengine status is open, mount, or nomount, it indicates that the database service is
started.
gaussdba 14342 2.5 6.8 1059336 510404 ? Sl May21 104:44 /home/
gaussdba/app/bin/zengine open -D /home/gaussdba/data
If the process is not displayed, follow the instructions provided in Locating Database Faults.
Common disk faults include insufficient disk space, bad blocks of disks, and unmounted
disks.
1. Check whether any unmounted disks exist.
fdisk -l
If the disk has the preceding problems, follow the instructions provided in Locating Disk
Faults.
----End
4 Fault Locating
This section describes how to locate faults after determining the fault types by referring to
Fault Demarcation.
Step 1 Locally log in to a server where GaussDB 100 is deployed as user root.
If you cannot log in to the server, follow the instructions provided in Locating OS Faults.
Step 2 Log in to the GaussDB 100 database.
zsql
conn jack/database_123@192.168.0.1:1888
jack/database_123 indicates the username and password used for logging in to the database.
192.168.0.1 indicates the IP address of the database server. 1888 indicates the connected port.
Step 3 Run the following SQL statement to query running SQL statements:
SELECT SID,SERIAL#, EVENT, PROGRAM, CLIENT_IP, (SYSDATE - SQL_EXEC_START)*86400,
WAIT_SID, CURRENT_SQL,SQL_ID, MODULE FROM DV_SESSIONS WHERE STATUS = 'ACTIVE';
● If stopping the blocked session affects the normal running of the database, you are
advised to contact Huawei technical support to confirm whether the session can be
stopped.
----End
If the login is successful and the system responds slowly, collect the following information:
l Online users
-- View online users.
who
l CPU usage
Check whether any processes cause high CPU usage.
top -H
l I/O usage
iostat -x 1 3
Device:  rrqm/s  wrqm/s   r/s    w/s   rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
xvda       0.01    1.10  0.07   1.01    0.83   10.25     20.40      0.03  24.88     4.04    26.36   1.38   0.15
xvde       0.00    0.39  0.13   1.62    3.47   33.83     42.78      0.03  18.01     4.94    19.05   1.21   0.21
dm-0       0.00    0.00  0.13   2.01    3.47   33.83     34.94      0.03  15.71     4.95    16.40   0.99   0.21
– rrqm/s: number of merged read requests per second, that is, delta(rmerge)/s
– wrqm/s: number of merged write requests per second, that is, delta(wmerge)/s
– r/s: number of read I/O operations issued to the device per second, that is, delta(rio)/s
– w/s: number of write I/O operations issued to the device per second, that is, delta(wio)/s
– rkB/s: number of kilobytes read per second, which is half of rsec/s because each sector
is 512 bytes
– wkB/s: number of kilobytes written per second, which is half of wsec/s
– avgrq-sz: average size of data (sectors) of each I/O operation performed on a
device, that is, delta(rsect + wsect)/delta(rio + wio)
– avgqu-sz: average I/O queue length, that is, delta(aveq)/s/1000 (the unit of aveq is
ms)
– await: average wait time (in ms) for each I/O operation, that is, delta(ruse + wuse)/
delta(rio + wio)
– svctm: average service time (in ms) for each I/O operation, that is, delta(use)/
delta(rio + wio)
– %util: what percentage of a second is used for I/O operations, or in what
percentage of a second the I/O queue is not empty, that is, delta(usr)/s/1000 (the
unit of usr is ms)
l Memory usage
Use the top command to check which processes occupy more memory than expected.
vmstat 1 3
l OS status
– View system log information (/var/log/messages) or dmesg information as user
root to check whether errors occurred in the OS.
– Run the sysctl -a and cat /etc/sysctl.conf commands as user root to obtain system
parameter information.
– Run uname -a to query system kernel information.
– Run the required command to query the OS version.
n To check the SUSE Linux OS version, run cat /etc/SuSE-release.
n To check the Red Hat version, run cat /etc/redhat-release.
n To check the EulerOS version, run cat /etc/euleros-release.
– Use the cat /proc/cpuinfo and cat /proc/meminfo commands to obtain CPU and
memory information.
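The OS status checks listed above can be gathered in one pass. The following is a sketch; the output path /tmp/os_diag.txt is an arbitrary choice, not part of the product:

```shell
# Collect the basic OS diagnostics described above into a single file
# for later analysis (output path is an example).
OUT=/tmp/os_diag.txt
{
  echo "== kernel ==";  uname -a
  echo "== params ==";  sysctl -a 2>/dev/null | head -n 5
  echo "== cpu ==";     head -n 5 /proc/cpuinfo
  echo "== memory ==";  head -n 5 /proc/meminfo
} > "$OUT"
```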
For details about error codes, see GaussDB 100 V300R001C00 Error Code Reference.
l Run Logs
A run log prints GaussDB 100 running information. When a database is faulty, examine
the zengine.rlog file.
The log directory is $GSDB_DATA/log/run/zengine.rlog by default.
l Debug Logs
A debug log prints debug information during GaussDB 100 running. When a database is
faulty and debug logging has been enabled, examine the zengine.dlog file.
The log directory is $GSDB_DATA/log/debug/zengine.dlog by default.
l Alarm Logs
An alarm log prints alarm information during GaussDB 100 running. For details about
the alarm information, see zenith_alarm.log.
The log directory is $GSDB_DATA/log/zenith_alarm.log.
l zctl Logs
A zctl log prints information about O&M operations performed by zctl.py. To obtain
complete O&M information of GaussDB 100, examine the zctl-yyyy-mm-dd_xxx.log file
and the zenithstatus.log file, which contains output information upon database startup.
The log directory is $GSDB_DATA/log/zctl-yyyy-mm-dd_xxx.log.
l Startup Logs
A startup log records output information upon database startup. For details, see
zenithstatus.log.
The log directory is $GSDB_DATA/log/zenithstatus.log.
l Trace Logs
A trace log records the information about database session deadlocks. For details about
session deadlocks, see zengine_03_xxxxxx.trc.
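As a sketch, the log files described above can be inspected as follows (assumes $GSDB_DATA is set; the fallback is the example installation path used elsewhere in this document):

```shell
# Example installation path used elsewhere in this document.
GSDB_DATA=${GSDB_DATA:-/home/gaussdba/data}
# List the log files described above (missing files are skipped silently).
ls -l "$GSDB_DATA/log/run/zengine.rlog" \
      "$GSDB_DATA/log/debug/zengine.dlog" \
      "$GSDB_DATA/log/zenith_alarm.log" 2>/dev/null || true
# Show the most recent run-log entries.
tail -n 50 "$GSDB_DATA/log/run/zengine.rlog" 2>/dev/null || echo "run log not found"
```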
The OS has a core dump mechanism. If this mechanism is enabled, a core file is generated
for each core dump, which affects OS performance and consumes disk space. GaussDB
100 allows core files to be generated even if the core dump mechanism is not configured for the
OS. In this way, core dump problem locating does not affect other programs in the OS.
If the disk is RAID 5, run the df -h command to check whether the disk space is full.
The disk space may not actually be full; instead, multiple disks in a RAID group may be
faulty, or junk data may exist in the directory. You can use a third-party RAID controller
monitoring tool, such as MegaCLI, to monitor RAID disk faults, or check for and delete
redundant files (such as core files) on the disks.
l Disks have bad blocks. In this case, the OS rejects read and write operations to protect
the file system. You can use the bad block check tool, for example, badblocks, to check
whether bad blocks exist.
root:~ # badblocks /dev/xvda1 -s -v
Checking blocks 0 to 30681000
Checking for bad blocks (read-only test): 306809600674112/ 306810000000
30680964
30680973
...
done
Pass completed, 37 bad blocks found.
Check the disk mounting status and mount the unmounted disks. Otherwise, you need to
mount them again after the restart.
Emergency handling is an act of troubleshooting major and urgent accidents that occur during
system or device running, to quickly recover services and minimize loss caused by the
accidents.
For details about how to locate a database fault, see Locating Database Faults.
Guidelines
An emergency refers to a situation where unexpected faults affect a wide range of services or
devices or severely affect the QoS of a database. The emergency plan must be immediately
started if any of the following problems occur.
When a major accident occurs, services must be recovered as soon as possible. Do not blindly
take actions, or the problem impact will become more severe.
The key to improving the efficiency of emergency handling is as follows:
l When there is an emergency fault, maintenance personnel must keep calm and contact
Huawei technical support.
l They should convene onsite maintenance personnel as soon as possible to confirm what
operations have been performed before the fault occurs, including but not limited to:
– Modification to data configurations
– Upgrade
– Network cable removal and insertion
6 Common Faults
Problem
The installation is successful and the parameter settings are correct, but the primary and
standby databases fail to be connected. The error information is as follows:
GS-00326, replica thread closed, replica agent error, remote ip [192.168.0.1] not
configured in archive destination [srv_replica.c:262]
Cause Analysis
The HA primary/standby connection relationship is determined by the link specified by the
ARCHIVE_DEST_N parameter (including the peer IP address and port number). The
initiator uses the parameter to send a connection. After receiving the connection, the receiver
parses the peer IP address and compares it with the peer IP address configured in the local
configuration file. If the two IP addresses are the same, the connection is accepted. Otherwise,
the connection is denied.
In most cases, this check works properly. If multiple NICs exist on the machine where the
connection initiator is located, or if multiple IP addresses are used by a NIC, the source IP
address used by the connection initiator may be different from the IP address configured on
the receiver. As a result, the connection fails, as illustrated in the following figure.
The source IP address used by A to connect to B may be ip2, but the IP address configured in
B is ip1. As a result, A fails to connect to B.
Procedure
Step 1 Log in to a server where GaussDB 100 is deployed as user root.
Step 3 If multiple NICs exist on the machine, or if multiple IP addresses are used by a NIC,
determine the IP address in use and perform the following operations:
1. Set the LOCAL_HOST attribute of the ARCHIVE_DEST_N parameter in the
$GSDB_DATA/cfg/zengine.ini file of the local machine to the IP address in use.
2. Set the SERVICE IP address of the ARCHIVE_DEST_N parameter in the
$GSDB_DATA/cfg/zengine.ini file of the peer machine to the IP address in use.
----End
6.2.1.1 Message "Can not get instance '/date_directory' process pid" Is Displayed
Problem
The database cannot be started. The error information is as follows:
Can not get instance '/home/gaussdba/data' process pid.
Cause Analysis
The database memory space is insufficient.
Procedure
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Modify the SGA parameters in the zengine.ini file.
SGA_BUFF_SIZE must meet the following requirements:
114 MB ≤ SGA_BUFF_SIZE < shmmax, where shmmax is the Linux kernel parameter
defining the maximum size of a shared memory segment
SGA_BUFF_SIZE = LOG_BUFFER_SIZE + SHARED_POOL_SIZE +
DATA_BUFFER_SIZE + TEMP_BUFFER_SIZE
Set LOG_BUFFER_SIZE, SHARED_POOL_SIZE, DATA_BUFFER_SIZE, and
TEMP_BUFFER_SIZE as needed by referring to "Parameters" in GaussDB 100
V300R001C00 Database Reference Information (Standalone).
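The sizing rule above can be sketched with shell arithmetic; the component sizes below are illustrative assumptions, not recommendations:

```shell
# Illustrative component sizes in MB (assumptions, not recommendations).
LOG_BUFFER_SIZE=16
SHARED_POOL_SIZE=128
DATA_BUFFER_SIZE=512
TEMP_BUFFER_SIZE=64
# SGA_BUFF_SIZE is the sum of the four components, per the formula above.
SGA_BUFF_SIZE=$((LOG_BUFFER_SIZE + SHARED_POOL_SIZE + DATA_BUFFER_SIZE + TEMP_BUFFER_SIZE))
echo "SGA_BUFF_SIZE = ${SGA_BUFF_SIZE} MB"    # must be >= 114 MB and < shmmax
# shmmax is the Linux kernel limit referenced above.
cat /proc/sys/kernel/shmmax 2>/dev/null || true
```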
----End
Problem
l The database cannot be started. The error information is as follows:
GS-00201: The parameter name "_LOG_LEVELS" was invalid
Failed to load params
instance startup failed
Cause Analysis
The database parameter name or value is incorrect.
Procedure
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Open the run log file and view the log recorded when the database fails to be started.
If multiple parameter names are incorrect, only the first incorrect parameter name is displayed
in the command output. Therefore, you need to view run logs to find all incorrect parameter
names.
cd $GSDB_DATA/log/run
vim zengine.rlog
Based on the error information, correct the parameter names or values in the zengine.ini file
by referring to "Parameters" in GaussDB 100 V300R001C00 Database Reference
(Standalone).
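Since only the first incorrect parameter is shown in the command output, a single grep over the run log (sketch below; assumes $GSDB_DATA is set, with the example installation path as a fallback) lists every parameter error at once:

```shell
# Example installation path used elsewhere in this document.
GSDB_DATA=${GSDB_DATA:-/home/gaussdba/data}
# List every "was invalid" parameter error in one pass.
grep -n "was invalid" "$GSDB_DATA/log/run/zengine.rlog" 2>/dev/null || true
```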
----End
6.2.1.3 Message "Error, open Log file failed:backup trace catch an exception." Is
Displayed
Problem
The database cannot be started. The error information is as follows:
Error, open Log file failed:backup trace catch an exception.
Cause Analysis
The redo log file permission is insufficient.
Procedure
Step 1 Go to the redo log directory.
cd $GSDB_DATA/data
Step 2 Assign permissions to all redo log files. The recommended permission is 0600.
chmod 0600 log1 log2 log3 log4 log5 log6
----End
6.2.1.4 Message "WARNING: could not create listen socket for "LOCALHOST"
(postmaster.c:1245)." Is Displayed
Problem
The database cannot be started. The error information is as follows:
HINT: Is another postmaster already running on port 1888? If not, wait a few
seconds and retry.
WARNING: could not create listen socket for "LOCALHOST" (postmaster.c:1245).
Cause Analysis
During the installation, the default listening port number 1888 is used for the database. The
port number has been occupied by another process.
Procedure
Step 1 Check the process that occupies the port number. Assume the process name is a.
netstat -anop | grep 1888
Step 2 Choose either of the following methods based on your service needs:
l Method 1: To start the database service first, forcibly stop the process that occupies port
1888.
a. Run the following command to check the PID that occupies port 1888:
ps -ef | grep a
l Method 2: To ensure that other services also run properly, change the listening port of the
database.
a. Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
b. Modify the zengine.ini file to replace the listening port number LSNR_PORT of
the database with an unoccupied port number.
-- Open the zengine.ini file.
vim $GSDB_DATA/cfg/zengine.ini
-- Change the value of LSNR_PORT.
LSNR_PORT = 1887
In HA mode, change the values of LSNR_PORT for both primary and standby
databases.
c. Restart the database service.
-- Start the database service.
python $GSDB_HOME/bin/zctl.py -t start
----End
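Method 1 above can be sketched as follows; replace <PID> with the process ID found, and only stop the process after confirming it can be stopped safely:

```shell
# Identify what occupies port 1888 (falls through cleanly if nothing does).
netstat -anop 2>/dev/null | grep ':1888 ' || echo "port 1888 not in use"
# kill -9 <PID>    # uncomment and fill in the PID found above
```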
Problem
The database cannot be started. The error information is as follows:
database directory "/opt/gaussdb/data" does not exist.
Cause Analysis
The software package directory does not exist. It may have been deleted by mistake.
Procedure
Use the backup file to restore the data folder. The data file may vary according to the version.
If the restoration fails, contact Huawei technical support.
Problem
The database cannot be started. The error information is as follows:
Can not get dn '/opt/gaussdba/data' process pid.
Cause Analysis
The OS memory is insufficient.
Procedure
Step 1 Run the top command as user root to check whether the memory is sufficient.
GaussDB 100 requires that the memory of each database instance be greater than or equal to 8 GB.
top
top - 19:16:18 up 2 days, 7:58, 3 users, load average: 0.08, 0.06, 0.05
Tasks: 501 total, 1 running, 500 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.0 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 7471588 total, 746488 free, 2720100 used, 4005000 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 2691672 avail Mem
Step 3 If unreleased shared memory exists, run the following command to release it:
ipcrm -m 32768
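As a sketch, before removing a segment with ipcrm, the shared-memory segments can be listed with ipcs to find the segment left by the dead zengine process (the shmid 32768 above is only an example):

```shell
# List System V shared-memory segments; an unreleased segment from a
# dead zengine process appears here with its shmid and owner.
ipcs -m || true
```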
----End
6.2.1.7 Message "Log replay stopped at 1:2, it did not reach the least recovery
point (LRP) 1:3." Is Displayed
Problem
The database fails to be started. The run log records the information about the damaged log
file ("Checksum failed when read data from file").
NOTE
In the "Log replay stopped at 1:2, it did not reach the least recovery point (LRP) 1:3." message, "1:2"
(the actual log point where the replay stops) and "1:3" (the least log point where the replay is expected to
stop) are used as examples. The actual values vary according to the actual situation.
For details about the path of run logs, see "Database System Management > Managing Logs > Viewing
Logs" in GaussDB 100 V300R001C00 User Guide (Standalone).
Cause Analysis
When the database loads log files from disks, an error is reported and the system exits
loading. Possible causes are as follows:
l A silent fault occurs on the disk. As a result, the log file is damaged.
l The write operation is invalid. As a result, the log file is damaged.
Procedure
Step 1 Determine the location where the log is damaged. Open the run log and check the "Checksum
failed when read data from file" log recorded when the database is started. The following is an
example:
[RCY] current lfn 29429779, rcy point lfn 29408078, consistent point 29465711,
lrp point lfn 29465717,Checksum failed when read data from file;
Where,
l current lfn indicates the actual log point for replay.
l rcy point lfn indicates the log point where the replay starts.
l consistent point indicates the log point that can ensure consistency.
l lrp point lfn indicates the least log point, until which logs must be replayed to ensure
zero data loss in the database.
Step 2 Take countermeasures.
l The replay fails because the redo log is damaged, and current lfn (actual replay log
point) is greater than or equal to consistent point (consistency log point) and less than
lrp point lfn (LRP point).
In this case, run the following command when the database is in MOUNT state. Some
lost data cannot be restored.
RECOVER DATABASE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
l The replay fails because the redo log is damaged, and current lfn is less than consistent
point.
In this case, run the following command when the database is in MOUNT state.
However, database exceptions may occur, because data becomes inconsistent after the
database is started. Exercise caution when performing this operation. You are advised to
contact Huawei technical support to confirm whether the operation can be performed.
RECOVER DATABASE UNTIL CANCEL;
ALTER DATABASE OPEN FORCE IGNORE LOGS;
----End
Problem
The database cannot be started. The "invalid log file head checksum" log is recorded in run
logs.
NOTE
For details about the path of run logs, see "Database System Management > Managing Logs > Viewing
Logs" in GaussDB 100 V300R001C00 User Guide (Standalone).
Cause Analysis
When the database loads log files from disks, an error is reported and the system exits
loading. Possible causes are as follows:
l A silent fault occurs on the disk. As a result, the log file is damaged.
l The write operation is invalid. As a result, the log file is damaged.
Procedure
Step 1 Start the database in MOUNT mode.
cd $GSDB_HOME/bin
python zctl.py -t start -m MOUNT
Step 2 Find the value of FILE_NAME in the "invalid log file head checksum" log from the run
logs.
Step 3 Check the DV_LOG_FILES view and record the value of STATUS corresponding to the
FILE_NAME in Step 2.
SELECT ID, STATUS , FILE_NAME FROM DV_LOG_FILES;
----End
Problem
If a database process fault occurs or permissions are incorrect, the database cannot be stopped.
The possible error information is as follows:
l The database process status is abnormal. The error information is as follows:
GS-00305, wait for server response timeout.
Cause Analysis
l The database process status is abnormal.
The database process status may be abnormal. You need to stop the abnormal process
and then stop the database.
l The file permission is insufficient.
Check whether the permission for the zctl.py file is sufficient. If not, you cannot run the
zctl.py command to stop the database.
Procedure
l The database process status is abnormal.
a. Log in to a server where GaussDB 100 is deployed as user root.
b. Run the following command to check the process status and record the PID of the
zengine process whose STAT is D:
ps aux
c. Run the following command to stop the faulty process. Assume that the PID of the
process is 324.
kill -9 324
Problem
The database fails to be started. Analysis on run logs finds that the residual transaction fails to
be rolled back.
Cause Analysis
When the database is started, it goes through two important phases: log recovery and residual
transaction rollback. When a residual transaction is rolled back in the database, if the page of
the residual transaction is damaged, the residual transaction fails to be rolled back. As a result,
the database cannot be started. The residual transaction rollback is not controlled by the user.
Therefore, the user cannot choose to skip the damaged residual transaction. In this case, you
can run the COMMIT FORCE command to forcibly commit the residual transaction to start
the database.
Procedure
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
gaussdba/database_123 indicates the system administrator created after the installation and
the administrator password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connected port.
The value of SEG_ID.SLOT.XNUM consists of the values of SEG_ID, SLOT, and XNUM.
Ensure the value format is correct.
----End
Problem
Symptom 1: A client reports error GS-00880, indicating that pages are damaged. For
example:
GS-00880, Page 3-44 corrupted
Symptom 2: A database stops suddenly, and the run log records "ABORT INFO:page
corrupted". For example:
ERROR>[BUF] ABORT INFO:page corrupted(file 0, page 9632).checksum level 1, page
size 8192,cks 12745,from_disk 1, changed 0
Cause Analysis
If the database detects that the checksum value on a page does not match, the page has been
damaged. If the page damage does not affect the normal running of the database, the client
will return error GS-00880 and print the ID of the damaged page. If the database cannot run
properly due to the page damage, it will stop and record "ABORT INFO:page corrupted" in
the run log. Possible causes are as follows:
Solution 1
If the disk pages of a standalone or HA primary database are faulty and the databases cannot
be started, you can back up files and replay logs to repair the damaged pages.
Prerequisites
l The database contains all redo logs generated from the time when the backup started to
the time when the disk data pages became faulty.
l The ztrst tool package GAUSSDB100-V300R001C00-RESTORE.tar.gz has been
obtained from the GAUSSDB100-V300R001C00-TOOLS.tar.gz package.
l Environment variables required by ztrst have been configured. For details, see Step 4 in
"Database System Management > Backup and Restoration > Data Restoration Using
ztrst" in GaussDB 100 V300R001C00 User Guide (Standalone).
l There are no tmp_data and export_data directories in the temporary data directory
specified by parameter -D.
l The backup set specified by parameter -B must be valid, and the backup time must be
earlier than the time when bad pages are generated.
l The database version used in file backup must be the same as the tool version.
Procedure
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 If the database already stops, start it in MOUNT mode. If the database status is normal, skip
this step.
cd $GSDB_HOME/bin
python zctl.py -t start -m MOUNT
Step 3 Obtain the data file and page sequence numbers of the damaged page.
l If the database status is normal, the numbers can be obtained from the error information
returned by the client. The following error information indicates that the sequence
number of the data file is 3 and that of the page is 44.
GS-00880, Page 3-44 corrupted
l If the database stops due to a disk fault, the numbers can be obtained from the run log.
The following log indicates that the sequence number of the data file is 0 and that of the
page is 9632.
ERROR>[BUF] ABORT INFO:page corrupted(file 0, page 9632).checksum level 1,
page size 8192,cks 12745,from_disk 1, changed 0
Step 4 Run the ztrst tool to repair the damaged disk page.
ztrst Changeme_123:1881 -D /temp -B /data/backup/bak1 -P 3_44 -S 192.168.0.1:1888
For details about ztrst parameters, see "Database Management Tools > ztrst" in GaussDB 100
V300R001C00 Operation Guide to Tools (Standalone).
Step 5 After page repair is complete, run the following command to restart the database if it has been
started in MOUNT mode; or skip this step if the database status is OPEN.
cd $GSDB_HOME/bin
python zctl.py -t stop
python zctl.py -t start
----End
Solution 2
In HA scenarios, if the primary database has a damaged page and it does not stop or can be
restarted after a stop, you can use a standby database to automatically repair the page.
Prerequisites
Procedure
Step 1 If the primary database already stops, start it. Otherwise, skip this step.
cd $GSDB_HOME/bin
python zctl.py -t start
Step 3 Re-execute the database service that reports the error. The primary database will automatically
repair the damaged page.
NOTE
The automatic repair function can be enabled only on the primary database. If the primary database does
not receive a correct page from a standby database within the timeout period, it will stop, and the
location and details of the damaged page will be recorded in the run log. In this case, you can use the
backup set to manually repair the page.
You can change the value of BLOCK_REPAIR_TIMEOUT to adjust the timeout period for obtaining
correct pages from a standby node when the automatic data page repair function is enabled on the
primary database. The default timeout period is 60s.
----End
Problem
Archive logs fail to be generated. The error information is as follows:
LOG: archive command failed with exit code 1
-bash: cd: archive_log/: Permission denied
Cause Analysis
l Archive logs do not exist. As a result, the archive logs fail to be generated.
l The permission for archive logs is changed and becomes insufficient.
l The directory for storing archive logs is incorrectly modified.
Procedure
l The log file does not exist.
a. Obtain the backup file of the log file and restore the log file.
l The log file permission is insufficient.
a. Log in to a server where GaussDB 100 is deployed as user root.
b. Run the following command to change the permission for the log file.
The log file directory is $GSDB_DATA/archive_log.
-- The permission for the directory is 0700.
chmod 700 $GSDB_DATA/archive_log
-- The permission for the files in the directory is 0600.
chmod 600 $GSDB_DATA/archive_log/*
Problem
Audit logs cannot be generated during database running. The $GSDB_DATA/log/audit file is
empty, but no error message is displayed.
Cause Analysis
l The permission for archive logs is changed and becomes insufficient.
l The directory for storing archive logs is incorrectly modified.
Procedure
l The log file permission is insufficient.
a. Log in to a server where GaussDB 100 is deployed as user root.
b. Run the following command to change the permission for the log file.
The log file directory is $GSDB_DATA/log/audit.
-- The permission for the directory is 0700.
chmod 700 $GSDB_DATA/log/audit
-- The permission for the files in the directory is 0600.
chmod 600 $GSDB_DATA/log/audit/*
Problem
The data of the primary database fails to be backed up. The error information is as follows:
GS-00719: session killed
Cause Analysis
Switchover is probably being performed on the standby database at the same time. As a result,
the primary database also performs a switchover and its backup fails.
Procedure
Step 1 After the switchover is complete on the standby database, back up data again.
----End
Problem
If data restoration by running the RESTORE command fails, the following error information
is displayed:
Cause Analysis
During data restoration, the database needs to read log file information and then restore data.
If the permissions for log files are insufficient, data restoration fails.
If the permissions for log files are sufficient and the data restoration still fails, the database
file to be backed up may be damaged or lost. In this case, you need to check the backup
database file information.
Procedure
l The log file permission is insufficient.
a. Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
b. Stop the database service.
python $GSDB_HOME/bin/zctl.py -t stop
Problem
The zengine.ini configuration file fails to be modified. The error information is as follows:
could not open configuration file "/home/gaussdba/data/cfg/zengine.ini":"No such
file or directory"
Cause Analysis
The configuration file fails to be modified because it is damaged or does not exist.
Procedure
Step 1 Obtain the backup file of the configuration file and restore the configuration file.
----End
Problem
The database hangs during connection or query.
Procedure
Step 1 Locally log in to a server where GaussDB 100 is deployed as user root.
If you cannot log in to the server, follow the instructions provided in Locating OS Faults.
Step 2 Run the following SQL statement to query running SQL statements:
SELECT SID,SERIAL#, EVENT, PROGRAM, CLIENT_IP, (SYSDATE - SQL_EXEC_START)*86400,
WAIT_SID, CURRENT_SQL,SQL_ID, MODULE FROM DV_SESSIONS WHERE STATUS = 'ACTIVE';
Where,
l If stopping the blocked session affects the normal running of the database, you are
advised to contact Huawei technical support to confirm whether the session can be
stopped.
----End
Problem
A standby database cannot connect to the primary database, raising error GS-00303 or
GS-00323.
GS-00303, failed to establish tcp connection to [192.168.1.12]:[2001],errno 111
GS-00323, RFS is not ready, can not get %s.
GS-00326, replica thread closed, replica agent error, remote ip [192.168.0.1] not
configured in archive destination [srv_replica.c:262]
Cause Analysis
l Cause 1: The configured IP address or port number is incorrect. As a result, the IP
address or port number used for connection is not the listening IP address or port number
of the peer end.
l Cause 2: The HA primary/standby connection relationship is determined by the link
specified by the ARCHIVE_DEST_N parameter (including the peer IP address and port
number). If multiple NICs exist on the machine where the connection initiator is located,
or if multiple IP addresses exist in a NIC, the source IP address used by the connection
initiator may be different from the IP address configured on the receiver. As a result, the
connection fails.
Procedure
Assume that the IP addresses of a primary database are 192.168.10.12 and 192.168.1.13 and
that the IP address of a standby database is 192.168.1.204.
NOTE
If the standby database cannot be connected to the primary database after a primary/standby switchover,
perform Step 1 through Step 5 again.
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 3 Add all the primary database IP addresses that can communicate with the standby database to
the configuration item REPL_TRUST_HOST in the zengine.ini file of the standby database.
Step 4 Add all the primary database IP addresses that can communicate with the standby database to
the configuration item LSNR_ADDR in the zengine.ini file of the primary database.
Enable listening of the IP address used by the primary database to communicate with the
standby database so that the standby database can properly access the primary database.
----End
Solution 2
Assume that the IP addresses of a primary database are 192.168.10.12 and 192.168.1.12 and
that the IP address of a standby database is 192.168.1.204. The link configured in the primary
database is ARCHIVE_DEST_2=SERVICE=192.168.1.204:1612 SYNC, and that in the
standby database is ARCHIVE_DEST_2=SERVICE=192.168.1.12:1612 SYNC.
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Add the LOCAL_HOST attribute to the configuration item ARCHIVE_DEST_2 in the
zengine.ini file on the primary database.
ARCHIVE_DEST_2 = LOCAL_HOST=192.168.1.12 SERVICE=192.168.1.204:1612 SYNC
In this case, the primary database explicitly binds to 192.168.1.12 before connecting to the
standby database, and the standby database obtains 192.168.1.12 after parsing. Therefore, the
standby database can be connected to the primary database.
----End
Problem
Two primary databases exist at the same time. They will both write data, resulting in data
inconsistency.
Cause Analysis
If primary and standby databases exist, the original standby is promoted to primary in a
failover, and the original primary is then started again, two primary databases will exist at
the same time.
NOTE
Log in to the GaussDB 100 from each server as the database administrator. Run the following command:
select DATABASE_ROLE from v$database;
If the database role is PRIMARY in all the command output, it indicates that two primary databases
exist. In this case, you need to perform demotion.
DATABASE_ROLE
------------------------------
PRIMARY
1 rows fetched.
Procedure
Step 1 Log in to the original primary GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba/database_123 indicates the system administrator created after the installation and
the administrator password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connected port.
Step 2 Run the following command to stop the original primary database:
shutdown immediate;
Step 4 Run the following command to start database loading, but do not open the database:
python $GSDB_HOME/bin/zctl.py -t start -m MOUNT
Step 5 Log in to the original primary GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba/database_123 indicates the system administrator created after the installation and
the administrator password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connected port.
Step 6 Run the following command on the original primary database:
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
----End
Problem
The standby database needs to be rebuilt.
Cause Analysis
After the primary database is faulty, a failover occurs, and the original primary is demoted to
standby. In this case, the standby may be in the need-repair state and needs to be rebuilt.
Note that this method can be used to rebuild the standby database only in the maximum
performance mode or the maximum availability mode.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba/database_123 indicates the system administrator created after the installation and
the administrator password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connected port.
Step 2 Run the following command to stop the new standby database:
shutdown immediate;
Step 4 Delete contents in the archive directory and data directory of the new standby database.
Note that the data files in the directory must be deleted and all subdirectories must be
retained. Otherwise, the rebuilding fails. The directories are as follows:
l Archive directory: $GSDB_DATA/archive_log
l Data directory: $GSDB_DATA/data
Step 5 Run the following command to start database loading, but do not open the database:
python $GSDB_HOME/bin/zctl.py -t start -m MOUNT
Step 6 Log in to the new standby GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba/database_123 indicates the system administrator created after the installation and
the administrator password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connected port.
Step 7 Run the build database command on the new standby database to rebuild the database.
build database;
----End
Problem
l The data file on the primary database is abnormal. The error information is as follows:
GS-00002, Failed to open the file %s, the error code was %d.
Cause Analysis
When the standby database is being rebuilt, the configuration file information on the primary
and standby databases may be incorrect. As a result, the connection between the primary and
standby databases cannot be set up, and the standby database fails to be rebuilt. After a
connection is set up between the primary and standby databases, if a data file of the primary
database is abnormal or the file permission is insufficient, the standby will also fail to be
rebuilt.
Procedure
l The configuration file is incorrect.
a. Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
b. Set parameters including LSNR_ADDR, LSNR_PORT, REPL_PORT,
ARCHIVE_DEST_2, and REPL_TRUST_HOST in the zengine.ini file of the
primary database.
The path of zengine.ini is $GSDB_DATA/cfg/zengine.ini.
c. Restart the primary database.
-- Stop the primary database.
python $GSDB_HOME/bin/zctl.py -t stop
-- Start the primary database.
python $GSDB_HOME/bin/zctl.py -t start
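As a reference for step b above, the replication-related section of zengine.ini might look like the following sketch. The addresses and port are the example values used elsewhere in this guide; the bracketed values are placeholders, since the exact value formats for REPL_PORT, ARCHIVE_DEST_2, and REPL_TRUST_HOST are deployment-specific and not specified here.

```ini
; Sketch of $GSDB_DATA/cfg/zengine.ini replication settings (placeholder values)
LSNR_ADDR = 127.0.0.1,192.168.0.1
LSNR_PORT = 1888
REPL_PORT = <replication port planned for this instance>
ARCHIVE_DEST_2 = <archive destination pointing at the peer database>
REPL_TRUST_HOST = <addresses trusted for primary/standby replication>
```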
Problem
The data volumes on the primary and standby databases are different.
NOTE
To query the data volume in the DB_TABLES and DB_INDEXES views, use the following statements:
SELECT SUM(BYTES) FROM DB_TABLES;
SELECT SUM(BYTES) FROM DB_INDEXES;
Cause Analysis
Data is not synchronized between the primary and standby databases. For example, if the logs,
archive logs, and data files are stored on the same disk, the I/O bottleneck of the disk affects
the replay speed of the logs on the standby database. As a result, the data volumes on the
primary and standby databases are different.
Procedure
Wait for the standby node to synchronize data. No operation is required.
Problem
Failed to insert data into the table. Error information:
GS-00786, could not find datafile to extend extent in tablespace test_tablespace.
Or
GS-00708: The table or the view test_table does not exist.
Cause Analysis
If the table object to be inserted does not exist or the tablespace is insufficient, data insertion
fails.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba/database_123 indicates the system administrator created after the installation and
the administrator password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connected port.
Step 2 Check whether the table object to be inserted exists.
If the object does not exist, create it or check whether the object name is correct.
If it exists, go to Step 3.
Step 3 Run the following command to check the tablespace usage:
SELECT * FROM DBA_TABLESPACES;
l The tablespace is automatically expanded. You can specify the size of each tablespace to
be expanded.
ALTER TABLESPACE human_resource AUTOEXTEND ON NEXT 5M;
Step 5 If the tablespace fails to be expanded and the error "GS-00728: failed to find free space size"
is displayed, check the disk space and either select a proper location to create data files or
free up disk space.
-- Query the disk space usage.
df -h
----End
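Step 5 above can be scripted: df output is easy to parse, so low disk space can be flagged before tablespace extension fails. A minimal sketch, in which the /tmp path and the 1 GB threshold are arbitrary examples rather than values from this guide:

```shell
DIR=/tmp                                   # example; use your data file directory
# POSIX df: second line, fourth column = available KB on that file system.
AVAIL_KB=$(df -P "$DIR" | awk 'NR==2 {print $4}')
echo "available on $DIR: ${AVAIL_KB} KB"
THRESHOLD_KB=$((1024 * 1024))              # 1 GB, an illustrative threshold
if [ "$AVAIL_KB" -lt "$THRESHOLD_KB" ]; then
  echo "WARNING: low disk space; free space or create data files elsewhere" >&2
fi
```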
Cause Analysis
If you do not have the read and write permissions for the directory where the tablespace is to
be created, the creation fails. In this case, you need to assign the required permission to the
directory.
Procedure
Step 1 Log in to a server where GaussDB 100 is deployed as user root.
Step 2 Change the owner of the tablespace directory to the user who installs the database.
Assume the database installation user is gaussdba and the tablespace directory is /home/
gaussdba/data/data1.
chown gaussdba /home/gaussdba/data/data1
----End
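The result of the chown can be verified mechanically. The sketch below runs the check against a temporary directory so it is safe to try anywhere; for the real check, set TS_DIR to the tablespace directory and EXPECTED to the installation user (GNU stat is assumed).

```shell
TS_DIR=$(mktemp -d)              # stand-in for /home/gaussdba/data/data1
EXPECTED=$(id -un)               # stand-in for the installation user, e.g. gaussdba
OWNER=$(stat -c '%U' "$TS_DIR")  # GNU coreutils: print the owning user name
if [ "$OWNER" = "$EXPECTED" ]; then
  echo "ownership OK: $TS_DIR belongs to $OWNER"
else
  echo "ownership mismatch: $OWNER (expected $EXPECTED); run chown" >&2
fi
rmdir "$TS_DIR"
```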
Cause Analysis
When a table is created, the DC POOL space is insufficient. As a result, the table fails to be
created. In this case, you need to manually increase the value of SHARED_POOL_SIZE.
Procedure
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Modify the zengine.ini file and increase the value of SHARED_POOL_SIZE. The value is an
integer in the range [82M, 32T]. The unit is byte.
The path of zengine.ini is $GSDB_DATA/cfg/zengine.ini.
SHARED_POOL_SIZE = 128M
----End
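The edit in Step 2 can also be done non-interactively with sed. The sketch below works on a throwaway copy so it can be tried safely; for the real change, point CFG at $GSDB_DATA/cfg/zengine.ini and, as with the other zengine.ini changes in this chapter, restart the database afterward so the new value takes effect.

```shell
CFG=$(mktemp)                                   # scratch stand-in for zengine.ini
printf 'SHARED_POOL_SIZE = 96M\n' > "$CFG"
# Raise the value; it must stay an integer within [82M, 32T].
sed -i 's/^SHARED_POOL_SIZE *=.*/SHARED_POOL_SIZE = 128M/' "$CFG"
NEW=$(grep '^SHARED_POOL_SIZE' "$CFG")
echo "$NEW"                                     # SHARED_POOL_SIZE = 128M
rm -f "$CFG"
```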
Cause Analysis
The SSH configurations are different across OSs. As a result, trust fails to be established
through SSH connections. In this case, you need to manually modify the SSH configurations.
Procedure
Step 1 Log in to each server where GaussDB 100 is deployed as user root.
Open the SSH configuration file /etc/ssh/sshd_config in vi and set the following parameter:
PasswordAuthentication yes
Step 5 Press Esc and enter :wq to save the changes and exit.
----End
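The same change can be made non-interactively. The sketch below flips PasswordAuthentication to yes in a scratch copy of an sshd_config-style file; for the real file, point CFG at /etc/ssh/sshd_config and reload the SSH service afterward (that path and the reload step are standard OpenSSH practice, not taken from this guide).

```shell
CFG=$(mktemp)                                # stand-in for /etc/ssh/sshd_config
printf 'PasswordAuthentication no\n' > "$CFG"
# Enable password authentication, whether or not the line was commented out.
sed -i 's/^#\{0,1\}PasswordAuthentication.*/PasswordAuthentication yes/' "$CFG"
RESULT=$(grep '^PasswordAuthentication' "$CFG")
echo "$RESULT"                               # PasswordAuthentication yes
rm -f "$CFG"
```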
Object Name in Basic Packages Object Name in Compatible Packages
SYS_BACKUP_SETS BACKUP_SET$
SYS_COLUMNS COLUMN$
SYS_COMMENTS COMMENT$
SYS_CONSTRAINT_DEFS CONSDEF$
SYS_DATA_NODES DATA_NODES$
EXP_TAB_ORDERS DBA_EXP$TBL_ORDER
EXP_TAB_RELATIONS DBA_EXP$TBL_RELATIONS
SYS_DEPENDENCIES DEPENDENCY$
SYS_DISTRIBUTE_RULES DISTRIBUTE_RULE$
SYS_DISTRIBUTE_STRATEGIES DISTRIBUTE_STRATEGY$
SYS_DUMMY DUAL
SYS_EXTERNAL_TABLES EXTERNAL$
SYS_GARBAGE_SEGMENTS GARBAGE_SEGMENT$
SYS_HISTGRAM_ABSTR HIST_HEAD$
SYS_HISTGRAM HISTGRAM$
SYS_INDEXES INDEX$
SYS_INDEX_PARTS INDEXPART$
SYS_JOBS JOB$
SYS_LINKS LINK$
SYS_LOBS LOB$
SYS_LOB_PARTS LOBPART$
SYS_LOGIC_REPL LOGIC_REP$
SYS_DML_STATS MON_MODS_ALL$
SYS_OBJECT_PRIVS OBJECT_PRIVS$
SYS_PART_COLUMNS PARTCOLUMN$
SYS_PART_OBJECTS PARTOBJECT$
SYS_PART_STORES PARTSTORE$
SYS_PENDING_DIST_TRANS PENDING_DISTRIBUTED_TRANS$
SYS_PENDING_TRANS PENDING_TRANS$
SYS_PROCS PROC$
SYS_PROC_ARGS PROC_ARGS$
SYS_PROFILE PROFILE$
SYS_RECYCLEBIN RECYCLEBIN$
SYS_ROLES ROLES$
SYS_SEQUENCES SEQUENCE$
SYS_SHADOW_INDEXES SHADOW_INDEX$
SYS_SHADOW_INDEX_PARTS SHADOW_INDEXPART$
SYS_SYNONYMS SYNONYM$
SYS_PRIVS SYS_PRIVS$
SYS_TABLES TABLE$
SYS_TABLE_PARTS TABLEPART$
SYS_TMP_SEG_STATS TMP_SEG_STAT$
SYS_USERS USER$
SYS_USER_HISTORY USER_HISTORY$
SYS_USER_ROLES USER_ROLES$
SYS_VIEWS VIEW$
SYS_VIEW_COLS VIEWCOL$
SYS_SQL_MAPS SQL_MAP$
WSR_PARAMETER WRH$_PARAMETER
WSR_SQLAREA WRH$_SQLAREA
WSR_SYS_STAT WRH$_SYSSTAT
WSR_SYSTEM WRH$_SYSTEM
WSR_SYSTEM_EVENT WRH$_SYSTEM_EVENT
WSR_SNAPSHOT WRM$_SNAPSHOT
WSR_CONTROL WRM$_WR_CONTROL
WSR_DBA_SEGMENTS WSR$_DBA_SEGMENTS
WSR_LATCH WSR$_LATCH
WSR_LIBRARYCACHE WSR$_LIBRARYCACHE
WSR_SEGMENT WSR$_SEGMENT
WSR_SQL_LIST WSR$SQL_LIST
WSR_WAITSTAT WSR$_WAITSTAT
DB_DB_LINKS ALL_DB_LINKS
DB_IND_STATISTICS ALL_IND_STATISTICS
DB_JOBS ALL_JOBS
DB_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS
DB_USERS ALL_USERS
DB_USER_SYS_PRIVS ALL_USER_SYS_PRIVS
ADM_ARGUMENTS DBA_ARGUMENTS
ADM_BACKUP_SET DBA_BACKUP_SET
ADM_COL_COMMENTS DBA_COL_COMMENTS
ADM_CONSTRAINTS DBA_CONSTRAINTS
ADM_DATA_FILES DBA_DATA_FILES
ADM_DBLINK_TABLES DBA_DBLINK_TABLES
ADM_DBLINK_TAB_COLUMNS DBA_DBLINK_TAB_COLUMNS
ADM_DEPENDENCIES DBA_DEPENDENCIES
ADM_FREE_SPACE DBA_FREE_SPACE
ADM_HISTOGRAMS DBA_HISTOGRAMS
ADM_HIST_DBASEGMENTS DBA_HIST_DBASEGMENTS
ADM_HIST_LATCH DBA_HIST_LATCH
ADM_HIST_LIBRARYCACHE DBA_HIST_LIBRARYCACHE
ADM_HIST_LONGSQL DBA_HIST_LONGSQL
ADM_HIST_PARAMETER DBA_HIST_PARAMETER
ADM_HIST_SEGMENT DBA_HIST_SEGMENT
ADM_HIST_SNAPSHOT DBA_HIST_SNAPSHOT
ADM_HIST_SQLAREA DBA_HIST_SQLAREA
ADM_HIST_SYSSTAT DBA_HIST_SYSSTAT
ADM_HIST_SYSTEM DBA_HIST_SYSTEM
ADM_HIST_SYSTEM_EVENT DBA_HIST_SYSTEM_EVENT
ADM_HIST_WAITSTAT DBA_HIST_WAITSTAT
ADM_HIST_WR_CONTROL DBA_HIST_WR_CONTROL
ADM_INDEXES DBA_INDEXES
ADM_IND_COLUMNS DBA_IND_COLUMNS
ADM_IND_PARTITIONS DBA_IND_PARTITIONS
ADM_IND_STATISTICS DBA_IND_STATISTICS
ADM_JOBS DBA_JOBS
ADM_JOBS_RUNNING DBA_JOBS_RUNNING
ADM_OBJECTS DBA_OBJECTS
ADM_PART_COL_STATISTICS DBA_PART_COL_STATISTICS
ADM_PART_KEY_COLUMNS DBA_PART_KEY_COLUMNS
ADM_PART_STORE DBA_PART_STORE
ADM_PART_TABLES DBA_PART_TABLES
ADM_PROCEDURES DBA_PROCEDURES
ADM_PROFILES DBA_PROFILES
ADM_ROLES DBA_ROLES
ADM_ROLE_PRIVS DBA_ROLE_PRIVS
ADM_SEGMENTS DBA_SEGMENTS
ADM_SEQUENCES DBA_SEQUENCES
ADM_SOURCE DBA_SOURCE
ADM_SYNONYMS DBA_SYNONYMS
ADM_SYS_PRIVS DBA_SYS_PRIVS
ADM_TABLES DBA_TABLES
ADM_TABLESPACES DBA_TABLESPACES
ADM_TAB_COLS DBA_TAB_COLS
ADM_TAB_COLUMNS DBA_TAB_COLUMNS
ADM_TAB_COL_STATISTICS DBA_TAB_COL_STATISTICS
ADM_TAB_COMMENTS DBA_TAB_COMMENTS
ADM_TAB_DISTRIBUTE DBA_TAB_DISTRIBUTE
ADM_TAB_MODIFICATIONS DBA_TAB_MODIFICATIONS
ADM_TAB_PARTITIONS DBA_TAB_PARTITIONS
ADM_TAB_PRIVS DBA_TAB_PRIVS
ADM_TAB_STATISTICS DBA_TAB_STATISTICS
ADM_TRIGGERS DBA_TRIGGERS
ADM_USERS DBA_USERS
ADM_VIEWS DBA_VIEWS
ADM_VIEW_COLUMNS DBA_VIEW_COLUMNS
DB_ARGUMENTS ALL_ARGUMENTS
DB_COL_COMMENTS ALL_COL_COMMENTS
DB_CONSTRAINTS ALL_CONSTRAINTS
DB_DBLINK_TABLES ALL_DBLINK_TABLES
DB_DBLINK_TAB_COLUMNS ALL_DBLINK_TAB_COLUMNS
DB_DEPENDENCIES ALL_DEPENDENCIES
DB_DISTRIBUTE_RULES ALL_DISTRIBUTE_RULES
DB_DIST_RULE_COLS ALL_DIST_RULE_COLS
DB_HISTOGRAMS ALL_HISTOGRAMS
DB_INDEXES ALL_INDEXES
DB_IND_COLUMNS ALL_IND_COLUMNS
DB_IND_PARTITIONS ALL_IND_PARTITIONS
DB_OBJECTS ALL_OBJECTS
DB_PART_COL_STATISTICS ALL_PART_COL_STATISTICS
DB_PART_KEY_COLUMNS ALL_PART_KEY_COLUMNS
DB_PART_STORE ALL_PART_STORE
DB_PART_TABLES ALL_PART_TABLES
DB_PROCEDURES ALL_PROCEDURES
DB_SEQUENCES ALL_SEQUENCES
DB_SOURCE ALL_SOURCE
DB_SYNONYMS ALL_SYNONYMS
DB_TABLES ALL_TABLES
DB_TAB_COLS ALL_TAB_COLS
DB_TAB_COLUMNS ALL_TAB_COLUMNS
DB_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS
DB_TAB_COMMENTS ALL_TAB_COMMENTS
DB_TAB_DISTRIBUTE ALL_TAB_DISTRIBUTE
DB_TAB_PARTITIONS ALL_TAB_PARTITIONS
DB_TAB_STATISTICS ALL_TAB_STATISTICS
DB_TRIGGERS ALL_TRIGGERS
DB_VIEWS ALL_VIEWS
DB_VIEW_COLUMNS ALL_VIEW_COLUMNS
ROLE_SYS_PRIVS ROLE_SYS_PRIVS
MY_ARGUMENTS USER_ARGUMENTS
MY_COL_COMMENTS USER_COL_COMMENTS
MY_CONSTRAINTS USER_CONSTRAINTS
MY_CONS_COLUMNS USER_CONS_COLUMNS
MY_DEPENDENCIES USER_DEPENDENCIES
MY_FREE_SPACE USER_FREE_SPACE
MY_HISTOGRAMS USER_HISTOGRAMS
MY_INDEXES USER_INDEXES
MY_IND_COLUMNS USER_IND_COLUMNS
MY_IND_PARTITIONS USER_IND_PARTITIONS
MY_IND_STATISTICS USER_IND_STATISTICS
MY_JOBS USER_JOBS
MY_OBJECTS USER_OBJECTS
MY_PART_COL_STATISTICS USER_PART_COL_STATISTICS
MY_PART_KEY_COLUMNS USER_PART_KEY_COLUMNS
MY_PART_STORE USER_PART_STORE
MY_PART_TABLES USER_PART_TABLES
MY_PROCEDURES USER_PROCEDURES
MY_ROLE_PRIVS USER_ROLE_PRIVS
MY_SEGMENTS USER_SEGMENTS
MY_SEQUENCES USER_SEQUENCES
MY_SOURCE USER_SOURCE
MY_SQL_MAPS USER_SQL_MAPS
MY_SYNONYMS USER_SYNONYMS
MY_SYS_PRIVS USER_SYS_PRIVS
MY_TABLES USER_TABLES
MY_TAB_COLS USER_TAB_COLS
MY_TAB_COLUMNS USER_TAB_COLUMNS
MY_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS
MY_TAB_COMMENTS USER_TAB_COMMENTS
MY_TAB_DISTRIBUTE USER_TAB_DISTRIBUTE
MY_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS
MY_TAB_PARTITIONS USER_TAB_PARTITIONS
MY_TAB_PRIVS USER_TAB_PRIVS
MY_TAB_STATISTICS USER_TAB_STATISTICS
MY_TRIGGERS USER_TRIGGERS
MY_USERS USER_USERS
MY_VIEWS USER_VIEWS
MY_VIEW_COLUMNS USER_VIEW_COLUMNS
NLS_SESSION_PARAMETERS NLS_SESSION_PARAMETERS
DV_ALL_TRANS V$ALL_TRANSACTION
DV_ARCHIVED_LOGS V$ARCHIVED_LOG
DV_ARCHIVE_DEST_STATUS V$ARCHIVE_DEST_STATUS
DV_ARCHIVE_GAPS V$ARCHIVE_GAP
DV_ARCHIVE_THREADS V$ARCHIVE_PROCESSES
DV_BACKUP_PROCESSES V$BACKUP_PROCESS
DV_BUFFER_POOLS V$BUFFER_POOL
DV_BUFFER_POOL_STATS V$BUFFER_POOL_STATISTICS
DV_CONTROL_FILES V$CONTROLFILE
DV_DATABASE V$DATABASE
DV_DATA_FILES V$DATAFILE
DV_OBJECT_CACHE V$DB_OBJECT_CACHE
DV_DC_POOLS V$DC_POOL
DV_DYNAMIC_VIEWS V$DYNAMIC_VIEW
DV_DYNAMIC_VIEW_COLS V$DYNAMIC_VIEW_COLUMN
DV_FREE_SPACE V$FREE_SPACE
DV_HA_SYNC_INFO V$HA_SYNC_INFO
DV_HBA V$HBA
DV_INSTANCE V$INSTANCE
DV_RUNNING_JOBS V$JOBS_RUNNING
DV_LATCHS V$LATCH
DV_LIBRARY_CACHE V$LIBRARYCACHE
DV_LOCKS V$LOCK
DV_LOCKED_OBJECTS V$LOCKED_OBJECT
DV_LOG_FILES V$LOGFILE
DV_LONG_SQL V$LONGSQL
DV_STANDBYS V$MANAGED_STANDBY
DV_ME V$ME
DV_OPEN_CURSORS V$OPEN_CURSOR
DV_PARAMETERS V$PARAMETER
DV_PL_MANAGER V$PL_MANAGER
DV_PL_REFSQLS V$PL_REFSQLS
DV_REACTOR_POOLS V$REACTOR_POOL
DV_REPL_STATUS V$REPL_STATUS
DV_RESOURCE_MAP V$RESOURCE_MAP
DV_SEGMENT_STATS V$SEGMENT_STATISTICS
DV_SESSIONS V$SESSION
DV_SESSION_EVENTS V$SESSION_EVENT
DV_SESSION_WAITS V$SESSION_WAIT
DV_GMA V$SGA
DV_GMA_STATS V$SGASTAT
DV_SPINLOCKS V$SPINLOCK
DV_SQLS V$SQLAREA
DV_SQL_POOL V$SQLPOOL
DV_SYS_STATS V$SYSSTAT
DV_SYSTEM V$SYSTEM
DV_SYS_EVENTS V$SYSTEM_EVENT
DV_TABLESPACES V$TABLESPACE
DV_TEMP_POOLS V$TEMP_POOL
DV_TEMP_UNDO_SEGMENT V$TEMP_UNDO_SEGMENT
DV_TRANSACTIONS V$TRANSACTION
DV_UNDO_SEGMENTS V$UNDO_SEGMENT
DV_USER_ADVISORY_LOCKS V$USER_ADVISORY_LOCKS
DV_USER_ASTATUS_MAP V$USER_ASTATUS_MAP
DV_USER_PARAMETERS V$USER_PARAMETER
DV_VERSION V$VERSION
DV_VM_FUNC_STACK V$VM_FUNC_STACK
DV_WAIT_STATS V$WAITSTAT
DV_XACT_LOCKS V$XACT_LOCK
Parameter Name in Basic Packages Parameter Name in Compatible Packages
JOB_THREADS JOB_QUEUE_PROCESSES
COMMIT_MODE COMMIT_LOGGING
COMMIT_WAIT_LOGGING COMMIT_WAIT
PAGE_CHECKSUM DB_BLOCK_CHECKSUM
ARCHIVE_CONFIG LOG_ARCHIVE_CONFIG
ARCHIVE_DEST_N LOG_ARCHIVE_DEST_n
ARCHIVE_DEST_STATE_N LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_FORMAT LOG_ARCHIVE_FORMAT
ARCHIVE_MAX_THREADS LOG_ARCHIVE_MAX_PROCESSES
ARCHIVE_MIN_SUCCEED_DEST LOG_ARCHIVE_MIN_SUCCEED_DEST
ARCHIVE_TRACE LOG_ARCHIVE_TRACE
CHECKPOINT_PERIOD CHECKPOINT_TIMEOUT
CHECKPOINT_PAGES CHECKPOINT_INTERVAL
TIMED_STATS TIMED_STATISTICS
STATS_LEVEL STATISTICS_LEVEL
FILE_OPTIONS FILESYSTEMIO_OPTIONS
The framework of GaussDB 100 is component-based and can be used for a standalone
database or a cluster. This document provides guidance for installing, configuring, working
with, and maintaining standalone GaussDB 100 and for using its tools.
GaussDB 100 software packages are classified into basic packages and compatible packages,
which differ in the names of various interfaces. Compatible packages are provided for
compatibility with the usage habits of mainstream databases in the industry. The interfaces mentioned in
this document use names from basic packages. If you have installed compatible packages, you
can use either the interface names of basic packages or those of compatible packages by
referring to Interface Mapping (Basic Packages vs. Compatible Packages). For details
about how to install the basic and compatible packages, see Installation and Deployment in
GaussDB 100 V300R001C00 User Guide (Standalone).
Intended Audience
The document is intended for GaussDB 100 database administrators and describes how to
create and maintain databases.
To use this document, you should have:
l Knowledge about a relational database. The theory helps you get familiar with GaussDB
100 and its usage.
l Knowledge about OSs. You will need it when you configure and run GaussDB 100.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Example Conventions
The following table describes some example information in this document. You can replace
the example information as needed.
Parameters of GaussDB 100 tools are parsed in sequence. If a parameter is specified
multiple times, the last value takes effect.
[ x | y | ... ] Indicates that one item is selected from two or more options or no
item is selected.
{ x | y | ... } Indicates that one item is selected from two or more options.
{ x | y | ... } [ ... ] Indicates that at least one parameter can be selected. If multiple
parameters are selected, separate them with spaces.
{ x | y | ... } [ ,... ] Indicates that at least one parameter can be selected. If multiple
parameters are selected, separate them with commas (,).
Change History
Version Change Description Changed On
03 Added: 2019-06-06
l Flashback
l Data Restoration Using ztrst
Modified:
l Procedure in Logical Replication
l Descriptions of parameters of BUILD
DATABASE in HA Rebuilding
l Schema-specific restoration added in
Backup and Restoration Solutions
l Support for tablespace shrinking in
Managing Tablespaces
l Support for renaming indexes in Managing
Indexes
l Parallel backup and restoration added in
Backup and Restoration Solutions, Data
Backup, and Data Restoration
l Support for collecting index statistics in
Manual Collection
l Support for the -w parameter to set the
timeout interval for the client to wait for a
database response in Connecting to a
Database
l Alarm TablespaceUsage added in
Monitoring Alarms, with its alarm format
modified
l Method of configuring LSNR_ADDR in
Configuring Client Access
Authentication
l Descriptions of rollback transaction
scenarios in Managing Transactions
l Descriptions of the EXTENTS parameter
added in Managing Tablespaces
l Support for renaming constraints in
Modifying a Table
Optimized:
l Support for HA building by zctl.py in HA
Rebuilding
02 Added: 2019-04-05
l Managing Logs
l Obtaining and Verifying an Installation
Package
l GaussDB 100 support for smooth upgrade
in Upgrading a Database
l OS parameter table in Environment
Requirements
Optimized:
l Procedure in Upgrading a Database
l Procedure in Standalone Installation
(Simple Mode)
l Procedure in Standalone HA Installation
(Simple Mode)
l Logical Replication
l Configuring Client Access
Authentication
Modified:
l Index types in Managing Indexes
l Descriptions of slow queries added in Log
Overview and Viewing Logs
l Operation steps in the OPEN child modes
in Managing Database Status
l Procedure in Uninstalling a Database
l Upgrading a Database
l High risk information added in Risky
Operations
GaussDB 100 supports multiple deployment modes. Choose the one suitable for your service
scenario.
Both standalone deployment and standalone HA deployment support a simple mode and a
compatible mode. In simple mode, you can install a GaussDB 100 database and use it
normally. The compatible mode is an enhancement of the simple mode: after compatible
packages are installed, GaussDB 100 is compatible with the interface names of
mainstream databases.
Hardware Requirements
GaussDB 100 can be deployed and run on physical servers or in mainstream virtualization
environments, such as VMware, KVM, and Xen. For details about the hardware requirements,
see Table 2-1.
Memory: At least 218 MB of free memory for each instance. Each database instance requires
8 GB of memory.
Software Requirements
GaussDB 100 can be deployed on a Linux OS. Ensure that the OS is completely installed;
otherwise, database exceptions may occur.
OS type and version: The x86 architecture supports the following operating systems:
l Red Hat Enterprise Linux Server release 7.4 x86_64
l SUSE Linux Enterprise Server 11.3 (SUSE 11 for short), x86_64
l SUSE Linux Enterprise Server 12.4 (SUSE 12 for short), x86_64
l EulerOS Server V2.0SP3 x86_64
l EulerOS Server V2.0SP5 x86_64
The ARM architecture supports the following operating systems:
EulerOS Server V2.0SP8 ARM_64
Prerequisites
l You have obtained a simplified PGP verification tool, such as PGPVerify, and the public
key file.
l You have obtained the GaussDB 100 installation package and signature file, which must
be saved in the same directory, and each package corresponds to a verification file. The
signature file is released together with the corresponding installation package in .asc
format. Generally, the file name is the same as the package name. If the name of an
installation package is GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz, the name of the corresponding verification file will be
GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.asc.
Procedure
Step 1 Obtain the PGPVerify tool.
The tool can be run without installation. You can download it in either of the following ways:
l From Huawei Support website:
http://support.huawei.com/carrier/digitalSignatureAction
The website may be displayed in Chinese. To change the language to English, click
English, as shown in Figure 2-1.
Decompress the downloaded package. Open the decompressed folder VerificationTools
to obtain the verification tools of different versions available for different platforms.
NOTE
If you have the access permission but an error is displayed after you click the URL, switch the
language.
The verification tool and public key file are packed in one file. Therefore, their download
paths are the same. After decompression, the file named KEYS is the public key file.
l From the public key server:
b. Click the public key ID 27A74824 in the search results, and view complete
information of the public key, as shown in Figure 2-5.
c. Copy the public key information to a text file, and name the file KEYS.
Step 3 Import the public key.
1. Log in to the server where the installation package to be verified resides as a common
user.
2. Run the following command to import the public key file (/home/openpgp/keys is an
example directory where the KEYS file is stored, and needs to be replaced with an actual
directory):
# gpg --import "/home/openpgp/keys/KEYS"
Information similar to the following will be displayed, and the parts in bold must be
manually specified. Enter 5 following Your decision?, and y following Do you really
want to set this key to ultimate trust? (y/n).
gpg (GnuPG) 2.0.9; Copyright (C) 2008 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Key fingerprint = B100 0AC3 8C41 525A 19BD C087 99AD 81DF 27A7 4824
uid OpenPGP signature key for Huawei software (created on
30th Dec,2013) <support@huawei.com>
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
The signature file and installation package must be stored in the same directory. Assume that
they are both stored in /soft. Run the following command:
If the RSA key ID in bold is the same as the public key ID and no WARN or FAIL is
displayed, the signature is valid.
FAIL The public key is not found.
gpg: Signature made Thu Jan 9 15:20:01 2014 CST using RSA key ID 27A74824
gpg: Can't check signature: public key not found
If there are multiple files requiring signature verification in a version, the version is
considered safe only when the verification results of all files are PASS and the
trustworthiness of the public key fingerprint source is confirmed. If any verification result
is WARN or FAIL, the verification fails, indicating security risks. In this case, re-download
the installation package.
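The verification itself is a single gpg command run in the directory that holds both files. A sketch, using the example package name from this section; the check is skipped gracefully when gpg or the files are absent, so the script can be tried on any machine:

```shell
PKG=GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz
SIG=${PKG%.tar.gz}.asc                       # signature released alongside the package
if command -v gpg >/dev/null 2>&1 && [ -f "$PKG" ] && [ -f "$SIG" ]; then
  # Verify the detached signature against the package.
  if gpg --verify "$SIG" "$PKG"; then STATUS=PASS; else STATUS=FAIL; fi
else
  STATUS=SKIPPED                             # gpg or the files are not present here
fi
echo "signature check: $STATUS"
```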
total 6660
-rw-r--r-- 1 gaussdba dbgrp 65 Apr 26 15:13 GAUSSDB100-V300R001C00-RUN-
EULER20SP8-64bit.sha256
-rw-r--r-- 1 gaussdba dbgrp 6631944 Apr 26 15:12 GAUSSDB100-V300R001C00-RUN-
EULER20SP8-64bit.tar.gz
----End
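Independent of the PGP signature, the .sha256 file listed above can serve as a quick integrity check with sha256sum -c. The sketch below demonstrates the mechanism on a generated scratch file; for the real package, run sha256sum -c against the released .sha256 file, assuming it uses the standard "hash  filename" layout.

```shell
WORK=$(mktemp -d); cd "$WORK"
echo "demo payload" > pkg.tar.gz             # stand-in for the real package
sha256sum pkg.tar.gz > pkg.sha256            # standard "hash  filename" layout
sha256sum -c pkg.sha256                      # prints: pkg.tar.gz: OK
RC=$?
cd / && rm -rf "$WORK"
echo "integrity check exit code: $RC"
```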
Host name l The primary server name must be unique within the local network.
Otherwise, the network will become faulty.
l The primary server name must contain more than two characters,
including letters, digits, and hyphens (-), excluding underscores (_).
l A name easy to remember and understand is recommended, such as
DBserver.
Port No.: Number of the TCP port on which GaussDB 100 listens, for example, 1888
Special characters (No. 1 to 32, in order):
` ~ ! @ # $ % ^ & * ( ) - _ = + \ | [ { } ] ; : ' " , < . > / ?
Installation Directories
Prerequisites
l The installation has been planned following instructions provided in Planning for
Installation.
l The installation package has been uploaded to the /opt/software/ directory.
l If a non-root user is used to install the database, ensure that this user is the owner of the
installation directory and has certain permissions (≤ 0750).
Precautions
l Before reinstalling GaussDB 100, ensure that operations in Uninstalling a Database
have been completed. Otherwise, the reinstallation may fail.
l To install multiple instances on the same server, plan different listening ports and data
directories.
Procedure
Assume that the IP address of the server where GaussDB 100 is installed is 192.168.0.1 and
the listening port number of the database is 1888.
Step 1 Log in to the server where GaussDB 100 is deployed as user root.
Step 2 Create an installation user and its user group, and configure their permissions to be less than
or equal to 0750.
groupadd dbgrp
useradd -g dbgrp -d /home/gaussdba -m -s /bin/bash gaussdba
Step 3 Create the /opt/software/gaussdb directory for storing the installation package and the
GaussDB 100 directory /opt/gaussdb as planned.
mkdir -p /opt/software/gaussdb
mkdir -p /opt/gaussdb
Step 5 Go to the decompressed folder on the primary node and run the install.py script.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.1 -C LSNR_PORT=1888 -X
l Table 2-9 describes the parameters that need to be specified for install.py. For details
about install.py, see Database Management Tools > install.py in GaussDB 100
V300R001C00 Operation Guide to Tools (Standalone).
Parameter Description
l During installation, you can use the default optimized configuration of zengine.ini or use
the -C parameter to replace the initial configuration. Table 2-10 describes the parameters
that can be modified.
NOTE
Running install.py will preliminarily check whether the memory size is valid. If the result does
not meet the installation requirements of GaussDB 100, the system will exit installation.
The memory size can be calculated as follows:
SGA_BUFF_SIZE = LOG_BUFFER_SIZE+SHARED_POOL_SIZE+DATA_BUFFER_SIZE
+TEMP_BUFFER_SIZE
The memory size (SGA_BUFF_SIZE) must be in the range [114 MB, shmmax) where shmmax
is a Linux kernel parameter and defines the maximum size of a single shared memory segment.
If -C is not used to specify the above four parameters during install.py running, their default
values will be used in the system check.
For details about parameters, see section "Parameters" in GaussDB 100 V300R001C00
Database Reference (Standalone).
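The memory check described in the note above is plain addition and can be reproduced in a few lines of shell. The four component sizes below are illustrative placeholders rather than product defaults, and SHMMAX stands in for the kernel's shmmax value (readable from /proc/sys/kernel/shmmax on Linux):

```shell
# Illustrative component sizes, in bytes (not defaults from the product).
LOG_BUFFER_SIZE=$((16 * 1024 * 1024))
SHARED_POOL_SIZE=$((128 * 1024 * 1024))
DATA_BUFFER_SIZE=$((128 * 1024 * 1024))
TEMP_BUFFER_SIZE=$((32 * 1024 * 1024))
SGA_BUFF_SIZE=$((LOG_BUFFER_SIZE + SHARED_POOL_SIZE + DATA_BUFFER_SIZE + TEMP_BUFFER_SIZE))
LOWER=$((114 * 1024 * 1024))                  # 114 MB lower bound
SHMMAX=$((4 * 1024 * 1024 * 1024))            # placeholder for the kernel shmmax
if [ "$SGA_BUFF_SIZE" -ge "$LOWER" ] && [ "$SGA_BUFF_SIZE" -lt "$SHMMAX" ]; then
  echo "SGA_BUFF_SIZE = $SGA_BUFF_SIZE bytes: within [114 MB, shmmax)"
else
  echo "SGA_BUFF_SIZE = $SGA_BUFF_SIZE bytes: outside [114 MB, shmmax)" >&2
fi
```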
l Running install.py creates an instance based on the database creation template. The
template requires that the data directory should have at least 20 GB space.
The save path of the template is /opt/software/gaussdb/GAUSSDB100-V300R001C00-
DATABASE-EULER20SP8-64bit/GAUSSDB100-V300R001C00-RUN-
EULER20SP8-64bit/admin/scripts/create_database.sample.sql.
The database creation template is as follows:
CREATE DATABASE gauss CHARACTER SET binary CONTROLFILE
('?/data/cntl1',
'?/data/cntl2',
'?/data/cntl3')
LOGFILE
('?/data/log1' size 2G,
'?/data/log2' size 2G,
'?/data/log3' size 2G,
'?/data/log4' size 2G,
'?/data/log5' size 2G,
'?/data/log6' size 2G)
SYSTEM TABLESPACE DATAFILE
'?/data/system' size 1G
UNDO TABLESPACE DATAFILE
'?/data/undo' size 1G
DEFAULT TABLESPACE DATAFILE
'?/data/user1' size 1G autoextend on next 32M,
'?/data/user2' size 1G autoextend on next 32M,
'?/data/user3' size 1G autoextend on next 32M,
'?/data/user4' size 1G autoextend on next 32M,
'?/data/user5' size 1G autoextend on next 32M
TEMPORARY TABLESPACE TEMPFILE
'?/data/temp1' size 160M autoextend on next 32M,
'?/data/temp2' size 160M autoextend on next 32M
NOLOGGING TABLESPACE TEMPFILE
'?/data/temp2_01' size 160M autoextend on next 32M
NOLOGGING UNDO TABLESPACE TEMPFILE
'?/data/temp2_undo' size 160M autoextend on next 32M
ARCHIVELOG;
When you manually create a database creation template, the constraints on file
parameters are as follows:
– CONTROLFILE: Specifies a control file. The minimum number of control files is
2, and the file size is always 10 MB.
– LOGFILE: Specifies a log file. The minimum number of log files is 3, and the
minimum file size is 56 MB plus 16 KB plus the value of LOG_BUFFER_SIZE.
– SYSTEM TABLESPACE DATAFILE: Specifies the size of a data file in the
SYSTEM tablespace. The value range is [128 MB, 8 TB].
– UNDO TABLESPACE DATAFILE: Specifies the size of a data file in the UNDO
tablespace. The value range is [128 MB, 32 GB].
– DEFAULT TABLESPACE DATAFILE: Specifies the size of a data file in the
USERS tablespace (default). The value range is [1 MB, 8 TB].
– TEMPORARY TABLESPACE TEMPFILE: Specifies the size of a data file in
the TEMP tablespace. The value range is [5 MB, 8 TB].
– NOLOGGING TABLESPACE TEMPFILE: Specifies the size of a data file in
the TEMP2 tablespace. The value range is [1 MB, 8 TB].
– NOLOGGING UNDO TABLESPACE TEMPFILE: Specifies the size of a data
file in the TEMP2_UNDO tablespace. The value range is [128 MB, 32 GB].
– If AUTOEXTEND ON is specified, the following attributes can be set:
n NEXT: Specifies the extension size. If this parameter is not set, the default
value 16MB will be used.
l If the installation fails, rectify the fault based on the installation logs. The save path of
the installation logs is /var/log/zengineinstall.log.
After the installation succeeds, four environment variables will be added to the OS, as
described in the following table.
Environment Description
Variable
The default administrator of GaussDB 100 is SYS and its default password is
Changeme_123. To ensure information security, change the password of SYS as soon as
possible. For more connection modes, see Connecting to a Database.
NOTE
For details about how to configure security hardening after a database is installed, see GaussDB 100
V300R001C00 Security Hardening Guide (Standalone).
----End
Prerequisites
l The installation has been planned following instructions provided in Planning for
Installation.
l The installation package has been uploaded to the /opt/software/ directory.
l The compatible package DIALECT-SCRIPT-xxxxx.tar.gz has been obtained and
uploaded to the same directory as the installation package.
l If a non-root user is used to install the database, ensure that this user is the owner of the
installation directory and has certain permissions (≤ 0750).
Precautions
l Before reinstalling GaussDB 100, ensure that operations in Uninstalling a Database
have been completed. Otherwise, the reinstallation may fail.
l To install multiple instances on the same server, plan different listening ports and data
directories.
Procedure
Assume that the IP address of the server where GaussDB 100 is installed is 192.168.0.1 and
the listening port number of the database is 1888.
Step 1 Log in to the server where GaussDB 100 is deployed as user root.
Step 2 Create an installation user and its user group, and configure their permissions to be less than
or equal to 0750.
groupadd dbgrp
useradd -g dbgrp -d /home/gaussdba -m -s /bin/bash gaussdba
Step 3 Create the /opt/software/gaussdb directory for storing the installation package and the
GaussDB 100 directory /opt/gaussdb as planned.
mkdir -p /opt/software/gaussdb
mkdir -p /opt/gaussdb
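The ownership and permission prerequisites above can be checked with a short script. This is a sketch, assuming the planned directories from this example (/opt/gaussdb and /opt/software/gaussdb); the `perm_ok` helper is not part of the product and only illustrates the "at most 0750" rule.

```shell
# Sketch: verify that a directory's permissions do not exceed 0750
# (owner rwx, group rx, no world access), per the installation prerequisites.
perm_ok() {
  # Returns 0 if the directory's mode sets no bits outside rwxr-x--- (0750).
  local mode
  mode=$(stat -c '%a' "$1") || return 1
  [ $(( 0$mode & ~0750 & 0777 )) -eq 0 ]
}

for d in /opt/gaussdb /opt/software/gaussdb; do
  [ -d "$d" ] || continue
  perm_ok "$d" && echo "$d: permissions ok" || echo "$d: too permissive; run chmod 750 $d"
done
```

If a directory is too permissive, `chown -R gaussdba:dbgrp` and `chmod 750` bring it in line with the prerequisites.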
Step 4 Upload the installation package and compatible package to the created directory. The
installation package and compatible package must be stored in the same directory.
Step 5 Decompress the installation package.
cd /opt/software/gaussdb
tar -zxvf GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz
Step 6 Go to the decompressed folder on the primary node and run the install.py script.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.1 -C LSNR_PORT=1888
l Table 2-12 describes the parameters that need to be specified for install.py. For details
about install.py, see Database Management Tools > install.py in GaussDB 100
V300R001C00 Operation Guide to Tools (Standalone).
l During installation, you can use the default optimized configuration of zengine.ini or use
the -C parameter to replace the initial configuration. Table 2-13 describes the parameters
that can be modified.
NOTE
Running install.py will preliminarily check whether the memory size is valid. If the result does
not meet the installation requirements of GaussDB 100, the system will exit installation.
The memory size can be calculated as follows:
SGA_BUFF_SIZE = LOG_BUFFER_SIZE + SHARED_POOL_SIZE + DATA_BUFFER_SIZE + TEMP_BUFFER_SIZE
The memory size (SGA_BUFF_SIZE) must be in the range [114 MB, shmmax) where shmmax
is a Linux kernel parameter and defines the maximum size of a single shared memory segment.
If -C is not used to specify the above four parameters during install.py running, their default
values will be used in the system check.
For details about parameters, see section "Parameters" in GaussDB 100 V300R001C00
Database Reference (Standalone).
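The memory pre-check described in the note can be reproduced by hand. This is a sketch: the four sizes below are illustrative values in MB, not the zengine.ini defaults; substitute the values you pass via -C.

```shell
# Sketch: recompute SGA_BUFF_SIZE and compare it with the [114 MB, shmmax) range
# that install.py checks. Values are illustrative placeholders in MB.
LOG_BUFFER_SIZE=4
SHARED_POOL_SIZE=20
DATA_BUFFER_SIZE=80
TEMP_BUFFER_SIZE=10

SGA_BUFF_SIZE=$(( LOG_BUFFER_SIZE + SHARED_POOL_SIZE + DATA_BUFFER_SIZE + TEMP_BUFFER_SIZE ))
echo "SGA_BUFF_SIZE = ${SGA_BUFF_SIZE} MB"

# shmmax is reported in bytes on Linux; fall back gracefully if not readable.
shmmax=$(cat /proc/sys/kernel/shmmax 2>/dev/null || echo 0)
shmmax_mb=$(( shmmax / 1024 / 1024 ))

if [ "$SGA_BUFF_SIZE" -ge 114 ] && { [ "$shmmax_mb" -eq 0 ] || [ "$SGA_BUFF_SIZE" -lt "$shmmax_mb" ]; }; then
  echo "memory check: ok"
else
  echo "memory check: SGA_BUFF_SIZE must be in [114 MB, shmmax)"
fi
```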
l Running install.py creates an instance based on the database creation template. The
template requires that the data directory should have at least 20 GB space.
The save path of the template is /opt/software/gaussdb/GAUSSDB100-V300R001C00-
DATABASE-EULER20SP8-64bit/GAUSSDB100-V300R001C00-RUN-
EULER20SP8-64bit/admin/scripts/create_database.sample.sql.
The database creation template is as follows:
CREATE DATABASE gauss CHARACTER SET binary CONTROLFILE
('?/data/cntl1',
'?/data/cntl2',
'?/data/cntl3')
LOGFILE
('?/data/log1' size 2G,
'?/data/log2' size 2G,
'?/data/log3' size 2G,
'?/data/log4' size 2G,
'?/data/log5' size 2G,
'?/data/log6' size 2G)
SYSTEM TABLESPACE DATAFILE
'?/data/system' size 1G
UNDO TABLESPACE DATAFILE
'?/data/undo' size 1G
DEFAULT TABLESPACE DATAFILE
'?/data/user1' size 1G autoextend on next 32M,
'?/data/user2' size 1G autoextend on next 32M,
'?/data/user3' size 1G autoextend on next 32M,
'?/data/user4' size 1G autoextend on next 32M,
'?/data/user5' size 1G autoextend on next 32M
TEMPORARY TABLESPACE TEMPFILE
'?/data/temp1' size 160M autoextend on next 32M,
'?/data/temp2' size 160M autoextend on next 32M
NOLOGGING TABLESPACE TEMPFILE
'?/data/temp2_01' size 160M autoextend on next 32M
NOLOGGING UNDO TABLESPACE TEMPFILE
'?/data/temp2_undo' size 160M autoextend on next 32M
ARCHIVELOG;
When you manually create a database creation template, the constraints on file
parameters are as follows:
– CONTROLFILE: Specifies a control file. The minimum number of control files is
2, and the file size is always 10 MB.
– LOGFILE: Specifies a log file. The minimum number of log files is 3, and the
minimum file size is 56 MB plus 16 KB plus the value of LOG_BUFFER_SIZE.
– SYSTEM TABLESPACE DATAFILE: Specifies the size of a data file in the
SYSTEM tablespace. The value range is [128 MB, 8 TB].
– UNDO TABLESPACE DATAFILE: Specifies the size of a data file in the UNDO
tablespace. The value range is [128 MB, 32 GB].
– DEFAULT TABLESPACE DATAFILE: Specifies the size of a data file in the
USERS tablespace (default). The value range is [1 MB, 8 TB].
– TEMPORARY TABLESPACE TEMPFILE: Specifies the size of a data file in
the TEMP tablespace. The value range is [5 MB, 8 TB].
– NOLOGGING TABLESPACE TEMPFILE: Specifies the size of a data file in
the TEMP2 tablespace. The value range is [1 MB, 8 TB].
– NOLOGGING UNDO TABLESPACE TEMPFILE: Specifies the size of a data
file in the TEMP2_UNDO tablespace. The value range is [128 MB, 32 GB].
– If AUTOEXTEND ON is specified, the following attributes can be set:
n NEXT: Specifies the extension size. If this parameter is not set, the default
value 16MB will be used.
n MAXSIZE: Specifies the upper limit of extension.
○ If this parameter is omitted or is set to UNLIMITED, the maximum size
of the UNDO tablespace will be 32 GB, and that of other tablespaces will
be 8 TB.
○ The parameter value cannot be greater than 32 GB for the UNDO
tablespace, and cannot be greater than 8 TB for other tablespaces.
○ The value of MAXSIZE must be no less than that of NEXT.
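The default creation template requires at least 20 GB in the data directory, which can be checked before running install.py. A sketch, assuming the example data directory /opt/gaussdb/data (override via GSDB_DATA):

```shell
# Sketch: confirm the data directory has the 20 GB the default template needs.
# df -P prints available space in 1K blocks; 20 GB = 20*1024*1024 KB.
DATA_DIR=${GSDB_DATA:-/opt/gaussdb/data}
need_kb=$(( 20 * 1024 * 1024 ))

if [ -d "$DATA_DIR" ]; then
  avail_kb=$(df -P "$DATA_DIR" | awk 'NR==2 {print $4}')
  if [ "$avail_kb" -ge "$need_kb" ]; then
    echo "space check: ok (${avail_kb} KB free)"
  else
    echo "space check: need ${need_kb} KB, have ${avail_kb} KB"
  fi
else
  echo "space check: $DATA_DIR not found"
fi
```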
Step 7 Check the installation result.
l If the installation is successful, the following information will be displayed:
l If the installation fails, rectify the fault based on the installation logs. The save path of
the installation logs is /var/log/zengineinstall.log.
After the installation succeeds, four environment variables will be added to the OS, as
described in the following table.
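To confirm the variables were added, they can be listed from the installation user's shell. A sketch: GSDB_HOME and GSDB_DATA are referenced later in this guide; the full set of four variables is the one defined in the table above.

```shell
# Sketch: list GaussDB-related variables in the current environment
# (run as the installation user, e.g. gaussdba, after sourcing the profile).
env | grep '^GSDB_' || echo 'no GSDB_* variables set in this shell'
```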
The default administrator of GaussDB 100 is SYS and its default password is
Changeme_123. To ensure information security, change the password of SYS as soon as
possible. For more connection modes, see Connecting to a Database.
NOTE
For details about how to configure security hardening after a database is installed, see GaussDB 100
V300R001C00 Security Hardening Guide (Standalone).
----End
Figure 2-6 One primary database, two standby databases, and one cascaded standby database
The following describes the parameter configurations for deploying one primary database
with two standby databases and one cascaded standby database (instance A is used as an
example).
l If databases are deployed in HA DR mode, perform security hardening on both the primary and
standby databases.
l In simple mode, you can install a GaussDB 100 database and use it properly.
Prerequisites
l The installation has been planned following instructions provided in Planning for
Installation.
l The installation package has been uploaded to the /opt/software/ directory.
l If a non-root user is used to install the database, ensure that this user is the owner of the
installation directory and has permissions of at most 0750 on it.
l If the firewall service is enabled on servers, ensure that the primary and standby
databases have been added to the trust zones for each other.
Precautions
l To install multiple instances on the same server, plan different listening ports and data
directories.
Procedure
Assume that the IP address of the primary database is 192.168.0.1, that of standby database 1
is 192.168.0.2, that of standby database 2 is 192.168.0.3, and that of the cascaded standby
database is 192.168.0.4; and assume that the database listening port is 1888 and the
communication port is 1889 for all the four databases. The procedure for installing the four
databases is as follows:
Step 1 Log in to the primary and standby databases of GaussDB 100 as user root to perform Step 2
to Step 4 below.
Step 2 Create an installation user and its user group, and configure permissions to be less than or
equal to 0750.
groupadd dbgrp
useradd -g dbgrp -d /home/gaussdba -m -s /bin/bash gaussdba
Step 3 Create the /opt/software/gaussdb directory for storing the installation package and the
GaussDB 100 directory /opt/gaussdb as planned.
mkdir -p /opt/software/gaussdb
mkdir -p /opt/gaussdb
Step 5 On the primary node, go to the decompressed folder and run install.py.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.1 -C LSNR_PORT=1888 -C REPL_PORT=1889 -C
"ARCHIVE_DEST_2=SERVICE=192.168.0.2:1889 SYNC PRIMARY_ROLE" -C
"ARCHIVE_DEST_3=SERVICE=192.168.0.3:1889 SYNC PRIMARY_ROLE" -C
"ARCHIVE_DEST_4=SERVICE=192.168.0.2:1889 STANDBY_ROLE" -C CHECKPOINT_PERIOD=3 -C
SESSIONS=1500 -C REPL_WAIT_TIMEOUT=30000 -X
l Table 2-15 lists the install.py parameters that must be specified during installation. For
details about install.py, see Database Management Tools > install.py in GaussDB 100
V300R001C00 Operation Guide to Tools (Standalone).
l During installation, you can use the default optimized configuration of zengine.ini or use
the -C parameter to replace the initial configuration. Table 2-16 describes the common
parameters that you need to pay attention to. Table 2-17 describes the parameters that
you need to pay attention to during the configuration of primary and standby databases.
NOTE
Running install.py will preliminarily check whether the memory size is valid. If the result does
not meet the installation requirements of GaussDB 100, the system will exit installation.
The memory size can be calculated as follows:
SGA_BUFF_SIZE = LOG_BUFFER_SIZE + SHARED_POOL_SIZE + DATA_BUFFER_SIZE + TEMP_BUFFER_SIZE
The memory size (SGA_BUFF_SIZE) must be in the range [114 MB, shmmax) where shmmax
is a Linux kernel parameter and defines the maximum size of a single shared memory segment.
If -C is not used to specify the above four parameters during install.py running, their default
values will be used in the system check.
For details about parameters, see section "Parameters" in GaussDB 100 V300R001C00
Database Reference (Standalone).
l Running install.py creates a default database based on the database creation template. If
you need to customize a database, modify the template first.
The save path of the template is /opt/software/gaussdb/GAUSSDB100-V300R001C00-
DATABASE-EULER20SP8-64bit/GAUSSDB100-V300R001C00-RUN-
EULER20SP8-64bit/admin/scripts/create_database.sample.sql.
The database creation template is as follows:
CREATE DATABASE gauss CHARACTER SET binary CONTROLFILE
('?/data/cntl1',
'?/data/cntl2',
'?/data/cntl3')
LOGFILE
('?/data/log1' size 2G,
'?/data/log2' size 2G,
'?/data/log3' size 2G,
'?/data/log4' size 2G,
'?/data/log5' size 2G,
'?/data/log6' size 2G)
SYSTEM TABLESPACE DATAFILE
'?/data/system' size 1G
UNDO TABLESPACE DATAFILE
'?/data/undo' size 1G
DEFAULT TABLESPACE DATAFILE
'?/data/user1' size 1G autoextend on next 32M,
'?/data/user2' size 1G autoextend on next 32M,
'?/data/user3' size 1G autoextend on next 32M,
'?/data/user4' size 1G autoextend on next 32M,
'?/data/user5' size 1G autoextend on next 32M
TEMPORARY TABLESPACE TEMPFILE
'?/data/temp1' size 160M autoextend on next 32M,
'?/data/temp2' size 160M autoextend on next 32M
NOLOGGING TABLESPACE TEMPFILE
'?/data/temp2_01' size 160M autoextend on next 32M
NOLOGGING UNDO TABLESPACE TEMPFILE
'?/data/temp2_undo' size 160M autoextend on next 32M
ARCHIVELOG;
When you manually create a database creation template, the constraints on file
parameters are as follows:
– CONTROLFILE: Specifies a control file. The minimum number of control files is
2, and the file size is always 10 MB.
– LOGFILE: Specifies a log file. The minimum number of log files is 3, and the
minimum file size is 56 MB plus 16 KB plus the value of LOG_BUFFER_SIZE.
Step 6 On standby node 1, go to the decompressed folder and run install.py with the -O parameter
specified.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.2 -C LSNR_PORT=1888 -C REPL_PORT=1889 -C
"ARCHIVE_DEST_2 = SERVICE=192.168.0.1:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_3
= SERVICE=192.168.0.3:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_4 =
SERVICE=192.168.0.1:1889 STANDBY_ROLE" -O -X
Step 7 On standby node 2, go to the decompressed folder and run install.py with the -O parameter
specified.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.3 -C LSNR_PORT=1888 -C REPL_PORT=1889 -C
"ARCHIVE_DEST_2 = SERVICE=192.168.0.4:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_3
= SERVICE=192.168.0.1:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_4 =
SERVICE=192.168.0.4:1889 STANDBY_ROLE" -O -X
Step 8 On the cascaded standby node, go to the decompressed folder and run install.py with the -O
parameter specified.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.4 -C LSNR_PORT=1888 -C REPL_PORT=1889 -C
"ARCHIVE_DEST_2 = SERVICE=192.168.0.3:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_3
= SERVICE=192.168.0.1:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_4 =
SERVICE=192.168.0.3:1889 STANDBY_ROLE" -O -X
Step 9 Rebuild the databases on the two standby nodes and the cascaded standby node.
su - gaussdba
cd /opt/gaussdb/app/bin
python zctl.py -t build
NOTE
In this step, you must strictly follow the sequence: rebuild the standby nodes first and then the cascaded standby node.
Step 10 Check whether the primary/standby relationship is successfully created on the primary,
standby, and cascaded standby nodes.
zsql SYS/Changeme_123@127.0.0.1:1888
The default administrator of GaussDB 100 is SYS and its default password is
Changeme_123. To ensure information security, change the password of SYS as soon as
possible. For more connection modes, see Connecting to a Database.
SELECT DATABASE_ROLE FROM DV_DATABASE;
For details about how to configure security hardening after a database is installed, see GaussDB
100 V300R001C00 Security Hardening Guide (Standalone).
Step 11 Change the password of the database administrator SYS for the primary database.
ALTER USER SYS IDENTIFIED BY database_123 REPLACE Changeme_123;
In this command, database_123 is the new password of SYS. After the password is changed
for the primary database, the passwords of the users for standby and cascaded standby
databases will be automatically changed, and no manual change is needed.
----End
l One primary database with N standby databases and N cascaded standby databases
The primary database communicates with the standby databases, which in turn
communicate with the cascaded standby databases, effectively reducing the load on the
primary database.
The primary and standby databases can be set to either the SYNC or ASYNC mode, and
the standby and cascaded standby databases support only ASYNC replication. When the
primary database is faulty, a standby database can be promoted to primary. The cascaded
standby databases are only for DR purposes, and are placed remotely. They replicate data
from standby databases and have little impact on service environment performance. If
the primary database and all standby databases are faulty, a cascaded standby database
can also be promoted to primary.
This section uses one primary database, two standby databases, and one cascaded standby
database as an example to illustrate the standalone HA deployment, as shown in Figure 2-7.
Figure 2-7 One primary database, two standby databases, and one cascaded standby database
The following describes the parameter configurations for deploying one primary database
with two standby databases and one cascaded standby database (instance A is used as an
example).
l If a database is deployed in HA mode, perform security hardening on both the primary and standby
databases.
l The compatible mode is an enhancement of the simple mode. After compatible packages are
installed, GaussDB 100 offers compatibility with the interface names of mainstream databases.
Prerequisites
l The installation has been planned following instructions provided in Planning for
Installation.
l The installation package has been uploaded to the /opt/software/ directory.
l The compatible package DIALECT-SCRIPT-xxxxx.tar.gz has been obtained and
uploaded to the same directory as the installation package.
l If a non-root user is used to install the database, ensure that this user is the owner of the
installation directory and has permissions of at most 0750 on it.
l If the firewall service is enabled on servers, ensure that the primary and standby
databases have been added to the trust zones for each other.
Precautions
l To install multiple instances on the same server, plan different listening ports and data
directories.
l Before reinstalling GaussDB 100, ensure that operations in Uninstalling a Database
have been completed. Otherwise, the reinstallation may fail.
l It is recommended that the data and log directories of primary and standby databases be
the same. If they are different, you can use the DB_FILE_NAME_CONVERT and
LOG_FILE_NAME_CONVERT parameters for conversion. Assume that the data
directory of a primary database is /home/user1/zenith_home1/data and that of a
standby database is /home/user1/zenith_home2/data. Set
DB_FILE_NAME_CONVERT to /home/user1/zenith_home2/data,/home/user1/
zenith_home1/data on the primary database and to /home/user1/zenith_home1/data,/
home/user1/zenith_home2/data on the standby database. In this way, even if the
standby database is promoted to primary, the primary/standby relationship will not be
affected.
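The conversion precaution above can be expressed as concrete -C flags. This is a sketch using the example paths from the precaution; the "old,new" ordering follows the text, and LOG_FILE_NAME_CONVERT takes the same form for log directories. Verify the exact syntax in the Database Reference before use.

```shell
# Sketch: build the DB_FILE_NAME_CONVERT values for the example primary and
# standby data directories from the precaution above.
PRIMARY_DATA=/home/user1/zenith_home1/data
STANDBY_DATA=/home/user1/zenith_home2/data

# -C value for the primary database (maps the standby's paths to its own):
primary_flag="DB_FILE_NAME_CONVERT=${STANDBY_DATA},${PRIMARY_DATA}"
# -C value for the standby database (the reverse mapping):
standby_flag="DB_FILE_NAME_CONVERT=${PRIMARY_DATA},${STANDBY_DATA}"

echo "primary: -C \"$primary_flag\""
echo "standby: -C \"$standby_flag\""
```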
Procedure
Assume that the IP address of the primary database is 192.168.0.1, that of standby database 1
is 192.168.0.2, that of standby database 2 is 192.168.0.3, and that of the cascaded standby
database is 192.168.0.4; and assume that the database listening port is 1888 and the
communication port is 1889 for all the four databases. The procedure for installing the four
databases is as follows:
Step 1 Log in to the primary, standby, and cascaded standby databases of GaussDB 100 as user root
to perform Step 2 to Step 5 below.
Step 2 Create an installation user and its user group, and configure permissions to be less than or
equal to 0750.
groupadd dbgrp
useradd -g dbgrp -d /home/gaussdba -m -s /bin/bash gaussdba
Step 3 Create the /opt/software/gaussdb directory for storing the installation package and the
GaussDB 100 directory /opt/gaussdb as planned.
mkdir -p /opt/software/gaussdb
mkdir -p /opt/gaussdb
Step 4 Upload the installation package and compatible package to the created directory. The
installation package and compatible package must be stored in the same directory.
Step 5 Decompress the installation package.
cd /opt/software/gaussdb
tar -zxvf GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz
Step 6 On the primary node, go to the decompressed folder and run install.py.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.1 -C LSNR_PORT=1888 -C REPL_PORT=1889 -C
"ARCHIVE_DEST_2=SERVICE=192.168.0.2:1889 SYNC PRIMARY_ROLE" -C
"ARCHIVE_DEST_3=SERVICE=192.168.0.3:1889 SYNC PRIMARY_ROLE" -C
"ARCHIVE_DEST_4=SERVICE=192.168.0.2:1889 STANDBY_ROLE" -C CHECKPOINT_PERIOD=3 -C
SESSIONS=1500 -C REPL_WAIT_TIMEOUT=30000
l Table 2-18 lists the install.py parameters that must be specified during installation. For
details about install.py, see Database Management Tools > install.py in GaussDB 100
V300R001C00 Operation Guide to Tools (Standalone).
l During installation, you can use the default optimized configuration of zengine.ini or use
the -C parameter to replace the initial configuration. Table 2-19 describes the common
parameters that you need to pay attention to. Table 2-20 describes the parameters that
you need to pay attention to during the configuration of primary and standby databases.
NOTE
Running install.py will preliminarily check whether the memory size is valid. If the result does
not meet the installation requirements of GaussDB 100, the system will exit installation.
The memory size can be calculated as follows:
SGA_BUFF_SIZE = LOG_BUFFER_SIZE + SHARED_POOL_SIZE + DATA_BUFFER_SIZE + TEMP_BUFFER_SIZE
The memory size (SGA_BUFF_SIZE) must be in the range [114 MB, shmmax) where shmmax
is a Linux kernel parameter and defines the maximum size of a single shared memory segment.
If -C is not used to specify the above four parameters during install.py running, their default
values will be used in the system check.
For details about parameters, see section "Parameters" in GaussDB 100 V300R001C00
Database Reference (Standalone).
l Running install.py creates a default database based on the database creation template. If
you need to customize a database, modify the template first.
The save path of the template is /opt/software/gaussdb/GAUSSDB100-V300R001C00-
DATABASE-EULER20SP8-64bit/GAUSSDB100-V300R001C00-RUN-
EULER20SP8-64bit/admin/scripts/create_database.sample.sql.
The database creation template is as follows:
CREATE DATABASE gauss CHARACTER SET binary CONTROLFILE
('?/data/cntl1',
'?/data/cntl2',
'?/data/cntl3')
LOGFILE
('?/data/log1' size 2G,
'?/data/log2' size 2G,
'?/data/log3' size 2G,
'?/data/log4' size 2G,
'?/data/log5' size 2G,
'?/data/log6' size 2G)
SYSTEM TABLESPACE DATAFILE
'?/data/system' size 1G
UNDO TABLESPACE DATAFILE
'?/data/undo' size 1G
DEFAULT TABLESPACE DATAFILE
'?/data/user1' size 1G autoextend on next 32M,
'?/data/user2' size 1G autoextend on next 32M,
'?/data/user3' size 1G autoextend on next 32M,
'?/data/user4' size 1G autoextend on next 32M,
'?/data/user5' size 1G autoextend on next 32M
TEMPORARY TABLESPACE TEMPFILE
'?/data/temp1' size 160M autoextend on next 32M,
'?/data/temp2' size 160M autoextend on next 32M
NOLOGGING TABLESPACE TEMPFILE
'?/data/temp2_01' size 160M autoextend on next 32M
NOLOGGING UNDO TABLESPACE TEMPFILE
'?/data/temp2_undo' size 160M autoextend on next 32M
ARCHIVELOG;
When you manually create a database creation template, the constraints on file
parameters are as follows:
– CONTROLFILE: Specifies a control file. The minimum number of control files is
2, and the file size is always 10 MB.
– LOGFILE: Specifies a log file. The minimum number of log files is 3, and the
minimum file size is 56 MB plus 16 KB plus the value of LOG_BUFFER_SIZE.
Step 7 On standby node 1, go to the decompressed folder and run install.py with the -O parameter
specified.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.2 -C LSNR_PORT=1888 -C REPL_PORT=1889 -C
"ARCHIVE_DEST_2 = SERVICE=192.168.0.1:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_3
= SERVICE=192.168.0.3:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_4 =
SERVICE=192.168.0.1:1889 STANDBY_ROLE" -O
Step 8 On standby node 2, go to the decompressed folder and run install.py with the -O parameter
specified.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.3 -C LSNR_PORT=1888 -C REPL_PORT=1889 -C
"ARCHIVE_DEST_2 = SERVICE=192.168.0.4:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_3
= SERVICE=192.168.0.1:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_4 =
SERVICE=192.168.0.4:1889 STANDBY_ROLE" -O
Step 9 On the cascaded standby node, go to the decompressed folder and run install.py with the -O
parameter specified.
cd GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
python install.py -U gaussdba:dbgrp -R /opt/gaussdb/app -D /opt/gaussdb/data -C
LSNR_ADDR=127.0.0.1,192.168.0.4 -C LSNR_PORT=1888 -C REPL_PORT=1889 -C
"ARCHIVE_DEST_2 = SERVICE=192.168.0.3:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_3
= SERVICE=192.168.0.1:1889 SYNC PRIMARY_ROLE" -C "ARCHIVE_DEST_4 =
SERVICE=192.168.0.3:1889 STANDBY_ROLE" -O
Step 10 Rebuild the databases on the two standby nodes and the cascaded standby node.
su - gaussdba
cd /opt/gaussdb/app/bin
python zctl.py -t build
NOTE
In this step, you must strictly follow the sequence: rebuild the standby nodes first and then the cascaded standby node.
Step 11 Check whether the primary/standby relationship is successfully created on the primary,
standby, and cascaded standby nodes.
zsql SYS/Changeme_123@127.0.0.1:1888
The default administrator of GaussDB 100 is SYS and its default password is
Changeme_123. To ensure information security, change the password of SYS as soon as
possible. For more connection modes, see Connecting to a Database.
SELECT DATABASE_ROLE FROM DV_DATABASE;
For details about how to configure security hardening after a database is installed, see GaussDB
100 V300R001C00 Security Hardening Guide (Standalone).
Step 12 Change the password of the database administrator SYS for the primary database.
ALTER USER SYS IDENTIFIED BY database_123 REPLACE Changeme_123;
In this command, database_123 is the new password of SYS. After the password is changed
for the primary database, the passwords of the users for standby and cascaded standby
databases will be automatically changed, and no manual change is needed.
----End
Prerequisites
The database has been correctly installed.
Procedure
Step 1 Log in to a server where GaussDB 100 is deployed as user gaussdba.
Step 2 Go to the bin folder in the installation directory and execute the uninstallation script
uninstall.py:
cd $GSDB_HOME/bin
python uninstall.py -U gaussdba -F -D $GSDB_DATA -g withoutroot
The uninstallation log is ~/zengineuninstall.log. For details about uninstall.py, see Database
Management Tools > uninstall.py in GaussDB 100 V300R001C00 Operation Guide to Tools
(Standalone).
----End
Prerequisites
l Upgrades are supported only between adjacent C versions. Ensure that the target and
source versions are adjacent C versions. If an upgrade across multiple versions is needed,
perform an upgrade for each version in sequence. If the database versions before and
after an upgrade are the same, the upgrade is not supported.
l Reserve 10 to 60 minutes for an upgrade. Upgrade time depends on hardware
performance in the product environment, service load before service stop, and network
performance between primary and standby databases.
l Back up important data before an upgrade. A full backup is recommended.
l Ensure that the reserved space of a disk where the target database is deployed is no less
than the space occupied by system catalog files. (The reserved space is for backup
operations.) Otherwise, the upgrade will fail.
l The lsof tool has been installed in the system.
l The Python version is 2.7.*.
l Before an upgrade, prepare the installation package for the upgrade and verify the
integrity of the installation package. For details about how to verify an installation
package, see Obtaining and Verifying an Installation Package.
l Ensure that the database user (for example, gaussdba) has permissions of at most 0750
on the upgrade installation package. Otherwise, the upgrade will fail and a rollback will
be needed.
l Before an upgrade, ensure that the database instance is running properly, can be started
and stopped, and can process service requests. Otherwise, the upgrade will fail and a
rollback will be needed.
l Services must be stopped before an upgrade.
l Ensure that no other control software (such as CloudSOP or DBM) is used to stop or
start database instances, perform primary/standby switchovers, disaster recovery, or
backup operations, or trigger scheduled jobs. If such software is in use, the upgrade may
fail and rollback cannot be performed.
l Ensure that the network between primary and standby databases is normal before an HA
upgrade. Otherwise, the upgrade will fail and rollback will be needed.
l For an HA manual upgrade, ensure a consistent password for user SYS of different
database instances started from the same program directory. Otherwise, the upgrade will
fail.
l For an HA automatic upgrade, ensure a consistent password for user SYS of different
database instances that need to be upgraded on all nodes. Otherwise, the upgrade will
fail.
l For an HA automatic upgrade, the mutual trust relationships must be consistently
configured between nodes. That is, each node must either have or not have a mutual trust
relationship. If there is an inconsistency, the upgrade will fail.
l For an HA automatic upgrade, the value of ChallengeResponseAuthentication
in /etc/ssh/sshd_config must be no. Otherwise, the execution of pre-check will fail.
l A compatible package must be installed for an upgraded database. The package
provides compatibility with the view names of mainstream databases without affecting users.
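The ChallengeResponseAuthentication prerequisite can be verified on each node before starting an HA automatic upgrade. A sketch:

```shell
# Sketch: confirm the sshd setting required before an HA automatic upgrade
# (the pre-check fails unless ChallengeResponseAuthentication is "no").
f=/etc/ssh/sshd_config
if [ -r "$f" ]; then
  grep -i '^[[:space:]]*ChallengeResponseAuthentication' "$f" \
    || echo 'ChallengeResponseAuthentication not set explicitly; check the default'
else
  echo "$f not readable on this host"
fi
```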
Precautions
If password-free login is disabled in zsql, you need to use the -P parameter to enter
the password for the upgrade.
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
Step 4 Configure an upgrade file config_file.ini.
The config_file.ini file needs to be created manually. Assume that the IP address of the node
where the standalone database resides is 192.168.0.1. The format of the node configuration is
as follows:
192.168.0.1=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
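The node line above can be written to config_file.ini with a here-document. A sketch: CFG defaults to the current directory here, while the upgrade command in this guide reads the file from /opt/gaussdb; the line format is node IP=upgrade package path,installation path,backup path,data file path.

```shell
# Sketch: generate the standalone config_file.ini described above.
CFG=${CFG:-config_file.ini}
cat > "$CFG" <<'EOF'
192.168.0.1=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
EOF
echo "wrote $CFG"
```

For an HA automatic upgrade, add one such line per node, with the node that runs the upgrade command on the first line.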
If the upgrade fails, you need to perform rollback and then run the upgrade command again.
For details about the rollback procedure, see Procedure (Standalone Automatic Rollback).
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Upload the installation package of the target version and the compatible package DIALECT-
SCRIPT-xxxxx.tar.gz to the same directory.
Step 3 Decompress the installation package of the target version and obtain the upgrade script
upgrade.py.
tar -zxvf GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
The config_file.ini file needs to be created manually. For an HA automatic upgrade, you need
to add information about each node. The configuration information of each node is in a
separate line, and the configuration information in the first line must be about the node for
running the upgrade command. Information behind the equal sign (=) must be arranged in the
following sequence: Upgrade package path, Database installation path, Backup path,
Database instance data file path.
Assume that the HA deployment involves nodes 192.168.0.1, 192.168.0.2, and 192.168.0.3
and that the upgrade command will be executed on node 192.168.0.1. The format of the node
configuration is as follows:
192.168.0.1=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
192.168.0.2=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
192.168.0.3=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
Step 7 Perform the upgrade check and prepare the upgrade package and environment for remote
nodes.
python upgrade.py -s pre-check --config-file=/opt/gaussdb/config_file.ini --
upgrade-mode=ha
If the upgrade fails, you need to perform rollback and then run the upgrade command again.
For details about the rollback procedure, see Procedure (HA Automatic Rollback).
----End
A standalone manual upgrade consists of nine steps: pretest, precheck, prepare, replace,
start, upgrade, sync, dbcheck, and flush. The specific procedure is as follows:
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Upload the installation package of the target version and the compatible package DIALECT-
SCRIPT-xxxxx.tar.gz to the same directory.
Step 3 Decompress the installation package of the target version and obtain the upgrade script
upgrade.py.
tar -zxvf GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
NOTE
l When multiple instances are installed, the database instances started from the same
installation directory must all be upgraded at the same time. In this case, --GSDB_DATA specifies the data
directories of the multiple database instances, separated by commas (,).
l In steps 5 through 14, use --GSDB_HOME and --GSDB_DATA to explicitly specify paths. These two
parameters take priority over the GSDB_HOME and GSDB_DATA environment variables.
If they are not specified, the values in the environment variables
will be used.
l The generated log file is upgrade.log, which is stored in the backup directory specified by --
backupdir=/opt/gaussdb/backup. A rollback is needed if any of steps 7 to 14 in a standalone
manual upgrade fails; it is not needed if step 6 (pretest) fails. For details about the rollback procedure,
see Procedure (Standalone Manual Rollback).
Step 5 (Optional) Run the upgrade-type command to obtain the upgrade type.
python upgrade.py -t upgrade-type --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 6 (Optional) Run the pretest command to examine whether the current database environment is
suitable for the upgrade.
python upgrade.py -t pretest --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
----End
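For reference, the nine phases of a standalone manual upgrade map to successive upgrade.py invocations, one -t value per phase. The sketch below prints the command lines with echo instead of executing them; all paths are the placeholder values used in this guide.

```shell
# Sketch: one upgrade.py invocation per manual-upgrade phase.
# echo prints each command line; nothing is executed against a database.
for STEP in pretest precheck prepare replace start upgrade sync dbcheck flush; do
  echo python upgrade.py -t "$STEP" \
    --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/gaussdb/data \
    --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz \
    --backupdir=/opt/gaussdb/backup
done
```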
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Upload the installation package of the target version and the compatible package DIALECT-
SCRIPT-xxxxx.tar.gz to the same directory on all physical machines where primary and
standby databases are installed.
Step 3 Decompress the installation package of the target version and obtain the upgrade script
upgrade.py on all physical machines where primary and standby databases are installed.
tar -zxvf GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
NOTE
l When multiple instances are installed, the database instances that are started from the same installation directory must be upgraded at the same time. In this case, --GSDB_DATA specifies the data directories of multiple database instances, separated by commas (,).
l In Step 5 through Step 14, use --GSDB_HOME and --GSDB_DATA to explicitly specify a path. These two parameters take precedence over GSDB_HOME and GSDB_DATA in the environment variables. If the two parameters are not specified, the default configuration in the environment variables is used.
l The generated log file is upgrade.log, which is stored in the backup directory specified by --backupdir=/opt/gaussdb/backup. Rollback is needed if any step from Step 7 to Step 14 in an HA manual upgrade fails; it is not needed if Step 6 (pretest) fails. For details about the rollback procedure, see Procedure (HA Manual Rollback).
Step 4 Go to the directory where upgrade.py is stored on all physical machines where primary and
standby databases are installed.
cd /opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
Step 5 (Optional) Run the upgrade-type command on the current physical machine to obtain the
upgrade type.
python upgrade.py -t upgrade-type --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 6 (Optional) Run the pretest command on all physical machines where primary and standby
databases are installed to check whether the current database environment is suitable for the
upgrade.
python upgrade.py -t pretest --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 7 Run the precheck command on all physical machines where primary and standby databases
are installed.
python upgrade.py -t precheck --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 8 Run the prepare command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t prepare --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 9 Run the replace command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t replace --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 10 Run the start command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t start --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/gaussdb/
data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 11 Run the upgrade command on all physical machines where primary and standby databases
are installed.
python upgrade.py -t upgrade --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 12 Run the sync command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t sync --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/gaussdb/
data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 13 Run the dbcheck command on all physical machines where primary and standby databases
are installed.
python upgrade.py -t dbcheck --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 14 Run the flush command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t flush --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/gaussdb/
data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
----End
Prerequisites
l Downgrades are supported only between adjacent C versions. Ensure that the target and source versions are adjacent C versions. To downgrade across multiple versions, perform the downgrades one version at a time, in sequence. Downgrading to the same version is not supported.
l Reserve 10 to 60 minutes for a downgrade. The downgrade time depends on the hardware performance of the production environment, the service load before services are stopped, and the network performance between the primary and standby databases.
l Back up important data before a downgrade. A full backup is recommended.
l The disk where the target database is deployed must reserve at least as much space as the system catalog files occupy. (The reserved space is used for backup operations.) Otherwise, the downgrade will fail.
l The lsof tool has been installed in the system.
l The Python version is 2.7.*.
l Before a downgrade, prepare the installation package for the downgrade and verify the
integrity of the installation package. For details about how to verify an installation
package, see Obtaining and Verifying an Installation Package.
l Ensure that the database user (for example, gaussdba) has permissions on the downgrade installation package that are no more permissive than 0750. Otherwise, the downgrade will fail and a rollback will be needed.
l Before a downgrade, ensure that the database instance is running properly, can be started
and stopped, and can perform services. Otherwise, the downgrade will fail and rollback
will be needed.
l Services must be stopped before a downgrade.
l Ensure that no other control software (such as CloudSOP or DBM) is used to stop or start database instances, perform primary/standby switchovers, disaster recovery, or backup, or trigger scheduled jobs. If such software is in use, the downgrade may fail and rollback cannot be performed.
l Ensure that the network between primary and standby databases is normal before an HA
downgrade. Otherwise, the downgrade will fail and rollback will be needed.
l For an HA manual downgrade, ensure that the password of user SYS is the same for all database instances started from the same program directory. Otherwise, the downgrade will fail.
l For an HA automatic downgrade, ensure that the password of user SYS is the same for all database instances to be downgraded on all nodes. Otherwise, the downgrade will fail.
l For an HA automatic downgrade, mutual trust relationships must be configured consistently between nodes: either every node has a mutual trust relationship, or none does. Any inconsistency will cause the downgrade to fail.
l For an HA automatic downgrade, the value of ChallengeResponseAuthentication
in /etc/ssh/sshd_config must be no. Otherwise, the execution of pre-check will fail.
l If a downgrade package supports the compatible package, the degraded database will be
in compatible mode.
l Before downgrading a database, ensure that the database is running properly.
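The package-permission prerequisite (permissions no more permissive than 0750) can be checked with a short script. This is a sketch; the package path below is a placeholder file, not a real installation package.

```shell
# Hypothetical permission check: the package must set no bits outside rwxr-x--- (0750).
PKG=/tmp/demo-package.tar.gz        # placeholder file standing in for the real package
touch "$PKG"
chmod 0750 "$PKG"
PERM=$(stat -c '%a' "$PKG")         # octal permission string, e.g. "750"
# Acceptable when no permission bit outside 0750 is set.
if [ $(( 0$PERM & ~0750 & 0777 )) -eq 0 ]; then
  echo "package permissions $PERM are acceptable"
else
  echo "package permissions $PERM exceed 0750"
fi
```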
Precautions
l If password-free login is disabled in zsql, use the -P parameter to enter the password for a downgrade.
l For a downgraded database, the next upgrade must use a version of the installation
package later than that before the downgrade.
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Upload the downgrade installation package.
You must have the read, write, and execute permissions for the directory where the
downgrade installation package is stored. If the downgraded version supports the compatible
package, you need to upload the installation package of the downgraded version and the
compatible package DIALECT-SCRIPT-3.1.0.0.0.tar.gz to the same directory.
Step 3 Create the upgradetool folder in the installation directory of the current database to store the
tool scripts used for the downgrade.
cd $GSDB_HOME
mkdir upgradetool
chmod 700 upgradetool
Step 4 Copy upgrade.py, sshexkey.py, and funclib.py in the installation package of the current
database to the created upgradetool folder.
Assume that the installation package path of the current database is /opt/gaussdb/
GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.
cd /opt/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
cp upgrade.py sshexkey.py funclib.py $GSDB_HOME/upgradetool
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
The config_file.ini file needs to be created manually. Assume that the IP address of the node
where the standalone database resides is 192.168.0.1. The format of the node configuration is
as follows:
192.168.0.1=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
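The node configuration line can be generated and sanity-checked with standard tools. The sketch below writes the single-node example above to a temporary file and confirms the line carries the expected four comma-separated fields after the IP address.

```shell
# Write the single-node config_file.ini example and verify its shape:
# <node IP>=<package path>,<app dir>,<backup dir>,<data dir>
cat > /tmp/config_file.ini <<'EOF'
192.168.0.1=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
EOF
# Count the comma-separated fields to the right of "=" on each line.
awk -F'=' '{ n = split($2, f, ","); print $1, "->", n, "fields" }' /tmp/config_file.ini
```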
When multiple instances are installed, the database instances that are started from the same
installation directory must be downgraded at a time. In this case, --GSDB_DATA specifies
the data directories of multiple database instances, separated by commas (,). The directory specified by --backupdir needs to be created manually.
python upgrade.py -t upgrade-type --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
If the downgrade fails, you need to perform rollback and then run the downgrade command
again. For details about the rollback procedure, see Procedure (Standalone Automatic
Rollback).
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Upload the downgrade installation package.
You must have the read, write, and execute permissions for the directory where the
downgrade installation package is stored. If the downgraded version supports the compatible
package, you need to upload the installation package of the downgraded version and the
compatible package DIALECT-SCRIPT-3.1.0.0.0.tar.gz to the same directory.
Step 3 Create the upgradetool folder in the installation directory of the current database to store the
tool scripts used for the downgrade.
cd $GSDB_HOME
mkdir upgradetool
chmod 700 upgradetool
Step 4 Copy upgrade.py, sshexkey.py, and funclib.py in the installation package of the current
database to the created upgradetool folder.
Assume that the installation package path of the current database is /opt/gaussdb/
GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.
cd /opt/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
cp upgrade.py sshexkey.py funclib.py $GSDB_HOME/upgradetool
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
192.168.0.1=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
192.168.0.2=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
192.168.0.3=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz,/opt/gaussdb/app,/opt/gaussdb/backup,/opt/gaussdb/data
When multiple instances are installed, the database instances that are started from the same
installation directory must be downgraded at a time. In this case, --GSDB_DATA specifies
the data directories of multiple database instances, separated by commas (,). The directory specified by --backupdir needs to be created manually.
python upgrade.py -t upgrade-type --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 8 Perform the downgrade check and prepare the downgrade package and environment for
remote nodes.
python upgrade.py -s pre-check --config-file=/opt/gaussdb/config_file.ini --
upgrade-mode=ha
If the downgrade fails, you need to perform rollback and then run the downgrade command
again. For details about the rollback procedure, see Procedure (HA Automatic Rollback).
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Upload the downgrade installation package.
You must have the read, write, and execute permissions for the directory where the
downgrade installation package is stored. If the downgraded version supports the compatible
package, you need to upload the installation package of the downgraded version and the
compatible package DIALECT-SCRIPT-3.1.0.0.0.tar.gz to the same directory.
Step 3 Create the upgradetool folder in the installation directory of the current database to store the
tool scripts used for the downgrade.
cd $GSDB_HOME
mkdir upgradetool
chmod 700 upgradetool
Step 4 Copy upgrade.py, sshexkey.py, and funclib.py in the installation package of the current
database to the created upgradetool folder.
Assume that the installation package path of the current database is /opt/gaussdb/
GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.
cd /opt/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
cp upgrade.py sshexkey.py funclib.py $GSDB_HOME/upgradetool
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
NOTE
l When multiple instances are installed, the database instances that are started from the same installation directory must be downgraded at the same time. In this case, --GSDB_DATA specifies the data directories of multiple database instances, separated by commas (,).
l In Step 6 through Step 18, use --GSDB_HOME and --GSDB_DATA to explicitly specify a path. These two parameters take precedence over GSDB_HOME and GSDB_DATA in the environment variables. If the two parameters are not specified, the default configuration in the environment variables is used.
l The generated log file is upgrade.log, which is stored in the backup directory specified by --backupdir=/opt/gaussdb/backup. Rollback is needed if any step from Step 8 to Step 18 in a standalone manual downgrade fails; it is not needed if Step 7 (pretest) fails. For details about the rollback procedure, see Procedure (Standalone Manual Rollback).
Step 5 Go to the directory where upgrade.py is stored, that is, the manually created upgradetool
folder.
cd $GSDB_HOME/upgradetool
Step 6 (Optional) Run the upgrade-type command to obtain the downgrade type.
python upgrade.py -t upgrade-type --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 7 (Optional) Run the pretest command to examine whether the current database environment is
suitable for the downgrade.
python upgrade.py -t pretest --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Upload the downgrade installation package on all physical machines where primary and
standby databases are installed.
You must have the read, write, and execute permissions for the directory where the
downgrade installation package is stored. If the downgraded version supports the compatible
package, you need to upload the installation package of the downgraded version and the
compatible package DIALECT-SCRIPT-3.1.0.0.0.tar.gz to the same directory.
Step 3 Create the upgradetool folder in the installation directory of the current database on all
physical machines where primary and standby databases are installed to store the tool scripts
used for the downgrade.
cd $GSDB_HOME
mkdir upgradetool
chmod 700 upgradetool
Step 4 Copy upgrade.py, sshexkey.py, and funclib.py in the installation package of the current
database to the created upgradetool folder on all physical machines where primary and
standby databases are installed.
Assume that the installation package path of the current database is /opt/gaussdb/
GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit.
cd /opt/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
cp upgrade.py sshexkey.py funclib.py $GSDB_HOME/upgradetool
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
NOTE
l When multiple instances are installed, the database instances that are started from the same installation directory must be downgraded at the same time. In this case, --GSDB_DATA specifies the data directories of multiple database instances, separated by commas (,).
l In Step 6 through Step 18, use --GSDB_HOME and --GSDB_DATA to explicitly specify a path. These two parameters take precedence over GSDB_HOME and GSDB_DATA in the environment variables. If the two parameters are not specified, the default configuration in the environment variables is used.
l The generated log file is upgrade.log, which is stored in the backup directory specified by --backupdir=/opt/gaussdb/backup. Rollback is needed if any step from Step 8 to Step 18 in an HA manual downgrade fails; it is not needed if Step 7 (pretest) fails. For details about the rollback procedure, see Procedure (HA Manual Rollback).
Step 5 Go to the directory where upgrade.py is stored on all physical machines where primary and
standby databases are installed, that is, the manually created upgradetool folder.
cd $GSDB_HOME/upgradetool
Step 6 (Optional) Run the upgrade-type command on the current physical machine to obtain the
downgrade type.
python upgrade.py -t upgrade-type --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 7 (Optional) Run the pretest command on all physical machines where primary and standby
databases are installed to check whether the current database environment is suitable for the
downgrade.
python upgrade.py -t pretest --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 8 Run the precheck command on all physical machines where primary and standby databases
are installed.
python upgrade.py -t precheck --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 9 Run the prepare command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t prepare --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 10 Run the replace command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t replace --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 11 Run the start command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t start --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/gaussdb/
data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 12 Run the upgrade command on all physical machines where primary and standby databases
are installed.
python upgrade.py -t upgrade --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 13 Run the sync command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t sync --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/gaussdb/
data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 14 Run the restart command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t restart --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 15 Run the upgrade-view command on all physical machines where primary and standby
databases are installed.
python upgrade.py -t upgrade-view --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 16 Run the checkpoint command on all physical machines where primary and standby databases
are installed.
python upgrade.py -t checkpoint --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 17 Run the dbcheck command on all physical machines where primary and standby databases
are installed.
python upgrade.py -t dbcheck --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
Step 18 Run the flush command on all physical machines where primary and standby databases are
installed.
python upgrade.py -t flush --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/gaussdb/
data --package=/opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-
EULER20SP8-64bit.tar.gz --backupdir=/opt/gaussdb/backup
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
l To roll back a database when its downgrade fails, go to the directory where upgrade.py
is stored.
cd $GSDB_HOME/upgradetool
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
----End
The following steps must be performed on the node where the python upgrade.py -s run --
config-file=/opt/gaussdb/config_file.ini --upgrade-mode=ha command has been executed.
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Go to the directory where upgrade.py is stored.
l To roll back a database when its upgrade fails, go to the directory where upgrade.py is
stored.
cd /opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
l To roll back a database when its downgrade fails, go to the directory where upgrade.py
is stored.
cd $GSDB_HOME/upgradetool
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Go to the directory where upgrade.py is stored.
l To roll back a database when its upgrade fails, go to the directory where upgrade.py is
stored.
cd /opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
l To roll back a database when its downgrade fails, go to the directory where upgrade.py
is stored.
cd $GSDB_HOME/upgradetool
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
NOTE
l When multiple instances are installed, the database instances that are started from the same installation directory must be upgraded at the same time. In this case, --GSDB_DATA specifies the data directories of multiple database instances, separated by commas (,).
l In Step 3 through Step 5, use --GSDB_HOME and --GSDB_DATA to explicitly specify a path. These two parameters take precedence over GSDB_HOME and GSDB_DATA in the environment variables. If the two parameters are not specified, the default configuration in the environment variables is used.
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Go to the directory where upgrade.py is stored on all physical machines where HA primary
and standby databases are installed.
l To roll back a database when its upgrade fails, go to the directory where upgrade.py is
stored.
cd /opt/software/gaussdb/GAUSSDB100-V300R001C00-DATABASE-EULER20SP8-64bit
l To roll back a database when its downgrade fails, go to the directory where upgrade.py
is stored.
cd $GSDB_HOME/upgradetool
For details about upgrade.py, see Database Management Tools > upgrade.py in GaussDB
100 V300R001C00 Operation Guide to Tools (Standalone).
NOTE
l When multiple instances are installed, the database instances that are started from the same installation directory must be upgraded at the same time. In this case, --GSDB_DATA specifies the data directories of multiple database instances, separated by commas (,).
l In Step 3 through Step 5, use --GSDB_HOME and --GSDB_DATA to explicitly specify a path. These two parameters take precedence over GSDB_HOME and GSDB_DATA in the environment variables. If the two parameters are not specified, the default configuration in the environment variables is used.
Step 3 Perform the rollback check on all physical machines where HA primary and standby
databases are installed.
python upgrade.py -t rollback-check --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --backupdir=/opt/gaussdb/backup
Step 4 Perform the rollback on all physical machines where HA primary and standby databases are
installed.
python upgrade.py -t rollback --GSDB_HOME=/opt/gaussdb/app --GSDB_DATA=/opt/
gaussdb/data --backupdir=/opt/gaussdb/backup
Step 5 Delete the rollback configurations on all physical machines where HA primary and standby
databases are installed.
----End
and 16 framework sessions for concurrent SQL execution. The following table describes the
reserved sessions for internal use.
0 Instance status switchover, for example, kernel start and stop, demotion from
primary to standby, and promotion from standby to primary
1 lgwr thread
2 ckpt thread
3 smon thread
6 arch thread
7 rst thread
8 lsnr thread
9 mrp thread
11 fal thread
12 timer thread
13 rollback thread
17 rcy thread
26 stats thread
29 Job thread
Precautions
l Parameter names are case-insensitive.
l Modifications to some parameters take effect only after a database restart.
l You must have the corresponding system permissions to configure parameters.
l Adjusting parameters may affect database system behavior. After installation is complete, do not modify parameters unless necessary. If a parameter must be modified, fully understand its impact on GaussDB 100 before modifying it. Otherwise, unexpected results may occur.
l When optimizing a database, use the parameters provided in Parameters > Advanced
Optimization in GaussDB 100 V300R001C00 Database Reference (Standalone).
DATA_BUFFER_SIZE Size of the data buffer, which caches recently Integer (unit: MB)
accessed data
LOG_BUFFER_SIZE Size of the log buffer, which is used for redo Integer (unit: MB)
logs
l MEMORY: Parameter settings are written into only memory and take effect
immediately but become invalid after a restart. MEMORY is suitable for only dynamic
system parameters.
l PFILE: Parameter settings are written into initial parameter files and take effect after a
restart. PFILE is suitable for both dynamic and static system parameters. The settings of
static system parameters can be written into only initial parameter files.
l BOTH: Parameter settings are written into both initial parameter files and memory, and
take effect immediately. BOTH is suitable for only dynamic system parameters.
Examples
l Change the value of _ENABLE_QOS, an instance parameter, to TRUE.
ALTER SYSTEM SET _ENABLE_QOS = TRUE SCOPE = PFILE;
Succeed.
Assume that the value NOWAIT is used and a fault occurs after the database receives a
commit request but before the redo log records are written. The transaction may then be
falsely told that its changes are persistent. If the database shuts down unexpectedly, this
violates the durability property of ACID (Atomicity, Consistency, Isolation, Durability)
transactions.
Succeed.
Precautions
l The database status can be changed only from NOMOUNT to MOUNT or OPEN,
and from MOUNT to OPEN. It cannot be rolled back from OPEN. To switch to
another status from OPEN, restart the database instance in NOMOUNT or
MOUNT mode.
l When a database is switched from OPEN to its child status, READ WRITE and READ
ONLY can be switched online, and RESETLOGS and RESTRICT can be switched
only when the database is in the NOMOUNT or MOUNT status.
Related Concepts
The GaussDB 100 start process covers four phases: CLOSED, NOMOUNT, MOUNT, and
OPEN. A database administrator can start the database to any phase as needed.
Step 2 Start the database in NOMOUNT or MOUNT mode and connect to it.
gaussdba/database_123 indicates the system administrator created during installation and
the administrator's password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connection port.
----End
gaussdba/database_123 indicates the system administrator created during installation and
the administrator's password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connection port.
----End
– To switch from READ WRITE to READ ONLY, run the following command:
ALTER DATABASE CONVERT TO READONLY;
Status: MOUNT
Description: The database is mounted but not started. In this mode, only database administrators can modify the database; users cannot establish connections or sessions with the database.
Typical operations:
l Rename, add, or delete data files.
l Perform full restoration on a database.
l Change the archiving mode of a database.

Status: READ WRITE
Description: Supports read and write. It is the default status after the database enters OPEN.
Typical operations: This status is used when services are executed normally.

Status: RESTRICT
Description: Loads only core system catalogs, allowing login for user SYS only.
Typical operations:
l This status can be used with upgrade.py. For details, see Upgrading a Database. Other unconventional operations will cause database processes to exit abnormally, resulting in database unavailability.
l In this status, you can run the COMMIT FORCE command to forcibly commit residual transactions. However, new transactions generated in this status cannot be forcibly committed.
Examples
Change the database status.
-- Change the database status to MOUNT:
ALTER DATABASE MOUNT;
-- Change the database status to OPEN:
ALTER DATABASE OPEN;
Procedure
l Add redo log files to a database.
ALTER DATABASE ADD LOGFILE ('/gaussdb/data/log1' size 256M, '/gaussdb/data/
log2' size 256M, '/gaussdb/data/log3' size 256M);
l In primary/standby database deployment, you can set the redo log mode only to
ARCHIVELOG.
l By default, GaussDB 100 uses the archiving mode.
l Change the system protection mode.
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
If the standby database fails to receive the redo logs for the transactions committed by the primary database, the transaction data may be lost.
l Change the size of the data file USER.
ALTER DATABASE DATAFILE 'USER' RESIZE 128M;
– When reducing the file size, ensure it is not smaller than the minimum size required
by the database system. The minimum size required by the SYSTEM tablespace
and UNDO tablespace is 128 MB, and that of other data files is 1 MB.
– When reducing the file size, ensure that the storage area of valid data is not
damaged. Otherwise, the command execution will fail.
– The number of pages occupied by valid data can be obtained from the
HIGH_WATER_MARK column of the DV_DATA_FILES view. For details, see Data
Dictionary and Views > Dynamic Performance Views > DV_DATA_FILES in
GaussDB 100 V300R001C00 Database Reference (Standalone). Note that this
column specifies the number of pages occupied by a data file. To calculate the file
size, multiply the column value by the size of each page.
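As a sketch of the calculation above (the FILE_NAME column name and the 8 KB page size are assumptions; check the actual view definition and your configured page size):

```sql
-- Estimate the space occupied by valid data in each data file:
-- pages at the high-water mark multiplied by an assumed 8 KB page size.
SELECT FILE_NAME, HIGH_WATER_MARK * 8192 AS valid_data_bytes
FROM DV_DATA_FILES;
```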
l The _LOG_LEVEL parameter specifies the run and debug logs to be recorded. If
multiple types of logs need to be recorded, set the parameter to the sum of parameter
values for required log types. Table 3-8 lists parameter values for different log types.
Table 3-8 Settings of _LOG_LEVEL (for run logs and debug logs)
Log Type Decimal Value Binary Value
RUN ERROR 1 000000001
RUN WARNING 2 000000010
RUN INFORMATION 4 000000100
Reserved 8 000001000
DEBUG ERROR 16 000010000
DEBUG WARNING 32 000100000
DEBUG INFORMATION 64 001000000
Reserved 128 010000000
LONGSQL LOG 256 100000000
NOTE
l After _LOG_LEVEL is set to a number, the number will be converted into a binary value,
with the last nine bits in use. If the most significant bits are insufficient, the value will be left
padded with 0. From the most significant bit to the least significant bit, each bit represents
LONGSQL LOG, reserved bit, DEBUG INFORMATION, DEBUG WARNING, DEBUG
ERROR, reserved bit, RUN INFORMATION, RUN WARNING, and RUN ERROR,
respectively. Bit 1 indicates on, and bit 0 indicates off.
For example, if the parameter is set to 16, its binary value will be 10000. After left-padding
with 0, the value becomes 000010000, indicating that DEBUG ERROR logs will be recorded.
l For details about the _LOG_LEVEL parameter, see section "Parameters" in GaussDB 100
V300R001C00 Database Reference (Standalone).
l The AUDIT_LEVEL parameter specifies the types of audit logs to be recorded. Table
3-9 lists parameter values for different audit log types.
Table 3-9 Settings of AUDIT_LEVEL (for audit logs)
Audit Log Type Decimal Value Binary Value
DDL 1 00000001
DCL 2 00000010
DML 4 00000100
PL 8 00001000
For functional syntax, set AUDIT_LEVEL based on its types to record required audit
logs.
– For EXP, IMP, LOAD, and DUMP syntax that consists of multiple types of SQL
statements, the value of AUDIT_LEVEL must cover the corresponding SQL types.
– For other syntax, the value of AUDIT_LEVEL must cover the DCL type.
NOTE
Connect to a database, and run the ALTER SYSTEM SET statement to change the parameter
values. The changes take effect immediately.
For example, to record RUN ERROR and DEBUG ERROR logs, change the value of
_LOG_LEVEL to 17 (00010001).
ALTER SYSTEM SET _LOG_LEVEL = 17;
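Likewise, values can be summed to enable several log types at once; a sketch (the AUDIT_LEVEL statement assumes the parameter is settable in the same way as _LOG_LEVEL):

```sql
-- RUN ERROR (1) + RUN WARNING (2) + RUN INFORMATION (4) = 7
ALTER SYSTEM SET _LOG_LEVEL = 7;
-- DDL (1) + DML (4) = 5
ALTER SYSTEM SET AUDIT_LEVEL = 5;
```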
– SIZE integer [ K | M | G | T | P | E ]
Specifies the file size. The default unit is byte. K indicates KB, M indicates MB, G
indicates GB, T indicates TB, P indicates PB, and E indicates EB.
Note that the total number of log files cannot exceed 256.
l Delete redo log files.
Delete one redo log file from a primary or standby node.
Syntax:
ALTER DATABASE DROP LOGFILE ( 'file_name' );
Run Logs
Run logs record various information during database running. If there is a fault during the
running, you can view the run log file zengine.rlog to locate the fault.
l Log directory
$GSDB_DATA/log/run/zengine.rlog by default
l Log format
For example:
UTC+8 2018-06-09 12:09:40.053|ZENGINE|00048|140045052368640|INFO>[SPACE]
succeed to create tablespace TABLESPACE1
– Time zone
– Event occurrence time
– Module
– Session ID
– Current thread ID
– Log level
– Log content
Debug Logs
l Log directory
$GSDB_DATA/log/debug/zengine.dlog by default
l Log format
For example:
UTC+8 2018-06-09 12:09:40.053|ZENGINE|00048|140045052368640|INFO>[SPACE]
succeed to create tablespace TABLESPACE1
– Time zone
– Event occurrence time
– Module
– Session ID
– Current thread ID
– Log level
– Log content
Audit Logs
Audit logs record different information based on the audit level set by users.
Do not place sensitive information (such as passwords) in SQL constant strings or in the dynamic SQL statements of stored procedures. GaussDB 100 cannot identify such information, and it may be printed in audit logs.
l Log directory
$GSDB_DATA/log/audit/zengine.aud by default
l Log format
For example:
UTC+8 2019-01-30 10:09:23.488
LENGTH: "117"
SESSIONID:[2] "48" STMTID:[0] "" USER:[6] "SYSDBA" HOST:[0] "" ACTION:[7]
"CONNECT" RETURNCODE:[1] "0" SQLTEXT:[0] ""
Operation Logs
Operation logs are generated when a database administrator uses zsql.py to perform
operations on the database. If GaussDB 100 is faulty, you can backtrack user operations on
the database and reproduce the fault based on the operation logs.
l Log directory
$GSDB_DATA/log/oper/zsql.olog by default
l Log format
For example:
2019-01-30 10:21:10.894|zsql|SELECT VALUE FROM DV_PARAMETERS WHERE NAME =
'TCP_INVITED_NODES';
Installation and Uninstallation Logs
l Log directory
– For installation by user root
– ~/zengineinstall.log for installation by a common user
– For uninstallation by user root
– ~/zengineuninstall.log for uninstallation by a common user
l Log format
For example:
[2019-01-31 11:17:50] Begin check parameters...
Alarm Logs
Alarm logs record database exceptions. If there is a deadlock in a database, a deadlock log
will be recorded, including deadlocks on tables and transactions. If the number of database
handles is insufficient, a corresponding log will be recorded. If running a customized job for
the first time fails, a failure log will be recorded.
l Log directory
$GSDB_DATA/log/zenith_alarm.log by default
l Log format
For example:
2019-02-02 09:24:55|1078919231|DeadLock|DN|zenith|zenith
zctl Logs
zctl logs record information about O&M operations performed by zctl.py.
l Log directory
$GSDB_DATA/log/zctl-yyyy-mm-dd_xxx.log by default
l Log format
For example:
[2019-02-28 09:35:14.414081][USER:root][HOST:10.190.92.132][zctl][LOG]:Zctl
start instance successful.
Startup Logs
Startup logs record output information upon database startup. To obtain complete O&M
information of GaussDB 100, examine both the zctl logs and startup logs.
l Log directory
$GSDB_DATA/log/zenithstatus.log by default
l Log format
For example:
starting instance(normal)
instance started
Trace Logs
Trace logs are used to record tracing information about related sessions when a fault occurs in
the database. Currently, they record only the session deadlock information.
l Log directory
$GSDB_DATA/trc/zengine_00003_xxxx.trc by default
In this directory, 00003 is the ID of a session, and the deadlock log is recorded in the
session whose SESSION_ID is 3. In addition, xxxx is the process ID. After the database
is restarted, a new trace log will be generated.
l Log format
For example:
**2019-03-17 11:37:32 DEADLOCK DETECTED*
The following deadlock is not a ZENITH error.It is due to user error in the
design of SQL.The following information may aid in determining the deadlock :
----------------------WAIT INFORMATION---------------------
[Transaction Deadlock]
session id: 54, wait session: 53, wait rowid: 0-0-9624
wait sql: update d1 set c2 = 'w' where c1 = 1
If wait_sql is empty, there are DDL statements being executed in the current session.
Currently, no DDL deadlocks can be recorded.
A trace log contains the following information:
– Time when the deadlock occurred
– Deadlock type
– Session IDs in the deadlock ring
– Wait session IDs in the deadlock ring
– SQL statements that cause the deadlock in the deadlock ring
– Cause of each deadlock
n Transaction deadlock: You can check the row that causes the deadlock based
on row_id.
n Table deadlock: You can check the table that causes the deadlock based on
table_id.
n ITL deadlock: You can check the page that causes the deadlock based on
page_id.
The maximum size of a zctl log file is 10 MB. If a zctl log exceeds 10 MB, it is
automatically dumped to a historical log file. A maximum of nine historical log files can be
retained. When the number of historical log files reaches the upper limit, the earliest files are
deleted first.
Controlling Transactions
The following describes transaction operations supported by GaussDB 100:
l Starting a transaction
GaussDB 100 provides no explicit statement to start a transaction. The first executable
SQL statement (except the login statement) indicates the start of a transaction.
You cannot run the BEGIN or START statement to start a transaction.
l Setting a transaction
GaussDB 100 provides the SET TRANSACTION statement to set a transaction.
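A minimal sketch of setting the current transaction (the exact ISOLATION LEVEL clause is an assumption based on common SET TRANSACTION syntax, not confirmed GaussDB 100 syntax):

```sql
-- Set the isolation level for the current transaction before its first SQL statement.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```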
l Committing a transaction
GaussDB 100 provides COMMIT to commit all operations of a transaction. In
GaussDB 100, transactions are not automatically committed by default. They are
committed only after COMMIT is explicitly executed. Otherwise, uncommitted changes
will be lost when the session ends.
After zsql is used to create a connection, you can run the SET command with
AUTOCOMMIT set to ON to enable automatic transaction committing. For details
about how to set this parameter, see Client Tools > zsql > Setting Parameters in
GaussDB 100 V300R001C00 Operation Guide to Tools (Standalone).
l Rolling back a transaction
GaussDB 100 provides ROLLBACK to roll back work done in the current transaction
and terminate the transaction. You can use SAVEPOINT to set a savepoint, which can
be selected for rollback. If you do not explicitly commit a transaction and the program
terminates abnormally, the database rolls back the last uncommitted work unit, rather
than the entire transaction. In addition, you can use RELEASE SAVEPOINT to destroy
unnecessary savepoints. The syntax is as follows:
RELEASE SAVEPOINT savepoint_name;
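The commit, savepoint, and rollback statements above can be combined as follows; a sketch in which the table t1 and the ROLLBACK TO SAVEPOINT form are illustrative assumptions:

```sql
INSERT INTO t1 VALUES (1);    -- first executable statement starts the transaction
SAVEPOINT sp1;                -- mark a point inside the transaction
INSERT INTO t1 VALUES (2);
ROLLBACK TO SAVEPOINT sp1;    -- undo only the work done after sp1
RELEASE SAVEPOINT sp1;        -- destroy the savepoint once it is no longer needed
COMMIT;                       -- persist the remaining work (the first insert)
```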
The isolation level cannot be changed after the first SQL statement SELECT, INSERT,
DELETE, UPDATE, FETCH, or COPY in the transaction is executed.
l READ COMMITTED: At this level, a transaction can access only committed data. This
is the default level.
Generally, the SELECT statement accesses a database snapshot taken when the query
begins. It can also access the data updates in its session, regardless of whether they have
been committed. In this case, different database snapshots may be available to two
consecutive SELECT statements in the same transaction because other transactions may
be committed while the first SELECT statement is executed.
At the READ COMMITTED level, the execution of each statement begins with a new
snapshot, which contains all the transactions that have been committed by the execution
time. Therefore, during a transaction, a statement can access the results of other
committed transactions. Pay attention to whether your application requires a single
statement to always access an absolutely consistent snapshot of the database.
Transaction isolation at this level meets the requirements of many applications, and is
fast and easy to use. However, applications performing complicated queries and updates
may require data that is more consistent than this level can provide.
l READ CURRENT COMMITTED: At this level, when the execution of a statement
starts, the SCN of the current system is obtained as the query SCN for the statement.
During the execution of the statement, the last committed result of the current access
record is visible. In GaussDB 100, READ CURRENT COMMITTED is a sub-type of
READ COMMITTED. However, it does not indicate a consistent read, and therefore
can be used when requirements for consistency are low.
l SERIALIZABLE: At this level, transactions are serialized sequentially for execution,
similar to the serializable level in snapshot isolation. This is the highest transaction
isolation level. When all statements in transactions are executed, the start SCN of the
current serializable transaction is obtained as the query SCN for the statements. The
visible result of a statement is determined at the beginning of the transaction and is not
affected by other transactions or changes.
Data Protection
The primary job of a backup administrator is preparing and monitoring data backups. Backup
is a copy that can be used to rebuild a database. Each physical backup is a file that stores
database information in another location (such as a disk, tape, or other offline storage media).
It contains data files, control files, and archive redo logs. GaussDB 100 supports physical
backup. Users can store backup data on disks or using NetBackup (NBU), an enterprise-level
backup and restoration suite. In addition, GaussDB 100 supports hot backup. Data can be
backed up without stopping the database service, ensuring uninterrupted 24/7 system
operation.
Data Archiving
Although data archiving is related to data protection, it has different purposes. Archive
backups are different from common backups and use different restoration policies. These
backups are usually archived on separate storage media and retained for a long time. For
example, a database administrator may need to retain the backup of a database until the
service season ends, but this backup is not part of the DR policy. After a backup is complete,
the media to which the backup is written is usually unavailable. This type of backup is called
archive backup.
Data Migration
In some cases, you need to back up the data in a database and move the data to another
location. Strictly, these jobs are not part of backup and restoration policies, but they need
database backups. For example, move an entire database from one platform to another.
Backup Modes
Data can be backed up in either of the following modes. Currently, GaussDB 100 supports
only physical backup.
l Physical backup
A database is backed up by copying physical files. Data is replicated from a primary
database to a standby database in the unit of disk block. Each backup covers the data of a
sector (512 bytes). A database can be restored using backup files, such as data files and
archive log files. Physical backup is usually used for full backup, quickly backing up and
restoring data with low costs if properly planned.
l Logical backup
Data of a primary database can be backed up to its standby database as follows: the
online and archive log files of the primary database are parsed to generate logical logs;
and then these are used to construct DML statements to be replayed on the standby
database.
Logical backup provides more flexibility than physical backup. Specifically, it allows for
either a heterogeneous database or a homogeneous database of a different version as a
standby database; and it can also back up database subsets by using a table-level, logical
replication switch.
Backup Levels
GaussDB 100 supports full backup and multi-level incremental backup.
l Full backup
Data at a specific time point is fully replicated regardless of the archive attribute of all
files. The archive attribute will be deleted during the backup. A full backup allows tapes
alone to be used for restoring lost data, greatly accelerating system or data restoration.
However, the tapes holding the same full backup data contain a large amount of
duplicate information. In addition, a large amount of data needs to be backed up each
time, which is time-consuming.
l Incremental backup
Incremental backup is a backup of changes since the previous backup.
If you use the BACKUP command to perform an incremental backup, the system backs
up all files that have changed since the previous backup (full backup, differential
incremental backup, or cumulative incremental backup). Such a backup contains no
duplicate data. The amount of data to be backed up is small and the backup takes a short time.
However, it is complicated to restore an incremental backup. You must have the tapes for
the previous full backup and all the incremental backups (once the tapes are lost or
damaged, the restoration will fail). In addition, you must restore data one by one from
the full backup to the incremental backups in sequence. This greatly prolongs the
restoration. Incremental backup can be performed only by running the BACKUP
statement.
GaussDB 100 supports two levels for incremental backup: level 0 and level 1. Level-0
backup is a baseline incremental backup, that is, a full backup. Level-1 backup is an
incremental backup for the previous level 0 backup or level 1 backup.
With differential incremental backup, all changed data blocks since the previous level 1
or level 0 backup are backed up. Differential incremental backup is the default
incremental backup mode.
With cumulative incremental backup, all data blocks changed since the previous level 0
backup are backed up.
If you perform an incremental backup, the system will back up archive log files in a
specified time range to a specified directory. Before this incremental backup, you must
perform a full backup. If there is only a small gap or little data update between the
current backup time and the last full backup time, you can back up only the archive log
files generated in this gap period.
Backup Media
GaussDB 100 supports backup to disk.
l Backup to disk
When data is backed up to a disk, multiple concurrent read and write threads are started
in the database to implement parallel backup and restoration, accelerating the operations.
The parallel backup and restoration supports a maximum of 8 pairs of concurrent read
and write threads. The default value is 4. A data file is usually backed up by a pair of
read and write threads. If the size of a data file exceeds the splitting threshold, the file
will be split into multiple pieces, which are then allocated to different read and write
threads. This accelerates the data file backup. During backup, the database automatically
determines the optimal data file splitting threshold and policy. You can manually specify
the threshold and policy as well. Data files that exceed the specified threshold are split,
but the number of data file pieces cannot exceed that of concurrent threads. If the
splitting threshold is manually specified, a larger value will result in a decrease in
parallel efficiency, and a smaller value will generate too many backup files. Therefore,
you are not advised to specify the splitting threshold manually.
Backup Compression
GaussDB 100 supports backup set compression.
If the backup storage space or the network backup bandwidth is limited, the backup set size
needs to be reduced. In this case, backup set compression is applied. GaussDB 100 supports
the zstd, lz4, and zlib compression algorithms for compressing backup sets. A compression
level can be specified for compressed backup. The value range is [1, 9]. A higher level leads
to a higher compression ratio but a lower speed. The default compression level is 1.
PITR
With Point-In-Time Recovery (PITR), a database can be restored to a specified time point
based on physical backup files and redo log files.
Backup sets can be used to restore data to the backup time point. If there are archive logs
generated after the backup, the logs can be replayed to restore data to a time point after the
backup.
Schema-specific Restoration
To facilitate data maintenance, you can store data of different services in a database by using
different schemas. Ensure that each schema uses a separate tablespace. If the data of only one
schema is damaged and needs to be repaired, you can use the ztrst tool to restore only the
data of the specific schema based on the full backup file, which accelerates the restoration.
Table Flashback
Table flashback is a fast data restoration solution. You can selectively query or cancel
misoperations. FLASHBACK TABLE restores a table to an earlier state in the event of
human or application errors. A table can be flashed back to a previous time point.
l When a backup of data files on a standby database completes, the connection between
the primary and standby databases must be normal. If the connection is abnormal, the
backup information cannot be recorded in the system catalog of the primary database,
and an error will be returned.
-- The standby database fails to send backup set information to the primary
database.
GS-00878: Send backup record to primary failed
-- The standby database fails while waiting for the primary database to
record the backup information.
GS-00879: wait primary record backup set failed
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba/database_123 indicates the system administrator created after the installation and
the administrator password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connected port.
NOTE
Level 0 indicates the baseline backup. Level 0 backup must be performed before level 1 backup is
performed for the first time. Level 1 backup is based on the previous level 1 or level 0 backup.
– Differential incremental backup: a backup based on the previous level 0 or level 1 backup
– Cumulative incremental backup: a backup based on the previous level 0 backup
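l Incremental backup (sketch): the INCREMENTAL LEVEL clause below is an assumption inferred from the level-0/level-1 terminology above, not confirmed GaussDB 100 syntax.

```sql
-- Level-0 (baseline) backup, required before the first level-1 backup
BACKUP DATABASE INCREMENTAL LEVEL 0 FORMAT '/home/gaussdba/data/backup_incr_bak';
-- Level-1 differential incremental backup based on the previous backup
BACKUP DATABASE INCREMENTAL LEVEL 1 FORMAT '/home/gaussdba/data/backup_incr_bak';
```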
l Compressed backup:
– Full compressed backup
BACKUP DATABASE FULL FORMAT '/home/gaussdba/data/backup_c_bak' AS
COMPRESSED BACKUPSET;
l Parallel backup:
– Database-determined splitting threshold
BACKUP DATABASE FULL FORMAT '/home/gaussdba/data/backup_paral_bak'
PARALLELISM 6;
NOTE
n PARALLELISM 8 indicates that eight concurrent threads are enabled for backup. The
value range is [1, 8], and the default value is 4.
n SECTION THRESHOLD 2G indicates that the splitting threshold is 2 GB. If the size
of a data file exceeds 2 GB, the data file will be split to improve the parallel backup
efficiency. The value range is [128 MB, 32 TB]. If the splitting threshold is manually
specified, a larger value will result in a decrease in parallel efficiency, and a smaller
value will generate too many backup files. Therefore, you are not advised to specify the
splitting threshold. If you do not specify the threshold, the database will automatically
calculate the optimal threshold.
l Backup excluding specified tablespaces:
– Tablespace spc1 excluded
BACKUP DATABASE FULL FORMAT '/home/gaussdba/data/exclude_bak1' EXCLUDE
FOR TABLESPACE spc1;
NOTE
n For incremental backup with specified tablespaces excluded, ensure that the current
incremental backup and its baseline incremental backup have the same tablespaces
excluded. Otherwise, the incremental restoration will lead to data exceptions.
n For backup, you cannot specify tablespaces used by the system. Otherwise, errors will be
reported.
n After restoration for backup with a specified tablespace excluded, you need to manually
delete the objects that use the tablespace and manually delete the tablespace. Otherwise,
the tablespace and related objects will become unavailable.
For details about the BACKUP statement, see GaussDB 100 V300R001C00 R&D
Documentation (Standalone).
----End
Related Concepts
GaussDB 100 backups can be restored in either synchronous or asynchronous mode. In
synchronous mode, the system returns execution results to a client only after restoration is
complete. In asynchronous mode, a database instance returns execution results to the client
after receiving the RESTORE DATABASE statement. To check whether the restoration is
successful, check the STATUS column in the DV_BACKUP_PROCESSES view.
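After issuing an asynchronous restoration, the view named above can be polled; a minimal sketch:

```sql
-- Check whether the restoration has completed.
SELECT STATUS FROM DV_BACKUP_PROCESSES;
```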
Procedure
Step 1 Log in to the GaussDB 100 server as an administrator.
sys/Changeme_123 indicates the username and password of user sys used for logging in to
the database. 192.168.0.1 indicates the IP address of the database server. 1888 indicates the
connected port.
l Asynchronous restoration:
RESTORE DATABASE FROM '/home/gaussdba/data/backup01.bak' DISCONNECT FROM
SESSION;
l Parallel restoration:
RESTORE DATABASE FROM '/home/gaussdba/data/backup01.bak' PARALLELISM 8;
NOTE
l If the backup media is a disk, concurrent-thread restoration will be supported. By default, the
number of concurrent threads is 4. You can also specify the PARALLELISM parameter in the
restoration command to customize the number. The parameter value range is [1, 8].
l If the backup media is not a disk, only single-thread restoration will be supported.
Step 7 If you need to demote the primary database after restoration to standby or cascaded standby,
run the following statements to change the database role and rebuild the database by referring
to HA Rebuilding:
l Demotion to standby:
ALTER DATABASE CONVERT TO PHYSICAL STANDBY MOUNT;
----End
Prerequisites
l The ztrst tool package GAUSSDB100-V300R001C00-RESTORE.tar.gz has been
obtained from the GAUSSDB100-V300R001C00-TOOLS.tar.gz package.
l Schemas and tablespaces are in one-to-one mapping and are unique. One tablespace
cannot be shared by multiple schemas, and one schema cannot use multiple tablespaces.
l The disk where data restoration will be performed has sufficient space.
l There are no tmp_data and export_data directories in the temporary data directory
specified by parameter -D.
l All parameters must be set correctly.
l The backup set specified by parameter -B must be a full backup and be valid.
l Before using ztrst to restore data, you need to recreate a schema and the corresponding
tablespace in the database instance.
l The database version used in file backup must be the same as the tool version.
Precautions
l If the ztrst tool and database are deployed on different servers and the error message
"GS-00331, Whitelist rejects connection for user "jack", ip "192.168.0.1", current date
"2019-05-20 10:26:34.583", please check zhba.conf or tcp valid node configuration" is
returned, client access authentication has been configured for the database whose data
will be restored. In this case, add the IP address of the server where the ztrst tool is
running to the user whitelist or IP address whitelist of the database. For details, see
Configuring Client Access Authentication. In addition, do not add that address to the
IP blacklist to ensure that the ztrst tool has permission to access the database.
l If the ztrst tool and database are deployed on different servers and the error message
"GS-00341, Failed to verify SSL certificate, reason self signed certificate in certificate
chain" is returned, SSL authentication has been configured for the server of the database
whose data will be restored. The corresponding private key, certificate, CA root
certificate, and client parameters for the SSL connection must be configured on the ztrst
client. For details, see Database Configuration > Configuring the Database
Connection > Establishing TCP/IP Connections in SSL Mode in GaussDB 100
V300R001C00 Security Hardening Guide (Standalone). After the configuration, you do
not need to restart the database. When configuring client parameters, you only need to
configure ZSQL_SSL_CERT, ZSQL_SSL_KEY, ZSQL_SSL_CA,
ZSQL_SSL_CRL (optional), and ZSQL_SSL_MODE. Do not configure
ZSQL_SSL_KEY_PASSWD. If ZSQL_SSL_KEY is configured as an encrypted
private key, you need to enter the encrypted password for the key in interactive mode
when using the ztrst tool to restore data. In addition, you need to specify -C
PARALLEL=1,DDL_PARALLEL=1 in the command line of the tool.
Procedure
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Log in to the database, and recreate a schema and the corresponding tablespace.
1. Log in to the GaussDB 100 database as an administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
Step 3 Log in to the server where the full physical backup file is located as user root.
Step 4 Create a user and its group for running the ztrst tool on the server, and configure their
permissions to be less than or equal to 0750.
groupadd toolgrp
useradd -g toolgrp -d /home/ztrstdba -m -s /bin/bash ztrstdba
total 12
drwx------ 2 ztrstdba toolgrp 4096 May 11 16:43 add-ons
drwx------ 2 ztrstdba toolgrp 4096 May 11 16:43 bin
drwx------ 2 ztrstdba toolgrp 4096 May 11 16:43 lib
2. (Optional) Add the bin path in the tool package to the environment variable PATH.
If you do not add the bin path in the tool package to the environment variable PATH, the
format for using the ztrst tool in the bin path of the tool will be ./ztrst. If you add the
path, the format will be ztrst or ./ztrst.
export PATH=$PATH:/home/ztrstdba/GAUSSDB100-V300R001C00-RESTORE/bin
During data restoration, you need to enter the passwords of user SYS and the schema in
interactive mode. For details about ztrst parameters, see Database Management Tools >
ztrst in GaussDB 100 V300R001C00 Operation Guide to Tools (Standalone).
----End
3.8.6 Flashback
GaussDB 100 supports table flashback. FLASHBACK TABLE restores a table to an earlier
state in the event of human or application errors. The time in the past to which the table can
be flashed back is dependent on the amount of undo data in the system. Note that GaussDB
100 cannot restore a table to an earlier state across any DDL operations that change the
structure of the table. Table flashback can be implemented by:
l Restoring table data to a specified time point or SCN point. This mode is suitable when
users have incorrectly adjusted the data.
l Restoring tables that have been deleted by mistake from recycle bins. This mode is
suitable when users have incorrectly executed DROP TABLE.
For details about the FLASHBACK TABLE syntax, see SQL Syntax Reference > SQL
Syntax > FLASHBACK TABLE in GaussDB 100 V300R001C00 R&D Documentation
(Standalone).
Prerequisites
l To run this statement, you must have the FLASHBACK permission. To flash back
tables of other users, the FLASH ANY TABLE permission is required.
l The recycle bin is open, that is, the value of the database parameter RECYCLEBIN is
TRUE.
Precautions
l By default, GaussDB 100 does not implicitly convert values to the DATE type; such
values must be converted by using functions.
l In the table flashback process, a full table scan is performed, which affects the
performance and blocks the modification of the table content.
l When a table is flashed back, the system reorganizes table content based on the SCN
point. In this case, row IDs may move.
l If the time point for table flashback is too early, an error like "snapshot too old" may be
reported upon undo log reuse.
l Flashback cannot be performed during database restart or rollback.
Procedure
Step 1 Log in to the GaussDB 100 database.
zsql
conn jack/database_123@192.168.0.1:1888
jack/database_123 indicates the username and password used for logging in to the database.
192.168.0.1 indicates the IP address of the database server. 1888 indicates the connected port.
The system supports only a query for the current SCN point. If there is a major operation
or operation on a large amount of data, record the current SCN point before the
operation. You can run the SELECT CURRENT_SCN FROM DV_DATABASE
statement to query for and record the current SCN point used by the database.
l Flashback on a table in the recycle bin after the DROP operation is performed:
FLASHBACK TABLE staffs TO BEFORE DROP;
NOTE
You can query the table data of a specified SCN point or at a specified time point.
-- Query data after flashback to a specified time point:
SELECT * FROM staffs AS OF TIMESTAMP TO_TIMESTAMP('2018-12-28 13:14:15',
'YYYY-MM-DD HH24:MI:SS');
-- Query data after flashback to an SCN point:
SELECT * FROM staffs AS OF SCN 10063180815101953;
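Putting the SCN workflow together (the TO SCN clause is an assumption based on the flashback modes described above, not confirmed syntax):

```sql
-- Record the current SCN before a risky operation:
SELECT CURRENT_SCN FROM DV_DATABASE;
-- If the operation goes wrong, flash the table back to the recorded SCN:
FLASHBACK TABLE staffs TO SCN 10063180815101953;
```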
----End
Related Concepts
GaussDB 100 supports the following three start modes:
- NOMOUNT: An instance is started, and no database is mounted.
In this mode, the system only creates an instance but does not mount the database. It
creates various memory structures and service processes for the instance but does not
open any data files. In this mode, you can access only the data dictionary views related to
the SGA memory, including DV_PARAMETERS and DV_SESSIONS. Information in
these views is obtained from the SGA memory and is irrelevant to the database.
- MOUNT: A database is mounted but not opened.
In this mode, the system mounts the database for the instance but keeps the database
closed. Mounting a database opens its control files, but data files and redo log
files cannot be read or written. Therefore, you cannot perform operations on the database
in this situation. In this mode, you can access only the data dictionary views related to
the control files, which include V$THREAD, DV_DATABASE, DV_DATA_FILES,
and DV_LOG_FILES. Information in these views is obtained from the control files.
- OPEN: A database is opened.
In this mode, the database is available for use and development, but maintenance
operations that require the NOMOUNT or MOUNT mode cannot be performed.
GaussDB 100 supports the following four stop modes:
- IMMEDIATE: Client requests are terminated, connected sessions are ended, unfinished
transactions are rolled back, checkpoint operations are performed, and primary processes
exit.
- ABORT: Client requests are terminated, connected sessions are ended, and primary
processes exit.
- NORMAL: Client requests are terminated, the system waits for connected sessions to
end normally (which may take a long time), checkpoint operations are performed, and
primary processes exit.
- KILL: A database instance is urgently stopped. This mode is intended only as a special
handling measure in abnormal scenarios. It can be used to stop a database instance in
ABORT mode regardless of whether password-free login is enabled. It is not
recommended unless there are special requirements.
Precautions
- If no start mode is specified, GaussDB 100 is started in OPEN mode.
- If no stop mode is specified, GaussDB 100 is stopped in NORMAL mode.
- At least 918 MB of memory is required for the SGA when a database instance is
started.
Starting a Database
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
- NOMOUNT: The database is not mounted. Users can communicate with the instance
but cannot access any database files.
python zctl.py -t start -m NOMOUNT
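The other start modes follow the same command pattern. A sketch, assuming zctl.py accepts the mode names described above and defaults to OPEN when -m is omitted:

```
# Start the instance and mount the database without opening it (MOUNT mode)
python zctl.py -t start -m MOUNT
# Start the database in the default OPEN mode
python zctl.py -t start
```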
----End
Stopping a Database
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Stop the database instance. You must switch to the ${GSDB_HOME}/bin directory
before the stop.
- NORMAL: The database is normally stopped.
python zctl.py -t stop
- IMMEDIATE: Transactions are rolled back, and the database instance is stopped.
python zctl.py -t stop -m IMMEDIATE
----End
4.2 HA Maintenance
GaussDB 100 supports primary and standby databases for HA DR purposes.
NOTE
If databases are deployed in HA DR mode, perform security hardening on both the primary and standby
databases.
NOTE
The protection modes take effect only on primary nodes, not on standby or cascaded
standby nodes. After a build operation, the protection mode of the primary node is
automatically synchronized to the standby nodes. However, if the protection mode of the
primary node is changed afterwards, it is not automatically synchronized to the standby
nodes. Although the protection mode is not used on a standby node, you are still advised
to keep it synchronized because the standby node may later be promoted to primary.
Maximum Protection
This mode ensures zero data loss.
In this mode, transaction logs are not only written into local log files, but also into the log
files of standby databases. Transactions are committed in a primary database only when data
is available in at least one standby database. If all standby databases are unavailable due to
faults (for example, network disconnection), services on the primary database will be blocked
to prevent data loss.
Only LGWR SYNC is supported for replication from a primary database to standby
databases.
This mode needs to be set when the database is in MOUNT mode. The command is as
follows:
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
- If the SYNC and AFFIRM attributes are specified for ARCHIVE_DEST_n (n is not 1),
transaction logs are committed on the primary database only after the logs have been
written on all standby databases configured with AFFIRM.
- If the SYNC and NAFFIRM attributes are specified for ARCHIVE_DEST_n (n is not
1), transaction logs are committed on the primary database without waiting for the
logs to be written on the standby databases.
Maximum Availability
This mode provides the highest data protection policy without affecting the availability of the
primary database.
Its implementation is similar to that of the maximum protection mode: a transaction log
must be written to the log file of at least one standby node before the local transaction is
committed. The difference is that, in maximum availability mode, if the standby
databases become unavailable due to a fault, services on the primary database are not
blocked. Although data is not lost in most cases, data consistency cannot be completely
ensured.
Only LGWR SYNC is supported for replication from a primary database to standby
databases.
This mode can be set regardless of the database mode. The command is as follows:
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
Maximum Performance
This mode provides the highest data protection policy without affecting the performance of
the primary database. It is the default mode.
This mode allows transactions to be committed at any time. The transaction logs of a primary
database must be written into at least one standby database, and this write can be
asynchronous. In ideal network conditions, this mode provides data protection similar to the
maximum availability mode and has slight impact on primary database performance.
Both LGWR SYNC and ASYNC are supported for replication from a primary database to
standby databases.
This mode can be set regardless of the database mode. The command is as follows:
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
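After switching modes, the effective protection mode can be checked on the primary node through the HA-related columns of DV_DATABASE. The column name PROTECTION_MODE below is an assumption; verify it against the view description for your version:

```sql
-- Check the current protection mode (PROTECTION_MODE column name assumed)
SELECT PROTECTION_MODE FROM DV_DATABASE;
```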
4.2.2 HA Rebuilding
Currently, only full HA rebuilding is supported. In the following scenarios, the standby or
cascaded standby database needs to be rebuilt:
Procedure
Step 1 Log in to the server of the standby or cascaded standby database to be rebuilt as the OS user
installing the GaussDB 100 database.
Step 2 Go to the directory where the zctl.py script is stored.
cd /home/gaussdba/app/bin
gaussdba/database_123 indicates the system administrator created after the installation and
the administrator password. 192.168.0.1 indicates the IP address of the database server. 1888
indicates the connected port.
DATABASE_ROLE DATABASE_CONDITION
------------------------------ ------------------
PHYSICAL_STANDBY NORMAL
1 rows fetched.
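The role and status output above is typically produced by querying DV_DATABASE; a sketch with column names inferred from the output header:

```sql
-- Check the role and status of the standby database (columns inferred from output)
SELECT DATABASE_ROLE, DATABASE_CONDITION FROM DV_DATABASE;
```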
----End
DV_DATABASE
The following table describes HA-related columns in this view, their types, and meanings.
DV_ARCHIVE_DEST_STATUS
The following table describes columns in this view, their types, and meanings.
DV_STANDBYS
The following table describes columns in this view, their types, and meanings.
DV_ARCHIVE_GAPS
The following table describes columns in this view, their types, and meanings.
DV_LOG_FILES
This view is used to display information such as the log file size and name. The following
table describes columns in this view, their types, and meanings.
DV_HA_SYNC_INFO
This view is used to display information about log synchronization (log sending). The
following table describes the columns, types, and meanings.
Scenario
In GaussDB 100, logical replication parses redo logs that contain logical log information to
obtain data changes of database tables, and then replays the changes in a target data source. In
this way, logical replication replicates GaussDB 100 data changes to other homogeneous or
heterogeneous data sources in quasi real time. Logical replication is more flexible than
physical replication, which has strong dependency on the physical formats of logs. Logical
replication can implement GaussDB 100 cross-version replication and GaussDB 100
replication to other heterogeneous databases (such as Oracle databases). It also provides
customization support when the structures of source and target database tables are
inconsistent. Logical replication can be used for incremental data backup between primary
and standby databases, data synchronization between different service systems, and online
data migration during system upgrade.
When service data needs to be synchronized between a GaussDB 100 database and other
commercial databases:
Prerequisites
- JDK 1.8 or later has been installed in the operating environment.
- The installation package of the logical replication tool, GAUSSDB100-V300R001C00-
LOGICREP.tar.gz, has been obtained from GAUSSDB100-V300R001C00-
TOOLS.tar.gz.
Precautions
- Currently, the logical replication service is preconfigured to support the following types
of target data sources: Oracle, GaussDB 100, and Kafka. Other types of data sources can
be supported by developing corresponding plug-ins.
- For primary/standby replication, you need to install, configure, and start the logical
replication service on both the primary and standby databases. In this case, only the
logical replication service on the primary database is in working state.
- The table data of user SYS does not support logical replication.
- Logical replication needs to read archive and online logs. Therefore, you need to enable
the archive logging function of GaussDB 100 and disable the automatic archive log
deletion function of GaussDB 100 to prevent archive logs from being deleted by mistake.
- In standalone deployment, logical replication does not provide the alarm function. You
do not need to pay attention to the error information "cannot get $DM_AGENT_HOME
env or DM_AGENT_HOME path is invalid." in the run logs of logical replication,
because this error is caused by the absence of the DM component and does not affect the
functionality of logical replication.
- The logical replication service works only on the data updated after the global logical
replication switch and table-level logical replication switch are enabled. Source table
data written before logical replication is enabled cannot be replicated to the target
database.
- If a target database is GaussDB 100 and "errMsg=the connecting IP is invalid according
to IP white list" is recorded in the zlogcatcher.rlog file, client access authentication has
been configured for the target database. In this case, add the IP address of the server
where the source database is located to the user whitelist or IP address whitelist of the
target database by referring to Configuring Client Access Authentication. In addition,
the IP address must not be added to the IP address blacklist, ensuring that the logical
replication tool has permission to access the target database.
- You need to create a database user LREP for the logical replication service, create the
logical replication progress table required in the replication process as user LREP, and
grant the required permissions to user LREP. For details about the minimum
permissions required for user LREP, see Table 4-7. User LREP is only used to query
the metadata of the tables to be replicated and to record the logical replication
progress. Note that user LREP is not associated with the users of the tables
(where data replication is needed) specified in the repconf_db.xml file.
Procedure
- 127.0.0.1 indicates a local database login. For remote logins, enter the IP address of the
server where the target database is located.
- 1888 is the number of the database listening port.
Step 1 On the primary and standby nodes, decompress and install the logical replication tool.
The following operations 1 to 8 in this step must be performed on both the primary and
standby nodes.
1. Log in to the primary GaussDB 100 node as user root.
2. Create a directory for storing the logical replication tool as planned.
mkdir -p /opt/software/tools
total 7940
drwxr-xr-x. 3 root root 4096 Apr 26 15:13 GAUSSDB100-V300R001C00-LOGICREP
-rw-r--r--. 1 root root 8113682 Apr 29 16:24 GAUSSDB100-V300R001C00-
LOGICREP.tar.gz
total 4
drwx------. 5 root root 4096 Apr 26 15:13 logicrep
7. Change the owner and owner group of the logicrep directory to the database installation
user gaussdba and its group dbgrp, respectively.
chown -R gaussdba:dbgrp /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/
logicrep
8. (Optional) If the target database is an Oracle database, download the ojdbc driver of the
corresponding version from the Oracle official website and upload the driver to the
logicrep/lib directory.
NOTE
– Ensure that the permission for the ojdbc driver is 500. If the permission is not 500, run the
following command as user root to change the permission:
chmod 500 File name
– Ensure that the owner of the ojdbc driver is the database installation user gaussdba and the
owner group is dbgrp. If the owner and owner group are incorrect, run the following
command as user root to change them:
chown gaussdba:dbgrp /opt/software/tools/GAUSSDB100-V300R001C00-
LOGICREP/logicrep/lib/File name
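The permission check described in the note above can be scripted. A minimal sketch using a temporary stand-in file (the real target would be the ojdbc driver under logicrep/lib; the file name here is hypothetical, and stat -c is GNU-specific):

```shell
# Create a stand-in for the ojdbc driver file (hypothetical name)
f=$(mktemp)
# Restrict the permission to 500 (r-x for the owner only), as required above
chmod 500 "$f"
# Verify the resulting octal mode
stat -c %a "$f"   # prints 500
rm -f "$f"
```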
total 6772
lrwxrwxrwx 1 gaussdba gaussdba 51 May 14 09:24
2. Open the init.properties file and press i to enter the insert mode.
vi init.properties
#unit: MB
#value range:[100,500]
transaction.buffer.memory.size=100
#unit: MB
#value range:[300,800]
logentry.buffer.size=500
#path_to_keystore_file(for zenith)
javax.net.ssl.keyStore=
#path_to_trustStore_file(for oracle)
javax.net.ssl.trustStore=
– transaction.buffer.memory.size
Specifies the size of the transaction buffer. If a transaction occupies a large amount
of space, increase the value of this parameter to ensure that the transaction is
replicated correctly.
– logentry.buffer.size
Specifies the size of the logentry buffer. Use the default configuration in the initial
phase. Later, you can adjust the value of this parameter by observing the HWM for
buffering performance logs.
– checkpoint.interval
Specifies the interval for storing the replay progress. The default configuration is
recommended. If this parameter is set to an excessively small value, the progress is
stored frequently, affecting performance.
– checkpoint.location
Specifies the location for storing the parsing and replay progress of physical logs. It
stores the location of the physical log where the logical replication service stopped
parsing and the latest transaction SCN that is committed to the target database. The
value can only be sourcedb, indicating that the progress information of the logical
replication service is stored in the LOGICREP_PROGRESS table of the logical
replication service user created in the source database.
– checkpoint.table.version
Specifies the version of the logical replication progress table stored in the source
database. In an old version of the table, there are ID, COMMITTED_SCN,
LOGPOINT, and UPDATE_TIME columns. In an upgraded version of the table,
the COMMITTED_TX_TIME column is added. This version is identified by v2.
For a logical replication service that has been launched and running, you do not
need to specify this parameter, or you can set this parameter to v1, indicating the
old version of the logical replication progress table in the source database.
For a logical replication service that is newly launched, this parameter in the
parameter file of the logical replication installation package has been set to v2 by
default. The logical replication progress table in the source database will be created
based on the structure of the new version of the table.
– logfile.endian.type
Specifies whether data is stored in memory in big-endian or little-endian mode. The
value must match the byte order of the machine that generates the log files required
by the logical replication tool. In big-endian mode, the high-order byte of a value is
stored at the low address in memory, and the low-order byte at the high address. In
little-endian mode, the high-order byte is stored at the high address, and the
low-order byte at the low address.
– javax.net.ssl.keyStore
Specifies the path of the keystore file, which must contain the file name. This
parameter is used only when useSSL is set to true and the client needs to be
authenticated.
– javax.net.ssl.keyStorePassword
Specifies the ciphertext of the keystore password. This parameter is used only when
useSSL is set to true and the client needs to be authenticated. For details about how
to generate the ciphertext, see Step 4.
– javax.net.ssl.trustStore
Specifies the path of the truststore file, which must contain the file name. This
parameter is used only when SSL is configured for the Oracle database.
– javax.net.ssl.trustStorePassword
Specifies the ciphertext of the truststore password. This parameter is used only
when SSL is configured for the Oracle database. For details about how to generate
the ciphertext, see Step 4.
4. Press Esc and enter :wq to save the settings and exit.
Step 3 Create and configure users for the logical replication service on the source and target
databases.
l Create and configure a user for the logical replication service on the source database
GaussDB 100.
Perform this operation only on the primary GaussDB 100 node. The created user will be
synchronized to the standby node in real time.
a. Log in to the primary GaussDB 100 node as user gaussdba.
b. Log in to the database as a database administrator.
zsql gaussdba/database_123@127.0.0.1:1888
c. Run the following SQL statements to create a logical replication user LREP, a
logical replication progress table LREP.LOGICREP_PROGRESS, and grant the
CONNECT and RESOURCE roles as well as the permission for reading related
system catalogs and views to the user LREP:
CREATE USER LREP IDENTIFIED BY database_234;
GRANT CONNECT, RESOURCE TO LREP;
GRANT SELECT ON SYS.SYS_TABLES TO LREP;
GRANT SELECT ON SYS.SYS_COLUMNS TO LREP;
GRANT SELECT ON SYS.SYS_USERS TO LREP;
GRANT SELECT ON SYS.SYS_CONSTRAINT_DEFS TO LREP;
GRANT SELECT ON SYS.SYS_LOGIC_REPL TO LREP;
GRANT SELECT ON SYS.DV_DATABASE TO LREP;
GRANT SELECT ON SYS.DV_LOG_FILES TO LREP;
GRANT SELECT ON SYS.DV_ARCHIVED_LOGS TO LREP;
CREATE TABLE LREP.LOGICREP_PROGRESS
(
ID VARCHAR(128),
COMMITTED_TX_SCN BIGINT,
COMMITTED_TX_TIME TIMESTAMP,
LOGPOINT VARCHAR(128),
UPDATE_TIME TIMESTAMP
);
CREATE UNIQUE INDEX IX_LREP_PROGRESS ON LREP.LOGICREP_PROGRESS(ID);
l Create a user for the logical replication service on the target database and grant
permissions to the user.
– When the target database is a GaussDB 100 database:
i. Log in to the server where GaussDB 100 is deployed as the OS user who
installs the GaussDB 100 database.
ii. Connect to the database through the port and create a user.
zsql gaussdba/database_123@127.0.0.1:1888 -c "CREATE USER logicuser
IDENTIFIED BY database_123;"
iii. Grant the permissions to add, delete, and modify any target tables of logical
replication to the user.
zsql gaussdba/database_123@127.0.0.1:1888 -c "GRANT UPDATE ON
user_name.table_name TO logicuser;"
zsql gaussdba/database_123@127.0.0.1:1888 -c "GRANT INSERT ON
user_name.table_name TO logicuser;"
zsql gaussdba/database_123@127.0.0.1:1888 -c "GRANT DELETE ON
user_name.table_name TO logicuser;"
– When the target database is an Oracle database, run the following statements in the
target database to grant the session permission and the permissions to add, delete,
and modify the target tables of logical replication to the user:
GRANT CREATE SESSION TO logicuser;
GRANT UPDATE ON user_name.table_name TO logicuser;
GRANT INSERT ON user_name.table_name TO logicuser;
GRANT DELETE ON user_name.table_name TO logicuser;
Step 4 On the primary node, generate the password ciphertexts for the logical replication users in the
source and target databases.
This operation is performed by using the zencrypt tool on the source database GaussDB 100.
1. Go to the directory storing the key configuration file of logical replication.
cd /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/conf/sec
[gaussdba@plat sec]$ ll
total 8
-rw-------. 1 gaussdba dbgrp 38 Apr 26 15:13 key1.properties
-rw-------. 1 gaussdba dbgrp 100 Apr 26 15:13 key2.properties
The key1.properties file is used to store the random key factor, and the key2.properties
file is used to store the working key.
2. Run the vi key1.properties command to view the random key factor stored in the
key1.properties file.
3. Run the vi key2.properties command to view the working key stored in the
key2.properties file.
4. Go to the bin directory under the GaussDB 100 installation directory.
cd $GSDB_HOME/bin
5. Use the zencrypt tool to generate the password ciphertexts for the logical replication
users in the source and target databases.
When the message "Please enter password to encrypt:" is displayed, entering the
password of user LREP created for logical replication in the source database generates
the password ciphertext of this user, which needs to be configured in the ds.passwd
parameter for the source database in Step 5; and entering the password of user logicuser
created for logical replication in the target database generates the password ciphertext of
this user, which needs to be configured in the ds.passwd parameter for the target
database in Step 5.
./zencrypt -e AES256 -f lCHMm1WvDKU97uVQDd8+ew== -k g/FMnXWyHkp+8TKMa8qm5j
+Ojvuy5hHV/p3WloMhNl2DoUT6Dl90Tom5DKP+3J2M6s/jI0mMdUknmUYcOHQN+g==
Please enter password to encrypt:
*********
Please input password again:
*********
Cipher: jFB1xNaKybjU5kAD3gdJeJvdvEdjj0c87L1NBsSWZHA=
[srcdb]
ds.type=gauss
ds.url=jdbc:zenith:@127.0.0.1:1611?useSSL=false
ds.username=lrep
ds.passwd=8W6qr0rX2PwQR3Uf3g/bLcu++haPqbKWXpW7M9nNlAI=
initial.size=5
min.idle=1
max.idle=10
max.active=50
max.wait=100000
[dstdb]
ds.type=oracle
ds.url=jdbc:oracle:thin:@10.185.240.79:1521:ora11g
ds.username=usrSample
ds.passwd=8W6qr0rX2PwQR3Uf3g/bLcu++haPqbKWXpW7M9nNlAI=
initial.size=10
max.idle=20
min.idle=5
max.active=50
max.wait=100000
[dstkafka]
ds.type=kafka
ds.url=10.185.240.79:9092
compression.type=none
max.block.ms=60000
retries=3
batch.size=1048576
linger.ms=1
buffer.memory=33554432
max.request.size=33554432
request.timeout.ms=10000
optimize.batch.send.buffer=5242880
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config.username=userxxx
#kafka sasl encrypted pwd
sasl.jaas.config.password=passxxx
ssl.truststore.location=/home/kafka.client.truststore.jks
#kafka truststore encrypted pwd
ssl.truststore.password=passxxx
ssl.keystore.location=/home/kafka.client.keystore.jks
#kafka keystore encrypted pwd
ssl.keystore.password=passxxx
#kafka key encrypted pwd
ssl.key.password=passxxx
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.protocol=SSL
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
This example includes parameters for the three data source types supported by logical
replication, and all of them are mandatory. The parameters are as follows:
– [srcdb]/[dstdb]/[dstkafka]: Specifies the section name, indicating the name of the
data source in the section and corresponding to srcName and dstName in the
repconf_db.xml file. Specifically, srcdb indicates the name of the source data
source, and dstdb and dstkafka indicate the name of the target data source.
– ds.type: Specifies the data source type. Currently, the logical replication service
supports the following values: gauss, oracle, and kafka. gauss indicates that the
target data source is GaussDB 100, oracle indicates that the target data source is
Oracle, and kafka indicates that the target data source is a Kafka message queue.
– ds.url: Specifies the URL of the database.
  - When ds.type is set to oracle and SSL is not used, the format of ds.url is
ds.url=jdbc:oracle:thin:@192.168.0.2:1521:ora11g. If SSL is used, the format of
ds.url is
ds.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)
(HOST=192.168.0.2)(PORT=2484))
(CONNECT_DATA=(SERVICE_NAME=orcl))). In this case, you need to
configure the SSL certificates on the JDBC client.
  - If ds.type is set to gauss, the format of ds.url is
ds.url=jdbc:zenith:@127.0.0.1:1888?useSSL=false. If useSSL is false,
logical replication is faster, but there are security risks. If useSSL is
true, SSL bidirectional authentication is used. For both of the settings,
SSL certificates need to be configured on the JDBC client. For details, see the
descriptions of "Configuring the SSL Certificate for the JDBC Client" in
Database Development Guide > Development Based on JDBC >
Connecting to a Database in GaussDB 100 V300R001C00 R&D
Documentation (Standalone). If unidirectional authentication is used (clients
do not authenticate servers), the SSL certificates do not need to be configured
on the JDBC client.
  - If ds.type is set to kafka, the format of ds.url is
ds.url=192.168.0.2:9092, where 9092 is the port number provided by
the Kafka server for the client.
– If ds.type is set to oracle or gauss, the following parameters need to be configured
in addition to ds.url:
  - ds.username: Specifies the name of a logical replication user. The user created
for the logical replication service in the source database is used to read related
system catalogs and views to replay SQL statements and query logical
replication progress tables. The user created for the logical replication service
in the target database is used to access the tables used for SQL replay.
  - ds.passwd: Specifies the password ciphertext of a user created for the logical
replication service.
  - initial.size: Specifies the initial number of connections in the connection pool.
  - max.idle: Specifies the maximum number of idle connections in the
connection pool.
Step 6 (Optional) If ds.type is set to kafka, download the Kafka 2.0 package from the Kafka official
website and copy the kafka-clients-2.0.0.jar, slf4j-api-1.7.25.jar, and slf4j-
log4j12-1.7.25.jar packages to the lib directory (/opt/software/tools/GAUSSDB100-
V300R001C00-LOGICREP/logicrep/lib) of the logical replication installation package.
Step 7 (Optional) If ds.type is set to kafka, define relationships between topics and tables as well as
the name of the partitioner class to be used in the topic_table.properties file on the primary
node.
1. Go to the topicconf directory.
cd /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/conf/topicconf
2. Open the topic_table.properties file and press i to enter the insert mode.
vi topic_table.properties
3. Define relationships between topics and tables as well as the name of the partitioner
class.
The following configuration is only an example. An asterisk (*) in the example is a
wildcard. In the [partitioner] section, the default partitioner class for logical
replication is used; a customized partitioner is also supported.
topic1, topic2, and topic3 are the names of topic sections. In each topic section, you
need to configure the number of partitions and the tables whose data will be sent to the
topic.
#topic name and table name mapping relation
[partitioner]
class.name=com.huawei.gauss.logicrep.replayer.kafka.TopicPartitioner
[topic1]
partition.num=3
table.name=user1.t1,usr2.t2,user1.t3
[topic2]
partition.num=5
table.name=user2.t2,user1.t3,user3.t4,use4.*
[topic3]
partition.num=5
table.name=*.*
Step 8 On the primary node, define a replication relationship in the repconf_db.xml file.
1. Go to the repconf directory.
cd /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/conf/repconf
2. Open the repconf_db.xml file and press i to enter the insert mode.
vi repconf_db.xml
<replicationConfig>
<datasource>
<datasourceInfo srcName="srcdb" dstName="dstdb"/>
</datasource>
<filteredUser>
<userInfo userName="sample"/>
<userInfo userName="lrep"/>
</filteredUser>
<modelMapping>
<tableMapping srcTable="t1" srcSchema="usrSample"
dstTable="t1" dstSchema="usrSample">
<column srcColumn="f_varchar1"
dstColumn="f_varchar1" isKey="true" />
<column srcColumn="f_varchar2"
dstColumn="f_varchar2" />
<column srcColumn="f_varchar3"
dstColumn="f_varchar3" />
<column srcColumn="f_varchar4"
dstColumn="f_varchar4" />
<column srcColumn="f_date1"
dstColumn="f_date1" />
<column srcColumn="f_date2"
dstColumn="f_date2" />
<column srcColumn="f_varchar5"
dstColumn="f_varchar5" />
<column srcColumn="f_number1"
dstColumn="f_number1" />
<column srcColumn="f_varchar6"
dstColumn="f_varchar6" />
</tableMapping>
</modelMapping>
</replicationConfig>
– datasource
Specifies the names of source and target databases for replication. The
datasource.properties file must have definitions for the names.
– filteredUser
Specifies users whose data needs to be filtered out. During logical replication,
data generated by these users is not replicated. Multiple users can be configured.
The users must be defined in the source database. If a configured user does not exist,
warning logs are generated, and logical replication proceeds.
It is recommended that user LREP, created for the logical replication service in the
source database, be configured in the filtered user list. In this way, logs generated
when user LREP performs operations on tables can be filtered out, improving
logical replication performance.
– modelMapping
Defines a model mapping for the replication relationship. It consists of multiple
tableMapping tags. If the source and target tables in the source and target
databases have the same structure, owner, and name, you do not need to configure a
mapping relationship for the tables. Enabling the table-level logical replication
switch will directly replicate table data. If the source and target tables in the source
and target databases have different owners or names, configure only
"<tableMapping dstTable="orders2018" dstSchema="zuser"
srcTable="orders" srcSchema="zuser">" when configuring a table mapping
relationship. Column mapping relationships do not need to be configured in this
case.
– tableMapping
Defines a table mapping relationship in the model mapping, including the source
table name, source schema name, target table name, and target schema name. It
consists of multiple column mapping relationships.
If the columns in a table do not need to be all replicated, you only need to define the
mapping relationships for the columns to be replicated. If a table has a large number
of columns and only several columns do not need to be replicated, you only need to
configure the mapping relationships of the columns to be ignored in the
<ignoreUpdatecolumns></ignoreUpdatecolumns> tag or
<ignoreInsertcolumns></ignoreInsertcolumns> tag in the model mapping.
Among the <ignoreUpdatecolumns></ignoreUpdatecolumns>,
<ignoreInsertcolumns></ignoreInsertcolumns>, and <column> tags, the
<ignoreInsertcolumns></ignoreInsertcolumns> tag has the highest priority.
– column
Defines a column mapping relationship between a source table and its target table,
including the source column name, source column type, target column name, and
target column type. It is recommended that the source column name be the same as
the target column name and the source column type be the same as the target
column type.
For primary key columns, also set isKey to true. If isKey is not set, the value false
will be used.
4. Press Esc and enter :wq to save the settings and exit.
Step 9 Replicate the modified conf and lib folders from the primary node to the standby node.
1. Log in to the standby GaussDB 100 node as user gaussdba.
root@192.168.0.1's password:
datasource.xml                               100%  667   0.7KB/s   00:00
init.properties                              100%  760   0.7KB/s   00:00
key2.properties                              100%  100   0.1KB/s   00:00
key1.properties                              100%   38   0.0KB/s   00:00
repconf_db.xml                               100%  850   0.8KB/s   00:00
log4j.xml                                    100% 3449   3.4KB/s   00:00
scp -r root@192.168.0.1:/opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/lib /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/
Step 10 Enable the table-level logical replication switch and global logical replication switch.
The global logical replication switch must be enabled on both the primary and standby nodes.
The table-level logical replication switch needs to be enabled only on the primary node. The
table-level logical replication switch on the standby node is enabled synchronously with that
on the primary node.
-- Run the following command on the primary and standby nodes to enable the
global logical replication switch:
zsql gaussdba/database_123@127.0.0.1:1888 -c "ALTER DATABASE
ENABLE_LOGIC_REPLICATION ON;"
-- Enable the table-level logical replication switch.
zsql gaussdba/database_123@127.0.0.1:1888 -c "ALTER TABLE
[schema_name.]table_name ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;"
NOTE
l The table to be replicated must have a primary key. Otherwise, the error message "GS-01213, error
message = 'object index on table TRAINING does not exist'" will be displayed when the table-level
logical replication switch is enabled.
l To ensure data consistency between source and target tables, you need to set, in the target table, the
primary key attribute for the column corresponding to the primary key column of the source table.
l Currently, logical replication supports only primary key–based replication. Therefore, when the
table-level logical replication switch is enabled, COLUMNS can be set only to PRIMARY KEY.
l Data in a table is replicated only when both the table-level and global logical replication switches are
enabled.
You can log in to the primary and standby nodes as user gaussdba to check the status of the
global logical replication switch there. The procedure is as follows:
1. Log in to the database.
zsql gaussdba/database_123@127.0.0.1:1888
-- The status of the global logical replication switch on the primary node is
as follows:
LREP_POINT LREP_MODE
-------------------- --------------------
0-2-422-50b ON
1 rows fetched.
-- The status of the global logical replication switch on the standby node is
as follows:
LREP_POINT LREP_MODE
-------------------- --------------------
0-2-f3c9-a549 ON
1 rows fetched.
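The LREP_POINT and LREP_MODE values shown above can be fetched with a query of the following form. The view name DV_DATABASE is an assumption here, inferred from the switchover status query used later in this guide; verify it against the product's system view reference.

```sql
-- Check the global logical replication switch (LREP_MODE) and the current
-- replication point (LREP_POINT); the view name is an assumption.
SELECT LREP_POINT, LREP_MODE FROM DV_DATABASE;
```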
You can log in to the primary and standby nodes as user gaussdba, and run the following
command to check the status of the table-level logical replication switch:
zsql gaussdba/database_123@127.0.0.1:1888 -c "SELECT l.status FROM
sys.sys_logic_repl l, sys.sys_tables t WHERE t.name='ORDERS' AND t.id=l.table#;"
SQL>
STATUS
------------
1
1 rows fetched.
NOTE
If the table-level logical replication switch has not been enabled for the table, the query
returns "0 rows fetched."
Step 11 On the primary and standby nodes, start the logical replication service.
1. Go to the logicrep directory.
cd /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep
Running the startup.sh and shutdown.sh scripts requires a database installation account.
-- Start the logical replication service on the primary node:
sh startup.sh -n logicrep -a 0-2-422-50b
Step 12 (Optional) On the primary and standby nodes, check the replication progress of the logical
replication service.
zsql gaussdba/database_123@127.0.0.1:1888 -c "SELECT id, committed_scn, logpoint,
update_time FROM LREP.logicrep_progress;"
SQL>
ID        COMMITTED_SCN     LOGPOINT               UPDATE_TIME
--------- ----------------- ---------------------- --------------------------
LOGICREP  391439205040129   0-2-1ddc00-1ddce4-c27  2019-01-16 17:22:56.470883
1 rows fetched.
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 3 View logs in the log directory to check whether the logical replication service is running
properly.
Logs of the logical replication service include alarm logs, run logs, audit logs, and
performance logs.
View logs in the alarm directory to check whether there are critical errors, such as a thread or
process exit. If there are, fix the errors by referring to other log information and restart the
service.
View run logs in the run directory to check whether there are runtime errors.
View audit logs in the audit directory to check whether there are SQL execution errors.
View performance logs in the perf directory to check whether there are performance problems
and whether the startup parameters need to be adjusted.
If the logical replication service is running properly, the check is complete. If the logical
replication service is abnormal, go to Step 4.
Step 4 On the primary and standby nodes, stop the logical replication service.
If the logical replication service cannot be stopped, run the following command to forcibly
stop it:
sh shutdown.sh -n logicrep -f
Step 5 On the primary and standby nodes, restart the logical replication service.
Running the startup.sh and shutdown.sh scripts requires a database installation account.
When restarting the logical replication service, you can continue replication from the
progress saved when the service was last stopped.
sh startup.sh -n logicrep
When restarting a logical replication service, you can also forcibly ignore the previous
replication progress.
sh startup.sh -n logicrep -a 0-2-1ddc00-1ddce4-c27 -c
The -c parameter forcibly discards the previous replication progress, and replication
restarts from the start point specified by the -a parameter. When this method is used to
restart the logical replication service, all logical logs from the start point will be parsed
and replicated to the target database. If some of the replicated data already exists in the
target database, conflicts may occur. Therefore, this method applies only to certain
exception handling scenarios. For example, if forcible re-replication is required, ensure
that the conflicting data in the target database has been deleted or that the conflict does
not affect services.
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 On the primary node, modify the startup parameters of the logical replication service process.
1. Go to the conf directory.
cd /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/conf
2. Open the init.properties file and press i to enter the insert mode.
vi init.properties
All modifications take effect only after the logical replication service is restarted. The
conf/init.properties file template is as follows (italic indicates an example value, and an
actual value is needed):
#unit: number of replayer threads
#value range:[1,32]
replayer.thread.number=1
#unit: MB
#value range:[100,500]
transaction.buffer.memory.size=100
#unit: MB
#value range:[300,800]
logentry.buffer.size=500
#path_to_keystore_file(for zenith)
javax.net.ssl.keyStore=
#path_to_trustStore_file(for oracle)
javax.net.ssl.trustStore=
– replayer.class
Specifies the replay class in use. When you customize a plug-in based on the
software development kit (SDK) of logical replication, set this parameter to the
name of the replay class that is implemented.
– dispatch.queue.size
Specifies the size of the work queue. It is evaluated based on the size of transactions
that can be concurrently executed. When the requirements for performance are low,
use the default configuration.
– transaction.buffer.size
Specifies the maximum number of transactions that can be stored in the transaction
buffer. When the buffer is full, the system waits for a replay thread to process
buffered transactions. Use the default configuration in the initial phase. Later, you
can adjust the value of this parameter by observing the high-water mark (HWM) in
the buffering performance logs.
– transaction.buffer.memory.size
Specifies the maximum memory space occupied by a single transaction. When the
space occupied by a transaction exceeds this value, logical replication fails. In this
case, increase the value of this parameter so that transactions occupying large
space are replicated correctly.
– logentry.buffer.size
Specifies the size of the logentry buffer. Use the default configuration in the initial
phase. Later, you can adjust the value of this parameter by observing the HWM for
buffering performance logs.
– checkpoint.interval
Specifies an interval for storing the replay progress. The default configuration is
recommended. If this parameter is set too small, the progress may be frequently
stored, affecting performance.
– checkpoint.location
Specifies the location for storing the parsing and replay progress of physical logs. It
stores the location of the physical log where the logical replication service stopped
parsing and the latest transaction SCN that is committed to the target database. The
value can only be sourcedb, indicating that the progress information of the logical
replication service is stored in the LOGICREP_PROGRESS table of the logical
replication service user created in the source database.
– checkpoint.table.version
Specifies the version of the logical replication progress table stored in the source
database. In an old version of the table, there are ID, COMMITTED_SCN,
LOGPOINT, and UPDATE_TIME columns. In an upgraded version of the table,
the COMMITTED_TX_TIME column is added. This version is identified by v2.
For a logical replication service that is already deployed and running, you do not
need to specify this parameter; alternatively, you can set it to v1, indicating the
old version of the logical replication progress table in the source database.
For a logical replication service that is newly launched, this parameter in the
parameter file of the logical replication installation package has been set to v2 by
default. The logical replication progress table in the source database will be created
based on the structure of the new version of the table.
– logfile.endian.type
Specifies whether data is stored in memory in big endian or little endian mode on
the machine that generates the log files required by the logical replication tool. In
big endian mode, the high byte of data is stored at the low memory address and the
low byte at the high address. In little endian mode, the high byte of data is stored at
the high address and the low byte at the low address.
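To find out which mode applies, you can check the native byte order of the host that generates the log files. This small standalone check is not part of the logicrep tool; it only uses the standard java.nio API:

```java
import java.nio.ByteOrder;

// Prints the native byte order of the machine this program runs on.
// Run it on the host that generates the redo log files consumed by
// the logical replication tool.
public class EndianCheck {
    public static void main(String[] args) {
        ByteOrder order = ByteOrder.nativeOrder();
        // x86/x86_64 hosts are little endian; some RISC platforms are big endian
        System.out.println(order == ByteOrder.BIG_ENDIAN
                ? "BIG_ENDIAN" : "LITTLE_ENDIAN");
    }
}
```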
– javax.net.ssl.keyStore
Specifies the path of the keystore file, which must contain the file name. This
parameter is used only when useSSL is set to true and the client needs to be
authenticated.
– javax.net.ssl.keyStorePassword
Specifies the ciphertext of the keystore password. This parameter is used only when
useSSL is set to true and the client needs to be authenticated. For details about how
to generate the ciphertext, see Step 4.
– javax.net.ssl.trustStore
Specifies the path of the truststore file, which must contain the file name. This
parameter is used only when SSL is configured for the Oracle database.
– javax.net.ssl.trustStorePassword
Specifies the ciphertext of the truststore password. This parameter is used only
when SSL is configured for the Oracle database. For details about how to generate
the ciphertext, see Step 4.
4. Press Esc and enter :wq to save the settings and exit.
Step 4 Replicate the modified conf folder from the primary node to the standby node.
Assume that the IP address of the primary node is 192.168.0.1.
scp -r root@192.168.0.1:/opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/conf /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/
root@192.168.0.1's password:
datasource.xml                               100%  667   0.7KB/s   00:00
init.properties                              100%  760   0.7KB/s   00:00
key2.properties                              100%  100   0.1KB/s   00:00
key1.properties                              100%   38   0.0KB/s   00:00
repconf_db.xml                               100%  850   0.8KB/s   00:00
log4j.xml                                    100% 3449   3.4KB/s   00:00
Step 5 On the primary and standby nodes, stop the logical replication service.
Running the shutdown.sh script requires a database installation account.
sh shutdown.sh -n logicrep
If the logical replication service cannot be stopped, run the following command to forcibly
stop it:
sh shutdown.sh -n logicrep -f
Step 6 On the primary and standby nodes, restart the logical replication service.
Running the startup.sh script requires a database installation account.
When restarting the logical replication service, you can continue replication from the
progress saved when the service was last stopped.
sh startup.sh -n logicrep
When restarting a logical replication service, you can also forcibly ignore the previous
replication progress.
sh startup.sh -n logicrep -a 0-2-1ddc00-1ddce4-c27 -c
The -c parameter forcibly discards the previous replication progress, and replication
restarts from the start point specified by the -a parameter.
When this method is used to restart the logical replication service, all logical logs from
the start point will be parsed and replicated to the target database. If some of the
replicated data already exists in the target database, conflicts may occur. Therefore, this
method applies only to certain exception handling scenarios. For example, if forcible
re-replication is required, ensure that the conflicting data in the target database has been
deleted or that the conflict does not affect services.
----End
2. Open the datasource.properties file and press i to enter the insert mode.
vi datasource.properties
[srcdb]
ds.type=gauss
ds.url=jdbc:zenith:@127.0.0.1:1611?useSSL=false
ds.username=lrep
ds.passwd=8W6qr0rX2PwQR3Uf3g/bLcu++haPqbKWXpW7M9nNlAI=
initial.size=5
min.idle=1
max.idle=10
max.active=50
max.wait=100000
[dstdb]
ds.type=oracle
ds.url=jdbc:oracle:thin:@10.185.240.79:1521:ora11g
ds.username=usrSample
ds.passwd=8W6qr0rX2PwQR3Uf3g/bLcu++haPqbKWXpW7M9nNlAI=
initial.size=10
max.idle=20
min.idle=5
max.active=50
max.wait=100000
[dstkafka]
ds.type=kafka
ds.url=10.185.240.79:9092
compression.type=none
max.block.ms=60000
retries=3
batch.size=1048576
linger.ms=1
buffer.memory=33554432
max.request.size=33554432
request.timeout.ms=10000
optimize.batch.send.buffer=5242880
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config.username=userxxx
#kafka sasl encrypted pwd
sasl.jaas.config.password=passxxx
ssl.truststore.location=/home/kafka.client.truststore.jks
#kafka truststore encrypted pwd
ssl.truststore.password=passxxx
ssl.keystore.location=/home/kafka.client.keystore.jks
#kafka keystore encrypted pwd
ssl.keystore.password=passxxx
#kafka key encrypted pwd
ssl.key.password=passxxx
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.protocol=SSL
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
This example includes parameters for the three data source types supported by logical
replication, and all of them are mandatory. The parameters are as follows:
– [srcdb]/[dstdb]/[dstkafka]: Specifies the section name, indicating the name of the
data source in the section and corresponding to srcName and dstName in the
repconf_db.xml file. Specifically, srcdb indicates the name of the source data
source, and dstdb and dstkafka indicate the name of the target data source.
– ds.type: Specifies the data source type. Currently, the logical replication service
supports the following values: gauss, oracle, and kafka. gauss indicates a
GaussDB 100 data source, oracle indicates an Oracle data source, and kafka
indicates a Kafka message queue.
– ds.url: Specifies the URL of the database.
n When ds.type is set to oracle and SSL is not used, the format of ds.url is
ds.url=jdbc:oracle:thin:@192.168.0.2:1521:ora11g. If SSL is used, the
format is
ds.url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)
(HOST=192.168.0.2)(PORT=2484))
Step 3 (Optional) If ds.url is modified to use SSL (from useSSL=false to useSSL=true) in the
datasource.properties file, configure the certificate name and password in the conf/
init.properties file and restart the logical replication service for the modification to take
effect. If bidirectional authentication is used, configure SSL certificates on the JDBC client.
For details, see the descriptions of "Configuring the SSL Certificate for the JDBC Client" in
Database Development Guide > Development Based on JDBC > Connecting to a
2. Open the topic_table.properties file and press i to enter the insert mode.
vi topic_table.properties
3. Define relationships between topics and tables as well as the name of the partitioner
class.
The following configuration is only an example. An asterisk (*) in the example is a
wildcard. In the [partitioner] section, the default partitioner class provided by
logical replication is used; a customized partitioner is also supported. topic1,
topic2, and topic3 are the names of topic sections. In each topic section, you need
to configure the number of partitions and the tables whose data will be sent to that
topic.
#topic name and table name mapping relation
[partitioner]
class.name=com.huawei.gauss.logicrep.replayer.kafka.TopicPartitioner
[topic1]
partition.num=3
table.name=user1.t1,usr2.t2,user1.t3
[topic2]
partition.num=5
table.name=user2.t2,user1.t3,user3.t4,user4.*
[topic3]
partition.num=5
table.name=*.*
root@192.168.0.1's password:
datasource.xml                               100%  667   0.7KB/s   00:00
init.properties                              100%  760   0.7KB/s   00:00
key2.properties                              100%  100   0.1KB/s   00:00
key1.properties                              100%   38   0.0KB/s   00:00
repconf_db.xml                               100%  850   0.8KB/s   00:00
log4j.xml                                    100% 3449   3.4KB/s   00:00
Step 8 On the primary and standby nodes, stop the logical replication service.
Running the shutdown.sh script requires a database installation account.
sh shutdown.sh -n logicrep
If the logical replication service cannot be stopped, run the following command to forcibly
stop it:
sh shutdown.sh -n logicrep -f
Step 9 On the primary and standby nodes, restart the logical replication service.
Running the startup.sh script requires a database installation account.
When restarting the logical replication service, you can continue replication from the
progress saved when the service was last stopped.
sh startup.sh -n logicrep
When restarting a logical replication service, you can also forcibly ignore the previous
replication progress.
sh startup.sh -n logicrep -a 0-2-1ddc00-1ddce4-c27 -c
The -c parameter forcibly discards the previous replication progress, and replication
restarts from the start point specified by the -a parameter.
When this method is used to restart the logical replication service, all logical logs from
the start point will be parsed and replicated to the target database. If some of the
replicated data already exists in the target database, conflicts may occur. Therefore, this
method applies only to certain exception handling scenarios. For example, if forcible
re-replication is required, ensure that the conflicting data in the target database has been
deleted or that the conflict does not affect services.
----End
2. Open the repconf_db.xml file and press i to enter the insert mode.
vi repconf_db.xml
<datasource>
<datasourceInfo srcName="srcdb" dstName="dstdb"/>
</datasource>
<filteredUser>
<userInfo userName="sample"/>
<userInfo userName="lrep"/>
</filteredUser>
<modelMapping>
<tableMapping srcTable="t1" srcSchema="usrSample"
dstTable="t1" dstSchema="usrSample">
<column srcColumn="f_varchar1"
dstColumn="f_varchar1" isKey="true" />
<column srcColumn="f_varchar2"
dstColumn="f_varchar2" />
<column srcColumn="f_varchar3"
dstColumn="f_varchar3" />
<column srcColumn="f_varchar4"
dstColumn="f_varchar4" />
<column srcColumn="f_date1"
dstColumn="f_date1" />
<column srcColumn="f_date2"
dstColumn="f_date2" />
<column srcColumn="f_varchar5"
dstColumn="f_varchar5" />
<column srcColumn="f_number1"
dstColumn="f_number1" />
<column srcColumn="f_varchar6"
dstColumn="f_varchar6" />
</tableMapping>
</modelMapping>
</replicationConfig>
The users must be defined in the source database. If a user does not exist,
warning logs are generated, and logical replication proceeds.
It is recommended that user LREP created for the logical replication service in the
source database be configured in the filtered user list. In this way, logs generated
when user LREP performs operations on tables can be filtered out, improving the
logical replication performance.
– modelMapping
Defines a model mapping for the replication relationship. It consists of multiple
tableMapping tags. If the source and target tables in the source and target
databases have the same structure, owner, and name, you do not need to configure a
mapping relationship for the tables. Enabling the table-level logical replication
switch will directly replicate table data. If the source and target tables in the source
and target databases have different owners or names, configure only
"<tableMapping dstTable="orders2018" dstSchema="zuser"
srcTable="orders" srcSchema="zuser">" when configuring a table mapping
relationship. Column mapping relationships do not need to be configured in this
case.
– tableMapping
Defines a table mapping relationship in the model mapping, including the source
table name, source schema name, target table name, and target schema name. It
consists of multiple column mapping relationships.
If not all columns in a table need to be replicated, define mapping relationships
only for the columns to be replicated. If a table has a large number of columns
and only a few of them do not need to be replicated, configure the mapping
relationships of only the columns to be ignored in the
<ignoreUpdatecolumns></ignoreUpdatecolumns> tag or
<ignoreInsertcolumns></ignoreInsertcolumns> tag in the model mapping.
Among the <ignoreUpdatecolumns></ignoreUpdatecolumns>,
<ignoreInsertcolumns></ignoreInsertcolumns>, and <column> tags, the
<ignoreInsertcolumns></ignoreInsertcolumns> tag has the highest priority.
– column
Defines a column mapping relationship between a source table and its target table,
including the source column name, source column type, target column name, and
target column type. It is recommended that the source column name be the same as
the target column name and the source column type be the same as the target
column type.
For primary key columns, also set isKey to true. If isKey is not set, the value false
will be used.
4. Press Esc and enter :wq to save the settings and exit.
Step 3 (Optional) When you add a table mapping relationship for a newly created table in the
repconf_db.xml file (that is, add a <tableMapping></tableMapping> tag), you need to
enable the table-level logical replication switch for that table on the primary node.
-- Enable the table-level logical replication switch.
zsql gaussdba/database_123@127.0.0.1:1888 -c "ALTER TABLE
[schema_name.]table_name ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;"
scp -r root@192.168.0.1:/opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/conf /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/
root@192.168.0.1's password:
datasource.xml                               100%  667   0.7KB/s   00:00
init.properties                              100%  760   0.7KB/s   00:00
key2.properties                              100%  100   0.1KB/s   00:00
key1.properties                              100%   38   0.0KB/s   00:00
repconf_db.xml                               100%  850   0.8KB/s   00:00
log4j.xml                                    100% 3449   3.4KB/s   00:00
Step 6 On the primary and standby nodes, stop the logical replication service.
If the logical replication service cannot be stopped, run the following command to forcibly
stop it:
sh shutdown.sh -n logicrep -f
Step 7 On the primary and standby nodes, restart the logical replication service.
When restarting the logical replication service, you can continue replication from the
progress saved when the service was last stopped.
sh startup.sh -n logicrep
When restarting a logical replication service, you can also forcibly ignore the previous
replication progress.
sh startup.sh -n logicrep -a 0-2-1ddc00-1ddce4-c27 -c
The -c parameter forcibly discards the previous replication progress, and replication
restarts from the start point specified by the -a parameter.
When this method is used to restart the logical replication service, all logical logs from
the start point will be parsed and replicated to the target database. If some of the
replicated data already exists in the target database, conflicts may occur. Therefore, this
method applies only to certain exception handling scenarios. For example, if forcible
re-replication is required, ensure that the conflicting data in the target database has been
deleted or that the conflict does not affect services.
----End
Such a plug-in can be invoked by the logical replication service to meet the customization
requirements of users.
When plug-in code needs to read the attribute value of a data source, it can first obtain
the ConfigInfo object through getConfigInfo() provided by the LogicRepContext
interface, then obtain the DataSourceInfo object of the data source, and finally read the
corresponding attribute value by using the attribute name over the Get interface.
3. After the development is complete, put the plug-in JAR package in the plugin directory
of logical replication.
The plugin path is /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/
logicrep/plugin.
2. Open the init.properties file and press i to enter the insert mode.
vi init.properties
3. Configure the replayer.class attribute in the init.properties file to specify the class of
the replication plug-in.
#specify which kind of replayer is used
replayer.class=com.huawei.gauss.logicrep.replayer.SampleReplayer
5. Press Esc and enter :wq to save the settings and exit.
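The attribute-lookup flow described earlier (obtain the ConfigInfo object through getConfigInfo(), then the DataSourceInfo object of a data source, then read an attribute by name) can be sketched as follows. The real interfaces ship with the logical replication SDK; the minimal stand-in types below are assumptions written only to make the call sequence concrete and runnable, not the SDK's actual signatures.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the SDK's per-data-source attribute holder.
class DataSourceInfo {
    private final Map<String, String> attrs = new HashMap<>();
    void put(String name, String value) { attrs.put(name, value); }
    String get(String name) { return attrs.get(name); } // the "Get interface"
}

// Stand-in for the SDK's configuration object.
class ConfigInfo {
    private final Map<String, DataSourceInfo> sources = new HashMap<>();
    void add(String name, DataSourceInfo ds) { sources.put(name, ds); }
    DataSourceInfo getDataSourceInfo(String name) { return sources.get(name); }
}

// Stand-in for the SDK's replication context.
interface LogicRepContext {
    ConfigInfo getConfigInfo();
}

public class SampleReplayer {
    // Reads ds.url of the source data source through the context.
    static String sourceUrl(LogicRepContext ctx) {
        DataSourceInfo src = ctx.getConfigInfo().getDataSourceInfo("srcdb");
        return src.get("ds.url");
    }

    public static void main(String[] args) {
        DataSourceInfo src = new DataSourceInfo();
        src.put("ds.url", "jdbc:zenith:@127.0.0.1:1611?useSSL=false");
        ConfigInfo cfg = new ConfigInfo();
        cfg.add("srcdb", src);
        LogicRepContext ctx = () -> cfg; // single-method interface, lambda works
        System.out.println(sourceUrl(ctx)); // prints jdbc:zenith:@127.0.0.1:1611?useSSL=false
    }
}
```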
2. Open the datasource.properties file and press i to enter the insert mode.
vi datasource.properties
3. Add a section to configure the replication data source parameters for the plug-in.
When naming customized parameters, follow the naming convention of the existing parameters.
# properties of source/destination datasources defined here
# note:
# 1. section name - datasource name
# 2. mandatory properties:
# ds.type - gauss/oracle/kafka, needed for logicrep
[srcdb]
ds.type=gauss
ds.url=jdbc:zenith:@127.0.0.1:1611?useSSL=false
ds.username=lrep
ds.passwd=8W6qr0rX2PwQR3Uf3g/bLcu++haPqbKWXpW7M9nNlAI=
initial.size=5
min.idle=1
max.idle=10
max.active=50
max.wait=100000
[dstdb]
ds.type=oracle
ds.url=jdbc:oracle:thin:@10.185.240.79:1521:ora11g
ds.username=usrSample
ds.passwd=8W6qr0rX2PwQR3Uf3g/bLcu++haPqbKWXpW7M9nNlAI=
initial.size=10
max.idle=20
min.idle=5
max.active=50
max.wait=100000
[dstkafka]
ds.type=kafka
ds.url=10.185.240.79:9092
compression.type=none
max.block.ms=60000
retries=3
batch.size=1048576
linger.ms=1
buffer.memory=33554432
max.request.size=33554432
request.timeout.ms=10000
optimize.batch.send.buffer=5242880
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config.username=userxxx
#kafka sasl encrypted pwd
sasl.jaas.config.password=passxxx
ssl.truststore.location=/home/kafka.client.truststore.jks
#kafka truststore encrypted pwd
ssl.truststore.password=passxxx
ssl.keystore.location=/home/kafka.client.keystore.jks
#kafka keystore encrypted pwd
ssl.keystore.password=passxxx
#kafka key encrypted pwd
ssl.key.password=passxxx
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.protocol=SSL
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
4. Press Esc and enter :wq to save the settings and exit.
Step 4 Replicate the modified conf, lib, and plugin folders from the primary node to the standby
node.
1. Log in to the standby GaussDB 100 node as user gaussdba.
2. Replicate the conf, lib, and plugin folders to the standby node.
Assume that the IP address of the primary node is 192.168.0.1.
scp -r root@192.168.0.1:/opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/conf /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/
scp -r root@192.168.0.1:/opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/lib /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/
scp -r root@192.168.0.1:/opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/plugin /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep/
Step 5 On the primary and standby nodes, start the logical replication service.
The logical replication service loads the replication plug-in by using the replayer.class
attribute and invokes the user-developed replication logic.
1. Go to the logicrep directory.
cd /opt/software/tools/GAUSSDB100-V300R001C00-LOGICREP/logicrep
Running the startup.sh and shutdown.sh scripts requires a database installation account.
-- Start the logical replication service on the primary node:
sh startup.sh -n logicrep -a 0-2-422-50b
4.2.5 HA Switchovers
HA switchovers include switchovers and failovers.
Switchovers
1. Application scenarios of switchovers are as follows:
l To upgrade databases, stop the standby server, upgrade the database on the standby
server, and start the standby server. When the primary and standby relationship becomes
stable, switch over the primary and standby servers. Then, upgrade the database on the
original primary server. This way, services remain running properly and no data is lost,
providing high-reliability database services.
l A complex application system involves the database software and other application
processes. If these application processes become abnormal, a switchover is performed
to prevent data loss, and the primary and standby databases must be switched over
accordingly.
2. The normal switchover procedure is as follows:
Step 1 Query for the role and status of a node.
SQL> SELECT DATABASE_ROLE, DATABASE_CONDITION, SWITCHOVER_STATUS FROM DV_DATABASE;
1 rows fetched.
If the query result shows that the node is a standby node in the normal state, the
switchover can be performed on that node.
Step 2 Issue the switchover command.
SQL> ALTER DATABASE SWITCHOVER;
Succeed.
Switchovers can be performed only on standby nodes, not on the primary node or
cascaded standby nodes.
----End
Failovers
Failovers are designed for scenarios where a primary node encounters a fault which cannot be
rectified in a short time. In primary/standby/cascaded standby deployment, if the primary
node and all standby nodes are faulty, you can perform a failover on a cascaded standby node
to promote it to primary. The procedure is as follows:
Step 1 Query the standby node status.
If the query result shows that the node is a standby node in the disconnected state, the
failover can be performed on that node.
Step 2 Issue the failover command.
SQL> ALTER DATABASE FAILOVER;
Succeed.
----End
2. Locating method
– The firewall is enabled on the primary and standby nodes, and the primary node
cannot connect to the standby node.
– The links in zengine.ini are incorrectly configured on the primary and standby
nodes. As a result, the primary node cannot find the standby node.
b. Clear the archive directory and data directory on the standby node.
c. Start the standby node in NOMOUNT mode.
zengine nomount
DATA BUFFER   DV_GMA   Size of the buffer for recently accessed data. The value depends
                       on the system memory. For example, if the memory size is 8 GB,
                       you are advised to set this parameter to a value no greater than
                       128 MB.
SHARED POOL   DV_GMA   Total size of the space shared by XPG pools, lock pools, SQL
                       pools, and DC pools. The value depends on the system memory. For
                       example, if the memory size is 8 GB, you are advised to set this
                       parameter to a value no greater than 128 MB.
NAME VALUE
---------------------------------------- ----------------------------------------
data buffer 128.00M
shared pool 128.03M
large pool 8.00M
log buffer 4.00M
dbwr buffer 8.00M
lgwr buffer 2.00M
transaction pool 18.19M
temporary buffer 32.00M
8 rows fetched.
ID  TS#  STATUS  TYPE  FILE_NAME                            BYTES        AUTO_EXTEND  AUTO_EXTEND_SIZE  MAX_SIZE
--  ---  ------  ----  -----------------------------------  -----------  -----------  ----------------  -------------
0   0    ONLINE  FILE  /home/gaussdba/data/data/system      1073741824   FALSE        0                 8796093022208
1   1    ONLINE  FILE  /home/gaussdba/data/data/temp1_01    167772160    TRUE         33554432          8796093022208
2   1    ONLINE  FILE  /home/gaussdba/data/data/temp1_02    167772160    TRUE         33554432          8796093022208
3   2    ONLINE  FILE  /home/gaussdba/data/data/undo        1073741824   FALSE        0                 34359738368
4   3    ONLINE  FILE  /home/gaussdba/data/data/user1       1073741824   TRUE         33554432          8796093022208
5   3    ONLINE  FILE  /home/gaussdba/data/data/user2       1073741824   TRUE         33554432          8796093022208
6   3    ONLINE  FILE  /home/gaussdba/data/data/user3       1073741824   TRUE         33554432          8796093022208
7   3    ONLINE  FILE  /home/gaussdba/data/data/user4       1073741824   TRUE         33554432          8796093022208
8   3    ONLINE  FILE  /home/gaussdba/data/data/user5       1073741824   TRUE         33554432          8796093022208
9   4    ONLINE  FILE  /home/gaussdba/data/data/temp2_01    167772160    TRUE         33554432          8796093022208
10  4    ONLINE  FILE  /home/gaussdba/data/data/temp2_02    167772160    TRUE         33554432          8796093022208
11  5    ONLINE  FILE  /home/gaussdba/data/data/temp2_undo  1073741824   FALSE        0                 8796093022208
12 rows fetched.
ID  STATUS    TYPE    FILE_NAME                      BYTES       WRITE_POS  FREE_SIZE   RESET_ID  ASN  BLOCK_SIZE  CURRENT_POINT
--  --------  ------  -----------------------------  ----------  ---------  ----------  --------  ---  ----------  --------------
 0  CURRENT   ONLINE  /home/gaussdba/data/data/log1  2147483648  14676992   2132806656  0         1    512         0-1/28666/2333
 1  INACTIVE  ONLINE  /home/gaussdba/data/data/log2  2147483648  512        2147483136  0         0    512
 2  INACTIVE  ONLINE  /home/gaussdba/data/data/log3  2147483648  512        2147483136  0         0    512
 3  INACTIVE  ONLINE  /home/gaussdba/data/data/log4  2147483648  512        2147483136  0         0    512
 4  INACTIVE  ONLINE  /home/gaussdba/data/data/
6 rows fetched.
ID  NAME        TEMPORARY  IN_MEMORY  AUTO_PURGE  EXTENT_SIZE  SEGMENT_COUNT  FILE_COUNT  STATUS
--  ----------  ---------  ---------  ----------  -----------  -------------  ----------  ------
 0  SYSTEM      FALSE      FALSE      FALSE       8            143            1           ONLINE
 1  TEMP        TRUE       FALSE      FALSE       16           0              2           ONLINE
 2  UNDO        FALSE      FALSE      FALSE       1            0              1           ONLINE
 3  USERS       FALSE      FALSE      TRUE        8            0              5           ONLINE
 4  TEMP2       TRUE       FALSE      TRUE        8            0              2           ONLINE
 5  TEMP2_UNDO  TRUE       FALSE      FALSE       1            0              1           ONLINE
6 rows fetched.
Scenario
Databases provide the ANALYZE statement and the advanced DBMS_STATS package to
collect statistics. For example, collect statistics about tables and their indexes.
If the data of a table changes greatly, the earlier statistics may become inaccurate. In this case,
you need to collect statistics about the table again. The database provides the
ADM_TAB_MODIFICATIONS and MY_TAB_MODIFICATIONS views to monitor the
number of changes (including insertion, deletion, and update) in a table. You can query the
views to determine whether to collect statistics again.
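For example, a sketch of checking the change counters before deciding whether to re-collect statistics (the column set of the view may vary by release, so the query selects all columns):

```sql
-- View the numbers of inserted, updated, and deleted rows recorded
-- for the current user's tables since statistics were last collected.
SELECT * FROM MY_TAB_MODIFICATIONS;
```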
Precautions
l ANALYZE and DBMS_STATS.GATHER_TABLE_STATS can be used to collect
statistics about temporary tables, but the results will all be 0.
l ANALYZE cannot collect statistics about LOB columns (they are skipped) or
columns of the LONG or object type.
l ANALYZE and DBMS_STATS.GATHER_TABLE_STATS can also be used to collect
statistics about partitioned tables.
l The DBMS_STATS advanced package can be used to collect statistics about all columns
or only indexed columns. However, ANALYZE cannot be used to collect index statistics
independently. If ANALYZE is used to collect table statistics, the index statistics will be
collected together.
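As a sketch, both collection methods can be invoked as follows. The schema name HR and table name staffs are illustrative, and the DBMS_STATS parameter list is assumed to follow the common (owner, table) form; check the package reference for the exact signature in this release.

```sql
-- Collect table statistics; index statistics are collected together.
ANALYZE TABLE staffs;

-- Collect statistics through the advanced package (assumed signature).
EXEC DBMS_STATS.GATHER_TABLE_STATS('HR', 'STAFFS');
```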
Table Statistics
You can collect the following statistics about a table. The table statistics are displayed in the
data dictionary views MY_TABLES, DB_TABLES, and ADM_TABLES.
Index Statistics
The database collects statistics about indexes. Statistics about common indexes are
displayed in the data dictionary views SYS_INDEXES, SYS_INDEX_PARTS,
MY_INDEXES, DB_INDEXES, and ADM_INDEXES during statistics calculation or
estimation. For details, see section "Data Dictionary and Views" in
GaussDB 100 V300R001C00 Database Reference (Standalone).
l * Depth of an index from its root block to its leaf block (BLEVEL)
l Number of leaf blocks (LEAF_BLOCKS)
l Number of distinct index keys (DISTINCT_KEYS)
l Average number of leaf blocks for each index key
(AVG_LEAF_BLOCKS_PER_KEY)
l Average number of data blocks for each table index key
(AVG_DATA_BLOCKS_PER_KEY)
l Clustering factor, a sorting order of row and index values (CLUFAC)
l Number of distinct key values of the composite index on the first two columns among
more than two columns (COMB_COLS_2_NDV)
l Number of distinct key values of the composite index on the first three columns among
more than three columns (COMB_COLS_3_NDV)
l Number of distinct key values of the composite index on the first four columns among
more than four columns (COMB_COLS_4_NDV)
Note: The asterisk (*) indicates that a precise value is required.
Column Statistics
A histogram is a special type of column statistics. It provides more detailed information about
the data distribution in a table column. A histogram sorts values into buckets. A bucket
contains values within a certain range.
Based on the number of distinct values (NDV) and data distribution, the database chooses the
type of histogram to create. The types of histograms are as follows:
l Frequency histogram: In a frequency histogram, each distinct column value corresponds
to a single bucket of the histogram. Because each value has its own dedicated bucket,
some buckets may contain many rows, whereas others contain few.
l Height-balanced histogram: In a height-balanced histogram, column values are divided
into buckets so that each bucket contains approximately the same number of rows.
Column statistics are collected together with table statistics. The database determines the type
of a histogram based on the number of values in each column. If the NDV is less than 254, the
database will create a frequency histogram. Otherwise, the database will create a height-
balanced histogram.
Scenario
If the data of a table changes greatly, the earlier statistics may become inaccurate. In this case,
running an SQL statement on this table may use an inaccurate execution plan determined
based on costs, resulting in low execution efficiency.
Databases allow table statistics to be automatically collected. Users can execute customized
statistics collection jobs periodically or according to the change rate of table data. In this way,
the statistics are collected more accurately, increasing SQL statement execution efficiency.
Two scheduled statistics collection jobs are preconfigured in the database. They collect
statistics only when the data change rate of a table is greater than or equal to 10%. The
default sampling rate is 10%.
Run SELECT * FROM DB_JOBS; as user SYS to view the two scheduled jobs.
The two scheduled jobs for collecting statistics are disabled by default. To start them, make
the following preparations:
l Invoke the DBMS_JOB.RUN interface to start a scheduled job. The value of jobid can
be obtained by SELECT * FROM SYS_JOBS;.
exec DBMS_JOB.RUN(jobid);
commit;
Check the alarm log file. Check whether new alarms are recorded to the alarm log file
zenith_alarm.log.
5 Database Usage
This chapter describes how to manage database objects, import and export data, manage
transactions, manage logs, and manage database security.
NOTE
If user SYS locally logs in to a database in password-free mode, the login will not be limited by the user
whitelist, IP address whitelist, or IP address blacklist.
If user SYS logs in to a database using an encrypted password, the login will be limited by the IP
address blacklist.
Precautions
l Before enabling IP address whitelist checking, ensure that at least one of
TCP_INVITED_NODES and TCP_EXCLUDED_NODES is set. Otherwise, the error
message "GS-00254: For invited and excluded nodes is both empty, ip whitelist function
can't be enabled" will be displayed.
l User SYS can only locally log in to a database.
Prerequisites
Before configuring a user whitelist, IP address blacklist, or IP address whitelist, ensure that
LSNR_ADDR and LSNR_PORT have been configured. Otherwise, the configuration will
not take effect. Do as follows:
Method 1:
Step 1 Check whether the listening IP address and port have been configured on the server.
----End
Method 2:
Step 1 Check whether the listening IP address and port have been configured on the server.
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = 'LSNR_ADDR';
SELECT NAME,VALUE FROM DV_PARAMETERS WHERE NAME = 'LSNR_PORT';
Step 2 Restart the database for the configurations of the listening IP address and listening port
number to take effect.
cd ${GSDB_DATA}/bin
python zctl.py -t stop
python zctl.py -t start
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 3 Add an HBA entry (TYPE, USER, and ADDRESS) to the zhba.conf file.
cd ${GSDB_DATA}/cfg
vim zhba.conf
host user 127.0.0.1,192.168.3.222,20AB::9217:acff:feab:fcd0/64
NOTE
l ADDRESS lists the IP addresses allowed for database connections. Separate multiple IP addresses
with commas (,). HBA entries are independent from each other and their order in the whitelist does
not affect the whitelist functionality.
l If a username contains special characters such as number sign (#) and tab characters, enclose the
name with double quotation marks (""). In host "#abc" 127.0.0.1 and host "abc" 127.0.0.1, the
strings enclosed in the double quotation marks are usernames.
l If the user field is "*" or *, the entry matches all users.
l The IP addresses can be IPv4 or IPv6 addresses, or a network segment with the subnet mask or
prefix length specified. All the following formats are valid:
– 192.168.3.222 indicates an IPv4 host.
– 192.168.3.0/24 indicates an IPv4 segment with the specified subnet mask length 24.
– 20AB::9217:acff:feab:fcd0 indicates an IPv6 host.
– 20AB::9217:acff:feab:fcd0/64 indicates an IPv6 segment with the specified subnet prefix
length 64.
l When editing the zhba.conf file, do not press Tab to enter a space. Otherwise, the error message
"GS-00220, hba line(20) format is not correct" will be displayed when you load the user whitelist
online.
2. Run the following statement to load the user whitelist online. The whitelist takes effect
immediately after the statement is executed.
ALTER SYSTEM RELOAD HBA CONFIG;
3. Query the DV_HBA view to check whether the user whitelist is configured successfully.
SELECT * FROM SYS.DV_HBA;
----End
Step 1 Log in to the server where GaussDB 100 is deployed as the OS user who installs the
GaussDB 100 database.
Step 2 Query for a configured IP address whitelist and a configured IP address blacklist.
zsql gaussdba/database_123@127.0.0.1:1888
SELECT VALUE FROM DV_PARAMETERS WHERE NAME = 'TCP_INVITED_NODES';
SELECT VALUE FROM DV_PARAMETERS WHERE NAME = 'TCP_EXCLUDED_NODES';
Step 3 Configure the IP address whitelist or blacklist online. The configuration takes effect
immediately, and you do not need to restart the database.
NOTE
The IP addresses specified by TCP_INVITED_NODES cannot exceed 1024 bytes. Otherwise, an error
will be reported.
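Following the ALTER SYSTEM SET pattern shown below for TCP_VALID_NODE_CHECKING, the whitelist and blacklist can presumably be set as follows; the IP addresses and value format are illustrative assumptions:

```sql
-- Allow connections from a host and an IPv4 segment (illustrative values).
ALTER SYSTEM SET TCP_INVITED_NODES = '192.168.3.222,192.168.3.0/24';
-- Reject connections from a specific host (illustrative value).
ALTER SYSTEM SET TCP_EXCLUDED_NODES = '192.168.3.100';
```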
Step 4 Enable IP address whitelist checking online. The function takes effect immediately, and you
do not need to restart the database.
ALTER SYSTEM SET TCP_VALID_NODE_CHECKING = true;
Run the following command to check whether the function takes effect:
SELECT NAME, VALUE FROM DV_PARAMETERS WHERE NAME = 'TCP_VALID_NODE_CHECKING';
NAME                                     VALUE
---------------------------------------- --------------------
TCP_VALID_NODE_CHECKING                  TRUE
----End
If you log in to a database through zsql in non-interactive mode, the plaintext password is
exposed on the command line and in the environment. Therefore, you are advised to log in
to GaussDB 100 in interactive mode.
If no IP address is specified during password-free login, the first IP address of LSNR_ADDR
in the local configuration file will be used.
Scenario
Use zsql to connect to a GaussDB 100 server. Then, you can run SQL statements and perform
database operations.
Prerequisites
l The zsql tool has been installed on the client.
l The user for connection must have permission to access the database.
l Before remotely accessing a database through APIs such as zsql or JDBC, set LSNR_IP
and LSNR_PORT in the zengine.ini file. A maximum of eight listening IP addresses
can be set at a time, and they must be separated by commas (,). For details, see
Configuring Client Access Authentication.
l Before remotely accessing a database, configure access authentication on the local client.
For details about how to configure client access authentication, see Configuring Client
Access Authentication in GaussDB 100 V300R001C00 User Guide (Standalone).
Precautions
If the password of a database user contains the special character $, use the escape character \
to connect to the database through zsql. Otherwise, the login will fail.
Procedure
l Log in as a database administrator. (Only database administrators can use password-free
login.)
zsql
{ CONNECT | CONN } / AS SYSDBA [ip:port] [-D /home/gaussdba/data1] [-q] [-s
"silent_file"] [-w connect_timeout]
[ip:port] is optional. If it is not specified, the local host will be connected by default.
If a database administrator has started multiple database instances, you need to specify
the database directory (-D) when connecting to a specified database.
The -q parameter is used to cancel the SSL login authentication check. The -s parameter
is used to set the silent mode (no prompt) for SQL statement execution.
The -w parameter sets the timeout period for the client to wait for a connection response
from the database. The value -1 indicates that the client waits indefinitely; 0 indicates that
the client does not wait and a failure result is returned immediately; n indicates that the
client waits for n seconds. The default value is 10s. The specified timeout applies when the
zsql process starts and connects to the database, and afterwards to establishing or
reestablishing connections and to queries. The setting becomes invalid after the zsql
process exits.
l Log in as a common database user.
GaussDB 100 supports the following login modes:
– Interactive login mode 1:
zsql user@ip:port [-D /home/gaussdba/data1] [-q] [-s "silent_file"] [-w
connect_timeout]
In this command, user indicates the name of the database user; the password is entered at
the interactive prompt. ip:port indicates the IP address and port number of the host where
the database resides. The default port number is 1888.
If a database administrator has started multiple database instances, you need to specify
the database directory (-D) when connecting to a specified database.
The -q parameter is used to cancel the SSL login authentication check. The -s parameter
is used to set the silent mode (no prompt) for SQL statement execution.
The -w parameter sets the connection response timeout and behaves the same as described
for administrator login above (default value: 10s).
Examples
l Locally log in to a database as user gaussdba.
gaussdba@plat1~> zsql
SQL> CONN gaussdba/database_123@127.0.0.1:1888
connected.
l Start the zsql process and set a response waiting timeout period.
-- Start the zsql process and set the response waiting timeout period to 20s.
After the process is started, the timeout period for waiting for a connection
setup response will be 20s.
zsql gaussdba/database_123@127.0.0.1:1888 -w 20
connected.
-- Create a user jim and grant the CREATE SESSION permission to the user.
DROP USER IF EXISTS jim;
CREATE USER jim IDENTIFIED BY database_123;
GRANT CREATE SESSION TO jim;
-- Switch to the user. The timeout period for waiting for a reconnection
setup response will be also 20s.
CONN jim/database_123@127.0.0.1:1888
connected.
-- Exit the zsql process. The timeout period setting becomes invalid, and the
timeout period for waiting for a new connection setup response will be 10s
(default value).
EXIT
Related Concepts
A GaussDB 100 database tablespace consists of one or more data files. Database objects are
logically stored in tablespaces and physically stored in data files.
A schema is a collection of database objects. A schema can exist in multiple tablespaces, and
a tablespace can contain multiple schemas. By default, a user corresponds to a schema. When
a user is created, the database creates a default schema with the same name as the user to store
the objects created by the user. Therefore, the number of schemas in a database is the same as
the number of users. If the schema name of a table is not explicitly specified when the table is
accessed, the database will automatically qualify the table name with the default schema name.
During GaussDB 100 database creation, the following tablespaces are automatically created:
l SYSTEM tablespace
It stores GaussDB 100 metadata. To ensure stable database running, you are advised not
to store user data in the SYSTEM tablespace. By default, the SYSTEM tablespace is not
automatically extended. If it is full, manually add data files or extend the tablespace.
l TEMP tablespace
It is automatically maintained by GaussDB 100. When SQL statements apply for disk
space, the GaussDB 100 database allocates temporary segments from the TEMP
tablespace. The TEMP tablespace is also used for index creation, data sorting that
cannot be executed in the memory, intermediate result sets of SQL statements, and
temporary tables.
l UNDO tablespace
It stores undo data. When a DML operation (INSERT, UPDATE, or DELETE) is
performed, old data before the operation is written into the UNDO tablespace. Such a
tablespace is mainly used for transaction rollback, database instance restoration, read
consistency, and flashback query.
l USERS tablespace
It is the default tablespace. When a user is created with no tablespace specified, all
information about the user is stored in the USERS tablespace.
l TEMP2 tablespace
It stores data of NOLOGGING tables and is automatically maintained by GaussDB 100.
l TEMP2_UNDO tablespace
It stores undo data of NOLOGGING tables.
Database administrators can use tablespaces to control the layout of disks for installing a
database. This has the following advantages:
l If the disk partition or volume initially allocated to the database is full and the space
cannot be logically extended, you can create and use tablespaces in other partitions until
the system space is reconfigured.
l Tablespaces allow database administrators to arrange storage locations for database
objects as needed, improving database performance.
– A frequently used index can be placed in a disk having stable performance and high
computing speed.
– A table that stores archived data and is rarely used or has low performance
requirements can be placed in a disk with a slow computing speed.
l Database administrators can specify physical disk space for data by using tablespaces.
When multiple application services share one server, database administrators can use
tablespaces to limit the volume of accessed data and prevent the tablespaces from
occupying other spaces in the same partition. This prevents application service
breakdown caused by disk space exhaustion.
Creating a Tablespace
When using the CREATE TABLESPACE statement to create a tablespace, you can specify
the EXTENTS parameter, which specifies the number of pages in an extent.
l EXTENTS
Specifies the number of pages in an extent.
The value must be an integral power of 2 in the range [8, 1024]. If EXTENTS is omitted, an
extent will contain 8 pages.
Increasing the number of pages in a single extent can improve I/O performance. However, if
there are small tables in the tablespace and the table data volume does not reach the size of an
extent, space will be wasted.
l Use the tablespace human_resource as an example. Assume that an extent contains 128
pages. In human_resource, the data file is humanspace_1, and its size is 128 MB. The
tablespace is also enabled with automatic extension (128 MB), that is, the tablespace will
be automatically extended by 128 MB after it is full.
– Run the CREATE TABLESPACE statement to create a tablespace.
CREATE TABLESPACE human_resource EXTENTS 128 DATAFILE 'humanspace_1'
SIZE 128M AUTOEXTEND ON NEXT 128M;
– Create an object in the tablespace. The following describes how to create a table
education.
To create a database object in a tablespace, you must have the CREATE
TABLESPACE permission for the tablespace.
CREATE TABLE education
(
staff_id INT,
highest_degree CHAR(8) NOT NULL,
graduate_school VARCHAR(64),
graduate_date DATETIME,
education_note VARCHAR(70)
)
TABLESPACE human_resource;
Viewing Tablespaces
Database administrators can query the ADM_TABLESPACES view to observe information
about all tablespaces.
Common users can query the ADM_TABLESPACES view to observe information about the
current tablespace.
Extending a Tablespace
Run the ALTER TABLESPACE statement to add a data file to a tablespace.
ALTER TABLESPACE human_resource ADD DATAFILE 'new_datafile' SIZE 128M;
Shrinking a Tablespace
Run the ALTER TABLESPACE statement to shrink a tablespace. In RESTRICT mode,
the TEMP and UNDO tablespaces can be shrunk.
Modifying a Tablespace
Run the following statement to change the name of the tablespace human_resource to
staff_resource:
ALTER TABLESPACE human_resource RENAME TO staff_resource;
Deleting a Tablespace
Run the DROP TABLESPACE command to delete the tablespace human_resource.
-- Check tablespace usage:
SELECT * FROM adm_objects WHERE tablespace_name= 'HUMAN_RESOURCE';
SELECT * FROM adm_segments WHERE tablespace_name='HUMAN_RESOURCE';
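A minimal sketch of the deletion itself, assuming the standard DROP TABLESPACE form; verify in the SQL reference whether additional clauses are required when the tablespace still contains objects:

```sql
-- Delete the tablespace after confirming it no longer holds needed data.
DROP TABLESPACE human_resource;
```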
Prerequisites
If a user needs to create a table in its schema, it must have the CREATE TABLE permission.
If a user needs to create a table in another user's schema, the user must have the CREATE
ANY TABLE permission.
Related Concepts
A table definition includes a table name (such as staffs) and a set of columns. Each column
has two attributes, a column name and a data type. For example, a column name is
STAFF_ID, with a data type of NUMBER(6); and another column name is FIRST_NAME,
with a data type of VARCHAR(20 BYTE). Each column allows for an integrity constraint
(such as NOT NULL). This constraint ensures that each row in the column contains a value.
After creating a table, you can run the INSERT statement to insert data, use a data import and
export tool to load data, or run CREATE TABLE AS Query to create a table with data.
l Columns and their data types
The data type of a column determines the type of values in the column. Select a data type
that requires the least space and does not affect data storage. For example, select a
VARCHAR type for storing character strings, a DATE or TIMESTAMP type for storing
dates, and a NUMERIC type for storing numbers. For the CHAR, VARCHAR, and
TEXT data types, there is no performance difference when values containing spaces are
stored. In most cases, TEXT and VARCHAR are preferable to CHAR.
To join two tables, the join key columns must have the same data type. In most cases, the
primary key of one table and the foreign key of the other table are used as join key
columns. If the data types are different, the database will convert either of the data types
and verify the value correctness, leading to unnecessary overheads.
l Table and column constraints (If you define constraints for tables and columns, data
contained in the tables and columns will be restricted.)
– [ NOT ] NULL
Specifies whether a column can hold NULL values.
– UNIQUE
Specifies that values in a column must be unique. NULL values are allowed. The
UNIQUE constraint can be added to multiple columns in a table.
– PRIMARY KEY
Specifies a primary key. A primary key column cannot hold NULL values, and a
table can have only one primary key.
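The three constraints above can be combined in one table definition; the sections table below is a hypothetical example:

```sql
-- section_id: primary key (NOT NULL and unique by definition).
-- section_name: must hold a value in every row.
-- manager_email: values must be unique, but NULL is allowed.
CREATE TABLE sections
(
section_id NUMBER(4) PRIMARY KEY,
section_name VARCHAR2(30) NOT NULL,
manager_email VARCHAR2(25) UNIQUE
);
```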
Ordinary Tables
Create the staffs table in the human_resource tablespace, and forbid NULL values in the
STAFF_ID column.
CREATE TABLE hr.staffs
(
staff_id NUMBER(6) NOT NULL,
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
employment_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
section_id NUMBER(4),
graduated_name VARCHAR2(60)
)
TABLESPACE human_resource;
Temporary Tables
In GaussDB 100, you can create temporary tables. A temporary table holds data that exists
only for the duration of a transaction or session when there are complex queries. When a
session ends or a user commits or rolls back a transaction, the temporary table will be
automatically cleared, but the table structure will remain. Temporary tables are classified into
session-level temporary tables and transaction-level temporary tables. If the ON COMMIT
{DELETE | PRESERVE} ROWS clause is not specified when a temporary table is created,
a transaction-level temporary table will be created.
Global temporary tables allow users to create temporary indexes, and to update, insert, and
delete data. Data in such an index has the same session or transaction scope as data in the
temporary table. A local temporary table supports only ON COMMIT PRESERVE ROWS,
and its name must start with a number sign (#). In addition, the
LOCAL_TEMPORARY_TABLE_ENABLED parameter, which specifies whether to enable
local temporary tables, must be set to TRUE.
l Session-level temporary tables (ON COMMIT PRESERVE ROWS)
Data in a temporary table exists only for the duration of a session. When the session
ends, the temporary table is automatically cleared. The TRUNCATE statement is
executed to delete data from only the current temporary table, instead of the temporary
tables specific to other sessions.
Create a session-level global temporary table staff_history_session.
CREATE GLOBAL TEMPORARY TABLE staff_history_session
(startdate DATE,
enddate DATE
)
ON COMMIT PRESERVE ROWS;
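By contrast, a transaction-level temporary table uses ON COMMIT DELETE ROWS (the default when the clause is omitted, per the text above); its rows are cleared when the transaction commits or rolls back. A parallel sketch with an illustrative table name:

```sql
CREATE GLOBAL TEMPORARY TABLE staff_history_trans
(startdate DATE,
enddate DATE
)
ON COMMIT DELETE ROWS;
```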
NOLOGGING Tables
A NOLOGGING table does not record redo logs. Writing fewer logs improves system
performance. However, because there are no redo logs, the database cannot restore the table
by replaying logs after a restart: table definitions are retained, but data is deleted. Save only
non-core data that does not require high reliability in a NOLOGGING table. The tablespace
of a NOLOGGING table must be a NOLOGGING tablespace. If no tablespace is
specified, the TEMP2 tablespace will be used.
NOLOGGING tables:
l Allow users to replicate table definitions, not table data, between primary and standby
databases.
l Allow users to back up and restore table definitions, not table data.
l Support MVCC and rollback, as well as DDL and DML operations that ordinary tables
support.
A NOLOGGING table can be created by specifying the keyword NOLOGGING or the
tablespace NOLOGGING.
l Specify the keyword NOLOGGING to create a table staffs_nologging.
CREATE TABLE staffs_nologging
(
staff_id NUMBER(6) NOT NULL,
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
employment_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
section_id NUMBER(4),
graduated_name VARCHAR2(60)
)NOLOGGING;
l Specify the tablespace TEMP2 to create a table staffs_nologging2.
CREATE TABLE staffs_nologging2
(
staff_id NUMBER(6) NOT NULL,
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
employment_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
section_id NUMBER(4),
graduated_name VARCHAR2(60)
) TABLESPACE temp2;
Scenario
In GaussDB 100, you can view the definition of a table and all data in the table.
Prerequisites
When the SELECT statement is used to view a table, authorization is not required if the user
is the owner of the table. Otherwise, the SELECT permission for the table or the SELECT
ANY TABLE permission is required.
Procedure
l View table data.
SELECT * FROM staffs;
Scenario
After a table is created, you can run the ALTER TABLE statement to change the table
definition if the service scenario changes. Typical functions are as follows:
l Renaming a table
l Adding, modifying, and deleting a column
l Adding, renaming, and deleting constraints
Prerequisites
When the ALTER TABLE statement is used to modify a table, authorization is not required
if the user is the owner of the table. Otherwise, the ALTER permission for the table or the
ALTER ANY TABLE permission is required.
Precautions
l Users can modify the names of only the tables in their schemas.
l Common users cannot modify the objects of database administrators.
l The unique index, primary key, and foreign key inline constraints cannot be contained in
a statement for adding or modifying the attributes of a column.
l Table definitions cannot be changed during database restart or rollback.
Procedure
The following illustrates the above functions. Unless otherwise specified, the ordinary table
created in Creating a Table is used as the example.
Renaming a table
Rename the staffs table to staffs_group.
ALTER TABLE hr.staffs RENAME TO hr.staffs_group;
l Change the data type of the phone_number column in the staffs_group table to
BIGINT.
ALTER TABLE hr.staffs_group MODIFY phone_number BIGINT;
The staff_id column in the employeeinfo_f table references the primary key of the
staffs_f table. Therefore, staff_id is a foreign key of the employeeinfo_f table.
3. Add the foreign key staff_id to the employeeinfo_f table.
ALTER TABLE employeeinfo_f ADD CONSTRAINT fk FOREIGN KEY (staff_id)
REFERENCES staffs_f(staff_id);
4. Change the foreign key constraint name of the employeeinfo_f table to fk_cons.
ALTER TABLE employeeinfo_f RENAME CONSTRAINT fk TO fk_cons;
1. Add the profile column (about personal profile information) to the hr.staffs_group table
and set the data type to CLOB.
ALTER TABLE hr.staffs_group ADD profile CLOB;
2. As a database runs, the space occupied by CLOB columns may slowly increase. You can
run the following statement to reduce the space occupied by the CLOB columns:
ALTER TABLE hr.staffs_group MODIFY LOB profile (SHRINK SPACE);
Scenario
When table data is no longer used, delete all the rows of the table, that is, clear the table.
Prerequisites
When the DELETE or TRUNCATE statement is used to clear a table, authorization is not
required if the user is the owner of the table. Otherwise, the DELETE permission for the
table or the DROP ANY TABLE permission is required.
Procedure
l Run the DELETE statement to delete rows from the staffs table.
– Delete the rows whose staff_id is 198 from the staffs table.
DELETE FROM staffs WHERE staff_id = '198';
l Run the TRUNCATE statement to delete all rows from the table.
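For example, using the standard TRUNCATE form:

```sql
-- Remove all rows from staffs; the table definition is retained.
TRUNCATE TABLE staffs;
```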
Scenario
When the data or definition of a table is no longer needed, you can run the DROP TABLE
statement to delete the table.
Prerequisites
When the DROP TABLE statement is used to drop a table, authorization is not required if the
user is the owner of the table. Otherwise, the DROP permission for the table or the DROP
ANY TABLE permission is required.
Procedure
Step 1 Delete the hr.staffs_group table.
DROP TABLE IF EXISTS hr.staffs_group;
----End
Related Concepts
GaussDB 100 supports range partitioned tables, list partitioned tables, hash partitioned tables,
and interval partitioned tables.
l Range partitioned table: Data within a specific range is mapped onto each partition. The
range is determined by the partition key specified when the partitioned table is created.
This is the most commonly used partitioning mode. The partition key is usually a date;
for example, sales data is partitioned by month.
l Interval partitioned table: A special type of range partitioned table. If common range
partitions have been created and data outside those partitions is inserted, the database
reports an error. In this case, you can manually add a partition or use an interval
partition. For example, a user may create range partitions by day: during service
deployment, a batch of partitions (say, three months' worth) is created for future use,
but three months later new partitions must be added, or an error is reported when
service data is saved to the database. This partitioning mode increases maintenance
costs unless the kernel supports automatic partition creation. With interval partitioning,
you do not need to be concerned about subsequent partition creation, which reduces
design costs and partition maintenance costs.
l List partitioned table: A large table is split into small, easy-to-manage blocks based on
discrete values of the partition key.
l Hash partitioned table: Usually, users cannot predict the range of data changes on a
column and therefore cannot create a fixed number of range or list partitions. Hash
partitioning provides a method for evenly distributing data across a specified number of
partitions: data written to the table is spread evenly among the partitions, and users
cannot predict which partition a given row is written to. For example, if sales cities are
spread across a country, the table can barely support list partitioning; in this case, hash
partitioning is recommended.
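The interval partitioning described above can be sketched as follows. The table and column names are hypothetical, and the Oracle-style INTERVAL clause with NUMTOYMINTERVAL is assumed here; check the GaussDB 100 SQL reference for the exact supported form:

```sql
-- Hypothetical sales table partitioned by month; partitions beyond p_2019_01
-- are assumed to be created automatically as new months of data arrive.
CREATE TABLE sales_by_month
(
  sale_id   NUMBER(10),
  sale_date DATE
)
PARTITION BY RANGE (sale_date) INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_2019_01 VALUES LESS THAN (TO_DATE('2019-02-01', 'YYYY-MM-DD'))
);
```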
A partitioned table has the following advantages over an ordinary table:
l High query performance: Users can specify partitions when querying partitioned tables,
improving query efficiency.
l High availability: If a partition in a partitioned table is faulty, data in the other partitions
is still available.
l Easy maintenance: If a partition in a partitioned table is faulty, only this partition needs
to be repaired.
l Balanced I/O: Partitions can be mapped to different disks to balance I/O and improve the
overall system performance.
Procedure
Perform the following operations on a range partitioned table:
l Delete existing tables named staffs_p, if any.
DROP TABLE IF EXISTS staffs_p;
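Before a partition can be added, the staffs_p range partitioned table must exist. A minimal sketch follows; the column list is illustrative, and only staff_id matters for the partitioning:

```sql
-- Range partitioned table keyed on staff_id; the P_250 partition can then be
-- added with ALTER TABLE ... ADD PARTITION as shown below.
CREATE TABLE staffs_p
(
  staff_id   NUMBER(6) NOT NULL,
  first_name VARCHAR2(20),
  last_name  VARCHAR2(25)
)
PARTITION BY RANGE (staff_id)
(
  PARTITION P_100 VALUES LESS THAN (100),
  PARTITION P_200 VALUES LESS THAN (200)
);
```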
l Add the P_250 partition, which holds partition key values greater than or equal to 200 (the upper bound of the previous partition) and less than 250.
ALTER TABLE staffs_p ADD PARTITION P_250 VALUES LESS THAN (250);
– Create a hash partitioned table, specifying the number of partitions but not the
partition names.
CREATE TABLE staffs_p2018
(
staff_ID NUMBER(6) not null,
FIRST_NAME VARCHAR2(20),
LAST_NAME VARCHAR2(25),
EMAIL VARCHAR2(25),
PHONE_NUMBER VARCHAR2(20),
HIRE_DATE DATE,
employment_ID VARCHAR2(10),
SALARY NUMBER(8,2),
COMMISSION_PCT NUMBER(2,2),
MANAGER_ID NUMBER(6),
section_ID NUMBER(4)
)
PARTITION BY HASH (staff_ID) PARTITIONS 5 STORE IN (USERS, USERS);
l Delete a partition.
When the COALESCE operation is performed, data of the last partition is inserted into
a previous partition, and then the last partition is deleted. If there is only one partition
left, deleting this partition will report an error.
ALTER TABLE staffs_p2017 COALESCE PARTITION;
ALTER TABLE staffs_p2018 COALESCE PARTITION;
l Add the part_06 partition.
ALTER TABLE staffs_p2017 ADD PARTITION part_06;
l Query a partition.
– Query a partition of the hash partitioned table with partition names specified.
Run the following statement to query P_04:
SELECT * FROM staffs_p2017 PARTITION(P_04);
– Query a partition of the hash partitioned table with the number of partitions
specified.
-- Query the SYS.SYS_TABLE_PARTS and SYS.SYS_TABLES system catalogs to
obtain partition names:
SELECT * FROM SYS.SYS_TABLE_PARTS WHERE TABLE# = (SELECT ID FROM
SYS.SYS_TABLES WHERE NAME='STAFFS_P2018');
-- The output is abbreviated here; it lists the automatically generated
-- partition names, such as SYS_P1024, SYS_P1025, and SYS_P1026.
-- Query the partition named SYS_P1024:
SELECT * FROM STAFFS_P2018 PARTITION(SYS_P1024);
l Delete a partition.
-- Query the SYS.SYS_TABLE_PARTS and SYS.SYS_TABLES system catalogs to obtain
partition names:
SELECT * FROM SYS.SYS_TABLE_PARTS WHERE TABLE# = (SELECT ID FROM
SYS.SYS_TABLES WHERE NAME='STAFFS_P2016');
-- The output is abbreviated here; it lists the partitions P_50, P_100, and
-- P_150 (3 rows fetched).
-- Delete the partition named P_50:
ALTER TABLE staffs_p2016 DROP PARTITION P_50;
l Add a partition.
An interval partitioned table is dynamically extended as data is inserted; no manual
operation is needed. The name of an automatically generated partition can be obtained
from the SYS.SYS_TABLE_PARTS table.
l Query a partition.
-- Query the SYS.SYS_TABLE_PARTS and SYS.SYS_TABLES system catalogs to obtain
partition names:
SELECT * FROM SYS.SYS_TABLE_PARTS WHERE TABLE# = (SELECT ID FROM
SYS.SYS_TABLES WHERE NAME='STAFFS_P2016');
2 rows fetched.
-- Query the partition named P_150:
SELECT * FROM STAFFS_P2016 PARTITION(P_150);
– Create the partitioned table index idx_education2 with partition names specified.
CREATE INDEX idx_education2 ON education(higest_degree) LOCAL
(
PARTITION higest_degree1_index,
PARTITION higest_degree4_index,
PARTITION higest_degree5_index
) TABLESPACE USERS;
Related Concepts
Based on the number of columns within an index, indexes are classified into single-column
indexes and multi-column indexes (composite indexes).
l Single-column index
An index created on only one column.
l Multi-column index
Also called a composite index, an index that contains multiple columns. The index is
used only when the query condition includes the first column specified during index
creation. In GaussDB 100, a multi-column index supports a maximum of 16 columns,
and the total length cannot exceed 3900 bytes, calculated using the maximum length of
each column's data type.
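The leading-column rule can be illustrated with a hypothetical composite index on the staffs table; the index and column names here are assumptions for illustration:

```sql
-- Composite index; last_name is the leading column.
CREATE INDEX idx_staffs_name ON staffs(last_name, first_name);

-- These queries can use idx_staffs_name: the leading column is in the condition.
SELECT * FROM staffs WHERE last_name = 'Smith';
SELECT * FROM staffs WHERE last_name = 'Smith' AND first_name = 'Ann';

-- This query cannot use idx_staffs_name: the leading column is absent.
SELECT * FROM staffs WHERE first_name = 'Ann';
```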
Indexes are suitable for the following columns:
l Columns that are frequently searched, to improve search efficiency.
l Columns that function as primary keys, to enforce the uniqueness of the columns and
the ordering of the data.
l Columns that function as foreign keys and are used in joins, to improve join efficiency.
l Columns that are searched by a specified range. Because index data is sorted, the
specified range is contiguous.
l Columns that need to be sorted. Because index data is already sorted, ordered queries
are accelerated.
l Columns that appear in WHERE clauses, to speed up condition evaluation.
l Columns that frequently follow keywords such as ORDER BY, GROUP BY, and
DISTINCT.
l After an index is created, the system automatically determines when to reference it. If
the system determines that using the index is faster than sequential scanning, the index
will be used.
l An index must be kept synchronized with its associated table to ensure that data can be
located accurately, which increases the data operation load. Therefore, delete
unnecessary indexes periodically.
Creating an Index
Run the CREATE INDEX statement to create an index.
The following describes how to create the index staffs_ind on the STAFF_ID column of the
staffs table. The index uses the human_resource tablespace. ONLINE indicates online
index creation, which reduces the impact on other users' use of the table and does not block
online services.
CREATE INDEX staffs_ind ON staffs(staff_id) TABLESPACE human_resource ONLINE;
Rebuilding an Index
After a large number of insert, delete, and update operations on a table, the table data may
become fragmented in the physical files on the disk, affecting access speed. Using the ALTER
INDEX statement to rebuild indexes reassembles index data and releases unused space,
improving data access efficiency and space usage.
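A sketch of rebuilding the staffs_ind index created above; the Oracle-style ALTER INDEX ... REBUILD form is assumed here, so check the GaussDB 100 SQL reference for the exact options:

```sql
-- Reassemble the index data and release unused space.
ALTER INDEX staffs_ind REBUILD;
```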
Renaming an Index
If you need to standardize the naming style of indexes, use the RENAME syntax to
change index names without changing other attributes of the indexes.
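For example, the staffs_ind index could be renamed as follows; the Oracle-style RENAME TO clause is assumed here:

```sql
-- Rename the index without changing any of its other attributes.
ALTER INDEX staffs_ind RENAME TO staffs_ind_new;
```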
Deleting an Index
Run the DROP INDEX statement to delete an index.
Run the following command to delete the staffs_ind_new index from the staffs table:
DROP INDEX staffs_ind_new ON staffs;
Related Concepts
A view is different from a base table. It is only a virtual object rather than a physical one. A
database stores only the definition of a view, not its data; the data remains in the original base
table. If data in the base table changes, the data queried from the view changes accordingly.
In this sense, a view is like a window through which users can observe the data they are
interested in and its changes in the database. The view's defining query is executed every
time the view is referenced.
Managing Views
l Run the CREATE VIEW command to create a view.
CREATE OR REPLACE VIEW MyView AS SELECT * FROM hr.staffs WHERE section_id =
10;
The OR REPLACE parameter in this command is optional. It indicates that if the view
exists, the new view will replace the existing view.
l Run the SELECT command to query the data in a view.
Example: Query the MyView view.
SELECT * FROM MyView;
l Query the system catalog my_views for the view in the current schema.
SELECT * FROM my_views;
l Run the desc view_name command to query the information about a specified view.
SQL> desc db_views;
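When a view is no longer needed, it can be removed; only the view definition is deleted, and the base table data is untouched. Standard DROP VIEW syntax is assumed here:

```sql
-- Remove the view definition; the hr.staffs base table is unaffected.
DROP VIEW MyView;
```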
When data needs to be migrated from one platform to another, the database administrator
first exports and saves the original database information and then imports the data on the
new platform.
l Data export: The exported file is a .csv file, which is compatible with the import
operations of different databases, for example, when moving data from one platform to
another.
l Logical export: The exported file is a binary file, which enables faster export and
import. For example, when a large amount of data needs to be migrated, logical export
and import can accelerate the operation.
Data Types
You can define the type of the exported file as TXT or BIN. A TXT file allows exported data
to be visible and also allows multiple threads to export table data simultaneously. A BIN file
occupies less disk space.
l Data export and import: Exported user data and table data are visible.
l Logical export and import: Metadata about user creation, user roles, and permission
granting can be exported based on specific parameter settings.
Precautions
l Both export and import occupy disk space. You must ensure sufficient disk space
beforehand.
l Export and import occupy CPU resources, which may affect the database processing
speed. Therefore, you are advised to perform the operations during off-peak hours.
Prerequisites
l Ensure that disk space is sufficient.
l Ensure that CPU resources are sufficient.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba and database_123 indicate the system administrator created during installation and
the administrator's password. 192.168.0.1 indicates the IP address of the database server, and
1888 indicates the connection port.
Step 2 Run the export command.
l Export the training table.
DUMP TABLE training INTO FILE '/home/gaussdba/data/training_backup' ;
l Export the specified rows in the training table (results returned by SELECT).
DUMP QUERY "SELECT course_name,score,exam_date FROM training WHERE
course_name = 'SQL majorization'"
INTO FILE '/home/gaussdba/data/training_query_backup'
COLUMNS ENCLOSED BY ''''
COLUMNS TERMINATED BY '|';
For details about the DUMP command, see SQL Syntax Reference > Function Syntax
> DUMP in GaussDB 100 V300R001C00 R&D Documentation (Standalone).
----End
Prerequisites
l Ensure that disk space is sufficient.
l Ensure that CPU resources are sufficient.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba and database_123 indicate the system administrator created during installation and
the administrator's password. 192.168.0.1 indicates the IP address of the database server, and
1888 indicates the connection port.
Step 2 Run the import command.
l Import the data file training_backup to the training_new table.
LOAD DATA INFILE "/home/gaussdba/data/training_backup" INTO TABLE
training_new FIELDS ENCLOSED BY '|';
For details about the LOAD command, see SQL Syntax Reference > Function Syntax >
LOAD in GaussDB 100 V300R001C00 R&D Documentation (Standalone).
----End
Prerequisites
l Ensure that disk space is sufficient.
l Ensure that CPU resources are sufficient.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba and database_123 indicate the system administrator created during installation and
the administrator's password. 192.168.0.1 indicates the IP address of the database server, and
1888 indicates the connection port.
For details about the EXP command, see SQL Syntax Reference > Function Syntax > EXP
in GaussDB 100 V300R001C00 R&D Documentation (Standalone).
----End
Prerequisites
l Ensure that disk space is sufficient.
l Ensure that CPU resources are sufficient.
Procedure
Step 1 Log in to the GaussDB 100 database as a database administrator.
zsql
conn gaussdba/database_123@192.168.0.1:1888
gaussdba and database_123 indicate the system administrator created during installation and
the administrator's password. 192.168.0.1 indicates the IP address of the database server, and
1888 indicates the connection port.
For details about the IMP command, see SQL Syntax Reference > Function Syntax > IMP
in GaussDB 100 V300R001C00 R&D Documentation (Standalone).
----End
SYS_BACKUP_SETS BACKUP_SET$
SYS_COLUMNS COLUMN$
SYS_COMMENTS COMMENT$
SYS_CONSTRAINT_DEFS CONSDEF$
SYS_DATA_NODES DATA_NODES$
EXP_TAB_ORDERS DBA_EXP$TBL_ORDER
EXP_TAB_RELATIONS DBA_EXP$TBL_RELATIONS
SYS_DEPENDENCIES DEPENDENCY$
SYS_DISTRIBUTE_RULES DISTRIBUTE_RULE$
SYS_DISTRIBUTE_STRATEGIES DISTRIBUTE_STRATEGY$
SYS_DUMMY DUAL
SYS_EXTERNAL_TABLES EXTERNAL$
SYS_GARBAGE_SEGMENTS GARBAGE_SEGMENT$
SYS_HISTGRAM_ABSTR HIST_HEAD$
SYS_HISTGRAM HISTGRAM$
SYS_INDEXES INDEX$
SYS_INDEX_PARTS INDEXPART$
SYS_JOBS JOB$
SYS_LINKS LINK$
SYS_LOBS LOB$
SYS_LOB_PARTS LOBPART$
SYS_LOGIC_REPL LOGIC_REP$
SYS_DML_STATS MON_MODS_ALL$
SYS_OBJECT_PRIVS OBJECT_PRIVS$
SYS_PART_COLUMNS PARTCOLUMN$
SYS_PART_OBJECTS PARTOBJECT$
SYS_PART_STORES PARTSTORE$
SYS_PENDING_DIST_TRANS PENDING_DISTRIBUTED_TRANS$
SYS_PENDING_TRANS PENDING_TRANS$
SYS_PROCS PROC$
SYS_PROC_ARGS PROC_ARGS$
SYS_PROFILE PROFILE$
SYS_RECYCLEBIN RECYCLEBIN$
SYS_ROLES ROLES$
SYS_SEQUENCES SEQUENCE$
SYS_SHADOW_INDEXES SHADOW_INDEX$
SYS_SHADOW_INDEX_PARTS SHADOW_INDEXPART$
SYS_SYNONYMS SYNONYM$
SYS_PRIVS SYS_PRIVS$
SYS_TABLES TABLE$
SYS_TABLE_PARTS TABLEPART$
SYS_TMP_SEG_STATS TMP_SEG_STAT$
SYS_USERS USER$
SYS_USER_HISTORY USER_HISTORY$
SYS_USER_ROLES USER_ROLES$
SYS_VIEWS VIEW$
SYS_VIEW_COLS VIEWCOL$
SYS_SQL_MAPS SQL_MAP$
WSR_PARAMETER WRH$_PARAMETER
WSR_SQLAREA WRH$_SQLAREA
WSR_SYS_STAT WRH$_SYSSTAT
WSR_SYSTEM WRH$_SYSTEM
WSR_SYSTEM_EVENT WRH$_SYSTEM_EVENT
WSR_SNAPSHOT WRM$_SNAPSHOT
WSR_CONTROL WRM$_WR_CONTROL
WSR_DBA_SEGMENTS WSR$_DBA_SEGMENTS
WSR_LATCH WSR$_LATCH
WSR_LIBRARYCACHE WSR$_LIBRARYCACHE
WSR_SEGMENT WSR$_SEGMENT
WSR_SQL_LIST WSR$SQL_LIST
WSR_WAITSTAT WSR$_WAITSTAT
DB_DB_LINKS ALL_DB_LINKS
DB_IND_STATISTICS ALL_IND_STATISTICS
DB_JOBS ALL_JOBS
DB_TAB_MODIFICATIONS ALL_TAB_MODIFICATIONS
DB_USERS ALL_USERS
DB_USER_SYS_PRIVS ALL_USER_SYS_PRIVS
ADM_ARGUMENTS DBA_ARGUMENTS
ADM_BACKUP_SET DBA_BACKUP_SET
ADM_COL_COMMENTS DBA_COL_COMMENTS
ADM_CONSTRAINTS DBA_CONSTRAINTS
ADM_DATA_FILES DBA_DATA_FILES
ADM_DBLINK_TABLES DBA_DBLINK_TABLES
ADM_DBLINK_TAB_COLUMNS DBA_DBLINK_TAB_COLUMNS
ADM_DEPENDENCIES DBA_DEPENDENCIES
ADM_FREE_SPACE DBA_FREE_SPACE
ADM_HISTOGRAMS DBA_HISTOGRAMS
ADM_HIST_DBASEGMENTS DBA_HIST_DBASEGMENTS
ADM_HIST_LATCH DBA_HIST_LATCH
ADM_HIST_LIBRARYCACHE DBA_HIST_LIBRARYCACHE
ADM_HIST_LONGSQL DBA_HIST_LONGSQL
ADM_HIST_PARAMETER DBA_HIST_PARAMETER
ADM_HIST_SEGMENT DBA_HIST_SEGMENT
ADM_HIST_SNAPSHOT DBA_HIST_SNAPSHOT
ADM_HIST_SQLAREA DBA_HIST_SQLAREA
ADM_HIST_SYSSTAT DBA_HIST_SYSSTAT
ADM_HIST_SYSTEM DBA_HIST_SYSTEM
ADM_HIST_SYSTEM_EVENT DBA_HIST_SYSTEM_EVENT
ADM_HIST_WAITSTAT DBA_HIST_WAITSTAT
ADM_HIST_WR_CONTROL DBA_HIST_WR_CONTROL
ADM_INDEXES DBA_INDEXES
ADM_IND_COLUMNS DBA_IND_COLUMNS
ADM_IND_PARTITIONS DBA_IND_PARTITIONS
ADM_IND_STATISTICS DBA_IND_STATISTICS
ADM_JOBS DBA_JOBS
ADM_JOBS_RUNNING DBA_JOBS_RUNNING
ADM_OBJECTS DBA_OBJECTS
ADM_PART_COL_STATISTICS DBA_PART_COL_STATISTICS
ADM_PART_KEY_COLUMNS DBA_PART_KEY_COLUMNS
ADM_PART_STORE DBA_PART_STORE
ADM_PART_TABLES DBA_PART_TABLES
ADM_PROCEDURES DBA_PROCEDURES
ADM_PROFILES DBA_PROFILES
ADM_ROLES DBA_ROLES
ADM_ROLE_PRIVS DBA_ROLE_PRIVS
ADM_SEGMENTS DBA_SEGMENTS
ADM_SEQUENCES DBA_SEQUENCES
ADM_SOURCE DBA_SOURCE
ADM_SYNONYMS DBA_SYNONYMS
ADM_SYS_PRIVS DBA_SYS_PRIVS
ADM_TABLES DBA_TABLES
ADM_TABLESPACES DBA_TABLESPACES
ADM_TAB_COLS DBA_TAB_COLS
ADM_TAB_COLUMNS DBA_TAB_COLUMNS
ADM_TAB_COL_STATISTICS DBA_TAB_COL_STATISTICS
ADM_TAB_COMMENTS DBA_TAB_COMMENTS
ADM_TAB_DISTRIBUTE DBA_TAB_DISTRIBUTE
ADM_TAB_MODIFICATIONS DBA_TAB_MODIFICATIONS
ADM_TAB_PARTITIONS DBA_TAB_PARTITIONS
ADM_TAB_PRIVS DBA_TAB_PRIVS
ADM_TAB_STATISTICS DBA_TAB_STATISTICS
ADM_TRIGGERS DBA_TRIGGERS
ADM_USERS DBA_USERS
ADM_VIEWS DBA_VIEWS
ADM_VIEW_COLUMNS DBA_VIEW_COLUMNS
DB_ARGUMENTS ALL_ARGUMENTS
DB_COL_COMMENTS ALL_COL_COMMENTS
DB_CONSTRAINTS ALL_CONSTRAINTS
DB_DBLINK_TABLES ALL_DBLINK_TABLES
DB_DBLINK_TAB_COLUMNS ALL_DBLINK_TAB_COLUMNS
DB_DEPENDENCIES ALL_DEPENDENCIES
DB_DISTRIBUTE_RULES ALL_DISTRIBUTE_RULES
DB_DIST_RULE_COLS ALL_DIST_RULE_COLS
DB_HISTOGRAMS ALL_HISTOGRAMS
DB_INDEXES ALL_INDEXES
DB_IND_COLUMNS ALL_IND_COLUMNS
DB_IND_PARTITIONS ALL_IND_PARTITIONS
DB_OBJECTS ALL_OBJECTS
DB_PART_COL_STATISTICS ALL_PART_COL_STATISTICS
DB_PART_KEY_COLUMNS ALL_PART_KEY_COLUMNS
DB_PART_STORE ALL_PART_STORE
DB_PART_TABLES ALL_PART_TABLES
DB_PROCEDURES ALL_PROCEDURES
DB_SEQUENCES ALL_SEQUENCES
DB_SOURCE ALL_SOURCE
DB_SYNONYMS ALL_SYNONYMS
DB_TABLES ALL_TABLES
DB_TAB_COLS ALL_TAB_COLS
DB_TAB_COLUMNS ALL_TAB_COLUMNS
DB_TAB_COL_STATISTICS ALL_TAB_COL_STATISTICS
DB_TAB_COMMENTS ALL_TAB_COMMENTS
DB_TAB_DISTRIBUTE ALL_TAB_DISTRIBUTE
DB_TAB_PARTITIONS ALL_TAB_PARTITIONS
DB_TAB_STATISTICS ALL_TAB_STATISTICS
DB_TRIGGERS ALL_TRIGGERS
DB_VIEWS ALL_VIEWS
DB_VIEW_COLUMNS ALL_VIEW_COLUMNS
ROLE_SYS_PRIVS ROLE_SYS_PRIVS
MY_ARGUMENTS USER_ARGUMENTS
MY_COL_COMMENTS USER_COL_COMMENTS
MY_CONSTRAINTS USER_CONSTRAINTS
MY_CONS_COLUMNS USER_CONS_COLUMNS
MY_DEPENDENCIES USER_DEPENDENCIES
MY_FREE_SPACE USER_FREE_SPACE
MY_HISTOGRAMS USER_HISTOGRAMS
MY_INDEXES USER_INDEXES
MY_IND_COLUMNS USER_IND_COLUMNS
MY_IND_PARTITIONS USER_IND_PARTITIONS
MY_IND_STATISTICS USER_IND_STATISTICS
MY_JOBS USER_JOBS
MY_OBJECTS USER_OBJECTS
MY_PART_COL_STATISTICS USER_PART_COL_STATISTICS
MY_PART_KEY_COLUMNS USER_PART_KEY_COLUMNS
MY_PART_STORE USER_PART_STORE
MY_PART_TABLES USER_PART_TABLES
MY_PROCEDURES USER_PROCEDURES
MY_ROLE_PRIVS USER_ROLE_PRIVS
MY_SEGMENTS USER_SEGMENTS
MY_SEQUENCES USER_SEQUENCES
MY_SOURCE USER_SOURCE
MY_SQL_MAPS USER_SQL_MAPS
MY_SYNONYMS USER_SYNONYMS
MY_SYS_PRIVS USER_SYS_PRIVS
MY_TABLES USER_TABLES
MY_TAB_COLS USER_TAB_COLS
MY_TAB_COLUMNS USER_TAB_COLUMNS
MY_TAB_COL_STATISTICS USER_TAB_COL_STATISTICS
MY_TAB_COMMENTS USER_TAB_COMMENTS
MY_TAB_DISTRIBUTE USER_TAB_DISTRIBUTE
MY_TAB_MODIFICATIONS USER_TAB_MODIFICATIONS
MY_TAB_PARTITIONS USER_TAB_PARTITIONS
MY_TAB_PRIVS USER_TAB_PRIVS
MY_TAB_STATISTICS USER_TAB_STATISTICS
MY_TRIGGERS USER_TRIGGERS
MY_USERS USER_USERS
MY_VIEWS USER_VIEWS
MY_VIEW_COLUMNS USER_VIEW_COLUMNS
NLS_SESSION_PARAMETERS NLS_SESSION_PARAMETERS
DV_ALL_TRANS V$ALL_TRANSACTION
DV_ARCHIVED_LOGS V$ARCHIVED_LOG
DV_ARCHIVE_DEST_STATUS V$ARCHIVE_DEST_STATUS
DV_ARCHIVE_GAPS V$ARCHIVE_GAP
DV_ARCHIVE_THREADS V$ARCHIVE_PROCESSES
DV_BACKUP_PROCESSES V$BACKUP_PROCESS
DV_BUFFER_POOLS V$BUFFER_POOL
DV_BUFFER_POOL_STATS V$BUFFER_POOL_STATISTICS
DV_CONTROL_FILES V$CONTROLFILE
DV_DATABASE V$DATABASE
DV_DATA_FILES V$DATAFILE
DV_OBJECT_CACHE V$DB_OBJECT_CACHE
DV_DC_POOLS V$DC_POOL
DV_DYNAMIC_VIEWS V$DYNAMIC_VIEW
DV_DYNAMIC_VIEW_COLS V$DYNAMIC_VIEW_COLUMN
DV_FREE_SPACE V$FREE_SPACE
DV_HA_SYNC_INFO V$HA_SYNC_INFO
DV_HBA V$HBA
DV_INSTANCE V$INSTANCE
DV_RUNNING_JOBS V$JOBS_RUNNING
DV_LATCHS V$LATCH
DV_LIBRARY_CACHE V$LIBRARYCACHE
DV_LOCKS V$LOCK
DV_LOCKED_OBJECTS V$LOCKED_OBJECT
DV_LOG_FILES V$LOGFILE
DV_LONG_SQL V$LONGSQL
DV_STANDBYS V$MANAGED_STANDBY
DV_ME V$ME
DV_OPEN_CURSORS V$OPEN_CURSOR
DV_PARAMETERS V$PARAMETER
DV_PL_MANAGER V$PL_MANAGER
DV_PL_REFSQLS V$PL_REFSQLS
DV_REACTOR_POOLS V$REACTOR_POOL
DV_REPL_STATUS V$REPL_STATUS
DV_RESOURCE_MAP V$RESOURCE_MAP
DV_SEGMENT_STATS V$SEGMENT_STATISTICS
DV_SESSIONS V$SESSION
DV_SESSION_EVENTS V$SESSION_EVENT
DV_SESSION_WAITS V$SESSION_WAIT
DV_GMA V$SGA
DV_GMA_STATS V$SGASTAT
DV_SPINLOCKS V$SPINLOCK
DV_SQLS V$SQLAREA
DV_SQL_POOL V$SQLPOOL
DV_SYS_STATS V$SYSSTAT
DV_SYSTEM V$SYSTEM
DV_SYS_EVENTS V$SYSTEM_EVENT
DV_TABLESPACES V$TABLESPACE
DV_TEMP_POOLS V$TEMP_POOL
DV_TEMP_UNDO_SEGMENT V$TEMP_UNDO_SEGMENT
DV_TRANSACTIONS V$TRANSACTION
DV_UNDO_SEGMENTS V$UNDO_SEGMENT
DV_USER_ADVISORY_LOCKS V$USER_ADVISORY_LOCKS
DV_USER_ASTATUS_MAP V$USER_ASTATUS_MAP
DV_USER_PARAMETERS V$USER_PARAMETER
DV_VERSION V$VERSION
DV_VM_FUNC_STACK V$VM_FUNC_STACK
DV_WAIT_STATS V$WAITSTAT
DV_XACT_LOCKS V$XACT_LOCK
JOB_THREADS JOB_QUEUE_PROCESSES
COMMIT_MODE COMMIT_LOGGING
COMMIT_WAIT_LOGGING COMMIT_WAIT
PAGE_CHECKSUM DB_BLOCK_CHECKSUM
ARCHIVE_CONFIG LOG_ARCHIVE_CONFIG
ARCHIVE_DEST_N LOG_ARCHIVE_DEST_n
ARCHIVE_DEST_STATE_N LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_FORMAT LOG_ARCHIVE_FORMAT
ARCHIVE_MAX_THREADS LOG_ARCHIVE_MAX_PROCESSES
ARCHIVE_MIN_SUCCEED_DEST LOG_ARCHIVE_MIN_SUCCEED_DEST
ARCHIVE_TRACE LOG_ARCHIVE_TRACE
CHECKPOINT_PERIOD CHECKPOINT_TIMEOUT
CHECKPOINT_PAGES CHECKPOINT_INTERVAL
TIMED_STATS TIMED_STATISTICS
STATS_LEVEL STATISTICS_LEVEL
FILE_OPTIONS FILESYSTEMIO_OPTIONS
7 Glossary
Term Description
A–E
ACID Atomicity, Consistency, Isolation, and Durability (ACID). These are a set of
features of database transactions in a DBMS.
archive thread A thread started when the archive function is enabled on a database. The
thread is used to archive database logs to a specified path.
atomicity One of the ACID features of database transactions. Atomicity means that a
transaction is composed of an indivisible unit of work. All operations
performed in a transaction must either be committed or uncommitted. If an
error occurs during transaction execution, the transaction will be rolled
back to the state when it was not committed.
backup A backup, or the process of backing up, refers to the copying and archiving
of computer data. Backup data can be used for restoration in case of data
loss.
checkpoint A mechanism that stores data in the database memory to disks at a certain
time. GaussDB 100 periodically stores the data of committed transactions
and data of uncommitted transactions to disks. The data and redo logs can
be used for database restoration if a database restarts or breaks down.
CLI Command-line interface (CLI). Users use the CLI to interact with
applications. Its input and output are based on texts. Commands are entered
through keyboards or similar devices and are compiled and executed by
applications. The results are displayed in text or graphic forms on the
terminal interface.
coding Coding is representing data and information using code so that it can be
processed and analyzed by a computer. Characters, digits, and other objects
can be converted into digital code, or information and data can be converted
into the required electrical pulse signals based on predefined rules.
concurrency control A DBMS service that ensures data integrity when multiple transactions
are concurrently executed in a multi-user environment. In a multi-threaded
GaussDB 100 environment, concurrency control ensures that database
operations are safe and all database transactions remain consistent at any
given time.
core dump When a program stops abnormally, a core dump, memory dump, or system
dump records the state of the program's working memory at that point in
time. The states of key program structures are often dumped at the same
time, for example, the processor registers (including the program counter
and stack pointer), memory management information, and other processor
and OS flags. A core dump is often used to assist diagnosis and computer
program debugging.
core file A file that is created when memory overwriting, assertion failures, or access
to invalid memory occurs in a process, causing it to fail. This file is then
used for further analysis.
A core file stores memory dump data, and supports binary mode and
specified ports. The name of a core file consists of the word "core" and the
OS process ID.
The core file is available regardless of the type of platform.
data flow operator An operator that exchanges data among query fragments. By their
input/output relationships, data flows can be categorized into Gather flows,
Broadcast flows, and Redistribution flows. Gather combines multiple query
fragments of data into one. Broadcast forwards the data of one query
fragment to multiple query fragments. Redistribution reorganizes the data
of multiple query fragments and then redistributes the reorganized data to
multiple query fragments.
database A collection of data that is stored together and can be accessed, managed,
and updated. Data in a database can be classified into the following types:
numeric, full text, digit, and image.
database file A binary file that stores user data and the internal data of a database system.
database HA GaussDB 100 provides a highly reliable HA solution. Every logical node in
GaussDB 100 is identified as a primary or standby node. At the same time,
only one GaussDB 100 node is identified as the primary server. In
GaussDB 100, standby nodes first perform full synchronization from the
primary node and later incremental synchronization. When the HA system
is running, the primary node can receive data read and write requests in
GaussDB 100.
DBLINK An object of the path from one database to another. A remote database
object can be queried with DBLINK.
dirty page A page that has been modified and is not written to a permanent device.
dump file A specific type of trace file. A dump file contains diagnostic data during an
event response, whereas a trace file contains continuously generated
diagnostic data.
durability One of the ACID features of database transactions. Transactions that have
been committed will permanently survive and not be rolled back.
error A technique that automatically detects and corrects errors in software and
correction data streams to improve system stability and reliability.
F–J
failover Automatic switchover from a faulty node to its standby node. Conversely,
automatic switchback from the standby node to the primary node is called
failback.
free space management A mechanism for managing free space in a table. This mechanism
enables a database system to record free space in each table and establish
an easy-to-find data structure, accelerating operations (such as INSERT)
performed on the free space.
GNU The GNU Project was publicly announced on September 27, 1983 by
Richard Stallman, aiming at building an OS composed wholly of free
software. GNU is a recursive acronym for "GNU's Not Unix!". Stallman
announced that GNU should be pronounced as Guh-NOO. Technically,
GNU is similar to Unix in design, a widely used commercial OS. However,
GNU is free software and contains no Unix code.
GTS Global Time Server (GTS). It is used to provide a logical clock for each
node in the case of strong consistency.
incremental backup Incremental backup stores all file changes since the last valid backup.
index An ordered data structure in a DBMS. An index accelerates data query and
update in database tables.
isolation One of the ACID features of database transactions. Isolation means that the
operations inside a transaction and data used are isolated from other
concurrent transactions. Concurrent transactions do not disturb each other.
JDBC Java database connectivity (JDBC) is used to implement the Java APIs of
SQL statements. It provides unified access to multiple relational databases,
consisting of a set of classes and interfaces written in Java language.
junk tuple A tuple that is deleted using the DELETE and UPDATE statements. When
deleting a tuple, GaussDB 100 only marks the tuples that are to be cleared.
The VACUUM thread will then periodically clear these junk tuples.
K–O
log file A file to which a computer system writes a record of its activities.
metadata Data that provides information about other data. Metadata describes the
source, size, format, or other characteristics of data. In database columns,
metadata explains the content of a data warehouse.
P–T
page Smallest memory unit for row storage in the relational object structure in
GaussDB 100. The default size of a page is 8 KB.
primary server A node that receives data read and write requests in the GaussDB 100 HA
system and works with all standby servers. At any time, only one node in
the HA system is identified as the primary server.
QPS Query Per Second (QPS) means the number of queries that a server can
respond to per second.
query fragment Each query job can be split into one or more query fragments. Each query
fragment consists of one or more query operators and can independently
run on a node. Query fragments exchange data through data flow operators.
query operator An iterator or a query tree node, which is a basic unit for the execution of a
query. Execution of a query can be split into one or more query operators.
Common query operators include scan, join, and aggregation.
redo log A log that contains information required for performing an operation again
in a database. If a database is faulty, redo logs can be used to restore the
database to its original state.
relational database A database created using the relational model. It processes data using
methods of set algebra.
RPO Recovery point objective (RPO) refers to the latest status that a database
system and the data can be restored to after a disaster, and it is usually
represented by time.
RTO Recovery time objective (RTO) refers to the duration between the database
system failure caused by a disaster and its restoration to proper running.
schema A database object set that includes the logical structure, such as tables,
views, sequences, stored procedures, synonyms, clusters, and database
links.
shared pool A shared pool is created for repeatedly executed SQL statements to save
memory. It contains the explain trees and execution plans of given SQL
statements.
SSL Secure Sockets Layer (SSL) is a network security protocol first used by
Netscape. It is based on the TCP/IP protocol and uses public key
technology. SSL supports a wide range of networks and provides three
basic security services, all of which use the public key technology. SSL
ensures the security of service communication through a network by
establishing a secure connection between a client and a server and then
sending data through this connection.
stop word In computing, stop words are words which are filtered out before or after
processing of natural language data (text), saving storage space and
improving search efficiency.
stored procedure A group of SQL statements compiled into a single execution plan and
stored in a large database system. Users can specify a name and parameters
(if any) for a stored procedure to execute the procedure.
system catalog A table storing meta information about a database. The meta information
includes user tables, indexes, columns, functions, and data types in a
database.
table A set of columns and rows. Each column is referred to as a field. Values in
each field represent a data type. For example, if a table contains three fields
of person names, cities, and states, it has three columns: Name, City, and
State. In every row in the table, the Name column contains a name, the City
column contains a city, and the State column contains a state.
tablespace A tablespace is a logical storage structure that contains tables, indexes, and
objects. A tablespace provides an abstract layer between physical data and
logical data, and provides storage space for all database objects. When you
create an object, you can specify which tablespace it belongs to.
thesaurus Standardized words or phrases that express document themes and are used
for indexing and retrieval.
U–Z
Xlog A transaction log. A logical node can have only one Xlog file.
zsql GaussDB 100 interactive terminal. zsql enables you to interactively enter
queries, issue them to GaussDB 100, and view the query results. Queries
can also be entered from files. zsql supports many meta commands and
shell-like commands, allowing you to conveniently compile scripts and
automate jobs.