Hdb
User's Guide
Copyright 2019 General Electric Company and/or its affiliates (“GE”). All Rights Reserved. This document is the
confidential and proprietary information of GE and may not be reproduced, transmitted, stored, or copied in
whole or in part, or used to furnish information to others, without the prior written permission of GE.
Contents
About This Document
Purpose of This Document
Who Should Use This Document
Structure of This Document
For More Information
Conventions
Change Summary
2. Environment
2.1 Platform Dependencies
2.2 Variable Descriptions
3. hdbcloner
3.1 Creating Clones
3.2 Loading Schema
3.3 Removing Schema
3.4 Removing Clones
3.5 Replacing Clones
3.6 Segment Page Alignment
3.7 Replication
3.8 Database Schema Version Titles
3.9 Importing Clones from Another HABITAT Group
3.10 Moving Clones Within a HABITAT Group
3.11 Converting a Clone to an Archive File
6. hdbdump
6.1 hdbdump Uses
6.2 Source Examples
6.3 Action Examples
6.4 Naming Database Objects Examples
7. hdbexport
7.1 Reasons to Export Data from a Database
7.2 Functional Overview
7.3 Operational Modes
7.3.1 Default Mode
7.3.2 Record Mode
7.3.3 Field Mode
7.3.4 Pattern Mode
7.4 Data Output Format
7.4.1 Record Format
7.4.1.1 Exporting Special Fields in a Record
7.4.2 Field Element Format
7.4.3 Data Types of Exported Values
7.5 Example Uses
7.5.1 Export/Import in Default Mode
7.5.2 Export/Import in Field Mode
7.5.2.1 Exporting the Whole Database in Field Format
7.5.2.2 Selecting Particular Fields for Export
7.5.2.3 Editing Data in a File and Importing Changes Back Into the Database
7.5.3 Export, Fix, and Import a Parent Pointer Field
7.5.4 Export/Import a Subtree
7.5.4.1 Exporting a Subtree
7.5.4.1.1 Example #1: Extract subtree A(1)
7.5.4.1.2 Example #2: Extract subtree B(3)
7.5.4.2 Importing a Subtree
7.5.4.2.1 Example #1: Inserting a default subtree
7.5.4.2.2 Example #2: Inserting a subtree using the key field
7.5.5 Export a Field for Use in Excel
7.5.5.1 Example #1: Default
7.5.5.2 Example #2: No record name
7.5.5.3 Example #3: No record name and no field names
7.5.5.4 Example #4: No record name, no field names, and no subscripts
7.5.6 Export a Range of Data
7.5.6.1 Example #1: Extract entry 2-to-3 from record A
7.5.6.2 Example #2: Extract entry start-to-3 from record A
7.5.6.3 Example #3: Extract entry 3-to-end from record A
8. hdbformat
8.1 Subschema Files
8.1.1 Fortran 90 Subschema Files
8.1.1.1 hdbdb_dbname.f90
8.1.1.2 dp_dbname.inc
8.1.1.3 dp_dbname_pname.inc
8.1.1.4 dx_dbname.inc
8.1.1.5 db_dbname.inc
8.1.2 C/C++ Subschema Files
8.2 Subschema File Management
8.2.1 Recommended Practices
8.3 Examples
8.3.1 Example #1: Format from a single DBDEF source with defaults
8.3.2 Example #2: Format from two DBDEF sources
8.3.3 Example #3: Format multiple files using wildcards
8.3.4 Example #4: Format for C language only
8.3.5 Example #5: Format C and Fortran 90 using F77 fixed source statement format
9. hdbimport
9.1 Reasons to Import Data Into a Database
9.2 Creating the Input Data File
9.2.1 Sample of an Input File Fragment
9.3 Hdbimport Concepts
9.3.1 Field Lines and Record Lines
9.3.2 Methods of Record Insertion and Update
9.3.3 Declaratives
9.3.4 Multiple Input Files
9.3.5 Other Command-Line Options
9.4 Input File Format
9.4.1 Line Continuation Character
9.4.2 Comment Lines
9.4.3 Record Lines
9.4.3.1 Record Line Syntax
9.4.3.2 Using an Alternative Record Line Format
9.4.4 Field Line
9.5 Specification of Field Values
9.5.1 Character Data
9.5.2 Numeric Data
9.5.3 Keyword Data
9.5.4 Boolean Data
9.5.5 Calendar Date and Calendar Timedate
Figures
Figure 1. Hdb Overview Diagram
Figure 2. Sample of an Input File Fragment
Figure 3. Example of a Declarative
Tables
Table 1. Clone Specification
Table 2. Conversion Rules
Table 3. Operational Modes
Command Prompts
Linux (%): All commands preceded by a percent sign prompt (%) are issued from a Linux terminal window. Note that all Linux commands are case sensitive.
Windows (>): All commands preceded by a greater-than sign prompt (>) are issued from a Windows command line window.
All Operating Systems: The absence of any prompt character before a command indicates that the command is valid on all operating systems.
Command Strings
Linux, italics: Text in italics indicates information you must supply. (*)
Linux, [ ]: Text enclosed in square brackets "[ ]" indicates optional qualifiers, arguments, or data. (*)
All Operating Systems, Select: When used in command strings, the term "Select" means placing the pointer over the specified item and pressing the left (default) mouse button.
(*) Note: All Linux commands are case sensitive and must be typed exactly as shown.
Change Summary
The following changes have been made to this document for Habitat 5.11:
• Added the “-d” option to the hdbrio where command to limit the number of subtree
levels shown.
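For example, within an hdbrio session (a hedged sketch; it assumes the -d option takes the number of subtree levels to display):
rio> where -d 2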
Figure 1 shows the activity flow and procedures that the application developer and system
administrator follow when using Hdb. Above the dashed line are developer activities; below
the dashed line are developer and system administrator activities.
1. In the Reference Guide chapter, see hdbcloner -c create_clone with the -replace option.
2. As explained in the Hdb Programmer's Guide, the size-dependent subschema files are the Fortran 90 partition INCLUDE files. The Fortran 90 main database INCLUDE file is not size-dependent, and the C header files are not size-dependent.
3. To fully specify a database among multiple e-terrahabitat installations, the HABITAT group of the specific installation will be taken into account in identifying a database. The HABITAT group is set during the e-terrahabitat installation and is identified by the HABITAT_GROUP environment variable.
HABITAT_DEFAULT_VERSION [optional]
This variable specifies the version title string of the database schema to use when
creating a clone. See the -version option of hdbcloner -c create_clone for more information.
HABITAT_SERVER_HANGUP [optional]
This variable is specific to the UNIX operating system. The valid values are Y to abort
hdbserver if a SIGHUP is received, or N to ignore the SIGHUP signal. The default is N. (For
more information, see the hdbserver pages in chapter 13 Hdb Utilities Reference.)
HABITAT_DISABLE_BACKUP [optional]
To disable replication of data to the standby on a dual-redundant configuration, the
valid values are Y to disable backup (i.e., replication) and N to enable backup. If not set,
the default is N.
HABITAT_DISABLE_CHECKPOINT [optional]
Disabling database checkpointing may improve system performance since less disk I/O
will be performed, but it also increases the risk of data loss if a system crashes before
the data is written back to a disk file. The valid values are Y to disable checkpointing
and N to enable checkpointing. If not set, the default is N.
HABITAT_DISABLE_DBLOCKS [optional]
This variable controls whether database locks are disabled. When locks are disabled, the lock API calls do nothing and the Hdb read/write functions do not lock the partitions being accessed. The valid values are Y to disable database locks or N to enable database locks. If not set, the default is N.
HABITAT_FULL_FIELD_REPLICATE [optional]
This variable controls whether the full extent of a field is replicated or just the changed range. The valid values are Y to replicate the full extent or N to replicate only the changed range. If not set, the default is Y.
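For example, to disable database checkpointing from a bash shell before starting the system (a minimal sketch; sites typically set these variables in their own startup scripts):
% export HABITAT_DISABLE_CHECKPOINT=Y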
3.7 Replication
Replication is the act of copying changed data from one computer, acting as the enabled
(or primary) system, to a second computer, acting as the standby (or secondary) system.
Enabled-to-standby data replication is the method of providing a dual-redundant database
in a high-availability configuration. Replication is a feature subject to the configuration of
the system and the software; it is not available on every Habitat installation.
Only designated partitions are replicated when data changes. Some circumstances call for
the entire database of a replicated clone to change and, in this case, all partitions are
copied from the enabled system to the standby system.
Therefore, for replication to be enabled, three separate steps are required:
1. Mark the database partitions to be replicated in the DBDEF source definition file.
2. Mark the clone(s) to be replicated with the replication attribute.
3. Configure replication in the system by having the backup/MRS system enabled and
operational.
To mark a clone to be replicated, the -replicate parameter must be specified when the
clone is created. This replicate attribute on the clone is then a permanent part of the clone;
it remains with the clone throughout all replaces and all renames. The only way to switch
off this replication attribute is to replace the clone and specify the -noreplicate parameter.
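For example (a sketch using placeholder application and family names):
% hdbcloner -c create_clone -a scadamom -f ems -replicate
% hdbcloner -c create_clone -a scadamom -f ems -replace -noreplicate
The first command creates a clone with the replication attribute set; the second replaces the clone and switches the attribute off.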
Note: When you do this, use a compression utility to compress the clone file, prior to
sending it in along with the SPR.
The second reason to use this method is if you need to copy a clone file to another HABITAT group. This is the method for getting a copy of the clone file. Taking the clone offline is very important; without this step, the data in the file may be out of date or corrupted.
It could also be argued that this is a faster way to create an archive file from a clone if it is a
very large clone.
Note: The hdbcopydata utility cannot be used to copy individual records. To copy
individual records, use hdbrio, hdbexport, or hdbimport.
Example:
To copy data from the SCADAMOM database in the “SCADA DTS” clone to the “SCADA EMS”
clone, enter the command:
% hdbcopydata -s scadamom.scada.dts -d scada.ems
4.3 Stamps
This section briefly describes stamps that are involved in copies, as well as how stamps are
handled during copy operations.
Note: A selcop copy operation modifies the source database, and it may modify the
destination database as well. In general, both source and destination databases are
modified.
Selcop copies fields marked as selcop fields from the destination to the source database.
Fields are marked as selcop fields by the database schema definition file (DBDEF) or the
selcop configuration file. Selcop fields are copied on an element-by-element basis by matching record occurrences between the source and destination databases. Record occurrences are matched either by key value or by subscript position. For example, a selcop field element located at subscript 324 in the source can be copied to subscript 325 in the destination, because insertions or deletions may have shifted the subscript positions between the two databases.
When copying from the destination to the source, the data being copied varies based on
the following logic:
1. Any record type without a KEYFIELD has its SELCOP fields copied by subscript position. If
the source data has more records than the target (i.e., the LV value is larger), the excess
records have their SELCOP field retain the original source value, since there is nothing in
the target to selectively copy from. If the target data has more records than the source,
the excess records are simply removed from the target.
2. Any record type that can be matched up via a KEYFIELD or a set of KEYFIELDs
(hierarchical composite key) has its SELCOP fields selectively copied from the matched
record. Any record in the target with a KEYFIELD that is not matched up with a record in
the source is erased by the copy. Any record in the source with a KEYFIELD that is not
matched up with a record in the target is copied over with the original values in the
SELCOP fields.
4. The picture shows the contents of the databases logically. Physically, the records could be ordered differently.
• Set the Selective Copy flag when configuring the Copy Data records for the Database
Update Sequence (DBSEQ) application. With DBSEQ you can configure a clone-to-clone
or savecase-to-clone database copy, with the option to enable or disable Selcop.
The following example performs a savecase-to-clone selective copy operation in DBSEQ,
copying the netmodel.ade savecase into the rtca.ems clone with the Selective Copy flag
set. For more information about DBSEQ, refer to the Database Update Sequence User’s
Guide.
4.12 Examples
This section provides hdbcopydata examples. The examples use the SCADA clone schema,
which has the following savecase types defined: SCADA, DTS, SOELOGS, COMMLOG,
ACCUHIS, and TAGGING.
4.12.3 Example #3: Retrieving a Savecase File using the File Path
Name
Enter the commands:
% cd $HABITAT_CASES
% hdbcopydata -d scada.ems -sf case_scada_dts.todays_data
Or:
% hdbcopydata -d scada.ems -case $HABITAT_CASES/dts.todays_data
This example performs the very same case retrieval operation as the previous examples;
however, the savecase file is explicitly identified. To use this method, you must know the full
path location of the savecase file (as shown above) and the syntax of the savecase file
name.
Note: The copy operation described here is special. It can easily garble the resulting target
database if you are not familiar with the schema details of each database.
5.2 Examples
The commands below take a source DBDEF file (permit.dbdef) from the current directory
and generate an ASCII file called “permit.dbdoc”, which contains the structure and the
documentation of the DBDEF file.
% hdbdocument -dbdef permit.dbdef
% hdbdocument -dbdef permit
The following command takes the input file permit.dbdef, formats it, and then generates an
output file called sample.dbdoc.
% hdbdocument -dbdef -output=sample permit.dbdef
Note: 2-D and 3-D fields in a record can never be displayed in the record format. These
fields are always displayed in the field element format.
7.5.2.3 Editing Data in a File and Importing Changes Back Into the Database
After the fields have been exported into an output file, the data can be edited to change
field values.
Use the following command to import the changes back into the database:
hdbimport -a hdbapp -f hdbfam -d hdbtest -s data.txt
Note: Because the file only contains field-format lines, it can only update existing fields in
the database (for more information about field line versus record line import, see chapter
9 hdbimport).
8.1.1.1 hdbdb_dbname.f90
This file is a Fortran 90 source file that must be compiled and linked to an application
program. The file defines a Fortran 90 module required by the Hdb API if the API is using
Fortran 90 to access the database. The module object produced during compilation of this
file must be made available to the application program.
8.1.1.2 dp_dbname.inc
This file defines the master INCLUDE schema file that is required by the Hdb API. This file
references the database module. It must be placed within the source program, in a position
that precedes any partition INCLUDE files.
8.1.1.3 dp_dbname_pname.inc
This file defines the partition structures used for the Hdb API partition I/O. This INCLUDE file
must follow any module referencing files (e.g., dp_dbname.inc). This INCLUDE file is not used
for Hdb API record I/O or Hdb API field I/O (unless a partition field is specified on the read or
write method). There is one of these INCLUDE files for each partition defined by the
database. Using one of these files makes the application dependent on the dimensions of
the database, requiring a recompile when the dimensions change.
8.1.1.4 dx_dbname.inc
This INCLUDE file defines the MX values for each database record type. This file is
compatible with older versions of the system. The use of this file is not recommended,
because dimension-independent code cannot be produced with this file.
8.1.1.5 db_dbname.inc
This INCLUDE file is used to reference all partition and MX value INCLUDE files. It does not
reference the master INCLUDE file (dp_dbname.inc). This file is provided for convenience
only.
8.3 Examples
The following topics show five hdbformat examples:
• Example #1: Format from a single DBDEF source with defaults
• Example #2: Format from two DBDEF sources
• Example #3: Format multiple files using wildcards
• Example #4: Format for C language only
• Example #5: Format C and Fortran 90 using F77 fixed source statement format
8.3.5 Example #5: Format C and Fortran 90 using F77 fixed source
statement format
hdbformat -l c f90 f77 -n scadamom.esca_ems -d /var/habitat/includes
This example assigns C and Fortran 90 as the target languages, but the command also
includes the F77 language to instruct hdbformat to create fixed-column (Fortran 77)
source statements.
Note: Extreme care must be exercised when fixing database pointers, to avoid corrupting
the integrity of the database.
# Turns on verbose
#VERBOSE
#
# A is the name of the record type. It has these fields:
# BB_A a boolean field
# CC_A a character field
# ID_A a character field
# FF_A a floating point field
# OID_A an internal object id field
# NN_A an integer field
#
# The next 2 record lines are in the default record line format.
#
A,1,ID='A0000001',OID=0,FF=10000000.1,BB=F,CC='A SAMPLE DATA: 100',NN=
1000
A,2,ID='A0000002',OID=0,FF=20000000.2,BB=T,CC='A SAMPLE DATA: 200',NN=
2000
#
# Use declarative to change order of the fields and
# to add new BOOLEAN keywords (OPEN/CLOSE)
#
#boolean OPEN/CLOSE
#record A,%SUBSCRIPT,BB_A,CC_A,FF_A,ID_A,NN_A,OID_A
#
A,3,BB=CLOSE,CC='A SAMPLE DATA: 300',FF=30000000.3,\
ID='A0000003',NN= 3000,OID=0
A,4,BB=OPEN, CC='A SAMPLE DATA: 400',FF=40000000.4,\
ID='A0000004',NN= 4000,OID=0
#
# Turn off verbose
#noverbose
:
:
#
# Examples of field lines for a 2 D field.
#
N_X_X(1,4)=104
N_X_X(1,5)=105
N_X_X(2,1)=201
N_X_X(2,2)=202
N_X_X(2,3)=203
9.3.3 Declaratives
Declaratives are lines in the input data file that instruct hdbimport how to perform
subsequent operations. A declarative line begins with the pound sign (#) and is immediately
followed by the declarative keyword. No white space is allowed between the pound sign (#)
and the first character of the keyword. For example, #atend, #atstart, #keys, #update,
and #insert are declaratives.
The following is an example of using a declarative to switch between update and insert
mode and then back to update mode:
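The fragment below is a minimal sketch using the sample record type A shown earlier in this chapter (the values are placeholders):
# Start in update mode
#update
A,1,NN=1111
# Switch to insert mode to add a new record
#insert
A,ID='A0000005',NN=5000
# Switch back to update mode
#update
A,2,NN=2222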
For more information about how to use declaratives, refer to section 9.6 Declaratives.
The value in a field line can be character data, integer data, floating-point data, keywords,
boolean data, or calendar data. For more information about how to specify each of these
data types, see section 9.5 Specification of Field Values.
'astring' The character string “astring” is specified using the apostrophe character
as the quote character.
"astring" The same string as above is quoted using the double-quote character. This
string is considered identical to the one above.
"ast'ring" A character string with an embedded apostrophe is quoted using the double-quote
character, so the embedded apostrophe requires no escaping.
'ast''ring' The same embedded-quote example as above, but this time the character
string is quoted by the apostrophe character; the embedded apostrophe is
escaped by writing two apostrophes ('') in juxtaposition.
Note: Although sometimes difficult to distinguish, the embedded characters
in this example are two apostrophes, not a single double quote.
+ddd.dddE+dd Specifies floating-point data with exponent field and optional + or - sign for both the
coefficient and the exponent. The double-precision format using the D character is
also supported.
T/F The keyword T is accepted for the true state, and the keyword F is accepted for the false
state.
Y/N The keyword Y is accepted for the true state, and the keyword N is accepted for the false
state.
9.6 Declaratives
Semantic declaratives are statement lines in the input data file that instruct hdbimport how
to perform subsequent operations. By using declaratives, it is possible to gain more control
of the import operation, such as mixing insert and update modes along with keys and
subscripting in the input data.
Note: When inserting with keys, the atstart and atend rules have a slightly different
interpretation. With subscript positioning, the atstart and atend rules apply only to the first
hierarchical record encountered; with key positioning, they apply whenever a parent record
is located by key value. Therefore, the first-visit rule is reset whenever a parent record is
located by key.
Note: If the #update declarative is not in the input file and the -update option is not used
on the command line, then hdbimport reverts to its default insert mode. Therefore, the five
record lines shown above are appended to the end of the CON record type.
# ---------------------------------------------------------
# Step 3: Setup the record template for the data records
# that we want to insert.
#
# For the LIMIT record, we like to set its ID and DBAND
# fields.
#record LIMIT,ID,DBAND
#
# For the CTRL record, we like to set its ID, WAIT and KEY
# fields.
#record CTRL,ID,WAIT,KEY
# ---------------------------------------------------------
# Step 4: Use the '#update' declarative to switch to
# update mode so the ANALOG record line becomes
# a line to position to a record occurrence.
# The composite search key for the analog record
# is:
# ID_SCADEK='SUBSTN'
# ID_SUBSTN='SUB_4X'
# ID_DEVTYP='XFMR'
# ID_DEVICE='2L'
# ID_ANALOG='LTC'
#update
ANALOG,'LTC','2L','XFMR','SUB_4X','SUBSTN'
#
# Step 4.1: Change to INSERT mode and insert the LIMIT
# record under the ANALOG located with the
# analog composite key.
# Here, the new LIMIT record's ID field is
# set to 'TAP' and its 'DBAND' is set to 123.
# The rest of the fields for the record are
# set to their defaults.
#insert
LIMIT,'TAP',123
# ---------------------------------------------------------
# Step 5: Now, we change back to UPDATE mode and locate
# another record occurrence but for the POINT
# record type.
#update
POINT,'SUB_2X','CB','400401','BKR'
#
# Step 5.1: Change to INSERT mode and insert the CTRL
# records under the POINT above. Two CTRL
# records are added, one with ID=TRIP and
# the other ID is CLOSE.
#insert
CTRL,'TRIP',22,44
CTRL,'CLOSE',11,33
Use the command line below to import the data into a SCADAMOM database:
% context scada test
% hdbimport -d scadamom -s input.txt
# ---------------------------------------------------------
# Step 2: Setup the record templates for the composite
# keys we want to use to locate the CTRL and
# LIMIT records we like to update.
#
# The CTRL record's composite key is:
#record CTRL,WAIT,KEY,ID_SUBSTN,ID_DEVTYP,ID_DEVICE,ID_POINT,ID
#
# The LIMIT record's composite key is:
#record LIMIT,DBAND \
,ID,ID_ANALOG,ID_DEVICE,ID_DEVTYP,ID_SUBSTN,ID_SCADEK
# ---------------------------------------------------------
# Step 3: Change to update mode. Don't forget to do this,
# otherwise records are inserted rather than
# updated.
#update
# ---------------------------------------------------------
# Step 4: We will update the records we've inserted in
# the previous example.
#
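# (Illustrative record lines; a sketch assuming the key values
# and field values carried over from the previous example.)
#
# Update the CTRL record inserted under the POINT:
CTRL,23,45,'SUB_2X','CB','400401','BKR','TRIP'
#
# Update the LIMIT record inserted under the ANALOG:
LIMIT,456,'TAP','LTC','2L','XFMR','SUB_4X','SUBSTN'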
Note: The format of the hdbrio chapter is somewhat different from previous chapters or
those that follow. This is because of the unique nature of hdbrio, which is a database
query language.
Note: When accessing a clone database, make sure hdbserver is running. If hdbserver is
not running, hdbrio will start but it cannot actually access the data in that database.
Note: Because the script file does its own hdbrio access, the command-line options do not
affect the script. If the command-line options provide invalid data (e.g., an invalid Hdb
database name), hdbrio outputs an error message but continues to execute the script.
[SCADAMOM.SCADA.DTS]
rio>
This output shows that two SCADAMOM databases are opened. The first database is a
database in the archive file scada.arc. The second database is from the clone with
application SCADA and family DTS.
[SCADAMOM.SCADA.EMS]
rio>
If DBOPEN is successful, the newly opened database becomes the active database and the
hdbrio prompt shows this. All commands are directed to this newly opened active database
until the user switches to another database or exits hdbrio.
To open a database in an archive file clone, use the following syntax:
dbopen -r <archive file> -d <database>
Below is an example that opens the SCADAMOM database in the archive file “scada.arc”:
[PROCMAN.PROCMAN.HABITAT]
rio> dbopen -r scada.arc -d scadamom
[PROCMAN.PROCMAN.HABITAT]
rio> dbset 2
[SCADAMOM.SCADA.DTS]
rio>
10.9 Navigation
The hdbrio commands “up” and “down” can be used to navigate within a record subtree
after the hdbrio position has been established. When navigating within a subtree, the rio>
prompt reflects the following:
• The root record of the subtree to which the user is positioned.
• The current record within the subtree where hdbrio is currently positioned.
The following example illustrates this point:
% hdbrio -a scada -f ems scadamom
rio> pos substn=1
SUBSTN(1)> down 2
SUBSTN(1). . .DEVICE(1)> where
SCADEK(8) = SUBSTN
SUBSTN(1) = SUB_1X
DEVTYP(1) = RTU
DEVICE(1) = STA_REM_LOC
SUBSTN(1). . .DEVICE(1)>
Note: You can only navigate down to the last record position.
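The up command moves back toward the subtree root. For example (a hedged continuation of the session above; the resulting prompt assumes DEVTYP is the parent of DEVICE):
SUBSTN(1). . .DEVICE(1)> up
SUBSTN(1). . .DEVTYP(1)>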
SUBSTN = 2
DEVTYP = 15
DEVICE = 31
MEAS = 33
POINT = 51
CTRL = 36
ANALOG = 57
LIMIT = 17
COUNT = 2
ALGREF = 1
SETPNT = 1
[SCADAMOM.SCADA.EMS]
rio> p substn
[SCADAMOM.SCADA.EMS]
SUBSTN(1)> dbpaste -s 2
[SCADAMOM.SCADA.EMS]
SUBSTN(1)>
Note: In the above example, one (1) is a primary record subscript, two (2) is a secondary
record subscript, and three (3) is a tertiary record subscript.
[NIOSRVV.NIOSERVE.DTS]
PATH(2)> /napname // display the NAPNAME field
Char*12 NAPNAME_PATH(2) = NIO00001454
[NIOSRVV.NIOSERVE.DTS]
PATH(2)> /iphost=pc7890 // edit the IPHOST field
[NIOSRVV.NIOSERVE.DTS]
PATH(2)> /iphost // display the change
Char*64 IPHOST_PATH(2) = PC7890
Note that bolded characters are user input.
• Field Name: The field name as defined in the database schema (e.g., BMAGSAT_XF).
• Composite Identifier: This is the composite ID of the record. The composite ID includes
the key fields for all of the parents to the target record plus the target record itself. By
default, the components in the composite key fields are separated by a forward slash.
For an ITEMS field, this column is blank.
Note: The delimiter can be configured to something other than a forward slash
through the -delimiter command-line option.
• Value 1: This is the value as it occurred in the first database. It will be blank if the entry
pertains to an insert.
• Value 2: This is the value as it occurred in the second database. It will be blank if the
entry pertains to a delete.
• Field Description: Descriptive text if available from the optional field description
configuration file — e.g., Slope of Magnetization Curve (%). If the -description_file
option is not used or if the field is not defined in the description file, then this field is
blank.
At the end of the CSV file are comment lines summarizing the number of difference entries
that are found in the two databases.
M,OBJSSCR_ZOUT,1451592402041,53418,53567,
M,OBJSSCR_ZOUT,1652310304028,53249,53398,
M,OBJSSCR_ZOUT,11573901050216,50407,50545,
M,OBJSSCR_ZOUT,16272708080342,52972,53119,
M,OBJSSCR_ZOUT,1613363105028,53968,54119,
I,ID_ZOUT,11293525050421,,,
M,OBJSSCR_ZOUT,1652310304026,53949,54098,
M,OBJSSCR_ZOUT,16523103040210,54010,54161,
#
# Records inserted = 262
# Records deleted = 0
# Fields modified = 3229
#
Note: Renaming is required even if you do not change the clone’s family name.
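A sketch of the rename command (assuming the clone file was previously named clone_netmodel_dts.car in the HABITAT_CLONES directory):
% hdbcloner -c rename_clone -car $HABITAT_CLONES/clone_netmodel_dts.car -f ems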
The command shows how the cloner's rename_clone command renames this clone for
the EMS family; the -car parameter names the clone file, and a full path must be specified. In
this example, assume that the local directory location is HABITAT_CLONES. Also, the -f
parameter is included to specify a new family name; by specifying a new family name,
the clone's filename is changed automatically.
6. Add the clone file to the group’s clone directory.
% hdbdirectory -add clone_netmodel_ems.car
After renaming the file, it needs to be made known to the HABITAT group. This is
accomplished using the hdbdirectory command. The hdbdirectory command is used to
add a clone file to the group’s clone directory.
Remember, the clone file is now in the EMS family, so you need to use the file name as
shown above. Again, this is a full path filename, so the example assumes that you are
currently located in the HABITAT_CLONES directory.
7. Place the clone online.
% hdbcloner -c online_clone -a netmodel -f ems
Finally, the clone is now placed online using the hdbcloner online_clone command. At
this point, the clone is valid and ready to be used by applications in the new HABITAT
group.
Important: The full file path must be specified for the location of the clone files.
Note: For definitions of Hdb database terms used throughout this document, refer to the
Habitat Glossary of Terms.
In this chapter, Hdb utilities are presented in a reference format. The following utilities are
described:
• hdbcloner
• hdbcompare
• hdbcopydata
• hdbdirectory
• hdbdocument
• hdbdump
• hdbexport
• hdbformat
• hdbimport
• hdbrio
• hdbserver
The information is designed for easy look-up. Each section uses the following format:
• Utility name
• Abstract
• Syntax
• Parameter description
Syntax
hdbcloner -c create_clone -a <appname> -f <famname> [parameters]
Parameters
-a <appname> [required]
This parameter identifies the name of the application's clone schema description file in the
dictionary. Wildcards are supported if the -replace parameter is used with this parameter. Application
names are not case-sensitive.
-f <famname> [required]
This parameter specifies the family name of the clone. Assigned family names must be unique to a
given application. Wildcards (*) are allowed only if used with the -replace parameter.
System naming conventions and policies may impose limitations on Hdb naming. Check with your
system administrator before choosing names.
Family naming rules must adhere to the following conventions:
• The name must begin with an alpha-character (A–Z).
• The name can only contain alpha-numeric characters (A–Z, 0–9).
• Special characters are not allowed in the name.
• A maximum of 12 characters is allowed; however, some Habitat and Platform applications
impose an 8-character limitation.
• Family names are not case-sensitive.
-replace [optional]
This parameter is used when a clone is to be replaced. This parameter must be used with the -a and
-f parameters to identify the application and family name of the clone that is to be replaced.
Wildcards (*) are allowed. For example, the following command replaces all clones in the EMS family:
hdbcloner -c create_clone -a * -f ems -replace
The following example modifies a clone attribute by turning off replication of the clones in the DTS
family:
hdbcloner -c create_clone -a * -f dts -replace -noreplicate
Schema comes from the dictionary, not the replaced clone. Clone content is updated with new
schema definitions if the schema has changed since the clone was originally created.
-ignoretruncate [optional]
When a clone is being replaced with the parameter specified, record truncation is ignored when
copying the records from the original clone to the newly created clone during the replace operation.
Record truncation can occur if one or more record types in the original clone have LVs that are larger
than the MX of the same record types in the new schema.
By default, if this option is not specified, record truncation is considered an error and the replace
operation fails.
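For example (a sketch using placeholder names):
hdbcloner -c create_clone -a scadamom -f ems -replace -ignoretruncate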
The default is page, which means that the segment alignment boundary is selected for the platform
on which the cloner is executing.
If the alignment boundary is not a natural segment boundary for the platform, then the clone is
mapped as a whole rather than mapping individual database partitions as they are used.
Normally, clones should be created with the default setting. However, it may be more efficient for
small clones (less than 100K bytes) to be created using quad alignment.
-offline [optional]
This parameter is used together with the -replace parameter to instruct the cloner to locate an
existing clone using the file system rather than the online memory-resident cloning database. The
cloner uses the file system by default if the hdbserver is not running.
If the hdbserver is running, the cloner uses the memory-resident cloning database to locate the clone.
If the clone is offline, or if a previous clone operation failed to complete normally, the clone may not
be defined in the online database. Therefore, in this situation, the -offline parameter is required.
-nocopy [optional]
This parameter is used with the -replace parameter to instruct the cloner that data is not to be copied
from the old clone to the new clone. The default is to copy data from the existing clone to the new
clone.
-noreplicate_oids [optional]
Using the -noreplicate_oids parameter will disable the replication of the OID partition of a database in
a replication environment. By default, the OID partition of a database will be replicated if replication is
enabled on the clone by using the -replicate parameter.
Syntax
hdbcloner -c create_corefile
Parameters
None.
Syntax
hdbcloner -c load_schema -s <source-file> [parameters]
Parameters
-s <source-files> [required]
Specifies the file(s) that are to be loaded into the dictionary. Multiple files of the same or different file
extensions can be loaded. Wildcards (*) can be used to load files.
Files are source schema files coded according to the syntax requirements of each file type (.cls,
.dbdef).
Note: The .dbd extension is still supported by this utility for backward compatibility to pre-Habitat 5.x
systems. However, build scripts and tools in Habitat 5.x systems may no longer recognize the .dbd
extension.
-replace [optional]
Using this option replaces existing schema files stored in the database with new, same-named
schema files. If the named schema file does not exist, this option has no effect.
The load_schema command default is to not replace existing schema.
-nojointitle [optional]
If present, this option tells hdbcloner not to concatenate the DBDEF version title with the MXDEF
version title when formulating the database version title in the data dictionary. Normally, hdbcloner
combines the titles so that the resulting title can be distinguished from the title produced by loading
just the DBDEF without the MXDEF. Specifying -nojointitle allows a CLS file that has a hard-coded
database version to use the new Mxes specified in the MXDEF (see the HABITAT_NOJOINTITLE
environment variable). For more information about using this option, refer to the section "Database
Resizing Using MXDEF File" in the Hdb Programmer's Guide.
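Example (a sketch; the file names are placeholders):
% hdbcloner -c load_schema -s *.dbdef -replace
This loads every DBDEF source file in the current directory, replacing any same-named schema already stored in the dictionary.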
Syntax
hdbcloner -c offline_clone -a <appname> -f <famname>
Parameters
-a <appname> [required]
This parameter identifies the application name of the clone that is to be taken offline. Wildcards are
supported. Application names are not case-sensitive.
-f <famname> [required]
This parameter specifies the family name of the clone to be taken offline. Wildcards (*) are supported.
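Example (a sketch using placeholder names):
% hdbcloner -c offline_clone -a scada -f dts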
Syntax
hdbcloner -c online_clone -a <appname> -f <famname>
Parameters
-a <appname> [required]
This parameter specifies the application name of the clone that is to be placed online. For multiple-
application clones, this name identifies the clone schema name. Wildcards (*) are supported.
-f <famname> [required]
This parameter specifies the family name of the clone to be placed online. Wildcards (*) are
supported.
Syntax
hdbcloner -c remove_clone -a <appname> -f <famname> [parameters]
Parameters
-a <appname> [required]
This parameter specifies the application name of the clone that is to be removed. Wildcards (*) are
supported.
-f <famname> [required]
This parameter specifies the family name of the clone to be removed. Wildcards (*) are supported.
-offline [optional]
This parameter indicates that the clone to be removed is already offline. This parameter causes the
cloner to search the clone file namespace rather than the memory-resident cloning database
namespace. This parameter is required if the clone has already been taken offline, or if the clone was
never successfully placed online.
Syntax
hdbcloner -c remove_schema -n <type:name>
Parameter
-n <type:name> [required]
Specifies the name of the schema to be removed. The type portion of the name specifies the type of
schema to be removed. This is specified as one of the following: cls, dbd, or mxs. Also, a wildcard
character (*) can be specified to mean all schema types that match the name of the component
supplied.
The name component refers to the name of the schema object to be removed. The name must
specify an application name or a clone schema name if the cls type is specified.
If the type is dbd, then the name portion has a more-detailed syntax, which is dbname.versiontitle. If
you want to remove all instances of database schema with a given database name, then specify just
the dbname portion or dbname.*. The wildcard (*) specifies all versions. If only the dbname is
provided, it is the same as specifying all versions by using the wildcard (*) for the version title.
Wildcard characters can be used in any place within the name (including the database version title)
to indicate the removal of multiple schemas of a given type.
Note: The use of the wildcard character in the type and name does not imply file globbing. Therefore,
on platforms that use file globbing (e.g., Linux), wildcards must be specified within quotes so that the
shell does not interpret the wildcard character in its globbing action.
Examples
% hdbcloner -c remove_schema -n cls:scada
This command removes the scada clone schema from the Hdb database dictionary.
% hdbcloner -c remove_schema -n dbd:scadamom
This command removes the database definition of SCADAMOM from the Hdb database dictionary.
Syntax
hdbcloner -c rename_clone -car <clonefilename> [parameter]
Parameters
-car <clonefilename> [required]
This parameter specifies the name of the clone that is to be renamed. The clone must be offline and
must be specified by its full path name.
-f <famname> [optional]
This parameter specifies a new family name for the clone. Choose a name that will not conflict with
other clones in the targeted group.
Syntax
hdbcloner -c show_clone [parameters]
Parameters
-a <appname> [optional]
This parameter is used to specify the application name of the clone to show. Wildcards (*) are
supported in the same manner as create_clone using the -replace parameter. If not specified, then all
applications are assumed, just as if -a * were specified.
-f <famname> [optional]
This parameter is used to specify the family name of the clone to show. It is used just like the previous
description for create_clone. Also, wildcard characters are supported in the same manner as
create_clone using the -replace parameter. If not specified, then all families are assumed, just as if
-f * were specified.
-offline [optional]
This parameter specifies that the offline clone file namespace is to be used for locating clones. By
default, the online memory-resident cloning database is used for the clone namespace. Less
information is available for offline clones.
-online [optional]
This parameter specifies that the online clone file namespace is to be used for locating clones. This is
the default.
-full [optional]
Same as verbose level 5. This option will be deprecated in the future. Users should use the -verbose
parameter instead.
-multiapp [optional]
This parameter indicates that the -a <appname> parameter specifies a schema name instead of the
clone’s application name in a multi-application clone.
Examples
Show all the clones
% hdbcloner -c show_clone
This command displays each clone in one line. It also shows the process identifier (PID) of the
processes that are currently accessing that clone.
Syntax
hdbcloner -c show_limits
Parameters
None.
Descriptions
These are the limits shown by the command:
CLONE limits:
• Maximum Application Name size
• Maximum Family Name size
CLS limits:
• Maximum APPLICATION Name size
• Maximum CASE Title size
DBDEF limits:
• Maximum DBDEF Name size
• Maximum DBDEF Title size
• Maximum FLDPAR Name size
• Maximum RECTYP Name size
• Maximum FIELD Name size
• Maximum # of DBDEF statements per DBDEF file
• Maximum # of RECTYP statements per DBDEF file
• Maximum # of FLDPAR statements per DBDEF file
• Maximum # of FIELDS statements per DBDEF file
• Maximum size for C*
• Maximum keyfield size for C*
Syntax
hdbcloner -c show_schema [-n <type:name>][parameters]
Parameters
-n <type:name> [optional]
Specifies the name of the schema to be displayed. The type portion of the argument specifies the type
of schema to display. This is specified as one of the following: cls, dbd, or mxs. Also, a wildcard
character (*) can be specified to mean all schema types that match the name of the component
supplied.
The name portion refers to the name of the schema object to display. The name must specify an
application name or a clone schema name if the cls type is specified.
If the type is dbd, then the name portion has a more-detailed syntax, which is dbname.versiontitle. If
you want to display all instances of database schema with a given database name, then specify just
the dbname portion or dbname.*, where the (*) wildcard specifies all versions. If only the dbname is
provided, it is the same as specifying all versions by using the wildcard (*) for the version title.
Wildcard characters can be used in any place within the name (including the database version title)
to display multiple schemas of a given type.
Note: The use of the wildcard character in the type and name does not imply file globbing. Therefore,
on platforms that use file globbing (e.g., Linux), wildcards must be specified within quotes so that the
shell does not interpret the * character in its globbing action.
-full [optional]
Same as verbose level 5. This option may be deprecated in the future. Users should use the -verbose
parameter instead.
Examples
Show all schema of all types
% hdbcloner -c show_schema
This command displays all the database schema, clone schema, and mxset schema registered with
Hdb, each on its own line.
Show the partitions of the SCADAMOM database schema for version “PROJECT”
% hdbcloner -c show_schema -n dbd:scadamom.project -verbose 2
DB Schema SCADAMOM.PROJECT
Partition CLSLODIS *REPLICATED*
Partition CLSLOPRV *REPLICATED*
Partition CLSLOPUB *REPLICATED*
Partition CLSTADIS
Partition CLSTAPRV
Partition CLVOLDIS
: : :
Cloner completed successfully
The parameter “dbd:scadamom.project” specifies that only the database schema “SCADAMOM” of
version “PROJECT” is to be displayed. The verbose level of 2 tells the hdbcloner to display the partition
info (only partial output is shown above).
If the parameter “dbd:scadamom.project” is replaced with “dbd:scadamom” above, then the
partitions for both versions of the database schema will be displayed.
Show all the database schema and savecases defined for the SCADA clone schema
% hdbcloner -c show_schema -n cls:scada -verbose 10
CLS Schema SCADA
DB: SCADAMOM.ESCA_EMP
DB: SCADACL.ESCA_EMP
DB: MESCADA.ESCA_EMP
DB: COMMLOG.ESCA_EMP
DB: SOELOGS.ESCA_EMP
DB: ACCUMHIS.ESCA_EMP
Case: SCADA
Includes Database SCADAMOM
Includes Database SCADACL
Includes Database ACCUMHIS
Case: DTS
Includes Database SCADAMOM
Includes Database SCADACL
Includes Database MESCADA
Case: SOELOGS
Includes Database SOELOGS
Case: COMMLOG
Includes Database COMMLOG
Case: ACCUMHIS
Includes Database ACCUMHIS
Case: TAGGING
Includes Partition TGSTAPRV.SCADAMOM
Cloner completed successfully
The output shows that the SCADA clone schema includes six database schemas: SCADAMOM,
SCADACL, MESCADA, COMMLOG, SOELOGS, and ACCUMHIS, all of which have the same database
schema version title: ESCA_EMP.
Syntax
hdbcloner -c verify_schema -s <source files>
Parameter
-s <source-files> [required]
This parameter specifies the schema file(s) to be verified. Multiple files of the same or different file
extensions can be verified. Wildcards (*) can be used to load files.
Files are source schema files coded according to the syntax requirements of each file type (.cls,
.dbdef).
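Example (a sketch using a placeholder file name):
% hdbcloner -c verify_schema -s permit.dbdef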
Syntax
hdbcloner -h [-c <cloner_command>]
Parameter
-c <cloner_command> [optional]
Specifies the hdbcloner command for which you want help. To access Help for a specific
hdbcloner command, use the following form of the command:
hdbcloner -c create_clone -h
The above command results in the help file for create_clone being displayed.
Syntax
hdbcompare <container1> <container2> database <options>
Parameter
database
Identifies the name of the database that is to be compared. The database must exist in both
containers, which are identified using the container qualifiers.
-s1 <clone>
Identifies a clone container containing the first instance of the database to be compared. The
syntax is application.family.
-s2 <clone>
Identifies a clone container containing the second instance of the database to be compared. The
syntax is application.family.
-sf1 <savecase>
Identifies a savecase containing the first instance of the database to be compared. This is the full
filename of the savecase file. If a directory is not specified, then it is assumed that the file is in the
HABITAT_CASES directory.
-sf2 <savecase>
Same as sf1, except that it identifies the second database instance.
-sz1 <zipfile>
Identifies a zipped archive file that contains the first instance of the database to be compared. Used
together with -szf1 to identify the savecase type within the file.
-sz2 <zipfile>
Same as sz1, except that it identifies the zipped archive file containing the second database instance.
-szf1 <savecase>
Identifies a savecase type within a zip file that contains the target database. The title is not needed
(assuming there is only one of this type in the zip file). An example would be: case_netmodel_netmom.
-szf2 <savecase>
Same as szf1, except that it identifies a savecase within the second zip file.
Other options
-modeling_only (optional)
Limits field comparisons to only those fields with the “MODELER” attribute.
-output <basename> (optional)
Specifies the base name to be used for the two output files (.csv and .log). If not specified, the
output file is named:
<database>_comparison_YYYYMMDD_HHMMSS.csv
where YYYYMMDD_HHMMSS identifies the date and time when the comparison was initiated.
-prompt (optional)
If specified, then the user is prompted for any missing required parameters or qualifiers. The
“required” inputs are the two database container specifications and the database name.
Implicit input
HABITAT_CASES environment variable
This is the standard HABITAT directory where the Habitat UI and Habitat APIs assume that savecases
are stored. If a savecase file is specified as a container (recognized by the name starting with “case_”)
and no directory is specified, then the script will look for the savecase in this directory.
Description
The two instances of the requested database are located, mapped, and then compared.
Two output files are created in a common directory using the same base name:
• The “output” file is a comma separated value (CSV) file that enumerates the differences between
the two database instances.
• The “log” file is a message log containing information about the comparison results.
Note: Depending on the databases being compared, it is conceivable that the output file could
contain more records than Microsoft Excel can import (64K). In this case, use of a tool such as
Microsoft Access to view the CSV file is recommended.
Exit Status
The command exits with a status to indicate the overall success or failure of the comparison:
• EXIT_SUCCESS: The comparison was successful and did not generate any error messages.
• EXIT_FAILURE: The comparison generated one or more error messages.
The actual exit value is consistent with the respective standard on the operating system where the
comparison was done.
Examples
• Comparing the NETMOM database between the NETMODEL.DTS clone and the RTNET.DTS clone.
The output files are netdiff.csv and netdiff.log.
%> hdbcompare -s1 netmodel.dts -s2 rtnet.dts netmom -output netdiff
• Comparing the NETMOM database between the NETMODEL.DTS clone and the savecase
case_rtnet_dts.emp60 using the default output format. The output files are
netmom_comparison_yyyymmdd_hhmmss.csv and
netmom_comparison_yyyymmdd_hhmmss.log.
%> hdbcompare -s1 netmodel.dts -sf2 case_rtnet_dts.emp60 netmom
• Comparing the NETMOM database between the NETMODEL.DTS clone and the zipped archive file
mytest.adearc with a savecase case_netmodel_ade.emp60 in it. The output files are
netade.csv and netade.log.
%> hdbcompare -s1 netmodel.dts -sz2 mytest.adearc -szf2
case_netmodel_ade.emp60 netmom -output netade
Syntax
hdbcopydata <source-container> <destination-container> [parameters]
Container Specifications
-case <case-specification>
Specifies that the source or destination container is a savecase specification.
If the source container is to be designated as a savecase, then use the -d or -df parameters in
conjunction with -case.
If the destination container is to be designated as a savecase, then use the -s or -sf parameters in
conjunction with -case.
Note: The parameters -title and -a may contribute to the information in naming a savecase.
-s <clone-spec>
Specifies a source container as a clone specification.
For a description of the <clone_spec> syntax, see hdbcloner for more information.
-sf <filespec>
Specifies a source container as an archive file.
The -s parameter cannot be used in the same command with -sf.
-d <clone_spec>
Specifies a destination container as a clone specification.
For a description of the <clone_spec> syntax, see hdbcloner for more information.
-df <filespec>
Specifies a destination container as an archive file.
To create a savecase file, you must use the -case argument.
Parameters
-a <appname> [optional]
Specifies the application name context, which is used to locate the savecase type named by -case.
Normally, the case application is derived from the other container (whether source or destination) if it
is a clone specification.
-datafill [optional]
Specifies that excess data in the target field is to be set to a fill byte. A datafill condition is the
opposite of the truncate condition. Datafill is required when the destination LV is larger than the
source LV and sufficient elements do not exist to completely occupy the destination field.
-ignoreconvert [optional]
Specifies that field conversion errors are to be ignored during copy. When -ignoreconvert is used,
data copy problems are not treated like errors. Data may or may not be reliable when using this
parameter.
-ignoreconvert ignores the following system errors:
• Field truncate (-ignoretruncate)
• Data fill (-datafill)
• Data type conversions (range errors, numeric truncation, invalid conversion)
-ignorestamps [optional]
This parameter suppresses record time stamp checking during a field or partition copy. By definition,
all record time stamps in all fields of the partition being copied must match exactly between the
source and destination databases.
The LV values of the target and the source databases are still honored. If the LV values are different,
then you may still not be able to copy the data due to a truncate or a data fill problem. Therefore,
when using the -ignorestamps parameter, you may also want to use the -ignoretruncate parameter,
the -datafill parameter, or both (see the discussion of truncate and datafill in the following sections).
-ignoretruncate [optional]
Specifies that truncation errors are ignored. A truncation error occurs when the LVs involved in the
copy are different and not all of the data from the source can be copied — for example, when copying
a source field N_UN where the UN LV is 100 to a destination field N_UN where the LV is 50. This
means that field elements from subscript 51–100 are not copied, and the result is called truncated.
By default, truncation of data is considered an error, and the destination database objects involved
are not modified.
When LVs of records involved in a copy are different in source and destination, then the time stamps
are almost always different too, so the -ignorestamps parameter is often specified along with the
-ignoretruncate parameter.
-nolocks [optional]
This parameter turns off all database locking during hdbcopydata transactions.
-samedef [optional]
Specifies that the source and destination databases involved in a copy must have the same definition
based on the same schema. If the schema do not match, the copy cannot be performed.
By default, data can be copied whether the databases are the same or not, and whether the
definitions are the same or not.
-selcop [optional]
Specifies that a selective copy (selcop) is to be performed. A selcop copy is different from a regular
hdbcopydata operation, in that both the source and destination databases are modified.
-selcoponly [optional]
Specifies that only a selcop (selective copy) style backwards copy is to be performed. A selcop
backwards copy is used to copy selcop field values of matching records from the destination to the
source.
-update_source [optional]
This parameter is required whenever a HABITAT savecase file is retrieved and the file is larger than
the system page file. Applies only to retrieval of older HABITAT version 4.x-style savecases. When
specified, the source file will be modified in a manner that makes it incompatible with HABITAT
version 4.x systems.
-verify [optional]
Enables comparison of the source and destination schema. No data will be copied. The comparison is
performed only if the destination context is a clone. The -verbose switch can be used to increase or
decrease the amount of output printed.
Examples
Refer to the hdbcopydata chapter.
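As an illustrative sketch (the clone names are hypothetical; see hdbcloner for the clone specification
syntax), the following command copies data from one clone to another while suppressing time stamp
checking and tolerating truncate and data fill conditions:
% hdbcopydata -s netmodel.dts -d rtnet.dts -ignorestamps -ignoretruncate -datafill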
Syntax
hdbdirectory [parameters]
Parameters
-add <file-specification> [optional]
Specifies the file(s) to be added to the clone directory. The file specification must include the full
path. Wildcards are accepted in the filename, but only clone files (.car extension) are accepted. By
default, if not specified, no clone files are processed. If the clone files have the proper file name and
extension but are not true clone files, they cannot be placed online even though they are added to
the clone directory.
-create [optional]
Specifies that the clone directory file is to be created. When created, the file is created empty. By
default, the existing clone directory file is accessed (whether default or specified by the -cdr
parameter). If there is no existing clone directory file, then an error is reported. When this parameter
is used in concert with other parameters, the create operation is performed first.
-show [optional]
Used to list the contents of the clone directory file. When this parameter is used in concert with other
parameters, the show operation is performed last.
Example
To clear the existing clone directory (by default, the file named “clone_directory.cdr” in the
HABITAT_CDBROOT/cloning_database directory) and add entries for all clone archive files in the
default location, i.e., HABITAT_CLONES:
% hdbdirectory -create -add $HABITAT_CLONES/clone*.car
Syntax
hdbdocument -dbdef [-output=<output file name>] <input file name>
Parameters
-dbdef [required]
Indicates that a database definition file is being converted to ASCII format for documentation
purposes.
-output [optional]
This parameter assigns a name to the output file. If -output is used, but the name used has no type
extension, then .dbdoc is assigned.
If not included in the command line, the output file is assigned the same name as the database
definition file, but with a .dbdoc extension.
If no device or directory is specified, the default is to use the current working device and directory.
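Example
For illustration (the file name is hypothetical), the following command converts a database definition
file to its documentation form; because -output is not specified, the output file is scadamom.dbdoc in
the current directory:
% hdbdocument -dbdef scadamom.dbdef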
Errors
All of the following errors cause hdbdocument to abort. Refer to the following table for corrective
action.
Syntax
The hdbdump utility requires both action parameters and source objects to be specified. The
modifier-style parameters are optional. The syntax of the command is:
% hdbdump <action-parameters> <source-objects> [ <modifiers> ]
-archive <archive-file>
Specifies that one or more archive files are given as the source object. Wildcards are allowed (invokes
file globbing in Linux). Names can also be repeated, if you separate each with a blank space.
-case <appname>.<casetype>.<title>
Specifies the name of a savecase file as the source object. Wildcards can be used for any component
of the name; however, on Linux platforms, the wildcard character must be enclosed in quotes to
avoid file globbing, as in the following: “*”.
-cdb
Specifies that the memory-resident cloning database is to be used as the source object.
-f <famname>
Specifies the clone context family name when a clone is given as the source object. One family name
can be given to name a source clone object. If the source clone is defaulted to, the family name is
optional and the family context is derived from the HABITAT_FAMILY environment variable.
-file <file-specification>
Specifies that a file is to be used as the source object. This parameter can name any type of Hdb file
as the source object, including clone, archive, savecase, DNA, and core data files. File names can be
repeated, separated by white space. Wildcards are allowed; if wildcards are used on a Linux platform,
file globbing is invoked.
Action Parameters
-data
Used to dump data of the named objects.
-header
Used to dump the header segment of the file, or the memory-resident cloning database. The header
segment includes the sanity block that describes the type of file and, for database containers (clone
files, archive files, and traditional HABITAT savecase files), the header also includes the database
index listing of each database container.
The header action operates on the entire source file, or the clone in memory. It does not operate on
database objects, so no database is specified.
-hier
Dumps hierarchical records into a tree-structured report.
-reset_indirects
Specifies that indirect fields (except for parent and ancestor pointers) are to be reset according to the
current associated LV values. This action is supported for database objects only.
-schema
Dumps schema information for the named object.
-schema-hier
Dumps the record hierarchy schema, which is the list of record types arranged in hierarchical order.
-stamps
Dumps the time stamps for the named object.
-verify_indirects
Validates that indirect fields are set according to the LV and MX settings associated with tables.
-verify_parents
Validates parent pointer fields.
-verify_pointers
Validates that child, descendant, and ancestor pointer fields are set according to the current parent
pointer settings. This action is only supported for database objects.
Modifier Parameters
-append
The -append parameter appends the output of a report (dump) to the file specified by the -output
parameter. The file is created if it does not exist. Both parameters are considered redirection
parameters. The command redirection (>>) symbol can also be used.
-name <objectname>
Restricts the command action to the object named in the command line. Wildcards are allowed, but
must be enclosed in single or double quotes: ‘*’/“*”.
This parameter is to be used only for database objects.
-nofree
Prevents FREE records from being included in the output of the hierarchical record report produced
by the -hier action parameter.
-nomasks
Dumps a mask’s bit container field instead of the mask fields. This action is used in conjunction with
the -data action parameter. By default, bit container fields are skipped and mask fields are dumped.
Applies only to database objects.
-output <output-filename>
Names the output file to be used. By default, all reports are sent to standard output. Output can also
be redirected to a named file using the command shell redirection (>) parameter.
-partition <partitionname>
Restricts command action to the partition or partitions named in the command. Wildcards are
allowed. Applies only to database objects.
-pseudo
Reports on database pseudo data fields, record types, or partitions. Pseudo data is like other schema
data, and it is created internal to the database when the database is created. Pseudo data is used to
manage internal data, such as LV, RT, stamps, and hierarchical pointers.
-table <tablename>
Names a specified table to be used in restricting the command action. The command action will be
performed on only the tables named (as well as objects named by other parameters -partition,
-name, -field). Wildcards (*) can be used anywhere within the name, or a wildcard can be used all by
itself as the name. When wildcards are used, the name must be enclosed in quotes. This parameter is
used for database objects only.
-timedate_as_numbers
Formats the date and time as an integer instead of a calendar date and/or time. Applies only to
database objects.
-verbose <verboselevel>
Defines the level of detail for a report. Level of detail can be set between 1 and 10. Level 10 provides
the greatest amount of detail; however, reports will be large even for small databases. The default
is 1. Applies primarily to -schema and -dump actions.
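Examples
For illustration (the file names are hypothetical), the first command below dumps the header
segment of an archive file, and the second dumps schema information from a savecase file at a
moderate level of detail:
% hdbdump -header -archive scada_clones.car
% hdbdump -schema -file case_scada_netmom.emp60 -verbose 3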
Syntax
hdbexport -d <source-dbname> -s <output-file> [parameters]
Parameters
-a <appname> [optional]
This parameter specifies the database’s context application name. The application name is derived
from the HABITAT_APPLICATION environment variable when not specified.
-append [optional]
This parameter specifies that the exported file is to be appended to an existing file. If not specified,
the export operation will create a new file (overwriting any files of the same name). This parameter is
used only when the output file is specified with the -s parameter switch. If redirection is used, the
append redirection symbol (>>) specifies the append operation (for more information, see the -s
parameter description in this section).
-by_field [optional]
This parameter specifies field mode export. The default is to export all fields using field element line
export style. Records are not exported in this mode. Fields can be skipped with the
-skip_field parameter. This parameter is equivalent to specifying the -field parameter and listing all
fields in the database. If only exporting a few fields, it is easier to use the -field parameter.
-by_record [optional]
This parameter specifies record mode export. When record export mode is selected, all
records of the database are chosen to be exported (alphabetic order by record name) using flat style
of record export for hierarchical records. Individual records can be eliminated from the export by
using the -skip_record parameter.
-comment [optional]
This parameter specifies a comment to be entered into the audit trail log.
-create_patterns [optional]
This parameter specifies pattern-creation mode export. -create_patterns does not export data, but it
is used to create the record patterns that allow the user to customize the appearance of exported
data. Record patterns are compatible with the patterns used with hdbimport. Thus, the same record
pattern file can be used for export and import. Records can be selected according to the same rules
as in a normal export.
-f <famname> [optional]
This parameter specifies the database’s context family name. If not specified, the name is derived
from the HABITAT_FAMILY environment variable.
-flat [optional]
This parameter specifies that the hierarchical records (in default export mode) are to be exported in
flat style. Flat export style is where each hierarchical record is exported just as if it were a non-
hierarchical record. The ordering of parent and child records is ignored, and the records are exported
in groups of like record type. Normally, this style of export is used in concert with the
-include_ancestors parameter switch, so that parent information can be included with the
hierarchical record export.
-h [optional]
This parameter prints a Help file to the screen or other standard output.
-hier_only [optional]
This parameter limits the default export to hierarchical records only. ITEMS and non-hierarchical
records are skipped.
-include_ancestors [optional]
If specified, the key values for each ancestor record are exported. By default, the ancestors’ key fields
are not included in the exported data.
-include_pointers [optional]
This parameter specifies that pointer fields are to be included in the export of hierarchical records.
Normally, pointer fields are not included because, when such records are imported by hdbimport, the
pointer field values are automatically generated. However, in some situations, pointer fields are
needed and this command can be used.
Note: Even though pointer fields can be included on records, they are ignored by hdbimport when
records are inserted or used for update. Pointer fields are hierarchical parent (one-to-one) or child
(one-to-many) pointers. This parameter affects the construction of record patterns when used with
the -create_patterns mode.
-include_pseudo [optional]
This parameter specifies that pseudo records and pseudo fields are to be included in the export data.
This parameter affects the construction of record patterns when used with the
-create_patterns mode.
-nodata [optional]
This parameter is used for debug and analysis only. When used, no data fields are exported. It is
useful in record mode and, in particular, hierarchical record exports. The result is the export of each
hierarchical record as a record name and record subscript (unless inhibited).
-nofieldnames [optional]
This parameter inhibits the export of all field names. This includes field names that appear in the
record export, and field names that appear in the field element export. If the record pattern is used
for record export, this parameter has no effect because field names are not included with record
patterns.
-nofree [optional]
This parameter disables the export of FREE records (annotations). This parameter only applies when
hierarchical records are exported in hierarchical order (default record export mode). This parameter is
equivalent to specifying the FREE record type name in the -skip_record parameter.
-noitems [optional]
This parameter disables the export of the ITEMS record.
-nolock [optional]
This parameter is used to specify that the source database is not locked. The default is to lock the
database with a shared lock (read-only).
-nomasks [optional]
This parameter disables the export of mask fields, causing the bit container fields to be exported as
ordinary signed integers. By default, mask fields are exported in record or field mode instead of the
mask’s bit container field; therefore, bit container fields never appear in exported data.
-nonmultidim [optional]
This parameter disables the export of 2- and 3-dimensional fields in the default export mode. By
default, multidimensional fields are exported following the ITEMS, hierarchical, and non-hierarchical
records. This parameter has meaning only when the default export mode is operating.
-nonulls [optional]
This parameter disables the export of Habitat null data as a null data value. By default, a Habitat null
data value (as tested by the HdbTestNull function) is exported as a null string; that is, the absence of a
value. If this parameter is specified, then no null data test is performed and the Habitat null data
value is exported as it is defined in binary. (Currently, the Habitat null data value is the byte value of
0x80 for each byte of the field data element.)
-nonhier_only [optional]
This parameter limits the export of records to non-hierarchical records only. ITEMS records are also
exported.
-noprefix [optional]
This parameter eliminates the field element line export prefix character (^) that precedes the field
name.
-noquotes [optional]
This parameter disables the quotes at the start and end of character string data. By default, all
character string data is exported as a quoted string. This includes all data exported as CHARACTER
data type, DATE data type, and TIME data type (date and time when -timedate_as_numbers
parameter is not in force).
-norecordnames [optional]
This parameter inhibits the export of all record names. This parameter is only meaningful when the
default record format is used. If a record pattern is used, this parameter does not inhibit the record
name.
-nosubscript [optional]
This parameter specifies that the subscript field in the default record format is to be deleted from the
exported data. By default, the default record format includes the record occurrence subscript value
for each record exported. This parameter is also used to inhibit the export of the subscript field of
each field line exported in field mode.
-s <output-file> [optional]
This parameter specifies the name of the file generated by the export operation. By default, exported
data is sent to standard output, which can be redirected with the redirection symbol (>) or (>>) to
append to an existing file.
The following two command examples are identical in their results:
hdbexport -d scadamom -s rawdata.dat
hdbexport -d scadamom > rawdata.dat
If the exported data is to be appended to an existing file, then the -append parameter switch is used
in conjunction with the -s switch. You can also use the concatenation redirection symbol (>>) instead.
The following two command examples are identical in their results:
hdbexport -d scadamom -s rawdata.dat -append
hdbexport -d scadamom >> rawdata.dat
-subtree [optional]
This parameter specifies that the hierarchical records (selected implicitly by the -key, -range, or
-record specifications) are to be exported along with their subtree of child records. If not specified
and hierarchical records are mentioned in any of the record selection parameters, they are treated
as non-hierarchical (no child) records.
-timedate_as_numbers [optional]
This parameter specifies that the time and date values (T*4/T*8 and D*2) are exported as integers
instead of as character strings. Using integers is much more efficient than using character strings for
exporting and importing data within the same Habitat environment where the same time/date
database is in use (so that interpretation of the numeric value is consistent).
-xml [optional]
This parameter specifies the output to be in XML format.
Examples
Refer to the hdbexport chapter.
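As a further illustrative sketch (the database and file names are hypothetical), the following command
exports the SCADAMOM database in XML format to the file scadamom.xml:
% hdbexport -d scadamom -xml -s scadamom.xml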
See Also
hdbimport
Syntax
The form of the hdbformat utility command changes depending on the source of the database
definition.
Use the following table to determine the form of the command to use:
Parameters
-s <input-file> [required DBDEF]
The -s parameter specifies the DBDEF source files to be processed. Wildcards are supported. At least
one DBDEF source file is required and a full path name must be specified. DBDEF sources are parsed
and validated prior to use.
-a <appname> [optional]
The -a parameter specifies the application name context of the source clone. This parameter is only
used with the -db parameter. The environment variable HABITAT_APPLICATION describes the
application name context if this parameter is not used with the -db.
-d <dirpath> [optional]
The -d parameter specifies an alternative directory path where the subschema files are to be created.
The current location is the default when not specified otherwise. The <dirpath> value must be stated
as a full file path specification.
-f <famname> [optional]
The -f parameter specifies the family name context of the source clone. The environment variable
HABITAT_FAMILY describes the family name context if this parameter is not used with the -db.
The -db parameter must be used with this parameter.
-h [optional]
Displays the online help for this command.
-l <langopts> [optional]
The -l (language) parameter specifies the target programming languages that will use the subschema
files. Subschema files are created for each specified language. Multiple languages can be specified.
The defaults are Fortran 90 and the C languages. Separate specified languages with a space.
-nojointitle [optional]
If present, the generated files will not use the combined titles in the comments of the generated
C/Fortran source files. For more information about using this option, refer to the section “Database
Resizing Using MXDEF File” in the Hdb Programmer’s Guide.
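Example
For illustration (the directory paths are hypothetical), the following command processes a DBDEF
source and writes the generated subschema files for the default languages (Fortran 90 and C) to an
alternative directory:
% hdbformat -s /projects/ems/dbdef/scadamom.dbdef -d /projects/ems/subschema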
Syntax
hdbimport -s <input-files> -d <target_dbname> [parameters]
Files specified by the -s parameter are processed in the order specified. Parameters are described
below.
Parameters
-a <appname> [optional]
Specifies the application name context for the database. If not specified, the application name is
derived from the HABITAT_APPLICATION environment variable. Either -a or -app is accepted as input.
-atstart [optional]
Inserts records at the start of existing records or, in the case of hierarchical insert, at the start of the
existing siblings. This parameter can be used with key or positional subscripts; however, it is most
commonly used with keys. Default is to insert at the end of the record or sibling.
The action of this atstart position is executed for the first hierarchical record or non-hierarchical
record of a given type. Once a given type of record has been inserted, subsequent records are
inserted at the current position. The atstart parameter can be specified with declaratives.
-backup [optional]
The target database is backed up to the standby node after a successful hdbimport operation
(backup is to a standby system in dual-computer configurations). Volatile partitions are not backed
up; otherwise, the entire database is backed up.
Backup only occurs if:
• No errors occur.
• The database is modified.
• The clone is marked to support replication.
-count [optional]
Counts the number of records and fields in the input data file. Can be used with the -verify and the
-update parameters.
-d <target_dbname> [required]
Specifies the target database by name. Either -d or -db is accepted as input.
-f <famname> [optional]
Specifies the family name context for the database. If not specified, the family name is derived from
the HABITAT_FAMILY variable. Either -f or -fam is accepted as input.
-h [optional]
Displays the online help for this command.
-ignore_indirect_msg [optional]
Suppresses the indirect field modification warning message. If an indirect field is updated during a
partial update of a database (where the database is not initialized first), a warning message is issued
cautioning the user that this type of operation is not always correct.
-initialize [optional]
Initializes the target database before any of the input data files are processed. If not specified, the
database is not initialized.
-insertnodup [optional]
Inserts records only if they do not already exist in the database (no duplicates). This option allows the
user to always work with the same import data file, simply adding new records as needed;
hierarchical records must be added at their valid position. Records must have keys; for hierarchical
records, parent keys must be specified up to the level that guarantees uniqueness. When this option
is chosen, the -keys parameter is forced (see below). Insert no duplicates can also be specified using
declaratives.
-keys [optional]
This parameter is used to instruct hdbimport to use keys supplied with each input file data record for
locating proper parents for insert mode, or for locating the current record for update mode. If not
specified, then positional subscripting is used. The keys mode can also be specified using
declaratives.
-lowercase [optional]
Character strings are converted to lowercase before being imported into the database. Otherwise,
data is imported as it appears in the imported data file. Can also be specified using declaratives.
-overrideoid [optional]
Overrides the value of the OID in the database with the OID value of the imported file during an insert
or update operation. Only affects record operations. However, if an OID appears as a field in the
imported data file, the field is updated regardless of the setting of this switch. Can also be specified
using declaratives.
-report_linenumberonly [optional]
Limits error reporting to the line number of the offending input data line. Along with each error report
is a description of the error and the offending data line. If this switch is specified, then the data line
text is not reported; instead, only the line number itself is reported.
-report_truncateline [optional]
Limits error reporting of the offending line to 80 characters with no line wrapping. The
-report_linenumberonly parameter has precedence over this parameter.
-s <input_data_files> [required]
Specifies the name of the ASCII input file(s) to be imported into the Hdb database. Wildcards are
allowed for multiple file input. Multiple files must be separated by white space. File names are
case-sensitive and must include the appropriate directory path specification.
-skipnulldata [optional]
Instructs hdbimport to skip fields during insert mode or update mode where the input data line does
not contain data. For example, in a line with two contiguous field delimiters, such as
...,3.4,,“XRAY”,..., hdbimport would normally interpret the field between the values 3.4 and “XRAY” as
a Null to be inserted into the database. The -skipnulldata option changes this behavior and instead
causes hdbimport to ignore the field. During insert mode, this has the effect of leaving the field
containing the FILL bytes, the normal result of an insert record operation. During update mode, this
has the effect of leaving the field as is; it is not changed.
-update [optional]
Updates existing records in the database. Records are located by subscript or by key value if the
-keys parameter is used. This parameter only affects record operations; field import is always in
update mode. Update mode can also be specified using declaratives.
-verbose [optional]
Turns on detailed error reporting. Can be used with regular import operations, or with the -verify
parameter. This parameter can result in large reports, depending on the size of the database. Can
also be specified using declaratives.
-verify [optional]
Verifies the validity of the input data files according to the schema defined by the target database.
The database is not modified when this parameter is used.
Most other parameters are accepted as descriptive of the intended import operation, which may
affect the validation.
Note: If verify mode is used, data entry checking is not performed, since data entry checking is
dependent on database record position, which is not correctly established for verify mode.
In verify mode, the following declaratives are ignored: #insert, #insertnodup, #update.
Examples
Refer to the hdbimport chapter.
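As an illustrative sketch (the file and database names are hypothetical), the following command
initializes the target database and then inserts the records from an ASCII input data file, locating
parent records by key:
% hdbimport -s scada_records.dat -d scadamom -initialize -keys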
See Also
hdbexport
Declaratives
#insert #insertnodup #update
Specifies that either insert, insert no duplicates, or update operations are to be used by hdbimport.
Once an operation is selected, it remains in effect until it is changed. Declarations are in effect until
the end of the current input file only and will not carry through to the next data file.
For #update, see sections 9.8.7 Subscript Update for Records and 9.8.8 Subscript Update for Fields.
Note that those declaratives are ignored when the -verify parameter is present.
#atstart
Places the first visit record at the start (#atstart) of the existing records of that type, or of the siblings
of the designated parent record. For more information about this declarative, see section 9.8
Positioning a Record for Insert and Update.
#atend
Places the first visit record at the end (#atend) of the existing records of that type, or of the siblings of
the designated parent record. For more information about this declarative, see section 9.8 Positioning
a Record for Insert and Update.
#keys #nokeys
Keys are used to locate record insertion position for update or insert operations. Subscripts are used
to update records. Record positions are updated as they occur in the input file. The #nokeys and
#subscripts declaratives reverse the effect of #keys. For more information about this declarative, see
section 9.8 Positioning a Record for Insert and Update. The declarative #nokeys is ignored when in
insert no duplicates mode.
#entrychecks #noentrychecks
Database-defined data entry checks are to be performed on field data. Data entry checking is turned
off with #noentrychecks.
#verbose #noverbose
A detailed update report is to be produced. Large databases result in very large reports. #noverbose
invalidates the #verbose declarative.
#uppercase #nocase
Changes character string data to uppercase before performing data entry checks (if selected) and
prior to storing the data in the database. If uppercase is not specified, data is read as it appears in the
input data file. #nocase turns off uppercase.
#lowercase #nocase
Changes character string data to lowercase before performing data entry checks (if selected) and
prior to storing the data in the database. If lowercase is not specified, data is read as it appears in the
input data file. #nocase turns off lowercase.
#overrideoid #nooverrideoid
The OID value in the input file is overridden by the value obtained from the insertion of a new record
in insert mode. In update mode, the existing database OID value is overwritten by the value contained
in the input file. #nooverrideoid counters #overrideoid.
#separator “c”
Designates the letter “c” as a field separator. The separator is used to delimit fields in the record
statements of the input data file. The separator character is used on both the default record
statement and on the record statements defined by #record.
However, fields that appear on the #record declarative are not affected by this separator. To revert to
the default separator, declare #separator without a value.
#comment “c”
Designates the letter “c” as the comment character instead of the pound (#) sign. Designation is
enforced from the point of entry in the file until it is changed.
The following example illustrates this point:
#comment "?"
? this is a comment line
? and this
?comment "#"
# this is a comment line
#timedate “date-time-format-string”
Not implemented.
#skipnulldata #noskipnulldata
Directs that null data values in the input data file are to be ignored. The result is that no changes
occur to the corresponding database field. By default, hdbimport interprets a null data item as a
Habitat null value and inserts this value into the associated field in the database. The #noskipnulldata
declarative can be used to reverse the effects of command-line parameters.
#boolean <true>/<false>
Defines a boolean TRUE and FALSE string keyword used to represent true and false values. For more
information about the use of this declarative, see section 9.7.2 Details About the #boolean
Declarative.
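For instance (the keyword strings and quoting are illustrative), a declaration of the following form
might define ON and OFF as the true and false keywords for subsequent data lines:
#boolean "ON"/"OFF"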
Examples
Example uses of these declaratives are shown in chapter 9, hdbimport. Specific examples for some
of the declaratives can be found in this list:
• #boolean - 9.7.2 Details About the #boolean Declarative
• #keys - 9.9.1 Inserting with the #keys Declarative, 9.9.2 Updating with the #keys Declarative
• #insert - 7.5.8 Use Declaratives, 9.9.1 Inserting with the #keys Declarative
• #record - 9.7.1 Details About the #record Declarative, 9.9.1 Inserting with the #keys Declarative,
9.9.2 Updating with the #keys Declarative
• #update - 7.5.8 Use Declaratives, 9.8.7 Subscript Update for Records, 9.9.1 Inserting with the
#keys Declarative, 9.9.2 Updating with the #keys Declarative
Syntax
hdbrio -a <appname> -f <famname> <dbname> (Access Clone)
hdbrio -archive <arc file> <dbname> (Access Archive/Savecase)
hdbrio -i <rio script file>
hdbrio -c <rio command>
Argument
dbname
Specifies the name of the Hdb database.
Options
-a <appname>
Specifies the Hdb application. If not specified, then the HABITAT_APPLICATION environment variable is
used.
-c <rio command>
Executes the hdbrio command (typically within quotes) and exits hdbrio. The database can be
specified with “ -a <appname> -f <famname> <dbname> ” on the command line. If
HABITAT_APPLICATION or HABITAT_FAMILY is already set, then -a and -f can be omitted. You can also
embed a DBOPEN statement in <rio command> to designate the database to open.
-f <famname>
Specifies the Hdb family name. If not specified, then the HABITAT_FAMILY environment variable is
used.
Examples
Access a clone with hdbrio interactively
% context scada ems
% hdbrio scadamom
[SCADAMOM.SCADA.EMS]
rio>
See Also
hdbrio command: dbopen
Syntax
rio> backup
Options
None.
Syntax
rio> checkpoint
Options
None.
Syntax
rio> dbcopy [options] [number of records]
Argument
number of records [optional]
The number of records starting at the current position to be marked for copy. If not specified, the
default of one record at the current position is marked for copy.
Options
-a
Insert before the first sibling record. When the dbpaste command is issued at the parent position
(instead of at a valid record position of the marked record), this option tells hdbrio to insert the
records before all siblings under this particular parent.
-d
Data fill destination. For multidimensional fields, if the destination dimension is greater than the
source, the default action is to leave the field data beyond the source LV alone. Specifying this option
will cause those destination field elements beyond the source LV to be populated with the FILL byte
values.
-f
Skip copying of FREE records from the source to the destination.
-i
Ignore conversion errors. For example, when a field is changed from C* in the source to I* in the
destination, Hdb cannot convert from a string field to an integer value, so normally the copy
operation is terminated at this point. Specifying this option forces Hdb to ignore this kind of
conversion error to continue on with the copy operation.
-l
Insert after the last sibling record. When the dbpaste command is issued at the parent position
(instead of at a valid record position of the marked record), this option tells hdbrio to insert the
records after all siblings under this particular parent.
-m
Skip copying of multidimensional fields from the source to the destination.
-o
Copy the OID field from the source to the destination. By default, new OIDs are generated when
records are inserted into the destination, and it is not necessary to copy the OID from the source.
-s
Mark the child subtree for the copy.
After the command is issued, the number of records for each record type at the current position will
be displayed.
-z
If specified, all indirect pointers are set to zero in the destination during the paste operation.
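Example
To illustrate the copy-and-paste flow (the record names, subscripts, and database numbers are
hypothetical), a session might mark a subtree in one open database and paste it into another:
rio> position substn=2
rio> dbcopy -s 1
rio> dbset 2
rio> position substn=5
rio> dbpaste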
See Also
hdbrio command: dbopen
hdbrio command: dbclose
hdbrio command: dbpaste
hdbrio command: position
Syntax
rio> dbclose <database number>
Argument
database number [required]
A number corresponding to a specific database that will be closed.
Options
None.
See Also
hdbrio command: dbopen
hdbrio command: dbpaste
hdbrio command: list
Syntax
rio> dbopen -a <application> -f <family> -d <database> (open clone)
rio> dbopen -r <archive file> -d <database> (open archive)
Arguments
-a <application>
Specifies the application of the clone. Required when opening a clone database.
-f <family>
Specifies the family of the clone. Required when opening a clone database.
-r <archive file>
Specifies the filename of the archive file. Required when opening an archive file database.
-d <database>
Specifies the database name. Required for opening either a clone or an archive file database.
Options
None.
See Also
hdbrio command: dbclose
hdbrio command: dbset
Syntax
rio> dbpaste [options]
Arguments
None.
Options
-k [optional]
Keeps the source information from dbcopy so the dbpaste command can be issued again with the
same source. If not specified, the source information for the copy is cleared, and issuing a dbpaste
command at this time will result in an error indicating that there is no source information initialized
for the copy.
See Also
hdbrio command: dbcopy
hdbrio command: dbset
Syntax
rio> dbset [database number]
Argument
database number [optional]
Specifies the database to switch to. If not specified, all of the currently opened databases are listed.
Options
None.
See Also
hdbrio command: dbopen
hdbrio command: dbclose
Syntax
rio> delete [options]
Options
-n number
The number of records to delete.
-y
Represents “Yes”. Confirms that hdbrio is to delete the selected records from the database. Once the
delete command is successfully issued, the following message appears:
Deleting x records (including all subtrees) under x. Delete records [Y/N]?
Syntax
rio> down [number]
Argument
number
Specifies the number of records to go down the subtree. If the number is an asterisk (“*”), then hdbrio
positions to the last record in the subtree under the anchor record.
Options
None.
Example
In this example, the substn record is the anchor record and the device record is the current record
beneath the anchor record:
substn(2). . .device(4)>
Syntax
rio> echo [options]
Options
-c ON|OFF
Turns on or off command echo. In batch mode, the default is ON. In interactive mode, the default is
OFF.
-f <fname>
Echoes the commands (only) to a file specified by fname. Useful in creating hdbrio scripts.
-o <fname>
Echoes the output of displaying records or fields to a file fname. Useful in creating benchmarks.
-p ON|OFF
Turns on or off the prompt. The default is ON in interactive mode and OFF in batch mode.
Syntax
rio> exit
Options
None.
See Also
hdbrio command: quit
Syntax
rio> find key=value [key1=value1,key2=value2,. . .]
Arguments
key
Key field.
value
Key field value of a particular record.
Options
None.
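Example
For example (the key field name and value are hypothetical):
rio> find id_substn=ALPHA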
Syntax
rio> help [command]
Argument
command
Enter the name of the command to obtain specific information about that command.
Options
None.
Syntax
rio> insert [options] [record]
Argument
record
Specifies the Hdb record type to insert.
Options
-n <number>
Specifies the number of records to insert.
-l
Inserts a child as last child in current subtree.
-r
Inserts a FREE record at the root level.
Syntax
rio> list [options]
Options
-f field
Lists field attributes of the named field. The attributes include type, dimension, size, fill byte, pointer,
and key. The syntax of field is a fully qualified field name (e.g., id_devtyp).
-r record
Lists record attributes of the named record. The attributes include timestamp, parent status, LV, MX,
and record type (e.g., circular).
-p [part]
Lists the attributes of the named partition component. The attributes include timestamps. If no
component name is given, then hdbrio lists all of the partitions in the database.
-s
Lists the definition and timestamps of the currently accessed database.
-h
Lists the record hierarchy.
-m
Lists the multidimensional fields found in the database.
-d
Lists all the currently opened databases. Same as the dbset command without any argument.
Syntax
rio> position record[=subscript]
Arguments
record
Specifies the record type.
subscript
Specifies the record’s position within a record structure.
Options
None.
Syntax
rio> quit
Synopsis
Exits hdbrio.
Options
None.
See Also
hdbrio command: exit
Syntax
rio> read [record=subscript]
Arguments
record
Indicates the record to display.
subscript
Specifies the record’s position within a record structure.
Options
-p
Displays pseudo fields.
-a
Displays all fields.
-h
Displays numbers in hexadecimal format.
-d
Displays numbers in decimal format.
-o
Displays numbers in octal format.
Syntax
rio> reset
Arguments
None.
Options
None.
Syntax
rio> setstamp -d (Set database timestamp)
rio> setstamp -p <partition> <type> (Set partition timestamp)
rio> setstamp -r <record> (Set record timestamp)
rio> setstamp -f <field> <type> (Set partition stamp using field)
Arguments
-d
Sets the database timestamp. Use “list -s” to view the database timestamp.
-p <partition> <type>
Sets the partition timestamp. The partition argument is the name of the partition. The type is one of
ARCHIVE, BACKUP, ENTRY, UPDATE, or WRITE. Use “list -p <partition>” to view the partition timestamp.
-r <record>
Sets the record timestamp. The record argument specifies the record name. Use “list -r <record>” to
view the record timestamp.
-f <field> <type>
Sets the partition timestamp using the field. The field argument identifies the field whose partition’s
timestamp is to be changed. The type argument is one of ARCHIVE, BACKUP, ENTRY, UPDATE, or
WRITE. Use “list -f <field>” to view the partition and partition timestamp for a given field.
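Example
For example (the partition name is illustrative), the following commands set the UPDATE time stamp
of a partition and then display it:
rio> setstamp -p CLSLODIS UPDATE
rio> list -p CLSLODIS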
Syntax
rio> up [number]
Argument
number
Specifies the number of records to go up the subtree. If an asterisk (“*”) is entered in the number field,
then hdbrio is positioned directly to the anchor record.
Options
None.
Syntax
rio> verify options [dbobject]
Argument
dbobject [optional]
The database object to perform verify on. If no database is specified, all objects selected by the option
value are verified. The dbobject can be a record type, a parent/child pointer field, or an indirect
pointer field, depending on the selected option.
Options
-a
Verifies all pointer fields: parent, child, and indirect pointers. The dbobject can be a record type, a
parent pointer, a child pointer, or an indirect pointer field.
-i
This option only verifies indirect pointer fields.
-p
Verifies parent pointer fields. The dbobject can be a child record type or a parent pointer field.
Syntax
rio> where [options]
Options
-s
Lists the children, or the subtree below the current record position.
-d [number]
Optionally used with -s to limit the number of sublevels shown.
-f
Lists the FREE record subtree below the root.
-c
Lists the number of records for each record type in the subtree.
Syntax
rio> zero [options]
Options
-y
Represents “Yes”. Confirms that hdbrio is to zero (delete) all records in the database. Enter “Y” (or “N”
for no) when the following confirmation is displayed:
Initializing Entire database database_app_family. Do you want to continue
[Y/N]?
Syntax
rio> + [number]
Argument
number
Specifies the number of records to increment.
Options
None.
Syntax
rio> - [number]
Argument
number
Specifies the number of records to decrement.
Options
None.
Arguments
field
Identifies the field within a record.
value
Identifies a value for the field.
<null> is the value to set a field to null.
Options
To control the appearance of the output, select one of the following options:
-a
Displays all fields, including pointer fields and pseudo fields.
-c
Performs a constraint check on data entry if field constraint is defined in the database definition for
this field.
-d
Displays numbers in decimal format.
-h
Displays numbers in hexadecimal format.
-o
Displays numbers in octal format.
Syntax
hdbserver -mxclones <mxclones> <other-parameters>
Environment Variables
HABITAT_GROUP
The HABITAT_GROUP variable must be set to the name of the HABITAT group.
HABITAT_CDBROOT
The HABITAT_CDBROOT variable must be set to the root directory of the HABITAT group.
HABITAT_MXCLONES
The HABITAT_MXCLONES variable is another way to set the maximum number of clones allowed
online. This variable can be overridden by the -mxclones command-line option.
HABITAT_SERVER_HANGUP
Optional, Linux only: The valid values are Y to abort when a SIGHUP is received, or N to ignore the
SIGHUP signal. The default is N.
Parameters
-hangup [optional]
The -hangup parameter specifies handling of the hangup signal (SIGHUP) by the hdbserver process. If
the -hangup parameter is specified, then hdbserver aborts when it receives the SIGHUP signal. This
feature is used in Linux only. The default setting is -nohangup.
-nohangup [optional]
The -nohangup parameter specifies handling of the hangup signal (SIGHUP) by the hdbserver
process. If -nohangup is specified, then hdbserver ignores the SIGHUP signal. This feature is used in
Linux only. The default setting is -nohangup.
Disk and page file space have individual requirements. A locking file is created that mirrors the
memory layout and is therefore roughly the same size as the memory layout.
The page file space is required because it is used as the backing store for the memory section.
Note: The page file is simulated on systems where page-file-backed file mapping is not supported. On
these systems, the page file space requirement merely applies to the disk space needed for the
backing store file created to simulate page file mapping.
Linux: In Linux, when using the -mxclones command-line option or the HABITAT_MXCLONES
environment variable to increase the maximum number of clones that can be placed online, the user
will need to shut down Habitat and remove the backing store file manually. This is due to the way
OSAL shared memory is implemented, where resizing the shared memory section does not
automatically adjust the size of the backing store file. Deleting the backing store file forces OSAL to
regenerate the backing store file. The backing store file is located in $HABITAT_DATAROOT/osalipc
(default) and the format of the filename is osm_<grp>_mcdb_<grp>, where <grp> is the
HABITAT_GROUP where the hdbserver is running.
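As a sketch of that procedure (the group name EMS1 and the clone count are illustrative, and Habitat
must already be shut down), the backing store file is deleted and hdbserver is restarted with the new
limit:
% rm $HABITAT_DATAROOT/osalipc/osm_EMS1_mcdb_EMS1
% hdbserver -mxclones 64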
-append [optional]
This parameter appends output to a specific file, if it exists. If the file does not exist, then it is created.
You can append and redirect output to a file using the append form of the redirection operator, as in
the following example:
hdbserver -verbose 3 >> server_log_file.txt
-debug [optional]
This parameter sets the server in debug mode. Debug mode sets wait events to use minimum timeout
periods during the main processing loop, so that the interactive debugger can be used to debug and
analyze the server.