
Digital Energy

Hdb
User's Guide

Version: Habitat 5.11


Document Date: December 6, 2019
Copyright and Proprietary Information

Copyright 2019 General Electric Company and/or its affiliates (“GE”). All Rights Reserved. This document is the
confidential and proprietary information of GE and may not be reproduced, transmitted, stored, or copied in
whole or in part, or used to furnish information to others, without the prior written permission of GE.
Contents
About This Document ....................................................................................................... xii
Purpose of This Document ............................................................................................................................ xii
Who Should Use This Document ................................................................................................................ xii
Structure of This Document ......................................................................................................................... xiii
For More Information ..................................................................................................................................... xiv
Conventions ........................................................................................................................................................ xv
Change Summary ............................................................................................................................................. xv

1. Hdb Overview ................................................................................................................... 1


1.1 Developer Activities .................................................................................................................................... 2
1.1.1 Preparing Source Definitions............................................................................................................ 2
1.1.2 Presenting Files to the Cloner .......................................................................................................... 2
1.1.3 Tracking Schema in the Schema Dictionary............................................................................... 3
1.1.4 Using Hdbformat................................................................................................................................... 4
1.1.4.1 Using a Clone Schema Source .................................................................................................. 4
1.1.4.2 Using the DBDEF Source File ...................................................................................................... 5
1.1.5 Using Hdbcloner .................................................................................................................................... 5
1.1.5.1 Creating Clones ............................................................................................................................... 6
1.1.5.2 Replacing Clones ............................................................................................................................ 6
1.2 System Administrator Activities ............................................................................................................. 7
1.2.1 Cloning Operations .............................................................................................................................. 7
1.2.1.1 Creating or Removing Clones .................................................................................................... 7
1.2.1.2 Moving Clones ................................................................................................................................. 7
1.2.1.3 Updating a Clone After a Schema Change ........................................................................... 7
1.2.2 Managing Data ...................................................................................................................................... 7
1.3 Managing the Hdb Server ........................................................................................................................ 9

2. Environment ................................................................................................................... 10
2.1 Platform Dependencies .......................................................................................................................... 10
2.2 Variable Descriptions ............................................................................................................................... 10

3. hdbcloner ........................................................................................................................ 15
3.1 Creating Clones .......................................................................................................................................... 15
3.2 Loading Schema ........................................................................................................................................ 15
3.3 Removing Schema .................................................................................................................................... 16
3.4 Removing Clones ....................................................................................................................................... 16
3.5 Replacing Clones ....................................................................................................................................... 16
3.6 Segment Page Alignment....................................................................................................................... 17
3.7 Replication ................................................................................................................................................... 18
3.8 Database Schema Version Titles ......................................................................................................... 19
3.9 Importing Clones from Another HABITAT Group ........................................................................... 19
3.10 Moving Clones Within a HABITAT Group ........................................................................................ 20
3.11 Converting a Clone to an Archive File ............................................................................................. 20

4. hdbcopydata .................................................................................................................. 22
4.1 Container Specifications ......................................................................................................................... 22
4.1.1 Clone Container Specification ........................................................................................................ 22
4.1.2 Archive File Specification ................................................................................................................. 23
4.1.3 Savecase Specification ..................................................................................................................... 24
4.2 Copy Context ............................................................................................................................................... 25
4.3 Stamps .......................................................................................................................................................... 26
4.3.1 Schema Definition Stamps .............................................................................................................. 26
4.3.2 Partition Change Stamps ................................................................................................................. 26
4.3.2.1 Archive Stamp ............................................................................................................................... 27
4.3.3 Record Time Stamps and Database Stamp .............................................................................. 27
4.4 Same Definition Database Copy .......................................................................................................... 27
4.5 Compatible Definition Database Copy .............................................................................................. 28
4.6 Incompatible Database Definition Copy........................................................................................... 29
4.7 Partition Copy ............................................................................................................................................. 29
4.7.1 Whole Database Fast-Mode Partition Copy ............................................................................. 29
4.7.2 Partial Archive or Savecase Partition Copy .............................................................................. 29
4.7.3 Copying Named Partitions from a Container to a Clone ..................................................... 30
4.7.4 Copying Partitions within a Clone ................................................................................................ 30
4.8 Field Copy ..................................................................................................................................................... 30
4.8.1 Source and Destination Fields Do Match by Name ............................................................... 30
4.8.2 Source and Destination Fields Do Not Match by Name ....................................................... 30
4.8.3 Field Copy Algorithm ......................................................................................................................... 31
4.8.4 Field Data Type Conversion ............................................................................................................ 32
4.8.4.1 No Conversion Support Rule Procedure .............................................................................. 32
4.8.4.2 Character Conversion................................................................................................................. 32
4.8.4.3 Numeric Conversion Error Handling ..................................................................................... 33
4.9 Selcop Copy ................................................................................................................................................. 33
4.9.1 Modifying Selcop Fields .................................................................................................................... 34
4.9.2 Selcop Example ................................................................................................................................... 35
4.9.3 Using Selcop ......................................................................................................................................... 36
4.10 Compatibility with Savecases from Previous Releases ............................................................ 37
4.10.1 Habitat 5.x Versions Prior to 5.8 ................................................................................................. 37
4.11 Truncation and Datafill ......................................................................................................................... 38
4.12 Examples .................................................................................................................................................... 38
4.12.1 Example #1: Creating a Savecase File...................................................................................... 39
4.12.2 Example #2: Retrieving a Savecase File .................................................................................. 39
4.12.3 Example #3: Retrieving a Savecase File using the File Path Name ............................... 39
4.12.4 Example #4: Creating an Archive File Containing all Databases of a Given Clone . 40
4.12.5 Example #5: Creating an Archive File Containing a Single Database of a Given Clone ......... 40
4.12.6 Example #6: Creating an Archive File Containing a Single Partition of a Given Clone ......... 40
4.12.7 Example #7: Copying a Database from One Clone to another Clone .......................... 41
4.12.8 Example #8: Copying a Database on Top of another Database from Clone to Clone ......... 41
4.12.9 Example #9: Copying a Field from Clone to Clone ............................................................... 41

5. hdbdocument ................................................................................................................. 42
5.1 Document Layout...................................................................................................................................... 42
5.2 Examples ...................................................................................................................................................... 42

6. hdbdump ......................................................................................................................... 43
6.1 hdbdump Uses ........................................................................................................................................... 44
6.2 Source Examples ....................................................................................................................................... 44
6.3 Action Examples ........................................................................................................................................ 45
6.4 Naming Database Objects Examples ................................................................................................ 45

7. hdbexport ........................................................................................................................ 46
7.1 Reasons to Export Data from a Database ....................................................................................... 46
7.2 Functional Overview ................................................................................................................................ 46
7.3 Operational Modes ................................................................................................................................... 47
7.3.1 Default Mode ........................................................................................................................................ 48
7.3.2 Record Mode......................................................................................................................................... 48
7.3.3 Field Mode ............................................................................................................................................. 48
7.3.4 Pattern Mode ........................................................................................................................................ 49
7.4 Data Output Format ................................................................................................................................. 49
7.4.1 Record Format ..................................................................................................................................... 49
7.4.1.1 Exporting Special Fields in a Record ..................................................................................... 50
7.4.2 Field Element Format ........................................................................................................................ 50
7.4.3 Data Types of Exported Values ...................................................................................................... 50
7.5 Example Uses.............................................................................................................................................. 51
7.5.1 Export/Import in Default Mode ...................................................................................................... 51
7.5.2 Export/Import in Field Mode ........................................................................................................... 52
7.5.2.1 Exporting the Whole Database in Field Format ................................................................ 52
7.5.2.2 Selecting Particular Fields for Export .................................................................................... 52
7.5.2.3 Editing Data in a File and Importing Changes Back Into the Database .................. 53
7.5.3 Export, Fix, and Import a Parent Pointer Field ......................................................................... 53
7.5.4 Export/Import a Subtree .................................................................................................................. 53
7.5.4.1 Exporting a Subtree ..................................................................................................................... 54
7.5.4.1.1 Example #1: Extract subtree A(1) ..................................................................................... 54
7.5.4.1.2 Example #2: Extract subtree B(3) ..................................................................................... 54
7.5.4.2 Importing a Subtree .................................................................................................................... 54
7.5.4.2.1 Example #1: Inserting a default subtree ....................................................................... 55
7.5.4.2.2 Example #2: Inserting a subtree using the key field ................................................. 55
7.5.5 Export a Field for Use in Excel ........................................................................................................ 56
7.5.5.1 Example #1: Default .................................................................................................................... 56
7.5.5.2 Example #2: No record name .................................................................................................. 56
7.5.5.3 Example #3: No record name and no field names .......................................................... 57
7.5.5.4 Example #4: No record name, no field names, and no subscripts ............................ 57
7.5.6 Export a Range of Data .................................................................................................................... 57
7.5.6.1 Example #1: Extract entry 2-to-3 from record A .............................................................. 57
7.5.6.2 Example #2: Extract entry start-to-3 from record A ....................................................... 58
7.5.6.3 Example #3: Extract entry 3-to-end from record A ......................................................... 58

7.5.7 Export Using a Record Pattern ...................................................................................................... 58
7.5.7.1 Create a pattern file .................................................................................................................... 58
7.5.7.2 Edit the pattern file ...................................................................................................................... 59
7.5.8 Use Declaratives ................................................................................................................................. 59

8. hdbformat ....................................................................................................................... 62
8.1 Subschema Files ........................................................................................................................................ 62
8.1.1 Fortran 90 Subschema Files ........................................................................................................... 62
8.1.1.1 hdbdb_dbname.f90 ..................................................................................................................... 62
8.1.1.2 dp_dbname.inc ............................................................................................................................. 62
8.1.1.3 dp_dbname_pname.inc............................................................................................................. 63
8.1.1.4 dx_dbname.inc.............................................................................................................................. 63
8.1.1.5 db_dbname.inc ............................................................................................................................. 63
8.1.2 C/C++ Subschema Files .................................................................................................................... 63
8.2 Subschema File Management .............................................................................................................. 63
8.2.1 Recommended Practices ................................................................................................................. 64
8.3 Examples ...................................................................................................................................................... 64
8.3.1 Example #1: Format from a single DBDEF source with defaults ...................................... 65
8.3.2 Example #2: Format from two DBDEF sources ....................................................................... 65
8.3.3 Example #3: Format multiple files using wildcards ............................................................... 65
8.3.4 Example #4: Format for C language only .................................................................................. 65
8.3.5 Example #5: Format C and Fortran 90 using F77 fixed source statement format .... 65

9. hdbimport........................................................................................................................ 66
9.1 Reasons to Import Data Into a Database ........................................................................................ 66
9.2 Creating the Input Data File .................................................................................................................. 66
9.2.1 Sample of an Input File Fragment ................................................................................................ 68
9.3 Hdbimport Concepts ................................................................................................................................ 69
9.3.1 Field Lines and Record Lines .......................................................................................................... 69
9.3.2 Methods of Record Insertion and Update ................................................................................. 70
9.3.3 Declaratives .......................................................................................................................................... 70
9.3.4 Multiple Input Files ............................................................................................................................. 71
9.3.5 Other Command-Line Options....................................................................................................... 71
9.4 Input File Format ....................................................................................................................................... 71
9.4.1 Line Continuation Character .......................................................................................................... 72
9.4.2 Comment Lines .................................................................................................................................... 72
9.4.3 Record Lines ......................................................................................................................................... 72
9.4.3.1 Record Line Syntax ...................................................................................................................... 73
9.4.3.2 Using an Alternative Record Line Format ........................................................................... 73
9.4.4 Field Line ................................................................................................................................................ 74
9.5 Specification of Field Values.................................................................................................................. 74
9.5.1 Character Data .................................................................................................................................... 75
9.5.2 Numeric Data ....................................................................................................................................... 75
9.5.3 Keyword Data ...................................................................................................................................... 76
9.5.4 Boolean Data ........................................................................................................................................ 76
9.5.5 Calendar Date and Calendar Timedate...................................................................................... 76

9.5.6 Field Value Conversion Rule............................................................................................................ 77
9.5.7 Null Data Rules .................................................................................................................................... 78
9.6 Declaratives................................................................................................................................................. 78
9.6.1 Operational Scope of Declaratives .............................................................................................. 78
9.6.2 Syntax of a Declarative..................................................................................................................... 79
9.6.3 Summary of Declaratives ................................................................................................................ 79
9.7 Declaratives Reference ........................................................................................................................... 79
9.7.1 Details About the #record Declarative ....................................................................................... 80
9.7.2 Details About the #boolean Declarative .................................................................................... 81
9.8 Positioning a Record for Insert and Update .................................................................................... 81
9.8.1 Positional Insert for Non-Hierarchical Records ....................................................................... 82
9.8.2 Positional Insert for Hierarchical Records ................................................................................. 82
9.8.2.1 Inserting Into an Empty Database ......................................................................................... 83
9.8.2.2 Inserting Into a Non-Empty Database ................................................................................. 83
9.8.2.3 With Default or Atend Specified.............................................................................................. 83
9.8.2.4 With Atstart Specified ................................................................................................................. 84
9.8.2.5 First Visit Rule Still Applies for Hierarchical Insert............................................................ 84
9.8.3 Positional Insert for Circular Records .......................................................................................... 84
9.8.4 Positional Insert for MaxLV Records ............................................................................................ 84
9.8.5 Key Insert for Hierarchical Records ............................................................................................. 84
9.8.5.1 Inserting a Subtree ...................................................................................................................... 85
9.8.5.2 Multiple Parent Records ............................................................................................................. 85
9.8.6 Key Update for Hierarchical Records .......................................................................................... 86
9.8.7 Subscript Update for Records ........................................................................................................ 86
9.8.8 Subscript Update for Fields ............................................................................................................. 87
9.9 Example Uses.............................................................................................................................................. 87
9.9.1 Inserting with the #keys Declarative........................................................................................... 87
9.9.2 Updating with the #keys Declarative .......................................................................................... 89

10. hdbrio ............................................................................................................................. 91


10.1 Starting hdbrio ......................................................................................................................................... 91
10.2 Exiting hdbrio ........................................................................................................................................... 91
10.2.1 Getting hdbrio Help ......................................................................................................................... 92
10.3 Syntax Considerations .......................................................................................................................... 92
10.4 Using an hdbrio Script........................................................................................................................... 92
10.5 Using hdbrio Interactively ................................................................................................................... 93
10.6 Working with Multiple Databases .................................................................................................... 93
10.6.1 Listing all Open Databases ........................................................................................................... 94
10.6.2 Opening a Database ....................................................................................................................... 94
10.6.3 Switching Between Databases ................................................................................................... 95
10.6.4 Closing a Database.......................................................................................................................... 95
10.7 Indicating hdbrio's Record Position ................................................................................................. 96
10.8 Accessing an Hdb Archive File ........................................................................................................... 96
10.9 Navigation ................................................................................................................................................. 96
10.9.1 Navigating a Subtree ...................................................................................................................... 97
10.10 Changing Current Record Position ................................................................................................ 97

10.11 Inserting Records ................................................................................................................................. 97
10.11.1 Inserting Non-Hierarchical Records ....................................................................................... 98
10.11.2 Inserting Hierarchical Records ................................................................................................. 98
10.11.3 Inserting FREE Records ................................................................................................................ 99
10.12 Copying Records ................................................................................................................................... 99
10.12.1 Copying Records in the Same Database .............................................................................. 99
10.12.2 Copying Records from One Database to Another .......................................................... 100
10.13 Deleting Records ............................................................................................................................... 101
10.14 Positioning to a Record Using Key Field Value ....................................................................... 101
10.15 Incrementing the Current Record ............................................................................................... 102
10.16 Decrementing the Current Record ............................................................................................. 102
10.17 Reading a Record and Displaying its Contents ...................................................................... 102
10.18 Displaying or Editing Record Fields ............................................................................................ 103
10.18.1 Displaying the Contents of a Record's Field ..................................................................... 103
10.18.2 Changing the Contents of a Record's Field ....................................................................... 104
10.18.3 Setting Field Values to Habitat Null ..................................................................................... 104
10.19 The hdbrio Commands .................................................................................................................... 105

11. hdbcompare .............................................................................................................. 107


11.1 Comparison Capability ...................................................................................................................... 107
11.2 Comparison Limitations .................................................................................................................... 107
11.3 Detailed Comparison Output File................................................................................................... 108
11.3.1 Indirect Field Differences ............................................................................................................ 109
11.3.2 Sample CSV Output File .............................................................................................................. 109
11.4 Field Description Configuration File .............................................................................................. 110
11.5 Database Context Field Configuration File ................................................................................ 112

12. System Administration ........................................................................................... 113


12.1 System Administration Tasks .......................................................................................................... 113
12.2 Startup and Shutdown ....................................................................................................................... 113
12.2.1 About hdbserver and Online/Offline Clones........................................................................ 113
12.2.2 hdbserver Startup Scenarios .................................................................................................... 114
12.2.2.1 Normal Hdbserver Startup Command............................................................................ 115
12.2.3 hdbserver Shutdown .................................................................................................................... 116
12.2.3.1 Terminate Versus Kill ............................................................................................................. 116
12.2.3.2 hdbserver Shutdown Using the Kill Command ............................................................ 116
12.2.3.3 hdbserver Shutdown Using Habitat tview .................................................................... 116
12.3 Clone Administration .......................................................................................................................... 116
12.3.1 Converting a Clone to an Archive File ................................................................................... 117
12.3.2 Moving a Clone from One HABITAT Group to Another .................................................... 117
12.3.3 Preparing a Clone for an SPR Submission ........................................................................... 118
12.3.4 Replication Set Administration ................................................................................................. 119
12.3.4.1 Creating a New Clone Marked for Replication............................................................. 119
12.3.4.2 Marking Existing Clones for Replication ......................................................................... 119
12.3.4.3 Removing Clones from the Replication Set ................................................................... 120
12.4 Clone Directory Administration ...................................................................................................... 120

12.4.1 Create Clone Directory File During HABITAT Group Initialization ............................... 120
12.4.2 Update Clone Directory When Importing from Another HABITAT Group ................ 121
12.4.3 Update or Correct Lost or Damaged Clone Directory File ............................................. 122
12.4.4 Create Clone Directory File Outside a HABITAT Group.................................................... 122
12.4.5 Clone Directory File....................................................................................................................... 123
12.4.5.1 Clone Directory Format ........................................................................................................ 123
12.4.5.2 Clone File Scanning ................................................................................................................ 124
12.4.5.3 Concurrent Access to the Clone Directory File ............................................................ 124
12.5 Hdb Files and On-Disk Structure .................................................................................................... 124
12.5.1 File Types .......................................................................................................................................... 125
12.5.1.1 Clone Files .................................................................................................................................. 125
12.5.1.2 Archive Files .............................................................................................................................. 125
12.5.1.3 Savecase Files .......................................................................................................................... 126
12.5.1.4 DNA Files .................................................................................................................................... 127
12.5.1.5 Core Data Files ......................................................................................................................... 128
12.5.1.6 Clone Directory Files .............................................................................................................. 128
12.5.1.7 Clone Locking Files ................................................................................................................. 128
12.5.2 On-Disk Directory Structure ...................................................................................................... 129
12.5.2.1 Hdb Cloning Database (CDB) Root Directory ................................................................ 129
12.5.2.2 Hdb Cloning Database .......................................................................................................... 129
12.5.2.3 Hdb Schema Dictionary ....................................................................................................... 130
12.5.2.4 Hdb Clones Default Directory............................................................................................. 130
12.5.2.5 Hdb Savecase Directory ....................................................................................................... 130
12.5.2.6 Setup of On-Disk Structure ................................................................................................. 130

13. Hdb Utilities Reference ........................................................................................... 132


hdbcloner -c create_clone ......................................................................................................................... 133
hdbcloner -c create_corefile ..................................................................................................................... 136
hdbcloner -c load_schema ........................................................................................................................ 137
hdbcloner -c offline_clone.......................................................................................................................... 138
hdbcloner -c online_clone .......................................................................................................................... 139
hdbcloner -c remove_clone ....................................................................................................................... 140
hdbcloner -c remove_schema.................................................................................................................. 141
hdbcloner -c rename_clone ...................................................................................................................... 142
hdbcloner -c show_clone ........................................................................................................................... 143
hdbcloner -c show_limits ........................................................................................................................... 145
hdbcloner -c show_schema ...................................................................................................................... 146
hdbcloner -c verify_schema ...................................................................................................................... 149
hdbcloner -h .................................................................................................................................................... 150
hdbcompare .................................................................................................................................................... 151
hdbcopydata ................................................................................................................................................... 154
hdbdirectory .................................................................................................................................................... 158
hdbdocument.................................................................................................................................................. 159
hdbdump .......................................................................................................................................................... 161
hdbexport ......................................................................................................................................................... 164
hdbformat ........................................................................................................................................................ 170

hdbimport ......................................................................................................................................................... 173
hdbimport Declarative Modifiers ............................................................................................................. 177
hdbrio ................................................................................................................................................................. 180
hdbrio command: backup .......................................................................................................................... 182
hdbrio command: checkpoint ................................................................................................................... 183
hdbrio command: dbcopy .......................................................................................................................... 184
hdbrio command: dbclose ......................................................................................................................... 186
hdbrio command: dbopen .......................................................................................................................... 187
hdbrio command: dbpaste......................................................................................................................... 188
hdbrio command: dbset .............................................................................................................................. 189
hdbrio command: delete............................................................................................................................. 190
hdbrio command: down .............................................................................................................................. 191
hdbrio command: echo ............................................................................................................................... 192
hdbrio command: exit .................................................................................................................................. 193
hdbrio command: find ................................................................................................................................. 194
hdbrio command: help ................................................................................................................................ 195
hdbrio command: insert.............................................................................................................................. 196
hdbrio command: list ................................................................................................................................... 197
hdbrio command: position ......................................................................................................................... 198
hdbrio command: quit ................................................................................................................................. 199
hdbrio command: read ................................................................................................................................ 200
hdbrio command: reset ............................................................................................................................... 201
hdbrio command: setstamp ...................................................................................................................... 202
hdbrio command: up .................................................................................................................................... 203
hdbrio command: verify .............................................................................................................................. 204
hdbrio command: where ............................................................................................................................ 205
hdbrio command: zero ................................................................................................................................ 206
hdbrio command: + ...................................................................................................................................... 207
hdbrio command: – ....................................................................................................................................... 208
hdbrio command: / ....................................................................................................................................... 209
hdbserver.......................................................................................................................................................... 211

Figures
Figure 1. Hdb Overview Diagram .................................................................................................................... 1
Figure 2. Sample of an Input File Fragment ..............................................................................................68
Figure 3. Example of a Declarative ...............................................................................................................71

Tables
Table 1. Clone Specification .............................................................................................................................23
Table 2. Conversion Rules.................................................................................................................................32
Table 3. Operational Modes .............................................................................................................................47

Table 4. Data Types ............................................................................................................................................50
Table 5. Field Line Subscripts ..........................................................................................................................74
Table 6. Character Data ....................................................................................................................................75
Table 7. Numeric Data .......................................................................................................................................75
Table 8. Boolean Data ........................................................................................................................................76
Table 9. Field Value Conversion Rules .........................................................................................................77
Table 10. Summary of Declarative Commands .......................................................................................79
Table 11. Hdbrio Command Summary ..................................................................................................... 105
Table 12. DNA File Format ............................................................................................................................. 127
Table 13. Alignment Boundary Interpretation ....................................................................................... 134
Table 14. Level of Detail show_clone ........................................................................................................ 143
Table 15. Reports Level of Detail Settings ............................................................................................... 146
Table 16. Table of Errors ................................................................................................................................ 159
Table 17. hdbformat Command Syntax ................................................................................................... 170
Table 18. hdbformat Language Options .................................................................................................. 171
Table 19. Memory Sizing Requirements ................................................................................................... 212

About This Document
This document is supplied as a part of GE’s Habitat product.
This document discusses Hdb, the database engine for Habitat. Hdb functions are
described in this document, along with the following utilities (a brief invocation sketch
follows the list):
• The hdbcloner utility manages schema and clone instances.
• The hdbcopydata utility copies and/or archives databases and savecase files.
• The hdbdirectory utility is a system administration utility for managing Hdb groups as
they are created or copied (from one computer to another), and for repairing a corrupt
cloning database.
• The hdbdump utility dumps clone, archive, and savecase file data to a print file for
printing.
• The hdbexport utility exports data from an Hdb database to an ASCII file for import
into other Hdb databases or into other database systems.
• The hdbformat utility creates Fortran 90 INCLUDE and C language header files that
define database fields for an application.
• The hdbimport utility loads (imports) ASCII files into an Hdb database.
• The hdbrio utility is an interactive, command-based, database query and edit program
for viewing and modifying database data.
• The hdbserver utility is an Hdb system administration program that places an Hdb
group’s clone server online or offline at system startup or shutdown.
• The hdbdocument utility converts a database schema definition file (DBDEF) into a file
containing only the documentation and structure of the database schema.
• The hdbcompare utility compares the data between two Hdb databases. The Hdb
database can be inside a clone, a savecase, or a modeler archive file.
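As a brief illustration of how these utilities are invoked (a sketch only; the exact
arguments each command accepts are documented in the Hdb Utilities Reference chapter),
each utility is run from the command line. For example, on Linux:

% hdbcloner -h
% hdbcloner -c show_schema
% hdbcloner -c show_clone

The -h option displays hdbcloner’s help text, and the -c option selects the cloner
operation to perform.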

Purpose of This Document


This guide describes Hdb utilities and commands in detail. In addition, the Hdb Utilities
Reference chapter provides a way to quickly look up Hdb utility command descriptions,
including command syntax and parameters.

Who Should Use This Document


This document supports Hdb application programmers working in the Habitat environment.
This document also supports system administrators who may be required to use Hdb
utilities.

Structure of This Document
This document is structured in the following manner:
• Chapter 1 is an overview of the Hdb interface to the Habitat environment.
• Chapter 2 discusses the required Hdb environment variables.
• Chapter 3 provides information about the hdbcloner utility, which manages schema and
clones in a HABITAT group.
• Chapter 4 discusses the hdbcopydata utility, used to copy data between hdb data
containers.
• Chapter 5 gives details about the hdbdocument utility, which converts a database
definition source file into an ASCII formatted file.
• Chapter 6 discusses the hdbdump utility, used by developers and system administrators
when reviewing Hdb files.
• Chapter 7 explains the hdbexport utility, which exports the binary data from an Hdb
database to an ASCII formatted file.
• Chapter 8 discusses the hdbformat utility, which generates the database subschema
files.
• Chapter 9 offers information about the hdbimport utility, which imports data into an
Hdb database from one or more ASCII coded data files.
• Chapter 10 discusses the hdbrio utility, a database query tool that allows the user to
examine and modify Hdb databases.
• Chapter 11 supplies information about the hdbcompare utility, which compares two
Hdb database instances to report the differences between them.
• Chapter 12 describes Hdb system administration functions and tasks.
• Chapter 13 is a reference for Hdb utilities.

For More Information
For more information about Platform, refer to the following:
• Platform System Overview: An overall view of Platform, its components, their functions
and use, and how they are related functionally and structurally.
For more information about Habitat, refer to the following:
• Introduction to Habitat Programming: Basic information and instructions about the
development and management of Habitat applications.
For more information about Browser, refer to the following:
• Browser User’s Guide: An introduction to the concepts and features of Browser and
instructions for accessing and using displays.
For more information about NETIO, refer to the following:
• NETIO User’s Guide: Information about using and developing NETIO applications and
managing NETIO networks.
For more information about ESCATOOLS, refer to the following:
• ESCATOOLS User’s Guide: An introduction to the concepts and features of ESCATOOLS
and instructions for using ESCATOOLS procedures.
For more information about Hdb, refer to the following:
• Hdb Programmer’s Guide: A detailed description of the Hdb API to the Habitat database
management system in support of application development.



Conventions
The following conventions are used throughout this document. Commands that are specific
to an operating system are shown with the corresponding prompt symbol.

Command Prompts

Operating System    Prompt    Description
Linux               %         All commands preceded by a percent sign prompt (%) are issued from a Linux terminal window. Note that all Linux commands are case sensitive.
Windows             >         All commands preceded by a greater-than sign prompt (>) are issued from a Windows command line window.
All                 (none)    The absence of any prompt character before a command indicates that the command is valid on all operating systems.

Command Strings

Operating System    Delimiter    Description
Linux               Italics      Text in italics indicates information you must supply. (*)
Linux               [ ]          Text enclosed in square brackets "[ ]" indicates optional qualifiers, arguments, or data. (*)
All                 Select       When used in command strings, the term "Select" means placing the pointer over the specified item and pressing the left (default) mouse button.

(*) Note: All Linux commands are case sensitive and must be typed exactly as shown.

Change Summary
The following changes have been made to this document for Habitat 5.11:
• Added the “-d” option to the hdbrio where command to limit the number of subtree
levels shown.



1. Hdb Overview
This chapter is an overview of the Hdb interface to the Habitat environment. While reading
this chapter, refer to Figure 1.

Figure 1. Hdb Overview Diagram

Figure 1 shows the activity flow and procedures that the application developer and system
administrator follow when using Hdb. Above the dashed line are developer activities; below
the dashed line are developer and system administrator activities.

The various phases of development, analysis, and maintenance are briefly described in the
following sections, along with the numbered items in Figure 1. The discussion focuses first
on developer activities (those activities that occur above the dashed line in the figure) and
then on developer and system administrator duties (those activities that occur below the
dashed line in the figure).

1.1 Developer Activities


The term “developer” refers to a programmer who is responsible for designing application
databases and developing application code. The focus of this discussion is on Hdb;
however, there are other important aspects of developing applications in Habitat that are
not mentioned in the following discussion.
The reader is expected to be familiar with the concepts of Hdb databases, and to have a
basic understanding of Habitat and of developing applications using Hdb.
The reader should also be familiar with the Hdb Programmer’s Guide — in particular, the
“Introduction to Hdb Programming”, “Hdb Data Model”, “Developing Database Schema
Source Files”, and “Developing Clone Schema Source Files” chapters.

1.1.1 Preparing Source Definitions


The developer begins by preparing the source definitions (Figure 1, Item 1) that describe the
databases, the applications, and the clones. These source definitions are coded as ASCII
files using a standard text editor.
These source definitions are used together to define the application components used by
the cloner in creating clone files. The CLS source file defines an application schema (or a
clone schema), and the DBDEF source file defines a database schema.
(Also shown in Figure 1 is the MXDEF file. It is also a source file, but it is optional and not
discussed in this overview. For more information about the MXDEF file and its uses, see the
Hdb Programmer’s Guide.)
An application is defined by one CLS source file and one or more database DBDEF source
files. Each database used by the application is named in the CLS source file. DBDEF source
files can be shared among different applications.
Both the DBDEF and the CLS source files are prepared by a programmer. The methods used
in preparation, the description of the syntax, and the semantic rules are all described in the
Hdb Programmer’s Guide.

1.1.2 Presenting Files to the Cloner


The DBDEF and CLS files are presented to the cloner (Figure 1, Item 2) to be loaded into the
Hdb schema dictionary. Both database and application schema must be resident in the
schema dictionary before they can be used.

The cloner’s load schema command performs two functions:
1. The command validates the schema definition, issuing messages to report errors and
warnings.
2. The command compiles the source schema into a binary image for storage in the
dictionary. Compilation of the schema only takes place if there are no errors (warnings
are acceptable).
The cloner’s load schema command is executed through the hdbcloner utility using the
-c load_schema command (for more information about loading schema, see section 3.2
Loading Schema).

1.1.3 Tracking Schema in the Schema Dictionary


The schema dictionary (Figure 1, Item 3) is part of the cloning database for the HABITAT
group. It is a set of binary files organized together in a designated location established by
the system administrator. Dictionary files are accessed by Hdb utility programs, such as
hdbcloner.
The dictionary keeps track of four types of schema:
1. DBD is the database schema defined by the DBDEF source file. There is one DBD image
file in the dictionary for each database definition. A DBD schema image file is identified
using two names: the database name and the database title. Both the database name
and the database title are specified using the //DBDEF statement.
2. Application CLS is the application schema defined by the CLS source file. There is one
application CLS image file for each application definition defined by a CLS source file.
The application CLS is identified using the application name, which is specified using the
CLS CREATE APPLICATION statement.
3. Clone CLS is the clone schema defined by the CLS source file. There is one clone schema
image file for each clone schema definition defined by a CLS source file. The clone
schema references other application schemas; it is also used to create special multi-
application clones. A clone schema is a unique feature used only in special cases; for
more information about clone schema, consult the Hdb Programmer’s Guide.
4. The MXDEF file contains the MX (MAXIMUM) values to override the values defined for the
record types in the DBDEF files.
All subsequent functions requiring schema (e.g., creating clones, formatting subschema)
use schema dictionary images. The schema dictionary is the central repository for all other
Hdb command operations. Once the schema is loaded into the dictionary, the schema
source files are no longer referenced and are no longer needed. The source files can be
version controlled to ensure that the integrity of the application component definitions
remains intact.

1.1.4 Using Hdbformat
The Hdb format utility (Figure 1, Item 4) is used to prepare subschema required by
application source programs that use the Hdb API.
A subschema is the application program’s view of the database schema. It is called a
“subschema” because it does not specify the whole database, just the view of the database
used by the application program. A subschema is specified as a set of files coded in the
host language of Fortran 90 or C and included in the application source program. The
Fortran 90 coded program uses the INCLUDE statement to reference subschema files. The C
and C++ coded programs use the #include preprocessor statement.
The subschema files are commonly referred to as “include files” or “header files”. There are
several different types of subschema files. For more information about the different types of
subschema files and their organization and content, see the Hdb Programmer’s Guide.
Subschema files are created from the database schema that is resident in the dictionary.
After the schema is loaded into the dictionary, the hdbformat utility program can be used to
create the subschema files.
The steps to prepare the application program are:
1. Load DBDEF schema into the dictionary.
2. Create subschema files according to the host language requirements (F90 or C).
3. Compile application programs that reference subschema files via the INCLUDE or
#include statements.
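As a minimal illustration of steps 2 and 3 (the source file name myapp.c is hypothetical, and
the hdbformat parameters depend on the host language; see chapter 13 Hdb Utilities
Reference):
% hdbformat [parameters]
% cc -c myapp.c
Here, myapp.c references the generated subschema header files via #include statements.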
Whenever the database definition is changed (e.g., new database fields are added or the MX
of record types is changed), this procedure must be repeated. In addition, it is important that
the clones that reference the changed database schema be replaced using the new schema
(in the Hdb Utilities Reference chapter, see hdbcloner -c create_clone with the -replace
option), so that the clones are consistent with the application programs.
The hdbformat utility can use multiple schema sources; as an example, Figure 1 shows only
the dictionary source. For a list of other source options, see the reference page on hdbformat.

1.1.4.1 Using a Clone Schema Source


Figure 1 shows the schema source being taken from the dictionary and an optional source
being taken from a clone. The clone source is required only in one special circumstance.
Clones are normally created using a fixed database size specified by the database schema.
However, clones can also be created and custom sized according to the needs of the
application on a clone-by-clone basis, by using a schema modified with the MX defined by
an MXDEF file.
In most situations, when the clone is sized in this manner, the application programs are
coded using a dimension-independent method, and the size-dependent subschema files
are not used. (As explained in the Hdb Programmer’s Guide, the size-dependent subschema
files are the Fortran 90 partition INCLUDE files; the Fortran 90 main database INCLUDE file
and the C header files are not size-dependent.) Therefore, the regular dictionary source for
the schema satisfies the needs of the application.
However, if the application is coded using a database size-dependent method, such as the
partition I/O method, then it is important that the subschema be created from the actual
clone that contains the databases that will be accessed. For this reason, the hdbformat
program accepts schema input from a clone directly so that the actual MX sizes used can
be included in the formatting of the subschema files. Obviously, this binds the application
program image to the clone directly and, therefore, this is considered an unusual situation.

1.1.4.2 Using the DBDEF Source File


Not shown in Figure 1 is the fact that the hdbformat utility can also create subschema files
directly from the DBDEF source file. In fact, in previous releases of Habitat, this was the only
method of creating the necessary INCLUDE and header files.

1.1.5 Using Hdbcloner


The cloner utility (Figure 1, Item 5) is used to create clones, in addition to other functions
such as loading schema into the schema dictionary. One way to think of the cloner is as a
dictionary manager: it is responsible for loading schema into the dictionary and issuing
clones from the dictionary.
A “clone” is a collection of one or more databases that are used together by an application.
The application schema and database schema loaded into the schema dictionary are used
as a blueprint, again by the cloner, to create the clone. Physically, a clone is implemented as
a clone archive file containing one or more databases.
To create a clone, the necessary application schema and the database schema must be
resident in the schema dictionary. The schema is referenced in the following manner:
• The application name is given to the cloner when the clone is created.
• The application name is used to locate the application schema in the schema dictionary.
The application schema includes a list of databases used by the application; this database
list is used to locate the database schema in the dictionary.
The cloner also accepts a family name in addition to the application name. The family name
is an arbitrary name (arbitrary in that it does not refer to any other schema or component)
that is used to make the clone identification unique. Each clone is uniquely identified and
located within a HABITAT group by using the application name and the family name. The
application name and family name constitute the “clone context”. To fully specify an Hdb
database in a Habitat installation, it is necessary to specify the database name and the
clone context — i.e., database name, application name, and family name. (Among multiple
e-terrahabitat installations, the HABITAT group of the specific installation is also taken into
account in identifying a database; the HABITAT group is set during installation and is
identified by the HABITAT_GROUP environment variable.)
The term “family” is used because clones are commonly grouped together for some
common purpose. This group is referred to as a family of clones, and each clone in such a
group shares the same family name. Of course, you cannot have two clones of the same
application within the same family.

1.1.5.1 Creating Clones


Creating clones is considered both a developer task and a system administrator task. As
shown in Figure 1, the line of clone creation crosses the divider (dashed line). A developer
creates clones as part of the developer’s design, implement, debug, and test cycle. A system
administrator creates clones to service system requirements for application programs.
Whereas an application developer changes application or database schema frequently as
part of the development cycle, a system administrator seldom needs to change the
schema.
Because the developer is often changing schema and reloading changes into the
dictionary, it is important that clones be created in a manner consistent with the
application programs that are being developed as well. As already mentioned (Figure 1,
Item 4), application programs and clones both use the same database schema.
It is important that both applications and clones be built from the same common source
whenever the schema changes. Otherwise, the program will not be consistent with the
clone, and database definition stamp mismatch errors will be reported when the Hdb API is
used.

1.1.5.2 Replacing Clones


At times, it is necessary to replace a clone. A clone needs to be replaced if the schema
changes, or if the clone is to be moved from one location to another (i.e., from one directory
location to another). Replacing clones means that a new clone is created using the same
application and family names as the existing clone. Therefore, replacing a clone is really
creating a new clone, but with the additional task of copying database data from the
existing clone to the new clone. Once the data is successfully copied, the existing clone is
deleted, leaving the system with the newly created clone.
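For example, a clone of the SCADA application in the EMS family (names taken from
examples later in this guide) could be replaced with:
% hdbcloner -c create_clone -a scada -f ems -replace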

1.2 System Administrator Activities
The system administrator is responsible for maintaining the Habitat system. During
development, programmers or engineers may act as their own system administrator in
carrying out the tasks outlined below.

1.2.1 Cloning Operations


The system administrator may be required to perform cloning functions when creating or
removing clones, moving clones, or updating clones. The following sections discuss these
functions.

1.2.1.1 Creating or Removing Clones


The system administrator is often required to create new clones, as they are needed by the
system. New clones are created for local study and analysis. Also, clones can be created
and removed while managing subsystems such as Grid Simulation, Comm, or Historian.

1.2.1.2 Moving Clones


It is sometimes necessary to move a clone from one disk location to another disk location.
To move a clone, the cloner is executed to replace the clone by specifying a new directory
location for the clone.
In addition, it is sometimes necessary to move a clone from one HABITAT group to another
HABITAT group. This can be accomplished by copying the clone’s data into an archive file,
and then retrieving that archive data into the new HABITAT group’s copy of the clone.
However, when more than one clone needs to be copied in this manner, it is often easier to
move the clones themselves. This operation is called “importing” clones into a Habitat
system, and the cloner and the hdbdirectory utility are used for this task.

1.2.1.3 Updating a Clone After a Schema Change


Clones need to be updated if any of the schema components used by the clone change. To
update a clone, the new schemas must first be loaded into the schema dictionary. Then the
clone should be re-created using the “-replace” option. In effect, this creates a new clone
with the new schema, and the data in the old clone is copied to the new one.

1.2.2 Managing Data


Hdb data management is accomplished by copying data from one location to another
through various methods and techniques. Hdb provides four command utilities that are
used to manage data: hdbrio, hdbcopydata, hdbexport, and hdbimport.

The hdbrio utility is an interactive command processor that allows the user to view data,
edit data, and traverse the records in the database. The hdbrio utility is used primarily by
the developer.
The hdbcopydata utility is a database copying command that performs the following
database copies:
• Copy a whole clone to an archive file or retrieve a whole clone.
• Create an application savecase or retrieve an application savecase.
• Copy a specified database from one clone to another, from a clone to an archive file, or
from an archive file to a clone.
• Copy a specified partition from one clone to another, from a clone to an archive file, or
from an archive file to a clone.
• Copy a specified field from one clone to another clone or from an archive file to a clone
(you cannot create an archive file that contains just a single field).
The hdbexport utility is used to copy data from a clone or archive file to an ASCII formatted
data file. Binary data is formatted in ASCII by hdbexport.
The hdbimport utility is used to copy data from an ASCII data file to a clone, translating the
data from formatted ASCII to binary. The hdbimport utility cannot be used to change data in
an archive file.
The hdbexport and hdbimport utilities are often used together. Hdbexport and hdbimport
can copy a full database, one or more records, a specified hierarchical subtree of records,
or one or more fields. With hdbexport, you can specify data to be exported by key or by
record subscript range. With hdbimport, you can insert new records into the target
database or you can update existing records in the target database. With hdbimport, you
can also locate records by key, by subscript, or by ordinal position (i.e., first, second, third,
etc.).
Examples of using hdbexport and hdbimport include:
• Data can be archived offline using hdbexport and retrieved into the same clone or a
different clone with hdbimport.
The hdbexport/hdbimport method of archive save and retrieval is slower than using
hdbcopydata, but it is more robust and can handle a wider variety of database schema
changes. Hdbexport is the recommended method for long-term data archiving.
• Data can be exported from Hdb to be imported into another database system, such as
Oracle.
• Data can be exported from another database system, such as Oracle, and imported into
Hdb with hdbimport.
• Data can be exported so that it can be transformed by a text-scripting tool and then
used for another purpose.

An example of this method is when major hierarchical record schema changes are
performed on the database and the data itself must be transformed so that it can be
recovered. First, use hdbexport to export the data, then use scripting tools such as Perl
to transform the data, and finally use hdbimport to import the data into the new clone.
• Data can be exported and imported on a field-element–by–field-element basis.

1.3 Managing the Hdb Server


The system administrator is responsible for managing the Hdb server. The Hdb server is
responsible for maintaining the online cloning database that is needed by application
programs. The Hdb server must be started before any other application program is started.
The hdbserver program is typically started in a startup script as part of the system startup
procedure. Normally, the hdbserver is only shut down when the system itself is shut down.
The hdbdirectory utility program is used to repair or update a damaged clone directory. A
“clone directory” is the list of clones known to the HABITAT group. The clone directory can
be re-created or updated using the hdbdirectory command.

2. Environment
This chapter discusses the required Hdb environment variables.

2.1 Platform Dependencies


Setting operating environment variables is platform-dependent. Use the appropriate
method to set variables, as shown in the examples below:
• UNIX uses the setenv command under the C shell. For the Bourne shell or Korn shell, use
the export command.
• Windows uses the set command.
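For example, to set the HABITAT_GROUP variable (described in section 2.2) to the value 90
under each environment:
C shell:
% setenv HABITAT_GROUP 90
Bourne or Korn shell:
% export HABITAT_GROUP=90
Windows:
> set HABITAT_GROUP=90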

2.2 Variable Descriptions


HABITAT_GROUP [required]
This variable specifies the name of the HABITAT group. A HABITAT group identifies the
Hdb server. The name provides a unique code that identifies system resources.
The Habitat/Platform naming convention requires that the group name include two
numeric digits.
Example (UNIX):
% setenv HABITAT_GROUP 90
HABITAT_CDBROOT [required]
This variable specifies the name of the root directory for the Hdb cloning database. The
directory is required and is initialized at setup by the Habitat system administrator.
Example (Windows):
> set HABITAT_CDBROOT=G:\home\cdbroot
HABITAT_CLONES [optional]
This variable specifies the directory location of the clone file. If the directory location is
not specified, the location is defined by HABITAT_CDBROOT. Although this variable is
optional, its use is recommended.
The default clone directory specification must be given as a full, absolute file path.
Example (UNIX):
% setenv HABITAT_CLONES /var/habitat/group90/clones

HABITAT_CASES [required]
This variable specifies the directory location of the default savecase file. This variable
only applies to savecases. Hdb archive files can be created using any specified directory
location.
This variable is required if savecases are created.
The savecase directory location must be given as a full absolute path specification.
Example (UNIX):
% setenv HABITAT_CASES /var/habitat/group90/cases
HABITAT_APPLICATION [optional]
This variable specifies the default application name used to satisfy the clone database
context.
This variable is required if the default values are used by the utility programs or
application programs that use the Hdb API.
The application name can be specified directly, as shown in the following example, or
set with the Habitat context command.
Example (Windows):
> set HABITAT_APPLICATION=SCADA
HABITAT_FAMILY [optional]
This variable specifies the default family name used to satisfy the clone database
context.
This variable is required if the default values are used by the utility programs or
application programs that use the Hdb API.
Example (Windows):
> set HABITAT_FAMILY=EMS
HABITAT_CASE_COMPRESSION [optional]
This variable specifies the default compression behavior for the CASEMGR API and the
hdbcopydata utility. The valid values are Y to enable savecase compression, or N to
disable savecase compression. By default, savecase compression is enabled; to turn off
default compression, set this variable to N.
Example (Windows) – to turn off default compression:
> set HABITAT_CASE_COMPRESSION=N
HABITAT_MXCLONES [optional]
This variable specifies the maximum number of clones that can be put online by
hdbserver. If the number of clones exceeds the maximum, hdbserver will report an error
and abort. If not specified, the default maximum number of online clones is 150.

Example (Windows) – to set maximum clones to 200:
> set HABITAT_MXCLONES=200

Note: In UNIX, when using the -mxclones command-line option or the
HABITAT_MXCLONES environment variable to increase the maximum number of clones
that can be placed online, the user will need to shut down Habitat and remove the
backing store file manually. This is due to the way OSAL shared memory is
implemented: resizing the shared memory section does not automatically adjust the
size of the backing store file. Deleting the backing store file forces OSAL to regenerate
it. The backing store file is located in $HABITAT_DATAROOT/osalipc (default), and the
filename format is osm_<grp>_mcdb_<grp>, where <grp> is the HABITAT_GROUP
where the hdbserver is running.

HABITAT_DEFAULT_VERSION [optional]
This variable specifies the version title string of the database schema to use when
creating a clone. See the -version option of hdbcloner -c create_clone for more information.
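Example (UNIX) – using the sample version title from section 3.8:
% setenv HABITAT_DEFAULT_VERSION ABC_DTS_SYSTEM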
HABITAT_SERVER_HANGUP [optional]
This variable is specific to the UNIX operating system. The valid values are Y to abort
hdbserver if a SIGHUP is received, or N to ignore the SIGHUP signal. The default is N. (For
more information, see the hdbserver pages in chapter 13 Hdb Utilities Reference.)
HABITAT_DISABLE_BACKUP [optional]
This variable controls replication of data to the standby on a dual-redundant
configuration. The valid values are Y to disable backup (i.e., replication) and N to enable
backup. If not set, the default is N.
HABITAT_DISABLE_CHECKPOINT [optional]
Disabling database checkpointing may improve system performance since less disk I/O
will be performed, but it also increases the risk of data loss if a system crashes before
the data is written back to a disk file. The valid values are Y to disable checkpointing
and N to enable checkpointing. If not set, the default is N.
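Example (UNIX) – to disable checkpointing:
% setenv HABITAT_DISABLE_CHECKPOINT Y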
HABITAT_DISABLE_DBLOCKS [optional]
This variable controls whether database locks are disabled. If locks are disabled, the
lock API calls do nothing and the Hdb read/write functions do not lock the partitions
being accessed. The valid values are Y to disable database locks or N to enable
database locks. If not set, the default is N.
HABITAT_FULL_FIELD_REPLICATE [optional]
This variable controls whether the full extent of a field is replicated or just the changed
range. The valid values are Y to replicate the full extent or N to replicate only the
changed range. If not set, the default is Y.

HDBMONITOR_TIMEOUT [optional]
This variable specifies the timeout value, in seconds, that the hdbmonitor task will wait
for a response from hdbserver before timing out (when hdbserver has been running
normally). If this timeout occurs, the hdbmonitor task exits and triggers the shutdown
path, causing the system to shut down. The default timeout value is 120 seconds.
When monitoring hdbserver’s startup or shutdown, a different timeout applies: by
default, it is approximately 25% of the normal timeout (30 seconds), and it scales
proportionally when HDBMONITOR_TIMEOUT is defined.
HDBMONITOR_STARTUP_TIMEOUT [optional]
To override the default startup/shutdown timeout (25% of the normal timeout), define
this variable to specify the startup/shutdown timeout directly, in seconds. This can be
important if your system has many clones and/or large clones, which can lengthen
hdbserver’s startup or shutdown time.
HABITAT_DEBUG_CHECKPOINT [optional]
If set to Y, this variable turns on debug diagnostic messages for checkpointing.
HABITAT_DEBUG_DBLOCKS [optional]
If set to Y, this variable turns on debug diagnostic messages for database locking
actions.
HABITAT_DEBUG_HDBMRS [optional]
If set to Y, this variable turns on Hdb-to-MRS interface diagnostic messages.
HABITAT_DEBUG_TXN [optional]
If set to Y, this variable turns on Hdb transaction diagnostic messages.
HABITAT_HDBEXPORT_AUDITDIR [optional]
This variable redirects the HDBEXPORT audit log from its default directory,
HABITAT_LOGDIR, to the specified directory.
HABITAT_HDBRIO_AUDITDIR [optional]
This variable redirects the HDBRIO audit log from its default directory,
HABITAT_LOGDIR, to the specified directory.
HABITAT_HDBEXPORT_DISABLE_LOGGING [optional]
This variable disables audit logging of HDBEXPORT invocations.
HABITAT_HDBRIO_DISABLE_LOGGING [optional]
This variable disables audit logging of HDBRIO invocations.
HABITAT_MXDEFDIR [optional]
This variable is used in conjunction with HABITAT_MXTITLE by hdbcloner and hdbformat
to specify the directory where the MXDEF files are located. The MXDEF file specifies
alternate MX values for a given database schema. If not defined and if the -mxdef
command-line option is not present, hdbcloner and hdbformat will not look for an
MXDEF file.
HABITAT_MXTITLE [optional]
This variable is used in conjunction with HABITAT_MXDEFDIR by hdbcloner and
hdbformat to formulate the filename of the MXDEF file. The filename uses this format:
<dbschema_name>_<HABITAT_MXTITLE>.mxdef
where <dbschema_name> is the name of the database schema given in the //DBDEF
statement in the DBDEF file. The value of the environment variable HABITAT_MXTITLE
corresponds to the <HABITAT_MXTITLE> portion in the filename. If not defined or if the
generated filename does not exist in HABITAT_MXDEFDIR, the MX values defined in the
DBDEF are used without being overridden.
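Example (UNIX) – a sketch assuming a hypothetical directory /home/habitat/mxdef and
a database schema named scadamom; with these settings, hdbcloner and hdbformat
look for the file scadamom_small.mxdef:
% setenv HABITAT_MXDEFDIR /home/habitat/mxdef
% setenv HABITAT_MXTITLE small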
HABITAT_NOJOINTITLE [optional]
This variable is used by hdbcloner to determine whether the title from the DBDEF file
should be joined with the title in the MXDEF file, to create a unique title when loading the
database schema into the data dictionary. If this environment variable is not present,
hdbcloner by default will join the DBDEF title and the MXDEF title when loading a
database schema into the data dictionary. If this environment variable is present with
any arbitrary value, then only the version title in the DBDEF is used for the database
version title in the data dictionary.
HDBRIO_HAVE_COPY_CMD [optional]
Define this to re-enable the obsolete “copy” command in the hdbrio utility.
HDBRIO_DISABLE_COMMAND_LOGGING [optional]
Define this to disable hdbrio from logging every command into the log file in
HABITAT_LOGDIR.

3. hdbcloner
The hdbcloner program manages schema and clones in a HABITAT group. Through the
hdbcloner application, clone and database schema can be loaded and stored in the cloning
database. The hdbcloner application can also create, replace, or remove clones in the
HABITAT group. In addition, a “show” facility supports the display of database schema
stored in the schema dictionary or clones in the group.
The hdbcloner utility is a developer and system administrator tool.

3.1 Creating Clones


A primary function of hdbcloner is to create clones. The command to create a clone is:
hdbcloner -c create_clone [parameters]
Once a clone has been created, it can be accessed through the Hdb API. However, several
steps must occur before a clone can be created. These steps are:
1. Load application schema into the dictionary (for more information, see section 3.2
Loading Schema).
2. Load database schema into the dictionary (for each database referenced by the
application).
3. Choose an appropriate family name.
4. Create the clone (see the example following this list).
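For example, after steps 1–3 are complete, a clone could be created as follows (a minimal
sketch, assuming the SCADA application and the EMS family; the -a and -f parameters
identify the application and family, as in the replace example in section 3.5):
% hdbcloner -c create_clone -a scada -f ems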
Clones can be populated with data using several techniques, such as:
• Import ASCII formatted data using hdbimport (for more information, see chapter 9
hdbimport).
• Copy archive or savecase files using hdbcopydata (for more information, see chapter 4
hdbcopydata).
• Manually insert records or populate fields using hdbrio interactive commands (for more
information, see chapter 10 hdbrio).
Fortran 90 INCLUDE files must be consistent with the database definition that was used to
create the clone. To ensure application-to-clone consistency, INCLUDE files should be
formatted after the schemas have been loaded into the dictionary (for more information
about this topic, see chapter 8 hdbformat).

3.2 Loading Schema


The first step in creating a clone is to load the schema into the dictionary.
To load the schema, use the following hdbcloner command:
hdbcloner -c load_schema [parameters]

Loading schema means that the schema source files are read, data is parsed and verified,
and then a binary copy is created (DNA file) and stored in the database dictionary. Storing
schema in binary ensures fast and easy access to dictionary objects.

3.3 Removing Schema


A database schema can be removed when it is no longer required; a database schema is
no longer used after a clone is created. Schema can also be removed to facilitate a
rudimentary clone creation policy. Note, however, that a clone cannot be replaced once its
schema has been removed from the dictionary.
To remove the schema, use the following command:
hdbcloner -c remove_schema [parameters]

3.4 Removing Clones


Clones can be removed and deleted with the following command:
hdbcloner -c remove_clone [parameters]
However, an online clone can only be removed after displays, mapped applications, and
Hdb utilities have been disconnected from the clone.
Removing and deleting clones is a relatively simple task in a development environment;
however, in an operating environment, it can be a complex process.

3.5 Replacing Clones


Replacing a clone is, in fact, creating a new clone. A clone may need to be replaced or
updated in the following circumstances:
• If the application and/or database schema changes — such as adding or removing a
database, a record type, a field, or a partition — then the clone must be replaced.
• If you use MXDEF files to change the MX values associated with the clone, then the clone
must be replaced. After the clone is re-created, if the application uses the INCLUDE files,
the INCLUDE files need to be regenerated.
• If the clone attributes such as the replication change, then the clone must be replaced.
To replace a clone, use the following hdbcloner command and parameters:
hdbcloner -c create_clone -replace [parameters]
The following example replaces all clones in the EMS family:
hdbcloner -c create_clone -a * -f ems -replace
The steps Hdb takes when replacing a clone are:
1. Give the new clone a temporary name.

2. Copy data from the old clone to the new one.
3. Delete the old clone.
4. Rename the new clone.
5. Place the new clone online.
During a replace, the new clone inherits the old clone’s attributes. This is also the time to
specify special clone attributes, such as replication, or other attributes that need to change.

3.6 Segment Page Alignment


The term “segment page alignment” refers to the fact that a clone file is divided into a
number of different segments. Each segment is a logical structure unto itself, and it can be
separately accessed by applications using the Hdb API. For very large clones, segments
provide a more-efficient access mechanism, and they also provide a higher degree of
database integrity because unused portions are not accessible by the application.
A clone file is divided into the following segments:
• Sanity block: Occupies the first 4096 bytes of the file; identifies the file type and includes
the database dictionary.
• Database header block: Identifies a database, and locates the database schema
segment and the database partitions segment.
• Database schema segment: Defines the schema.
• Database partitions segment: Defines the partitions of the database. Each partition is
also in its own segment.
When an application program accesses a clone-resident database using the HdbOpen API,
the schema segment is the only segment accessed. Partition segments are accessed only
when needed. However, once a partition segment is mapped, it remains mapped until the
database is closed.
Segment access does not change due to transaction boundaries (start, stop). Therefore, if
an application accesses only a single partition of a database, then only the database’s
schema segment and the desired partition segment are mapped. Other segments of the file
(including the sanity block) are not mapped.
When the hdbserver program loads a clone into the MCDB (Memory-resident Clone
DataBase), the program maps the entire clone file, ignoring the segment boundaries. It uses
this map to construct the entry in the MCDB for the clone, which identifies all pertinent
information required to access database schema and partition segments via the Hdb API.
Once the MCDB is initialized with this data from the clone, the clone file is unmapped and
only the sanity block segment is mapped. This map is held by the server until the clone is
taken offline.
The segment alignment is platform-dependent. Each operating system imposes a (possibly)
different segment alignment boundary. The boundary is specified as the number of bytes
that occupy the minimum resolution supporting paging dynamics via the system services.
All known segment boundaries are integral powers of two; therefore, a clone aligned on a
larger segment boundary also satisfies any smaller boundary. For example, the segment
boundary in Windows is 65536 bytes, which also satisfies the Linux boundary (4096-byte
pages). This means that a Windows-constructed clone can be freely copied to Linux without
loss of functionality or capability.
It is not necessary that a clone be created using segment alignment boundaries as
described above. You can create a clone using a four-byte or eight-byte segment alignment
and it is still usable as a clone. In addition, it is still accessible using any Hdb utility or via the
application using the Hdb API. The limitation is that such a clone is accessed (mapped) as a
whole, not by the individual segments mentioned above.
It is not always desirable to use segmented alignment. A relatively small clone with a single
database and a single partition of up to a few hundred thousand bytes is more efficient on
disk and in memory using the compressed or quad segmentation rule (see the -align
parameter in chapter 13 Hdb Utilities Reference of this document). Also, given the nature of
such a clone, there may be no great loss in security due to the lack of segmented alignment.
Segmentation alignment is chosen when a clone is created or replaced by the appropriate
mode setting on the -align parameter switch. By default, a page-compatible segmentation
for the current platform is chosen. When a clone is replaced, this segmentation is inherited
from the existing clone unless it is specified differently for the replace.

3.7 Replication
Replication is the act of copying changed data from one computer, acting as the enabled
(or primary) system, to a second computer, acting as the standby (or secondary) system.
Enabled-to-standby data replication is the method of providing a dual-redundant database
in a high-availability configuration. Replication is a feature subject to the configuration of
the system and the software; it is not available on every Habitat installation.
Only designated partitions are replicated when data changes. Some circumstances call for
the entire database of a replicated clone to change and, in this case, all partitions are
copied from the enabled system to the standby system.
Therefore, for replication to be enabled, three separate steps are required:
1. Mark the database partitions to be replicated in the DBDEF source definition file.
2. Mark the clone(s) to be replicated with the replication attribute.
3. Configure replication in the system by having the backup/MRS system enabled and
operational.
To mark a clone to be replicated, the -replicate parameter must be specified when the
clone is created. This replicate attribute on the clone is then a permanent part of the clone;
it remains with the clone throughout all replaces and all renames. The only way to switch
off this replication attribute is to replace the clone and specify the -noreplicate parameter.
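For example (a minimal sketch, assuming the SCADA application and the EMS family), a
clone is marked for replication when it is created:
% hdbcloner -c create_clone -a scada -f ems -replicate
To switch the attribute off later, replace the clone and specify -noreplicate:
% hdbcloner -c create_clone -a scada -f ems -replace -noreplicate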

3.8 Database Schema Version Titles
A database schema is uniquely identified by its database name and its version title. Schema
can only be accessed if these two names are given or specified.
Typically, both names are specified in the application schema when a database is included
in a list of owned databases. However, a more-generic (template-like) application schema is
made possible by not including the version title in the database specification. When the
version title is not included as part of the application schema, it must be supplied prior to
creating a clone.
There are two ways to supply the version title when it is not included in the application
schema. The preferred method is to use the environment variable
HABITAT_DEFAULT_VERSION. This optional variable defines a default version for a system.
The standard method of using this variable is to declare it on a system-specific basis. For
example, given a system targeted for a particular platform customer — say, ABC — the
default version might appear as in the following example:
ABC_DTS_SYSTEM
This example implies that the version title refers to a Grid Simulation system supplied to
customer ABC. The choice of a default title is decided at the system level, as part of a
software configuration strategy.
The second method is to specify the version title in the create clone command. This is done
by specifying the title using the -version <title> parameter. The title is then used to
complete all missing database version titles from the application schema. For example:
hdbcloner -c create_clone -version <title>
If neither the environment variable nor the -version <title> parameter is used and if a
version title is required, then an error is issued, which states that the database schema
could not be found.
Once a version title has been specified via either method, it remains with the clone until it is
changed. When the clone is replaced, the same version title is used unless changed. You
cannot depend on changing the environment variable and then replacing the clone,
because the database version has already been selected.
Use of externally provided version titles requires a software configuration policy that fits in
with the handling of version titles.

3.9 Importing Clones from Another HABITAT Group


Clones can be moved from one HABITAT group to another. The only requirement is that the
imported clone be compatible with the HABITAT group; a clone’s version is automatically
checked to ensure compatibility.
Note that the release version of a clone is not the same as the release version of the
software. Software versions can change many times, while a clone version can remain the
same.

In addition, data coding uses the IEEE standard on all supported platforms. Therefore, there
is no need to convert floating-point values from one platform to another.
To move a clone from one HABITAT group to another (inter-platform as well), perform the
following steps:
1. Extract the clone you want to move from the source HABITAT group.
This may involve taking the clone offline in that group, or the clone file may already be
available offline.
2. Copy the clone file, using binary copy, to the destination group (and platform).
Clone files can be compressed using suitable compression utilities.
3. Store the clone in a directory that corresponds to the destination group. This can be the
default directory location, or any location that can be accessed by the HABITAT group.
4. Rename the clone using the hdbcloner -c rename_clone command.
5. Update the clone directory with the hdbdirectory command.
6. Place the clone online with the hdbcloner -c online_clone command.
At this point, the clone should be available in the destination HABITAT group. Required
INCLUDE files can be generated, if needed, from the new clone using the hdbformat
command.
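A minimal command sketch for steps 4–6 follows (the exact parameters for each command
are described in chapter 13 Hdb Utilities Reference):
% hdbcloner -c rename_clone [parameters]
% hdbdirectory [parameters]
% hdbcloner -c online_clone [parameters]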

3.10 Moving Clones Within a HABITAT Group


To move a clone from one directory to another directory, use the following command:
hdbcloner -c create_clone -d <dirpath> -replace
Specify the new directory location using the -d <dirpath> parameter.
Since this is a create clone command, all the rules and requirements of a create clone
operation apply.
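For example, to move the SCADA clone in the EMS family to a hypothetical new directory:
% hdbcloner -c create_clone -a scada -f ems -d /data/habitat/clones2 -replace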

3.11 Converting a Clone to an Archive File


All clones are Hdb archive files. The difference between a clone and an archive file is how
the file is used.
To convert a clone file to an archive file, perform the following steps:
1. Take the clone offline with the hdbcloner -c offline_clone command.
2. Copy the clone file from its directory to a designated archive directory.
3. Place the clone back online using the hdbcloner -c online_clone command.
4. Rename the archived clone (optional).
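A minimal command sketch for steps 1 and 3 follows (the exact parameters identifying the
clone are described in chapter 13 Hdb Utilities Reference):
% hdbcloner -c offline_clone [parameters]
% hdbcloner -c online_clone [parameters]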

Why would you want to do this? Especially when you consider that the following command
accomplishes the same thing via a copy operation:
hdbcopydata -s appname.famname -df clone_archive.arc
where the clone is given as the source container and the archive file is named as the
destination container.
There are two good reasons for copying the clone file in the manner suggested above. First,
if there is a possible corruption problem resulting from a software bug, it is desirable to
have the clone file itself in order to analyze the problem. Any attempt to use hdbcopydata to
copy a corrupted clone file may fail, or may erase the critical information needed to analyze
the problem. Thus, this copy procedure is often called for when a Software Problem Report
(SPR) is being submitted and the clone file is needed as supporting evidence.

Note: When you do this, use a compression utility to compress the clone file before
sending it along with the SPR.

The second reason to use this method is if you need to copy a clone file to another HABITAT
group. This is the method for getting a copy of the clone file. Taking the clone offline is very
important because, without this step, it is possible that the data in the file may be out of
date and possibly corrupted.
It could also be argued that this is a faster way to create an archive file from a clone if it is a
very large clone.

4. hdbcopydata
The hdbcopydata utility is used to copy data between Hdb data containers.
The valid copy operations are:
• Clone to clone
• Clone to archive file
• Clone to savecase file
• Archive file to clone
• Savecase file to clone
Data can be copied on a field basis, a partition basis, a database basis, or a container basis.
Before a copy operation can be performed, a container (source or destination) must be
assigned a value as a clone file, an archive file, or a savecase file.

Note: The hdbcopydata utility cannot be used to copy individual records. To copy
individual records, use hdbrio, hdbexport, or hdbimport.

The hdbcopydata utility accepts a source and a destination container description. In
addition, the container can describe a subset of data on a database basis, a partition basis,
or a field basis. If a container or a database is copied, then the target database is initialized
before the copy operation (unless initialization is inhibited by a command parameter). If a
partition or field is copied, the target database is never initialized; the data is copied and the
target data is overwritten.

4.1 Container Specifications


The hdbcopydata program recognizes the following container specifications:
• Clone container
• Archive file container
• Savecase container

4.1.1 Clone Container Specification


A clone specification is used with the -s and -d options, and it conforms to the following
syntax:
• appname.famname
where appname is the application name of the clone and famname is the clone family
name. (To identify a clone, the application and family names must be specified. Defaults
cannot be derived from the environment variables.)

• dbname.appname.famname
where dbname specifies a database within the clone.
• object.dbname.appname.famname
where object specifies a partition or a field object.
Table 1 shows examples of the clone specification:

Table 1. Clone Specification

Clone Specification                  Description
scadamdl.ems                         The clone known as scadamdl.ems. Copy context is an entire clone container.
scadamom.scadamdl.ems                The database scadamom within the clone scadamdl.ems. Copy context is a single database within a clone.
staticdb.scadamom.scadamdl.ems       The partition staticdb within the database scadamom within the clone scadamdl.ems. Copy context is a partition object.
id_substn.scadamom.scadamdl.ems      The field id_substn within the database scadamom within the clone. Copy context is a field object.

Example:
To copy data from the SCADAMOM database in the “SCADA DTS” clone to the “SCADA EMS”
clone, enter the command:
% hdbcopydata -s scadamom.scada.dts -d scada.ems

4.1.2 Archive File Specification


An archive file specification is used with the -sf or -df options. The argument following -sf or
-df is the user-given file name for the archive file.
To create an archive file called “scada_dts.arcfile” from the SCADA DTS clone, enter the
command:
% hdbcopydata -s scada.dts -df scada_dts.arcfile
To retrieve data from the archive file called “scada_dts.arcfile” into the SCADA EMS clone,
enter the command:
% hdbcopydata -d scada.ems -sf scada_dts.arcfile

4.1.3 Savecase Specification
A savecase specification is used in conjunction with the -case option to designate the
source or destination for a copy operation.
The syntax for a savecase specification is given below.
Savecase specification syntax:
[<application>.]<casetype>[.<casetitle>]
Where:
<application> = Name of application (optional)
<casetype> = Case type name defined in CLS file
<casetitle> = User-supplied case title to identify this particular Savecase
(optional)
If the <application> portion of the savecase specification is omitted in the hdbcopydata
command, the command will use the copy context to derive the application name (see
section 4.2 Copy Context).
If the <casetitle> portion of the savecase specification is omitted, a default case title,
“default_case_title”, will be used.
During the creation of a savecase, the hdbcopydata utility uses the savecase specification
to construct the filename for the savecase file. The savecase file is located in the
HABITAT_CASES directory. Using the savecase specification, the savecase file has the
following naming convention:
Savecase file naming convention:
case_<application>_<casetype>.<casetitle>
Where:
<application> = Name of application
<casetype> = Case type name defined in CLS file
<casetitle> = User-supplied case title to identify this particular savecase
Example:
The Scada application has the following savecase types defined: SCADA, DTS, SOELOGS,
COMMLOG, ACCUMHIS, and TAGGING.
You can create a savecase instance for each defined savecase type from the SCADA EMS
clone with the following lists of commands:
% hdbcopydata -s scada.ems -case scada.11_Mar_1999
% hdbcopydata -s scada.ems -case dts.11_Mar_1999
% hdbcopydata -s scada.ems -case soelogs
% hdbcopydata -s scada.ems -case commlog.darth
% hdbcopydata -s scada.ems -case accumhis.yoda
% hdbcopydata -s scada.ems -case tagging.skywalker
As a result of the commands above, the following files are created in the HABITAT_CASES
directory (assuming you are using UNIX):
case_scada_scada.11_Mar_1999
case_scada_dts.11_Mar_1999
case_scada_soelogs.default_case_title
case_scada_commlog.darth
case_scada_accumhis.yoda
case_scada_tagging.skywalker

4.2 Copy Context


This section provides a more-detailed description of copy context. The term “copy context”
refers to the type and name of the components being copied, and it can have the
granularity level of a clone, a partition, a database, or a field. There is a separate copy
context for the source and the destination object. The copy context is derived from the
names found in the clone specification or in the source context specification.
The clone context appears with the -s or -d parameters, and the source context appears
with the -sc parameter. The source context can be derived from the destination, or the
destination can be derived from the source.
Consider the following examples:
Copying a database from one clone to another:
hdbcopydata -d scadamom.scada.ems -s scada.dts
hdbcopydata -d scadamom.scada.ems -s scadamom.scada.dts
Both commands achieve the same results; both show that the destination context is a
database within the identified clone. However, in the first command, the source context is
derived whereas, in the second command, it is explicitly given.
Copying a partition from one clone to another:
hdbcopydata -d staticdb.scadamom.scada.ems -s scada.dts
This example shows that the destination copy context is a partition within the specified
database of the clone. The source copy context is the same partition within the same
database.
Copying a database to an archive file:
hdbcopydata -s scadamom.scada.ems -df snapshot_scadamom.arc
In this example, the copy context is a database, and the archive file is created containing
the database named in the source specification.
Copying a field from an archive file to a clone:
hdbcopydata -d id_substn.scadamom.scada.ems -sf mydata.arc
In this example, the copy context is a field within the named database. The field is copied
from the source archive file to the destination clone.
Copying a field in an archive file to a different field in a clone:
hdbcopydata -d id_x.hdbtest5.hdbtest.xxx -sf hdbtest.arc -sc id_z.hdbtest5

In this example, the source copy context and the destination copy context do not have the
same components. However, both source and destination contexts must identify the same
kind of object, which means that a partition cannot be copied to a field.
A special copy context case:
When a source object is a partition and it is copied into a destination clone, the destination
context is the whole database and not just the partition. This exception from the definition
of copy context is supported so that hdbcopydata can handle some of the more common
kinds of database schema changes. More information about this subject can be found in
the discussion about copying partitions in section 4.7 Partition Copy.

4.3 Stamps
This section briefly describes stamps that are involved in copies, as well as how stamps are
handled during copy operations.

4.3.1 Schema Definition Stamps


The schema definition stamp is a cyclic redundancy check (CRC) that determines whether
two instances of a given database share the same schema definition. Two schema
definition stamps pertain to the hdbcopydata utility: the partition definition stamp and the
database definition stamp. (If two database instances share the same schema definition,
then fast-mode copy is enabled. Otherwise, a compatible copy is performed.)

4.3.2 Partition Change Stamps


Partition change stamps record partition change events. A set of change stamps exists for
each partition. These are time stamps and are defined as T*4 data values.
The partition change stamps include:
• Archive stamp
• Backup stamp
• Entry stamp
• Update stamp
• Write stamp
However, only the archive partition stamp is updated during a copy operation, although
all stamps are copied when a partition is copied as a whole. If a database schema change
has occurred such that a partition exists in the destination but not in the source of the
same database, then that partition's change stamps are not copied.

4.3.2.1 Archive Stamp
The hdbcopydata utility sets an archive stamp for each partition modified during an
hdbcopydata operation. Only one archive time stamp value is applied to each modified
partition.
Partitions are modified during whole database copies, partition copies, and/or field copies.
If a partial database container is copied, it is possible that more than one partition in the
destination database will have its archive stamp set. For more information about partial
database copies, refer to the discussion of partitions in section 4.7 Partition Copy.

4.3.3 Record Time Stamps and Database Stamp


Record time stamps register changes in the record extent caused by an insert or a delete
operation, in addition to those rare instances when an LV (Last Value) is reset because of a
truncation during a copy operation. Record time stamps are always copied during a whole
database copy, but not for other types of copies, such as partition or field copies.
The database stamp is a single time stamp that is updated whenever an individual record's
time stamps are updated. It is copied when a whole database instance is copied, and it is
modified in the destination only if hdbcopydata must modify a record time stamp because a
truncation occurs.
It is possible that, during a whole database copy, an LV will need to be reset because the
destination database MX value is less than the source LV value. This is referred to as
"truncation". If truncation is permitted and the LV is changed, the corresponding RT (record
time stamp) and database stamps are also updated.

4.4 Same Definition Database Copy


When a database is copied, a check is made to see if the definitions of the source database
and the destination database are the same. If the definitions are the same, then the
database can be copied using fast-mode copy on a partition-by-partition basis. If the
definitions are not the same, then the database is copied using the compatible copy
described below.
What does it mean to “have the same definition”? The source and the destination definition
stamps must be equal in the following ways:
• Database definition stamps must be the same.
• Each partition definition stamp must be the same.
The database definition stamps are never the same if the databases do not have the same
schema name, because the schema name contributes to the database definition stamp.
Thus, the first requirement is that the source and destination database contexts must be
the same.

The rule for determining if two databases have the same definition does not involve the
record (table) definition stamps, nor does it involve any of the time stamps in the database.
The fast copy is a memory-move operation that is executed on a partition-by-partition
basis moving each partition byte-for-byte. This is the fastest method for copying a
database.

4.5 Compatible Definition Database Copy


A compatible definition database copy is a field-by-field copy from the source database to
the destination database. The term "compatible" means that the databases are similar
enough in schema that a copy operation can still occur, even though the source and
destination databases failed the same-definition test outlined in section 4.4 Same
Definition Database Copy.
A compatible database copy can be applied to two instances of the same database even if
they differ in schema (e.g., new fields or changed field data types) or their MX values do
not match. A compatible database copy can even be performed between two entirely
different databases.
In a compatible database copy, the partition boundaries are ignored, and the copy
proceeds on a field-by-field basis according to the following steps:
1. First, the LV values are copied from source to destination for each record type that is
defined both in the source and in the destination.
The LV value always honors the destination schema's MX value. That is, you cannot set an
LV value of 100 for a destination record type whose MX value is only 50. If you want
to continue with such a copy, the -ignoretruncate parameter must be specified, and the
resultant LV is limited by the destination MX.
2. Next, the RT (record time stamp) values are copied from source to destination.
If the LV values are copied without change, then the RT value is copied without change.
If the LV value is truncated as a result of the copy, then the RT value is set to a new
timestamp value to reflect the change in the LV.
3. Finally, all fields are copied using the field name as the key to match the source to the
destination field.
The partition is not involved.
Field copy does not require source and destination data types to be the same. During field
copy, if a destination database contains a field not found in the source database, then it is
skipped. Alternatively, you can specify the -datafill parameter, which results in the
destination field being set to the FILL byte. If a field appears in the source but not in the
destination, then it too is skipped.
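For example, a compatible copy that sets unmatched destination fields to the FILL byte
might be invoked like this (a hypothetical command; the clone names are illustrative):
% hdbcopydata -d scadamom.scada.offline -s scadamom.scada.ems -datafill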

4.6 Incompatible Database Definition Copy
Differences between source and destination database schema can prevent a copy
operation from occurring. These differences are referred to as “incompatible database
definitions”.
The following are incompatible database definitions:
• A record type of a given name that is FREE in the destination database, but not FREE in
the source database.
• A hierarchical child record type of a given name (one that has a designated parent) in the
destination database whose parent differs in the source database, or that has no parent in
the source database (including any non-hierarchical record type).
• A record type of a given name in the destination database that does not have the same
ancestry in the source database (an extension of the previous rule).

4.7 Partition Copy


Partitions are copied as individual objects in a number of ways, depending on the type of
copy operation and the copy context.

4.7.1 Whole Database Fast-Mode Partition Copy


In this method, each partition is matched by name in the source and destination databases,
and then the data is copied as a memory move operation. The copy results in a byte-for-
byte exact copy of the partition. The memory move includes all partition data and partition
stamps.

4.7.2 Partial Archive or Savecase Partition Copy


When an archive or savecase is created that contains only a subset of the partitions of the
database, the archive or savecase is called a "partial database container". When the
archive or savecase is created, the entire database schema is included, even schema
information about partitions that are not in the container file. In addition, the
PSEUDO1 partition, which contains the RT and LV values (as well as other information), is
always copied.
When a partial database container is copied, if the database definitions are the same, the
source partition is copied to the destination clone using fast-mode partition copy, which is a
memory move operation. If the definition stamps are not identical, then the fields of the
source partition are copied individually to the destination database.
Note that, in this situation, the destination partition is ignored. The fields of the source
partition are copied to the destination database and not to the destination partition.

4.7.3 Copying Named Partitions from a Container to a Clone
This copy operation occurs when the partition name is specified in both the source context
and the destination context (it need not be the same name). When a named partition is
copied from one container to another, the type of copy that occurs depends on two things:
(1) whether the partitions have the same schema definition (different instances of the same
database) and (2) whether the partitions are different (either by name or by definition).
If the partitions have the same schema definition and are different instances of the same
database, then a fast-mode copy is performed. If the partitions are different, then only the
fields that are the same in both the source and the destination partitions are copied. This is
because the user has requested that a partition be copied to another partition, as opposed
to a partition being copied to a database.

4.7.4 Copying Partitions within a Clone


Partitions can be copied within a clone; however, in some cases, no data is copied. A
partition can be copied from one database of a clone to another database in the same
clone, as long as the source and destination are not the same partition. A partition cannot
be copied to another partition within the same database.

4.8 Field Copy


Fields are copied as individual objects when the copy context specifies field copy, or when
the differences in the definition stamp require a field copy. There are two field copy
operations.

4.8.1 Source and Destination Fields Do Match by Name


This is the most common type of field copy; it can result from a compatible database
copy when the definition stamps of the source and the destination do not match.
The same logic is used whether the field is copied as a result of a whole database copy, a
partition copy, or a named field copy where only the one field name is involved.
The following assumption is made about same-named field copies: the field's dimensional
rank is the same in the source and the destination.
Nothing else is assumed about same-named field copies. The data type, data element size,
and LVs of individual contributing record types may differ between the source and
destination databases.

4.8.2 Source and Destination Fields Do Not Match by Name


This type of copy only results if the field is explicitly specified in the source and destination
context as a different field name. The only requirement is that the two fields must match in

their dimension — that is, they both must be scalar, 1-dimensional, 2-dimensional, or 3-
dimensional.
Fields can be copied from one container to another, or fields can be copied from one field to
another in the same database of the same clone (remember, existing archive or savecase
files cannot be modified).

4.8.3 Field Copy Algorithm


Fields are copied using a complex, optimized algorithm that achieves the fastest possible
copy operation without conversion errors.
The field copy algorithm is described here:
• If the source and destination fields match by data type and data element size, and if the
dimensional rank is one (1), then the field is moved as a memory copy operation from the
source to the destination database (no data type conversion is involved in this case). The
memory move takes into consideration the LV and MX of the source and destination in
order to move the correct number of LV-justified bytes. Differences in LV (possibly caused
by differences in MX) may require data truncation or data fill operations (see section 4.11
Truncation and Datafill).
• If the source and destination fields are 2-D or 3-D, and have the same data type and
data element length, then the move is based on the array shape. If the shape is
compatible, where the inner dimensional sizes are the same in source and destination,
then the field is copied as a memory move. If the shape is not compatible (e.g., moving a
30x100 to a 10x50), then the individual columns (of each planar dimension) are moved
as memory copy operations.
• Field masks and bit container fields are treated in a special manner. If the source and
destination bit containers have the same bit shape — meaning that the order and
names of each mask field in the source and destination are the same — then the bit
container is moved as an ordinary, integer data value and is subject to the move rules
previously mentioned. In this case, where the bit containers are moved as a whole, the
individual bit mask fields are skipped during processing of the copy operation.
If the source and destination bit containers do not have the same bit shape, then the bit
masks are moved as individual objects. This involves the masking and ORing together of
the bit values, as needed, in their respective bit container fields. Because of this logic,
mask fields can be redefined from one bit container to another without causing
problems when copying data.
• If the fields have a different data element size but the same data type, then the data is
converted from one data element size to the other using same-type conversion rules.
For example, a two-byte integer is sign-extended to a four-byte integer when moving an
(I*2) field to an (I*4) field. The same logic applies to all supported data types. In this type
of data size conversion, the field is copied on an element-by-element basis from one (1) to LV.
• If the fields are of different data types, then the data must be converted. Refer to the
following conversion table to determine conversions. Data conversion takes place on an
element-by-element basis from one (1) through LV.

4.8.4 Field Data Type Conversion
Field data type conversion is supported as described in Table 2.

Table 2. Conversion Rules


Data Type   Conversion Rules
I           Integer supports conversion to L and M (boolean) in the following manner: zero is
            considered false, and non-zero is considered true (for M types, false is a zero bit
            and true is a one bit).
            Integer to B (bit container) is not supported.
            Integer to D (date) and T (time) is supported without conversion (D and T are
            treated as integers).
            Integer to R (floating-point) is supported.
            Integer to C (character) is not supported (see the comment in section 4.8.4.2
            Character Conversion).
L           Boolean (a.k.a. logical) converts to integer in the following manner: false is set
            to zero and true is set to 1.
            Boolean to M (mask field): false is a zero bit and true is a one bit.
            All other conversions first convert L to I and then apply the integer conversion
            rules.
B           Bit container conversion is not supported.
M           Mask fields are converted as if they were individual single-bit unsigned integers,
            where 1 has the magnitude of signed integer +1. All integer conversion rules
            therefore apply to mask fields.
D           Date is treated as a signed integer for all conversions.
T           Time is treated as a signed integer for all conversions.
R           Real (floating-point) is converted to integer using truncation and then
            whole-number fixing. There can be overflow conversion errors on magnitude, but all
            fractional numbers become zero without any conversion error.
            Real (floating-point) is converted to other data types by first converting to
            integer and then applying the integer data type conversion rule.
            Real to C (character) is not supported (see the comment in section 4.8.4.2
            Character Conversion).
C           No cross-data type conversions are currently supported for character.

4.8.4.1 No Conversion Support Rule Procedure


When there is no supported conversion rule, the destination field is set to the field’s FILL
byte. To perform a non-supported data conversion, export the data to an ASCII formatted
file (hdbexport), edit the changes, and then import the data (hdbimport).
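A minimal sketch of that procedure, assuming a single field of the SCADAMOM database in
the SCADA EMS clone (the field name myfield is hypothetical):
% hdbexport -a scada -f ems -d scadamom -field myfield -s myfield.txt
(edit myfield.txt to apply the desired conversion)
% hdbimport -a scada -f ems -d scadamom -s myfield.txt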

4.8.4.2 Character Conversion


It is theoretically possible to convert a numeric field (e.g., integer or floating-point) to
character by assuming a particular coding scheme (e.g., sprintf using %d or %f). However,

the utilities hdbexport and hdbimport already perform this function more easily than
hdbcopydata, thus this function is left to hdbexport and hdbimport.

4.8.4.3 Numeric Conversion Error Handling


Conversion errors can result when converting within the same type but different element
size, or when converting from floating-point to integer. Conversion errors cause an error
message to be displayed, unless the -ignoreconversion parameter is set.
There can also be loss-of-precision conversion “errors” when converting between
differently sized floating-points, or loss of fractional components when converting from
floating-point to integer. These are not considered errors, thus no error message is issued.

4.9 Selcop Copy


“Selcop” (Selective Copy) is a method to merge two databases together.

Note: A selcop copy operation modifies the source database, and it may modify the
destination database as well. In general, both source and destination databases are
modified.

Selcop copies fields marked as selcop fields from the destination to the source database.
Fields are marked as selcop fields by the database schema definition file (DBDEF) or by the
selcop configuration file. Selcop fields are copied on an element-by-element basis by
matching the record occurrences of the source and destination databases. Record
occurrences are matched either by key value or by subscript position. For example, a
selcop field element located at subscript 324 in the source can be copied to subscript 325
in the destination, because the matching records occupy different subscripts as a result of
the insertions or deletions that took place.
When copying from the destination to the source, the data being copied varies based on
the following logic:
1. Any record type without a KEYFIELD has its SELCOP fields copied by subscript position. If
the source data has more records than the target (i.e., the LV value is larger), the excess
records retain the original source values in their SELCOP fields, since there is nothing in
the target to selectively copy from. If the target data has more records than the source,
the excess records are simply removed from the target.
2. Any record type that can be matched up via a KEYFIELD or a set of KEYFIELDs
(hierarchical composite key) has its SELCOP fields selectively copied from the matched
record. Any record in the target with a KEYFIELD that is not matched by a record in the
source is erased by the copy. Any record in the source with a KEYFIELD that is not
matched by a record in the target is copied over with its original values in the
SELCOP fields.

3. If the source database is coming from a savecase rather than a clone, the SELCOP
operation works as expected as far as the target database retaining SELCOP field
values, but the operation does not update the source savecase.
After the selcop fields have been copied from the destination to the source database, the
whole database is copied from the source back to the destination. Thus, the result of a
selcop copy operation is to preserve the data values in selective fields before copying the
data from source to destination database.
There are two levels of selcop support, and both are selected by command parameter. The
-selcoponly parameter performs a selective field copy from destination to source; it does
not perform the source-to-destination whole database copy. The -selcop parameter
performs both the destination-to-source selective field copy and the source-to-destination
whole database copy.

4.9.1 Modifying Selcop Fields


The selcop flag was originally fixed by the DBDEF and was not meant to be modified easily.
In response to a 2007 and 2008 Users Group SIR, selcop was enhanced to be more flexible.
A new selcop configuration file can be created to specify selcop fields based on their field
name, database, application, and family. This allows, for example, a DTS database to have
different selcop fields than an EMS database.
The default location for this file is %HABITAT_CFGDIR% and the default filename is
hdb_selcopfile.txt. If you would like to specify a different location and filename, the full path
and filename can be defined using the environment variable HABITAT_HDB_SELCOPFILE.
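For example, on Linux, the override might be set like this (the path shown is illustrative):
export HABITAT_HDB_SELCOPFILE=/opt/habitat/cfg/hdb_selcopfile.txt
The equivalent on Windows would use set HABITAT_HDB_SELCOPFILE=C:\habitat\cfg\hdb_selcopfile.txt.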
A sample hdb_selcopfile is delivered with Habitat. For reference, the list should be
formatted as:
<FIELD>,<DATABASE>,<APPLICATION>,<FAMILY>
For example:
ID_SUBSTN,SCADAMOM,SCADA,DTS
The <APPLICATION> and <FAMILY> should always refer to the target application and target
family of the clone.
Only ONE hdb_selcopfile should ever exist; however, there is no limit as to how many fields
can be listed. Lines beginning with the # sign are ignored.
The selcop flag in the DBDEF is still preserved.
Any time an hdbcopydata command is run with the -selcop or -selcoponly parameter, Hdb
checks whether the hdb_selcopfile exists. If it does, Hdb will selcop the fields in that file
as well as the selcop fields that are set in the DBDEF. The same applies to the HABITAT
RETRIEVE/CASE command when called from displays. If the hdb_selcopfile does not exist,
hdbcopydata and HABITAT RETRIEVE/CASE perform as usual, using only the selcop fields
from the DBDEF.

4.9.2 Selcop Example
Suppose there are two databases called “DB1” and “DB2”. Both databases were created
with the same database schema so they are compatible. In reality, these two databases
would reside in different clone or archive files. The field ix_r is designated as a selcop field in
the DBDEF. Furthermore, suppose the record hierarchy of record type R looks like this:
P (keyfield=ID_P)
|
+--Q (keyfield=ID_Q)
|
+--R (keyfield=ID_R, selcop field=IX_R)
We will use the notation %KEY to represent the composite key of a record type, so the
possible key values for %KEY_R will be the combination of the key values of the hierarchy.
We will use a period (.) to separate the key components, so a composite key for record type
R has the form:
<ID_P>.<ID_Q>.<ID_R>
Before the selective copy begins, the contents of the respective databases look like this
(the tables show the database contents logically; physically, the records could be ordered
differently):
DB1                              DB2
%KEY_R   IX_R                    %KEY_R   IX_R
A.B.C    Alias                   U.P.N    StarTrek
B.B.C    CoronationStreet        A.B.C    SpinCity
C.B.C    NatureOfThings          N.B.C    Frasier
C.B.S    CSI                     F.O.X    X-Files
N.B.C    Seinfeld                B.B.C    SherlockHolmes
F.O.X    Simpsons
Assume we are doing a SELCOP copy from DB1 (source) to DB2 (destination). The first stage
of SELCOP copy is to copy the selcop field values with the same composite key from DB2
(destination) to DB1 (source). This will result in a picture like this:
DB1                              DB2
%KEY_R   IX_R                    %KEY_R   IX_R
A.B.C    SpinCity                U.P.N    StarTrek
B.B.C    SherlockHolmes          A.B.C    SpinCity
C.B.C    NatureOfThings          N.B.C    Frasier
C.B.S    CSI                     F.O.X    X-Files
N.B.C    Frasier                 B.B.C    SherlockHolmes
F.O.X    X-Files
In the second stage of the SELCOP copy, the complete content of the source database (DB1)
is copied to the destination (DB2). This is similar to a regular database copy where all entries
from the source are copied to the destination; this is why the record with the composite key
"U.P.N" no longer exists in DB2 after the copy. The resulting picture of the databases looks
like this:

DB1                              DB2
%KEY_R   IX_R                    %KEY_R   IX_R
A.B.C    SpinCity                A.B.C    SpinCity
B.B.C    SherlockHolmes          B.B.C    SherlockHolmes
C.B.C    NatureOfThings          C.B.C    NatureOfThings
C.B.S    CSI                     C.B.S    CSI
N.B.C    Frasier                 N.B.C    Frasier
F.O.X    X-Files                 F.O.X    X-Files
It is important to realize that the SELCOP copy always modifies the source database,
because matching data is copied from the destination database to the source first.
The net result of a SELCOP copy is the preservation of the existing values of the SELCOP field
in the destination database — but, at the same time, new entries to the record type are
added.

4.9.3 Using Selcop


There are several methods to enable the Selcop feature at runtime when copying data into
a clone’s database from either a clone (clone-to-clone) or a savecase file (savecase-to-
clone). Here is a list of these different options.
• Use the “Retrieve with Selcop” option when retrieving a savecase into a clone using the
Savecase Manager displays. This performs a savecase-to-clone data retrieval with Selcop
enabled.

• Set the Selective Copy flag when configuring the Copy Data records for the Database
Update Sequence (DBSEQ) application. With DBSEQ, you can configure a clone-to-clone
or savecase-to-clone database copy, with the option to enable Selcop or not. For example,
DBSEQ can perform a savecase-to-clone selective copy that retrieves the netmodel.ade
savecase into the rtca.ems clone with the Selective Copy flag set. For more information
about DBSEQ, refer to the Database Update Sequence User’s Guide.

• Use the -selcop or -selcoponly option when running the hdbcopydata utility from the
command prompt.
The following example retrieves from a savecase, case_netmodel.ade.emp60, into the
RTCA.EMS clone with the Selcop option enabled:
> hdbcopydata -d rtca.ems -case netmodel.ade.emp60 -selcop
The same command can also be specified like this:
> hdbcopydata -d rtca.ems -a netmodel -case ade -title emp60 -selcop
The following example shows how to use a clone (instead of a savecase) as the source
to copy a database into another clone with the selcop option enabled. This command
copies the NETMOM database from the NETMODEL.EMS clone into the RTCA.EMS clone
with the Selcop option enabled.
> hdbcopydata -d rtca.ems -s netmom.netmodel.ems -selcop

4.10 Compatibility with Savecases from Previous Releases

4.10.1 Habitat 5.x Versions Prior to 5.8


Hdb internals for Habitat versions 5.8 and later have changed to support 64-bit sizes and
offsets. This change allows the creation of clones larger than 4 GB. However, this change
also makes it impossible to use savecases from a Habitat 5.8 or later system on a version
of Hdb prior to version 5.8.
The workaround is to export the data, using hdbexport, from the newer Habitat system and
import it back into an e-terrahabitat 5.7 system. In a Habitat 5.8 or later system, Hdb can
still retrieve savecases created by e-terrahabitat 5.7 or earlier.
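A minimal sketch of that workaround, assuming the SCADAMOM database of the SCADA
EMS clone (the names are illustrative):
On the Habitat 5.8 or later system:
% hdbexport -a scada -f ems -d scadamom -s scadamom_data.txt
After transferring scadamom_data.txt to the e-terrahabitat 5.7 system:
% hdbimport -a scada -f ems -d scadamom -s scadamom_data.txt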
The Hdb programming API has not been changed because of the internal 64-bit support.

4.11 Truncation and Datafill
Truncation occurs when the destination record LV is smaller than the source record LV of
the same record type.
The destination LV may be smaller for a number of reasons. When a whole database is
copied, hdbcopydata sets the destination LV smaller because the destination MX limits the
value. If the destination record type is MaxLV style, the LV may be smaller because MX is
smaller, although it is not set by hdbcopydata.
By default, Hdb does not allow truncation to occur. If hdbcopydata discovers that an LV is
too small, an error is issued and the LV is left unchanged. However, the default can be
overridden with the -ignoretruncate parameter, which tells hdbcopydata to ignore
truncation errors.
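For example, the following command copies the SCADAMOM database between clones and
proceeds even if truncation is required (a hypothetical invocation; the clone names are
illustrative):
% hdbcopydata -d scadamom.scada.offline -s scadamom.scada.ems -ignoretruncate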
Datafill, the opposite of truncation, occurs when the destination LV is larger than the source
LV.
The datafill problem most likely occurs when increasing the MX for a MaxLV-style record in
a schema, then copying the data from a database based on the old schema with a smaller
MX and, thus, a smaller LV value.
The datafill problem can also occur when individual fields are copied from one database
instance to another. When a full database is not copied, the destination LV values are left
unchanged; therefore, individual field copies are subject to destination LVs.
The datafill problem does not result in a change to an LV value. However, it does result in
the need for supplying data in fields where the source has no equivalent data values. The
rule hdbcopydata uses is that destination field elements from sourceLV+1 through
destinationLV are set to the destination field’s FILL byte value. Setting excess data to FILL
bytes is applied to all dimensional ranks of the field. However, when a whole database is
not copied, datafill is not a default and must be set through the -datafill parameter.
If -datafill is not specified and a datafill event occurs, an error message is issued. However,
a datafill "error" does not necessarily indicate a problem. When the datafill is not performed,
the excess elements are left unchanged and may be acceptable (e.g., already set to FILL byte
values) for the needs of the user. In such cases, the errors can be suppressed by setting the
-ignoreconversion parameter.

4.12 Examples
This section provides hdbcopydata examples. The examples use the SCADA clone schema,
which has the following savecase types defined: SCADA, DTS, SOELOGS, COMMLOG,
ACCUHIS, and TAGGING.

4.12.1 Example #1: Creating a Savecase File
Enter one of the following commands:
% hdbcopydata -s scada.ems -case dts -title todays_data
% hdbcopydata -s scada.ems -case dts.todays_data
A savecase file is created in the HABITAT_CASES directory.
case_scada_dts.todays_data
Other equivalent commands that do the same thing:
% hdbcopydata -s scada.ems -case scada.dts.todays_data
% hdbcopydata -s scada.ems -case dts -a scada
-title todays_data
% hdbcopydata -s scada.ems -case scada.dts.todays_data
-title todays_data
A commonly used copy operation is the creation of a savecase file. In this example, the
clone is an instance of the Scada application under the EMS family. The savecase is called
“DTS”, which creates a savecase consisting of the databases SCADAMOM, SCADACL, and
MESCADA.
The commands shown above all perform the same operation. The differences are:
• Using the -title parameter to specify the savecase title, or not
• Using the -a parameter to specify the application, or not

4.12.2 Example #2: Retrieving a Savecase File


Enter one of the following commands:
% hdbcopydata -d scada.ems -case dts -title todays_data
% hdbcopydata -d scada.ems -case dts.todays_data
The most common copy operation is the retrieval of a savecase file. This example is
identical to Example #1, except that the -s parameter in the first command is now a -d
parameter. Thus, you designate the destination clone along with the case description. All of
the other examples in Example #1 can be used as case retrieval commands by changing
the -s to a -d.

4.12.3 Example #3: Retrieving a Savecase File Using the File Path Name
Enter the commands:
% cd $HABITAT_CASES
% hdbcopydata -d scada.ems -sf case_scada_dts.todays_data
Or:
% hdbcopydata -d scada.ems -case $HABITAT_CASES/dts.todays_data

Note: This example uses Linux conventions. For Windows, use the respective convention
to specify the file path.

This example performs the very same case retrieval operation as the previous examples;
however, the savecase file is explicitly identified. To use this method, you must know the full
path location of the savecase file (as shown above) and the syntax of the savecase file
name.

4.12.4 Example #4: Creating an Archive File Containing All Databases of a Given Clone
Enter the command:
% hdbcopydata -s scada.ems -df full_scada.arc
This example creates the named file (full_scada.arc) in the current directory path location.
In addition, all databases defined in the source clone are copied to the archive file. The
archive file can be named anything. The .arc extension is not required; however, it is
recommended.

4.12.5 Example #5: Creating an Archive File Containing a Single Database of a Given Clone
Enter the command:
% hdbcopydata -s scadamom.scada.ems -df scadamom_only.arc
This example creates the archive file (scadamom_only.arc) that contains only the
scadamom database from the SCADA EMS clone.

4.12.6 Example #6: Creating an Archive File Containing a Single Partition of a Given Clone
Enter the command:
% hdbcopydata -s clslodis.scadamom.scada.ems -df clslodis_scadamom.arc
This example creates the archive file (clslodis_scadamom.arc) that contains only a single
partition called “clslodis”.

4.12.7 Example #7: Copying a Database from One Clone to Another Clone
Enter the commands:
% hdbcopydata -s scadamom.scada.ems -d scada.offline
% hdbcopydata -s scadamom.scada.ems -d scadamom.scada.offline
The two example commands are equivalent. They copy the scadamom database from the
source clone SCADA EMS to the destination clone SCADA OFFLINE.

4.12.8 Example #8: Copying a Database on Top of Another Database from Clone to Clone
Enter the command:
% hdbcopydata -s hdbtest5.hdbtest.xxx
-d hdbtest6.hdbtest.yyy
-noinitialize
This example (using two contrived databases, hdbtest5 and hdbtest6) shows that the
database HDBTEST5 is copied on top of the database HDBTEST6 in a separate clone.
Notice that the -noinitialize parameter prevents the target database from being initialized.
This is certainly not an ordinary thing to do. However, in this example, there is only one field
called “N_X” common to both the HDBTEST5 and HDBTEST6 databases. Also, this field is not
part of the same kind of record structure. In HDBTEST5, the X record structure is
NON-HIERARCHICAL. In HDBTEST6, the X record structure is CIRCULAR. However, the copy
proceeds without any problems, because it is as if the field N_X were copied by itself.

Note: The copy operation described here is special. It easily garbles the resulting target
database if you are not familiar with the schema details of each database.

4.12.9 Example #9: Copying a Field from Clone to Clone


Enter the commands:
% hdbcopydata -s scada.offline
-d scada.ems
-sc id_accum.scadamom
% hdbcopydata -s scada.offline
-d id_accum.scadamom.scada.ems
% hdbcopydata -s id_accum.scadamom.scada.offline
-d scada.ems
This example copies the ID_ACCUM field from the SCADAMOM database in the SCADA
OFFLINE clone to the ID_ACCUM field in the destination clone — SCADA EMS.

5. hdbdocument
The hdbdocument utility converts a database definition source file (xxx.dbdef) into an ASCII
formatted file (xxx.dbdoc). This chapter contains detailed documentation of the database
definition file.

5.1 Document Layout


The output file contains the following sections:
• Title, including the name of the database, the title of the DBDEF file, and the date the
document was created
• Database comments
• ITEMS fields
• Non-hierarchical records
• Multidimensional fields
• Hierarchical records, including FREE records
• Partitions
For any given record type, its key field (if present) is shown before the other fields for the
same record type.
The slot number of the field within the DBDEF and the name of its partition are aligned to
the right in the document.

5.2 Examples
The commands below take a source DBDEF file (permit.dbdef) from the current directory
and generate an ASCII file called “permit.dbdoc”, which contains the structure and the
documentation of the DBDEF file.
% hdbdocument -dbdef permit.dbdef
% hdbdocument -dbdef permit
The following command takes the input file permit.dbdef, formats it, and then generates an
output file called sample.dbdoc.
% hdbdocument -dbdef -output=sample permit.dbdef

6. hdbdump
The hdbdump utility is a developer’s debug and analysis tool used to troubleshoot Hdb
binary file problems. It can also be used by a system administrator to determine if Hdb files
are valid or if they have been corrupted.
The hdbdump utility can dump database schema, structures, data, and Hdb binary files into
ASCII formatted output, which can be viewed using standard text display tools.
The following are files and/or sources that hdbdump recognizes:
• Clones
• Clone files (offline)
• Archive files
• Habitat savecase files
• DNA files
• Cloning database core data files
• Memory-resident cloning databases
The hdbdump utility can perform any of the following operations:
• Dump data into a readable format.
• Dump any database, partition, or field, as well as data from named sets of databases,
partitions, and/or fields.
• Dump record hierarchy data in a tree-indented report beginning with _ROOT, or from a
subtree, by naming the root record type.
• Dump record hierarchy schema data (hierarchy of types) in a tree-indented report
beginning with _ROOT, or from a subtree, by naming the root record type.
• Dump the schema definition for a database, table (record type), field, and/or partition.
• Dump header information for any kind of file.
• Dump the binary structure and segment layout for clone and archive files.
• Dump database, partition, and record time stamps.
• Verify the integrity and correctness of database pointers, and reset database
pointers. This function is also supported by hdbrio; however, hdbdump can verify the
pointers of every savecase and archive file in the system with a single command
execution.

6.1 hdbdump Uses
The hdbdump command accepts three distinct kinds of command parameters:
• Source
• Action
• Modifier
Source parameters define the source object on which command actions are to operate,
and the modifier enhances or restricts the action of a command.
There are no default source objects and no default actions.
Multiple source objects of the same kind can be specified by using various command
parameters and wildcard characters in place of names.
Actions are not supported on all source objects. For example, you cannot dump the
hierarchy of the core data file, because the core data file does not have a hierarchy. In such
cases, a message is displayed indicating that the action does not apply to the source
object.
Modifiers enhance or restrict a command’s action. For example, when the source object is a
database, the -name, -field, -partition, and -table command parameters limit the dump
output to the named objects.
For more information about the source, action, and modifier parameters used with
hdbdump, refer to chapter 13 Hdb Utilities Reference.

6.2 Source Examples


The following example dumps all SCADAMOM databases that appear in the memory-
resident cloning database:
%hdbdump -cdb -d scadamom -header
The next example dumps the header segment of the memory-resident database:
%hdbdump -header -cdb
The following command dumps the schema of all tables (any occurrence) of the NETMOM
database in all clones. Note that, on the Linux platform, the use of single quotes around the
wildcard ('*') is required to keep the command from being interpreted as a file globbing
operation.
%hdbdump -schema -table '*' -d netmom -a '*' -f '*'
The last example dumps data from all PSEUDO1 partitions of all databases in the selected
archive files. Note that the wildcard character associated with the -archive parameter is
not enclosed in single quotes, because file globbing is desired.
%hdbdump -name PSEUDO1 -d '*' -archive /var/data/archives/*.arc -data

6.3 Action Examples
You can specify as many actions in a command line as you like, provided the actions apply
to the command. Hdbdump repeats actions for each source object it processes. Following is
an example of an action in a command line.
The following command dumps all databases beginning with the letter “R” in the cloning
database:
%hdbdump -header -cdb -d R*

6.4 Naming Database Objects Examples


Database objects, such as tables, partitions, and/or fields, can be selected for command
actions. This is accomplished through modifiers and applies only to database objects.
In Windows, to specify all names beginning with “ID_”, use the following command:
%hdbdump -name ID_* -d* -a* -f*
Use the same command on the Linux platform; however, enclose the wildcard in double
quotes ("*") to prevent file globbing.
To dump schema for tables X, Y, and Z, as well as any indirect fields that have X or Y as
the object record type, and partitions whose names start with the letter “P”, use the
following command:
%hdbdump -schema -d * -partition P* -table X Y Z -field _X_* _Y_*

7. hdbexport
The hdbexport utility exports the binary data from an Hdb database to an ASCII formatted
file. The format of the exported ASCII file can be configured using different command
options.
The hdbexport utility is designed to work in a reflexive manner with hdbimport. The default
operation of hdbexport produces a full database export ASCII file that can be imported by
hdbimport to reproduce the same database content.
This chapter covers the following topics:
• Reasons to export data from a database
• A functional overview of hdbexport
• The operation modes of hdbexport
• The format of the export data file
• Some example uses of hdbexport

7.1 Reasons to Export Data from a Database


Several possible uses of the exported ASCII file are:
• Data archiving: This is an alternative to using hdbcopydata, because the exported ASCII
file is independent of the Hdb archive binary format.
• Importing into other applications: The format of the exported data can be configured so
it can be loaded into other applications such as a spreadsheet application (e.g.,
Microsoft Excel) or a database (e.g., Oracle, SQL Server).
• Other data management purposes: The exported data can be manipulated with utilities
such as Perl, awk, sed, or a text editor for any purpose the user desires. For example, the
user can change values in certain fields and re-import the data back into the database.

7.2 Functional Overview


The output ASCII file generated by the hdbexport utility can contain data lines in two
formats: record format and field element format.
When importing data that is in the record format, the hdbimport utility generates new
record timestamps and new OID (object identifier) values as items are being added into the
database (for more information about hdbimport’s insert mode operation, see section 9.3.2
Methods of Record Insertion and Update).
When importing data that is in the field element format, the hdbimport utility retains the
source database’s timestamps and OID values as items are being updated in the database

(for more information about hdbimport’s update mode operation, see section 9.3.2 Methods
of Record Insertion and Update).
Some key hdbexport features are highlighted here, with the options pertinent to each
function listed in parentheses. For more detailed descriptions of the options, see the
hdbexport command in chapter 13 Hdb Utilities Reference.
Formatting:
• Record format pattern: Output the desired fields of a record (-create_patterns,
-pattern).
• Miscellaneous formatting options (-nofieldnames, -nonames, -nonulls, -noprefix,
-noquotes, -norecordnames, -nosubscript, -separator, -subtree,
-timedate_as_numbers).
Filtering:
• Record range filtering: Output only records with the specified keys or a subscript range
(-by_record, -key, -record, -range).
• Field range filtering: Output only fields with the specified subscript ranges (-by_field,
-field, -key, -range).
• Single field element filtering: Output a specific field with the specified key or subscript
specification.
• Selective record filtering: Skip output of a specified list of records (-skip_record).
• Selective field filtering: Skip output of a specified list of fields (-skip_field).
• Record/field type filtering: Skip or include fields by type (-hier_only,
-include_pointers, -include_ancestors, -include_pseudo, -nofree, -noitems, -nomasks,
-nomultidim, -nonhier_only).

7.3 Operational Modes


The hdbexport utility supports four different operational modes that are selected by various
command options.
The four modes are shown in Table 3.

Table 3. Operational Modes


Mode      Command Option                       General Use Intent
Default   None required                        Export all user data
Record    -key, -record, -range, -by_record    Export user-specified records only
Field     -field, -by_field                    Export user-specified fields only
Pattern   -create_pattern                      Get the record patterns

7.3.1 Default Mode


The default mode has no specific mode option specified. It is intended for exporting all the
user data in a database. This mode excludes the output of the pseudo data fields (internal
to the database) unless the -include_pseudo option is specified.
In the default mode, data is exported in this order:
1. The ITEMS record is exported in record format.
2. All hierarchical records are exported in parent/child order in record format.
3. All non-hierarchical records are exported in record format.
4. All 2-D and 3-D fields are exported in field element format.

7.3.2 Record Mode


The record mode excludes the ITEMS record, 2-D fields, and 3-D fields in the output file. It is
intended for exporting only the records chosen by the user.
• The -by_record option outputs all the records in the database except ITEMS as
mentioned.
• The -key, -record, and -range options are used to selectively specify particular
records for export.
• The -skip_record option can be used to eliminate unwanted records from the output.
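For example, the following command exports all records except record type B from the
example database used in section 7.5 (shown here illustratively):
hdbexport -a hdbapp -f hdbfam -d hdbtest -by_record -skip_record B -s data.txt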

7.3.3 Field Mode


The field mode exports data in the field element format. It is intended for exporting only the
fields chosen by the user.
• The -by_field option outputs all database fields except the pseudo fields. To add the
pseudo fields into the output, add the -include_pseudo option in the command.
• The -field option is used to selectively specify particular fields for export.
• The -skip_field option can be used to eliminate unwanted fields from the output.
• The -key and -range options are used in conjunction with the -by_field and -field
options to extract a specific range of data to output.
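For example, the following command exports all fields, including the pseudo fields, of the
same example database (shown here illustratively):
hdbexport -a hdbapp -f hdbfam -d hdbtest -by_field -include_pseudo -s data.txt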

7.3.4 Pattern Mode
The pattern mode generates record patterns. The record patterns can be edited and then
used with the hdbexport -pattern option to describe the format and content of the
records exported.
• The syntax of the record patterns is the same as the #record declarative of the
hdbimport utility (for more information, see section 9.7.1 Details About the #record
Declarative).
• Record filtering and field filtering options can be used to export the desired records and
fields in the record patterns output.
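For example, a pattern-mode command might look like this (a hypothetical invocation,
using the -create_patterns spelling listed in section 7.2):
hdbexport -a hdbapp -f hdbfam -d hdbtest -create_patterns -s patterns.txt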

7.4 Data Output Format


This section describes the data output format of hdbexport. The data output format is a
subset of hdbimport’s input file format (for more details, see chapter 9 hdbimport).
This section covers these topics:
• Record format: For more information, see section 9.4.3 Record Line in the hdbimport
chapter.
• Field element format: For more information, see section 9.4.4 Field Line in the hdbimport
chapter.

7.4.1 Record Format


The hdbexport utility can use a default record format or a user-defined record format
(record pattern) to export record data.
The default record format has the following syntax:
<record-name>,<subscript>,<keyfield>,<field>,<field>...
The <record-name> token is the record type name. The <subscript> token is the integer
subscript value for the record occurrence. The <keyfield> token is the record’s primary key
field if one is defined by the schema. Each <field> token represents a scalar (ITEMS or 1-D)
field of the record (2-D and 3-D fields are not included).
The <keyfield> and <field> tokens have the following syntax:
<fieldname>=<fieldvalue>
where the <fieldname> token is the name of the field specified as the field content name.
The <fieldvalue> token is the field value. If the field value is a Habitat null value, then the
field value string is a null string (unless the -nonulls parameter is selected).
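For example, a record line for a record type A with key field ID and one scalar field FF
(taken from the examples in section 7.5) looks like this:
A,1,ID='A00001',FF=100.1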

7.4.1.1 Exporting Special Fields in a Record
By default, the special fields in a record are not exported in the default output. Here are the
special fields and the options you can specify to display these special fields:
• Parent or child pointer fields: Use the -include_pointers option.
• Pseudo fields: Use the -include_pseudo option.
• Bit container fields: Use the -nomasks option.
• Hierarchical ancestor record key fields: Use the -include_ancestors option.

Note: 2-D and 3-D fields in a record can never be displayed in the record format. These
fields are always displayed in the field element format.

7.4.2 Field Element Format


The hdbexport utility also exports individual field elements on a per-line basis. Field element
export is the only method for exporting 2-D and 3-D fields.
The field element line format is shown below:
^<fieldname>(<row>)=<fieldvalue>
^<fieldname>(<row>,<column>)=<fieldvalue>
^<fieldname>(<row>,<column>,<plane>)=<fieldvalue>
The field element line examples above reflect the 1-D field, the 2-D field, and the 3-D field.
The field element is exported as a single line in the output file. Only one field element per
line is allowed.
If all elements of a field are exported, then a single line for each field element of each record
subscript, from 1 through LV (Last Value), is exported. If the field is defined for a CIRCULAR
record type, then the subscripts range from 1 through MX (MAXIMUM).

7.4.3 Data Types of Exported Values


The data types of the exported value in record format, or in field element format, are
defined by the field data type.
The following table shows the field data types and the corresponding data types of their
exported values:

Table 4. Data Types


Field Data Type   Data Type of Exported Field Value
INTEGER           Data is exported as a signed integer value using decimal notation.
BOOLEAN           Data is exported using the keywords T (True) and F (False). Keywords
                  are not considered character strings, so they are not enclosed in
                  quotes.
BIT               Same as integer.
MASK              Same as boolean.
DATE              Data is exported as a character string (in quotes) using the standard
                  calendar date format. If the -timedate_as_numbers parameter is
                  specified, the data is exported as a signed integer value using
                  decimal notation.
TIME              Data is exported as a character string (in quotes) using the standard
                  time/date format. If the -timedate_as_numbers parameter is specified,
                  the data is exported as a signed integer value using decimal notation.
FLOAT             Data is exported as a signed floating-point number, using exponential
                  notation for very large/small numbers.
CHARACTER         Data is exported as a character string enclosed in quotes.

7.5 Example Uses


This section illustrates the uses of hdbexport and hdbimport with examples. The examples
use a contrived Habitat database called “hdbtest” in the application hdbapp and the family
hdbfam.
The example database has these record types and hierarchy:
A /* A is the parent of B */
|
+-B /* B is the parent of C */
|
+-C

7.5.1 Export/Import in Default Mode


The default mode is the most commonly used mode of hdbimport/hdbexport. The default
mode command to export the data of the database is:
hdbexport -a hdbapp -f hdbfam -d hdbtest -s data.txt
The option -s data.txt exports the data into a file called “data.txt”. If the -s option is not
used, the default output goes to stdout.
To import the data into a database with a different application and family name (otherfam)
after it has been exported, use the default import command shown below:

hdbimport -a hdbapp -f otherfam -d hdbtest -s data.txt
A partial listing of the data.txt file is given below with the line continuation character (\)
added manually to avoid line wrapping:
ITEMS,1,OID=0,I__X=3,I__Z=2,A__X='A$X',I__Y=3,\
C2='C2',I1=T,I2=F,I3=T,XITEM4=400000,\
CITEM12='C ITEM 12',XITEM2=2,CITEM2='22',\
CITEM3='C 3',XITEM1=1
A,1,ID='A0000001',OID=0,FF=10000000.1,BB=F, \
CC='A SAMPLE DATA: 100',NN=-1000
B,1,ID='P1A1',FF=10000.1,BB=F,\
CC='B1 SAMPLE DATA: 10',NN=-10
B,2,ID='P2A1',FF=20000.2,BB=T,\
CC='B1 SAMPLE DATA: 20',NN=-20
C,1,ID='C1',FF=100.1,BB=F,CC='C SAMPLE DATA: 10',NN=-10
C,2,ID='C2',FF=200.2,BB=T,CC='C SAMPLE DATA: 20',NN=-20
A,2,ID='A0000002',OID=0,FF=20000000.2,BB=T, \
CC='A SAMPLE DATA: 200',NN=-2000
B,3,ID='P3A2',FF=30000.3,BB=F,\
CC='B1 SAMPLE DATA: 30',NN=-30
C,3,ID='C3',FF=300.3,BB=F,CC='C SAMPLE DATA: 30',NN=-30
C,4,ID='C4',FF=400.4,BB=T,CC='C SAMPLE DATA: 40',NN=-40
X,1,ID='X00001',I__X=5,I__Y=1,N1=100,C2='X0',\
OPEN=T,CLOSE=F,ON=F,OFF=F,F1=100.001,CH1='X 1'
: : : : : : : :

7.5.2 Export/Import in Field Mode


Sometimes, it is useful to export the data in field format instead of in record format. For
example, you might want to change the value of several fields and put the changes back
into the database.

7.5.2.1 Exporting the Whole Database in Field Format


The command to export the whole database in field format is:
hdbexport -a hdbapp -f hdbfam -d hdbtest -by_field -s data.txt
This command could generate many field lines if the database is big. In most cases, a user
may only want to extract a few fields to edit. The next section describes this.

7.5.2.2 Selecting Particular Fields for Export


To export selected fields, use the -field option.
The following example extracts these fields:
ID_A, ID_B, ID_C, P__A_B (Parent Pointers: from B to A), and
P__B_A (Child Pointers: from A to B)
hdbexport -a hdbapp -f hdbfam -d hdbtest
-field id_a id_b id_c p__a_b p__b_a

-s data.txt

7.5.2.3 Editing Data in a File and Importing Changes Back Into the Database
After the fields have been exported into an output file, the data can be edited to change
field values.
Use the following command to import the changes back into the database:
hdbimport -a hdbapp -f hdbfam -d hdbtest -s data.txt

Note: Because the file only contains field-format lines, it can only update existing fields in
the database (for more information about field line versus record line import, see chapter
9 hdbimport).

7.5.3 Export, Fix, and Import a Parent Pointer Field


The field export example in the previous section also shows how to fix the parent pointer of
a field if it becomes corrupted.
To fix a parent pointer field, export the parent pointer field as shown in the following
command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -field p__b_c -s data.txt
This will export all the parent pointer fields of C (with B as its parent) into a data.txt file. An
output example follows:
^P__B_C(1)=2
^P__B_C(2)=2
^P__B_C(3)=3
^P__B_C(4)=6
Assume that the parent record type (B) has only three entries; the line ^P__B_C(4)=6
therefore shows that the parent pointer is corrupted. The correct parent of C(4) should have
been (B,3). You can fix the parent pointer by editing the file so it looks like the following
example:
^P__B_C(4)=3
The lines that are already correct are removed from the input file, leaving only the correction.
The last step is to issue the import command to update the parent pointer:
hdbimport -a hdbapp -f hdbfam -d hdbtest -s data.txt

7.5.4 Export/Import a Subtree


To illustrate how to import and export a subtree, the hierarchy of data shown below will be
used. Records A, B, and C have only the ID field. ID is also a primary key field.
Indentations are added to emphasize levels.

A,1,ID='A0001'
  B,1,ID='B00001'
  B,2,ID='B00002'
    C,1,ID='C00001'
    C,2,ID='C00002'
A,2,ID='A0002'
  B,3,ID='B00003'
    C,3,ID='C00003'
    C,4,ID='C00004'
  B,4,ID='B00004'
  B,5,ID='B00005'
A,3,ID='A0003'
A,4,ID='A0004'

7.5.4.1 Exporting a Subtree


To extract a subtree, use the command-line options -subtree and -range, as in the
following examples.

7.5.4.1.1 Example #1: Extract subtree A(1)


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -subtree -range A=1
Result:
A,1,ID='A0001'
  B,1,ID='B00001'
  B,2,ID='B00002'
    C,1,ID='C00001'
    C,2,ID='C00002'

7.5.4.1.2 Example #2: Extract subtree B(3)


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -subtree -range B=3
Result:
B,3,ID='B00003'
  C,3,ID='C00003'
  C,4,ID='C00004'

7.5.4.2 Importing a Subtree


Assume the following subtree:
B
+---C
+---C

The following examples insert this subtree into the hierarchy; where it is inserted depends
on the existing subtrees inside the database.

7.5.4.2.1 Example #1: Inserting a default subtree


The default input data file is:
Data file (data.txt):
# The subscript 0 is ignored because it is in
# default mode (insert atend).
B,0,ID='BNEW1'
C,0,ID='CNEW1'
C,0,ID='CNEW2'
Command:
hdbimport -a hdbapp -f hdbfam -d hdbtest -s data.txt
Result:
The top of the subtree is the record type B. Therefore, the hdbimport program looks for the
last sibling record (B,5 in this case) and inserts the subtree after it. The resulting
hierarchy is shown below:
A,1,ID='A0001'
B,1,ID='B00001'
B,2,ID='B00002'
C,1,ID='C00001'
C,2,ID='C00002'
A,2,ID='A0002'
B,3,ID='B00003'
C,3,ID='C00003'
C,4,ID='C00004'
B,4,ID='B00004'
B,5,ID='B00005'
B,6,ID='BNEW1'
C,5,ID='CNEW1'
C,6,ID='CNEW2'
A,3,ID='A0003'
A,4,ID='A0004'

7.5.4.2.2 Example #2: Inserting a subtree using the key field


You want to insert the subtree under (A,3) instead of the default location. Knowing that ID_A
is a key field for record A, you can specify it as a key field on a record line and use the
command-line option -keys to insert the subtree at the desired location.
Data file (data.txt):
# The subscript 0 is ignored because it is in
# default mode.
B,0,ID='BNEW1',ID_A='A0003'
C,0,ID='CNEW1'
C,0,ID='CNEW2'

Command:
hdbimport -a hdbapp -f hdbfam -d hdbtest -s data.txt -keys
Result:
A,1,ID='A0001'
B,1,ID='B00001'
B,2,ID='B00002'
C,1,ID='C00001'
C,2,ID='C00002'
A,2,ID='A0002'
B,3,ID='B00003'
C,3,ID='C00003'
C,4,ID='C00004'
B,4,ID='B00004'
B,5,ID='B00005'
A,3,ID='A0003'
B,6,ID='BNEW1'
C,5,ID='CNEW1'
C,6,ID='CNEW2'
A,4,ID='A0004'

7.5.5 Export a Field for Use in Excel


The Excel application can import comma-separated-value (CSV) formatted files. The
hdbexport program already separates fields with commas. However, you can fine-tune
what needs to be exported by using command-line options such as -norecordnames,
-nofieldnames, -nonames, or -nosubscript.
In the following examples, only the record type A is exported.

7.5.5.1 Example #1: Default


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -record A
Result:
A,1,ID='A00001',FF=100.1
A,2,ID='A00002',FF=200.2
A,3,ID='A00003',FF=300.3
A,4,ID='A00004',FF=400.4

7.5.5.2 Example #2: No record name


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -record A -norecordnames
Result:
1,ID='A00001',FF=100.1
2,ID='A00002',FF=200.2
3,ID='A00003',FF=300.3
4,ID='A00004',FF=400.4

7.5.5.3 Example #3: No record name and no field names


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -record A -norecordnames -nofieldnames
Or:
hdbexport -a hdbapp -f hdbfam -d hdbtest -record A -nonames
Result:
1,'A00001',100.1
2,'A00002',200.2
3,'A00003',300.3
4,'A00004',400.4

7.5.5.4 Example #4: No record name, no field names, and no subscripts


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest
-record A -norecordnames -nofieldnames -nosubscript
Or:
hdbexport -a hdbapp -f hdbfam -d hdbtest
-record A -nonames -nosubscript
Result:
'A00001',100.1
'A00002',200.2
'A00003',300.3
'A00004',400.4

7.5.6 Export a Range of Data


To export a range of data, use the -range command-line option of hdbexport. The example
below exports a selected range of data from record type A; it is assumed that record type A
has four record occurrences.

7.5.6.1 Example #1: Extract entry 2-to-3 from record A


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -range A=2:3
Result:
A,2,ID='A00002',FF=200.2
A,3,ID='A00003',FF=300.3

7.5.6.2 Example #2: Extract entry start-to-3 from record A
Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -range A=1:3
Or:
hdbexport -a hdbapp -f hdbfam -d hdbtest -range A=:3
Result:
A,1,ID='A00001',FF=100.1
A,2,ID='A00002',FF=200.2
A,3,ID='A00003',FF=300.3

7.5.6.3 Example #3: Extract entry 3-to-end from record A


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -range A=3:4
Or:
hdbexport -a hdbapp -f hdbfam -d hdbtest -range A=3:
Result:
A,3,ID='A00003',FF=300.3
A,4,ID='A00004',FF=400.4

7.5.7 Export Using a Record Pattern


To export using a record pattern, you must first create a pattern file. The pattern file
contains a list of #record semantic declaratives (for more information about declaratives,
see section 9.6 Declaratives). You can create a pattern file using the -create_patterns
command-line option of hdbexport.

7.5.7.1 Create a pattern file


Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -create_patterns
Result:
#record A \
,%SUBSCRIPT \
,ID_A \
,FF_A
#record B \
,%SUBSCRIPT \
,ID_B \
,FF_B \
,BB_B \
,CC_B \
,NN_B
#record C \
,%SUBSCRIPT \
,ID_C \
,FF_C \
,BB_C

7.5.7.2 Edit the pattern file


Assume you only want to display the ID (characters), BB (boolean), and CC (characters) fields
of record B. You can edit the pattern file so that it contains only the following pattern (pattern.txt):
#record B \
,ID_B \
,BB_B \
,CC_B
You can now use the following command to export only record B using the pattern specified
in the pattern.txt file.
Command:
hdbexport -a hdbapp -f hdbfam -d hdbtest -pattern pattern.txt -record B
Result:
#record B \
,ID_B \
,BB_B \
,CC_B
B,'B00001',F,'B1 SAMPLE DATA: 10'
B,'B00002',T,'B1 SAMPLE DATA: 20'
B,'B00003',F,'B1 SAMPLE DATA: 30'
B,'B00004',T,'B1 SAMPLE DATA: 40'
B,'B00005',F,'B1 SAMPLE DATA: 50'
B,'BNEW1',F,''
Note that the pattern file contents are prepended to the output file so that the data can
later be imported using the same record patterns.
On each data line, commas separate the record name and the ID_B, BB_B, and CC_B field values.

7.5.8 Use Declaratives


The most useful declaratives are #insert and #update. They can be used together to
position to a specific record (using #update) and then insert a new record after it (using
#insert).
We use the same example as the subtree insertion example to illustrate the use of these
declaratives.
Assume the database has the hierarchy shown below to start:
A,1,ID='A0001'
B,1,ID='B00001'
B,2,ID='B00002'
C,1,ID='C00001'
C,2,ID='C00002'
A,2,ID='A0002'
B,3,ID='B00003'
C,3,ID='C00003'
C,4,ID='C00004'
B,4,ID='B00004'
B,5,ID='B00005'
A,3,ID='A0003'
A,4,ID='A0004'
To insert the following subtree into the hierarchy:
B
+---C
+---C
By default, without using key insertion, the tree will be inserted after (B,5) and it will have
(A,2) as its parent. However, if you want to insert the subtree under (A,4), you can use the
#update declarative to set the position to (A,4) and then switch back to #insert mode to
insert the subtree. The data file below shows how this can be done:
Data file (data.txt):
# Switch to UPDATE mode
#update

# Now position to A,4 without changing any fields.
A,4

# Now the record position is at (A,4) in the hierarchy.
# Switch back to INSERT mode
#insert

# Insert the subtree
B,0,ID='BNEW1'
C,0,ID='CNEW1'
C,0,ID='CNEW2'
Command:
hdbimport -a hdbapp -f hdbfam -d hdbtest -s data.txt
Result:
A,1,ID='A0001'
B,1,ID='B00001'
B,2,ID='B00002'
C,1,ID='C00001'
C,2,ID='C00002'
A,2,ID='A0002'
B,3,ID='B00003'
C,3,ID='C00003'
C,4,ID='C00004'
B,4,ID='B00004'
B,5,ID='B00005'
A,3,ID='A0003'
A,4,ID='A0004'
B,6,ID='BNEW1'
C,5,ID='CNEW1'
C,6,ID='CNEW2'

8. hdbformat
The hdbformat utility generates the database subschema files. Subschema files are used by
applications that make calls to the Hdb API. Subschema files are formatted as INCLUDE files
(Fortran 90 context) or header files (C/C++ context).
The hdbformat utility accepts database schema as input from multiple sources:
• A DBDEF source file is an ASCII-formatted file that is coded using DBDEF declarative
statements.
• A dictionary source is a database schema that has been loaded into the dictionary through
the hdbcloner load_schema command.
• A clone source is an existing clone, identified by its database and clone name.
• An archive or savecase source is an archive or savecase file, identified by the name of
the database.

8.1 Subschema Files


The subschema files produced by the hdbformat utility are language-dependent; the type of
file and how it is used depend on the language.
Not all subschema files are INCLUDE files, so the generic phrase “include file” is
discouraged.
The subschema files for each language type are described in the following sections.

8.1.1 Fortran 90 Subschema Files


In the following descriptions, dbname stands for the database name, and pname stands for
the partition name.
Each Fortran 90 subschema file name is listed below, followed by a description.

8.1.1.1 hdbdb_dbname.f90
This file is a Fortran 90 source file that must be compiled and linked into an application
program. The file defines a Fortran 90 module required by the Hdb API when Fortran 90 is
used to access the database. The module object produced during compilation of this file
must be made available to the application program.
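As a rough sketch (the gfortran compiler, file names, and the SCADAMOM database name are
assumptions, not prescriptions), the build steps might look like:
gfortran -c hdbdb_scadamom.f90
gfortran -o myapp myapp.f90 hdbdb_scadamom.o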

8.1.1.2 dp_dbname.inc
This file defines the master INCLUDE schema file that is required by the Hdb API. This file
references the database module. It must be placed within the source program and in a
location where modules are accepted. In general, the location is after the procedure
statement (PROGRAM, SUBROUTINE, or FUNCTION), but before other program statements.
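As a hypothetical illustration, assuming a database named SCADAMOM, the INCLUDE line would
be placed like this:
SUBROUTINE scan_db
INCLUDE 'dp_scadamom.inc'
! ... other declarations and executable statements follow
END SUBROUTINE scan_db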

8.1.1.3 dp_dbname_pname.inc
This file defines the partition structures used for the Hdb API partition I/O. This INCLUDE file
must follow any module referencing files (e.g., dp_dbname.inc). This INCLUDE file is not used
for Hdb API record I/O or Hdb API field I/O (unless a partition field is specified on the read or
write method). There is one of these INCLUDE files for each partition defined by the
database. Using one of these files makes the application dependent on the dimensions of
the database, requiring a recompile whenever the dimensions change.

8.1.1.4 dx_dbname.inc
This INCLUDE file defines the MX values for each database record type. This file is
compatible with older versions of the system. The use of this file is not recommended,
because dimension-independent code cannot be produced with this file.

8.1.1.5 db_dbname.inc
This INCLUDE file is used to reference all partition and MX value INCLUDE files. It does not
reference the master INCLUDE file (dp_dbname.inc). This file is provided for convenience
only.

8.1.2 C/C++ Subschema Files


Only one C/C++ header file is supported for the Hdb API, and that is db_dbname.h. This file
defines required structures and data types used in the C/C++ Hdb API.

8.2 Subschema File Management


Subschema files are program source code, and they should be managed as such. When
managing subschema files, refer to the following guidelines:
• The target directory receiving the subschema files must be write-enabled for the users
executing the hdbformat utility.
• Existing subschema files of the same name in the target location are replaced during
hdbformat execution. To preserve the old files, save them first or direct the hdbformat
output to a different directory.
• Because subschema files are overwritten by hdbformat, the files themselves must be set
to write-enabled.
It is possible to automatically build required schema files using the make utility's change-
dependency rules. However, this is not recommended, because it does not allow consistency
between clones to be enforced. It is possible that some work in progress could be captured
by a change dependency rule and result in a compiled program that is inconsistent with the
clones.
Note that DBDEF source files can be freely commented without affecting the schema
definition or requiring the subschema files to be recompiled.

8.2.1 Recommended Practices


In Hdb, it is very important that the database schema definition used by the clone and the
schema definition assumed by an application program are the same. Confusing error
conditions can result if the clone definition is out of synchronization with the INCLUDE files
used by the application program.
To avoid errors, always generate clones and the subschema files from the same source. The
best source is the dictionary-resident file, because it is the only source consistent with both
the hdbcloner and the hdbformat utilities.
When using hdbformat, the following procedure is the recommended practice:
1. Load all necessary schema into the dictionary using the hdbcloner -c load_schema
command.
2. Create all necessary clones from the schema in the dictionary.
3. Create all necessary subschema (INCLUDE files) from the schema in the dictionary.
The forms of the commands for the above steps are:
1. hdbcloner -c load_schema -s app*.* -replace
2. hdbcloner -c create_clone -a app -f xxx
3. hdbformat -n app*.* -d $HABITAT_INCDIR -l f90 c
The above example assumes that the application schema for app is contained in a file
called “APP.CLS”. It also assumes that the database schema for the databases used by app
is contained in a set of files that start with “APP”. Therefore, wildcards can be used.

8.3 Examples
The following topics show five hdbformat examples:
• Example #1: Format from a single DBDEF source with defaults
• Example #2: Format from two DBDEF sources
• Example #3: Format multiple files using wildcards
• Example #4: Format for C language only
• Example #5: Format C and Fortran 90 using F77 fixed source statement format

8.3.1 Example #1: Format from a single DBDEF source with defaults
hdbformat -s scadamom.dbdef
In this example, the DBDEF file is the source. Fortran 90 is the default language, and the
default target directory is the current directory path.

8.3.2 Example #2: Format from two DBDEF sources


hdbformat -s scadamom.dbdef netmodel.dbdef
This example shows how more than one file can be specified in the command line using the
-s parameter.

8.3.3 Example #3: Format multiple files using wildcards


hdbformat -s *.dbdef -d $HABITAT_INCDIR
This command formats all files of the .dbdef type in the current directory. Subschema files
are created in the target directory given by the HABITAT_INCDIR environment variable. This
syntax assumes a Linux shell (C/Korn shell); the file specification with wildcards is translated
by the shell using the file name globbing feature. In Windows, wildcard characters are
processed by hdbformat.

8.3.4 Example #4: Format for C language only


hdbformat -l c -n trend -d /var/habitat/includes
This example specifies the C language only; the source for the database is the dictionary-
resident description of the TREND database. The database’s version title is not explicitly
stated, so it defaults to a wildcard and all TREND database schema are formatted. If more
than one TREND definition is in the dictionary, the version title should be stated.

8.3.5 Example #5: Format C and Fortran 90 using F77 fixed source
statement format
hdbformat -l c f90 f77 -n scadamom.esca_ems -d /var/habitat/includes
This example assigns C and Fortran 90 as the target languages, but the command also
includes F77 to instruct hdbformat to use the fixed-column (Fortran 77) format for the
source statements.

9. hdbimport
The hdbimport utility imports data into an Hdb database from one or more ASCII coded
data files. A data file contains the following types of data lines: comment lines, record lines,
field lines, and declarative lines (declaratives for short).
The hdbimport and hdbexport utilities are complementary functions in that data exported
with hdbexport can be imported using hdbimport. The default operation of hdbexport
produces a full database export ASCII file that can be imported by hdbimport to reproduce
the same database content.
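For example, the following sketch (hdbapp, hdbfam, dbtest, and dbcopy are placeholder
names) exports one database in full and reproduces its content in an empty database of the
same schema:
hdbexport -a hdbapp -f hdbfam -d dbtest -s full.txt
hdbimport -a hdbapp -f hdbfam -d dbcopy -s full.txt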
This chapter covers the following topics:
• Reasons to import data into a database
• Creating the input data file
• A functional overview of hdbimport
• Input file format
• Data conversion rules
• The semantic declaratives
• Positioning a record for insert and update

9.1 Reasons to Import Data Into a Database


The hdbimport utility can be used to:
• Populate data: Insert data into an empty database or append data into an existing
database.
• Update data: Update specific fields or records in a database.
• Merge databases: Records present in the input file(s) are inserted only if they do not
already exist in the target database. This import mode is called “insert with no
duplicates”.
• Fix database pointers: Parent, child, and indirect pointer fields in the database can be
updated to correct invalid links.

Note: Extreme care must be exercised when fixing database pointers, to avoid corrupting
the integrity of the database.

9.2 Creating the Input Data File


An input data file for hdbimport can be created in several ways. The user generating
the data file should be familiar with the schema of the database, i.e., the relationships
among record types, the fields of the record types, etc.
To create the input data file, use any of the following three methods:
• By hand, following the format described in section 9.4 Input File Format, to create a data
file from scratch.
• By modifying the output from hdbexport. If the database being populated has the same
schema as some existing database, you can use hdbexport to copy the desired data
and import it into the new database.
• By modifying the output from other applications. For example, you can export data from
a spreadsheet and format it to match the required hdbimport format.

9.2.1 Sample of an Input File Fragment

# An example of an input file.


#

# Turns on verbose
#VERBOSE

#
# A is the name of the record type. It has these fields:
# BB_A a boolean field
# CC_A a character field
# ID_A a character field
# FF_A a floating point field
# OID_A an internal object id field
# NN_A an integer field
#
# The next 2 record lines are in the default record line format.
#
A,1,ID='A0000001',OID=0,FF=10000000.1,BB=F,CC='A SAMPLE DATA: 100',NN= 1000
A,2,ID='A0000002',OID=0,FF=20000000.2,BB=T,CC='A SAMPLE DATA: 200',NN= 2000

#
# Use declarative to change order of the fields and
# to add new BOOLEAN keywords (OPEN/CLOSE)
#
#boolean OPEN/CLOSE
#record A,%SUBSCRIPT,BB_A,CC_A,FF_A,ID_A,NN_A,OID_A
#
A,3,BB=CLOSE,CC='A SAMPLE DATA: 300',FF=30000000.3,\
ID='A0000003',NN= 3000,OID=0
A,4,BB=OPEN, CC='A SAMPLE DATA: 400',FF=40000000.4,\
ID='A0000004',NN= 4000,OID=0

#
# Turn off verbose
#noverbose
:
:
#
# Examples of field lines for a 2 D field.
#
N_X_X(1,4)=104
N_X_X(1,5)=105
N_X_X(2,1)=201
N_X_X(2,2)=202
N_X_X(2,3)=203

Figure 2. Sample of an Input File Fragment

9.3 Hdbimport Concepts
The primary function of hdbimport is to import data into a database. Hdbimport uses three
modes of operation to import data: insert, insert with no duplicates, and update.
For efficiency and flexibility, hdbimport combines command-line options, field and record
data lines, and declaratives to control how data is inserted or updated.

9.3.1 Field Lines and Record Lines


The hdbimport utility distinguishes between two types of data lines: field data lines and
record data lines. Field data lines can only be used to update the value of a field element.
Record data lines can be used to insert or update a record, depending on the operation
mode that hdbimport is in.
A field line begins with a caret character (^) and has the following format:
^<fieldname>(<subscript>) = <value>
Put in terms of a relational database model, a field line can be interpreted as a unique
element in a table. The <fieldname> identifies the column/table and the <subscript>
identifies the row number. However, the difference is that the <subscript> for a field line in
Hdb can be 1-D, 2-D, or 3-D. Thus, the field line is the only way to update values of 2-D and
3-D fields in Hdb.
Some examples of field lines are:
^NAME_X(1) = 'John'
^AGE_X(1) = 24
^SEX_X(1) = 'M'
^V_Z_Y(1,1) = 100
A record line has the following format:
<recordname>,<subscript>,<field1>=<value>,<field2>=
<value>...,<fieldn>=<value>
A record line can be interpreted as a row in a table. The <recordname> identifies the table.
The <subscript> identifies the row. The <field> identifies a column of the table. The
<subscript> of a record line is strictly a number, so it cannot be used to represent 2-D or
3-D record information.
The following are record line examples:
X,1,ID='John',AGE=24,SEX='M'
X,2,ID='Jane',AGE=22,SEX='F'
For more-detailed descriptions of the data line formats, see section 9.4 Input File Format.

9.3.2 Methods of Record Insertion and Update
The hdbimport utility operates in one of three modes: insertion, insertion with no duplicates,
and update. The default operation mode is insertion. The hdbimport mode can be changed
using command-line options (-update, -insertnodup) or by using command declaratives
(#update, #insert, or #insertnodup).
The insert mode adds new record occurrences to the database and increases the LV
(Last Value) of the affected record type. Normally, insertion and insertion with no duplicates
add the new data at the end of existing record occurrences. The position of record insertion
depends on the record type. The position of the record can be changed using command-
line options (-atend, -atstart, -keys) or declaratives (#atend, #atstart, #keys).
Hdbimport can insert data in the following ways:
• Positional insert for non-hierarchical records
• Positional insert for hierarchical records
• Positional insert for circular records
• Positional insert for MaxLV records
• Key insert for hierarchical records (uses -keys or #keys)
The insert with no duplicates mode inserts records only if they are not already in the
database. Insertion is always a key insertion for hierarchical records, i.e., -keys or #keys is
forced.
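For example, a minimal merge sketch (dbsrc and dbtgt are placeholder databases assumed
to share the same schema):
hdbexport -a hdbapp -f hdbfam -d dbsrc -s merge.txt
hdbimport -a hdbapp -f hdbfam -d dbtgt -insertnodup -s merge.txt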
The update mode can modify (update) the value of an existing field or record; the LV of the
record type is not changed. The record being updated must already exist in the database.
Hdbimport can update data in the following ways:
• Key update for hierarchical records
• Subscript update for fields and records
For more information, refer to section 9.8 Positioning a Record for Insert and Update.

9.3.3 Declaratives
Declaratives are lines in the input data file that instruct hdbimport how to perform
subsequent operations. A declarative line begins with the pound sign (#) and is immediately
followed by the declarative keyword. No white space is allowed between the pound sign (#)
and the first character of the keyword. For example, #atend, #atstart, #keys, #update,
and #insert are declaratives.
The following is an example of using a declarative to switch between update and insert
mode and then back to update mode:

#
# Default is to insert records
X,1,ID='John',AGE=24,SEX='M'
X,2,ID='Jane',AGE=22,SEX='F'

# Change to update mode in the next line


#update

# Updates the age of John from 24 to 40


X,1,AGE=40

# Change back to insert mode


#insert
X,3,ID='Kramer',AGE=40,SEX='M'

Figure 3. Example of a Declarative

For more information about how to use declaratives, refer to section 9.6 Declaratives.

9.3.4 Multiple Input Files


The hdbimport utility accepts one or more input files on the command line. The files are
processed in the order listed on the command line. With hdbimport, the input files are
treated as if they were a single large file. The only difference is that declaratives in one file
do not carry over into subsequent files.
An example of an hdbimport command with multiple input files is:
hdbimport -a app -f fam -d db -s f1 f2 f3
In this example, f1, f2, and f3 are input files.

9.3.5 Other Command-Line Options


Chapter 13 Hdb Utilities Reference lists other command-line options. Refer to it for more
information about command-line options, parameters, and declaratives.

9.4 Input File Format


The hdbimport input data file is a formatted ASCII file.
Each line in the ASCII file is one of the following types:
• Comment: Begins with the comment character (#)
• Record line: Begins with a letter (A-Z)
• Field line: Begins with the caret character (^)
• Declarative: Begins with # and is immediately followed by a keyword
Each line type is distinguished by the first non-whitespace character.

9.4.1 Line Continuation Character


Line continuation is supported using the backslash character (\) at the end of a line. The line
that follows is considered a continuation of the preceding line, and the backslash is treated
as a whitespace delimiter.
Example:
DEVTYP,1,ID='RTU',ID_SUBSTN='SUB_1X', \
ID_SCADEK='SUBSTN', \
OID=2445818125445958024,I__TYPNAM=7
The lines above are considered to be one record line because of the ending line
continuation character.

9.4.2 Comment Lines


A comment line begins with the pound sign (#) followed by a white space, as in the following
example:
# This is a comment
# And this
#This is not a comment line
The comment character can be changed with the hdbimport -comment <char> parameter,
or by using the #comment declarative.
The third line above is not considered a comment line by hdbimport, and a syntax error is
returned indicating “#This is not a valid declarative”.
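As a hedged sketch, assuming the #comment declarative takes the new character as its
argument (mirroring the -comment <char> parameter), an input file fragment might switch
the comment character like this:
#comment !
! From this point on, lines beginning with ! are comments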

9.4.3 Record Lines


A record line is a single line read from the input file. A record line begins with an
alphabetic character. The default separation character is the comma (,). Tokens in a record
line are delimited by separation characters.
A default record format exists for a record line. This format can be modified by using the
#record declarative in the input file. The #record declarative can be used many times in
an input file to change the record line format.
For all record lines, the first token in the line must be the record name. The hdbimport utility
looks at the record name and checks to see if a previously defined #record declarative for
that name exists. If it does, the utility uses the format specified by the #record declarative
to interpret the record line. Otherwise, it uses the default record line format.

9.4.3.1 Record Line Syntax
The default syntax for a record line is:
<recordname>,<subscript>,<field1>=<value1>,...<fieldn>=
<valuen>
The first field must be the record name. The second field must be an integer subscript. The
rest of the line contains field/value pairs.
An example of a default record line might look like:
SUBSTN,1233,ID='THUBACBX',CO='HQCO',MW=4000.0
For each field/value pair, the field name can be specified as a fully qualified field name (e.g.,
ID_SUBSTN). It can also be specified as the <content> portion of the record field name (in
the above example, “ID” is the content portion of ID_SUBSTN).
The value in a field/value pair can be character data, integer data, floating-point data,
keywords, boolean data, or calendar data. For more information about how to specify each
of these data types, see section 9.5 Specification of Field Values.

9.4.3.2 Using an Alternative Record Line Format


The hdbimport utility allows a user to define which fields will appear on a record line using
the #record declarative. This allows the user to specify only the relevant data fields to
import into a record occurrence (see section 9.7.1 Details About the #record Declarative).
A user can generate a list of all the default #record declaratives for each record using the
-create_patterns option of the hdbexport utility.
Example:
% context permit habitat
% hdbexport -d permit -create_patterns
#record ITEMS \
,%SUBSCRIPT
#record APP \
,%SUBSCRIPT \
,ID_APP
#record AREA \
,%SUBSCRIPT \
,ID_AREA
#record CON \
,%SUBSCRIPT \
,ID_CON \
,APPSUB_CON \
,AREASUB_CON \
,APPID_CON \
,AREAID_CON
This example sets the application context to PERMIT and the family context to HABITAT.
Then the hdbexport command is issued to output the default #record declaratives for the
PERMIT database. A user can use this output as a base to start changing the record line to
the desired format for the hdbimport utility.

9.4.4 Field Line
A field line begins with the caret character (^) and has the following syntax:
^<fieldname>(<subscript>) = value
The fieldname portion must be fully qualified, including all appropriate record names.
The subscript is enclosed in parentheses. The subscript value syntax depends on the rank
of the field and follows the standard Fortran 90 array subscript notation. The table below
shows how the subscripts are interpreted:

Table 5. Field Line Subscripts


Subscripts Interpretation
(row) • Row is an integer subscript ranging from 1 to LV of the primary record type.
(row,column) • Row is an integer subscript ranging from 1 to LV of the secondary record type.
• Column is an integer subscript ranging from 1 to LV of the primary record type.
(row,column,plane) • Row is an integer subscript ranging from 1 to LV of the tertiary record type.
• Column is an integer subscript ranging from 1 to LV of the secondary record
type.
• Plane is an integer subscript ranging from 1 to LV of the primary record type.

The value in a field line can be character data, integer data, floating-point data, keywords,
boolean data, or calendar data. For more information about how to specify each of these
data types, see section 9.5 Specification of Field Values.
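As a hypothetical illustration, assume a 3-D field V_C_B_A whose primary record type is A,
secondary is B, and tertiary is C; per the table above, the row then varies over C, the
column over B, and the plane over A:
# row 2 (tertiary C), column 5 (secondary B), plane 1 (primary A)
^V_C_B_A(2,5,1) = 42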

9.5 Specification of Field Values


This section describes how field values are specified on a record line or a field line of an
input data file.
The following is a list of recognized data types:
• Character data
• Numeric data: integer and floating-point
• Keywords
• Boolean data
• Calendar date and calendar timedate
If the data type of the field value in the input file does not match the corresponding data
type defined in the schema of the database, a field value conversion will take place.
For more information about data conversions, refer to section 9.5.6 Field Value Conversion
Rule.

A null data value can sometimes be specified as a field value. For more information, see
section 9.5.7 Null Data Rules.

9.5.1 Character Data


All character data is enclosed in single quotes (apostrophes) or double quotes. The normal
rules for escaping the quote character apply (the quote character is doubled to escape it).
The C language escape sequences marked by the backslash (\) character are not supported
for character data. An empty character string is written as an empty quoted string.

Note: An empty character string is not the same as null data.

Table 6. Character Data


Character Description

'astring' The character string “astring” is specified using the apostrophe character
as the quote character.
"astring" The same string as above is quoted using the double-quote character. This
string is considered identical to the one above.

"ast'ring" Example of an embedded quote (apostrophe) character. The apostrophe


does not need to be escaped because the character string is quoted using
the double quote (") character.

'ast''ring' The same embedded quote example as above, but this time the character
string is quoted by the apostrophe character, which is also embedded so
that the embedded apostrophe character is escaped by including two (') in
juxtaposition.
Note: Although sometimes difficult to distinguish, the embedded characters
in this example are two apostrophes, not a single double quote.

9.5.2 Numeric Data


Numeric data can appear in a variety of formats. Table 7 shows a few examples.

Table 7. Numeric Data


Format Description

+ddddd Specifies integer data with optional + or - sign.

+0xhhh Specifies hexadecimal integer data with optional + or - sign.

+ddd.ddd Specifies floating-point data with optional + or - sign.

+ddd.dddE+dd Specifies floating-point data with exponent field and optional + or - sign for both the
coefficient and the exponent. The double-precision format using the D character is
also supported.

Integer data is processed and maintained internally as a 64-bit signed integer. Therefore,
the numbers can become very large. However, integer data cannot always be converted to
the target database field if the field data element size is too small. For example, the number
10000 will not fit in an (I*1) field, and the number 305000 will not fit in an (I*2) field. If
truncation of the integer number occurs, an error message is issued and the field is not
changed.
Floating-point data is processed and maintained internally as a double-precision floating-
point number. If storing such a number into a single-precision floating-point field loses
only precision, no error or warning message is issued. However, if the number’s exponent
range is too large to fit into the database field, then a truncation error message is
reported and the field is not changed.
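For example, assuming a hypothetical integer field NN_X stored as I*2, the first field line
below imports cleanly, while the second triggers a truncation error and leaves the field
unchanged:
^NN_X(1) = 32000
^NN_X(2) = 305000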

9.5.3 Keyword Data


Keyword data is similar in appearance to a character string, but a keyword is not a quoted
string. Keywords do not use double or single quotes. A keyword is a predefined literal value
known by an alphanumeric name and used by hdbimport to represent a data value.
At this time, keywords are used only with boolean data to describe the boolean true and
false values.

9.5.4 Boolean Data


Boolean data (two-state data) is expected for boolean data fields by default. Boolean data
fields include L*1, L*2, L*4, and mask fields. Boolean data values can be specified as
keywords or as numeric values.
There is no acceptable conversion from character string to boolean value. You can use the
#boolean declarative to create additional keywords to represent true/false values.
The predefined hdbimport keywords to represent boolean data are: T, F, Y, and N. Each of
these keywords represents a boolean data value, as in the following table:

Table 8. Boolean Data


Boolean Description

T/F The keyword T is accepted for the true state, and the keyword F is accepted for the false
state.
Y/N The keyword Y is accepted for the true state, and the keyword N is accepted for the false
state.

9.5.5 Calendar Date and Calendar Timedate


Hdb supports calendar date and timedate variables using the D and T data types. Calendar
and timedate data can be specified to hdbimport as a character string value (including the
enclosing quote characters) or a numeric value (typically an integer). If the character string
value is used, it specifies the calendar and timedate values using the format expected by
the hdbimport utility.
The timedate format that hdbimport recognizes is defined as
“%d-%b-%Y %H:%M:%S”.
An example date that corresponds to this specification is
“01-Jan-2000 10:20:30”.
The numeric format specifies the internal numeric value used to represent the calendar
date or timedate value. This internal format is useful when data is exported from one
database (use hdbexport’s option -timedate_as_numbers to export the internal numeric
value) to be imported into another database of the same Hdb group. In such cases, the
value representations of the internal date and time are compatible with each other, and
provide a more-efficient means of exporting and importing the data if a large number of
date and time variables are used.
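A minimal sketch of such a transfer, assuming placeholder databases dbsrc and dbtgt that
belong to the same Hdb group:
hdbexport -a hdbapp -f hdbfam -d dbsrc -timedate_as_numbers -s dates.txt
hdbimport -a hdbapp -f hdbfam -d dbtgt -s dates.txt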

9.5.6 Field Value Conversion Rule


The field value specified in a data line is independent of the data type of the field as defined
in the target database schema. The field value, as derived from the input file, is converted
as necessary to the appropriate internal storage type per the field’s schema-defined data
type.
These conversion rules are described in Table 9.

Table 9. Field Value Conversion Rules


Target Database Field Data Type    Input File Field Value Data Type Supported

CHARACTER Character string, keyword, or numeric value. If a keyword is specified, its
value is converted to the resultant boolean string of “TRUE” or “FALSE”. If a
numeric value is specified, it is converted to an appropriate character
representation of the value.
INTEGER Numeric integer (or hexadecimal number), floating-point is truncated to
integer (no rounding), character string is translated to integer with zero
stored if character string does not translate correctly. Keyword values for
true and false are translated into 1 or 0.
BOOLEAN Numeric data using zero to represent false and non-zero to represent true.
Keywords are translated to their appropriate values. There is no acceptable
translation for character data.
BIT Integer numeric (decimal or hexadecimal), but normally bit fields do not
appear in the input data file; mask fields appear instead. Floating-point,
keyword, and character data is not an acceptable conversion for a bit field.
MASK Same as boolean.
REAL Integer numeric (decimal or hexadecimal) is converted to floating-point;
floating-point is accepted (but there may be loss of precision). Character data
is converted to numeric, and a value of 0.0 is stored if the conversion is not
successful.
DATE Character string representing date or timedate, integer numeric
representing internal storage value.
TIME Same as DATE; if only the DATE portion is included, the time is set to 00:00:00.

9.5.7 Null Data Rules


A null value is the absence of a value.
A null value representation is always interpreted as a Habitat null data value. It is not
interpreted as a FILLBYTE value. A more-detailed explanation follows:
• FILL data is used to populate a field of a record inserted by hdbimport when the field
itself does not appear in the input data file record. Therefore, to achieve the default FILL
byte setting when records are inserted by hdbimport, omit the field from the input data
file. Remember, the FILL byte value is specified by the database schema and is typically a
blank (for strings), zero (for numerics), or the NULL character 0x80.
• NULL data is used to populate a field whenever a null value specification is found in the
input data file, regardless of whether a record is being inserted or updated or if a field is
being updated. NULL data is a special internal binary representation that consists of all
bytes of the data element being set to the hex value 0x80.
• In update mode, where field values are used to update the database, it is not possible to
default to the FILLBYTE value since this defaulting operation is only performed when a
new record is inserted by hdbimport. If the FILLBYTE value is needed, then it must be
explicitly specified as a value in the input data file.

9.6 Declaratives
Semantic declaratives are statement lines in the input data file that instruct hdbimport how
to perform subsequent operations. By using declaratives, it is possible to gain more control
of the import operation, such as mixing insert and update modes along with keys and
subscripting in the input data.

9.6.1 Operational Scope of Declaratives


The operational scope of a declarative is restricted to within a single input file. The
declarative is in effect from the point it is encountered in the input file until it is negated by
another declarative, or until the end of the input file.

A declarative overrides the corresponding command-line option from the point where it is
defined to the end of that file. The next file in the input file list starts over with the
command-line options.

9.6.2 Syntax of a Declarative


A declarative line has the pound sign (#) as the first non-whitespace character, immediately
followed by a keyword. No white space is allowed between the # character and the keyword;
otherwise, the line is considered a comment. Declaratives are case-insensitive.
For example, #atend, #atstart, #keys, #update, and #insert are all declaratives,
whereas # key is a comment (i.e., a space character is in between the # and the key token).

9.6.3 Summary of Declaratives


The table below summarizes all the semantic declaratives of hdbimport. For more
information about how each declarative is used, see section 9.7 Declaratives Reference.

Table 10. Summary of Declarative Commands


Declaratives Description
#insert, #insertnodup, #update Set insert, insert no duplicates, or update mode.
#atend, #atstart Control where a record will be inserted.
#keys, #nokeys, #subscript Control whether a key field should be used to locate a record.
#entrychecks, #noentrychecks Turn data entry check on/off.
#verbose, #noverbose Turn verbose reporting on/off.
#uppercase, #nocase, #lowercase Turn case conversions all upper/none/lower.
#overrideoid, #nooverrideoid Turn OID overriding from the input data on/off.
#separator Set the separator character.
#comment Set the comment character.
#timedate Set the timedate format (not implemented).
#record Set the record line format.
#skipnulldata, #noskipnulldata Turn skipping of null data on/off.
#boolean Define other keywords for boolean values.

9.7 Declaratives Reference


For more information about hdbimport parameters, options, and declaratives, refer to
chapter 13 Hdb Utilities Reference. The following sections about declaratives describe
information not found in the reference guide.

9.7.1 Details About the #record Declarative
The record declarative specifies the fields by name and order as they appear in the input
data file. This method is used when the data appears in a pure comma-separated field
format.
The syntax of the #record declarative is:
#record <recname>,<field1>,<field2>,<field3>,...
where <recname> is the name of the record type, and <field1> is the name of the field or
fields.
#record SUBSTN,%SUBSCRIPT,ID,ID_CO,MW
In the above example, a record type name SUBSTN has fields called “ID”, “ID_CO”, and
“MW”. ID and MW are fields of the SUBSTN record type, but ID_CO is a field of the parent to
SUBSTN and it is used as a key.
The corresponding data field that is described by the declarative above might look
something like this:
SUBSTN,1,'HORSE','PUGET',2000.0
where HORSE is the value for the ID_SUBSTN field, PUGET is the value for the ID_CO field,
and 2000.0 is the value for the MW_SUBSTN field.
Notice that the record begins with the name of the record type as dictated by the #record
declarative. This is required to be the first field in the record. The record type name is
considered a keyword and not a character string, so it is not enclosed in quotes.
The #record statement includes a special field called “%SUBSCRIPT”, which indicates that
the record subscript appears in the data file. The use of this field is optional; record
subscripts are not required for user-defined record formats.
The #record declarative always uses the comma to separate the names of the fields.
However, the actual record from the input data file uses the field separator known to
hdbimport. The value format for the input data file is the same as defined above for the
default processing of the input data file.
The #record declarative can be used in a file that also contains the default record line format. The
user-defined record format only applies to lines for record types named by the #record
declaratives. Lines for other record types encountered in the file are assumed to use the
default record format.
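For example, the following fragment mixes both formats. The SUBSTN line follows the
user-defined pattern from above, while the hypothetical DEVTYP line, for which no #record
declarative exists, is parsed with the default record line format:
#record SUBSTN,%SUBSCRIPT,ID,ID_CO,MW
SUBSTN,1,'HORSE','PUGET',2000.0
DEVTYP,1,ID='RTU',OID=0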
For non-hierarchical records, you can specify a field value using the %KEY special field
name. This special field is supported so that you can specify both a key for lookup and a key
field value for insertion along with other record values.
If the actual key field value were relied upon for lookup, two different values could not be
specified: the record would have to be looked up by the very same key value that is stored
in its primary key field. When the %KEY field value is interpreted, it is converted according
to the data type of the record’s true primary key field.
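A hypothetical sketch, assuming a non-hierarchical record type X whose primary key field is
ID_X: the %KEY value is used only to look up the record, while the ID value is what gets
stored in the key field:
#record X,%KEY,ID
X,'OLDNAME','NEWNAME'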

9.7.2 Details About the #boolean Declarative
The boolean declarative specifies a true/false keyword pair that can be used in place of the
default keyword pairs supported by hdbimport. More than one pair can be specified using
multiple occurrences of the #boolean declarative.
The syntax of the #boolean declarative is:
#boolean <true>/<false>
where <true> represents the true keyword and <false> represents the false keyword. The
<true>/<false> tokens are always specified in pairs, as shown, separated by a forward-
slash character. Each <true> or <false> keyword must begin with an alpha character and
can be composed of alpha, numeric, and the underscore ( _ ) character. A keyword is
restricted to a maximum length of 31 characters.
The following example uses CLOSED/OPENED in place of true and false as the boolean
keywords:
#boolean CLOSED/OPENED
As is true for other declaratives, the keywords of the boolean declarative are not case-
sensitive.
Each #boolean declarative adds a new true and false keyword pair to those already
recognized by hdbimport. These booleans are effective until the end of the input data file in
which they appear. You cannot remove the default boolean keyword pairs, which are T for
true and F for false, and Y for true and N for false.

9.8 Positioning a Record for Insert and Update


This section introduces the concepts of record positioning, first visit, and record positioning
with keys. The section then describes different methods of data insert and update that the
hdbimport utility uses for the different record types.
The record position is used by hdbimport to establish the insert or update location. A
record’s position changes when a record line is encountered in the input file.
The hdbimport utility maintains two logical record positions:
• One for non-hierarchical records
• One for hierarchical records
Each record position is identified as the type of record and the subscript of the record.
The two record positions are maintained separately, and each can be updated
independently as record lines are encountered in the input data files.
The “first visit” concept refers to the first time that a record of a given record type is
encountered in the input data file.

Only on the first visit do the command-line options (-atstart) and declaratives (#atstart,
#atend) apply to the positioning of the insertion. After the first visit, records are inserted
based on the current record position for that record type.
The subsections that follow describe the different methods of insert and update pertaining
to the particular type of record.
Keys can be used to position to a record in a database. Since keys are used to position to a
record, key positioning can only be used in databases that already contain some record
occurrences.
The hdbimport utility treats field values as keys, and uses them for positioning, only if the
-keys command-line option (or the #keys declarative) has been specified in the input data
file.
Hdbimport can insert data in the following ways:
• Positional insert for non-hierarchical records
• Positional insert for hierarchical records
• Positional insert for circular records
• Positional insert for MaxLV records
• Key insert for hierarchical records (uses -keys or #keys)
Hdbimport can update data in the following ways:
• Key update for hierarchical records
• Subscript update for records
• Subscript update for fields

9.8.1 Positional Insert for Non-Hierarchical Records


Non-hierarchical records are inserted as they are encountered in the input data file. The
order of insertion is:
• By default, records are inserted at the end of existing records as determined by the LV
value for the record type.
• If the atstart position is requested, then the records are inserted just before the existing
record at subscript location 1.
If LV is zero, the atstart or atend condition has no meaning.

9.8.2 Positional Insert for Hierarchical Records


Hierarchical records are inserted into the database as they are encountered in the input
data file. Each hierarchical record is inserted at the record position currently held by
hdbimport. This position must be either the record’s parent (a valid ancestor) or a valid
sibling record.
Positional insert for hierarchical records requires the input data file to have the proper
ordering of records and the proper parent-child relationships.

9.8.2.1 Inserting Into an Empty Database


If the target database is empty, then the first hierarchical record encountered must have
_ROOT as its parent. Any valid record whose parent is _ROOT can be inserted first. There are
no ordering dependencies between siblings.
The second hierarchical record must be a record of the same type as the first record
inserted, a record of a sibling type (another record type whose parent is _ROOT), or a
child of the first record.
In general, the hierarchical insertion rules for a valid insert operation are:
• Record to be inserted is same type as current record position.
• Record to be inserted is immediate child of current record position.
• Record to be inserted is sibling of current record position.
• Record to be inserted is same as or sibling of current record position’s ancestor (parent,
grandparent, etc.).
Hierarchical positioning does not allow the initial subtree to be violated. The first
hierarchical record inserted establishes the subtree. If the first record inserted has _ROOT
as its parent, then the subtree descends from _ROOT and includes all hierarchical records. If
the first hierarchical record inserted is lower in the tree, then the subtree is established by
the ancestors of the parent to the first inserted hierarchical record.

9.8.2.2 Inserting Into a Non-Empty Database


If the target database is not empty, then the insertion of the first hierarchical record in the
input data file follows the rules pointed out in the previous section. Note that the rules also
apply when the “insert no duplicates option” is selected (-insertnodup parameter or
#insertnodup declarative encountered).

9.8.2.3 With Default or Atend Specified


• If there are records of the same type already in the database, the new record is inserted
after the last record occurrence (LV). The new record will have the same parent as the
last record occurrence.
• If there are no records of the same type in the database (LV=0), then the record is added
as a child to the last parent record occurrence.
• If there is no parent record, then the insert is invalid.

9.8.2.4 With Atstart Specified
• If there are records of the same type already in the database, the new record is inserted
at the start of the existing record occurrences. This new record becomes the child of the
same parent of the record currently at subscript location 1.
• If there are no records of the same type in the database, then the record is added as the
child to the first occurrence of the parent record.
• If there is no parent record, then the insert is invalid.

9.8.2.5 First Visit Rule Still Applies for Hierarchical Insert


In the case of the record hierarchy, the atstart and atend positioning is to the first
hierarchical record occurrence, regardless of record type. Once this first visit rule has been
applied, the atstart and atend attributes are ignored.

9.8.3 Positional Insert for Circular Records


Circular records are always inserted at LV (Last Value) regardless of the setting of atstart or
atend. As circular records are inserted, LV wraps around when MX (MAXIMUM) is hit so that
the next logical position is subscript location 1.
Technically, an infinite number of circular records can be added, as MX will never stop the
insert process. Records will continue to overwrite existing records.

9.8.4 Positional Insert for MaxLV Records


MaxLV records are not actually inserted at all, although the positioning logic is applied as if
records are to be inserted.
As MaxLV records are encountered in the input data file, the existing records at the given
position are updated rather than inserted. However, for practical considerations, the user
can think of the records as being inserted (although the LV and the RT timestamp are not
changed).

9.8.5 Key Insert for Hierarchical Records


The key insert for hierarchical records uses the values of the parent record key fields for
positioning.
If there are no parent records (e.g., parent is _ROOT), or if the parent records do not have
key fields, then key field positioning is not used and the insertion logic reverts to positional
insertion rules. This is an important point to remember, because the most useful method of
using keys is to mix both key and positional insertion within a single input data stream.
If the hierarchical record being inserted has parent records and if those parents have key
fields, then those key fields can be specified along with the other field data for the record
being inserted. These key field values are specified just like other record field values in the
input data file. Such key fields must be declared as the primary key fields for the parent
record. Key positioning requires that key fields be defined by the database schema.
When keys are used to locate the parent record, the insertion position is determined by the
-atstart command parameter (or the #atstart, #atend declaratives).
• If atstart is specified, then the candidate record is inserted at the start of or before the
first record of the same type under the located parent.
• If the default atend position is specified, then the candidate record is inserted after the
last record of the same type under the located parent record.
• If there are no records of that candidate record type that belong to the located parent
record, then the atstart and atend positions are the same.

Note: When inserting with keys, there is a slightly different interpretation of the atstart
and atend rules. Without keys, the atstart and atend rules apply only to the first
hierarchical record encountered; with key positioning, they apply whenever a parent record
is located by key value. Therefore, the first visit rule is reset whenever a parent record is
located by key.

9.8.5.1 Inserting a Subtree


A common insert operation is to insert a subtree of records at a given position in the
database. Since a subtree is being inserted, only the location of the parent record is
specified by keys. This means that the subtree child records should not specify keys.
If keys are specified, then hdbimport will use them. However, if the records are constructed
correctly, the positioning is redundant (since key and positional insertion result in the same
insert operation). Since keys are used, the operation will not be as efficient. Therefore, when
inserting subtree record collections, use keys only for positioning the first (root) record.
If the desired parent record is not located, then an error is reported and hdbimport
continues processing the next record encountered in the input data file. However, if a
subtree insert is being performed, the next record is likely the child of the record that should
have been inserted for a given parent. Since this insert failed, subsequent inserts will also
fail, or the records will be inserted in the wrong location. The hdbimport utility has no choice
in the matter, because it is not always possible to determine which records that follow are
directly affected by the key position failure.

9.8.5.2 Multiple Parent Records


If a given record has multiple parent records (parents, grandparents, etc.), then multiple key
fields can be specified on the input data file record. The order of appearance of these key
fields on the record is not important.
Only those parents that uniquely determine a position are necessary. If the immediate
parent uniquely determines an insertion point, then other ancestor keys need not be
specified since they would be redundant. However, they can be used to speed up the key
lookup processing, as redundant hierarchical keys help narrow the search.
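
For instance (a hypothetical sketch using the hierarchy from section 9.9.1), if the DEVICE
key alone uniquely locates the parent, a child record line needs only that key; the remaining
ancestor keys may be added to narrow the search, but they are not required:
#record MEAS,ID,ID_DEVICE
MEAS,'M1','2L'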

9.8.6 Key Update for Hierarchical Records


To initiate key update, the hdbimport utility must be set to both key mode and update
mode — i.e., using the #keys and #update declaratives, or the -keys and -update
command-line options.
If keys are used to locate a hierarchical record, then, in addition to any parent record keys,
the record’s primary key field should also be specified (see section 9.9.2 Updating with the
#keys Declarative).
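
If the declaratives are not embedded in the input file, the equivalent command-line form
might look like the following sketch (database and file names are illustrative):
% context scada test
% hdbimport -d scadamom -update -keys -s input.txt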

9.8.7 Subscript Update for Records


To initiate subscript update, the hdbimport utility should be set to update mode with the
#update directive or the -update command-line option.
Records can be located for update using subscript values. The subscript value is specified in
the subscript field on the record (either the second field for default record format or the field
declared by the %SUBSCRIPT special field for #record defined fields). The subscript value
must be in the range of 1 to LV, or 1 to MX in the case of MaxLV and circular records.
Example:
#update
CON,1,ID='ABC',APPSUB=0,AREASUB=0,APPID=' ',AREAID=' '
CON,2,ID='STN1',APPSUB=0,AREASUB=0,APPID=' ',AREAID=' '
CON,3,ID='STN2',APPSUB=0,AREASUB=0,APPID=' ',AREAID=' '
CON,4,ID='STN3',APPSUB=0,AREASUB=0,APPID=' ',AREAID=' '
CON,5,ID='STN4',APPSUB=0,AREASUB=0,APPID=' ',AREAID=' '
This example input file uses the #update declarative to change the operation mode to
update. Then record occurrences 1 to 5 of the CON record type in the PERMIT database are
updated with the specified field values.
You can omit the #update declarative in the input data file and use the -update
command-line option instead:
% context permit habitat
% hdbimport -permit -update -s input.txt

Note: If the #update declarative is not in the input file and the -update option is not used
on the command line, then hdbimport remains in its default insertion mode. In that case,
the five record lines shown above are appended to the end of the CON record type.

9.8.8 Subscript Update for Fields
As described in section 9.3.1 Field Lines and Record Lines, a field line updates the value of a
given field, using subscripts for positioning.
Example:
^ID_CON(1)='ADMIN'
^FAM_APP_CON(2,1)='HABITAT'

9.9 Example Uses


Refer to chapter 7 hdbexport for some of the more common examples of using hdbimport:
• Export/Import in Default Mode
• Export/Import in Field Mode
• Export, Fix, and Import a Parent Pointer Field
• Export/Import a Subtree
• Export a Field for Use in Excel
• Export a Range of Data
• Export Using a Record Pattern
• Use Declaratives

9.9.1 Inserting with the #keys Declarative


This example shows the use of the #keys declarative with the help of #record, #insert, and
#update, to position to a specific record occurrence and insert data under that record.
The following subset of the SCADAMOM database’s hierarchy is used for this example:
SCADEK
+-->SUBSTN
+-->DEVTYP
+-->DEVICE
+-->MEAS
+-->ANALOG
| +-->LIMIT
+-->POINT
+-->CTRL
The input data file (input.txt) is shown:
# ---------------------------------------------------------
# Step 1: Set the declarative to '#keys' so hdbimport
# will use record keys to locate the desired
# record.
#keys

# ---------------------------------------------------------
# Step 2: Setup the record templates for the composite
# keys we want to use to locate an ANALOG
# and a POINT record.
#
# The ANALOG record's composite key is:
#
#record ANALOG,ID,ID_DEVICE,ID_DEVTYP,ID_SUBSTN,ID_SCADEK
#
#
# The POINT record's composite key is:
#
#record POINT,ID_SUBSTN,ID_DEVTYP,ID_DEVICE,ID_POINT
#
# Notes: The field order on the POINT #record line
# is changed to go from SUBSTN to POINT.
# Also, the ID_SCADEK key is not included
# since the keys SUBSTN, DEVTYP and
# DEVICE are enough to make a unique composite
# key to locate the POINT of interest.

# ---------------------------------------------------------
# Step 3: Setup the record template for the data records
# that we want to insert.
#
# For the LIMIT record, we want to set its ID and DBAND
# fields.
#record LIMIT,ID,DBAND
#
# For the CTRL record, we want to set its ID, WAIT and KEY
# fields.
#record CTRL,ID,WAIT,KEY

# ---------------------------------------------------------
# Step 4: Use the '#update' declarative to switch to
# update mode so the ANALOG record line becomes
# a line to position to a record occurrence.
# The composite search key for the analog record
# is:
# ID_SCADEK='SUBSTN'
# ID_SUBSTN='SUB_4X'
# ID_DEVTYP='XFMR'
# ID_DEVICE='2L'
# ID_ANALOG='LTC'
#update
ANALOG,'LTC','2L','XFMR','SUB_4X','SUBSTN'
#
# Step 4.1: Change to INSERT mode and insert the LIMIT
# record under the ANALOG located with the
# analog composite key.
# Here, the new LIMIT record's ID field is
# set to 'TAP' and its 'DBAND' is set to 123.
# The rest of the fields for the record are
# set to their defaults.
#insert
LIMIT,'TAP',123

# ---------------------------------------------------------
# Step 5: Now, we change back to UPDATE mode and locate
# another record occurrence but for the POINT
# record type.
#update
POINT,'SUB_2X','CB','400401','BKR'
#
# Step 5.1: Change to INSERT mode and insert the CTRL
# records under the POINT above. Two CTRL
# records are added, one with ID=TRIP and
# the other ID is CLOSE.
#insert
CTRL,'TRIP',22,44
CTRL,'CLOSE',11,33
Use the command line below to import the data into a SCADAMOM database:
% context scada test
% hdbimport -d scadamom -s input.txt

9.9.2 Updating with the #keys Declarative


This example will again use the same SCADAMOM hierarchy to illustrate the use of #keys to
update a record occurrence:
# ---------------------------------------------------------
# Step 1: Set the declarative to '#keys' so hdbimport
# will use record keys to locate the desired
# record.
#keys

# ---------------------------------------------------------
# Step 2: Setup the record templates for the composite
# keys we want to use to locate the CTRL and
# LIMIT records we want to update.
#
# The CTRL record's composite key is:
#record CTRL,WAIT,KEY,ID_SUBSTN,ID_DEVTYP,ID_DEVICE,ID_POINT,ID
#
# The LIMIT record's composite key is:
#record LIMIT,DBAND \
,ID,ID_ANALOG,ID_DEVICE,ID_DEVTYP,ID_SUBSTN,ID_SCADEK

# ---------------------------------------------------------
# Step 3: Change to update mode. Don't forget to do this,
# otherwise records are inserted rather than
# updated.
#update

# ---------------------------------------------------------
# Step 4: We will update the records we've inserted in
# the previous example.
#

CTRL,-22,-44,'SUB_2X','CB','400401','BKR','TRIP'
CTRL,-11,-33,'SUB_2X','CB','400401','BKR','CLOSE'
LIMIT,-123,'TAP','LTC','2L','XFMR','SUB_4X','SUBSTN'
#
# Notes: The 3 record lines above basically reverse
# the sign of the fields inserted in the
# previous example.

10. hdbrio
The hdbrio utility is an interactive database query tool for examining and modifying Hdb
databases. It allows the user to view, edit, insert, and delete records in an Hdb database.
The hdbrio commands enable you to manage your database effectively through online
commands. They allow you to:
• Initialize a database.
• Access information about the database and archive files, list schema information, read
and display records, and display field data contents.
• Modify records by copying, deleting, inserting, incrementing or decrementing records,
and editing field content.
• Navigate through record structures by moving up and down, finding records, and
positioning to a record.
• Request Help with commands.

Note: The format of the hdbrio chapter is somewhat different from previous chapters or
those that follow. This is because of the unique nature of hdbrio, which is a database
query language.

10.1 Starting hdbrio


At the command prompt, type the following:
hdbrio -a app -f fam db
The above command opens a database (db) in a clone specified by the application name
(app) and family name (fam). If the command succeeds, the hdbrio prompt is displayed:
[db.app.fam]
rio>
From the rio> prompt, any legal hdbrio command can be executed.

Note: When accessing a clone database, make sure hdbserver is running. If hdbserver is
not running, hdbrio will start but it cannot actually access the data in that database.

10.2 Exiting hdbrio


To exit hdbrio, type the exit command at the command prompt:
rio> exit

Note that the quit command can also be used to exit hdbrio:
rio> quit

10.2.1 Getting hdbrio Help


The hdbrio Help command provides syntax and an overview for hdbrio commands.
To get a list of all the available hdbrio commands, type in Help without an argument:
rio> help
To get more detailed help on a specific command, type Help followed by the command
name:
rio> help [command]
The following example provides help on the hdbrio position command:
rio> help position
The hdbrio command reference describes each of the hdbrio commands in more detail.

10.3 Syntax Considerations


When issuing hdbrio commands, remember the following syntax items:
• Comments: C++-style comments (i.e., //) are ignored by the parser when they appear at
the beginning of a line. To add a comment at the end of an hdbrio command, use a
semicolon to separate the command and the comment.
• White (blank) spaces: Additional spaces and tabs are ignored.
• Multiple commands: The semicolon (;) is used to separate commands in the same
command line.
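
The following short sketch illustrates these rules (the record name is illustrative):
rio> // a comment at the start of a line is ignored
rio> p substn=1 ; r ; // two commands, then a trailing comment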

10.4 Using an hdbrio Script


The hdbrio commands can be executed in batch mode by using the -i option to specify a
script file.
A script file can contain any valid hdbrio command, including comments. The following is an
example of an hdbrio script file:
//
// hdbrio Script Example: script1.rio
//
dbopen -f dts -a scada -d scadamom ; // open db
list ; //list records
find substn=junko ; //find junko substation
del ; //delete junko
insert devtyp ; //insert new devtyp
read ; //display record
/id=newdev ; //assign id field a value
find devtyp=newdev; r ; //find devicetype & display
quit ; //exit
To run this script, the following command can be used:
% hdbrio -i script1.rio

Note: Because the script file does its own hdbrio access, the command-line options do not
affect the script. If the command-line options provide invalid data (e.g., an invalid Hdb
database name), hdbrio outputs an error message but continues to execute the script.

10.5 Using hdbrio Interactively


To start an interactive hdbrio session, set the context and start hdbrio with the database as
the argument.
To place hdbrio in interactive mode, at the Habitat system prompt, type the following
command:
% context scada ems
% hdbrio scadamom
This results in the hdbrio prompt being displayed:
[SCADAMOM.SCADA.EMS]
rio>
You can also specify a specific clone context using the -a (application) and -f (family)
options. The following example places hdbrio in interactive mode and opens the scadamom
database under the DTS family of the Scada application:
% hdbrio -f dts -a scada scadamom
[SCADAMOM.SCADA.DTS]
rio>
When hdbrio opens the database, the rio> command prompt is displayed. If the database
cannot be opened, an error message is displayed (e.g., database not found) and you are
returned to the system prompt.

10.6 Working with Multiple Databases


When hdbrio is first started from the command prompt, there is only one database opened.
The hdbrio prompt indicates this active database. For example, the following command
opens a clone database:
% hdbrio -f dts -a scada scadamom
[SCADAMOM.SCADA.DTS]
rio>
The hdbrio utility can open a database stored in a clone archive file. This example shows
how an archive file database stored in the file scada.arc is opened:
% hdbrio -archive scada.arc scadamom
[scada.arc:SCADAMOM]
$rio>

It is possible to have more than one database opened at one time in hdbrio. With multiple
databases opened, you can copy records between these databases.

10.6.1 Listing all Open Databases


To find out which databases are currently opened in hdbrio, use the DBSET command
without any argument. For example:
[SCADAMOM.SCADA.DTS]
rio> dbset
ID Database
==== ========
[1] scada.arc:SCADAMOM
[2] SCADAMOM.SCADA.DTS

Use 'dbset <numeric id>' to switch database

[SCADAMOM.SCADA.DTS]
rio>
This output shows that two SCADAMOM databases are opened. The first database is a
database in the archive file scada.arc. The second database is from the clone with
application SCADA and family DTS.

10.6.2 Opening a Database


To add a database to the open list of databases, use the DBOPEN command. Two variants
of this command exist: one is for opening a database in an archive file, and the other is for
opening a database in a clone.
To open a database in a clone, use the following syntax:
dbopen -a <application> -f <family> -d <database>
Below is an example that opens the SCADAMOM database in the SCADA/EMS clone:
[PROCMAN.PROCMAN.HABITAT]
rio> dbopen -a scada -f ems -d scadamom

[SCADAMOM.SCADA.EMS]
rio>
If DBOPEN is successful, the newly opened database becomes the active database and the
hdbrio prompt shows this. All commands are directed to this newly opened active database
until the user switches to another database or exits hdbrio.
To open a database in an archive file clone, use the following syntax:
dbopen -r <archive file> -d <database>
Below is an example that opens the SCADAMOM database in the archive file “scada.arc”:
[PROCMAN.PROCMAN.HABITAT]
rio> dbopen -r scada.arc -d scadamom

[scada.arc:SCADAMOM]
$rio>
Again, the prompt is changed to the newly opened database.

10.6.3 Switching Between Databases


To switch between databases, use the DBSET command. First, find the list of opened
databases (use DBSET without any argument, or use LIST -d). Then, switch to the desired
database using the following syntax:
dbset <database number>
The database number is the number beside the database in the listing.
For example, the following sequence shows how to switch to the SCADAMOM database in
the SCADA/DTS clone from the PROCMAN database (the active database):
[PROCMAN.PROCMAN.HABITAT]
rio> dbset
ID Database
==== ========
[1] scada.arc:SCADAMOM
[2] SCADAMOM.SCADA.DTS
[3] PROCMAN.PROCMAN.HABITAT

Use 'dbset <numeric id>' to switch database

[PROCMAN.PROCMAN.HABITAT]
rio> dbset 2

[SCADAMOM.SCADA.DTS]
rio>

10.6.4 Closing a Database


To close a database and remove it from the list of opened databases, use the DBCLOSE
command. The syntax for this command is:
dbclose <database number>
For example, to remove the PROCMAN database from the list of opened databases in the
previous example, use the following sequence:
[SCADAMOM.SCADA.DTS]
rio> dbclose 3
ID Database
==== ========
[1] scada.arc:SCADAMOM
[2] SCADAMOM.SCADA.DTS

[SCADAMOM.SCADA.DTS]
rio>

Hdbrio will not allow a user to remove the active database from the list. In this example, you
will not be able to remove SCADAMOM.SCADA.DTS because you are interacting with it (the
prompt shows that). Therefore, you must switch to the other database before removing
SCADAMOM.SCADA.DTS from the list.

10.7 Indicating hdbrio's Record Position


The hdbrio input prompt indicates the hdbrio record position. When hdbrio is executed, the
current record position is null and the rio> prompt is displayed. The following example
illustrates this point:
% hdbrio -f dts -a scada scadamom
[SCADAMOM.SCADA.DTS]
rio> pos analog=11
ANALOG(11)>

10.8 Accessing an Hdb Archive File


When accessing an Hdb archive file, hdbrio’s prompt begins with a “$”.
The following example illustrates the change in the hdbrio prompt once the scadamom file
has been opened:
% hdbrio -archive scada.car scadamom
[scada.car:SCADAMOM]
$rio> pos substn=1
[scada.car:SCADAMOM]
$SUBSTN(1)> down 2
$SUBSTN(1). . .DEVICE(1)>

10.9 Navigation
The hdbrio commands “up” and “down” can be used to navigate within a record subtree
after the hdbrio position has been established. When navigating within a subtree, the rio>
prompt reflects the following:
• The root record of the subtree to which the user is positioned.
• The current record within the subtree where hdbrio is currently positioned.
The following example illustrates this point:
% hdbrio -a scada -f ems scadamom
rio> pos substn=1
SUBSTN(1)> down 2
SUBSTN(1). . .DEVICE(1)> where
SCADEK(8) = SUBSTN
SUBSTN(1) = SUB_1X
DEVTYP(1) = RTU
DEVICE(1) = STA_REM_LOC
SUBSTN(1). . .DEVICE(1)>

After issuing the “pos substn=1” command, the prompt indicates that hdbrio is positioned at
the first record in SUBSTN.
After the “down 2” command, the prompt indicates that hdbrio is positioned at the first
DEVICE under the substation SUB_1X and device type RTU (as displayed using the “where”
command).

10.9.1 Navigating a Subtree


The hdbrio up and down commands enable navigation of a record subtree relative to
hdbrio’s anchor record and current position record. The “anchor record” is the record most
recently positioned to.
Note that the up and down commands can only be used with hierarchical records.
To navigate within a record:
1. Position to a hierarchical record using hdbrio’s position or find command.
The hdbrio’s position record serves as an anchor for navigating the subtree. The subtree
consists of all children under the record.
2. Use the hdbrio down command to navigate down the subtree.
3. Use the hdbrio up command to navigate up the subtree.

Note: You can only navigate down to the last record position.
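
A sketch of such a session is shown below (record names follow the earlier example; the
exact prompts are illustrative):
rio> pos substn=1
SUBSTN(1)> down 2
SUBSTN(1). . .DEVICE(1)> up
SUBSTN(1). . .DEVTYP(1)> up
SUBSTN(1)>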

10.10 Changing Current Record Position


The position command changes hdbrio’s current record position by specifying a record type
and subscript.
The format for the hdbrio position command is:
rio> position record[=subscript]
The following example changes the record position to the 25th position of a record named
“ANALOG”:
rio> position ANALOG=25

10.11 Inserting Records


The hdbrio insert command inserts a record. The insert rules depend on the following two
items:
• The type of record being inserted (non-hierarchical, hierarchical, or free).
• The record’s current position.
The format of the insert command is:
rio> insert [options] [record]

The following example positions hdbrio to the first substn record, and then inserts three
records after the current record position. After the insertion, the current record position is
set to the position of the first newly created record; in this case, it is record number 2:
rio> p substn = 1
SUBSTN(1)> insert -n 3
SUBSTN(2)>

10.11.1 Inserting Non-Hierarchical Records


Non-hierarchical records can be inserted at any time, regardless of hdbrio’s current record
position. If the record type is not specified in the insert command, then the record is
inserted immediately after the current record. For example:
% hdbrio -a scadamdl -f offline scadamom
[SCADAMOM.SCADAMDL.OFFLINE]
rio> p accum=1 // accum is non-hierarchical
ACCUM(1)> insert // insert accum(2)
ACCUM(2)> // change current position
When you are positioned at a hierarchical record and you issue an insertion of a non-
hierarchical record, then the record is inserted at LV+1. The example below shows that, if
you are positioned at SUBSTN (which is hierarchical), then issuing an insertion by explicitly
specifying a record type that is non-hierarchical (ACCUM in this case) will insert the new
record at LV+1:
% hdbrio -a scadamdl -f offline scadamom
[SCADAMOM.SCADAMDL.OFFLINE]
rio> p substn=1 // substn is hierarchical
SUBSTN(1)> i accum // insert accum at LV+1
ACCUM(3)> // change current record

10.11.2 Inserting Hierarchical Records


Inserting hierarchical records depends on hdbrio’s current record position and the record
type being inserted. A root parent hierarchical record can be inserted regardless of the
current record position, but it must be inserted before any of its children. For example:
% hdbrio -a scadamdl -f offline scadamom
[SCADAMOM.SCADAMDL.OFFLINE]
rio> i scadek // scadek (root parent)
SCADEK(1)> // change current position
The first child must be inserted after its parent, and hdbrio must be positioned to the
desired parent record. For example:
% hdbrio -a scadamdl -f offline scadamom
[SCADAMOM.SCADAMDL.OFFLINE]
rio> p scadek=1 // scadek (parent of substn)
SCADEK(1)> i substn // insert first child
SUBSTN(1)> // change current record

After the first child record is inserted, subsequent child records can be inserted regardless
of hdbrio’s current record. When this occurs, the new child record is inserted after the last
(LV) child record of that type, and the new child takes the same parent as that LV child. For
example:
% hdbrio -a scadamdl -f offline scadamom
[SCADAMOM.SCADAMDL.OFFLINE]
rio> p scadek=2 // scadek (parent of substn)
SCADEK(2)> i substn // insert substn child
SUBSTN(9)> i devtyp // insert devtyp which is LV
DEVTYP(12)> p accum=1 // position to first accum
ACCUM(1)> i devtyp // insert devtyp after LV
DEVTYP(13)> // change current record

10.11.3 Inserting FREE Records


Inserting FREE records is similar to inserting child records, except that hdbrio’s current
record is either the parent of the FREE record or another FREE record. If the current record
is hierarchical, then the FREE record is inserted as a child of the current record. If the
current record is a FREE record, then the record is inserted as a sibling of the current FREE
record.
In the following example, TEXT is defined as a FREE record type in the scadamom database:
% hdbrio -a scadamdl -f offline scadamom
[SCADAMOM.SCADAMDL.OFFLINE]
rio> p SUBSTN=1 // SUBSTN is a hierarchical
SUBSTN(1)> i text // free child of SUBSTN(1)
TEXT(1)> i text // insert sibling of TEXT(1)
TEXT(2)> // change current position.

10.12 Copying Records


The commands DBCOPY and DBPASTE can be used to copy records.
To copy records from one database to another, use the DBCOPY, DBSET, and DBPASTE
commands.

10.12.1 Copying Records in the Same Database


One or more records can be copied using the hdbrio dbcopy command.
A record must be marked for copy. The following example marks the record at substn(1)
and its child subtree for copy:
rio> p substn=1 // position to record
SUBSTN(1)> dbcopy -s // mark SUBSTN(1) and its subtree
The following example shows how to paste the marked record:
rio> p substn=5 // position to record
SUBSTN(5)> dbpaste // paste the records
SUBSTN(5)

The new records are inserted as SUBSTN(6).
If you do not want to copy the subtree, remove the -s option when marking the record.

10.12.2 Copying Records from One Database to Another


The procedure for copying records from one database to another database is outlined
below:
1. Use the DBOPEN, DBSET, and/or POSITION commands to position to the desired source
database and record.
2. Use the DBCOPY command with the relevant options to mark the number of records to
copy.
Some of the options are: include subtree (-s), skip multidimensional fields (-m), etc. For
more information about the supported options, refer to the hdbrio command reference.
3. Use the DBOPEN, DBSET, and/or POSITION commands to position to the desired
destination database and record location.
4. Use the DBPASTE command to copy the data from the source to the destination.
Following is an example of copying two SUBSTN records and their subtrees, from the
SCADAMOM database of the SCADAMDL/DTS clone to the SCADAMOM database of the
SCADA/EMS clone:
[SCADAMOM.SCADAMDL.DTS]
rio> p substn=5
SUBSTN(5)> dbcopy -s 2
Source Database = SCADAMOM.SCADAMDL.DTS
Source Record = SUBSTN(5)
Copy Count = 2
Copy Flags = SUBTREE

SUBSTN = 2
DEVTYP = 15
DEVICE = 31
MEAS = 33
POINT = 51
CTRL = 36
ANALOG = 57
LIMIT = 17
COUNT = 2
ALGREF = 1
SETPNT = 1

SUBSTN(5)> dbopen -a scada -f ems -d scadamom

[SCADAMOM.SCADA.EMS]
rio> p substn

[SCADAMOM.SCADA.EMS]
SUBSTN(1)> dbpaste -s 2

PASTE RECORDS
Source Database = SCADAMOM.SCADAMDL.DTS
Source Record = SUBSTN(5)
Destination Database = SCADAMOM.SCADA.EMS
Destination Record = SUBSTN(1)
Copy Count = 2
Copy Flags = SUBTREE

[SCADAMOM.SCADA.EMS]
SUBSTN(1)>

10.13 Deleting Records


The delete command removes one or more records (and the subtree if the record is
hierarchical). If more than one record is to be deleted, then records are deleted starting with
the current record. After the deletion, the current record position becomes the current
subscript minus one (1).
The format of the delete command is:
rio> delete [options]
The following example deletes 25 records:
rio> p substn=5
SUBSTN(5)> delete -n 25
SUBSTN(4)
If the number of records to be deleted exceeds the current LV, no records are deleted. This
is treated as a syntax error.

10.14 Positioning to a Record Using Key Field Value


The hdbrio find command can be used to position to a record with a specific key field value
or a composite key value.
If specifying a composite key, hdbrio builds an internal collection of records that satisfies
the composite key values. The current record becomes the first record to satisfy the
composite key constraint.
If another find command is then executed without a key, navigation continues within the
internal collection of records built by the keyed find command. For example, assume the
following record hierarchy:
A(1)=AA
+
+----B(1)=BB
+
A(2)=A2
+
+----B(2)=B2
+
A(3)=AA
+
+----B(3)=BB
Executing a keyed find command, followed by a find command without a key, results in the
following:
rio> find a=aa,b=bb // find composite key
b(1)> find // find next
b(3)>
The order of the key and value pairs is immaterial.

10.15 Incrementing the Current Record


The plus (+) command increments the current record position by the number specified in
the command. The format of the command is:
rio> + [number]
The following example increments the current record by 100 records:
rio> + 100
The type of record influences how the position is incremented. For hierarchical records, the
current record position can be incremented up to the last record of that type within the
subtree defined by hdbrio’s anchor record. For non-hierarchical records, the position can be
incremented by subscript up to the record’s LV.

10.16 Decrementing the Current Record


The minus (-) command decrements hdbrio’s current record position by the number
specified in the command. The format of the command is:
rio> - [number]
The following example decrements the current record by 50 records:
rio> - 50
The type of record influences how the position is decremented. For hierarchical records, the
current record position can be decremented down to the first record subscript within the
subtree defined by hdbrio’s anchor record. For non-hierarchical records, the position
subscript can be decremented down to one.

10.17 Reading a Record and Displaying its Contents


The read command reads a record and displays its contents. The format for the hdbrio read
command is:
rio> read [options] [record = subscript]
The following example reads the 15th occurrence of a record named “SUBSTN” and displays
its contents in decimal format:
rio> read -d SUBSTN=15

10.18 Displaying or Editing Record Fields
The slash (/) command is used to display and edit record fields. The field name must be
specified. The format of the command is:
rio> /fieldname(subscript1,subscript2,subscript3)
The field name can be entered in one of two ways:
• The content portion of the field name (e.g., id) can be entered.
If specifying the content portion of the field name, then subscripts are optional.
Or:
• The fully qualified field name (e.g., id_analog) can be entered.
If specifying the fully qualified field name, then subscripts are required. The fully
qualified field name is the only way to display or edit multidimensional fields.
For 2-D and 3-D fields, subscripts are used to identify the primary, secondary, and tertiary
dimensions of a field. The following example illustrates the relationship between the
subscripts in a field name and the corresponding record:
id_prim (1) // prim is primary field
d2_sec_prim(1,2) // sec is secondary field
d3_tert_sec_prim(1,2,3) // tert is tertiary field

Note: In the above example, one (1) is a primary record subscript, two (2) is a secondary
record subscript, and three (3) is a tertiary record subscript.

Subscripts can take one of the following three forms:


• Numbers
• Asterisk (*)
• Subrange (4:6)
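
For example (using the hypothetical field names shown above):
rio> /id_prim(5) // a single occurrence
rio> /id_prim(*) // all occurrences
rio> /d2_sec_prim(4:6,1) // primary subscripts 4 through 6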

10.18.1 Displaying the Contents of a Record's Field


The field display command shows the contents of a field within either one instance of a
record or all instances of a record. The format of the field display command is:
rio> / [option] [field]
The following example displays the contents of a record’s fields and then edits one of the
fields:
[NIOSRVV.NIOSERVE.DTS]
rio> p path=2 // position to record 2 of PATH
[NIOSRVV.NIOSERVE.DTS]
PATH(2)> / // display the record
Display of record PATH(2)
===========================================
BitContainer Q_PATH = B*4
Bit: 0 NOTINUSE_PATH = False
Bit: 1 ALIAS_PATH = False
Bit: 2 DNCAP_PATH = False
Bit: 3 IPCAP_PATH = True
Bit: 4 X1_PATH = False
Bit: 5 X2_PATH = False
Bit: 6 X3_PATH = False
Char*52 ADSTOBJ_PATH =
Char*52 ASRCOBJ_PATH =
Char*64 IPHOST_PATH = PC1256
Char*12 NAPNAME_PATH = NIO00001454
Char*52 VDSTOBJ_PATH = .ALARM_DTS_93
Char*52 VSRCOBJ_PATH = .ALARM_3400002_
PROCMAN_PC1256_93
Integer*2 PATHID_PATH = 16401
Char*6 NODE_PATH =

[NIOSRVV.NIOSERVE.DTS]
PATH(2)> /napname // display the NAPNAME field
Char*12 NAPNAME_PATH(2) = NIO00001454

[NIOSRVV.NIOSERVE.DTS]
PATH(2)> /iphost=pc7890 // edit the IPHOST field

[NIOSRVV.NIOSERVE.DTS]
PATH(2)> /iphost // display the change
Char*64 IPHOST_PATH(2) = PC7890
Note that bolded characters are user input.

10.18.2 Changing the Contents of a Record's Field


The field edit command changes the contents of a field within either one instance of a
record or all instances of a record. The format of the field edit command is:
rio> / [option] field = value
The following example changes the field id_devtyp(2) to a value of rline:
rio> /id_devtyp(2)=rline

10.18.3 Setting Field Values to Habitat Null


In Habitat, a special byte pattern is used to represent the NULL value: a series of bytes with
the hexadecimal value 0x80 (i.e., 128 decimal).
To set a numeric field to null, you can use the hexadecimal option of hdbrio’s slash
command to set the field to the corresponding pattern.
Example:
rio> / -h i1_r(1)=0x80
rio> / -h i2_r(1)=0x8080
rio> / -h i4_r(1)=0x80808080
rio> / -h r4_r(1)=0x80808080
rio> / -h r8_r(1)=0x8080808080808080
In the example, i1_r is an I*1 field, i2_r is an I*2 field, and so on; r4_r and r8_r are R*4 and
R*8 fields, respectively.

10.19 The hdbrio Commands


The following table lists hdbrio commands. For more information about command
arguments and options, refer to chapter 13 Hdb Utilities Reference.

Table 11. Hdbrio Command Summary


Command Function
backup Back up replicated data in a database.
dbcopy Mark records for copy in the same or a different database.
dbclose Close a database.
dbopen Open a database.
dbpaste Paste records marked by dbcopy.
dbset Set the active database for interaction.
delete Delete the current record and its subtree.
down Move down through the current record subtree.
echo Echo input.
exit Quit the hdbrio command prompt.
find Position to a record by key field value.
help Get help on all hdbrio commands or by command.
insert Insert a record at the current record position.
list Display Hdb database and record information.
position Go to the record position and make that the current record position.
quit Exit the hdbrio command prompt.
read Scan and display the record listed.
reset Re-establish invalid pointers in a database.
setstamp Set the time stamp for an Hdb object.
up Move up through the current record subtree.
verify Verify the pointer integrity of the database.
where Show the current record position within the record hierarchy.
zero Initialize a database by deleting all records in the database.
+ Increment the current position by the number specified.
- Decrement the current position by the number specified.
/ Display and/or edit a record’s field content.

11. hdbcompare
The hdbcompare utility compares two Hdb database instances to report the differences
between them. The comparison results are reported in two files: a summary log file and a
detailed difference file.

11.1 Comparison Capability


The database instances to be compared can exist in any combination of the following
container types:
• A savecase containing one or more different databases, one of which is the target type
(-sf1, -sf2 options)
• A clone containing one or more databases, one of which is the target type (-s1, -s2
options)
• An e-terramodeler archive containing the savecase of the database to compare (-sz1,
-szf1, -sz2, -szf2 options)
Options with a “1” indicate the container type before changes are made, while options with
a “2” indicate the container type after changes are made. For example, use -sf1
<savecase> to specify the savecase before changes are made, and use -sf2
<savecase> to specify the savecase after changes are made.
Limit the comparison to those database fields that have the “MODELER” attribute in the
database schema by using the -modeling_only option. If this option is not selected, the
default behavior will be to compare all fields in the database.
Use the -description_file option to provide a user-friendly description of the field
name in the detailed difference output file.
Use the -context_file option to specify the ITEMS fields of the two databases to show in
the comment section of the detailed difference output file. This allows for a more
meaningful comment header in the output file.
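
As a sketch, a savecase-to-savecase comparison using these options might be invoked as
follows (all file names are hypothetical; see chapter 13 Hdb Utilities Reference for the
complete option list):
% hdbcompare -sf1 case_before -sf2 case_after -modeling_only -description_file desc.csv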

11.2 Comparison Limitations


The hdbcompare utility has the following limitations:
• Comparisons are limited to records that have a key field and the ITEMS record type.
• Comparisons of fields in multi-dimensional tables are not supported.
• Comparisons of partial savecases are not supported.

11.3 Detailed Comparison Output File
The detailed difference of a comparison is created in a CSV (comma separated value) file.
At the top of the CSV file are comments about the comparison. A comment line is a line that
starts with a “#” character. The header comments include the following information:
• Database 1: <name and source of database 1>
If an ITEMS field context file is given on the command line using the -context_file
option, the values of those fields from database 1 will be displayed.
• Database 2: <name and source of database 2>
If -context_file is specified, the values of those fields from database 2 will be
displayed.
• Date/time of the comparison
The first non-comment entry in the CSV file is a line of column captions for difference
records to follow. Each difference record within the file will have the same number of
columns. The columns included in each difference record are as follows:
• Type: I (for insert), D (for delete), and M (for modify).

Note: If a record is renamed, it will look like a deletion and an insertion.

• Field Name: The field name as defined in the database schema (e.g., BMAGSAT_XF).
• Composite Identifier: This is the composite ID of the record. The composite ID includes
the key fields for all of the parents to the target record plus the target record itself. By
default, the components in the composite key fields are separated by a forward slash.
For an ITEMS field, this column is blank.

Note: The delimiter can be configured to something other than a forward slash
through the -delimiter command-line option.

• Value 1: This is the value as it occurred in the first database. It will be blank if the entry
pertains to an insert.
• Value 2: This is the value as it occurred in the second database. It will be blank if the
entry pertains to a delete.
• Field Description: Descriptive text if available from the optional field description
configuration file — e.g., Slope of Magnetization Curve (%). If the -description_file
option is not used or if the field is not defined in the description file, then this field is
blank.
At the end of the CSV file are comment lines summarizing the number of difference entries
that are found in the two databases.

11.3.1 Indirect Field Differences
An indirect field is a field that holds the subscript of another record in the database.
Indirect fields are often used to define many-to-one relationships in the database.
Presenting changes to indirect fields as a subscript would not be very user-friendly. In
addition, simply comparing the subscripts might falsely indicate a difference when, in fact,
the only change is that one of the databases has more of the target record than the other.
What the user really wants to see is whether the record that the indirect field points to has
changed — i.e., whether it has a different key field.
When comparing indirect fields, the hdbcompare utility does the following:
• It follows the indirect pointer and compares the key field of the target record.
• If the key field is different on the two records, then it will report a difference in the output
file displaying the composite key values instead of the indirect subscript.
If the indirect pointer points to a record that does not have a key field, the comparison falls
back to a numeric comparison of the indirect pointer value.

11.3.2 Sample CSV Output File


A partial sample output CSV file is shown here:
#
# Database 1 (DB1) : GENMOM [SAVECASE] case_genmodel_ade.ems20040521_14
# GENMOM Standalone validation status = TRUE
# GENMOM Standalone validation time = 21-May-2004 12:09:12
# GENMOM AGC validation status = TRUE
# GENMOM AGC validation time = 21-May-2004 12:09:12
# GENMOM to NETMOM validation status = TRUE
# GENMOM to NETMOM validation time = 21-May-2004 12:09:12
# GENMOM to SCADAMOM validation time = 21-May-2004 12:09:12
#
# Database 2 (DB2) : GENMOM [SAVECASE] case_genmodel_ade.ems20040621_14
# GENMOM Standalone validation status = TRUE
# GENMOM Standalone validation time = 18-Jun-2004 12:16:45
# GENMOM AGC validation status = TRUE
# GENMOM AGC validation time = 18-Jun-2004 12:16:45
# GENMOM to NETMOM validation status = TRUE
# GENMOM to NETMOM validation time = 18-Jun-2004 12:16:45
# GENMOM to SCADAMOM validation time = 18-Jun-2004 12:16:45
#
# Compare Date/Time : 2005-07-19 14:51:06
#
Type, Field Name, Composite ID, From, To, Field Description
M,TAGC_ITEMS,,21-May-2004 12:09:12,18-Jun-2004 12:16:45,
M,TNETMOM_ITEMS,,21-May-2004 12:09:12,18-Jun-2004 12:16:45,
M,TSCADA_ITEMS,,21-May-2004 12:09:12,18-Jun-2004 12:16:45,
M,TSTAND_ITEMS,,21-May-2004 12:09:12,18-Jun-2004 12:16:45,
M,TVALIDGM_ITEMS,,21-May-2004 12:07:39,18-Jun-2004 12:15:11,
M,TVALID_ITEMS,,21-May-2004 12:07:39,18-Jun-2004 12:15:11,
M,TNOW_ITEMS,,21-May-2004 12:07:39,18-Jun-2004 12:15:11,
M,CAPMX_OPA,TOPOLOGY/ERCOT,102831,102896,
M,CAPMX_PLC,TOPOLOGY/ERCOT/NLSES/UNIT2,135,200,
M,AECOMX_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,121,182,Advisory ECO Maximum (MW)
M,CURCAP_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,121,182,Current Capacity (MW)
M,ECOMX_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,121,182,ECO Maximum (MW)
M,LFCMX_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,121,182,LFC Maximum (MW)
M,RVSPMX_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,121,182,Max Spinning Reserve (MW)
M,AECMXREF_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,135,200,Advisory ECO Max Reference
(MW)
M,CURCPREF_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,135,200,Current Capacity Reference
(MW)
M,ECOMXREF_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,135,200,ECO Max Reference (MW)
M,LFCMXREF_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,135,200,LFC Max Reference (MW)
M,CAPMX_UNIT,TOPOLOGY/ERCOT/NLSES/UNIT2/UNIT2,135,200,Maximum Capacity (MW)
M,I__QSE_PL,TOPOLOGY/ERCOT/BASTEN,TOPOLOGY/0544813412100,TOPOLOGY/9630189812200,
M,I__QSE_PLC,TOPOLOGY/ERCOT/BASTEN/GTG-
1100,TOPOLOGY/0544813412100,TOPOLOGY/9630189812200,

M,OBJSSCR_ZOUT,1451592402041,53418,53567,
M,OBJSSCR_ZOUT,1652310304028,53249,53398,
M,OBJSSCR_ZOUT,11573901050216,50407,50545,
M,OBJSSCR_ZOUT,16272708080342,52972,53119,
M,OBJSSCR_ZOUT,1613363105028,53968,54119,
I,ID_ZOUT,11293525050421,,,
M,OBJSSCR_ZOUT,1652310304026,53949,54098,
M,OBJSSCR_ZOUT,16523103040210,54010,54161,
#
# Records inserted = 262
# Records deleted = 0
# Fields modified = 3229
#

11.4 Field Description Configuration File


The -description_file command-line option specifies a field description configuration
file. The content of the field description configuration file specifies the descriptive text of a
database field to show in the difference CSV file. The descriptive definitions are keyed based
on the database, record, and field name that the description applies to.
When outputting the differences to the CSV file, the hdbcompare utility consults this file to
look for field names with the descriptive text defined. If such a field is found, the
corresponding descriptive text is displayed in the “Field Description” column in the output
CSV file.
The field description file is itself a CSV file. Each line of the file has the following items:
database, recordtype, field_name, and description:
• Database: Identifies the name of the database, e.g., NETMOM.
• Recordtype: Identifies the record type that the field is a member of. If the field is a global
item, then the record type is “ITEMS”.

• Field_name: Identifies the field.
• Description: The descriptive text for the field. This cannot contain commas and it cannot
exceed 100 characters.
Fields from more than one database can be included in the same file. Comment lines in the
file start with the “#” character.
A partial sample of the field description file is shown here:
# Sample Field Description File
#
GENMOM,CZONE,ID_CZONE,Name
GENMOM,CZONE,USETLAM_CZONE,Use Telemetered Lambda
GENMOM,DAYS,I$INADPF_DAYS,Type
GENMOM,DAYS,ID_DAYS,Day
GENMOM,FUELTY,DESCRIPT_FUELTY,Description
GENMOM,FUELTY,EFF_FUELTY,Fuel Efficiency
GENMOM,FUELTY,ID_FUELTY,Name
GENMOM,FUELTY,MBTUPU_FUELTY,MBTU (pu)
GENMOM,FUELTY,OUTCO2_FUELTY,Amount of CO2 per MBTU
GENMOM,FUELTY,OUTNOX_FUELTY,Amount of NOX per MBTU
GENMOM,FUELTY,OUTSO2_FUELTY,Amount of SO2 per MBTU
GENMOM,FUELTY,PRICEPU_FUELTY,Price Per Unit
GENMOM,FUELTY,UNIT_FUELTY,Unit of Measurement
GENMOM,GTYPE,EDABLE_GTYPE,Dispatchable?
GENMOM,GTYPE,EXCTGEN_GTYPE,Exclude This Type from GTY Total
GENMOM,GTYPE,ID_GTYPE,Name
GENMOM,INAD,CAT_INAD,Inadvertent Account Types
GENMOM,INADPF,H1$INAD_INADPF,Hour 1
GENMOM,INADPF,H10$INAD_INADPF,Hour 10
GENMOM,INADPF,H11$INAD_INADPF,Hour 11
GENMOM,INADPF,H12$INAD_INADPF,Hour 12
GENMOM,INADPF,H13$INAD_INADPF,Hour 13
NETMOM,MONGRP,ID_MONGRP,Name
NETMOM,MONGRP,ILOG_MONGRP,Log Transfer Interface Violation
NETMOM,MONGRP,ITOLALM_MONGRP,Transfer Interface Monitoring Alarm Tolerance
NETMOM,MONGRP,ITOLDBD_MONGRP,Transfer Interface Monitoring Deadband
NETMOM,MONGRP,ITOL_MONGRP,Transfer Interface Monitoring Warning Tolerance
NETMOM,MONGRP,NALM_MONGRP,Alarm Angle Pair Violation
NETMOM,MONGRP,NDIS_MONGRP,Display Angle Pair Violation
NETMOM,MONGRP,NLOG_MONGRP,Log Angle Pair Violation
NETMOM,MONGRP,NPNORMPC_MONGRP,Angle Pair Monitoring DB (Contingency Case)
NETMOM,MONGRP,NPPBI_MONGRP,Angle Pair Monitoring DB(Base Case)
NETMOM,MONGRP,NPTOLALM_MONGRP,Angle Pair Monitoring Alarm Tolerance
NETMOM,MONGRP,NPTOLDBD_MONGRP,Angle Pair Monitoring Deadband
NETMOM,MONGRP,NPTOL_MONGRP,Angle Pair Monitoring Warning Tolerance
NETMOM,MONGRP,PRIORTY_MONGRP,Enforced Priority
NETMOM,MONGRP,VALM_MONGRP,Alarm Voltage Violation
SCADAMOM,ANALOG,ALLOWSEL_ANALOG,Allow SELECT Step
SCADAMOM,ANALOG,AUTOACK_ANALOG,Autoacknowledge on Return-to-Normal
SCADAMOM,ANALOG,AUTOSERP_ANALOG,Automatic SE Replacement
SCADAMOM,ANALOG,BKUPSITE_ANALOG,Fed By Backup Site and RTU
SCADAMOM,ANALOG,BYPASS_ANALOG,Bypass Entry
SCADAMOM,ANALOG,CLMPDBND_ANALOG,Zero Clamping Deadband
SCADAMOM,ANALOG,DELAY_ANALOG,Alarm Delay Time
SCADAMOM,ANALOG,DISPRIOR_ANALOG,Tabular Display Priority
SCADAMOM,ANALOG,FLIP_ANALOG,Negate Value Before Processing in SCADA Calculation
SCADAMOM,ANALOG,HDRDBAND_ANALOG,Historical Recording Deadband (%)
SCADAMOM,ANALOG,HDR_ANALOG,Record Historical Data
SCADAMOM,ANALOG,HIREAS_ANALOG,High Reasonability Limit

11.5 Database Context Field Configuration File


When comparing instances of a database on a regular basis, it may be useful to extract
selected ITEMS field data from the databases for inclusion in the CSV output file, to provide
some hint of the content in the database. The database context field configuration file
provides a mechanism for you to identify global item fields whose values are output as
beginning comments in the output file. The -context_file command-line option is used
to specify this configuration file.
This database context field configuration file identifies global items within a given database
that should be included in the comparison output header data. The intent is to allow data,
such as titles and validation timestamps from the databases being compared, to be
included in the output, as this data can help a modeling engineer further identify what was
compared.
The configuration file is a simple CSV file consisting of three columns:
• Database Name: The name of the database (e.g., NETMOM, SCADAMOM, etc.).
• Field Name: This is an “items” field to be read and formatted.
• Field Description: If specified, this text replaces the database field name and becomes
the caption for the field in the header. If this value is not specified (i.e., if it is blank), it still
must be preceded by a comma.
More than one field can be identified for a given database. Values for many databases can
be included in the same file (the intent is that a single file will be maintained on a given
system rather than many).
Comment lines in the file start with the “#” character.
An example file is shown here:
#++
# Sample Database Context Field File
#--
GENMOM, OKSTA, GENMOM Standalone validation status
GENMOM, TSTAND, GENMOM Standalone validation time
GENMOM, OKAGC, GENMOM AGC validation status
GENMOM, TAGC, GENMOM AGC validation time
NETMOM, TVALNMD,
NETMOM, NMVALVER, NETMOM Validation Version
SCADAMOM, VFYVALID,SCADA Verify Completed without errors
SCADAMOM, VFYMOM, SCADAMOM Validation Time
SCADAMOM, ENTSTAMP, SCADAMOM Data entry timestamp
SCADAMOM, UPDSTAMP, SCADAMOM Update timestamp

12. System Administration
This chapter describes Hdb system administration functions and tasks. The tasks
performed by the system administrator involve the setup, update, and maintenance of the
Habitat Hdb database management system.
The following topics are discussed in this chapter:
• System administration tasks
• Hdb files and on-disk structure

12.1 System Administration Tasks


This section describes a number of common tasks performed by the system administrator.
Tasks discussed include:
• Startup and shutdown
• Clone administration
• Clone directory administration

12.2 Startup and Shutdown


Managing startup and shutdown of Hdb is a system administrator task. Hdb startup and
shutdown are often configured into the overall Habitat startup and shutdown tasks. They
are typically not performed in standalone mode.

12.2.1 About hdbserver and Online/Offline Clones


The hdbserver program is responsible for initializing the memory-resident cloning database
and for registering clones for online access by application programs.
The Hdb database management system must be placed online before application
programs can access Hdb clone-resident databases. In this case, the term “online” refers to
the following:
• The hdbserver program is executing.
• The memory-resident cloning database is initialized.
• Each known clone is registered in the memory-resident database.
The memory-resident cloning database (CDB) is a registry of clones and the databases that
can be accessed by application programs. If a clone is not registered in the CDB, then it is
considered to be “offline”. Offline clones cannot be accessed by application programs.
An executing hdbserver program creates and initializes the CDB. In this case, the phrase
“initializing the CDB” means:

• Each known clone is located, opened, and validated.
• Valid clones are registered in the CDB.
• Once registered, clones can be located and accessed using the clone identity
(application name and family name).
• The locking resource file is created and initialized.
A known clone is one that is identified by the Hdb clone directory. The “Hdb clone directory”
is the list of known clones for the group. It is maintained by the cloner utility. For more
information about clones, refer to chapter 3 hdbcloner.
Online clones are placed offline only after all application programs or utility commands
have released access to the clone. This normally means that the application program or
utility commands must be terminated prior to marking the clone offline.
A clone is placed offline by the shutdown of the hdbserver, which marks all clones as offline,
or by executing the hdbcloner offline_clone command, which marks only the specified
clones offline.
A clone is placed online either by restarting hdbserver (if it had been shut down), or by
executing the hdbcloner online_clone command.
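For example, using the hdbcloner commands referenced above (the application and family
names are illustrative):
%hdbcloner -c offline_clone -a scada -f ems
%hdbcloner -c online_clone -a scada -f ems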
Although clones must be online before application programs can access databases that
reside in the clone, a clone must be offline before it can be moved or copied by the file copy
utilities.
Attempts to copy or move a clone file while it is online will result in problems. Either the
operating system will prevent the copy or move action, or the file will be copied or moved
and lose data in the process. Data loss and corruption are possible because the data is
resident in memory and on disk if a clone is online. If the clone is moved without first
marking the clone offline, the memory-resident copy of the data may be lost, possibly
resulting in a corrupt copy of the on-disk database.

12.2.2 hdbserver Startup Scenarios


This section describes a number of common startup scenarios and recommended practices
for the hdbserver program.
The hdbserver program is typically executed in the background as a daemon process.
However, in some circumstances — for example, in analyzing system problems — it is
convenient to execute the hdbserver in the foreground.
The hdbserver program accepts all instructions from the command-line parameters (see
the hdbserver pages in chapter 13 Hdb Utilities Reference of this document). The command
parameters control the following attributes of the hdbserver program:
• maximum clones: The maximum number of clones sized for the memory-resident
cloning database. The minimum value is 50 and the default value is 150. The minimum
is imposed because of the sizing of other associated structures (e.g., databases,
partitions, applications); values other than the default can be entered using the
-mxclones parameter.
Two resources are sized as a result of the maximum clone count: the size of the
memory-resident cloning database and the size of the clone-locking file. The approximate
sizes of these resources, as a function of the maximum clone count, are shown in the
table in the hdbserver pages of chapter 13 Hdb Utilities Reference of this document.
• sleep interval: The number of milliseconds that the hdbserver sleeps between wakeups
to query its work queue. A small number means that the hdbserver will be very
responsive to command requests from the cloner utility. A large number means that the
hdbserver will be slower to respond, but quieter on a running system where cloning
activity is infrequent.
For a development system where cloning activity is frequent, a value near
1 second (1000 milliseconds) is recommended. For a run-time system, a sleep interval
of 5 to 10 seconds (5000 to 10000 milliseconds) is reasonable. The default sleep interval
is 1321 milliseconds, based on the requirements of a development system. Unless you
have a good reason, select the default for run-time systems.
• verbosity: The level of detail included in the output log produced by the hdbserver
program. The level of detail is set by the -verbose command parameter as a number
from 0 to 10. A level of 0 means that no output logging is performed, except for errors
sent to the error log. A level of 1 specifies the minimal level of detail for clone reports. A
level of 5 shows detail on clones, databases, and partitions. A level greater than 5 is
typically used for debug and analysis. The default level is 0 so no logging is performed.
To track cloning access problems, a verbose level of at least 1 is recommended.
• output: The output log file that receives all logged reports of cloner activity. The output
file defaults to standard output, but this can be redirected using standard shell
redirection, or it can be named using the -output command parameter.

12.2.2.1 Normal Hdbserver Startup Command


The following command is an example of how the hdbserver program is normally started
on a Linux system:
%hdbserver -verbose 1 -mxclones 200 1>output.log 2>error.log &
This is the recommended method of starting up the hdbserver program. The level of detail
for reports is set to 1, the maximum clone count is set to 200, the standard output is
redirected to the file output.log, and the standard error is redirected to error.log. Notice
that the command ends with an ampersand (&); this instructs the shell to execute this
process in the background.
The next example shows how to start the server in the foreground (this example assumes
the Windows platform):
>hdbserver -verbose 5 -sleep 5000

When the server executes in the foreground, the standard output and error logs are sent to
the terminal screen for display. In this example, the sleep interval is set to 5 seconds, and
the level of detail for reports is set to 5.

12.2.3 hdbserver Shutdown


There is no command or Hdb utility program used to shut down the hdbserver. The
recommended method for shutdown is to use standard process/task termination methods
on the supported platform, or to use the Habitat tview utility program.

12.2.3.1 Terminate Versus Kill


Each operating system supports “terminating” a process, or “killing” a process.
When a process is terminated, it is shut down in a normal fashion. Terminating a process
typically allows the process exit handlers and cleanup code to execute.
However, when a process is killed, it is immediately removed from the system. Exit handlers
and cleanup code are not allowed to execute.
Only as a last resort should the hdbserver program be killed, because data is lost when a
process is killed. In the case of hdbserver, that data is often the output log files, which may
be lost or left incomplete. An hdbserver program should only be killed when it does not
respond to a normal terminate request.

12.2.3.2 hdbserver Shutdown Using the Kill Command


In Linux, the easiest way to terminate the hdbserver program is to use the kill command.
Despite its name, the Linux kill command does not “kill” a process unless it is instructed to.
To use the kill command to terminate a process, execute the command as shown below:
%kill [pid]
where the PID is the process ID number, which can be discovered in a number of ways (e.g.,
the ps command).
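For example, to locate the hdbserver process ID with the ps command and then request a
normal termination (the PID shown is illustrative):
%ps -ef | grep hdbserver
%kill 12345
A forced kill (kill -9) should be used only if the process does not respond to the terminate
request, for the reasons given in section 12.2.3.1 Terminate Versus Kill.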

12.2.3.3 hdbserver Shutdown Using Habitat tview


On all systems, tview can be used to shut down the hdbserver program. Use the tview stop
command for normal termination.

12.3 Clone Administration


On a non-development Habitat/Hdb system, clone administration is limited to just a few
tasks described in the following sections.

12.3.1 Converting a Clone to an Archive File
Sometimes it is necessary or convenient to convert a clone to an archive file; this is an
efficient means of creating an exact copy of a clone. Although you can always use the
hdbcopydata utility to create an archive from a clone, that process does not create an
exact copy, since various timestamp fields are updated in the copy process. In addition,
the hdbcopydata method may be time-consuming for large clones.
The following procedure shows how to convert a clone to an archive file. In this example,
the clone NETMODEL.DTS is used:
1. Terminate all application processes that may be accessing the clone. This is done using
the Habitat tview command.
2. Mark the clone offline using the hdbcloner offline_clone command:
%hdbcloner -c offline_clone -a netmodel -f dts
3. Copy the clone file to a specified directory and filename:
%cp $HABITAT_CLONES/clone_netmodel_dts.car ./myclone.arc
Once the clone is copied in this fashion, it can be used just like any other Hdb archive file.
After the copy operation is completed, the clone can be placed back online using the
hdbcloner online_clone command.
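For example, the NETMODEL.DTS clone used above is placed back online as follows:
%hdbcloner -c online_clone -a netmodel -f dts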

12.3.2 Moving a Clone from One HABITAT Group to Another


Clones can be moved from one HABITAT group to another, as long as the clone files have
the same release version. The release version of the clone file is not the same thing as the
release version of the Habitat system — but the two are related. A clone file’s release
version does not typically change for each Habitat release. However, each Habitat release
only supports the latest release version for clone files. Even with older release versions,
archive files are supported for an indefinite time.
To move a clone from one HABITAT group to another:
1. Mark the clone offline.
%hdbcloner -c offline_clone -a netmodel -f dts
2. Copy the clone file to be moved.
This procedure is similar to converting a clone to an archive file, but you do not change
the name of the clone file. The clone file must retain its original name.
3. Remove the clone.
%hdbcloner -c remove_clone -a netmodel -f dts -offline
At this point, after you have copied the clone file from the source Habitat, you can
remove the clone if you no longer need the clone in that HABITAT group. This is
accomplished using the cloner remove_clone command, but you must also specify the
-offline parameter because you have already taken the clone offline.

4. Move the clone to the correct location.
%mv clone_netmodel_dts.car $HABITAT_CLONES
Now, you must move the clone file to the correct location in the destination HABITAT
group. Typically, this should be the location determined by the HABITAT_CLONES
environment variable, but any suitable location is acceptable.
Of course, the exact command used depends on where the destination Habitat is
located (i.e., same computer, separate computer). If you copy the clone file over the
network to another computer, you must use binary copy semantics (see the example
following this procedure).
5. Rename the clone.
%hdbcloner -c rename_clone -car clone_netmodel_dts.car -f ems
Once the clone is moved to the destination Habitat, it must be renamed. The
rename_clone command is executed in the destination Habitat to rename the file, reset
the file’s clone unique ID number, and possibly change the clone’s family name.

Note: Renaming is required even if you do not change the clone’s family name.

The command shows how the cloner’s rename_clone command renames this clone for
the EMS family; the -car parameter names the file, and a full path must be specified. In
this example, assume that the current directory is the HABITAT_CLONES location. Also, the
-f parameter is included to specify a new family name; specifying a new family name
causes the clone’s filename to be changed automatically.
6. Add the clone file to the group’s clone directory.
%hdbdirectory -add clone_netmodel_ems.car
After renaming the file, it needs to be made known to the HABITAT group. This is
accomplished using the hdbdirectory command, which adds a clone file to the group’s
clone directory.
Remember, the clone file is now in the EMS family, so you need to use the file name as
shown above. Again, a full path filename is required, so this example assumes that you are
currently located in the HABITAT_CLONES directory.
7. Place the clone online.
%hdbcloner -c online_clone -a netmodel -f ems
Finally, the clone is now placed online using the hdbcloner online_clone command. At
this point, the clone is valid and ready to be used by applications in the new HABITAT
group.
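If the destination HABITAT group resides on a different computer, a binary-safe transfer
tool such as scp can be used for the copy in step 4 (the user, host, and destination path
shown here are illustrative):
%scp clone_netmodel_dts.car user@desthost:/var/esca/habitat90/cdbroot/clones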

12.3.3 Preparing a Clone for an SPR Submission


If a bug or a software problem is discovered regarding a database operation or a clone, it is
valuable to have a copy of the clone when the problem is analyzed. Therefore, when a
Software Problem Report (SPR) is submitted, it is important to include the clone file along
with the SPR. This is accomplished in the following manner:

1. Convert the clone to an archive file.
2. Zip or compress the file.
3. Send the archive to the Customer Support representative for analysis.
First, the clone is converted to an archive file just as described in the procedure outlined in
section 12.3.1 Converting a Clone to an Archive File. You can use any name you like for the
archive file. You do not need to identify the clone itself, because the internal header
information describes the clone’s identity and the group in which it was created.
However, before sending the clone file along with the SPR, it should be compressed. You
can use regular Linux compression or third-party compression tools such as PKZip or
WinZip. If you use standard file-naming conventions, the compression algorithm can be
inferred from the file extension: Linux compression appends the “.Z” suffix when the file is
compressed, while PKZip and WinZip create a compressed container file with the “.zip”
extension.
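For example, an archive created from a clone could be compressed as follows before
submission (the archive file name is illustrative):
%compress spr_netmodel.arc
This produces the file spr_netmodel.arc.Z, which can then be attached to the SPR.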

12.3.4 Replication Set Administration


The system administrator may be responsible for managing the replication set. The
“replication set” is the set of clones that are marked for replication in a dual-computer
configuration. Only those clones marked for replication participate in replication.
A clone that is to be replicated must also have replicated partitions. A replicated partition is
marked by the database designer by setting the REPLICATE flag on the //FLDPAR statement
of the DBDEF source file. Each partition that is to be replicated must be marked in this
fashion.
A database’s PSEUDO1 partition and the OIDS partition (if one exists) are also automatically
marked for replication if any other partition in the database is marked for replication.
A clone is marked for replication using the cloner utility. Replication can be set when a clone
is created, or by replacing the clone and specifying the -replicate parameter. Replication
status is removed from a clone by replacing the clone and specifying the -noreplicate
parameter.

12.3.4.1 Creating a New Clone Marked for Replication


The following example shows how to set replication for a clone being created using the
Scada application in the EMS family:
%hdbcloner -c create_clone -a scada -f ems -replicate

12.3.4.2 Marking Existing Clones for Replication


The next example shows the cloner command used to mark all clones in the EMS family
with replication status:
%hdbcloner -c create_clone -a '*' -f ems -replace -replicate

The above command uses the -replace parameter to replace clones. The application name
is specified as a wildcard, which then replaces all applications of the given family. This
wildcard character is enclosed in quotes as required for a Linux platform so that shell file
globbing is inhibited. The -replicate flag indicates that all clones in the family are to be
marked for replication. If a clone is already marked, it is replaced with the replicate
attribute.
Because the clone is being replaced, all applications that may be accessing the clones must
be terminated before the clones can be replaced.

12.3.4.3 Removing Clones from the Replication Set


The following command shows how the cloner is used to remove all of the NETMODEL
clones from the replication set:
%hdbcloner -c create_clone -a netmodel -f '*' -noreplicate -replace
Because the clones are being replaced, all applications that are accessing the clones must
be terminated before the clones can be replaced.

12.4 Clone Directory Administration


Clone directory administration includes the following tasks:
• Create clone directory file during HABITAT group initialization
• Update clone directory when importing from another HABITAT group
• Update or correct lost or damaged clone directory file
• Create clone directory file outside a HABITAT group
Besides these tasks, some information about the clone directory file itself is discussed
below.

12.4.1 Create Clone Directory File During HABITAT Group Initialization


Part of the process of setting up a newly installed HABITAT group is to create the clone
directory file. The clone directory file is created using the hdbdirectory utility. The clone
directory file must be created before any clones can be created.
The steps below show how to set up a new HABITAT group and use hdbdirectory to create
the clone directory file:
1. Create all directories:
– Create the cloning database root directory. Define the environment variable
HABITAT_CDBROOT for this directory.
– Create the subdirectories named cloning_database and dictionary under the
directory HABITAT_CDBROOT.

– Create the directory where the clones will be stored. Define an environment variable
HABITAT_CLONES for this directory.
2. Make sure the environment variables HABITAT_CDBROOT, HABITAT_GROUP, and
HABITAT_CLONES are defined.
3. Create the cloning database core data file.
Use the hdbcloner utility with the create_corefile command (for more information, see
section 3.1 Creating Clones). This command creates and initializes the cloner’s core data
file.
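For example:
%hdbcloner -c create_corefile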
4. Create the clone directory file. Use the hdbdirectory command:
hdbdirectory -create
The clone directory file created will be put in the HABITAT_CDBROOT/cloning_database
directory.
5. Start the hdbserver (for more information, see section 12.2 Startup and Shutdown), so
other Hdb utilities can access the clone directory.
Now, when a clone is created with the hdbcloner command, the clone directory will be
updated.

12.4.2 Update Clone Directory When Importing from Another HABITAT Group


When importing a clone file from another HABITAT group, the clone directory file must be
updated with the filename of the clone file being imported. This task can be done while the
hdbserver is running. (For more information about importing clones, see section 3.9
Importing Clones from Another HABITAT Group.)
To import clone files, perform the following steps (the steps are shown for the Linux
platform):
1. Copy the clone file from the other HABITAT group to the clone file directory of the new
group.
%cp clone_scada_ems.car \
$HABITAT_CLONES/clone_scada_ems.car
Here, the new HABITAT group’s clone file directory is at $HABITAT_CLONES.
2. Rename the clone.
This ensures that the imported clone file will be consistent with the new HABITAT group.
%hdbcloner -c rename_clone \
-car $HABITAT_CLONES/clone_scada_ems.car
3. Add the renamed clone file into the clone directory.
%hdbdirectory \
-add $HABITAT_CLONES/clone_scada_ems.car
4. Place the new clone online.
%hdbcloner -c online_clone -a scada -f ems

12.4.3 Update or Correct Lost or Damaged Clone Directory File


A corrupted or lost clone directory file of a HABITAT group can be replaced if you know
where all the existing clone files are currently located for that group.
The steps below show how this is done:
1. Offline all clones and shut down the HABITAT group.
The easiest way to do this is to terminate the running hdbserver process.
2. Create an empty clone directory file.
hdbdirectory -create
3. Add clone files into the clone directory file.
The example below adds all the clone files (*.car) in the clone file directory
(/var/habitat/90/cdbroot/clones) into the clone directory file:
hdbdirectory \
-add /var/habitat/90/cdbroot/clones/*.car

Important: The full file path must be specified for the location of the clone files.

4. Online the clones by starting hdbserver.

12.4.4 Create Clone Directory File Outside a HABITAT Group


Sometimes it is more convenient to work on a temporary clone directory file instead of
modifying an existing one for a HABITAT group.
Using a temporary file is appropriate for these situations:
• If you cannot shut down the HABITAT group at the moment.
• If the target HABITAT group does not exist yet.
When it becomes safe to update the directory file, the temporary clone directory file can be
copied to its designated location.
To create a temporary clone directory file, use the hdbdirectory -cdr option. The example
below shows how:
1. Create a temporary clone directory file that is empty (assume that the temporary file is
named “temp_directory.cdr”).

hdbdirectory -cdr temp_directory.cdr -create
2. Add clone files into the temporary clone directory file.
The -cdr option designates the temporary file as the file to store the directory
information. This step can be repeated as many times as necessary for every clone file
directory in the HABITAT group.
hdbdirectory -cdr temp_directory.cdr \
-add /var/clones/*.car
hdbdirectory -cdr temp_directory.cdr \
-add /var/study_clones/*.car
3. When the time is right, copy the temporary clone directory file to its designated
location under its official name.
The official location to store the clone directory file is at
HABITAT_CDBROOT/cloning_database. The official name of the clone directory file is
“clone_directory.cdr”.
%cp temp_directory.cdr \
${HABITAT_CDBROOT}/cloning_database/clone_directory.cdr

12.4.5 Clone Directory File


The clone directory file is an ASCII file that contains a list of all the known clone files in a
HABITAT group. Each clone file entry is specified with a full directory path name.
The clone directory is updated by the hdbcloner utility.
• A clone file entry is added to the directory when a clone is created (hdbcloner -c
create_clone).
• A clone file entry is removed when a clone is removed (hdbcloner -c remove_clone).
• A clone file entry is changed when a clone is replaced (hdbcloner -c create_clone
-replace).
The hdbserver utility reads the clone directory file at startup. Thereafter, the clone directory
file is not accessed.
The clone directory file is stored in the cloning database directory under the name (Linux
convention is used here):
${HABITAT_CDBROOT}/cloning_database/clone_directory.cdr

12.4.5.1 Clone Directory Format


Each line in the clone directory file corresponds to a clone file for the HABITAT group. The
clone file is specified with the full path, and case sensitivity applies depending on the
operating system.

The following example is a clone directory file in Windows:
g:\cdbroot\clones_root\clone_ashare_zzz.car
g:\cdbroot\clones_root\clone_hdbtest_xxx.car
g:\cdbroot\clones_root\clone_keytest_xxx.car
g:\cdbroot\clones_root\clone_testptr_xxx.car
g:\cdbroot\clones_root\clone_trend_xxx.car
g:\cdbroot\clones_root\clone_trend_yyy.car
e:\study\peh\clone_scada_dts.car
e:\study\peh\clone_pwrflow_dts.car

12.4.5.2 Clone File Scanning


The hdbdirectory utility scans the specified clone files to ensure that they are valid before
adding them into the clone directory. The validity of a clone file is determined solely from
the name.
A valid clone file adheres to the following naming conventions:
clone_appname_famname.car
mclone_clsname_famname.car
The first example shows a conventional clone, and the second shows a multi-application
clone. Both use the “.car” (Clone ARchive) extension. Files that fail to meet the naming
conventions are skipped during scanning by the hdbdirectory utility.

12.4.5.3 Concurrent Access to the Clone Directory File


A clone directory file may be accessed concurrently by multiple programs, e.g., hdbcloner,
hdbserver, and hdbdirectory. Locking is used to ensure that the clone directory file is
accessed without conflict.
The hdbserver program obtains a shared lock prior to reading the clone directory file.
The hdbcloner program obtains an exclusive lock whenever it executes a create_clone or
remove_clone operation.
The hdbdirectory program obtains an exclusive lock whenever it reads or writes to the
clone directory. Locking does not apply if the clone directory is a temporary file. Since
adding clone files to the clone directory may involve several invocations of hdbdirectory,
potential conflicts could arise between invocations. Therefore, it is recommended that you
take the HABITAT group offline first before working with the clone directory.

12.5 Hdb Files and On-Disk Structure


This section describes the types of Hdb files and the Hdb on-disk structure (directory layout
and organization).
Source schema files and other ASCII text files used in Hdb are not described here.

12.5.1 File Types
Hdb recognizes the following file types:
• Clone files
• Archive files
• Savecase files
• DNA files
• Core data files
• Clone directory files
• Clone locking files

12.5.1.1 Clone Files


A “clone file” is the backing store file used for the Hdb memory-resident database used by
the Hdb applications. Clone files contain all databases defined for the clone. A clone file is
created when the clone is created, and it is deleted when the clone is removed.
Clone files can reside in any directory location that can be memory-mapped by Hdb. This
excludes remote-accessed network connected file systems (e.g., NFS). By default, clone files
are located in the directory location specified by the HABITAT_CLONES environment
variable.
A clone file employs the following naming conventions:
clone_appname_famname.car
mclone_clsname_famname.car
The clone’s application and family name make up the unique filename. The type of clone is
indicated by the first token in the name. The token clone specifies a single application clone,
and the token mclone specifies a multi-application clone (multi-application clones are
disabled in the Platform system because they are not used).
The file extension “.car” indicates that this is a clone archive file. A “clone archive file” is, in
fact, an archive file that contains special segmentation alignment and clone identity.
However, in all other respects, a clone file can be considered an archive file.

12.5.1.2 Archive Files


An “archive file” is an offline version of a clone file. Archive files are created by the
hdbcopydata command when the -df parameter is specified. Archive files are accessed by
the hdbcopydata command when the -sf parameter is specified.
An archive file can contain the complete image of an entire clone, a database, a partition,
or a field.

Archive files can reside in any location chosen by the user at the time that hdbcopydata is
executed. Archive files have no naming convention. An archive file can be named according
to the needs and desires of the user. However, a file extension of “.arc” is suggested to
make it easier to track and manage archive files.
An archive file can be used in many of the same ways that a clone can be used. Many of the
Hdb utilities can access a database within an archive file directly: hdbrio, hdbexport,
hdbdump, and hdbformat. In addition, a database can be accessed by an application
program by opening the archive file through the HdbOpenArchive API and then opening the
database itself with HdbOpen.
An archive file header section is used to describe the contents of an archive file. This header
section is available using the -header action parameter with the hdbdump utility while
specifying a source archive file object. In addition, a segment layout dump can be produced
by using the -dump parameter of hdbdump.
When a clone is marked offline, it can be considered the same as an archive file with
respect to all Hdb utilities and the API.

12.5.1.3 Savecase Files


A “savecase file” is a form of an archive file. There are two kinds of savecase files: those
created on an Hdb system, and those created on an older HABITAT version 4.x system. First,
we will discuss the Hdb savecase file.
An “Hdb savecase file” is a special type of archive file. The Hdb savecase file layout, content,
and organization are identical to all other archive files.
The differences between an archive file and a savecase file are:
• A savecase file is created according to the instructions specified in a case type
associated with the application. These instructions specify which databases and
(optionally) which partitions are to be included in the savecase.
• A savecase file uses a specific file naming convention, and savecase files are typically
located via the HABITAT_CASES environment variable.
For more information about defining the savecase type for an application, see the section
about application schema definition in the Hdb Programmer’s Guide.
Hdb savecase files are typically accessed via the Casemgr Application, the Casemgr API, or
the hdbcopydata command via the -case parameter.
A HABITAT version 4.x savecase file is a savecase created on a version 4.0 (or later) release
of Habitat or Platform (EMP 1.5 or later). A Habitat savecase file is never created by Hdb, but
it can be accessed by Hdb using the hdbcopydata command. This ability to access a
savecase file for older versions is provided for migration purposes only (i.e., migrating from
version 4 to version 5 of Habitat).
When a HABITAT version 4.x savecase file is accessed by hdbcopydata, it must convert data
from the VAX formats (VAX floating-point format to IEEE floating-point format) and fabricate
various internal structures to aid in the conversion. Because of the inefficiency of this
operation and the time it takes, data should be moved to Hdb-created archive files or
savecase files as soon as practical; data should not be retained in HABITAT version 4.x
savecase files.
Hdb support for HABITAT version 4.x savecase files has a number of limitations and
restrictions, due to the differences between the Hdb data model and the older versions of
the data model. These limitations and restrictions are:
• Multiple FREE record types, which are legal in HABITAT version 4.x and older versions,
are not legal in Hdb. Hdb supports only a single FREE record type.
• HABITAT version 4.x record-oriented databases are not supported by Hdb.
• HABITAT version 4.x dynamic partitions are not supported by Hdb.
• HABITAT version 4.x field data type COMPLEX (J*4, J*8) is not supported by Hdb.

12.5.1.4 DNA Files


The name “DNA” was chosen because of the notion that Habitat/Hdb employs a clone to
define a database instance. From a biological perspective, clones are created by copying
the DNA pattern. Therefore, in Habitat/Hdb, a clone is created by copying DNA patterns.
However, in this case, the DNA pattern is the binary format of the schema files residing in
the dictionary.
A “DNA file” is a binary file. It is created from an ASCII schema source file loaded into the
dictionary by the cloner. There are three types of DNA files:
• Application DNA
• Clone DNA
• Database DNA
Each file format is different, but they share the same overall structure and access methods.
Each of the three DNA files is identified by a unique file name based upon the schema name
information. As mentioned already, the application schema and the clone schema share
the same name space so, in actual fact, there are only three distinct name conventions
used by DNA files.
The naming conventions are:

Table 12. DNA File Format


Schema File Name Format
Application schema cls_appname.dna
Clone schema cls_clonename.dna
Database schema dbd_dbname_versiontitle.dna
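
For example, following these conventions, the SCADA application schema would be stored
as cls_scada.dna, and the SCADAMOM database schema with the version title PROJECT
would be stored as dbd_scadamom_project.dna.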

12.5.1.5 Core Data Files
The “core data file” is created by executing the hdbcloner create_corefile command. This
file retains the cloning database’s unique group information. Thus, the content of this file is
group-dependent. So far, the only piece of information defined in this file is the highest
clone unique identifier number, or the clone unique ID described in a later section.
The core data file is a binary file with a sanity block (similar to the DNA files) and a
structured linked list of information. It is accessed and used by the cloner only.
The file name for the core data file is:
cloner_core.sc
The core data file serves one other purpose: It is the resource file for the cloning database’s
concurrent access synchronization lock. An exclusive lock is obtained any time the cloner
utility modifies a part of the cloning database, which includes the cloning_database
directory files and the memory-resident cloning database (CDB). Whenever the server
accesses the CDB to update it with new clone information or to remove obsolete clone
information, it also takes out an exclusive lock. Thus, the cloner and the server can never
modify cloning database files simultaneously.
Application programs that use the Hdb API also use the cloner_core.sc lock resource.
Whenever an application opens a clone-resident database, the CDB is used to locate the
clone and obtain its mapping attributes. When the application accesses the CDB, a shared
lock is obtained. If an application cannot be granted the shared lock immediately, it waits
until the exclusive lock is released. Locks are held for only a few milliseconds at a time.
The cloner_core.sc file can be examined (dumped in printable format) using the hdbdump
utility. The information includes the highest clone unique ID number, but it does not show
any control lock state information.

12.5.1.6 Clone Directory Files


A “clone directory file” is an ASCII file. Each line of the file contains a full absolute path file
specification of a clone file. There is one clone file entry for each clone known to the group.
Names in the file are case-sensitive, and the clone file name appears exactly as it is used in
the system.
Normally, file names appear in the clone directory file in alphabetical order by clone
identification, using the application name and the family name. However, this is not a
requirement; it is simply an effect of the way the clone directory is maintained by the Hdb
software.

12.5.1.7 Clone Locking Files


A “clone locking file” is the lock resource file used to implement the clone database and
partition locking facility. This file is created when the hdbserver program is started. It is
always created to be a mirror image of the CDB, although it is not used to map the CDB.

The clone locking file has the following name:
cdb_locking.lck
The locking file does not contain data. Although it is the same size as the memory-resident
cloning database, it is not used as a backing store for the CDB.

12.5.2 On-Disk Directory Structure


This section describes the on-disk directory structure required by Hdb.

12.5.2.1 Hdb Cloning Database (CDB) Root Directory


All files and directory locations that make up the on-disk cloning database are rooted under
a single directory location named by the HABITAT_CDBROOT environment variable. This
single directory location can reside on any local disk device or mounted file system.
However, the files that reside on the device must be able to be mapped using memory-
mapping system services (this rules out NFS-mounted devices).
The CDB root directory can have any name. However, it is recommended that you name the
directory in some manner that reflects the HABITAT group name.
The following examples show typical CDB root directory names used for each of the
supported platforms.
Linux
/var/esca/habitat90/cdbroot
Windows
g:\esca\habitat90\cdbroot

12.5.2.2 Hdb Cloning Database


The cloning database is the set of files defined under the directory called
“cloning_database”, rooted by the root directory defined by HABITAT_CDBROOT. The
cloning database directory location contains the clone directory file, the core data file, and
the clone locking file.
The cloning database directory must be named “cloning_database”, as shown in the
examples below.
The following examples show the cloning database definitions.
Linux
/var/esca/habitat90/cdbroot/cloning_database
Windows
g:\esca\habitat90\cdbroot\cloning_database

12.5.2.3 Hdb Schema Dictionary
The Hdb schema dictionary location is the directory named “dictionary” under the CDB
root directory defined by the HABITAT_CDBROOT environment variable. All DNA files are
located in the schema dictionary location.
The following examples show the definitions of the dictionary.
Linux
/var/esca/habitat90/cdbroot/dictionary
Windows
g:\esca\habitat90\cdbroot\dictionary

12.5.2.4 Hdb Clones Default Directory


The Hdb clones default directory can be any location that supports file mapping (this
excludes NFS-mounted file systems). The location chosen for the default is specified using
the HABITAT_CLONES environment variable.

12.5.2.5 Hdb Savecase Directory


The Hdb savecase default directory can be any location that supports file mapping (this
excludes NFS-mounted file systems). The location chosen for the default is specified using
the HABITAT_CASES environment variable.

12.5.2.6 Setup of On-Disk Structure


The following procedure is used to set up the Hdb on-disk file structure and prepare it for
use within a given HABITAT group. The examples assume a Linux platform.
1. Create a HABITAT_GROUP environment variable for a group named “90”.
%setenv HABITAT_GROUP 90
2. Create an Hdb root directory location under the habitat90 directory.
%mkdir /var/esca/habitat90/cdbroot
3. Create a HABITAT_CDBROOT environment variable.
%setenv HABITAT_CDBROOT /var/esca/habitat90/cdbroot
4. Create an Hdb cloning database location.
%mkdir $HABITAT_CDBROOT/cloning_database
5. Create an Hdb dictionary location.
%mkdir $HABITAT_CDBROOT/dictionary
6. Create a core data file.

%hdbcloner -c create_corefile
7. Create an empty clone directory file.
%hdbdirectory -create
8. Create a default clones location.
%mkdir $HABITAT_CDBROOT/clones
It is convenient, but not a requirement, for the clones directory to be placed under the
CDB root.
9. Create a HABITAT_CLONES environment variable.
%setenv HABITAT_CLONES $HABITAT_CDBROOT/clones
10. Create a default savecase location.
%mkdir $HABITAT_CDBROOT/savecases
It is convenient, but not a requirement, for the savecases directory to be placed under
the CDB root.
11. Create a HABITAT_CASES environment variable.
%setenv HABITAT_CASES $HABITAT_CDBROOT/savecases
Hdb is now ready to be used. Schema can be loaded into the dictionary and clones can be
created.
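The setenv commands shown above assume a csh-style shell. For sh/bash-style shells, the
equivalent variable definitions would be as follows (shown for illustration only):
export HABITAT_GROUP=90
export HABITAT_CDBROOT=/var/esca/habitat90/cdbroot
export HABITAT_CLONES=$HABITAT_CDBROOT/clones
export HABITAT_CASES=$HABITAT_CDBROOT/savecases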

13. Hdb Utilities Reference
The information in this chapter is intended for use by experienced application developers
and system administrators.

Note: For definitions of Hdb database terms used throughout this document, refer to the
Habitat Glossary of Terms.

In this chapter, Hdb utilities are presented in a reference format. The following utilities are
described:
• hdbcloner
• hdbcompare
• hdbcopydata
• hdbdirectory
• hdbdocument
• hdbdump
• hdbexport
• hdbformat
• hdbimport
• hdbrio
• hdbserver
The information is designed for easy look-up. Each section uses the following format:
• Utility name
• Abstract
• Syntax
• Parameter description

hdbcloner -c create_clone
This command is used to create a clone. If the -replace parameter is used with this command, then
the command replaces an existing clone.

Syntax
hdbcloner -c create_clone -a <appname> -f <famname> [parameters]

Parameters
-a <appname> [required]
This parameter identifies the name of an application clone’s schema description file in the
dictionary. Wildcards are supported if the -replace parameter is used with this parameter.
Application names are not case-sensitive.

-f <famname> [required]
This parameter specifies the family name of the clone. Assigned family names must be unique to a
given application. Wildcards (*) are allowed only if used with the -replace parameter.
System naming conventions and policies may impose limitations on Hdb naming. Thus, check with
your system administrator before choosing names.
Family naming rules must adhere to the following conventions:
• The name must begin with an alpha-character (A–Z).
• The name can only contain alpha-numeric characters (A–Z, 0–9).
• Special characters are not allowed in the name.
• A maximum of 12 characters is allowed; however, some Habitat and Platform applications
impose an 8-character limitation.
• Family names are not case-sensitive.

-replace [optional]
This parameter is used when a clone is to be replaced. This parameter must be used with the -a and
-f parameters to identify the application and family name of the clone that is to be replaced.
Wildcards (*) are allowed. For example, the following command replaces all clones in the EMS family:
hdbcloner -c create_clone -a '*' -f ems -replace
The following example modifies a clone attribute by turning off replication of the clones in the DTS
family:
hdbcloner -c create_clone -a '*' -f dts -replace -noreplicate
Schema comes from the dictionary, not the replaced clone. Clone content is updated with new
schema definitions if the schema has changed since the clone was originally created.

-ignoretruncate [optional]
When a clone is being replaced with the parameter specified, record truncation is ignored when
copying the records from the original clone to the newly created clone during the replace operation.
Record truncation can occur if one or more record types in the original clone have LVs that are larger
than the MX of the same record types in the new schema.
By default, if this option is not specified, record truncation is considered an error and the replace
operation fails.

-d <dirpath> [optional]
This parameter specifies an alternative path name for the clone files that are being created. The path
name (dirpath) must be a full absolute path specification. The location defined by the
HABITAT_CLONES environment variable is the default.
If the clone is being replaced (-replace), the -d parameter can be used to physically move the file from
its current location to the targeted location.
To move a clone from an alternative location to the default HABITAT_CLONES location, the location
path must be explicit, or use the value of $HABITAT_CLONES as the directory path.
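For example, the following command creates a clone in an alternative directory instead of the default
HABITAT_CLONES location (the directory path is illustrative):
hdbcloner -c create_clone -a scada -f ems -d /var/clones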

-align <mode> [optional]


This parameter specifies the clone segment alignment boundary that is used to create the clone file.
Each clone segment can be individually accessed if it is aligned on a suitable boundary that is
acceptable to the operating system platform.
The mode specification specifies the alignment boundary using selected keywords, as shown in the
following table:

Table 13. Alignment Boundary Interpretation


Mode Alignment Boundary Interpretation
Page Specifies the natural mapping page boundary required for the operating platform.
Compressed Specifies a compressed, 4-byte alignment boundary. Compressed clone files are
mapped as a whole. Segments cannot be mapped separately. However, there is no
discernible difference to the application program using the Hdb API when compressed
alignment is used.
Quad Specifies an 8-byte alignment boundary (same as the default for archive/savecase files).
Individual segments cannot be mapped separately (exactly like the compressed
alignment described above).
Ntpage Specifies a 64K-byte (65536) boundary, which is the natural mapping boundary required
by Windows platforms.

The default is page, which means that the segment alignment boundary is selected for the platform
on which the cloner is executing.
If the alignment boundary is not a natural segment boundary for the platform, then the clone is
mapped as a whole rather than mapping individual database partitions as they are used.
Normally, clones should be created with the default setting. However, it may be more efficient for
small clones (less than 100K bytes) to be created using quad alignment.
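For example, a small clone might be created with quad alignment as follows (the application and
family names are illustrative):
hdbcloner -c create_clone -a sampler -f xxx -align quad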

-replicate | -noreplicate [optional]


The -replicate parameter specifies that the clone is to be a part of the dual-computer replication set.
To replicate a clone, the -replicate parameter must be set. The -noreplicate parameter specifies that
the clone is to be removed from the replication set. By default, clones are not replicated.
To replicate a database, it is necessary that individual partitions be marked for replication. This is
done by specifying the REPLICATE flag on the //FLDPAR statement in the DBDEF source file. It is
necessary to mark partitions, but replication attributes on the partitions are ignored unless the clone
is also marked for replication with the -replicate parameter.

To remove a clone from the replication set, you need to execute the create_clone command along
with the -replace parameter and the -noreplicate parameter.

-offline [optional]
This parameter is used together with the -replace parameter to instruct the cloner to locate an
existing clone using the file system rather than the online memory-resident cloning database. The
cloner uses the file system by default if the hdbserver is not running.
If the hdbserver is running, the cloner uses the memory-resident cloning database to locate the clone.
If the clone is offline, or if a previous clone operation failed to complete normally, the clone may not
be defined in the online database. Therefore, in this situation, the -offline parameter is required.

-nocopy [optional]
This parameter is used with the -replace parameter to instruct the cloner that data is not to be copied
from the old clone to the new clone. The default is to copy data from the existing clone to the new
clone.

-noreplicate_oids [optional]
Using the -noreplicate_oids parameter will disable the replication of the OID partition of a database in
a replication environment. By default, the OID partition of a database will be replicated if replication is
enabled on the clone by using the -replicate parameter.

-version <title> [optional]


This parameter specifies the database version title to augment the application schema. This
parameter is only necessary if the application schema does not specify the required database version
title. By default, the version title is taken from the application definition. If it is not defined in the
application definition, it is taken from the value of the environment variable
HABITAT_DEFAULT_VERSION.

hdbcloner -c create_corefile
This is a system administrator command. This command creates and initializes the core data file. This
command can only be used offline and by the system administrator. If the core data file already
exists, an error message is displayed and no action is taken by the system. If you need to replace the
core data file, you must first delete the existing core data file.

Syntax
hdbcloner -c create_corefile

Parameters
None.

hdbcloner -c load_schema
This command loads application, clone, or database schema into the cloning database dictionary.
The loading operation accepts one or more source files, with each file processed according to its file
extension. Valid files are stored in the dictionary.

Syntax
hdbcloner -c load_schema -s <source-file> [parameters]

Parameters
-s <source-files> [required]
Specifies the file(s) that are to be loaded into the dictionary. Multiple files of the same or different file
extensions can be loaded. Wildcards (*) can be used to load files.
Files are source schema files coded according to the syntax requirements of each file type (.cls,
.dbdef).
Note: The .dbd extension is still supported by this utility for backward compatibility to pre-Habitat 5.x
systems. However, build scripts and tools in Habitat 5.x systems may no longer recognize the .dbd
extension.

-replace [optional]
Using this option replaces existing schema files stored in the dictionary with new, same-named
schema files. If the named schema file does not exist, this option has no effect.
The load_schema command default is to not replace existing schema.

-mxdef <mxdef-source> [optional]


Specifies the mxdef file to use to override the MX values in the database schema file (DBDEF). This
option overrides the HABITAT_MXDEFDIR environment variable. For more information about using this
option, refer to the section “Database Resizing Using MXDEF File” in the Hdb Programmer’s Guide.

-nojointitle [optional]
If present, this option tells hdbcloner not to concatenate the DBDEF version title with the MXDEF
version title when formulating the database version title in the data dictionary. Normally, hdbcloner
combines the titles so that the result can be distinguished from the title produced by loading just the
DBDEF without the MXDEF. Specifying -nojointitle allows a CLS file that has a hard-coded database
version to use the new MX values specified in the MXDEF (see the HABITAT_NOJOINTITLE environment
variable). For more information about using this option, refer to the section “Database Resizing Using
MXDEF File” in the Hdb Programmer’s Guide.
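
Example
% hdbcloner -c load_schema -s scada.cls scadamom.dbdef
This command loads a clone schema file and a database schema file in a single invocation (the file
names are illustrative).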

hdbcloner -c offline_clone
This command marks a clone as offline. An offline clone is not available to application programs.
Clones cannot be marked offline until all applications accessing the clone are halted.
A clone is generally taken offline to fix problems in the clone or to copy the clone file. Therefore, the
offline_clone command is considered a system administrator command and is normally not used by
developers.

Syntax
hdbcloner -c offline_clone -a <appname> -f <famname>

Parameters
-a <appname> [required]
This parameter identifies the application name of the clone that is to be taken offline. Wildcards are
supported. Application names are not case-sensitive.

-f <famname> [required]
This parameter specifies the family name of the clone to be taken offline. Wildcards (*) are supported.
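
Example
% hdbcloner -c offline_clone -a netmodel -f dts
This command takes the NETMODEL clone in the DTS family offline, as in section 12.3.1 Converting a
Clone to an Archive File.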

hdbcloner -c online_clone
This command places a clone online after it has been manually taken offline. This command may be
required if, while creating a clone, the clone is not placed online within the timeout period.

Syntax
hdbcloner -c online_clone -a <appname> -f <famname>

Parameters
-a <appname> [required]
This parameter specifies the application name of the clone that is to be placed online. For multiple-
application clones, this name identifies the clone schema name. Wildcards (*) are supported.

-f <famname> [required]
This parameter specifies the family name of the clone to be placed online. Wildcards (*) are
supported.
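
Example
% hdbcloner -c online_clone -a scada -f ems
This command places the SCADA clone in the EMS family back online.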

hdbcloner -c remove_clone
This command removes a clone and its associated clone file.

Syntax
hdbcloner -c remove_clone -a <appname> -f <famname> [parameters]

Parameters
-a <appname> [required]
This parameter specifies the application name of the clone that is to be removed. Wildcards (*) are
supported.

-f <famname> [required]
This parameter specifies the family name of the clone to be removed. Wildcards (*) are supported.

-offline [optional]
This parameter indicates that the clone to be removed is already offline. This parameter causes the
cloner to search the clone file namespace rather than the memory-resident cloning database
namespace. This parameter is required if the clone has already been taken offline, or if the clone was
never successfully placed online.
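
Example
% hdbcloner -c remove_clone -a netmodel -f dts -offline
This command removes the NETMODEL.DTS clone after it has already been taken offline, as in the
move procedure of section 12.3.2 Moving a Clone from One HABITAT Group to Another.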

hdbcloner -c remove_schema
This command removes schema from the cloning database dictionary. After a schema has been
removed, it cannot be referenced. Schema can be reloaded with the hdbcloner -c load_schema
command.

Syntax
hdbcloner -c remove_schema -n <type:name>

Parameter
-n <type:name> [required]
Specifies name of the schema to be removed. The type portion of the name specifies the type of
schema to be removed. This is specified as one of the following: cls, dbd, or mxs. Also, a wildcard
character (*) can be specified to mean all schema types that match the name of the component
supplied.
The name component refers to the name of the schema object to be removed. The name must
specify an application name or a clone schema name if the cls type is specified.
If the type is dbd, then the name portion has a more-detailed syntax, which is dbname.versiontitle. If
you want to remove all instances of database schema with a given database name, then specify just
the dbname portion or dbname.*. The wildcard (*) specifies all versions. If only the dbname is
provided, it is the same as specifying all versions by using the wildcard (*) for the version title.
Wildcard characters can be used in any place within the name (including the database version title)
to indicate the removal of multiple schemas of a given type.
Note: The use of the wildcard character in the type and name does not imply file globbing. Therefore,
on platforms that use file globbing (e.g., Linux), wildcards must be specified within quotes so that the
shell does not interpret the wildcard character in its globbing action.

Examples
% hdbcloner -c remove_schema -n cls:scada
This command removes the scada clone schema from the Hdb database dictionary.
% hdbcloner -c remove_schema -n dbd:scadamom
This command removes the database definition of SCADAMOM from the Hdb database dictionary.
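% hdbcloner -c remove_schema -n 'dbd:scadamom.*'
This command is equivalent to the previous one; the wildcard selects all versions of the SCADAMOM
database schema, and the quotes prevent the Linux shell from globbing the * character.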

hdbcloner -c rename_clone
This command renames a clone. This command can only be used offline and applied only to clone
files. You rename clones if you need to change the family name, or when you import a clone from
another HABITAT group.
This command performs the following functions:
• Verifies that the clone file is consistent with the current version of the HABITAT/Hdb group.
• Updates the clone’s unique identifier so that it is consistent with other clones in the group.
• Optionally, changes the clone’s family name (which also changes the clone’s file name).
The clone directory must be updated to include the renamed clone file. The clone can reside in any
location, but it must be specified to the clone directory using its full path name (refer to the
hdbdirectory program for more information about this subject).

Syntax
hdbcloner -c rename_clone -car <clonefilename> [parameter]

Parameters
-car <clonefilename> [required]
This parameter specifies the name of the clone that is to be renamed. The clone must be offline and
must be specified by its full path name.

-f <famname> [optional]
This parameter specifies a new family name for the clone. Choose a name that will not conflict with
other clones in the targeted group.

Proprietary – See Copyright Page 142 Hdb Utilities Reference


Hdb User’s Guide
hdbcloner -c show_clone
This command produces a report of clones that populate a HABITAT group. You can specify an
individual clone or all clones in a group, or any subset that is identifiable by wildcards (*) in the
application and/or family names. You can also select the level of detail you want in the report.

Syntax
hdbcloner -c show_clone [parameters]

Parameters
-a <appname> [optional]
This parameter is used to specify the application name of the clone to show. Wildcards (*) are
supported in the same manner as create_clone using the -replace parameter. If not specified, then all
applications are assumed, just as if -a * were specified.

-f <famname> [optional]
This parameter is used to specify the family name of the clone to show. It is used just like the previous
description for create_clone. Also, wildcard characters are supported in the same manner as
create_clone using the -replace parameter. If not specified, then all families are assumed, just as if
-f * were specified.

-offline [optional]
This parameter specifies that the offline clone file namespace is to be used for locating clones. By
default, the online memory-resident cloning database is used for the clone namespace. Less
information is available for offline clones.

-online [optional]
This parameter specifies that the online clone file namespace is to be used for locating clones. This is
the default.

-full [optional]
Same as verbose level 5. This option will be deprecated in the future. Users should use the -verbose
parameter instead.

-multiapp [optional]
This parameter indicates that the -a <appname> parameter specifies a schema name instead of the
clone’s application name in a multi-application clone.

-verbose <level> [optional]


Indicates, from 1 to 10, the level of detail you want the show_clone command to display. The default
is one (1). Levels are defined in the following way:

Table 14. Levels of Detail for show_clone


Level      Description
Level 1    A one-line report per schema component specified by the name and type modifiers.
Level 2–3  Additional information, such as schema name, clone unique ID, and databases, is displayed.
Level 4–5  Same as 2–3, with the addition of clone and database GUIDs and definition stamps.
Level 6    Same as 4–5, with the addition of partition names for each database.
Level 7    Same as 6, with the addition of GUID and definition stamp information about each partition.
Level 8–10 Same as 7, with the addition of mapping information.

Examples
Show all the clones
% hdbcloner -c show_clone
This command displays each clone in one line. It also shows the process identifier (PID) of the
processes that are currently accessing that clone.

Show clones for the Scada application only


% hdbcloner -c show_clone -a scada
This command displays all the SCADA clones only.

Show clones for the EMS family only


% hdbcloner -c show_clone -f ems
This command displays all the clones in the EMS family only.

hdbcloner -c show_limits
This command displays a list of Hdb limit values.

Syntax
hdbcloner -c show_limits

Parameters
None.

Descriptions
These are the limits shown by the command:
CLONE limits:
• Maximum Application Name size
• Maximum Family Name size
CLS limits:
• Maximum APPLICATION Name size
• Maximum CASE Title size
DBDEF limits:
• Maximum DBDEF Name size
• Maximum DBDEF Title size
• Maximum FLDPAR Name size
• Maximum RECTYP Name size
• Maximum FIELD Name size
• Maximum # of DBDEF statements per DBDEF file
• Maximum # of RECTYP statements per DBDEF file
• Maximum # of FLDPAR statements per DBDEF file
• Maximum # of FIELDS statements per DBDEF file
• Maximum size for C*
• Maximum keyfield size for C*

hdbcloner -c show_schema
This command displays descriptions of requested schemas to standard print or to screen output.

Syntax
hdbcloner -c show_schema [-n <type:name>][parameters]

Parameters
-n <type:name> [optional]
Specifies the name of the schema to be displayed. The type portion of the argument specifies the type
of schema to display. This is specified as one of the following: cls, dbd, or mxs. Also, a wildcard
character (*) can be specified to mean all schema types that match the name of the component
supplied.
The name portion refers to the name of the schema object to display. The name must specify an
application name or a clone schema name if the cls type is specified.
If the type is dbd, then the name portion has a more-detailed syntax, which is dbname.versiontitle. If
you want to display all instances of database schema with a given database name, then specify just
the dbname portion or dbname.*, where the (*) wildcard specifies all versions. If only the dbname is
provided, it is treated the same as specifying the wildcard (*) for the version title.
Wildcard characters can be used in any place within the name (including the database version title)
to display multiple schemas of a given type.
Note: The use of the wildcard character in the type and name does not imply file globbing. Therefore,
on platforms that use file globbing (e.g., Linux), wildcards must be specified within quotes so that the
shell does not interpret the * character in its globbing action.

-full [optional]
Same as verbose level 5. This option may be deprecated in the future. Users should use the -verbose
parameter instead.

-verbose <level> [optional]


Indicates, from 1 to 10, the level of detail you want the show_schema command to display. The
default is one (1). Levels are defined in the following way:

Table 15. Reports Level of Detail Settings


Level      DBD Schema Description / CLS Schema Description
Level 1    DBD: A one-line-per-database schema report. CLS: A one-line-per-clone schema report.
Level 2    DBD: Displays each partition defined in the database. CLS: Displays each database defined in the clone schema.
Level 3–4  DBD: Same as above, plus the definition stamp for each partition.
Level 5–7  CLS: Same as above, plus the savecase definitions.
Level 8    DBD: Same as above, plus all the record types defined in the database; for each record type, the MX and its parent record type (if applicable) are displayed.
Level 9–10 DBD: Same as above, plus all the fields defined in each partition; for each field, its type and size are displayed.

Examples
Show all schema of all types
% hdbcloner -c show_schema
This command displays all the database schema, clone schema, and mxset schema registered with
Hdb, each on its own line.

Show database schema(s) for SCADAMOM


% hdbcloner -c show_schema -n dbd:scadamom
DBD schema: SCADAMOM.ESCA_EMP
DBD schema: SCADAMOM.PROJECT
Cloner completed successfully
This command displays all the registered SCADAMOM database schema. In this example, there are
two versions of SCADAMOM: ESCA_EMP and PROJECT.

Show the partitions of the SCADAMOM database schema for version “PROJECT”
% hdbcloner -c show_schema -n dbd:scadamom.project -verbose 2
DB Schema SCADAMOM.PROJECT
Partition CLSLODIS *REPLICATED*
Partition CLSLOPRV *REPLICATED*
Partition CLSLOPUB *REPLICATED*
Partition CLSTADIS
Partition CLSTAPRV
Partition CLVOLDIS
: : :
Cloner completed successfully
The parameter “dbd:scadamom.project” specifies that only the database schema “SCADAMOM” of
version “PROJECT” is to be displayed. The verbose level of 2 tells the hdbcloner to display the partition
info (only partial output is shown above).
If the parameter “dbd:scadamom.project” is replaced with “dbd:scadamom” above, then the
partitions for both versions of the database schema will be displayed.

Show all the registered clone schema


% hdbcloner -c show_schema -n cls:*
CLS schema: ALARM
CLS schema: CASESRV
CLS schema: CFGCTRL
CLS schema: CFGMONI
: : :

CLS schema: RTNET
CLS schema: SAMPLER
CLS schema: SAMPTEST
CLS schema: SCADA
CLS schema: SCADAMDL
Cloner completed successfully
This command shows all the registered clone schema (only partial output is shown above).

Show all the database schema and savecases defined for the SCADA clone schema
% hdbcloner -c show_schema -n cls:scada -verbose 10
CLS Schema SCADA

DB: SCADAMOM.ESCA_EMP
DB: SCADACL.ESCA_EMP
DB: MESCADA.ESCA_EMP
DB: COMMLOG.ESCA_EMP
DB: SOELOGS.ESCA_EMP
DB: ACCUMHIS.ESCA_EMP

Case: SCADA
Includes Database SCADAMOM
Includes Database SCADACL
Includes Database ACCUMHIS
Case: DTS
Includes Database SCADAMOM
Includes Database SCADACL
Includes Database MESCADA
Case: SOELOGS
Includes Database SOELOGS
Case: COMMLOG
Includes Database COMMLOG
Case: ACCUMHIS
Includes Database ACCUMHIS
Case: TAGGING
Includes Partition TGSTAPRV.SCADAMOM
Cloner completed successfully
The command shows the SCADA clone schema and includes six database schema: SCADAMOM,
SCADACL, MESCADA, COMMLOG, SOELOGS, and ACCUMHIS, which all have the same database
schema version title: ESCA_EMP.

hdbcloner -c verify_schema
This command validates source file schema. It is similar to the hdbcloner -c load_schema command,
except that the dictionary is not modified. The verification report is sent to standard output.

Syntax
hdbcloner -c verify_schema -s <source files>

Parameter
-s <source-files> [required]
This parameter specifies the schema file(s) to be verified. Multiple files of the same or different file
extensions can be verified. Wildcards (*) can be used to select the files.
Files are source schema files coded according to the syntax requirements of each file type (.cls,
.dbdef).
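
Example
As a sketch, the following command verifies one database definition file and one clone schema file
(the file names are hypothetical):
% hdbcloner -c verify_schema -s scadamom.dbdef scada.cls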

hdbcloner -h
Online hdbcloner Help is available using the -h parameter.
If you specify a command with the -c parameter, detailed Help is displayed for that command.

Syntax
hdbcloner -h [-c <cloner_command>]

Parameter
-c <cloner_command> [optional]
Specifies the hdbcloner command for which you want detailed Help. To access Help for a specific
hdbcloner command, use the following form of the command:
hdbcloner -c create_clone -h
The above command results in the help file for create_clone being displayed.

hdbcompare
Command-line interface for comparing two instances of the same database.

Syntax
hdbcompare <container1> <container2> database <options>

Parameter
database
Identifies the name of the database to be compared. The database must exist in both container 1
and container 2, which are identified using the qualifiers described below.

Options to specify the database container (required)


-s1 <clone>
Identifies a clone container containing the first instance of the database to be compared. The syntax
is application.family. For example, SCADAMDL.DTS.

-s2 <clone>
Identifies a clone container containing the second instance of the database to be compared. The
syntax is application.family.

-sf1 <savecase>
Identifies a savecase containing the first instance of the database to be compared. This is the full
filename of the savecase file. If a directory is not specified, then it is assumed that the file is in the
HABITAT_CASES directory.

-sf2 <savecase>
Same as sf1, except that it identifies the second database instance.

-sz1 <zip file>


Identifies a zip file containing a savecase that contains the first instance of the database to be
compared. This is the full filename of the zip file.

-szf1 <savecase>
Identifies a savecase type within a zip file that contains the target database. The title is not needed
(assuming there is only one of this type in the zip file). An example would be: case_netmodel_netmom.

-sz2 <zip file>


Same as sz1, except that it identifies the second database instance.

-szf2 <savecase>
Same as szf1, except that it identifies a savecase within the second zip file.

Other options
-modeling_only (optional)
Limits field comparisons to only those fields with the “MODELER” attribute.

-delimiter <value> (optional)
Overrides the default composite ID delimiter (“/”) with a user-specified value. The value must be within
double quotes.

-output <dir or file> (optional)


Identifies the name and/or location of the output CSV file. If not specified, then the output file is
written to the current default directory. The default filename is:

<database>_comparison_YYYYMMDD_HHMMSS.csv

where YYYYMMDD_HHMMSS identifies the date and time when the comparison was initiated.

If a directory is specified without a filename, or if an environment variable is specified, then a file
based on the default filename scheme above is written to the specified directory.
The corresponding summary log file has the same file name but with the “.log” extension.

-logfile <file> (optional)


Name of the log file. Overrides -output <file> specification.

-prompt (optional)
If specified, then the user is prompted for any missing required parameters or qualifiers. The
“required” inputs are the two database container specifications and the database name.

-context_file <file> (optional)


Names a context field file that identifies fields on a per-database basis that should be included in the
output header data.

-description_file <file> (optional)


Identifies a file containing descriptions for database fields. If an output record is generated by the
comparison that includes one of the fields identified in this file, then the description from the file will
be included in the comparison output file.

-real4_sigdig <number> (optional)


Specifies the number of significant digits to display for R*4 values in the CSV file. The minimum value
is 1, and the maximum value is 8. If not specified, the default C formatting string %g is used to format
the floating point value.

-real8_sigdig <number> (optional)


Specifies the number of significant digits to display for R*8 values in the CSV file. The minimum value
is 1, and the maximum value is 18. If not specified, the default C formatting string %g is used to
format the floating point value.

-fp_tolerance <percent> (optional)


Specifies the percentage tolerance for floating-point comparison. It must be a positive number up to
a maximum of 1%; for example, -fp_tolerance 0.01 means a 0.01% tolerance is used when comparing
floating-point numbers. Hdbcompare reports two floating-point values as different only when their
difference exceeds the given percentage. If not specified, the default is no tolerance, meaning that
the floating-point comparison is a binary compare between the two values; in that case, small
rounding differences between two values are reported as differences.
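
For example, the following command (using the clone containers from the examples below) would
compare the NETMOM database with a 0.01% floating-point tolerance:
%> hdbcompare -s1 netmodel.dts -s2 rtnet.dts netmom -fp_tolerance 0.01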

-help (optional)
Prints the command-line help message. The output consists of a summary description of each of the
command-line qualifiers.

Implicit input
HABITAT_CASES environment variable
This is the standard HABITAT directory where the Habitat UI and Habitat APIs assume that savecases
are stored. If a savecase file is specified as a container (recognized by the name starting with “case_”)
and no directory is specified, then the utility looks for the savecase in this directory.

HABITAT_TMPDIR environment variable


If a savecase must be extracted from a zip file, it is extracted into this directory.

Description
The two instances of the requested database are located, mapped, and then compared.
Two output files are created in a common directory using the same base name:
• The “output” file is a comma separated value (CSV) file that enumerates the differences between
the two database instances.
• The “log” file is a message log containing information about the comparison results.
Note: Depending on the databases being compared, it is conceivable that the output file could
contain more records than Microsoft Excel can import (64K). In this case, use of a tool such as
Microsoft Access to view the CSV file is recommended.

Exit Status
The command exits with a status to indicate the overall success or failure of the comparison:
• EXIT_SUCCESS: The comparison was successful and did not generate any error messages.
• EXIT_FAILURE: The comparison generated one or more error messages.
The actual exit value is consistent with the respective standard on the operating system where the
comparison was done.

Examples
• Comparing the NETMOM database between the NETMODEL.DTS clone and the RTNET.DTS clone.
The output files are netdiff.csv and netdiff.log.
%> hdbcompare -s1 netmodel.dts -s2 rtnet.dts netmom -output netdiff
• Comparing the NETMOM database between the NETMODEL.DTS clone and the savecase
case_rtnet_dts.emp60 using the default output format. The output files are
netmom_comparison_yyyymmdd_hhmmss.csv and
netmom_comparison_yyyymmdd_hhmmss.log.
%> hdbcompare -s1 netmodel.dts -sf2 case_rtnet_dts.emp60 netmom -output
netdiff
• Comparing the NETMOM database between the NETMODEL.DTS clone and the zipped archive file
mytest.adearc with a savecase case_netmodel_ade.emp60 in it. The output files are
netade.csv and netade.log.
%> hdbcompare -s1 netmodel.dts -sz2 mytest.adearc -szf2
case_netmodel_ade.emp60 netmom -output netade

hdbcopydata
The hdbcopydata program copies data from clone to clone, from clone to an archive or savecase file,
and from an archive or savecase file to a clone. Data can be copied from/to a field, a partition, a
database, or a container.
Each container (source and destination) must be specified as a clone, an archive file, or a savecase
file. A source container is designated with the -s, -sf, or -case parameters. The destination
container is designated with the -d, -df, or -case parameters.
Note: The hdbcopydata program cannot copy individual records (for record copy operations, see
hdbrio, hdbexport, or hdbimport).

Syntax
hdbcopydata <source-container> <destination-container> [parameters]

Container Specifications
-case <case-specification>
Specifies that the source or destination container is a savecase specification.
If the source container is to be designated as a savecase, then use the -d or -df parameters in
conjunction with -case.
If the destination container is to be designated as a savecase, then use the -s or -sf parameters in
conjunction with -case.
Note: The parameters -title and -a may contribute to the information in naming a savecase.
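
As an illustration, the following minimal sketches copy all data from one clone to another, and from a
clone into an archive file (the clone family STUDY and the file name are hypothetical):
% hdbcopydata -s scadamdl.dts -d scadamdl.study
% hdbcopydata -s scadamdl.dts -df scadamdl_backup.arc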

-s <clone-spec>
Specifies a source container as a clone specification.
For a description of the <clone_spec> syntax, see hdbcloner for more information.

-sf <filespec>
Specifies a source container as an archive file.
The -s parameter cannot be used in the same command with -sf.

-d <clone_spec>
Specifies a destination container as a clone specification.
For a description of the <clone_spec> syntax, see hdbcloner for more information.

-df <filespec>
Specifies a destination container as an archive file.
To create a savecase file, you must use the -case argument.

-sc <source-context> [optional]


Specifies the source object context for an archive or a savecase source container. A field, a partition,
or a database within a source container can be specified using this command.
This parameter is required if the destination and source objects are different, such as copying one
field to another field. Most of the time, the destination context is sufficient to imply the source object.

-title <case title> [optional]
Assigns a title to a savecase file designation if the savecase title has not been specified by the -case
parameter. If both the -case parameter and the -title parameter specify the same title, the
redundancy has no impact; however, if they specify different titles, an error is generated.

Parameters
-a <appname> [optional]
Specifies the application name context, which is used to locate the savecase type named by -case.
Normally, the case application is derived from the other container (whether source or destination) if it
is a clone specification.

-backup | -nobackup [optional, default is nobackup]


Specifies whether the target clone container database is backed up after the data is modified. The
default is -nobackup; however, the default can change based on system configuration — thus, both
options are provided.
A database is backed up only under the following conditions:
• The database has replicated partitions.
• The clone is marked to be replicated.
• The backup/MRS system is operational on the system.

-compress | -nocompress [optional]


Use -compress to create a compressed savecase or archive file. It is ignored if the destination
container is not specified as a savecase or archive file. By default, savecases and archive files are
created in compressed format. The default behavior can be changed by setting the environment
variable HABITAT_CASE_COMPRESSION to N.

-compresslevel <number> [optional]


Use -compresslevel to specify the compression level to use when creating a savecase. The argument
to this option is a number between 1 and 9. A smaller number improves compression speed but
produces a larger compressed file than a higher number. The default compression level is 4.

-comment <text> [optional]


This is applicable only with the -sf <source> -case <casetype> option when creating a new
savecase; it stores the comment text with the savecase file. If the text contains whitespace,
surround the text with double quotes on the command line.
To view the comment and creator stored with the savecase file, use the hdbdump command:
> hdbdump -header -file savecase_filename

-creator <name> [optional]


Specifies the name of the user creating this savecase. This is only applicable with the -sf <source>
-case <casetype> option when creating a new savecase. If not specified, hdbcopydata stores the
name of the user running the command into the savecase instead.

-datafill [optional]
Specifies that excess data in the target field is to be set to a fill byte. A datafill condition is the
opposite of the truncate condition. Datafill is required when the destination LV is larger than the
source LV and sufficient elements do not exist to completely occupy the destination field. The only
option allowed is to set those excess elements to the field’s FILL byte character as if they were
inserted and not set to any particular value.
The -datafill parameter is required for partition or field copies where the datafill condition exists. It is
not required for a full database copy.

-ignoreconvert [optional]
Specifies that field conversion errors are to be ignored during copy. When -ignoreconvert is used,
data copy problems are not treated like errors. Data may or may not be reliable when using this
parameter.
-ignoreconvert ignores the following system errors:
• Field truncate (-ignoretruncate)
• Data fill (-datafill)
• Data type conversions (range errors, numeric truncation, invalid conversion)

-initialize | -noinitialize [optional]


This parameter controls whether the target database is initialized prior to being updated during a
copy operation.
By default, the target database is initialized during a database copy.
By default, the target database is not initialized during a field or a partition copy.
Archive and savecase files are always initialized when they are created regardless of the status of this
parameter.
Note: The primary reason to use the -noinitialize parameter is to manually merge databases with
different schemas.

-ignorestamps [optional]
This parameter suppresses record time stamp checking during a field or partition copy. By definition,
all record time stamps in all fields of the partition being copied must match the source and
destination databases exactly.
The LV values of the target and the source databases are still honored. If the LV values are different,
then you may still not be able to copy the data due to a truncate or a data fill problem. Therefore,
when using the -ignorestamps parameter, you may also want to use the -ignoretruncate parameter,
the -datafill parameter, or both (see the discussion of truncate and datafill in the following sections).

-ignoretruncate [optional]
Specifies that truncation errors are ignored. A truncation error occurs when the LVs involved in the
copy are different and not all of the data from the source can be copied; for example, when copying
a source field N_UN whose LV is 100 to a destination field N_UN whose LV is 50, field elements with
subscripts 51-100 are not copied, and the result is called truncation.
By default, truncation of data is considered an error, and the destination database objects involved
are not modified.
When LVs of records involved in a copy are different in source and destination, then the time stamps
are almost always different too, so the -ignorestamps parameter is often specified along with the
-ignoretruncate parameter.
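
For example, a copy between clones whose LVs differ might combine these parameters (the clone
specifications are hypothetical):
% hdbcopydata -s scadamdl.dts -d scadamdl.study -ignorestamps -ignoretruncate -datafill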

-nolocks [optional]
This parameter turns off all database locking during hdbcopydata transactions.

However, archive- and savecase-resident databases are never locked. They are always accessed in a
shared, read-only fashion or an exclusive update fashion.
This parameter is provided for special circumstances where a knowledgeable user needs to access
data that has been locked by another process.
Note: It is possible that, by turning off the locking, data can become corrupt or processes can abort
due to corrupt data.

-noreplace | -replace [optional]


Specifies whether the destination archive or savecase file is to be replaced. The default is to replace
an existing archive or savecase file by the same name.
Use the -noreplace parameter to avoid accidentally replacing an existing file set.

-samedef [optional]
Specifies that the source and destination databases involved in a copy must have the same definition
based on the same schema. If the schema do not match, the copy cannot be performed.
By default, data can be copied whether the databases are the same or not, and whether the
definitions are the same or not.

-selcop [optional]
Specifies that a selective copy (selcop) is to be performed. A selcop copy is different from a regular
hdbcopydata operation, in that both the source and destination databases are modified.

-selcoponly [optional]
Specifies that only a selcop (selective copy) style backwards copy is to be performed. A selcop
backwards copy is used to copy selcop field values of matching records from the destination to the
source.

-update_source [optional]
This parameter is required whenever a HABITAT savecase file is retrieved and the file is larger than
the system page file. Applies only to retrieval of older HABITAT version 4.x-style savecases. When
specified, the source file will be modified in a manner that makes it incompatible with HABITAT
version 4.x systems.

-verbose <level> [optional]


Sets the level of detail (1 to 10) in the error and detail report that describes copy operations. The
default (1) yields the smallest amount of information. Setting the level to 10 gives a field-by-field level
of detail, including the type of errors, or conversion problems encountered during copies, as well as
the state of the data and field in the destination. For large databases with many fields, the output can
be rather large when set to 10.

-verify [optional]
Enables comparison of the source and destination schema. No data is copied. The comparison is
performed only if the destination context is a clone. The -verbose switch can be used to increase or
decrease the amount of output printed.
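
For example, to compare the source and destination schema without copying data, a command of
this form might be used (the clone specifications are hypothetical):
% hdbcopydata -s scadamdl.dts -d scadamdl.study -verify -verbose 5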

Examples
Refer to the hdbcopydata chapter.

hdbdirectory
The hdbdirectory utility creates or updates a HABITAT group’s Hdb cloning database clones directory
file. The clones directory file contains the full path file name for each clone known to the HABITAT
group.
The hdbdirectory utility is used to:
• Initialize a HABITAT group.
• Import clone files from a different HABITAT group.
• Update or correct a clone directory file if it is lost or damaged.
The hdbdirectory utility is a system administrator tool.

Syntax
hdbdirectory [parameters]

Parameters
-add <file-specification> [optional]
Specifies the file(s) to be added to the clone directory. The file specification must include the full
path. Wildcards are accepted in the filename, but only clone files (.car extension) are accepted. By
default, if not specified, no clone files are processed. If a file has the proper name and extension but
is not a true clone file, it cannot be placed online even though it is added to the clone directory.

-cdr <cdr-filename> [optional]


Specifies the clone directory filename. If not specified, the regular default clone directory file is used. If
specified using the same file name as the regular default clone directory file, then that is tantamount
to not specifying this parameter.

-create [optional]
Specifies that the clone directory file is to be created. The file is created empty. By default, the
existing clone directory file is accessed (whether the default file or the one specified by the -cdr
parameter); if there is no existing clone directory file, an error is reported. When this parameter is
used in concert with other parameters, the create operation is performed first.

-show [optional]
Used to list the contents of the clone directory file. When this parameter is used in concert with other
parameters, the show operation is performed last.

-verbose <level> [optional]


Used to specify a verbose detail level as a number from 1 through 10. Default value is 1, and the most
detailed report is 10. This verbosity value affects the detail on the show command and the error
reporting.

Example
To clear the existing clone directory (the default directory is the file named “clone_directory.cdr” in the
HABITAT_CDBROOT/cloning_database directory) and add all clone archive file entries in the clone
directory from the default location, i.e., HABITAT_CLONES:
% hdbdirectory -create -add $HABITAT_CLONES/clone*.car

hdbdocument
The hdbdocument utility converts a database definition source file (xxx.dbdef) to an ASCII file
(xxx.dbdoc) containing detailed documentation of the database definition.

Syntax
hdbdocument -dbdef [-output=<output file name>] <input file name>

Parameters
-dbdef [required]
Indicates that a database definition file is being converted to ASCII format for documentation
purposes.

-output [optional]
This parameter assigns a name to the output file. If -output is used, but the name used has no type
extension, then .dbdoc is assigned.
If not included in the command line, the output file is assigned the same name as the database
definition file, but with a .dbdoc extension.
If no device or directory is specified, the default is to use the current working device and directory.
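
Example
As a sketch, the following command converts a hypothetical scadamom.dbdef file and writes the
documentation to scadamom.dbdoc:
% hdbdocument -dbdef -output=scadamom.dbdoc scadamom.dbdef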

Errors
All of the following errors cause hdbdocument to abort. Refer to the following table for corrective
action.

Table 16. Table of Errors


Error                         Description                               Action/Correction
Unknown Qualifier             A qualifier other than -dbdef or          Correct the syntax error and try
                              -output appeared in the command line.     again.
No Value Is Allowed on the    A value was assigned to the -dbdef        Remove the value and try again;
-dbdef Parameter              parameter.                                no values are allowed.
Unknown Token Type            An internal error has occurred in the     Contact Customer Support as
                              software.                                 soon as possible.
Missing -dbdef Qualifier      The -dbdef qualifier is missing from      Include the -dbdef qualifier in
                              the command line.                         the command line.
Missing Input Specification   No input file specification was           Include the file specification in
                              included in the command line.             the command line.
Error Parsing Input File      The input file specification could not    Correct the syntax error and try
Specification                 be parsed because it contained a          again.
                              syntax error.
Input File Name Length        The file name was found empty during      Correct the input file
Came Out Zero                 parsing.                                  specification.
Error Parsing Output File     The output file specification could       Correct the syntax error and try
Specification                 not be parsed because it contained a      again.
                              syntax error.
Input File Not Found          The requested DBDEF file could not be     Check the spelling and/or path
                              found.                                    for the specified input file.
Unable to Open Output File    The requested database document could     Check that the correct
                              not be created.                           permissions or privileges have
                                                                        been granted. Be sure that the
                                                                        default directory exists.
Parsing Completed with        Hdb found errors when processing the      Check the DBDEF file for errors.
Errors                        database definition file.
Exception: xxx                Hdb found an error while converting       Refer to the information supplied
                              the database definition file into a       with the error message, and then
                              database schema.                          check the DBDEF file for errors.

hdbdump
The hdbdump utility is a developer’s debug and analysis tool used to troubleshoot Hdb binary file
problems. It can also be used by a system administrator to determine if Hdb files are valid or if they
are corrupted.
The hdbdump utility can dump database schema, structures, data, or Hdb binary files. ASCII
formatted files can be viewed using standard text display tools.

Syntax
The hdbdump utility requires both action parameters and source objects to be specified. The modifier
style parameters are optional. The syntax of the command is:
%hdbdump <action-parameters> <source-objects> [ <modifiers> ]

Source Object Parameters


-a <appname>
Specifies the name of the source object application. One application name can specify the application
context of the source object. The application context may be required for clone, savecase, or
database within an archive file. The application name can be derived from the HABITAT_APPLICATION
environment variable.

-archive <archive-file>
Specifies that one or more archive files are given as the source object. Wildcards are allowed (invokes
file globbing in Linux). Names can also be repeated, if you separate each with a blank space.

-case <appname>.<casetype>.<title>
Specifies the name of a savecase file as the source object. Wildcards can be used for any component
of the name; however, on Linux platforms, the wildcard character must be enclosed in quotes to
avoid file globbing, as in the following: “*”.

-cdb
Specifies that the memory-resident cloning database be given as source object.

-f <famname>
Specifies the clone context family name when a clone is given as the source object. One family name
can be given to name a source clone object. The family name is optional; if omitted, the family
context is derived from the HABITAT_FAMILY environment variable.

-file <file-specification>
Specifies that a file is to be used as the source object. This parameter can be used to name any type
of Hdb file as the source object. File names can be repeated, separated by a white space. Wildcards
are allowed. If wildcards are used on a Linux platform, file globbing will be invoked. This includes
clones, archives, savecase, DNA, and/or core data files.

Action Parameters
-data
Used to dump data of the named objects.

-dump
Dumps the segment information about archive files and clone files. This action is primarily used for
analysis and debug. The reported information helps locate corruption or synchronization of data
problems among the cloning database and clone files. This command is not supported on
HABITAT 4.x savecase files.
The dump action operates on the entire source file, or the clone in memory. It does not operate on
database objects, so no database is specified.

-header
Used to dump the header segment of the file, or the memory-resident cloning database. The header
segment includes the sanity block that describes the type of file and, for database containers (clone
files, archive files, and traditional HABITAT savecase files), the header also includes the database
index listing of each database container.
The header action operates on the entire source file, or the clone in memory. It does not operate on
database objects, so no database is specified.

-hier
Dumps hierarchical records into a tree-structured report.

-reset_indirects
Specifies that indirect fields (except for parent and ancestor pointers) are to be reset according to the
current associated LV values. This action is supported for database objects only.

-schema
Dumps schema information for the named object.

-schema-hier
Dumps the record hierarchy schema, which is the list of record types arranged in hierarchical order.

-stamps
Dumps the time stamps for the named object.

-verify_indirects
Validates that indirect fields are set according to the LV and MX settings associated with tables.

-verify_parents
Validates parent pointer fields.

-verify_pointers
Validates that child, descendant, and ancestor pointer fields are set according to the current parent
pointer settings. This action is only supported for database objects.

Modifiers Parameters
-append
The -append parameter appends the output of a report (dump) to the file specified by the -output
parameter. The file is created if it does not exist. Both parameters are considered redirection
parameters. The command redirection (>>) symbol can also be used.

-field <fieldname>
Names a specified field to be used in restricting the command action. The command action will be
performed on the fields named only (as well as objects named by other parameters -partition, -table,
-name). Wildcards are allowed anywhere within the name, or a wildcard can be used all by itself as
the name. When wildcards are used, the name must be enclosed in quotes. This parameter is used for
database objects only.

-name <objectname>
Restricts the command action to the object named in the command line. Wildcards are allowed, but
must be enclosed in single or double quotes: ‘*’/“*”.
This parameter is to be used only for database objects.

-nofree
Prevents FREE records from being included in the output of the hierarchical record report produced
by the -hier action parameter.

-nomasks
Dumps a mask’s bit container field instead of the mask fields. This action is used in conjunction with
the -data action parameter. By default, bit container fields are skipped and mask fields are dumped.
Applies only to database objects.

-output <output-filename>
Names the output file to be used. By default, all reports are sent to standard output. Output can also
be redirected to a named file using the command shell redirection (>) parameter.

-partition <partitionname>
Restricts command action to the partition or partitions named in the command. Wildcards are
allowed. Applies only to database objects.

-pseudo
Reports on database pseudo data fields, record types, or partitions. Pseudo data is like other schema
data, and it is created internal to the database when the database is created. Pseudo data is used to
manage internal data, such as LV, RT, stamps, and hierarchical pointers.

-table <tablename>
Names a specified table to be used in restricting the command action. The command action will be
performed on only the tables named (as well as objects named by other parameters -partition,
-name, -field). Wildcards (*) can be used anywhere within the name, or a wildcard can be used all by
itself as the name. When wildcards are used, the name must be enclosed in quotes. This parameter is
used for database objects only.

-timedate_as_numbers
Formats the date and time as an integer instead of a calendar date and/or time. Applies only to
database objects.

-verbose <verboselevel>
Defines the level of detail for a report. Level of detail can be set between 1 and 10. Level 10 provides
the greatest amount of detail; however, reports will be large even for small databases. The default
is 1. Applies primarily to -schema and -dump actions.
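
Examples
The following sketches assume hypothetical file and context names:
% hdbdump -header -file clone_scada_dts.car
% hdbdump -schema -a scada -f dts -name scadamom -verbose 5
The first command dumps the header segment of a clone file; the second dumps the schema of the
SCADAMOM database from the clone context named by -a and -f.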

hdbexport
The hdbexport utility converts binary data from an Hdb database to an ASCII formatted file for export
to other systems or databases. The default operation of this utility produces a complete copy of an
Hdb database into an ASCII file.
Hdbexport creates an audit trail log of its invocations in a file with the name
“HDBEXPORT_<YYYYMMDD>.log” in the default HABITAT_LOGDIR directory. The creation of the log can
be disabled using the HABITAT_HDBEXPORT_DISABLE_LOGGING environment variable or redirected
to another directory using the HABITAT_HDBEXPORT_AUDITDIR environment variable.

Syntax
hdbexport -d <source-dbname> -s <output-file> [parameters]

Parameters
-a <appname> [optional]
This parameter specifies the database’s context application name. The application name is derived
from the HABITAT_APPLICATION environment variable when not specified.

-append [optional]
This parameter specifies that the exported file is to be appended to an existing file. If not specified,
the export operation will create a new file (overwriting any files of the same name). This parameter is
used only when the output file is specified with -s parameter switch. If redirection is used, the
double-bracket symbol (>>) specifies the append operation (for more information, see the -s
parameter description in this section).

-archive <archive file> [optional]


This parameter specifies that the source data should come from an archive file or a savecase file
given by the <archive file> parameter. If this parameter is present, then the family name (-f)
parameter is ignored.

-by_field [optional]
This parameter specifies field mode export. The default is to export all fields using field element line
export style. Records are not exported in this mode. Fields can be skipped with the
-skip_field parameter. This parameter is equivalent to specifying the -field parameter and listing all
fields in the database. If only exporting a few fields, it is easier to use the -field parameter.

-by_record [optional]
This parameter specifies record mode export. When record export mode is selected, all records of
the database are exported (in alphabetic order by record name), using the flat style of record export
for hierarchical records. Individual records can be eliminated from the export by
using the -skip_record parameter.

-comment [optional]
This parameter specifies a comment to be entered into the audit trail log.

-create_patterns [optional]
This parameter specifies pattern-creation mode export. -create_patterns does not export data, but it
is used to create the record patterns that allow the user to customize the appearance of exported
data. Record patterns are compatible with the patterns used with hdbimport. Thus, the same record
pattern file can be used for export and import. Records can be selected according to the same rules
for exported record data. The default is to include all records. Individual records can be skipped using
the -skip_record parameter, or records can be specified using the -record parameter.

-d <source dbname> [required]


This parameter specifies the source database name.

-f <famname> [optional]
This parameter specifies the database’s context family name. If not specified, the name is derived
from the HABITAT_FAMILY environment variable.

-field <list-of-fields> [optional]


This parameter specifies that data be exported by field name. The list of fields to export is specified by
using fully qualified names (all record type names included as suffix). Each field name is separated
from others by white space. Wildcards are not supported.

-flat [optional]
This parameter specifies that the hierarchical records (in default export mode) are to be exported in
flat style. Flat export style is where each hierarchical record is exported just as if it were a non-
hierarchical record. The ordering of parent and child records is ignored, and the records are exported
in groups of like record type. Normally, this style of export is used in concert with the
-include_ancestors parameter switch, so that parent information can be included with the
hierarchical record export.

-h [optional]
This parameter prints a Help file to the screen or other standard output.

-hier_only [optional]
This parameter limits the default export to hierarchical records only. ITEMS and non-hierarchical
records are skipped.

-include_ancestors [optional]
If specified, the key values for each ancestor record are exported. By default, the ancestors’ key fields
are not included in the exported data.

-include_pointers [optional]
This parameter specifies that pointer fields are to be included in the export of hierarchical records.
Normally, pointer fields are not included because, when such records are imported by hdbimport, the
pointer field values are automatically generated. However, in some situations, pointer fields are
needed and this command can be used.
Note: Even though pointer fields can be included on records, they are ignored by hdbimport when
records are inserted or used for update. Pointer fields are hierarchical parent (one-to-one) or child
(one-to-many) pointers. This parameter affects the construction of record patterns when used with
the -create_patterns mode.

-include_pseudo [optional]
This parameter specifies that pseudo records and pseudo fields are to be included in the export data.
This parameter affects the construction of record patterns when used with the
-create_patterns mode.

-key <key-specification> [optional]
This parameter specifies that a key is to be used to locate a single record occurrence. A single key
value is used to locate non-hierarchical records. Multiple key values are used to locate records using
ancestor record composite keys for hierarchical records.
The -key parameter can be used in either record or field export mode. If record mode is selected, then
the key value automatically selects the record located to be exported (just like using the -record). If
field mode is selected, then only the field element corresponding to the record occurrence is
exported.
Unless changed by the appearance of the -field or -by_field parameters, the -key parameter selects
the record export mode. The -key cannot be used with the -range parameter switch.

-maxerrors <errorcount> [optional]


This parameter specifies the maximum number of detected errors before the export operation aborts.
The default is 100 errors.

-nodata [optional]
This parameter is used for debug and analysis only. When used, no data fields are exported. It is
useful in record mode and, in particular, hierarchical record exports. The result is the export of each
hierarchical record as a record name and record subscript (unless inhibited).

-nofieldnames [optional]
This parameter inhibits the export of all field names. This includes field names that appear in the
record export, and field names that appear in the field element export. If the record pattern is used
for record export, this parameter has no effect because field names are not included with record
patterns.

-nofree [optional]
This parameter disables the export of FREE records (annotations). This parameter only applies when
hierarchical records are exported in hierarchical order (default record export mode). This parameter is
equivalent to specifying the FREE record type name in the -skip_record parameter.

-noitems [optional]
This parameter disables the export of the ITEMS record.

-nolock [optional]
This parameter is used to specify that the source database is not locked. The default is to lock the
database with a shared lock (read-only).

-nomasks [optional]
This parameter disables the export of mask fields, causing the bit container fields to be exported as
ordinary signed integers instead. By default, mask fields are exported in record or field mode instead
of the mask’s bit container field; therefore, bit container fields never appear in exported data.

-nonmultidim [optional]
This parameter disables the export of 2- and 3-dimensional fields in the default export mode. By
default, multidimensional fields are exported following the ITEMS, hierarchical, and non-hierarchical
records. This parameter has meaning only when the default export mode is operating.

-nonames [optional]
This parameter switch inhibits the export of all record and field names. This parameter is only
meaningful when the default record format is used. If a record pattern is used, this parameter does
not inhibit the record name.

-nonulls [optional]
This parameter disables the export of Habitat null data as a null data value. By default, a Habitat null
data value (as tested by the HdbTestNull function) is exported as a null string; that is, the absence of a
value. If this parameter is specified, then no null data test is performed and the Habitat null data
value is exported as it is defined in binary. (Currently, the Habitat null data value is the byte value of
0x80 for each byte of the field data element.)

-nonhier_only [optional]
This parameter limits the export of records to non-hierarchical records only. ITEMS records are also
exported.

-noprefix [optional]
This parameter eliminates the field element line export prefix character (^) that precedes the field
name.

-noquotes [optional]
This parameter disables the quotes at the start and end of character string data. By default, all
character string data is exported as a quoted string. This includes all data exported as CHARACTER
data type, DATE data type, and TIME data type (date and time when -timedate_as_numbers
parameter is not in force).

-norecordnames [optional]
This parameter inhibits the export of all record names. This parameter is only meaningful when the
default record format is used. If a record pattern is used, this parameter does not inhibit the record
name.

-nosubscript [optional]
This parameter specifies that the subscript field in the default record format is to be deleted from the
exported data. By default, the default record format includes the record occurrence subscript value
for each record exported. This parameter is also used to inhibit the export of the subscript field of
each field line exported in field mode.

-pattern <filespec> [optional]


This parameter is used to specify a file specification that contains record patterns used to describe
the format of exported record data. The pattern file is used only for record export. When this parameter is
present, the pattern file contents are prepended to the output file so the latter can be used directly as
input for hdbimport.

-pattern_prefix <prefix> (default: # character) [optional]


This parameter specifies an alternative prefix character for the record pattern statements that are
generated when the -create_patterns command option is used — or that are interpreted by the
-pattern parameter switch option. The pattern prefix is a single character that must be specified in a
quoted string.

-quote <quotechar> (default: apostrophe (')) [optional]
This parameter specifies an alternative character string quotation character. Unless inhibited by the
specification of the -noquotes parameter switch, all character strings are enclosed in quotation
marks. The default quotation mark is the single quote mark ('), commonly called the apostrophe
character.
Another common choice is the double quote ("). The <quotechar> specification must be contained within a quoted
string. This parameter switch is useful if the data of a character string itself contains the default
single quote (') character. By specifying an alternate quote character, the quote character embedded
within the data is treated as a conventional character data value.

-range <range-specification> [optional]


This parameter specifies a record occurrence range by using a first and last subscript. Multiple ranges
for a single or multiple record names can be specified. Segmented ranges can be specified with
different subscript ranges for the same record name. This parameter switch can be used in record
mode or field mode. The -range parameter cannot be used in concert with the
-key parameter switch.

-record <list-of-records> [optional]


This parameter specifies, by record name, which records are to be included in the export. This
parameter automatically sets the record export mode, and only the records specified by name are
exported. This mode differs from default export operational mode in that multidimensional fields are
not included in the export.

-s <output-file> [optional]
This parameter specifies the name of the file generated by the export operation. By default, exported
data is sent to standard output, which can be redirected with the redirection symbol (>) or (>>) to
append to an existing file.
The following two command examples are identical in their results:
hdbexport -d scadamom -s rawdata.dat
hdbexport -d scadamom > rawdata.dat
If the exported data is to be appended to an existing file, then the -append parameter switch is used
in conjunction with the -s switch. You can also use the concatenation redirection symbol (>>) instead.
The following two command examples are identical in their results:
hdbexport -d scadamom -s rawdata.dat -append
hdbexport -d scadamom >> rawdata.dat

-separator <sep> (default: comma , character) [optional]


This parameter specifies an alternative record field separation character. The default separation
character is the comma (,). Almost any field separation character can be specified but not all
characters are prudent in the use of hdbexport. The <sep> specification must be a quoted string and
can be specified using the standard C language escapes such as (“\t”) for the tab character.
The tab character is the only acceptable whitespace character for use as a field separator. The “C”
escape notation for octal or hexadecimal numbers is not supported. The separator character cannot
be the (^) character (used for field mode prefix).

-skip_field <list-of-fields> [optional]


This parameter specifies by name fields that are to be skipped in the export. This parameter can be
used in all modes. It applies to fields within records and field elements in field lines. The list of fields is
specified using fully qualified names (all record type names included as suffix). Each field name is
separated from others by white space. Wildcards are not supported.

-skip_record <list-of-records> [optional]


This parameter specifies by name records to be skipped during export. This parameter can be used in
all modes where records are exported. However, it does not apply to field element line exports. The
list of records is specified where each name is separated by white space. Wildcards are not
supported.

-subtree [optional]
This parameter specifies that the hierarchical records (specified implicitly by the -key, -range,
or -record ) specification are to be exported along with their subtree of child records. If not specified,
then if hierarchical records are mentioned in any of the record selection parameters, they are treated
as non-hierarchical (no child) records.

-timedate_as_numbers [optional]
This parameter specifies that the time and date values (T*4/T*8 and D*2) are exported as integers
instead of as character strings. Using integers is much more efficient than using character strings for
exporting and importing data within the same Habitat environment where the same time/date
database is in use (so that interpretation of the numeric value is consistent).

-xml [optional]
This parameter specifies the output to be in XML format.

-xml_namespace <ns_name> [optional]


This parameter appends a prefix defined by <ns_name> to each exported XML tag. By default, no
namespace is assumed.

-xml_variant <var_name> [optional]


This parameter changes how multidimensional fields should be exported. If <var_name> is “field”,
then an XML <Field> tag is created for each multidimensional field, and each element in the field is
defined as an XML <Element> tag under the <Field> tag.
If <var_name> is “element”, then each element in the multidimensional field is displayed as an XML
<FieldElement> tag.
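
For example, a hypothetical XML export of the SCADAMOM database might look like this:
% hdbexport -d scadamom -xml -s scadamom.xml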

Examples
Refer to the hdbexport chapter.

See Also
hdbimport

hdbformat
The hdbformat utility generates the database subschema files. Subschema files are used by
applications that make calls to the Hdb API.
Subschema files are also called “INCLUDE” files (Fortran 90) or “header” files (C/C++ context).

Syntax
The form of the hdbformat utility command changes depending on the source of the database
definition.
Use the following table to determine the form of the command to use:

Table 17. hdbformat Command Syntax


Source       Command
DBDEF        hdbformat -s <input-files> <other-parameters>
Dictionary   hdbformat -n <dbname>.<title> <other-parameters>
             -or-
             hdbformat -n <dbname> -v <title> <other-parameters>
Clone        hdbformat -db <dbname> -a <appname> -f <famname> <other-parameters>
Archive      hdbformat -db <dbname> -file <archivefile> <other-parameters>

Parameters
-s <input-file> [required DBDEF]
The -s parameter specifies the DBDEF source files to be processed. Wildcards are supported. At least
one DBDEF source file is required and a full path name must be specified. DBDEF sources are parsed
and validated prior to use.

-n <dbname.title> [required Dictionary]


The -n parameter specifies that a cloning database dictionary-resident copy of the database schema
be used. The dictionary name is supplied using the database name and version title (<dbname.title>).
Wildcards are supported. However, only one version of a database should be formatted at a time,
because the subschema files are identified by the database name and not by the version title.
This method is faster than using DBDEF, because no parsing or validation takes place during
processing of this command. The dictionary copy has been previously validated. (Note that dictionary
copies are created using the hdbcloner -c load_schema command.)

-a <appname> [optional]
The -a parameter specifies the application name context of the source clone. This parameter is only
used with the -db parameter. If it is not specified, the application name context is derived from the
HABITAT_APPLICATION environment variable.

-cray [optional]
Generates Cray Pointer Fortran 90 source files for each partition in addition to the conventional
partition INCLUDE files.

-d <dirpath> [optional]
The -d parameter specifies an alternative directory path where the subschema files are to be created.
The current location is the default when not specified otherwise. The <dirpath> value must be stated
as a full file path specification.

-db <dbname> [required Clone or Archive]


Specifies the name of the database where the schema is extracted from the named clone or archive
file. If the -file parameter is specified, then the source is an archive file. If the -file parameter is not
specified, then the source is assumed to be clone-resident described by environment context or by
specification of the -a and -f parameters.
This method of generating the subschema is used when the clone is created using sizes supplied by
an MXDEF file. When a clone is constructed using this method, only the clone’s copy of the schema
knows the correct dimension and MX values.

-f <famname> [optional]
The -f parameter specifies the family name context of the source clone. The environment variable
HABITAT_FAMILY supplies the family name context if this parameter is not specified.
The -db parameter must be used with this parameter.

-file <filespec> [required Archive]


The -file parameter specifies that an archive file is the source for the database schema. The -db
parameter must be used with this parameter.

-h [optional]
Displays the online help for this command.

-l <langopts> [optional]
The -l (language) parameter specifies the target programming languages that will use the subschema
files. Subschema files are created for each specified language. Multiple languages can be specified.
The defaults are Fortran 90 and the C languages. Separate specified languages with a space.

Table 18. hdbformat Language Options


Option Language
f90 Specifies Fortran 90. Fortran 90 is the default, thus the parameter is not required.
C Specifies C/C++.
f77 Specifies that Fortran 77 compatibility is required. This is not a target language. Only
Fortran 90 is supported by Hdb. However, this parameter, when used with f90, indicates that
the INCLUDE files are to be incorporated into older Fortran 77-compatible programs. In
particular, when f77 is specified, the fixed-column format for the statements is used instead of
the free-form statement format of Fortran 90.

-v <title> [optional]
This parameter is used in combination with the -n <dbname> parameter to specify the database
version title when generating the subschema files.

-mxdef <mxdef-source> [optional]


Specifies the mxdef file to override the MX values in the database schema file (DBDEF) when
generating the C/Fortran source files. This option is valid only in conjunction with the -s option. This
option overrides the HABITAT_MXDEFDIR environment variable. For more information about using this
option, refer to the section “Database Resizing Using MXDEF File” in the Hdb Programmer’s Guide.

-nojointitle [optional]
If present, the generated files will not use the combined titles in the comments of the generated
C/Fortran source files. For more information about using this option, refer to the section “Database
Resizing Using MXDEF File” in the Hdb Programmer’s Guide.
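
Example
As a sketch, the following command formats the dictionary copy of the SCADAMOM database, version
PROJECT, for Fortran 90 and C, writing the subschema files to a hypothetical directory:
% hdbformat -n scadamom.project -l f90 c -d /tmp/subschema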

hdbimport
The hdbimport utility is used to import data into an Hdb database from one or more ASCII formatted
data files. The files can contain record lines, field lines, declaratives, and comments.

Syntax
hdbimport -s <input-files> -d <target_dbname> [parameters]
Files specified by the -s parameter are processed in the order specified. Parameters are described
below.

Parameters
-a <appname> [optional]
Specifies the application name context for the database. If not specified, the application name is
derived from the HABITAT_APPLICATION environment variable. Either -a or -app is accepted as input.

-atstart [optional]
Inserts records at the start of the existing records or, in the case of a hierarchical insert, at the start
of the existing siblings. This parameter can be used with key or positional subscripts; however, it is
most commonly used with keys. The default is to insert at the end of the existing records or siblings.
The atstart position applies only to the first hierarchical or non-hierarchical record of a given type;
once a record of a given type has been inserted, subsequent records are inserted at the current
position. The atstart parameter can also be specified with declaratives.

-backup [optional]
The target database is backed up to the standby node after a successful hdbimport operation
(backup is to a standby system in dual-computer configurations). Volatile partitions are not backed
up; otherwise, the entire database is backed up.
Backup only occurs if:
• No errors occur.
• The database is modified.
• The clone is marked to support replication.

-comment <char> [optional]


Sets an alternative comment character. The default is the pound (#) sign, which also introduces
declaratives to hdbimport. Any printable character is allowed except the caret (^) and the comma (,),
which are already reserved.

-count [optional]
Counts the number of records and fields in the input data file. Can be used with the -verify and the
-update parameters.

-d <target_dbname> [required]
Specifies the target database by name. Either -d or -db is accepted as input.

-entrychecks [optional]
Applies data entry checks during the verify operation according to the constraints defined for the
target database. If any data fails a check, the database is not modified and an error is reported. Entry
checks can also be specified using declaratives.

-f <famname> [optional]
Specifies the family name context for the database. If not specified, the family name is derived from
the HABITAT_FAMILY variable. Either -f or -fam is accepted as input.

-h [optional]
Displays the online help for this command.

-ignore_indirect_msg [optional]
Suppresses the indirect field modification warning message. If an indirect field is updated during a
partial update of a database (where the database is not initialized first), a warning message is issued
cautioning the user that this type of operation is not always correct.

-initialize [optional]
Initializes the target database before any of the input data files are processed. If not specified, the
database is not initialized.

-insertnodup [optional]
Inserts records only if they do not already exist in the database (no duplicates). This option allows the
user to always work with the same import data file, simply adding new records as needed;
hierarchical records must be added at their valid position. Records must have keys; for hierarchical
records, parent keys must be specified up to the level that guarantees uniqueness. When this option
is chosen, the -keys parameter is forced (see below). Insert no duplicates can also be specified using
declaratives.

-keys [optional]
Instructs hdbimport to use the keys supplied with each input data record to locate the proper parent
in insert mode, or to locate the current record in update mode. If not specified, positional
subscripting is used. The keys mode can also be specified using declaratives.

-lowercase [optional]
Character strings are converted to lowercase before being imported into the database. Otherwise,
data is imported as it appears in the imported data file. Can also be specified using declaratives.

-maxerrors <errorcount> [optional]


Sets the maximum number of errors allowed before the import operation is aborted. Data containing
errors is not imported. Typically, this parameter and the -verify parameter are used together. The
default is 100 errors. Any whole number is accepted as input.

-maxlinesz <linesize> [optional]


Sets the maximum line size of the input data file. Use this parameter to increase the maximum line
size, in case errors are reported to hdbimport indicating that the input data file line exceeded the size
of the internal read buffers. The default is set to 4095 bytes.
The typical SCADAMOM exported data file line exceeds 3000 bytes, but is rarely larger than the default.

-nofile [optional]
Specifies that there are no input data files to process, but that other command operations, such as
-initialize, may be executed. This parameter cannot be specified along with the -s parameter switch.

-overrideoid [optional]
Overrides the value of the OID in the database with the OID value of the imported file during an insert
or update operation. Only affects record operations. However, if an OID appears as a field in the
imported data file, the field is updated regardless of the setting of this switch. Can also be specified
using declaratives.

-report_linenumberonly [optional]
Limits error reporting to the line number of the offending input data line. Normally, each error report
includes a description of the error and the offending data line. If this switch is specified, the data line
text is not reported; only the line number itself is reported.

-report_truncateline [optional]
Limits error reporting of the offending line to 80 characters with no line wrapping. The
-report_linenumberonly parameter has precedence over this parameter.

-s <input_data_files> [required]
Specifies the name of the ASCII input file(s) to be imported into the Hdb database. Wildcards are
allowed for multiple file input. Multiple input files must be separated by white space. File names are
case-sensitive and must include the appropriate directory path specification.

-separator <“sep”> [optional]


Sets an alternative record field separator character. The default is the comma (,). Quotes must be
used to identify the desired character, including the standard C-language escapes, such as “\t” for the
tab character. Almost any character can be used, with the exceptions noted in the following note.
Note: The tab character is the only acceptable whitespace field separator. The C-language escape
notation for octal or hexadecimal numbers is not supported. In addition, the separator cannot be the
caret (^) character (used for field mode) or the same as the comment character (#).
The separator can be a declarative, and a single file can use more than one character as the
separator character. This can be accomplished by specifying the #separator declarative multiple
times.

-skipnulldata [optional]
Instructs hdbimport to skip fields during insert mode or update mode where the input data line does
not contain data. For example, a line with two contiguous field delimiters such as
...,3.4,,“XRAY”,... would normally interpret the field between the values 3.4 and “XRAY” as a Null to be
inserted into the database. The -skipnulldata option changes this behavior and instead causes
hdbimport to ignore the field. During insert mode, this has the effect of leaving the field containing
the FILL bytes, the normal result of an insert record operation. During update mode, this has the
effect of leaving the field as is; it is not changed.

-update [optional]
Updates existing records in the database. Records are located by subscript or by key value if the
-keys parameter is used. This parameter only affects record operations; field import is always in
update mode. Update mode can also be specified using declaratives.

-uppercase [optional]
Character strings are converted to uppercase before being imported into the database. Otherwise,
data is imported as it appears in the imported data file. Can also be specified using declaratives.

-verbose [optional]
Turns on detailed error reporting. Can be used with regular import operations, or with the -verify
parameter. This parameter can result in large reports, depending on the size of the database. Can
also be specified using declaratives.

-verify [optional]
Verifies the validity of the input data files according to the schema defined by the target database.
The database is not modified when this parameter is used.
Most other parameters are accepted as descriptive of the intended import operation, which may
affect the validation.
Note: If verify mode is used, data entry checking is not performed, since data entry checking is
dependent on database record position, which is not correctly established for verify mode.
In verify mode, the following declaratives are ignored: #insert, #insertnodup, #update.

Examples
Refer to the hdbexport chapter.

See Also
hdbexport

hdbimport Declarative Modifiers
Declaratives are case-sensitive. Declarative descriptions follow.
More-complex declarative statements, requiring more detailed explanation, follow this section.

Timedate Format String Syntax


Not implemented. The default timedate format specifier is %d-%b-%Y%#R%H:%M:%S
An example of a string in this format is “01-Jan-2000 10:20:30”.
For a description of the timedate format specifier, refer to the Habitat Timedate User’s Guide.

Record Format String Syntax


The #record declarative defines the name and order of fields as they appear in the input data file. The
syntax is:
#record <recname>, <field1>, <field2>, <field3>, ...
where <recname> is the name of the record and each <field> is the name of a field.
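For illustration, a hypothetical #record declarative (the record and field names are examples only)
might look like:
#record substn, id_substn, vl_substn
Subsequent substn record lines in the data file would then supply values for id_substn and
vl_substn, in that order.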

Boolean Format String Syntax


The boolean declarative defines a true/false keyword pair that can be used to replace the defaults
used by hdbimport. The syntax is:
#boolean <true>/<false>
where <true> represents the TRUE keyword and <false> represents the FALSE keyword.
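For example, a data file that represents logical values as YES and NO could declare:
#boolean YES/NO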

Declaratives
#insert #insertnodup #update
Specifies that either insert, insert no duplicates, or update operations are to be used by hdbimport.
Once an operation is selected, it remains in effect until it is changed. Declarations are in effect until
the end of the current input file only and will not carry through to the next data file.
For #update, see sections 9.8.7 Subscript Update for Records and 9.8.8 Subscript Update for Fields.
Note that those declaratives are ignored when the -verify parameter is present.

#atstart
Places the first visit record at the start (#atstart) of the existing records of that type, or of the siblings
of the designated parent record. For more information about this declarative, see section 9.8
Positioning a Record for Insert and Update.

#atend
Places the first visit record at the end (#atend) of the existing records of that type, or of the siblings of
the designated parent record. For more information about this declarative, see section 9.8 Positioning
a Record for Insert and Update.

#keys #nokeys
Keys are used to locate the record insertion position for update or insert operations. With subscripts,
record positions are updated as they occur in the input file. The #nokeys and #subscripts
declaratives reverse the effect of #keys. For more information about this declarative, see
section 9.8 Positioning a Record for Insert and Update. The declarative #nokeys is ignored when in
insert no duplicates mode.

#subscripts
Subscripts are to be used to locate the record insertion position for update or insert operations. The
declarative #subscripts is ignored when in insert no duplicates mode.

#entrychecks #noentrychecks
Database-defined data entry checks are to be performed on field data. Data entry checking is turned
off with #noentrychecks.

#verbose #noverbose
A detailed update report is to be produced. Large databases result in very large reports. #noverbose
cancels the #verbose declarative.

#uppercase #nocase
Changes character string data to uppercase before performing data entry checks (if selected) and
prior to storing the data in the database. If uppercase is not specified, data is read as it appears in the
input data file. #nocase turns off uppercase.

#lowercase #nocase
Changes character string data to lowercase before performing data entry checks (if selected) and
prior to storing the data in the database. If lowercase is not specified, data is read as it appears in the
input data file. #nocase turns off lowercase.

#overrideoid #nooverrideoid
The OID value in the input file is overridden by the value obtained from the insertion of a new record
in insert mode. In update mode, the existing database OID value is overwritten by the value contained
in the input file. #nooverrideoid counters #overrideoid.

#separator “c”
Designates the character “c” as a field separator. The separator is used to delimit fields in the record
statements of the input data file. The separator character is used on both the default record
statement and on the record statements defined by #record.
However, fields that appear on the #record declarative are not affected by this separator. To revert to
the default separator, declare #separator without a value.
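For example, a data file could switch to a semicolon separator and later revert to the default comma:
#separator ";"
#separator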

#comment “c”
Designates the character “c” as the comment character instead of the pound (#) sign. Designation is
enforced from the point of entry in the file until it is changed.
The following example illustrates this point:
#comment "?"
? this is a comment line
? and this
?comment
# this is a comment line

#timedate “date-time-format-string”
Not implemented.

#record <recname>, <field>, <field>, ...


Defines the format of a record statement for the input data file (for more information about the use of
this declarative, see section 9.7.1 Details About the #record Declarative).

Refer to “Record Format String Syntax” above for syntax descriptions.

#skipnulldata #noskipnulldata
Directs that null data values in the input data file are to be ignored. The result is that no changes
occur to the corresponding database field. By default, hdbimport interprets a null data item as a
Habitat null value and inserts this value into the associated field in the database. The #noskipnulldata
declarative can be used to reverse the effect of the -skipnulldata command-line parameter.

#boolean <true>/<false>
Defines a boolean TRUE and FALSE string keyword used to represent true and false values. For more
information about the use of this declarative, see section 9.7.2 Details About the #boolean
Declarative.

Examples
The example use of these declaratives is shown in chapter 9 hdbimport. Specific examples for some
of the declaratives can be found in this list:
• #boolean - 9.7.2 Details About the #boolean Declarative
• #keys - 9.9.1 Inserting with the #keys Declarative, 9.9.2 Updating with the #keys Declarative
• #insert - 7.5.8 Use Declaratives, 9.9.1 Inserting with the #keys Declarative
• #record - 9.7.1 Details About the #record Declarative, 9.9.1 Inserting with the #keys Declarative,
9.9.2 Updating with the #keys Declarative
• #update - 7.5.8 Use Declaratives, 9.8.7 Subscript Update for Records, 9.9.1 Inserting with the
#keys Declarative, 9.9.2 Updating with the #keys Declarative

hdbrio
Initiates the execution of hdbrio and opens the Hdb database dbname if specified. If neither the -i nor
the -c option is used, the hdbrio rio> prompt is displayed, which enables interactive mode.
Hdbrio creates an audit trail log of its invocations in a file with the name “HDBRIO_<YYYYMMDD>.log”
in the default HABITAT_LOGDIR directory. The creation of the log may be disabled using the
HABITAT_HDBRIO_DISABLE_LOGGING environment variable or redirected to another directory using
the HABITAT_HDBRIO_AUDITDIR environment variable.

Syntax
hdbrio -a <appname> -f <famname> <dbname> (Access Clone)
hdbrio -archive <arc file> <dbname> (Access Archive/Savecase)
hdbrio -i <rio script file>
hdbrio -c <rio command>

Argument
dbname
Specifies the name of the Hdb database.

Options
-a <appname>
Specifies the Hdb application. If not specified, then the HABITAT_APPLICATION environment variable is
used.

-archive <arc file>


Specifies the name of the savecase file or archive file generated by hdbcopydata.

-c <rio command>
Executes the hdbrio command (typically within quotes) and exits hdbrio. The database can be
specified with “ -a <appname> -f <famname> <dbname> ” on the command line. If
HABITAT_APPLICATION or HABITAT_FAMILY is already set, then -a and -f can be omitted. You can also
embed a DBOPEN statement in <rio command> to designate the database to open.

-f <famname>
Specifies the Hdb family name. If not specified, then the HABITAT_FAMILY environment variable is
used.

-i <rio script file>


Specifies the name of the hdbrio script file containing rio commands. Typically, the script file will
contain a DBOPEN statement, which specifies which database to use.

Examples
Access a clone with hdbrio interactively
% context scada ems
% hdbrio scadamom
[SCADAMOM.SCADA.EMS]
rio>

Access an archive with hdbrio interactively
% hdbrio -archive scada.arc scadamom
[scada.arc:SCADAMOM]
$rio>

Use a rio script file


[Sample script file]
//
// Filename: insert_substn.rio
//
dbopen -a scada -f ems -d scadamom
pos SUBSTN=*
insert
/id=new_sub

[Running hdbrio using the script file]


% hdbrio -i insert_substn.rio

Execute rio statements from the command line


% hdbrio -c "dbopen -a scada -f ems -d scadamom; pos SUBSTN=*; insert ;
/id=new_sub"
Or:
% context scadamom ems
% hdbrio -c "pos SUBSTN=*; insert ; /id=new_sub" scadamom
Or:
% hdbrio -c "pos SUBSTN=*; insert ; /id=new_sub"
-a scada -f ems scadamom
The three examples above perform the same operation as what is specified in the “Use a rio script
file” example.

See Also
hdbrio command: dbopen

hdbrio command: backup
The backup command manually forces data to be replicated to the standby node on a replicated
clone with MRS running.

Syntax
rio> backup

Options
None.

hdbrio command: checkpoint
The checkpoint command manually forces checkpointing of all dirty data of the current database
back to the disk file.

Syntax
rio> checkpoint

Options
None.

hdbrio command: dbcopy
The dbcopy command marks records for copy to some destination within the same database or to
another database. By default, if no option or argument is specified, the record at the current position
(not including its child subtree) is marked for copy.
After the records are marked, use the dbpaste command to paste records to the desired destination
location. To specify the desired destination, use the commands position, dbopen, and dbset.

Syntax
rio> dbcopy [options] [number of records]

Argument
number of records [optional]
The number of records starting at the current position to be marked for copy. If not specified, the
default of one record at the current position is marked for copy.

Options
-a
Insert before the first sibling record. When the dbpaste command is issued at the parent position
(instead of at a valid record position of the marked record), this option tells hdbrio to insert the
records before all siblings under this particular parent.

-d
Data fill destination. For multidimensional fields, if the destination dimension is greater than the
source, the default action is to leave the field data beyond the source LV alone. Specifying this option
will cause those destination field elements beyond the source LV to be populated with the FILL byte
values.

-f
Skip copying of FREE records from the source to the destination.

-i
Ignore conversion errors. For example, when a field is changed from C* in the source to I* in the
destination, Hdb cannot convert from a string field to an integer value, so normally the copy
operation is terminated at this point. Specifying this option forces Hdb to ignore this kind of
conversion error to continue on with the copy operation.

-l
Insert after the last sibling record. When the dbpaste command is issued at the parent position
(instead of at a valid record position of the marked record), this option tells hdbrio to insert the
records after all siblings under this particular parent.

-m
Skip copying of multidimensional fields from the source to the destination.

-o
Copy the OID field from the source to the destination. By default, new OIDs are generated when
records are inserted into the destination, and it is not necessary to copy the OID from the source. For
a specific application, you might want to preserve the OID values from the source by specifying this
option.

-s
Mark the child subtree for the copy.
After the command is issued, the number of records for each record type at the current position will
be displayed.

-z
If specified, all indirect pointers are set to zero in the destination during the paste operation.
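
Example
A hypothetical copy-and-paste session between two clones (the application, family, database, and
record names are illustrative only):
rio> dbcopy -s
rio> dbopen -a scada -f dts -d scadamom
rio> position SUBSTN=*
rio> dbpaste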

See Also
hdbrio command: dbopen
hdbrio command: dbclose
hdbrio command: dbpaste
hdbrio command: position

hdbrio command: dbclose
The dbclose command closes a database. The database to be closed is identified by a number in the
list of all open databases. The database number and its corresponding database can be found using
the dbset command (with no argument) or the “list -d” command.
The dbclose command cannot close the database that the user is currently interacting with. To close
this database, the user should switch to another database first using the dbset command, or exit
hdbrio.

Syntax
rio> dbclose <database number>

Argument
database number [required]
A number corresponding to a specific database that will be closed.

Options
None.

See Also
hdbrio command: dbopen
hdbrio command: dbpaste
hdbrio command: list

hdbrio command: dbopen
The dbopen command opens a database from a clone or an archive file. A clone database is specified
by arguments: database, application, and family. An archive file database is specified by arguments:
database and archive file.
If the dbopen command is successful, the hdbrio prompt is switched to the newly opened database
and all hdbrio commands will be directed to this database, until the user decides to switch to another
database using the dbset command or to exit hdbrio.
Dbopen adds an entry to the HDBRIO audit trail log each time it is invoked.

Syntax
rio> dbopen -a <application> -f <family> -d <database> (open clone)
rio> dbopen -r <archive file> -d <database> (open archive)

Arguments
-a <application>
Specifies the application of the clone. Required when opening a clone database.

-f <family>
Specifies the family of the clone. Required when opening a clone database.

-r <archive file>
Specifies the filename of the archive file. Required when opening an archive file database.

-d <database>
Specifies the database name. Required for opening either a clone or an archive file database.

Options
None.

See Also
hdbrio command: dbclose
hdbrio command: dbset

hdbrio command: dbpaste
The dbpaste command pastes the previously marked records after the current record position.
Records are marked with the dbcopy command.
When pasting a subtree, if the MX of any record type will be exceeded in the destination database, an
error message is displayed to indicate which record type in the destination database needs to be
resized.

Syntax
rio> dbpaste [options]

Arguments
None.

Options
-k [optional]
Keeps the source information from dbcopy so the dbpaste command can be issued again with the
same source. If not specified, the source information for the copy is cleared, and issuing a dbpaste
command at this time will result in an error indicating that there is no source information initialized
for the copy.

See Also
hdbrio command: dbcopy
hdbrio command: dbset

hdbrio command: dbset
The dbset command sets the open database for command interaction. If no argument is given, this
command lists all the currently opened databases. Each open database has a number associated
with it. The dbset command uses this number to switch to the database corresponding to it.

Syntax
rio> dbset [database number]

Argument
database number [optional]
Specifies the database to switch to. If not specified, all of the currently opened databases are listed.

Options
None.

See Also
hdbrio command: dbopen
hdbrio command: dbclose

hdbrio command: delete
The delete command removes one or more records (and their subtrees if the records are hierarchical). If
more than one record is to be deleted, then records are deleted starting with the current record. The
default is to delete the current record. Record position shifts to the record that preceded the deleted
record.

Syntax
rio> delete [options]

Options
-n number
The number of records to delete.

-y
Represents “Yes”. Confirms that hdbrio is to delete the selected records from the database. Once the
delete command is successfully issued, the following message appears:
Deleting x records (including all subtrees) under x. Delete records [Y/N]?

hdbrio command: down
The down command enables navigation down a hierarchical record’s subtree. The down command
defines the current record as an anchor record and positions down the subtree under the anchor
record. The newly positioned record becomes the current record, and the prompt reflects both the
anchor and the current record.

Syntax
rio> down [number]

Argument
number
Specifies the number of records to go down the subtree. If the number is an asterisk (“*”), then hdbrio
positions to the last record in the subtree under the anchor record.

Options
None.

Example
In this example, the substn record is the anchor record and the device record is the current record
beneath the anchor record:
substn(2). . .device(4)>

hdbrio command: echo
The echo command controls the echoing of commands, the prompt, and output.
In interactive mode, the default is to echo the rio> prompt but not the commands. Output is sent to
standard output.
In batch mode (i.e., when the -i option is specified), the default is to echo the commands but not the
prompt. Output is sent to standard output.

Syntax
rio> echo [options]

Options
-c ON|OFF
Turns on or off command echo. In batch mode, the default is ON. In interactive mode, the default is
OFF.

-f <fname>
Echoes the commands (only) to a file specified by fname. Useful in creating hdbrio scripts.

-o <fname>
Echoes the output of displaying records or fields to a file fname. Useful in creating benchmarks.

-p ON|OFF
Turns on or off the prompt. The default is ON in interactive mode and OFF in batch mode.
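
Example
Capture the commands from an interactive session into a script file (the file name is illustrative):
rio> echo -f mysession.rio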

hdbrio command: exit
The exit command exits or quits hdbrio.

Syntax
rio> exit

Options
None.

See Also
hdbrio command: quit

hdbrio command: find
The find command positions to a record based on a key field value or a composite key value. The
command searches by key fields to locate a specific record; after the search, the current record
becomes the first record that satisfies the key constraint.

Syntax
rio> find key=value [key1=value1,key2=value2,. . .]

Arguments
key
Key field.

value
Key field value of a particular record.

Options
None.
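
Example
A hypothetical search by key (the field name and value are illustrative):
rio> find id_substn=NORTH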

hdbrio command: help
The help command provides a command overview and syntax for specific commands. By default (i.e.,
without an argument), help displays a brief summary of all hdbrio commands and the general syntax
for each command.

Syntax
rio> help [command]

Argument
command
Enter the name of the command to obtain specific information about that command.

Options
None.

hdbrio command: insert
The insert command inserts a record. The default is to insert one record of the current record type
after the current record position. The inserted record becomes the new record position. If more than
one record is inserted, then the current record is the first record inserted.

Syntax
rio> insert [options] [record]

Argument
record
Inserts an Hdb record type record.

Options
-n <number>
Specifies the number of records to insert.

-l
Inserts a child as last child in current subtree.

-r
Inserts a FREE record at the root level.
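
Example
A hypothetical sequence (the record name is illustrative) that inserts three device records after the
current position:
rio> position device=2
rio> insert -n 3 device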

hdbrio command: list
The list command displays Hdb database and record information. If no options are specified, the
default is to list record names in the database, and to indicate their LV and MX values and record type
(e.g., hierarchical).

Syntax
rio> list [options]

Options
-f field
Lists field attributes of the named field. The attributes include type, dimension, size, fill byte, pointer,
and key. The syntax of field is a fully qualified field name (e.g., id_devtyp).

-r record
Lists record attributes of the named record. The attributes include timestamp, parent status, LV, MX,
and record type (e.g., circular).

-p [part]
Lists the attributes of the named partition component. The attributes include timestamps. If no
component name is given, then hdbrio lists all of the partitions in the database.

-s
Lists the definition and timestamps of the currently accessed database.

-h
Lists the record hierarchy.

-m
Lists the multidimensional fields found in the database.

-d
Lists all the currently opened databases. Same as the dbset command without any argument.

hdbrio command: position
The position command alters the current record position within the database to the record position
specified by the record and subscript arguments. If a subscript is not given, hdbrio positions to the
first record.

Syntax
rio> position record[=subscript]

Arguments
record
Specifies the record type.

subscript
Specifies the record’s position within a record structure.

Options
None.
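
Example
Position to the fourth substn record (the record name is illustrative):
rio> position substn=4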

hdbrio command: quit
The quit command exits hdbrio.

Syntax
rio> quit


Options
None.

See Also
hdbrio command: exit

hdbrio command: read
The read command scans and displays a record’s content. By default, the current record is scanned
and displayed. If record and subscript arguments are provided, then the specified record is scanned
and displayed. The current record position is unchanged.

Syntax
rio> read [record=subscript]

Arguments
record
Indicates the record to display.

subscript
Specifies the record’s position within a record structure.

Options
-p
Displays pseudo fields.

-a
Displays all fields.

-h
Displays numbers in hexadecimal format.

-d
Displays numbers in decimal format.

-o
Displays numbers in octal format.

hdbrio command: reset
The reset command repairs invalid pointers in the database.

Syntax
rio> reset

Arguments
None.

Options
None.

hdbrio command: setstamp
The setstamp command sets the time stamp for an Hdb object.

Syntax
rio> setstamp -d (Set database timestamp)
rio> setstamp -p <partition> <type> (Set partition timestamp)
rio> setstamp -r <record> (Set record timestamp)
rio> setstamp -f <field> <type> (Set partition stamp using field)

Arguments
-d
Sets the database timestamp. Use “list -s” to view the database timestamp.

-p <partition> <type>
Sets the partition timestamp. The partition argument is the name of the partition. The type is one of
ARCHIVE, BACKUP, ENTRY, UPDATE, or WRITE. Use “list -p <partition>” to view the partition timestamp.

-r <record>
Sets the record timestamp. The record argument specifies the record name. Use “list -r <record>” to
view the record timestamp.

-f <field> <type>
Sets the partition timestamp using the field. The field argument identifies the field whose partition’s
timestamp is to be changed. The type argument is one of ARCHIVE, BACKUP, ENTRY, UPDATE, or
WRITE. Use “list -f <field>” to view the partition and partition timestamp for a given field.
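
Example
Set the UPDATE timestamp of a hypothetical partition (the partition name is illustrative):
rio> setstamp -p scadadata UPDATE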

hdbrio command: up
The up command enables navigation up a hierarchical record subtree until the record’s anchor
record is encountered.

Syntax
rio> up [number]

Argument
number
Specifies the number of records to go up the subtree. If an asterisk (“*”) is entered in the number field,
then hdbrio is positioned directly to the anchor record.

Options
None.
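
Example
Position directly back to the anchor record:
rio> up *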

hdbrio command: verify
The verify command validates parent, child, and indirect pointer field values in the database.
Command options allow the user to verify only parent pointers or only indirect pointers. When no
database object is specified on the command line, the default verification is performed on all
database records and fields.

Syntax
rio> verify options [dbobject]

Argument
dbobject [optional]
The database object to perform verify on. If no database is specified, all objects selected by the option
value are verified. The dbobject can be a record type, a parent/child pointer field, or an indirect
pointer field, depending on the selected option.

Options
-a
Verifies all pointer fields: parent, child, and indirect pointers. The dbobject can be a record type, a
parent pointer, a child pointer, or an indirect pointer field.

-i
Verifies only indirect pointer fields.

-p
Verifies parent pointer fields. The dbobject can be a child record type or a parent pointer field.

hdbrio command: where
The where command helps identify the current position of hdbrio. If the current record is
hierarchical, hdbrio can display all of the parents above or all of the children below the current
record. By default, all of the parents (and their subscripts) above the current record are output.

Syntax
rio> where [options]

Options
-s
Lists the children, or the subtree below the current record position.

-d [number]
Optionally used with -s to limit the number of sublevels shown.

-f
Lists the FREE record subtree below the root.

-c
Lists the number of records for each record type in the subtree.

hdbrio command: zero
The zero command initializes (zeros out) the database by deleting all records in the database.
Confirmation must be entered before the records are deleted.

Syntax
rio> zero [options]

Options
-y
Represents “Yes”. Confirms that hdbrio is to zero (delete) all records in the database. Enter “Y” (or “N”
for no) when the following confirmation is displayed:
Initializing Entire database database_app_family. Do you want to continue
[Y/N]?

hdbrio command: +
The + (plus) command increments the current record by the number specified in the command. If no
number is entered, hdbrio increments to the next record. For hierarchical records, if an asterisk (“*”) is
entered, then the subscript is incremented to the last record in the subtree defined by hdbrio’s anchor
record. For non-hierarchical records, if an asterisk (*) is entered, then the subscript is incremented to
the LV value of that record.

Syntax
rio> + [number]

Argument
number
Specifies the number of records to increment.

Options
None.

hdbrio command: –
The - (minus) command decrements the current record by the specified number. If no number is
entered, hdbrio decrements to the previous record. For hierarchical records, if an asterisk (“*”) is
entered, then the subscript is decremented to the first record in the subtree defined by hdbrio’s
anchor record. For non-hierarchical records, if an asterisk (*) is entered, then the subscript is
decremented to the first record.

Syntax
rio> - [number]

Argument
number
Specifies the number of records to decrement.

Options
None.

hdbrio command: /
The slash (/) command is used to display and/or edit record fields. The field name must be specified.
The field display command shows the contents of a field within either an instance of a record or all
instances of the record. The field edit command changes the contents of a field within either an
instance of a record or all instances of the record. To set a field to null, assign the field the value
<null>.
If no field is specified, the default is to display the current record’s fields. The contents of a field can
be displayed or, if the field is fully qualified with a record number (e.g., id_devtyp(22)), the contents of
the field in that instance of the record are displayed (and the current record position does not
change).
The options control the input format for field edits and the output format for field displays. The
default input and output format is determined by the field type.

Syntax for Displaying Fields


rio> / [option] [field]

Syntax for Editing Fields


rio> / [option] field = value

Arguments
field
Identifies the field within a record.

value
Identifies a value for the field.
<null> is the value to set a field to null.

Options
To control the appearance of the output, select one of the following options:

-a
Displays all fields, including pointer fields and pseudo fields.

-c
Performs a constraint check on data entry if field constraint is defined in the database definition for
this field.

-d
Displays numbers in decimal format.

-h
Displays numbers in hexadecimal format.

-o
Displays numbers in octal format.

Proprietary – See Copyright Page 209 Hdb Utilities Reference


Hdb User’s Guide
-p
Displays all the pseudo fields associated with the current record occurrence.
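
Examples
A hypothetical display and edit of a field (the field name and value are illustrative):
rio> /id_substn
rio> /id_substn = NORTH
rio> /id_substn = <null>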

hdbserver
The hdbserver is the database server that maintains the online, memory-resident cloning database. If
the server is not running, clone data cannot be accessed by the Hdb API, although archive and
savecase files can still be accessed by Hdb utilities.
The server is typically configured to start when the system is booted. However, it can be manually
started and stopped for maintenance or development work.
The hdbserver requires a specific operational environment before it can be started and operated. For
more information about hdbserver environment requirements, refer to the section “Environment
Variables” below.

Syntax
hdbserver -mxclones <mxclones> <other-parameters>

Environment Variables
HABITAT_GROUP
The HABITAT_GROUP variable must be set to the name of the HABITAT group.
HABITAT_CDBROOT
The HABITAT_CDBROOT variable must be set to the root directory of the HABITAT group.
HABITAT_MXCLONES
The HABITAT_MXCLONES variable is another way to set the maximum number of clones allowed
online. This variable can be overridden by the -mxclones command-line option.
HABITAT_SERVER_HANGUP
Optional, Linux only: The valid values are Y to abort when a SIGHUP is received, or N to ignore the
SIGHUP signal. The default is N.

On-Disk Directory Structure


$HABITAT_CDBROOT/cloning_database/clone_directory.cdr
This directory file contains the names of clone files known to the HABITAT group. This directory can be
reconstructed using the hdbdirectory utility if it is damaged or lost.

Parameters
-hangup [optional]
The -hangup parameter specifies handling of the hangup signal (SIGHUP) by the hdbserver process. If
-hangup parameter is specified, then hdbserver aborts when it receives the SIGHUP signal. This
feature is used in Linux only. The default setting is -nohangup.

-nohangup [optional]
The -nohangup parameter specifies handling of the hangup signal (SIGHUP) by the hdbserver
process. If -nohangup is specified, then hdbserver ignores the SIGHUP signal. This feature is used in
Linux only. The default setting is -nohangup.

-mxclones <mxclones> [optional]


This parameter specifies the maximum number of clones allowed for the online, memory-resident
database. The value assigned to this parameter sizes the memory and page file space required for the
cloning database. The default is 150.

The value of <mxclones> (maximum clones) must be specified as an unsigned integer with a value
of 50 or greater, which is the minimum value allowed.
The size of the memory-resident cloning database varies with a number of factors. However, its size
can be approximated based on the value specified for <mxclones>. To estimate memory
requirements, use the following table:

Table 19. Memory Sizing Requirements


Maximum Clones   Memory Requirements   Disk and Page File Space
150              4.7 MB                5.0 MB
500              15.3 MB               15.5 MB
1000             31 MB                 32 MB
5000             160 MB                165 MB

Disk and page file space have individual requirements. A locking file is created that mirrors the
memory layout and is therefore roughly the same size as the memory section.
The page file space is required because it is used as the backing store for the memory section.
Note: The page file is simulated on systems where page file backing store file mapping is not
supported. On these systems, the page file space requirement merely applies to the disk space
needed for the backing store file created to simulate page file mapping.
Linux: In Linux, when using the -mxclones command-line option or the HABITAT_MXCLONES
environment variable to increase the maximum number of clones that can be placed online, the user
will need to shut down Habitat and remove the backing store file manually. This is due to the way
OSAL shared memory is implemented, where resizing the shared memory section does not
automatically adjust the size of the backing store file. Deleting the backing store file forces OSAL to
regenerate the backing store file. The backing store file is located in $HABITAT_DATAROOT/osalipc
(default) and the format of the filename is osm_<grp>_mcdb_<grp>, where <grp> is the
HABITAT_GROUP where the hdbserver is running.
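For illustration, a manual startup with an increased clone limit and a log file (the values and file name
are examples only) might look like:
hdbserver -mxclones 500 -output server_log_file.txt -verbose 1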

-noincrease [optional, debug usage only]


This parameter starts the server, but does not create and initialize the online, memory-resident
cloning database. This parameter is meant to be used as a debug, system analysis, and recovery tool
when the server aborts, but the database is still intact in memory. This parameter does not guarantee
recovery of the database if the database is corrupted.

-sleep <sleepinterval> [optional]


This parameter specifies the server’s sleep time interval (in milliseconds). The sleep time interval is the
time the server pauses before checking queues for work. Typically, the sleep interval is between 1000
and 2000. The default is set to 1321 milliseconds. Performance becomes sluggish if the interval is set
above 2000.
On Linux systems, the interval timer has a resolution of one (1) second. Intervals are rounded to the
next one (1) second, with one (1) second being the minimum interval allowed (zero is not valid).

-output <filespec> [optional]


This parameter specifies <filespec> as the output log file where all normal, verbose, and debug
output is sent. The default is standard output.

You can redirect output to a specified file using the redirection operator on the command, as in the
following example:
hdbserver -verbose 3 > server_log_file.txt

-append [optional]
This parameter appends output to a specific file, if it exists. If the file does not exist, then it is created.
You can append and redirect output to a file using the append form of the redirection operator, as in
the following example:
hdbserver -verbose 3 >> server_log_file.txt

-verbose <v1> [optional]


This parameter specifies the level of detail for reports when the server is running in debug mode. The
level can be set from 0 (zero) to 10. Zero means that no detail is wanted unless an error condition
occurs. Level 10 means that detailed, trace information for debugging is wanted. Output is to
standard output devices.
Level one (1) provides basic clone information. Level two (2) includes database information, and level
three (3) includes partition information.

-debug [optional]
This parameter sets the server in debug mode. Debug mode sets wait events to use minimum timeout
periods during the main processing loop, so that the interactive debugger can be used to debug and
analyze the server.

Index

A
Alignment, segment page, 17
Application CLS. see Schema, application schema
Archive file, accessing, 96
Archive files, 125
Archive Stamp, 27

C
Clone
   clone archive file, 5
   context, 5
   converting to an archive file, 20
   creating, 5, 6, 7, 15
   directory
      administration, 120
      concurrent access, 124
      correcting, 122
      creating, 120
      creating outside of HABITAT group, 122
      description, 123
      file, 128
      format, 123
      updating, 121, 122
   file
      administration of, 116
      conversion to archive, 117
      description, 125
      moving to HABITAT group, 117
      scanning, 124
   locking file, 128
   moving, 7
   removing, 7, 16
   replacing, 6, 16
   replication, 18
   updating, 7
Cloning Database (CDB) root directory, 129
CLS
   application. see Schema, application schema
   clone. see Schema, clone schema
   source file, 2
Comment lines, 72
Container Specifications, 22
Copy
   compatible database, 28
   databases with same definition, 27
   field, 30
   field algorithm, 31
   incompatible database, 29
   partition, 29
   records, 99
Copy Context, 25
Core Data Files, 128

D
Data management, 7
Data types of exported values, 50
Database Version Titles, 19
Datafill, 38
DBD. see Schema, database schema
DBDEF source file, 2
Declaratives
   #boolean, 76, 81
   #insert, 59, 60
   #keys, 87, 89
   #record, 72, 73, 80
   #update, 59, 60, 86
   definition, 70
   summary of commands, 79
   syntax, 79
Deleting records, 101

E
Environment Variables, 10
   HABITAT_APPLICATION, 11
   HABITAT_CASE_COMPRESSION, 11
   HABITAT_CASES, 11
   HABITAT_CDBROOT, 10
   HABITAT_CLONES, 10
   HABITAT_DEBUG_CHECKPOINT, 13
   HABITAT_DEBUG_DBLOCKS, 13
   HABITAT_DEBUG_HDBMRS, 13
   HABITAT_DEBUG_TXN, 13
   HABITAT_DEFAULT_VERSION, 12, 19
   HABITAT_DISABLE_BACKUP, 12
   HABITAT_DISABLE_CHECKPOINT, 12
   HABITAT_DISABLE_DBLOCKS, 12
   HABITAT_FAMILY, 11
   HABITAT_FULL_FIELD_REPLICATE, 12
   HABITAT_GROUP, 10
   HABITAT_HDBEXPORT_AUDITDIR, 13
   HABITAT_HDBEXPORT_DISABLE_LOGGING, 13
   HABITAT_HDBRIO_AUDITDIR, 13
   HABITAT_HDBRIO_DISABLE_LOGGING, 13
   HABITAT_MXCLONES, 11
   HABITAT_MXDEFDIR, 13
   HABITAT_MXTITLE, 14
   HABITAT_NOJOINTITLE, 14
   HABITAT_SERVER_HANGUP, 12, 211
   HDBMONITOR_STARTUP_TIMEOUT, 13
   HDBMONITOR_TIMEOUT, 13
   HDBRIO_DISABLE_COMMAND_LOGGING, 14
   HDBRIO_HAVE_COPY_CMD, 14

F
Field Line, 74
Field Value, data conversion rule, 77
File
   archive, 125
   subschema
      C/C++, 63
      file management, 63
      subschema, 62
   types of
      clone, 125
      core data, 128
      DNA, 127
      savecase, 126
File types, recognized by Hdb, 125
Format
   data output, 49
   record, 49
Fortran 90, 4, 15
Fortran INCLUDE statement, 4
FREE record, 99

H
Hdb
   database
      cloning, 129
   dictionary
      schema, 130
   directory
      default, 130
      savecase, 130
   files, on-disk structure of, 124
hdbcloner
   converting a clone to an archive file, 20
   creating clones, 15
   database version titles, 19
   import clone from another HABITAT Group, 19
   loading schema, 15
   moving clones within a HABITAT Group, 20
   removing clones, 16
   removing schema, 16
   replacing clones, 16
   replication, 18
hdbcloner utility, 15
hdbcopydata, 22
   container specification, 22
   copy
      compatible definition database, 28
      context, 25
      field, 30
      field algorithm, 31
      incompatible database definition, 29
      partition, 29
      same definition database, 27
   datafill, 38
   selcop, 33
   stamps, 26
      archive, 27
      database, 27
      record time database, 27
   truncation, 38
hdbdocument, 42
   document layout, 42
hdbdump, 43, 44
hdbexport, 46
   data types of exported values, 50
   display, special field, 50
   format
      data output, 49
      record, 49
   functional overview, 46
   modes
      default, 48
      field, 48
      operational, 47
      pattern, 49
      record, 48
   overview, 46
hdbformat, 62
   C/C++ subschema files, 63
   Fortran 90 subschema files, 62
   subschema files, 62
hdbimport, 66
   comment lines, 72
   data
      boolean, 76
      calendar date and timedate, 76
      character, 75
      field value conversion rule, 77
      keyword, 76
      null rules, 78
      numeric, 75
   declaratives, 70, 78
      command-line options, 71
      operational scope, 78
      summary, 79
      syntax, 79
   field line, 74
   field values, specification of, 74
   files
      input format, 71
      multiple input, 71
   first visit, 81
   input data file, creating, 66
   insert
      empty database, 83
      key
         atend, 85
         atstart, 85
         hierarchical, 84
         non-empty database, 83
      positional
         circular records, positioning to, 84
         MaxLV records, 84
      positional record, 82
   lines
      field, 69
      record, 69
   mode
      insert record, 70
      update record, 70
   modes, 81
   record lines, 72
   records with multiple parents, 85
   subscripts, using to locate records for update, 86
   syntax, record lines, 73
   update key for hierarchical records, 86
hdbrio, 91
   archive file, accessing, 96
   changing the current record position, 97
   command syntax, 92
   commands, table of, 105
   copying records, 99
   decrementing the current record, 102
   deleting records, 101
   displaying record content, 102
   exiting, 91
   Help command, 92
   incrementing the current record, 102
   inserting
      FREE records, 99
      hierarchical records, 98
      non-hierarchical records, 98
   navigating a subtree, 96, 97
   positioning using key field values, 101
   reading records, 102
   record fields, displaying and editing, 103, 104
   record position, indicating, 96
   scripts, using, 92
hdbserver
   clones, online and offline, 113
   shutdown, 116
      kill command, 116
      tview command, 116
   sleep interval, 115
   startup, 114

I
Importing Clones from Another HABITAT Group, 19
Input data file, creating, 66
Input File Format, 71
Insert
   empty database, 83
   key
      atend, 85
      atstart, 85
      hierarchical records, 84
      non-empty database, 83
   positional
      circular records, 84
      MaxLV records, 84
   subtree, 85
Insert mode, 81
Inserting records, 97
   FREE, 99
   hierarchical, 98
   non-hierarchical, 98

K
Key field value, positioning with, 101
Key insert, for hierarchical records, 84

L
Lines
   field, 69
   record, 69

M
Modes
   default, 48
   field, 48
   pattern, 49
   record, 48
MXPARAM, 2

N
Null Data Rules, 78

O
On-Disk structure, setup of, 130
Operational Scope of Declaratives, 78

P
Positional insert
   circular records, 84
   hierarchical records, 82
   MaxLV records, 84
   non-hierarchical records, 82
Positioning records for insert and update, 81

R
Record
   changing content, 104
   displaying content, 103
   insertion, 70
   line, 72
   update, 70
Record lines, 73
Record type, changing positioning with, 97
Replication, 18
   administration of, 119
   clones, removing, 120
Root Directory, Hdb cloning database, 129

S
Savecase files, 126
Schema
   application schema, 3
   clone schema, 3
   database schema, 3
   dictionary, 130
   loading, 15
   removing, 16
   schema dictionary, 3
Segment page alignment, 17
Selcop, 33
Special Fields, display of, 50
SPR, submitting, 118
Stamps, 26
   archive, 27
   database, 27
   partition change, 26
   record time, 27
Structure, on-disk, setup of, 130
Subschema, 4
subtree, 101
System administration, 113
   Hdb server, 9
   tasks of, shutdown, 113
   tasks of, startup, 113

T
Truncation, 38

U
Update mode, 81
Updating keys, and hierarchical records, 86