5125.275-3CD

Application Integrator™
Workbench
User’s Guide
Version 5.2

January 2010

Copyright © 2007. GXS. All Rights Reserved.


DISCLAIMER Information in this document is subject to change without notice. GXS reserves the
right to change (upgrade) this information in order to provide the most accurate and
reliable product available. Specific mention of a product in this document is not a
guarantee by GXS of complete hardware and software compatibility with your
data processing system. If you have questions about hardware and/or software
compatibility, please contact your GXS representative.
The following electronic data interchange (EDI) standards are developed and
maintained by these organizations:

TRADACOMS    Article Number Association
X12          Accredited Standards Committee (ASC) X12 and Data Interchange
             Standards Association (DISA), the secretariat and
             administrative branch for ASC X12
UN/EDIFACT   Data Interchange Standards Association (DISA), the secretariat
             and publisher for the Pan American EDIFACT Board (PAEB)
VDA          Verband der Automobilindustrie (German Automotive
             Association)
CII/EIAJ     Center for Informatization of Industry / Electronic Industries
             Association of Japan
HIPAA        The Health Insurance Portability and Accountability Act of
             1996, maintained by the Department of Health and Human
             Services (HHS), U.S.A.
EANCOM       Subset of UN/EDIFACT (EDI) messages. EANCOM® messages are
             developed to simplify the use of UN/EDIFACT messages.

The following eXtensible Markup Language (XML) standards are developed and
maintained by these organizations:

RosettaNet   RosettaNet Consortium
CIDX         Chemical Industry Data Exchange
UCCnet       A non-profit subsidiary of the Uniform Code Council
             (developers of the UPC code)
TRADEMARKS Application Integrator is a trademark of GXS.
Trade Guide, Workbench, RuleBuilder, MapBuilder, Interactive Gateway
Extension, and User Exit Extension are trademarks or registered trademarks of
GXS.
Adobe and Adobe Acrobat Reader are either registered trademarks or trademarks
of Adobe Systems Incorporated in the United States and/or other countries.
Apache is a trademark of The Apache Software Foundation, and is used with
permission. This product includes software developed by The Apache Software
Foundation.
eXceed is a trademark of Hummingbird Ltd.
Eclipse is a platform available from eclipse.org.
HP, Hewlett-Packard, HP-UX are registered trademarks of Hewlett-Packard.
InstallShield is a registered trademark of InstallShield Software Corporation.
Intel is a registered trademark of Intel Corporation.
IBM, AIX, Risc System/6000, and RS6000 are trademarks of International Business 
Machines, Inc. 
PKZIP is a registered trademark of PKWARE, Inc. 
Linux is a registered trademark of Linus Torvalds. 
Red Hat Linux is a registered trademark of Red Hat, Inc. 
Sun, Sun Microsystems, Solaris, and Java are trademarks or registered trademarks 
of Sun Microsystems, Inc., in the United States and/or other countries. 
UNIX is a registered trademark of The Open Group.
UCCnet and its logos are trademarks or service marks of UCCnet. 
NT, Windows, and Microsoft are registered trademarks, and SQL is a trademark of 
Microsoft Corporation. 
WinZip is a registered trademark of WinZip Computing, Inc. 
Xmanager is a trademark of NetSarang Computer, Inc.

All product names and corporations mentioned in this document may be
trademarks, registered trademarks, or copyrighted by their respective owners.
Contents

PREFACE...................................................................................................................................................... I
About the Workbench User’s Guide ............................................................................................ ii
Documentation Conventions........................................................................................................iv
Typographical Conventions .....................................................................................................iv
User Input ................................................................................................................................iv
Notes, Hints, and Cautions.......................................................................................................iv
Screen Images ........................................................................................................................... v
Tables ........................................................................................................................................ v
Prerequisites for Workbench .......................................................................................................vi
System Prerequisites ................................................................................................................vi
User Prerequisites....................................................................................................................vi
Customer Support ...................................................................................................................... vii
Workbench Customer Support ................................................................................................vii
Before you call Customer Support ..........................................................................................vii
Calling for Customer Support.................................................................................................vii
Listing the Contents of the Disk ................................................................................................xvi
SECTION 1. WORKBENCH USER INTERFACE OVERVIEW......................................................... 17
Overview of Workbench Features ..............................................................................................18
Menu Bar.................................................................................................................................19
Tool Bar ..................................................................................................................................25
Views .......................................................................................................................................27
SECTION 2. DATA MODELING PROCESS OVERVIEW.................................................................. 33
Overview of Workbench Features ..............................................................................................34
Workbench Development Tools...............................................................................................34
Defining the Structure of Input and Output Data .......................................................................35
Defining Items .........................................................................................................................35
Tag Items.................................................................................................................................36
Container Items.......................................................................................................................37
Group Items.............................................................................................................................37
Relationship of Data Model Items ..............................................................................................38
Source and Target Data Models .............................................................................................38
The Access Model....................................................................................................................40
Understanding the Role of the Access Model.............................................................................40
Pre-Condition Values..............................................................................................................42
Base Values .............................................................................................................................43
Post-Condition Values ............................................................................................................46
Associating Input Data with Output Data...................................................................................49
Variables .................................................................................................................................49
Rules........................................................................................................................................50
Environments (Map Component Files) ...................................................................................51
Other Data Modeling Components .............................................................................................52
Profile Database .....................................................................................................................52
Administration Database ........................................................................................................52
Environment Files ...................................................................................................................53
Translation Session Files and Trace Logs ..............................................................................54
Include Files............................................................................................................................55
Understanding Environments (Map Component Files) ..............................................................56
Purpose of Environments ........................................................................................................56
Environment Sequence of Parsing ..........................................................................................57
Processing Flow within the Model..........................................................................................59
Single Environment Process Flow ..........................................................................................60
Multiple Environments ............................................................................................................60
Changing Environments During a Translation.......................................................................63
Map Component Files for Enveloping/De-enveloping.......................................66
Processing Using OTRecogn.att (De-enveloping).........................................66
Processing Using OTEnvelp.att (Enveloping) ...........................................68
Traditional Data Models vs. Style Sheets ...................................................................................70
Differences Between Traditional Data Models and Style Sheets ............................................70
Compliance Checking.................................................................................................................71
SECTION 3. WORKBENCH OVERVIEW ............................................................................................ 83
Accessing Workbench ................................................................................................................84
Workbench Preferences...........................................................................................................85
Advanced Settings Preferences ...............................................................................................88
Derived Links Feature Preference Page .................................................................................89
Macro Definitions ...................................................................................................................90
Map Builder Preferences ........................................................................................................92
Menu Preference Page ............................................................................................................96
Server Connection Preferences ...............................................................................................98
Version Validator Preference..................................................................................................99
Views Preference Page .........................................................................................................100
XSD Validator Preferences .......................................................103
Overview of the Map Editor .....................................................................................................107
Opening an Existing Environment ........................................................................................107
Rearranging Views and Editors ................................................................................................108
Setup......................................................................................................................................108
Drop cursors .........................................................................................................................108
Rearranging views ................................................................................................................109
Tiling editors .........................................................................................................................110
Rearranging tabbed views.....................................................................................................111
Maximizing............................................................................................................................112
Fast views..............................................................................................................................112
Creating fast views................................................................................................................112
Working with fast views ........................................................................................................113
Tips and Tricks......................................................................................................................115
Map Editor Work Area..........................................................................................................115
Map Definition Tab ...............................................................................................................115
Mapping Tab .........................................................................................................................118
Source Properties Tab...........................................................................................................119
Target Properties Tab ...........................................................................................................121
Input Tab ...............................................................................................................................123
Output Tab ............................................................................................................................124
Tool Bar Options...................................................................................................................125
Menu Options ........................................................................................................................128
Overview of the Model Editor ..................................................................................................130
Opening an Existing Data Model..........................................................................................130
Model Editor Work Area .......................................................................................................131
Overview Tab ........................................................................................................................131
Properties Tab.......................................................................................................................133
Model Text Tab .....................................................................................................................134
Tool Bar Options...................................................................................................................134
Other Toolbar Options..........................................................................................................136
Menu Options ........................................................................................................................138
Additional Editors.....................................................................................................................141
Access Model Editor .............................................................................................................141
Include File Editor ................................................................................................................142
Views ........................................................................................................................................143
Views Overview.....................................................................................................................143
Interactive Process Manager (IPM) ..........................................................................................148
Navigator View.........................................................................................................................154
Toolbar..................................................................................................................................155
Icons ......................................................................................................................................158
Context Menu ........................................................................................................................161
Narrowing the scope of the Navigator view .............................................................................165
Sorting resources in the Navigator view...................................................................................165
Showing or hiding files in the Navigator view .........................................................................165
Trading Partner Navigator ...................................................................................................166
Trading Partner Attribute Viewer .........................................................................................167
Built-Ins.................................................................................................................................168
Message Variables ................................................................................................................169
Performs................................................................................................................................170
Problems ...............................................................................................................................171
Outline...................................................................................................................................171
Properties..............................................................................................................................172
Remote Site Navigator...........................................................................................................172
SECTION 4. CREATING MAP COMPONENT FILES (ENVIRONMENTS) ................................. 176
Defining a Map Component File ..............................................................................................177
Recommended Naming Convention ......................................................................................177
Defining a New Map Component File...................................................................................178
Modifying an Existing Map Component File ........................................................................182
SECTION 5. CREATING DATA MODELS FOR EDI AND APPLICATION DATA ..................... 186
Working with Data Models.....................................................................................................187
Defining a New Data Model..................................................................................................187
Working with Standard Data Models....................................................................................189
Working with XML based Data Models ................................................................................191
Converting SEF Format to a Data Model.............................................................................191
Converting COBOL Copy Book Format to a Data Model....................................................195
Defining a Data Model Item..................................................................................................197
Deleting a Data Model Item..................................................................................................200
Assigning a Data Model Item Type .......................................................................................200
Assigning Data Model Item Attributes ..................................................................................202
Data Hierarchy .....................................................................................................................223
Including Files in Data Models.............................................................................................224
Saving a Data Model.............................................................................................................228
Closing the Editor .................................................................................................................229
SECTION 6. BUILDING RULES INTO DATA MODELS ................................................................. 232
Overview of Rules Entry ..........................................................................................................233
Modes for Processing Rules..................................................................................................233
Types of Rule Conditions ......................................................................................................234
Variables ...............................................................................................................................235
Keywords and Functions.......................................................................................................236
Two Methods for Creating Rules ..........................................................................................237
Using MapBuilder ....................................................................................................................238
Overview ...............................................................................................................................238
Setting Preferences................................................................................................................240
Loop Control .........................................................................................................................248
Troubleshooting ....................................................................................................................250
Using RuleBuilder ....................................................................................................................252
RuleBuilder Window .............................................................................................................252
Accessing RuleBuilder ..........................................................................................................252
Adding Rules .........................................................................................................................253
Inserting Functions and Keywords .......................................................................................262
Cutting, Copying, and Pasting Rules ....................................................................................267
Finding the Next Parameter..................................................................................................269
Syntax Checking of Rules ......................................................................................................269
SECTION 7. COMPARE AND REPORTS............................................................................................ 282
Comparing Two Model Files....................................................................................................283
To Compare Two Files..........................................................................................................284
Context Menu on the Compare View.....................................................................................285
Next Difference......................................................................................................................286
To Find items ........................................................................................................................286
To Go To a Data Model Item ................................................................................................286
Report Generation.....................................................................................................................288
Data Model Listing Report .......................................................................................................289
To access the Data Model Report dialog box .......................................................................289
To Run the Data Model Listing.............................................................................................289
Source to Target Map Report....................................................................................................291
To access the Source to Target Map Listing report ..............................................................291
To run the Source to Target Map Listing..............................................................................291
SECTION 8. AI VERSION VALIDATOR............................................................................................. 294

SECTION 9. WORKING WITH MACROS .......................................................................................... 295


Creating/Recording a macro .....................................................................................................296
Playing a Recorded/Imported/Predefined Macro......................................................................298
SECTION 10. XML MAPPING AND PROCESSING ......................................................................... 300
XML Overview for Traditional Models ...................................................................................301
Inbound Processing...............................................................................................................301
Outbound Processing ............................................................................................................304
XML Requirements ..................................................................................................................305
Well-Formed Documents.......................................................................................................305
Valid Documents ...................................................................................................................306
Case Sensitivity .....................................................................................................................306
Document Type Definition Overview ......................................................................................307
DTD Example........................................................................................................................307
XML Schema Definition Requirements ...................................................................................309
Namespace Definition ...........................................................................................................309
XSD Example ........................................................................................................................309
XML Parsers .............................................................................................................................312
Parser Overview....................................................................................................................312
Why Do We Have A Parser? .................................................................................................312
XML Special Characters .......................................................................................................312
Xerces .......................................................................................................................................314
Source Parsing ......................................................................................................................314
Target Construction ..............................................................................................................315
Validation of the XML data...................................................................................................316
XML Constraints ...................................................................................................................316
XML Constraint File .............................................................................................................319
Generate Constraint Style Sheet ...........................................................................................325
otxmlcanon................................................................................................................................326
otxmlcanon Overview............................................................................................................326
Invoking the Parser ...............................................................................................................328
Optional Parameters .............................................................................................................330
Parser’s Character/Entity Reference Conversion.................................................................331
XML Input to the Parser........................................................................................................333
More about Namespace.........................................................................................................334
Parser Output........................................................................................................................335
Supported Encodings ................................................................................................................335
Traditional Models....................................................................................................................335
XSD Validator ..........................................................................................................................336
Generating Data Models from Schemas ...................................................................................341
Source Data Model Considerations ......................................................................................350
XML or Schema Editor.............................................................................................................351
XSD Editor ............................................................................................................................351
XML Editor ...........................................................................................................................354
Adding Rules to Generated Data Models .................................................................................356
Rules for Pattern Facets........................................................................................................356
Using Generated Data Models with the Standard Models ...................................................356
Examples ...............................................................................................................................356
Testing and Debugging XML Data Models .............................................................................358
Testing Translations..............................................................................................................359
XML Validation Parameters .................................................................................................359
Validating Against a DTD.....................................................................................................359
Disabling the Prolog .............................................................................................................360
Other Optional Parameters........................................................................................................361
Canonical XML Format ........................................................................................................361
Empty Elements with End Tag ..............................................................................................361
Deterministic .........................................................................................................................362
XML COMMENTS ................................................................................................................362
XML Troubleshooting ..............................................................................................................363
Parser Error Handling..........................................................................................................363
Parser Error Example Table.................................................................................................364
Data Model Generator Error Handling................................................................................367
XML Samples ...........................................................................................................................369
Inbound Processing Examples ..............................................................................................369
Outbound Processing Example .............................................................................................372
Testing and Debugging XML Schema Data Models................................................................373
Testing Translations..................................................................................................................373
XSD Validation Parameters..................................................................................................373
Setting Validation Parameters ..............................................................................................373
Validating Against an XSD ...................................................................................................374
List of Items Validated ..........................................................................................................375
Additional Data Model Validations ......................................................................................376
List of Items Not Implemented...............................................................................................377
XML Schema Troubleshooting ................................................................................................377
Parser Error Handling..........................................................................................................377
Types of Errors......................................................................................................................377
Data Model Generator Error Handling................................................................................378
Inbound Processing, Method 2 Error Codes ........................................................................379
Troubleshooting Complex XML Schema Documents............................................................379
XML Schema Samples .............................................................................................................381
Inbound Processing Examples ..............................................................................................381
Outbound Processing Example .............................................................................................383
Style sheets (XSLT)..................................................................................................................384
Style sheets Overview ............................................................................................................384
Creating Style sheets .............................................................................................................385
Style sheet Functions.............................................................................................................393
XSL Samples..........................................................................................................................394
Xpath Source Models................................................................................................................395
General Rules........................................................................................................................395
Data Presence .......................................................................................................................397
Process Flow .........................................................................................................................399
Creating Xpath Source Models .............................................................................................401
Xpath Samples.......................................................................................................................405
SECTION 11. THE DATA MODELING PROCESS............................................................................ 406
List of Steps to Data Modeling .................................................................................................407
Step 1: Obtain the Translation Definition Requirements.....................................................408
Step 2: Analyze the Definition Requirements .......................................................................408
Step 3: Obtain the Test Input File(s)....................................................................................413
Step 4: Lay Out the Environment Flow................................................................................414
Step 5: Complete the Environment Definition......................................................................415
Step 6: Create Source and Target Data Model Declarations ..............................................420
Step 7: Create a Map Component File for Each Environment ............................................423
Step 8: List Source to Target Mapping Variables................................................................423
Step 9: Create Data Model Rules.........................................................................................424
Step 10: Enter the Profile Database Values.........................................................................431
Step 12: Make Backup Files.................................................................................................434
Step 13: Migrate the Data....................................................................................................434
Notes on Data Model Development..........................................................................................435
Assigning Names ...................................................................................................................435
Comparing Numerics or Strings ...........................................................................................437
Using Application Integrator Models and Files ...................................................................437
References to Files ................................................................................................................437
SECTION 12. TRANSLATING AND DEBUGGING........................................................................... 440
Overview...................................................................................................................................441
Before Translating ................................................................................................................442
Translating Using Workbench ..................................................................................................444
Setting a Trace Level.............................................................................................................448
Translating at the Command Line.............................................................................................453
Invoking the Translation Processes – Unix and Linux..........................................................453
Invoking the Translation Process – Windows .......................................................................454
Terminating Runaway Translations ......................................................................................461
If the Translator Does Not Execute Successfully .....................................................................463
Unix and Linux Troubleshooting ..........................................................................................463
Windows Troubleshooting.....................................................................................................464
Using the Trace Log .................................................................................................................464
Ways to Set the Trace Log.....................................................................................................464
Viewing the Trace Log ..........................................................................................................466
Understanding the Trace Log ...................................................................................................468
Organization of a Trace Log.................................................................................................468
Debugging Hints ...................................................................................................................470
Example Trace Log ...............................................................................................................477
Using Trade Guide Reporting to Debug ...................................................................................493
Generating User-Defined Reports ........................................................................................493
SECTION 13. MIGRATING TO TEST AND PRODUCTION............................................................ 502
Planning Development Migration.............................................................................................503
Recommended Migration Approach .....................................................................................504
Permission Guidelines ..........................................................................................................505
Migrating Applications.............................................................................................................506
Deploying Maps Overview ....................................................................................................506
Deploying Files .....................................................................................................................509
File Permissions Guidelines .................................................................................................512
Deploying the Profile Database............................................................................................512
Deploying Access Models .....................................................................................................515
Deploying Data Models ........................................................................................................516
Deploying Map Component Files .........................................................................................519
Deploying Environment Files ...............................................................................................520
Deploying Include Files ........................................................................................................521
Deploying Stylesheet Files ....................................................................................................523
SECTION 14. UPDATE MANAGER ..................................................................................................... 527
Updating Workbench with Features/Patches ............................................................................527
Managing Installed Features.....................................................................................................534
SECTION 15. CVS INTEGRATION...................................................................................................... 538
Open Perspective ......................................................................................................................538
Share a Workspace Project .......................................................................................................541
Compare a Local File with a Repository File ...........................................................................543
Commit Changes to Repository Files .......................................................................................544
INDEX........................................................................................................................................................ 546
Preface

The Workbench User’s Guide provides information on using the Application Integrator™ development product. This preface contains information on:
- Workbench documentation
- Prerequisites for Workbench
- Documentation conventions
- Mouse and keyboard conventions
- Keyboard shortcuts
- Application Integrator™ Customer Support

Application Integrator™ Workbench 5.2 is sold and supported on the following platforms:

- Windows® XP, 2000 Standalone

Note: Within this document, items directed specifically to the Windows® single user system are referred to as “single user” and to the Windows® multiple user system as “server.”
About the Workbench User’s Guide

This guide is designed to give you a working understanding of the operation and capabilities of the product. The information in this document is arranged so you can quickly and easily understand the Workbench features, menus, and operations.

The Workbench User’s Guide is divided into the following sections:

Section 1. Workbench User Interface Overview
  Provides a high-level overview of the Workbench user interface, including toolbars and menu items.
Section 2. Data Modeling Process Overview
  Discusses how translations are processed and the use of Workbench to create the proper structures.
Section 3. Workbench Overview
  Provides a detailed overview of each menu, toolbar, and work area.
Section 4. Creating Map Component Files (Environments)
  Gives instructions for creating the map component files, or environments.
Section 5. Creating Data Models for EDI and Application Data
  Describes complete procedures for creating data models.
Section 6. Building Rules into Data Models
  Describes how to use RuleBuilder to add processing logic to your data models.
Section 7. Compare and Reports
  Describes the Compare and Reports options.
Section 9. Working with Macros
  Describes how macros can be recorded and re-used.
Section 10. XML Mapping and Processing
  Provides complete procedures for creating style sheets to process XML data.
Section 11. The Data Modeling Process
  Discusses the steps of the data modeling process, including Application Integrator™ conventions and tips.
Section 12. Translating and Debugging
  Provides procedures for translating and debugging data models.
Section 13. Migrating to Test and Production
  Provides instructions on migrating from development to test, or from development or test areas to production areas.
Section 14. Update Manager
  Provides information on how to install features/patches using Update Manager.
Section 15. CVS Integration
  Describes how to use the CVS repository for version management of maps.
Index
  Provides an alphabetical list of subjects and the corresponding page numbers where information can be found.

A separate document called Workbench User’s Guide-Appendix contains the various appendices that should be read in conjunction with this guide.

Documentation Conventions

This section describes the conventions used in this manual.
Typographical Conventions

Regular
  This text style is used in general.
Courier
  This text style is used for system output and syntax examples.
Italic
  This text style is used for book titles, new terms, and emphasized words.

User Input

In this document, anything printed in Courier and boldface type should be entered exactly as written. For example, if you need to enter the term “userid,” it is shown in the document as userid.

Notes, Hints, and Cautions

Notes provide additional information and are boxed inside the text, using the following format.

Note: This is a note.

Hints provide helpful tips on performing operations. They are formatted in a similar manner to Notes.

Hint: This is a hint.

Cautions provide information on practices or places where you could possibly overwrite data or program files. The following is an example of a Caution.

Caution: This is a caution.
Screen Images

The screen images in this manual are taken from Workbench running on the Windows® 2000 operating system. If you are running Workbench on Windows® XP, the actual screens (windows and dialog boxes) may differ slightly in appearance. Differences between platforms are slight (such as border thickness) and do not affect features or usage.

Tables

Tables appear frequently in this manual. They have headings with dark underlining. The body has no gridlines (in most cases). The end of a table is indicated by double underlines.
Prerequisites for Workbench

The system, software, and user prerequisites for implementing Workbench are described in this section.

System Prerequisites

Refer to the Control Server Installation Guide for information on the hardware and software requirements to run Application Integrator™ on Windows® operating systems.

Note: It is recommended that you use the Windows® default colors for your Windows applications. Changing or customizing your display colors may result in readability problems when using Workbench in a Windows® environment.

User Prerequisites

This guide assumes that AI users have an adequate understanding of the following:

- Mouse and graphical user interface (GUI) experience – specifically, windows and dialog boxes.
- Basic knowledge of their operating system and an on-line editor.
- Program concept knowledge, including:
  − An understanding of data organization
  − An understanding of data manipulation
  − An understanding of program process flow
  − An understanding of testing and debugging
- Knowledge of electronic data interchange, database management, and systems reporting.
- Knowledge of the standards implementation applicable to their environment.
Customer Support

Support is offered through a separate Support Services Agreement. The type of support offered is described within the contract. The scope of Support Services includes the following:

- Program updates.
- Help Desk Support.

Workbench Customer Support

GXS provides support services to install and run Workbench. However, GXS does not provide support services for debugging, analyzing, or correcting the contents of Workbench files if any of the original files that are shipped with the product are modified by the user.

Before you call Customer Support

If possible, attempt to resolve the problem internally or with the help of the documentation provided, including the printed, on-line, and training documentation.

Calling for Customer Support

Customer Support will be able to provide you with effective help if you follow these steps when you call:

1. Call the GXS Help Desk for support at (800) EDI–CALL Ext. 3005. You can also send an email to aihelp@gxs.com.
   The above phone number and email address are for AI Help in the Americas region. Other regional support information can be found at www.gxs.com -> Customer -> Customer Support -> select the appropriate region.

2. To retrieve the version information of all the installed components, select the “Help->Version Information” menu in Trade Guide 5.2.
   Version information appears along with the AI Control Server Program Version and Build Dates.
   However, if AI Control Server is running on a remote machine, the program build dates do not appear for the AI Control Server program. A message appears indicating that Program Version and Build dates cannot be retrieved.
   Make sure you have copied down the exact version number of the product for which you are seeking assistance.

3. The version number and build dates of AI Control Server and the translator can also be found by executing the command “otver”.
   Windows® Users

   To access the version information of all the installed components and the program build dates for AI Control Server from Windows®:

   a. From Windows Explorer (Windows® 2000 or Windows® XP), click the filename cservr.exe.
   b. With the filename highlighted, choose Properties from the File menu. The Properties dialog box opens and displays information such as the filename, path, last change, version number, copyright, size, and other attributes.
   c. Open a command prompt session. Browse to the directory where Application Integrator™ is installed. To list the versions of cservr.exe and otrun.exe, type “otver.bat” or “otver” at the command line.

   Unix and Linux Users

   Open an X-Windows session. Browse to the directory where Application Integrator™ is installed. To list the versions of cservr.exe and otrun.exe, type “./otver” at the command line.
4. Copy the exact release number of the operating system under which you are running Application Integrator™, for example: HP-UX 11.11 (include interim release numbers, not simply HP or HP11) or Windows® 2000 Professional (include interim release numbers, not simply Windows® 2000 Professional).

5. Check the version information for Workbench.
   a. Go to Help Menu > About AI Workbench.
   b. The screen that follows displays the version information for Workbench.
   The various options available on the Version Information screen are:

   - Feature Details: A feature is a method of grouping and describing the different functionalities that make up a product. Grouping Plug-ins into features allows the product to be installed and updated using update servers.

   - Plug-in Details: Gives details about the various Plug-ins that make up the product, with their version numbers, names, and details of the provider of each Plug-in.

   - Configuration Details: Gives a snapshot of the current product environment, including Platform Details, System Properties, Features, Plug-in Registry, User Preferences, Update Manager Log, and so on.

6. Make sure the person placing the support call has a thorough knowledge of the issue for which you are seeking assistance.
Sending a Copy of Your Files to Customer Support

At times, it may be necessary for an example of your problem to be sent to the support staff. For consistency and compatibility across the various platforms that execute the product, files should be sent to Customer Support as follows:

Backing Up and Restoring Windows Files

Microsoft® Windows has various backup programs available, depending on the version of the software you are running on your computer. Users should either compress all the files together using programs such as WinZip® or PKZIP® for Windows, or send the uncompressed files (assuming they are small in number and size). Be sure to inform Customer Support of the backup/compression program used to send the files, and supply the decompression program whenever necessary. Customer Support will return the files in a similar format.
Listing the Contents of the Disk

To view the contents of the Workbench installation CD, place the CD in the CD drive of the system and browse through the contents. To review the contents of the Workbench installation programs, you must first install the product. Then use File Manager, Windows Explorer, or the MS-DOS dir command to list the contents of the \WB52 directory.
Section 1. Workbench User Interface Overview

Workbench is a graphical user interface tool that enables you to create translation models for electronic commerce transactions. This section provides an overview of the user interface and terminology of Workbench. It covers the menus, toolbars, and work areas at a high level.
Overview of Workbench Features

Workbench provides you with a graphical user interface (GUI) in which to map data. In Application Integrator™ terms, this means creating data models.

Workbench runs on Windows® 2000 and XP systems only. However, it can connect remotely to AI Control Servers running on Unix or Linux systems.

Workbench is used to create Application Integrator™ data models or style sheets for processing data. When used along with Application Integrator’s Trade Guide, you can create models and style sheets, translate data, and debug translations.

The figure below gives a high-level overview of the Workbench layout: the Menu Bar and Tool Bar across the top, the Editor Work Area in the center, and the Views panels beneath it.
Menu Bar

The menu bar contains all available menu items that can be used from within Workbench.

Note: Not all the menu items are valid at all times. Some of them may not be applicable to certain scenarios and are therefore disabled.
File Menu:

New
  Creates new projects, data models, or map component files.
Close
  Closes the editor work area currently in focus.
Close All
  Closes all editor work areas that are currently open.
Save
  Saves the editor work area currently in focus.
Save As
  Saves the editor work area currently in focus to a user-specified name.
Save All
  Saves all editor work areas that are currently open.
Rename
  Renames the editor work area currently in focus.
Refresh
  Refreshes the Navigator view to display any added files.
Convert Line Delimiters to
  Part of the Eclipse framework. Converts the end-of-line delimiters based on the platform selected.
Print
  Prints the editor work area currently in focus.
Switch Workspace
  Allows you to switch to a different workspace.
Open External File
  Opens any external file to be viewed in the editor workspace.
Import
  Imports another Workbench project.
Properties
  Displays the properties of the currently opened file.
Exit
  Exits Workbench.
Edit Menu:

Undo
  Undoes the last action performed.
Redo
  Redoes the last action that was undone.
Cut
  Cuts the currently highlighted text or DMI.
Copy
  Copies the currently highlighted text or DMI.
Paste
  Pastes the currently highlighted text or DMI.
Delete
  Deletes the currently highlighted text or DMI.
Select All
  Selects all items or text in the currently opened file.
Find / Replace
  Finds data within the currently opened file and replaces it with other data.
Add Bookmark
  Adds a bookmark in the file so you can easily return to that point in the file.
Add Task
  Part of the Eclipse framework. Adds a task to a selected bookmark. Tasks can be viewed in the Tasks view.
Content Assist
  Assists with the syntax in the include file and access model editors.
Encoding
  Part of the Eclipse framework. Changes the encoding used by the text editor to display source files.
Navigate Menu:

Go Into
  Displays the contents of a highlighted project or folder in the Navigator view.
Go To
  Allows navigation after using the Go Into feature. This allows you to go Back, Forward, or Up One Level.
Next
  Part of the Eclipse framework. Reserved for future use.
Previous
  Part of the Eclipse framework. Reserved for future use.
Last Edit Location
  Moves to the last modified view.
Back
  Moves backward to the last view in focus.
Forward
  Moves forward to the last view in focus.

Search Menu:

Search
  Searches all files within the workspace / Model Search Order (MSO) for a specified value. Refer to Model Search Order in the Views Preference Page.

Test Menu:

Translate
  Brings up the translation run dialog box.
Utility Menu:

Generate Schema
  Generates a schema for an XML or DTD file.
Compare Model Files
  Compares any two model files.
Report
  Generates the Data Model Listing and Source to Target Map Listing reports.
Data Model Listing (Report Sub-menu)
  This report shows the data model and offers the option of printing the data model with or without rules.
Source to Target Map Listing (Report Sub-menu)
  This report shows the source data model item labels, the associated variable labels, and the target data model item labels.

Tools Menu:
The tools menu contains various operations that can be used within
Workbench.

Note: Some of the below mentioned tools are available only for
relevant scenarios (such as attachment and Model Editor).

Menu Item Description


Shift Down Moves the currently highlighted data model item
one level down to restructure the model's data
hierarchy.
Shift Up Moves the currently highlighted data model item
one level up to restructure the model's data
hierarchy.
Duplicate Duplicates a selected data model item (DMI) at the
same hierarchy and level. All attributes of the data
model item are duplicated except for the data
model item name. The name of the duplicated data
model item is changed to be a system assigned
unique name which can be changed.
Shift Right Moves the currently highlighted data model item
one level right to restructure the model's data
hierarchy.

Shift Left Moves the currently highlighted data model item
one level left to restructure the model's data
hierarchy.
Insert Adds an empty data model item below the
Below currently highlighted item. The newly created DMI
has a default name which can be changed.
Insert Adds an empty data model item above the
Above currently highlighted item. The newly created DMI
has a default name which can be changed.
Go To Navigates to a specified DMI. You can select the
available DMI from the “Go to” dialog.
Includes References Include files with the current data
model.
Refresh Refreshes the mapping area and redraws DND
Links link lines.
Deploy Deploys (copies) relevant mapping files to a
directory (local or remote). These files can be used
during debugging tasks or for moving files into a
production functional area.

Link Menu:

Menu Item Description


Delete Link Deletes the selected link and its associated rules.
Loop Highlights repeating DMIs if source or target
Control model is XSL. Automatically creates loop control
Mode DMIs and rules when mapping repeating defining
DMIs.
Map Rule Allows you to select a DMI as an owner node on
Owner the source XSL model to hold the map builder
Node rules of the child nodes when Drag and Drop is
performed.
It is mandatory that the Map Rule Owner Node be
the Parent or Grandparent or any hierarchy above
the source node to Drag and Drop DMIs from
source to target.


Window Menu:

Menu Item Description


New Opens an entirely new project area, including
Window new views and work areas.
Open Opens a saved perspective which contains certain
Perspective views that are opened.
Show View Brings focus to the selected view or opens a view
not currently opened.
Customize Customizes a perspective for the views that are to
Perspective be opened.
Save Saves the currently opened perspective.
Perspective
As
Reset Resets the perspective as it was when opened.
Perspective
Close Closes the currently opened perspective.
Perspective
Close All Closes all currently opened perspectives.
Perspectives
Navigation Navigates between all open work areas, projects,
and views.
Preferences Used to set all Workbench preferences.

Help Menu:

Menu Item Description


Key Assist Lists shortcut keys to perform specific tasks.
Interactive Displays the Interactive Process Manager Selection
Process dialog box.
Manager
Software Leads to Find and Install or Manage Configuration
Updates of updates.
About AI Displays version information for Application
Workbench Integrator™ Workbench.


Tool Bar

The tool bar contains icons that can be used for various functions
within Workbench.

Note: Some of the tools mentioned below are available only in
relevant scenarios (such as attachment and Model Editor).

Icons Functions
Creates new projects, data models, or map component files.

Saves the editor work area currently in focus.


Prints the editor work area currently in focus.
Identifies the offsets in different fields of an input file.
Brings up the translation run dialog box.
Starts recording a macro.
Plays the recorded macro.
Searches all files in workspace for specified text.
Shows whitespace characters in an input file.
Moves to last modified portion of editor work area.
Moves backward to the last view in focus.
Moves forward to the last view in focus.

Moves the currently highlighted data model item one level
down without restructuring the model's data hierarchy.
Moves the currently highlighted data model item one level up
without restructuring the model's data hierarchy.
Removes the current data model item (DMI). The item is
removed from the model and stored on the clipboard until
you paste it.

Makes a copy of the current data model item. The item is
copied and stored on the clipboard until you paste it.
Pastes a copy of the stored item (DMI). The item on the
clipboard is stored until you perform another cut or copy.
Duplicates a selected data model item (DMI) at the same
hierarchy and level. All attributes of the data model item are
duplicated except for the data model item name. The name of
the duplicated data model item is changed to be a system
assigned unique name which can be changed.
Moves the currently highlighted data model item one level
right to restructure the model's data hierarchy. If the
preference is set to show the target model as "Right to Left
Tree", this action moves the currently highlighted data model
item one level left.
Moves the currently highlighted data model item one level left
to restructure the model's data hierarchy. If the preference is
set to show the target model as “Right to Left Tree", this action
moves the currently highlighted data model item one level
right.
Allows you to add an empty data model item below the
currently highlighted item. The newly created DMI has a
default name which can be changed.
Adds an empty data model item above the currently
highlighted item. The newly created DMI has a default name
which can be changed.
Navigates to a specified DMI. You can select the available
DMI from the “Go to” dialog.
Allows you to view only the highest level of the data model.
Allows you to see all levels of the data model.
Refreshes the mapping area and redraws DND link lines.
Allows you to visually see the links or derives links for those
maps that are developed outside of Workbench.
Deploys (copies) relevant mapping files to a directory (local or
remote). These files can be used during debugging tasks or for
moving files into a production functional area.
Checks the syntax of the current mapping file in the active
Text Editor.
Applies the changes made to the current Data Model file in the
active Model Text Editor.

Views

Workbench provides several views that can be either shown or
hidden. Views are dockable window palettes that contain various
helpful information. They can be closed at any time to maximize
the workspace and can be re-opened at any time. Also, views can
be placed anywhere: left, right, top, or bottom.

Navigator View: Provides a view of all files within the workspace
specified. Allows you to open map component files, data models,
XPath models, AI created style sheets, include files, and access
models.

Trading Partner Navigator View: Provides a view of the Trading
Partners set up in Trade Guide.


Trading Partner Attribute View: Provides a list of all values stored
for the selected Trading Partner. Only two fields, Target Model
Name and Attachment Name, can be edited.

Properties View: Displays the properties of the file opened in the
editor.


Problems View: Displays error messages that appear while
working with AI artifacts.

Tasks View: Displays a list of tasks to be done.

Console View: Publishes important informational messages for the
user where popping up a dialog could be intrusive. Currently,
whenever the Model Search Order preference has no paths, a
message is displayed in the Console view. Future releases of
Workbench will add more informational messages to the Console
view.


Built-ins View: Provides a list of all functions within Workbench
that can be used within data models. The functions are separated
by category (Data Model Functions, AI Control Server Functions,
Database Functions, Data Model Structure Functions, SQL
Functions, Date and Time Functions, Default, String Functions,
Keywords, Operators and so on). The functions within each
category are sorted in alphabetical order. The All Functions tab
contains a complete list of functions.

Message Variables View: Provides a list of all data model items
(DMI), VARs, and ARRAYs used in the data model.


Performs View: Displays include files used in the data model, and
the PERFORM declarations contained within.

Outline View: Displays the hierarchical structure of the data
model or the ‘DECLARE’ statements in the Include file in focus.

Note: The Workbench architecture is based on the Eclipse
Platform framework and the necessary modules (Eclipse Plug-Ins)
are also installed along with AI Workbench.


Section 2. Data Modeling Process Overview

Workbench is a graphical user interface tool that enables you to
create translation maps and models for electronic commerce
transactions.
This section provides an overview of the features of Workbench
and the terminology used.
Workbench is the development component of Application
Integrator™.


Overview of Workbench Features

Workbench provides you with a graphical user interface (GUI) in
which to map data. In Application Integrator™ terms, this means
creating map component files and data models or style sheets.
These models represent the data structure and necessary rules for
processing the input and output data files of your electronic
commerce. Workbench supports public standards, such as ASC
X12, UN/EDIFACT, and TRADACOMS, as well as proprietary and
other non-standard formats.

Workbench Development Tools

Data modeling, sometimes referred to as data mapping, consists of
two processes:
Defining the structure of the input and output data.
Associating input data with output data with appropriate
transformation logic.
Workbench provides many graphical features to easily define the
structure and characteristics of the data in the input and output
files. These features include applying standard or custom formats,
setting minimum and maximum data length and occurrences, and
using the standard features of Windows® Clipboard (Cut, Copy,
and Paste).
In addition to these features, Workbench provides a powerful tool,
called RuleBuilder, to define mapping rules. With RuleBuilder,
you can determine how data from the input file and/or the output
file is to be referenced, assigned, and/or manipulated. These rules
may be as simple as moving a field from the input to the output, or
as complex as combining data from many sources with conditional
comparisons, cross-references, and logical operations.
In those cases where input and output files have the same or very
similar structures, Workbench provides an even more automated
mapping tool called MapBuilder to ease this process.
Workbench, used in conjunction with Trade Guide 5.2, provides
all the functionality necessary to develop, test-translate, and
debug transaction models and trading partner profiles.


Defining the Structure of Input and Output Data

Workbench uses a building block approach to define data
structures. These building blocks are referred to as Data Model
Items (DMI), or simply items. An icon in RuleBuilder represents
each type of data model item. There are four basic classes of items.
They are:
Defining
Tag
Container
Group

Note: When working with an XPath data model or style sheet, the
structure is already defined in a referenced document type
definition (DTD) or Schema (XSD). Defining of the structure in the
XPath model or style sheet is not necessary.

Defining Items

Defining items are the lowest level descriptors in the data model.
Examples of defining items include elements or fields. They define
a data string’s characteristics, such as size and type. Some
examples of item type characteristics are:
Alpha characters (letters only [A-Z] [a-z])
Numeric characters (numbers only [0-9])
Alphanumeric characters (a combination of numbers and
letters [A-Z] [a-z] [0-9] and the “space” character)
Date
Time
You can specify that defining items are variable in length by using
an item type that includes delimiters to denote the end of one field
and the start of the next. Or you can define items that are fixed in
length by specifying the number of characters in the field (in which
case, no delimiters are necessary).


Numeric, date, and time item types need a format definition to


describe how the field is to be parsed or constructed. For example,
a date can be formatted as MMDDYYYY, MM/DD/YYYY, or
YYYY-MM-DD. During the building of the data structure,
Workbench provides an easy method of masking for the
appropriate format.
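The effect of a format mask can be pictured in plain Python (illustrative only, not Application Integrator™ syntax): the same calendar date parses correctly under each of the three masks mentioned above.

```python
from datetime import datetime

# Illustrative mapping of the masks named above to Python format codes.
MASKS = {
    "MMDDYYYY":   "%m%d%Y",
    "MM/DD/YYYY": "%m/%d/%Y",
    "YYYY-MM-DD": "%Y-%m-%d",
}

def parse_date(value: str, mask: str) -> datetime:
    """Parse a date string according to one of the named masks."""
    return datetime.strptime(value, MASKS[mask])
```

For example, parse_date("03121995", "MMDDYYYY") and parse_date("1995-03-12", "YYYY-MM-DD") produce the same date.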

Tag Items

Tag items enable you to identify different records or segments.
The “tag” is the string of data at the beginning of the
record/segment. A record delimiter in the input or output may
separate tag items. If multiple types of records exist in a file, there
is normally a “tag” referenced to differentiate each type. For
example, a heading record may begin with an ‘H’ in the input
stream and a detail record may begin with a ‘D.’
Tag items can be of fixed length or variable length.

Fixed Length Record

In a fixed length record, you determine the number of characters
allowed in each field. If the data is not long enough to fill each
field, space characters are added (either to the beginning or the end
of the field, depending on whether you have right- or left-
alignment specified in each field).

Record: D35001ABC4

Tag (1 character): 'D'
Field of 5 characters: '35001'
Field of 3 characters: 'ABC'
Field of 1 character: '4'
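Fixed-length parsing amounts to slicing by absolute offsets. The following is a hypothetical sketch in plain Python (not data model syntax); the field names and widths mirror the record example above.

```python
# A 1-character tag followed by fields of 5, 3, and 1 characters,
# with no delimiters needed. Names are illustrative.
FIELD_WIDTHS = [("Tag", 1), ("Field1", 5), ("Field2", 3), ("Field3", 1)]

def parse_fixed(record: str) -> dict:
    """Slice a fixed-length record into named fields by offset."""
    fields, pos = {}, 0
    for name, width in FIELD_WIDTHS:
        fields[name] = record[pos:pos + width]
        pos += width
    return fields
```

Applied to the record above, parse_fixed("D35001ABC4") yields the tag 'D' and the fields '35001', 'ABC', and '4'.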

Variable Length Record

A variable length tag item uses delimiters to denote the end of one
field and the start of the next. You determine the minimum and
maximum number of characters to be used in each field. If there is
no data available for a particular field, the field’s two delimiters
appear next to each other with no spaces between them. (See Field
4 in the example below.)


Record: BEG*00*NE*00123**010197

Tag: 'BEG'   Field 1: '00'   Field 2: 'NE'   Field 3: '00123'
Field 4: (empty)   Field 5: '010197'
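Delimiter-based parsing of the record above can be sketched in plain Python (illustrative only): the empty Field 4 appears as two adjacent '*' delimiters and parses to an empty string.

```python
# Split a variable-length record on its element delimiter.
# Adjacent delimiters produce an empty string for the absent field.
def parse_variable(record: str, delim: str = "*") -> list:
    """Return the tag followed by the fields of the record."""
    return record.split(delim)
```

For the example record, the split yields ['BEG', '00', 'NE', '00123', '', '010197'], with the empty string marking Field 4.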

Container Items

Like a tag, a container is used to group two or more defining items.
Unlike a tag, a container does not include a tag (or match value) at
the beginning, and in its absence a placeholder in the data stream is
usually required. For example, in the X12 translation session,
where a composite element is used to determine a measurement
based on data within the input stream (height, width, and length,
for MEA04), it is defined using a container in Application
Integrator™.
Depending on your standards implementation, containers are not
used as often as the other data modeling items. A list of container
item types found in the access models supplied by Application
Integrator™ are described in the appendices of each of the
standards implementation guides e.g. ASC X12 Standards Plug-In
User’s Guide.

Group Items

When two or more items have the ability to repeat or “loop,” you
use the group item to define this characteristic. The group item
does not reference any data in the input or output and, therefore,
has no value associated with it. For example, when an invoice
contains a series of individual line items, each line item is
characterized as a record and the records are grouped together by a
group item.


Relationship of Data Model Items

Organizing these data model items–defining, tag, container, and
group–in a hierarchy further defines the data structure.
The highest level of data is the parent data model item. The next
level of data is the child data model item. Children on the same
hierarchical level are called siblings.
In the example below, “Heading_Record” is the parent item and
“Heading_Document_Number” and “Heading_Document_Date”
are siblings to each other.

Parent Heading_Record

Child Heading_Document_Number

Child Heading_Document_Date

Group items indicate an association among child items (tags and


defining items). Tag items further define a parent-child
relationship. Defining items cannot have any child items.
The hierarchy of the data model determines the processing flow as
well. See the “Understanding Environments (Map Component
Files)” section for more information on process flow. See Section 4.
Creating Map Component Files (Environments) for further details.

Source and Target Data Models

The four data model items–defining, tag, group, and container–are
used to describe the structure of the input data in the source data
model and the structure of the output data in the target data model.
These two data models also contain actions to be performed on the
data to correctly map it from the source to the target.


In addition to creating a data model from “scratch,” you can open
an existing data model, modify the items and rules, and save the
model under a new name to create a new data model. This is
described in Section 5. Creating Data Models for EDI and
Application Data. As you define the data structure, attributes
(such as size, format, and occurrence) of each field or
record/segment are specified. GXS supplies data model templates
for the major public standards, including ASC X12, UN/EDIFACT,
and TRADACOMS.
Both the input (source) and output (target) data require a data
model. Usually, the input data can be parsed using one source
data model and the output can be constructed using one target data
model. However, in some cases, multiple source or target data
models are involved in a single transaction.
For example, the public standard X12 utilizes enveloping segments
(records) to enclose documents, which are part of the data
structure. (See the following figure.) Application Integrator™ X12
implementation utilizes one data model to parse/construct the
envelope segments, and another data model to process the
documents contained within the envelopes.

[Figure: Application Integrator™ Data Model 1 parses the X12
enveloping segments; Application Integrator™ Data Model 2
processes the data contained within the envelope.]


The Access Model

Each data model must be associated with an access model. The
access model contains a definition of the data model item types
available for the defining, tag, and container items that are to be
associated within this data model. The access model information
describes the items that are to be parsed (input) or constructed
(output) in the data streams. (The group type is always available,
although it is not specifically described in the access model.)
For example, a data model item named “InvoiceDate” could be
assigned an item type of “DateFld” in the source data model. On
examining the access model associated with the data model, we
find that item type “DateFld” is defined by an Application
Integrator™ function #DATE where either spaces or all zeros in the
data field are valid.
Application Integrator™ supplies access models for the standards
implementation. See Source and Target Data Models for a list of
these access models and for more information on the item types in
these files.

Understanding the Role of the Access Model

Each data model must be associated with an access model. The
access model contains a generic definition of each type of data
model item for which data is parsed or constructed. (The access
model does not define the group item.)
Application Integrator™ supplies access models per standards
implementation.

Access Model Standards


OTFixed.acc Generic fixed length data access model
OTX12S.acc ASC X12 source access model
OTX12T.acc ASC X12 target access model
OTEFTS.acc UN/EDIFACT source access model
OTEFTT.acc UN/EDIFACT target access model
OTANAS.acc TRADACOMS source access model
OTANAT.acc TRADACOMS target access model


Note: Other access models are installed with each standards


implementation. These models are used by the basic data models
and for customized work. Access models must be installed in the
Models directory specified in Trade Guide’s System Configuration
screen.

The access model sets three conditions for each item type: the pre-
condition, the base, and the post-condition. The pre-condition
describes any rules about the data that precedes this item, for
example, a leading delimiter or “tag.” The base describes the value
or character set allowed for this item, for example, a set of
alphabetic characters A-Z and a – z. The post condition describes
any rules about the data that follows this item, for example, a
trailing delimiter.
If you review any of the access models supplied by Application
Integrator™, you will see these item type specifications in the
format:

<Item Type>= Pre-Condition Base Post-Condition
Example:
ElementA= (Elem_delim) ^(Alpha) ?
Where:
ElementA is the name assigned to an item type. This name appears
in the Access Type list box when defining data model items. Its
meaning is established by its complete access model definition
(pre-condition, base, and post-condition).
(Elem_delim) refers to a pre-condition delimiter set earlier in the
access model.
^(Alpha) is the base condition. The caret (^) determines two things:
1) The value parsed is to be passed back to the data model and


2) The item type should appear in the Access Type list box when
this access model is associated with a data model. (Alpha) refers to
another non-display base element in the access model, which sets a
range of acceptable alphabetic values. A base value preceded by a
pound sign (#), such as #CHARSET, refers to access model
functions that precisely describe the data with which they are
associated.

Note: The caret (^) must appear in front of the base condition for it
to appear in the Access Type list and be used in the data model.

The question mark (?) in the post-condition is a wild card indicating
that all characters are valid.
Access model statements specify the following information for each
data model item that can parse or construct data:

Tag Type Sets the size of the tag and the post delimiter.
Defining Sets the pre-condition value (delimiter before data),
Type the character set, and any special formatting.
Container Sets the base value to CONTAINER. See the
Type description of each “composite” item in the
appropriate standards manual for examples.

When defining each data model item in your data model, you will
specify an item type. The possible list of item types available is
based on the access model you associated with the data model. The
complete set of attributes associated with each data model item
(such as, the possible format or maximum occurrence) is also
related to the item type.
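The pre-condition / base / post-condition triple can be pictured as a small matcher. The following is a hypothetical illustration in plain Python with regular expressions, not actual access model syntax; ELEMENT_A loosely mirrors the ElementA example above, with a leading element delimiter and an alphabetic base.

```python
import re

# Hypothetical model of an item type as a (pre, base, post) triple.
ELEMENT_A = {
    "pre":  re.escape("*"),   # pre-condition: leading element delimiter
    "base": "[A-Za-z]+",      # base: alphabetic character set (Alpha)
    "post": "",               # post-condition '?': any character may follow
}

def match_item(stream: str, item: dict):
    """Consume the pre-condition and base from the stream; return
    the parsed value and the remainder of the stream."""
    m = re.match(item["pre"] + "(" + item["base"] + ")", stream)
    if m is None:
        raise ValueError("item does not match the stream")
    return m.group(1), stream[m.end():]
```

For example, match_item("*ABC*123", ELEMENT_A) consumes the delimiter and the alphabetic value 'ABC', leaving '*123' for the next item.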

Pre-Condition Values

The following is a list of some pre-condition values. Certain values
may not apply to your standards implementation.

Pre- Description
Condition
Elem_delim Defined by the access function
#SECOND_DELIM or the data model function
SET_SECOND_DELIM.

Note: Can also be a post-condition.

Comp_delim Defined by the access function #THIRD_DELIM


or the data model function SET_THIRD_DELIM.

Note: Can also be a post-condition.

Seg_label Defines the character set as A – Z, 0 – 9, and the


space character, allowing one through three
repetitions, followed by an element delimiter
(Elem_delim).
Rec_code Defines the character set as space through tilde
(~), allowing one through five repetitions,
followed by an element delimiter (Elem_delim).

Note: Rec_code in OTFixed.acc allows 1 – 15


repetitions, not followed by an element
delimiter.

Base Values

The following is a list of some base values. Certain values may not
apply to your standards implementation.

Base Value Description


#FIRST_DELIM Access model function; causes
character to be read and
compared to the value set with
function #SET_FIRST_DELIM or
the data model function
SET_FIRST_DELIM. This
character is removed from the
active character set.

#SECOND_DELIM, Access model functions that
#THIRD_DELIM, work in the same manner as
#FOURTH_DELIM , #FIRST_DELIM, for the second,
#FIFTH_DELIM third, fourth, and fifth delimiter,
respectively.
#CHARSET Access model function; defines
the character set as space
through tilde (~) unless
overwritten by the data model
function SET_CHARSET.
#LOOKUP Access model function; performs
automatic value verification
each time the data model item is
referenced in the data model.
The character set is defined by
the #SET_CHARSET access
function.
#DATE/#DATE_NA Access model function; verifies a
valid month, day of month, and
year. Uses a default format or a
format defined in the data
model. #DATE_NA will parse
or construct a date of all zeros or
spaces.
#TIME/#TIME_NA Access model function; verifies a
valid hour, minute, and second.
Uses a default format or a
format defined in the data
model. #TIME_NA will parse or
construct a time of all zeros or
spaces.

#NUMERIC/#NUMERIC_NA Access model function; defines
the character set as zero through
nine, decimal notation character
(.), comma (,), dollar sign ($),
Uses a default format or a
format defined in the data
model. #NUMERIC_NA will
parse or construct a numeric
value of all zeros or spaces.
CONTAINER Defined by the child data model
items. Base component for item
type Composite.
TAG Only the pre- and post-
conditions are used when a data
model item has a base value of
TAG; it is defined by the child
data model items of the tag item.
Base component for item types
Segment, FixedLgthRecord,
LineFeedDelimRecord, and
VariableLgthRecord.
RECORD Will write out all trailing
optional fields with spaces. You
do not have to literally assign a
space to each DMI.


Post-Condition Values

The following is a list of some post-condition values. Certain
values may not apply to your standards implementation.

Value Description
Seg_term Defined by the access function #FIRST_DELIM
or the data model function SET_FIRST_DELIM.
RecordDelim Defined by the access function #FIRST_DELIM
or the data model function SET_FIRST_DELIM.
For a list of all data model item types, refer to the appendices of
the Application Integrator™ standards implementation manuals,
for example, the ASC X12 Standards Plug-in User’s Guide.

Parsing Blank or Empty Records

Data models must be modified to enable them to read through
empty records, that is, records that contain only the record
delimiter character (line feed). For Application Integrator™
versions 3.0 and greater, the items LineFeedDelimContainer and
AnyCharO have been added to the OTFixed.acc access model.
Use the OTFixed.acc access model with the following examples.
Use the following input data where (l/f) represent a single
character, the line feed:
1AB(l/f)
2ABCD(l/f)
(l/f)
4DEFG(l/f)
(l/f)
Example 1: In Application Integrator™ versions prior to 3.0, the
following snippet of the data model would parse and display each
record:
Init {
[]
SET_FIRST_DELIM(10)
}*1 .. 1
Group {
Record { LineFeedDelimRecord ""
Field { AnyChar @0 .. 10 none }*0 .. 1
[]
SEND_SMSG(1, STRCAT("READ: ", Field))
}*0 .. 10
}*1 .. 1


Example 2: With Application Integrator™ version 3.0 or greater,


the container LineFeedDelimContainer is used in place of the tag
LineFeedDelimRecord. The change to processing is — if the Tag
does not have a MatchValue defined and no children are parsed,
then the access model post-condition (the line feed character) is not
read. However, with a container, even if no children of the
container are parsed, the post-condition is read:
Init {
[]
SET_FIRST_DELIM(10)
}*1 .. 1
Group {
Record { LineFeedDelimContainer
Field { AnyChar @0 .. 10 none }*0 .. 1
[]
SEND_SMSG(1, STRCAT("READ: ", Field))
}*0 .. 10
}*1 .. 1
When using the access model COUNTER function to automatically
count the number of LineFeedDelimContainer items parsed, use
the defining item AnyCharO (Any Character Optional) in place of
AnyChar. AnyChar returns error code 171 (no children parsed)
back to the container, which prevents COUNTER from being
incremented for empty containers. AnyCharO returns an error
code of 0, so the COUNTER for LineFeedDelimContainer is always
incremented.
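The counting behavior described above can be sketched in plain Python (illustrative only, not the access model COUNTER function itself): split the stream on the line feed delimiter, keep the empty records, and count every record, mirroring the AnyCharO case where empty containers still increment the counter.

```python
# Split a stream on the line feed delimiter and count every record,
# including records that contain no data before the delimiter.
def parse_records(data: str):
    """Return the list of line-feed-delimited records and their count."""
    records = data.split("\n")
    if records and records[-1] == "":   # drop text after the last delimiter
        records.pop()
    return records, len(records)
```

For the sample input above, this yields five records, two of which are empty, and a counter of 5.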


Forcing Blank or Empty Records

Data models must be modified to enable them to write out empty
records/elements, that is, records that contain only the record
delimiter character (line feed) or empty fields within the record.
The items LineFeedDelimDefaultRecord and
LineFeedDelimContainer are available in the OTFixed.acc access
model.
Use the OTFixed.acc access model with the following examples.
Output:
HDRABC     DEF(l/f)

Note the spaces between ABC and DEF.


Example 1: In Application Integrator™ version 5.2, the following
snippet of the data model would write the above output record:
Init {
[]
SET_FIRST_DELIM(10)
}*1 .. 1
Group {
Record { LineFeedDelimDefaultRecord "HDR"
Field1 { AlphaFld @3 .. 3 none
[]
Field1 = “ABC”
}*0 .. 1
Field2 { AlphaFld @5 .. 5 none }*0 .. 1
}*0 .. 1
Field3 { AlphaFld @3 .. 3 none
[]
Field3 = “DEF”
}*0 .. 1
}*1 .. 1
}*1 .. 1


Associating Input Data with Output Data

The second part of data modeling uses variables and rules to map
data between the input (source) and output (target) data models.
Workbench provides two graphical tools known as RuleBuilder
and MapBuilder for defining rules and associating data model
items with variables.

Variables

A variable is a named area in memory where a value is stored and is


referenced by a label. By referring to the name, you can access the
value to use in an evaluation, computation, string manipulation,
and assignment or to pass as an argument to a function during a
translation session. There are five types of variables:
1. Data model item (Defining Type)
2. Temporary (VAR–>)
3. Array (ARRAY–>)
4. Environment (keyword and user-defined)
5. Database (state: substitutions and domain)

The types of variables differ among one another in:


Scope (when they can be referenced or updated)
Length of existence
Number of values associated
Ability to maintain instances along with the value


Rules

A rule in its most basic form is an assignment of a value from an


input field to a variable, and then an assignment from that variable
to an output field. From this simple form, you can create more
complex rules to manipulate the data as necessary. See the figure
below for an example of using rules and variables:

Input Field (03/12/95) -> Variable (03/12/95) -> Date Calc. (+5 Days) -> Output Field (03/17/95)

In this example, a date function was used to accept the input date
and calculate a new date.

Note: The date calculation function (DATE_CALC) can be used before or after storing the date in the variable field.
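The date arithmetic in the example above can be sketched in plain Python. DATE_CALC itself is an Application Integrator™ function; this sketch only illustrates the underlying calculation, assuming MM/DD/YY dates:

```python
from datetime import datetime, timedelta

def date_calc(date_str, days, fmt="%m/%d/%y"):
    """Add a number of days to a date string, mimicking the +5 Days step."""
    return (datetime.strptime(date_str, fmt) + timedelta(days=days)).strftime(fmt)

# input field -> variable -> date calculation -> output field
output_field = date_calc("03/12/95", 5)
```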

You can use rules for:


Moving data from the source to the target data model
Writing data to or utilizing data from a database (such as, a
trading partner profile, cross-reference, or code verification
from the Profile Database, Administration Database values,
or user-defined database values)
Altering the natural processing flow
Absent handling (defaulting data)
Error handling
Compliance checking (X12: exclusion, paired, required,
conditional, and list conditional)
Changing character sets, triad (thousand position separator
character), release and delimiter characters
Obtaining/modifying stream position
Computations (math, data)
Manipulating a string of data - concatenation, substring,
case conversion, and so on.
Converting dates
Logging data
Placing conditional expressions around selected rules


Environments (Map Component Files)
The final step to map the data is to create an environment or map component file (hereafter referred to as MCF). An MCF defines all the pieces that need to be brought together to configure the translator to process in a certain way. An MCF consists of components that control what data is to be translated, such as the input/output files, and the source, target, and access models to be used.
You can attach another MCF definition (using the keyword
ATTACH) to reconfigure the translator during processing. MCF
files are given the suffix “.att” (for example, OTRecogn.att,
OTEnvelp.att) and are referred to as “map component files.”
Several examples of the functions of a translation environment are:
Processing fixed length data
Processing variable length data
Bypassing data
Generating acknowledgments
Recognizing data
Enveloping data
Committing output streams

Multiple environments are typically brought together to complete a translation session. By using multiple environments, you can do such operations as: making use of generic models (for enveloping, de-enveloping, bypassing, and acknowledging), dynamically reconfiguring the process flow, and constructing multiple output streams from one input stream. During a translation session, the environment can be changed through the use of different map component files.


Other Data Modeling Components
During the translation session, other components of Application Integrator™ provide information that is crucial to processing, tracking, or reporting on activities; among these components are the Profile Database and the Administration Database.

Profile Database
The Profile Database is a resource of values that can be accessed during a translation session. The Profile Database stores:
Communication and trading partner profiles
Substitutions, used to replace a label with a value
Cross-references, used to replace a value with another value
Verifications, used to verify a value against a specified code
list
Refer to the Trade Guide Help System for more information on the
Profile Database.

Administration Database
The Administration Database consists of one or more files that are used to capture information from translation sessions. The Administration Database provides you with information for:
Process tracking, recording information on all translation
sessions
Archive tracking, recording information on archived
documents
Message tracking, recording each outbound document
translation along with a status
Bypass tracking, recording information on all exception data
(errors)
Refer to Section 12. Translating and Debugging for hints on how to
use the Administration Database reporting features for debugging.
Refer to the Trade Guide Help System for more information on the
setup and full reporting features of the Administration Database.


Environment Files
An environment file (given the extension “.env”) can be used to enhance the current configuration of the translator. It declares user-defined environment variables with their associated values, for example:

ACTIVITY_TRACK_SUM="DM_ActS"
ACTIVITY_TRACK_DET="DM_ActD"
MESSAGE_TRACK_IN="DM_MsgI"
MESSAGE_TRACK_OUT="DM_MsgO"
EXCEPTION_TRACK_SUM="DM_BypS"
EXCEPTION_TRACK_DET="DM_BypD"

An environment file is loaded into the translation session by using the function ENVIRON_LD( ) in the data model, as in this example statement:
[ ]
ENVIRON_LD("OTDMDB.env")

Here, ENVIRON_LD is the Application Integrator™ function and "OTDMDB.env" is the environment file that defines additional environment variables specifying the names of the Administration Database files used for message tracking and activity tracking. By placing the names of these files in this definition file, you can change them in one place and, in turn, affect all references to them.
An environment file may contain any keyword environment variables except the following, which are reserved by Application Integrator™:
INPUT_FILE
OUTPUT_FILE
S_ACCESS
T_ACCESS
S_MODEL
T_MODEL
S_CONSTRAINT
T_CONSTRAINT
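A loader for this file format can be sketched as follows. This is an illustrative Python sketch, not the actual ENVIRON_LD implementation; the parsing details (quote stripping, skipping blank lines) are assumptions, but the reserved-keyword list comes directly from the text above:

```python
# Keyword environment variables reserved by Application Integrator (from the list above)
RESERVED = {"INPUT_FILE", "OUTPUT_FILE", "S_ACCESS", "T_ACCESS",
            "S_MODEL", "T_MODEL", "S_CONSTRAINT", "T_CONSTRAINT"}

def load_env_lines(lines):
    """Parse KEY="value" lines into a dict, rejecting reserved keywords."""
    env = {}
    for line in lines:
        line = line.strip()
        if not line or "=" not in line:
            continue                       # skip blank or malformed lines
        key, _, value = line.partition("=")
        key = key.strip()
        if key in RESERVED:
            raise ValueError(f"{key} is reserved and cannot be set in a .env file")
        env[key] = value.strip().strip('"')
    return env

env = load_env_lines(['ACTIVITY_TRACK_SUM="DM_ActS"', 'MESSAGE_TRACK_IN="DM_MsgI"'])
```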


Translation Session Files and Trace Logs
Translation Session File
The Translation Session ID (tsid) file contains the next session number to be used. Each time a session is run, the tsid file is updated. The session number is used to create unique administration records and trace log filenames.

Caution: You must not change the tsid file, or administration reporting may become corrupted.

For Unix and Linux users, the format of the translation session
control number is user-definable. See the Control Server Installation
Guide for details on how to set the OT_SNFMT environment
variable to do this.
For Windows® 2000 and Windows® XP users, the session number
has the format 6C (six places long, numeric only).

Trace Logs
The trace log is a log of the translation process. It shows the process
flow through the data models, including the assignment of
variables and their associated values, conditions and actions, and
map component files. The trace log (or trace) can be set to various
levels from minimal to full details. The trace log provides an
immediate and detailed debugging tool.
When a translation is run, depending on the data model functionality and process, the system can automatically create up to three trace logs:

<queue_ID>.<session#>.log   Main trace log providing feedback on the translation session; the amount of feedback provided is user-definable. Refer to Section 12. Translating and Debugging for details.
e<session_no>trace.log      The error log.
s<session_no>trace.log      The session log.
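The naming scheme above can be sketched as a small helper. This is illustrative Python, not part of the product; it assumes the Windows 6C session format (six numeric places, zero-padded), and the filename patterns come from the table above:

```python
def trace_log_names(queue_id, session_no):
    """Build the three trace log filenames for a translation session.

    The session number is zero-padded to six places, matching the
    Windows 6C format (six places long, numeric only).
    """
    s = f"{session_no:06d}"
    return {
        "main": f"{queue_id}.{s}.log",   # main trace log
        "error": f"e{s}trace.log",       # error log
        "session": f"s{s}trace.log",     # session log
    }

logs = trace_log_names("Q1", 42)
```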


Include Files
Include files contain only rules that can be called from within data models or style sheets.

Include files must end with the .inc extension and can be user-created. Below is a sample include file (UserDefined.inc) and its declaration of a PERFORM:
DECLARE Perform_Name () {
[]
…Rules to be performed each time
Perform_Name is called from a data model
}
Where:
Perform_Name is a name given to the PERFORM.

Data models must contain an INCLUDE statement to use user-defined PERFORMs. This declaration must be at the very top of the data model:
DECLARATIONS {
INCLUDE "UserDefined.inc"
}

To use PERFORMs within a data model, the following rule would be used for the above examples:
Initialization {
[]
PERFORM("Perform_Name")
}*1 .. 1

The rules in Perform_Name are processed each time the rules for
the DMI Initialization are processed.


Understanding Environments (Map Component Files)
An environment consists of components that control what data is to be translated, such as the input/output files, the source and target data models, and the access models to be used.

In a Workbench application, an environment is referred to as a map component file, which is attached to the translator. The name comes from attaching another environment definition (using the data model keyword ATTACH) to reconfigure the translator during processing. Environment files are given the suffix “.att” (for example, OTRecogn.att and OTEnvelp.att, two of the standard files used to de-envelope and envelop data).

Purpose of Environments
Several examples of the use of translation environments (map component files) are:
User-defined ASC X12 mappings
User-defined UN/EDIFACT mappings
Processing fixed length data
Processing variable length data
Bypassing data
Generating acknowledgments
Recognition of data
Enveloping of data
Committing output streams
A map component file must specify at least a source or a target
definition, or it can specify both. The data model structure and
rules define the processing to be performed within the
environment, as the following illustration shows:


[Figure: An environment contains a source data model and a target data model, each with declarations and rules, which map data to and from variables. Input data is parsed through the source access model; output data is constructed through the target access model.]

The data structure may contain only group items, with no input or
output occurring, or just rule processing logic. Information placed
on variables in the parent and grandparent environments can be
referenced in child environments.

Environment Sequence of Parsing
The following list describes the parsing sequence:
1. The environment file (for example, OTRecogn.att) is opened and read in.
2. The following may be set by the environment (the map
component file). You must use the value entry boxes provided
in the Map Component File Attributes dialog.
input file (INPUT_FILE)
output file (OUTPUT_FILE)
source access model (S_ACCESS) – available only when
Source Data is a traditional model or an XPATH model
source data model (S_MODEL)
source constraint (S_CONSTRAINT) – available only when
Source Data is a stylesheet and must have a suffix of “.xsl”
target access model (T_ACCESS) – available only when
Target Data is a traditional model
target data model (T_MODEL)
target constraint (T_CONSTRAINT) – available only when
Target Data is a stylesheet and must have a suffix of “.xsl”
You may also specify the following in the Environment
Variables section while creating or editing the Map Component
File.


trace level (TRACE_LEVEL)


find match limit (FINDMATCH_LIMIT)
substitution key prefix (HIERARCHY_KEY)
cross-reference key prefix (XREF_KEY)
code list verification key prefix (LOOKUP_KEY)
any user-defined environment variable
All other environment variables (user-defined) are set when
they are referenced in:
The definition of other keywords or user-defined variables.
For example:
SESSION_NO="$$"
OUTPUT_FILE="(SESSION_NO).tmp"
Data model rules. For example:
VAR->SessionNo=GET_EVAR("SESSION_NO")
3. The input file (INPUT_FILE) is opened and read into memory.
4. The output file (OUTPUT_FILE) is created or opened in append
mode. (To open in append mode, a plus sign (+) is added to the
end of the filename when it is entered.)
5. The source access model (S_ACCESS) is opened and parsed
when the S_MODEL is not a style sheet. Processing stops upon
errors being reported.
6. The source data model (S_MODEL) is opened and parsed.
Processing stops upon errors being reported. This includes:
Data model syntax checking
Verifying references to source access model items when the
S_MODEL is not a style sheet
Declarations of first-time references to temporary variables and arrays
Verifying references to perform declares located in include
files (.inc).
7. If S_MODEL references a style sheet (ends with .xsl) and if a
constraint file is specified in S_CONSTRAINT (also ending
with .xsl), then the constraint style sheet is applied. Processing
stops upon errors being reported.


8. Source mode processing occurs – data model or style sheet.


Processing stops upon errors being reported.
9. The target access model (T_ACCESS) is opened and parsed
when the T_MODEL is not a style sheet. Processing stops upon
errors being reported.
10. The target data model (T_MODEL) is opened and parsed.
Processing stops upon errors being reported. This includes:
Data model syntax checking
Verifying references to target access model items
Declarations of first-time references to temporary variables and arrays
Verifying references to perform declares located in include
files (.inc).
11. Target mode processing occurs. Processing stops upon errors
being reported. If T_MODEL references a style sheet (ends with
.xsl) and if a constraint file is specified in T_CONSTRAINT
(also ending with .xsl), then the constraint style sheet is
applied.
12. Processing stops upon errors being reported.
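The twelve steps above can be summarized as an ordered driver. This Python sketch is illustrative only (the real translator stops on the first reported error, which is omitted here); the step names and the `mcf` dict are assumptions, but the ordering and the .xsl conditions come from the sequence above:

```python
def run_environment(mcf):
    """Illustrative driver for the environment parsing sequence."""
    steps = []
    steps.append("read MCF")                            # step 1
    steps.append("resolve environment variables")       # step 2
    steps.append("open INPUT_FILE")                     # step 3
    steps.append("open OUTPUT_FILE")                    # step 4
    if not mcf.get("S_MODEL", "").endswith(".xsl"):
        steps.append("parse S_ACCESS")                  # step 5: skipped for style sheets
    steps.append("parse S_MODEL")                       # step 6
    if mcf.get("S_MODEL", "").endswith(".xsl") and mcf.get("S_CONSTRAINT"):
        steps.append("apply S_CONSTRAINT")              # step 7
    steps.append("source processing")                   # step 8
    if not mcf.get("T_MODEL", "").endswith(".xsl"):
        steps.append("parse T_ACCESS")                  # step 9
    steps.append("parse T_MODEL")                       # step 10
    steps.append("target processing")                   # steps 11-12
    return steps

steps = run_environment({"S_MODEL": "src.mdl", "T_MODEL": "tgt.mdl"})
```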

Processing Flow within the Model
Within a source or target data model, rules processing flows down the hierarchy from parent to child (starting with the first child encountered) and then back to the parent, as the following illustration shows:

Data Model Structure        X12 Example     Processing Order
Group (Parent)              Initialization  (8)
  Group (Parent/Child)      Document 1      (7)
    Tag 1 (Parent/Child)    BIG             (3)
      Defining (Child)      BIG_01          (1)
      Defining (Child)      BIG_02          (2)
    Tag 2 (Parent/Child)    N1              (6)
      Defining (Child)      N1_01           (4)
      Defining (Child)      N1_01           (5)

In each case, the current status is returned from the child to the
parent data model item. The process moves down the data
structure from child to child; once the children are read, processing
returns to the parent item, then proceeds to the next parent item.
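The processing order in the table (children complete before their parent, and status returns upward) is a post-order, depth-first traversal. A sketch, using the X12 example above (the tuple representation is illustrative, not data model syntax):

```python
def processing_order(item, order=None):
    """Post-order walk: process children first, then return status to the parent."""
    if order is None:
        order = []
    name, children = item
    for child in children:
        processing_order(child, order)
    order.append(name)     # the parent completes after its children
    return order

# The X12 example from the table above
model = ("Initialization", [
    ("Document 1", [
        ("BIG", [("BIG_01", []), ("BIG_02", [])]),
        ("N1", [("N1_01", []), ("N1_01", [])]),
    ]),
])
order = processing_order(model)
```

The resulting list reproduces the numbered order in the table: BIG_01 first, Initialization last.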

Single Environment Process Flow
In a single-environment process, once the map component file input and output streams are read or opened, environment processing begins with the source data model. Once source processing is completed, target data model processing occurs. Once target processing is completed, the environment ceases to exist.

[Figure: Environment layer containing the source model followed by the target model.]

Multiple Environments
Multiple environments are typically brought together to complete a translation session. During a translation session, the environment is changed through the use of different map component files.
These environments (map component files) are called by the use of
the data model keyword ATTACH from within the data model.
Multiple environments allow for:
Use of generic models and modular modeling.


− Includes enveloping, de-enveloping, bypassing errors, and generating acknowledgments
− Once written, eliminates rewriting, testing, and debugging among multiple translations
− Allows one-time modifications for multiple common translations
Ability to dynamically reconfigure the translator to parse the
input stream (eliminating the need for a pre-parsing
program).
− Once identified, X12 utilizes the X12 syntax models
− Once identified, UN/EDIFACT utilizes the
UN/EDIFACT syntax models, and so forth through
the various standards
Ability to dynamically reconfigure the translator to
construct the output stream.
− Determines the recipient from a batch of application
documents
− Determines the appropriate target data model for
the standard, and implementation within the
standard
Ability to parse the input stream once, and construct
multiple output streams.
If a file contains multiple documents to be processed:
Using one map component file, the source processing has to
parse in all documents before switching over to the target.
The target then outputs from memory all of the translated
documents.
Using multiple map component files, optionally, the parent
environment could repeat the child environment for each
document processed. Then the child environment reads one
document and outputs one document per pass through the
environment.
The following illustration shows the use of multiple map
component files in the processing flow for an X12 application. The
data model keyword ATTACH calls a second environment, which
in turn calls another environment. In each case, the current status
is returned from the child environment, to the parent environment,
just like child-to-parent data model items within a data model.


[Figure: Recognition Environment (Source Model only) -> ATTACH -> X12 De-Enveloping Environment (Source Model only) -> ATTACH -> X12 Message Processing Environment (Source Model and Target Model) and X12 Acknowledgment Environment (Target Model only).]

Refer to the Changing Environments During a Translation section for details on how to use ATTACH to call a second environment.


Changing Environments During a Translation
For a new environment to be introduced into the translation session, the keyword ATTACH must be encountered in the data model. ATTACH requires one argument: the map component file name.

An environment can be attached in the rules for either the source or target data model using either a specific name or a variable, allowing for the substitution of map component files.
[ ]
ATTACH "OTX12Env.att" (specific file)

[ ]
ATTACH VAR->map_component_filename (variable name)
When ATTACH is encountered during a translation session, the
current environment’s processing stops. The map component file
associated with the ATTACH statement is opened and processing
begins. Processing continues in this environment until it completes
successfully, an error is returned, or the data model keyword
ATTACH is encountered again.
Processing returns to the parent environment immediately
following the data model keyword ATTACH. The error code
returned to the parent environment can be captured and errors
handled, as per the following example:
[ ]
ATTACH "OTX12NxtStd.att"
[ ]
VAR->RtnStatus=ERRCODE( )
[VAR->RtnStatus > 0]
<actions to recover from error>

Note: It is good modeling practice to follow these recommendations for error handling:
1. Capture the returned status of the ATTACH data model keyword
with a new rule. This new rule must be defined immediately
following the ATTACH statement with a null condition.
2. Define actions in the new rule to recover from any possible
non-zero status return (error). If ATTACH returns an error, the
balance of the actions in the current rule will not be executed.


To change environments using an ATTACH statement


1. Open the data model for which you want to add an ATTACH
statement.
2. Select the data model item for which you want to edit the rules.
3. From the Model Editor, move focus to the right to see RuleBuilder. The RuleBuilder window displays, as shown below:

4. Select the appropriate mode (Present, Absent, Error).


5. Place the pointer in the Rules Edit workspace.
6. If necessary, define a null condition or conditional expression.
7. Select the Built-ins view, then select ATTACH from the Data
Model Functions list.
8. Complete the function by typing the name of a new map
component file (.att) to be used.
9. From the RuleBuilder toolbar, choose Apply to enter the
changes to the data model. RuleBuilder will now look as
follows:


10. From the File menu choose Save (or use the Save icon on the
tool bar) to save the changes made to your model.

Common ATTACH Errors Encountered
During translation processing, the following errors are commonly found when there are problems with the map component file definition. Refer to Appendix E. Application Integrator Runtime Errors in the Workbench User’s Guide-Appendix for a complete description of these errors.

Error Code Description


133 Source Access Syntax Error
134 Source Data Model Syntax Error
135 Target Access Syntax Error
136 Target Data Model Syntax Error
137 ATTACH Error
138 Data Model Item Not Found
139 Data Model Item - No Value Found
145 Parse Environment Error
160 Error Opening Infile
161 Error Opening Outfile
169 Data Model Type Not Found In Model
170 Command Line (-at) ATTACH Error

172 Data Model Type Not Found in Access Model
173 Improper Access Model Item Definition
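When capturing the status returned by ATTACH with ERRCODE(), it can help to translate the common codes into readable messages. The following Python sketch is illustrative (the dict and helper are not part of the product); the codes and descriptions come from the table above:

```python
# Common ATTACH-related error codes, taken from the table above
ATTACH_ERRORS = {
    133: "Source Access Syntax Error",
    134: "Source Data Model Syntax Error",
    135: "Target Access Syntax Error",
    136: "Target Data Model Syntax Error",
    137: "ATTACH Error",
    138: "Data Model Item Not Found",
    139: "Data Model Item - No Value Found",
    145: "Parse Environment Error",
    160: "Error Opening Infile",
    161: "Error Opening Outfile",
    169: "Data Model Type Not Found In Model",
    170: "Command Line (-at) ATTACH Error",
    172: "Data Model Type Not Found in Access Model",
    173: "Improper Access Model Item Definition",
}

def describe_attach_error(code):
    """Translate a returned status into a readable message (0 means success)."""
    if code == 0:
        return "OK"
    return ATTACH_ERRORS.get(code, f"Unknown error {code}")
```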

Map Component Files for Enveloping/De-enveloping
Application Integrator™ provides map component files (and associated models) for enveloping and de-enveloping data from/to public standards. During the course of data modeling, consider the use of these map component files for your application.

Processing Using OTRecogn.att (De-enveloping)
The OTRecogn.att file is a map component file typically used for processing public standards into application data.

The rule logic necessary to perform the extraction of the values from the input stream is already included in OTRecogn.mdl. It also automatically sets the HIERARCHY_KEY keyword environment variable. To use this feature, you must define the trading partner in the Profile Database by using the Trading Partner option from the Trade Guide Profiles menu.

Recognizing the Trading Partner
The process of recognizing which trading partner needs to be read from the Profile Database is handled with the generic model OT<std>Bgn.mdl, called from OTRecogn.mdl. This model is designed to allow multiple interchanges from multiple trading partners within the input file.
To define the trading partner at the interchange level, the model
sets an environment variable XREF_KEY to the value “ENTITY.”
In the simplest of terms, this means that whenever the translator
attempts to do a cross-reference from the database, it looks for a
line or record within the database that starts with “ENTITY,” until
the environment variable XREF_KEY is changed to another value.
Once the environment variable XREF_KEY is set, the model uses
the functions STRCAT and STRTRIM to concatenate the Sender’s
Qualifier, Sender’s ID, Receiver’s Qualifier, and the Receiver’s ID
that it has read from the input file, in that sequence, and assigns the
results to a temporary variable VAR->OTICRecognID. At this
point a cross-reference is performed using the function XREF and
passing in some required parameters as per the function:


XREF("ENTITY", VAR->OTICRecognID, &VAR->OTICHierarchyID, "N")

Where the arguments are:
"ENTITY"          Category of XREF, or where to look in the Profile Database
OTICRecognID      Value that was concatenated together to perform the actual cross-reference
OTICHierarchyID   Variable name to which the return value of the cross-reference will be assigned
Y/N               Determines whether (Y) or not (N) to turn on inheritance
The model logic accesses the Profile Database, searching for a line
or record that starts with “ENTITY” and has the concatenated
value assigned to the variable VAR->OTICRecognID. For example:
"ENTITY|X|ENTITY|C|<sender qualifier>~<sender id>~<receiver qualifier>~<receiver id>" "TP|NAME"

Where:
<sender qualifier>    02
<sender id>           SENDER ID
<receiver qualifier>  ZZ
<receiver id>         RECEIVER ID

"ENTITY|X|ENTITY|C|02~SENDER ID~ZZ~RECEIVER ID" "TP|NAME"

Therefore,
VAR->OTICRecognID = 02~SENDER ID~ZZ~RECEIVER ID
The “ENTITY” clause is the XREF lookup key and value; the
“TP|NAME” clause is the value to be returned to the XREF.
A return status is then tested to make sure the cross-reference was
successful.
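The recognition logic above (trim and concatenate the four IDs, then cross-reference) can be sketched in Python. The dict-based lookup here is purely illustrative; the real XREF searches the Profile Database, and the function and variable names below are assumptions:

```python
def build_recogn_id(sender_qual, sender_id, receiver_qual, receiver_id):
    """Trim the four IDs and concatenate them with '~', as the model does."""
    parts = [sender_qual, sender_id, receiver_qual, receiver_id]
    return "~".join(p.strip() for p in parts)

def xref(category, key, table):
    """Look up a key under a category; returns (status, value).

    Status 0 means the cross-reference succeeded; non-zero means it
    failed, mirroring the return status tested in the model.
    """
    value = table.get((category, key))
    return (0, value) if value is not None else (1, None)

# Illustrative ENTITY record, as saved from a trading partner profile
profile_db = {("ENTITY", "02~SENDER ID~ZZ~RECEIVER ID"): "TP|NAME"}

recogn_id = build_recogn_id("02", "SENDER ID ", "ZZ", " RECEIVER ID")
status, hierarchy_id = xref("ENTITY", recogn_id, profile_db)
```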
The “ENTITY” line/record is automatically placed in the Profile
Database when you save a trading partner profile through Trade
Guide. Depending on the standard selected, an “ENTITY” record
might exist for each level of the trading partner profile hierarchy:
IC - Interchange level
FG - Functional group level
Message - Document level
Refer to the section on de-enveloping in the appropriate standards implementation guide (for example, the ASC X12 Standards Plug-in User’s Guide) for instructions on how to use this environment.

Processing Using OTEnvelp.att (Enveloping)
The OTEnvelp.att file is a map component file typically used for processing application data into the public standards (enveloping). Unlike the processing of public standards, where generic models are provided, each application system requires customized models. When the models are created, the entity lookup logic must be included. To do this, you must define the trading partner in the Profile Database by using the Trading Partner option from the Trade Guide Profiles menu.
Refer to the section on enveloping in the appropriate standards
implementation guide (for example, the ASC X12 Standards Plug-in
User’s Guide) for instructions on how to use this environment. For
this section, ASC X12 is used as the standard being mapped to.
The logic to perform extraction, concatenation, and entity lookup,
to obtain a trading partner view into the database, needs to be
included in the custom application model. The logic is represented
below:
[ ]
;sets the cross-reference view into the database as "ENTITY"
SET_EVAR("XREF_KEY", "ENTITY")

;IDs are extracted and concatenated together
VAR->OTRecognID = STRCAT(STRTRIM(PartnerID, "T", " "),
STRCAT("~", STRTRIM(SetID, "T", " ")))

;entity cross-reference lookup
VAR->OTXRefStatus = XREF("ENTITY", VAR->OTRecognID,
&VAR->OTHierarchyID, "N")

[ VAR->OTXRefStatus != 0 ]
;entity lookup failure
EXIT 501

[ ]
;sets the trading partner's substitution view into the database
SET_EVAR("HIERARCHY_KEY", VAR->OTHierarchyID)


The concatenated lookup must be specified in the Application Cross-reference value entry box of the Outbound X12 Values dialog box exactly as it is created from the input data. (This dialog box is opened at the Message Level of the Trading Partner Profile Tree with Trade Guide.)

In the syntax example, the value entered in the Application Cross-reference value entry box is “PartnerID~SetID”, representing the trading partner’s ID/name and the document type, which may or may not be the same as the Set-ID field.

An example of this logic can be reviewed in the provided ASC X12 model, OTX12SOS.mdl. Refer to the Present mode rules on the group item “DocRead.”
Refer to the section on enveloping in the appropriate standards
implementation guide (for example, the ASC X12 Standards Plug-in
User’s Guide) for instructions on how to use this environment.

Note: It is good modeling practice for the entity lookup to include the Trading Partner name and the document type.


Traditional Data Models vs. Style Sheets
Application Integrator™ uses data models to parse and construct data, including EDI standards, flat files, and XML. Another option for processing XML data is through the use of style sheets. For a translation, a traditional data model or style sheet exists for both parsing (source side) and construction (target side). The two sides can be a model, a style sheet, or a combination of both.

Differences Between Traditional Data Models and Style Sheets
A traditional data model defines the “complete” structure of the data by arranging data model items (Group, Tag, Defining, and Container) hierarchically. Within each of these item types, mapping rules are defined. These rules make use of not only data assignments and conditional testing, but also built-in data model functions to manipulate, store, and validate the data.
When processing XML data, a traditional data model requires
several data model items to process the data. Data model items are
required for the start tag, the attributes, the value, and the end tags.
This may require an extremely large data model to handle some
XML data.
Style sheets differ from traditional data models in that a document
type definition (DTD) or schema (XSD) is referenced for the “complete”
structure definition of the XML data.
defines which elements and attributes are to be referenced for their
values on the source and which elements and attributes need to be
populated and written out on the target. So the style sheet itself
only references a subset of the elements or attributes defined in the
DTD or XSD, causing the style sheets to be much smaller in size
when compared to traditional data models.
Like traditional data models, style sheets contain the mapping rules for assignments and testing. Functions for data manipulation are also available for mapping. AI extends the basic list of style sheet functions to offer consistency with the functions available in traditional data models for accessing the AI database and variables.


Compliance Checking
The Application Integrator™ translator contains a compliance checking capability, which captures the majority of XML parsing errors. The error handling code within data models can be reduced by using the compliance checking data model functions, keywords, and keyword environment variables. The error handling code ensures that the proper error code is captured and that the natural processing flow of the translation session continues to the next element.
The following data model functions are used in compliance
checking:
DMI_INFO()
ON_ERROR()
PERFORM()
The following data model keywords are used in compliance
checking.
INCLUDE
STOP
The following keyword environment variables are used in
compliance checking:
RECOVERY
Specific information about each of these items can be found in
Appendix A. Application Integrator Model Functions in the
Workbench User’s Guide-Appendix.
The DMI_INFO() data model function obtains data model item
information associated with the specified data model item and
updates an array variable. The data model item can be a Group,
Tag, Container, or Defining item.
The STOP data model keyword is used to alter the normal
translation processing flow in the PERFORM() declarations.
Processing stops and returns to the data model with a status of
zero.


Error handling routines, which are used to capture the envelope header errors, appear in all the generic models supplied with Application Integrator™. However, the user-defined message models must be modified to include the necessary error handling code to capture errors at the message level. Modifying the user-defined models is discussed later in this section.
The PERFORM() data model function provides the ability to
modularize the data models for error handling and database
access. This is useful to applications that use an external database
rather than the Application Integrator™ Administration Database
to track information. The code for accessing the external database
can be placed in the PERFORM() procedures to replace existing
Administration Database access. The rules associated with the
PERFORM() functions are defined in an external file called an
INCLUDE file.
The INCLUDE files are declared in a data model in a Group item
labeled DECLARATIONS. As the data model is parsed, so are all
declared INCLUDE files. The INCLUDE files contain only
procedures; structure is not allowed in INCLUDE files. By having
the procedures in an external file, the same PERFORM() routines
can be shared by many data models.
The translator contains recovery routines that are enabled when an
error occurs. The purpose of the recovery routines is to allow
processing to continue instead of exiting the data model on the first
error encountered. Multiple errors can be captured and reported.
The recovery routines are activated and deactivated using the
RECOVERY keyword environment variable.
Recovery occurs during parsing of the input data. It is
implemented in both the access model and the data model. Each
data element is allowed one fault/error. When an error is
encountered, the translator performs the appropriate recovery
routine to correct the first error in the data element, reports the
appropriate error code to the data model, and sets the file position
to the start of the next item. However, if the data element contains
more than one error, the translator returns processing flow to the
data model with error code 200. Error code 200 indicates to the
data model that the data problem is unknown and the translator is
unable to recover.

The ON_ERROR data model function defines a standard ERROR mode PERFORM routine to be invoked on an item when no
ERROR mode rules have been defined. The PERFORM must be
loaded using an INCLUDE statement before it is referenced,
otherwise, a 134 parse error is returned. When the INCLUDE file
containing the ON_ERROR PERFORM declaration is loaded, it is
inherited into the child environment and can be used without
having to reload the INCLUDE file. If an INCLUDE file is loaded
into a child environment which has the same declaration
name/label, the second declaration will override the first for this
child environment only.
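For illustration only (the INCLUDE file name and PERFORM label are hypothetical, and the exact ON_ERROR() declaration syntax is documented in Appendix A), a model might declare a default error handler like this:

DECLARATIONS {
[]
INCLUDE "error_rules.inc"
[]
ON_ERROR(CommonErrorHandler)
}*1 .. 1

Any item without explicit ERROR mode rules would then invoke the CommonErrorHandler PERFORM routine when an error occurs.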
RECOVERY and ON_ERROR() are independent of each other. This
means that one can be applied without the other. RECOVERY has
to do with source access model parsing of the data. ON_ERROR
has to do with the execution of default data model error rules when
ERROR mode is not defined on the specific data model item. This
table shows the four settings of RECOVERY and ON_ERROR().

RECOVERY ON_ERROR()
Yes Yes
Yes No
No Yes
No No

During processing of inbound files, errors that occur during parsing of the envelope headers (ISA, GS, or ST in ASC X12) cause
processing of the envelope segments to stop. When Reject or
Bypass exceptions on the envelope are encountered, parsing stops
and the set action is taken on the unit based on which envelope it
is. When source errors are encountered during processing of the
messages on either inbound or outbound files, target processing
will not occur. Instead, a code can be placed on the source model
to force it to exit at a specific error code, or to continue on to the
target side by modifying the supplied source models to attach to
the target environment and generate an application file.

Data Access Parsing and Recovery
Error codes return descriptive meanings. With RECOVERY on, the file position is reset so that the next element can be read if the error is reset to zero. RECOVERY deals with Tag, Container, and Defining type items only. To enable RECOVERY, type:

VAR->OTPriorEvar = SET_EVAR("RECOVERY","Yes")
By default, in the enveloping and de-enveloping generic models,
recovery is set to “Yes”.
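Since SET_EVAR() appears to return the prior value (hence the OTPriorEvar name in the example above), a model can restore the previous setting after a sensitive block of rules; this is a sketch under that assumption, with any intervening rules omitted:

VAR->OTPriorEvar = SET_EVAR("RECOVERY","No")
VAR->OTDiscard = SET_EVAR("RECOVERY",VAR->OTPriorEvar)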
The following table shows the error codes that are returned
depending on whether recovery is set to “Yes” or “No”, and which
mode of rules are entered upon returning from the access model
with the specific error code.

Error Code   Description   Recovery "No"   Recovery "Yes"   Rule mode entered
-1 End Of File* Returned Returned ABSENT
0 OK Returned Returned PRESENT
138 Data model item Returned ERROR
not found
141 Mandatory Returned Returned ABSENT
segment/record
missing*
146 Invalid date, time, Returned Returned ERROR
or numeric string
152 Lookup ID failure Returned Returned ERROR
171 No Children found* Returned Returned ABSENT
176 Data field/element Returned Returned ERROR
too short
177 Data field/element Returned ERROR
too long
190 Mandatory data Returned Returned ABSENT
field/element is
missing*
191 Invalid character in Returned ERROR
the data
field/element

192 Missing post Returned Returned ERROR
delimiter
200 Unable to recover Returned ERROR
*Represents soft errors, as opposed to hard errors where data was processed and found to be in error.

Rules of Recovery These rules apply when the keyword environment variable
RECOVERY is set. RECOVERY processing takes place only on
defining data model items. When the translator reads a character
out of character set, end of file, or reaches the maximum
element/field size as defined, the following processing may occur:

1. If no data has been read, then read the post condition. If the
post condition is read or no post condition is defined for the
item, error 141 is returned for Tags and error 190 is returned
for Composites and Definings.
2. If the minimum size has been met and no post condition is
defined, error 0 is returned back to the data model.
3. If the minimum size has not been met and a post condition is
defined, the next character is read for the post condition. If the
post condition is read, error 176 is returned.
4. If the post condition is not read, continue reading until the post
condition, end of file or a size of 4096 is read. Return 191 if the
post condition is finally read, else return 200.
5. If maximum defined size is met, read next character for post
condition. If not defined, return 0. If defined but not present,
read till post condition, return error 177 if finally read, or
return 200 if not read.
6. If Date/Time/Numeric format, test format. If the format test
fails, return error 146.
7. If Tag or composite, test post condition. On post condition
failure, return error 192.

When an element/field contains an invalid character, the character is removed from the string of that element and tested. For
example, when an element has data as “ABC^DE” where ^ is out-
of-character set, the value ABCDE is returned and an error of 191 is
set. Defined delimiters are considered valid characters and are not
automatically removed from the string during processing.

Examples of Recovery

Example Element Action


1 Data Element1 {ElementAN @1 .. 20 none }*1 ..
Model 1
Data N2**
Result Returns error 190 to ABSENT mode
rules
2 Data Tag1 { SEGMENT “N1” }*1 .. 1
Model
Data N2*Value
Result Returns error 141 to ABSENT mode
rules of Tag1
3 Data Element1 {ElementAN @1 .. 20 none }*1 ..
Model 1
Data N2*1234
Result Returns 0 to PRESENT mode rules, and
does not do recovery because the
minimum was met
4 Data Element1 {AlphaNumericFld @1 .. 20
Model none }*1 .. 1
Data N21234
Result Returns 0 to PRESENT mode rules, and
does not do recovery because the
minimum was met
5 Data Element1 {ElementAN @20 .. 20 none }*1
Model .. 1
Data N2*1234*A
Result Returns 176 to ERROR mode rules, and
recovery positions to “A”

6 Data Element1 {ElementAN @20 .. 20 none }*1
Model .. 1
Data N2*12^34*A where ^ is out of character
Result Returns 176 to ERROR mode rules and
recovery positions to “A”. The last error
would be returned which is minimum
not met. If referencing the DMI, the
value is 1234 because the ^ is removed
from the data as it is out of character set.
7 Data Element1 {ElementAN @1 .. 20 none }*1 ..
Model 1
Data N2*12^34*A where ^ is out of character
Result Returns 191 to ERROR mode rules, and
recovery positions to “A”. If referencing
the DMI, the value is 1234 because the ^
is removed from the data as it is out of
character set.
8 Data Element1 {ElementAN @1 .. 2 none }*1 ..
Model 1
Data N2*1234*A
Result Returns 177 to ERROR mode rules, and
recovery positions to “A”.
9 Data Element1 {ElementAN @1 .. 6 none }*1 ..
Model 1
Data N2*12^345678*A where ^ is out of
character
Result Returns 177 to ERROR mode rules, and
recovery positions to “A”. The last error
would be returned which is maximum
not met. If referencing the DMI, the
value is 1234 because the ^ is removed
from the data as it is out of character set.
10 Data Element1 {ElementAN @1 .. 4 none }*1 ..
Model 1
Data N2*12^34*A where ^ is out of character

Result Returns 191 to ERROR mode rules, and
recovery positions to “A”. The last error
would not be maximum not met because
the ^ is removed from the data.
11 Data Tag1 { LineFeedDelimRecord “N2”
Model Element1 {NumericFld @4 .. 4 none }*1 .. 1
Element2 {AlphaNumericFld @4 .. 4 none
}*1 .. 1
Data N21234ABCD<lf>
Result Returns 0 to PRESENT mode rules, and
does not do recovery because the
elements met their definition.
12 Data Tag1 { LineFeedDelimRecord “N2”
Model Element1 {NumericFld @4 .. 4 none }*1 .. 1
Element2 {AlphaNumericFld @4 .. 4 none
}*1 .. 1
Data N21234ABCD - No line feed record after
the D
Result Returns error 192 to ERROR mode rules
on Tag1.
13 Data Tag1 { LineFeedDelimRecord “N2”
Model Element1 {NumericFld @5 .. 5 none }*1 ..
1
Element2 {AlphaNumericFld @4 .. 4
none }*1 .. 1
Data N21234ABCD<lf>
Result Returns error 146 to ERROR mode rules,
for Element1 and 176 to ERROR mode
rules for Element2, if the error was
cleared on Element1.
14 Data Element1 {ElementN @1 .. 8 none }*1 .. 1
Model
Data N2*12^34*A where ^ is out of character
Result Returns 191 to ERROR mode rules, and
recovery positions to “A”. The last error
would not be format because the ^ is not
part of the character set.

15 Data Element1 {ElementN @1 .. 8 none }*1 .. 1
Model
Data N2*12B34*A where B is part of
#CHARSET
Result Returns 146 to ERROR mode rules, and
recovery positions to “A”. The last error
would not be format because the B is
part of #CHARSET.

Rule Mode Processing There are three types of mode processing: PRESENT, ABSENT, and
ERROR. When an item other than a group parses data, it enters
one of the three modes of processing:
PRESENT mode: When the error code is 0.
ABSENT mode: When the error code is -1, 139, 140, 141, 171,
190.
ERROR mode: Any other error.
The modeler is able to put rules on any of these modes. When it is
said to “Clear the Error”, the last action in the mode results in a 0
error code. (A “null condition” by itself will “Clear the Error” or
reset it back to zero.) The following code shows how an error 190
would change into 0. This case would be if an element BIG_01
were missing.
BIG_01 { ElementAN @1 .. 10 none
[]
ARRAY->Big_01 = BIG_01
:ABSENT
[]
ARRAY->Big_01 = “ “
}*1 .. 1
Data model item BIG_01 is mandatory. During translation, when
no data is read for the data model items, the translator enters
ABSENT mode and processes the ABSENT mode rules. In this
example, the array variable is populated with a space character.


Occurrence Validation
The codes [-1, 139, 140, 141, 171, 190] enter ABSENT mode rules. If
no ABSENT mode rules are defined, the error remains at its value –
the error is not cleared. When processing is done with PRESENT
or ABSENT (whether rules are defined or not), and the error code is
-1, 139, 140, 141, 171, 190, occurrence validation is checked. If this
occurrence of the item is optional, the error is reset to zero and
processing proceeds onto the next sibling. If this occurrence of the
item is mandatory, the error code is taken into ERROR mode.
Like ABSENT, ERROR mode can also clear the error. If the error is
not cleared, processing converts it to a hard error (138).
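Following the pattern of the BIG_01 example above, clearing an error in ERROR mode can be sketched like this (the :ERROR mode label is assumed to follow the same form as :ABSENT, and the element name and default value are illustrative):

BIG_02 { ElementAN @1 .. 10 none
[]
ARRAY->Big_02 = BIG_02
:ERROR
[]
ARRAY->Big_02 = " "
}*1 .. 1

The null condition [] in ERROR mode, as the last action, resets the error code to zero, so processing continues instead of escalating to a hard error (138).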

Process Flow Between Elements
After Rule Mode Processing and Occurrence Validation, if a nonzero error value remains, the error is changed to 138 and is returned to the parent of the item. This indicates that a hard error is found. Processing continues down the structure and will back up the data model structure and/or structure of environments until the error is cleared or nothing remains to back up. When the session ends, the error is reported.
Soft errors are missing-value type errors. Occurrence Validation processing is done before ERROR mode rules. These errors first go into ABSENT mode rules. If the error code is set to zero, processing continues to the next element. If the error code is unchanged, or no ABSENT rules are defined, Occurrence Validation is checked first. If the minimum is not met, it enters the ERROR mode of that item. If the minimum is met, the item is not considered an error.
Processing for groups is broken down into two categories:
Groups only within groups
Groups that have access items (Tags, Containers, Defining).
For the first type, the group will loop until the maximum
occurrence has been reached unless a Keyword like BREAK or
RETURN is used to leave the group or the group rules processing
ends with error 139, 140. For example, for a group that has the
occurrence 1 to 100 it will loop 100 times. If you want to break the
group after 50, the code would look like:
Group1 {
[VAR->OTCount == 50]
BREAK

[]
VAR->OTCount = VAR->OTCount + 1
}*1 .. 100
For the second type, the group will loop either to the maximum
occurrence or no more data is read. When this happens, an error
171 is returned to the group. The processing flow enters ABSENT
mode rules. If the error remains after ABSENT mode and the
minimum occurrence has been met, the looping stops and
continues to the next element. If the minimum has not been met,
processing will go to the ERROR mode rules and then act like a
hard error by going to the group’s parent with error 138.
Sometimes, missing required elements are not considered hard errors (138). If the first item in a group is missing but required, error 171 is returned to the parent. This way, occurrence validation is checked to verify whether the occurrence has been met. If so, processing continues to the sibling. For example:
Group1 {
Group2 {
Tag1 { Segment “BIG”
}*1 .. 1
}*1 .. 1
Group3 {
}*1 .. 1
}*0 .. 1
Even though it is required, if the BIG segment is missing, error 171
is returned to Group1. Since the occurrence validation is 0 .. 1
(optional group), there is no hard error.

Section 3. Workbench Overview

This section provides a detailed overview of the features of Workbench and the terminology used. This section also provides
instructions on how to access different features of Workbench.


Accessing Workbench
Workbench runs only on the Windows® operating system and can connect to an AI Control Server running on a Windows®, Unix, or Linux platform.
To start Workbench from Windows® Explorer:
1. Select the folder where Workbench is installed.
2. Double click on wb.bat.

Note: While working with very large files, it is advisable to add -Xmx512M to the wb.bat file. For example, the command:

"C:\Workbench5.2.6.7\WB52\WorkBench.exe" -vm "C:\Program Files\Java\jre1.6.0_03\bin\javaw.exe" -cs localhost v -bp 5551

Would look like this:

"C:\Workbench5.2.6.7\WB52\WorkBench.exe" -vm "C:\Program Files\Java\jre1.6.0_03\bin\javaw.exe" -cs localhost v -bp 5551 -vmArgs -Xmx512M

Where 512M is 512 MB of system memory. The argument may be increased depending on need and availability.

To start Workbench from the command line:


1. Change to the folder where Workbench is installed.
2. Type wb.bat.
To start Workbench from the Start menu on the Windows® task
bar:
1. Select Start on the Windows task bar.
2. Select Programs -> GXS -> Application Integrator 5.2 ->
Workbench 5.2

The Workbench screen is displayed.


Workbench Preferences
Preferences are available for both the Eclipse™ software and for Application Integrator™. This section covers the Application Integrator™ preferences.

To set the preferences, select Preferences from the Window menu. The following dialog appears:


You can enter filter criteria in the edit box. The following options are applicable to all the preference screens and are therefore listed only once.
and are therefore listed only once.

Option Description
Restore Restores default settings for these preferences.
Defaults
Apply Applies any changes made to the preferences.

Click on Application Integrator to set basic color preferences.


These are the basic color preferences that you can set for the model
editor.

Option Description
Comment Color used to display comments within a data
Color model.
String Color used to display literal strings within a rule.
Color
Keyword Color used to display keywords within a rule.
Color
Function Color used to display functions within a rule.
Color
Predicate Color used to display the predicate characters,
Color such as [, ], =, {, }, and so on.
Default Default color for any color not selected.
Color


Expand Application Integrator™.

Advanced Settings Preferences
Use this option to set the Advanced Settings. The Advanced Settings Preference Page looks similar to the one below:

Option Description
Maximum Used to set the schema recursion level. The
Schema default value is 2. The maximum value is 25.
Recursion
Level
Schema Builder
Default To use the default jar file which is shipped
along with Workbench

Custom: To specify the custom jar file.
Schema Generator executable file: Select the custom jar file to be used in generating the XML schema.
Parameters to executable file: Enter the parameters required for the custom jar file, separated by a semicolon (;).

Enable dynamic loading of XSDs: Checking this will enable dynamic loading of XSDs.
Replace Substitutable Element Automatically: Enabling this will replace a substitutable element automatically if only one substitutable element is present.
Undo history limit: Enter the undo history limit number.
Confirm all undo operations: Select the check box to prompt for confirmation of each undo operation.

Model Indentation
Indent Save the data model file with indentation as
shown in the text area in the preference page.
Don’t Indent Save the data model file without indentation as
shown in the text area in the preference page.

Derived Links Feature Preference Page
Use this preference page to set the options for Derived Links, which get inferred in the Attachment Editor.


Option Description
Derived Click on the color bar and choose the color of your
Link Color choice for the derived link.
Discard Rdf Checking this discards the Rdf file created when
File any att file is opened in Workbench.

Macro Definitions
Under Macro Definitions, all the macros (predefined, imported through XML files, or recorded) are listed. You can add a new Macro; delete, edit, or copy existing Macros; and export or import Macros using this option.

Option Description
New Creates a new command
Delete Deletes a selected command
Edit Edits a selected command

Copy Makes a copy of a selected command
Export Exports a macro as an xml file. Select the Macro(s) you need to
export and click Export. Click on Browse to give an appropriate
location and file name of the xml file.

Click OK.
Import Imports a macro. Click on the Import button to import macros from
an xml file.
Browse to the required location and select the appropriate xml file.

Workbench User’s Guide 91


Section 3. Workbench Overview

Option Description

Click OK. The macro is added to the displayed list on the Macro Definitions page. Click Apply, then OK in the Preference page to populate these macros in the toolbar. For more details, see the Working with Macros section.

Map Builder Preferences
To set Map Builder preferences, enter options in two screens: the Map Builder screen and the Map Builder Preferences screen.

The Map Builder screen looks similar to the one below:


Option Description
Link Color Color used to display link lines when drag and
drop is used to map data.
Selected Color used to display currently selected link line.
Link Color
Loop Color used to display loop control rule link lines
control when drag and drop is used to map data for XSL
Link Color models
Right to Yes – Target model data model items will be right
Left Tree justified when viewing in Map Editor.
for Target No - Target model data model items will be left
justified when viewing in Map Editor.
Enable Rule Builder Area on Map Editor: Check this option to show Rule Builder in Map Editor.

The Map Builder Preferences screen looks similar to the one below:

Option Description
Link Type
Tag to When using drag and drop, rules are placed on the
Defining tag in the source model, and on the defining in the
target model.
Defining to When using drag and drop, rules are placed on the
Defining defining item in both the source and target model.
Variable Type
Array Arrays (ARRAY->) are used when creating rules
using drag and drop.

Variable Temporary variables (VAR->) are used when
creating rules using drag and drop.
Variable Name
Both Concatenates the source and target DMI names to
create the variable name.
Source Uses the source DMI name to create the variable
name.
Target Uses the target DMI name to create the variable
name.
Function Assignment
Automatic Automatically use the STRTRIM function on the
Function source side rules to trim off trailing spaces.
Assignment
Manual Enables the Source and Target etched boxes to
Function allow for advanced function use when using drag
Assignment and drop.
Source
Use Uses the DEFAULT NULL function when
DEFAULT assigning data to the variable on the source side.
NULL on
Source
Use Uses the STRTRIM function when assigning data
STRTRIM to the variable on the source side.
on Source
Target
Use NOT Uses the NOT NULL function when assigning data
NULL on to the DMI on the target side.
Target
Use NOT Uses the NOT TRUE NULL function when
TRUE assigning data to the DMI on the target side.
NULL on
Target
Do not use No functions are used when assigning data to the
functions DMI on the target side.
on Target

Loop Control
Automatic Automatically creates loop control DMIs and rules
when mapping from DMIs which are within
repeating Groups or Tags.
Manual Requires the user to create loop control DMIs and
rules when mapping from DMIs which are within
repeating Groups or Tags.
Prompt Prompts user that loop control must be added if
with Loop Manual Loop Control is selected.
Control
warning
message
Enable Automatically creates loop control DMIs and rules
Loop when mapping repeating defining DMIs.
Control
when
mapping
Defining
To
Defining

Menu Preference Some menu items are used by the underlying Eclipse™ platform,
Page but not by Application Integrator™ Workbench. These menu items
are hidden by default.


Option Description
Hidden Displays the hidden menu items when selected.
Menu Enabling any of these menu items will have no
Options impact on Application Integrator™ Workbench –
since these menus are not needed for Application
Integrator™ Workbench’s operation.


Server Connection
Preferences

Option Description
Hostname Refers to the machine name or an alias where AI
Control Server is running.
Queue Id Application Integrator™ OT_QUEUEID for the AI
Control Server being connected to.
Base Port Application Integrator™ Base Port for the AI
Control Server being connected to.
Server Polling Frequency (minutes): Periodically, the system checks whether the AI Control Server is running. The interval specified here (in minutes) determines how often this check is performed.


Version Validator Preference
You can validate and check the syntax of maps from different AI versions. Currently, maps of versions 4.0, 4.1, 5.0, and 5.2 can be validated and syntax checked using this feature.

Option Description
Target AI Runtime Version: Sets the AI version in which you want the maps to be saved.
Saving the Map
Use the selected version Choosing this radio button uses the
while saving the Map version mentioned in the target AI
runtime version while saving the map
Use the version available in Choosing this radio button uses the
the Model Header while version mentioned in the Model
saving the Map Header while saving the map


Views Preference
Page

Option Description
Use Function Template: Displays the AI function dialog box when any AI function is dragged and dropped into RuleBuilder from the Built-ins view. This dialog helps with the syntax of the function. For more details, go to Inserting Functions and Keywords in Rulebuilder.

Show Warning For Unsaved Changes In Rules Editor: When checked, allows display of a Message Dialog indicating that rules of a data model item have unapplied changes.
Don't prompt when updating Model Search Order from Remote Site Navigator: Checking this preference will cause the workbench to add any new path selected in Remote Site Navigator directly to Model Search Order. When unchecked, the workbench will show a dialog for a new path in Remote Site Navigator. The dialog enables the user to allow, reject, and reorder the addition of new paths to Model Search Order.
Add path When a file outside the Workbench is opened with
to Model “Open File..” , the new path is added to Model
Search Search Order if this preference is checked.
Order
when file is
opened
with "Open
File.."
Do not “Open File..” dialog will allow only one file to be
allow more opened at a time if this preference is checked.
than one
file to be
opened at a
time with
"Open
File.."
Enveloping Defines the initial environment (Map Component
Map File File) to be called when running outbound
translations.

De- Defines the initial environment (Map Component
enveloping File) to be called when running inbound
Map File translations.
Model Defines the file search order used by Workbench to
Search resolve file references similar to OT_MDLPATH
Order used by AI Control Server. If your AI Control
Server is on a local system and is running before
you launch Workbench and the Model Search
Order list is empty, then Workbench reads the
OT_MDLPATH variable defined in the file
aiserver.bat and populates this list automatically.
The search order can be modified, but it is
preferable that it matches the OT_MDLPATH
specified. Once populated, the list is cached
automatically. So, if you add any additional
path(s) to your OT_MDLPATH, then you have to
manually add the same path in this preference
page.
New Creates a new path in the Model Paths list.
Remove Removes a path in the Model Paths list.
Up Moves a path up in the Model Paths list.
Down Moves a path down in the Model Paths list.
Don't prompt when updating Model Search Order from Remote Site Navigator: When you fetch a file from a remote system using Remote Site Navigator, Workbench attempts to include the remote file's path in Model Search Order. If you don't want to be prompted for this, check this preference.


Note: If the AI Control Server goes down while Workbench is up and connected to it, immediately re-starting AI Control Server may occasionally fail. In such a case, it is advised that you allow for a delay of approximately 5 minutes before you attempt to restart the AI Control Server.

Note: 1) It is recommended that Model Search Order matches the OT_MDLPATH specified for AI Control Server.
2) If Workbench is installed under the OT_DIR, adding the OT_DIR
path in Model Search Order does not create a linked folder in
Resource Navigator view for OT_DIR folder. OT_DIR path will not
be used in Model Search Order when resolving file references. A
workaround for this problem is to have the Workbench workspace
outside OT_DIR folder. You can create a new workspace using
File->Switch Workspace menu from the main menubar.
3) A general limitation of Workbench is that you cannot create a linked folder for paths that are parent to the workspace directory. For example, if the workspace is located at c:/x/y/z, then you cannot create linked folders for c:/, c:/x, or c:/x/y. You can create a linked folder for c:/x/y/abc since it is not a parent of the workspace directory.

XSD Validator
preferences


Note: The preferences apply to the validator as well as the editors while opening XSL and XPath models.

Option Description
Validation
Validate all Validates the main schema, as well as the schemas
(included imported and included directly or indirectly by
and the main schema.
imported)
schemas
Validate Validates only the main schema. The imported or
main included schemas are not validated.
schema
only
Details

Show Shows/displays the details (Element declarations,
details for Type definitions, Modelgroup definitions) for main
all schema, as well as for schemas imported and
(included included directly or indirectly by the main
and schema.
imported)
schemas
Show Shows the details (Element declarations, Type
details for definitions, Modelgroup definitions) for main
main schema only.
schema
only
Parsing Options
Stop on Choose this option to stop the schema validator
errors from further processing of schemas if an error is
encountered.
Tolerate Choose this option to let the schema validator
errors continue processing of schemas regardless of
errors encountered.
Auto Correction Options
Strict Validates the schemas as they are, without
validation attempting possible corrections.
Auto Choose this option for the validator to consider
Correct possible corrections and then proceed to
validating the schemas.
Problem Markers
Show Choose this option to enable the validator to attach
problem and show problem markers on schema(xsd) files.
Markers
Delete Choose this option for the validator to delete or
Problem suppress problem markers related to the schema file.
Markers
Validation mode
Foreground Choose this option for the validator to run in the
foreground with a progress dialog


Option Description
Background Choose this option for the validator to run in
background.
Break on This option defines the behaviour of the validator
first error or on encountering an error of any kind.
warning If enabled, the validator stops processing on
encountering the first error and returns.


Overview of the Map Editor
Map Editor allows you to modify the map component file, do a Drag and Drop (DnD) of the source and target items, build data models, view the structure of data models, view input data, run translations, and view output data.

Opening an Existing Environment
From the Navigator view, double click on the environment (Map Component File) to be opened and modified.


Rearranging Views and Editors
This section explains how to rearrange editors and views in order to customize the layout of the Workbench.

Setup
Before rearranging Workbench, a little housekeeping is required. Start by choosing Window > Reset Perspective and selecting OK. This resets the current Mapping Perspective to its original views and layout.
Open any text file. It should open in a text editor. Close any other editors.
Workbench should now look like this:

Drop cursors
Drop cursors indicate where it is possible to dock views in the Workbench window. Several different drop cursors may be displayed when rearranging views.


Dock above: If the mouse button is released when a dock above cursor is displayed, the view appears above the view underneath the cursor.
Dock below: If the mouse button is released when a dock
below cursor is displayed, the view appears below the view
underneath the cursor.
Dock to the right: If the mouse button is released when a
dock to the right cursor is displayed, the view appears to
the right of the view underneath the cursor.
Dock to the left: If the mouse button is released when a
dock to the left cursor is displayed, the view will appear to
the left of the view underneath the cursor.
Stack: If the mouse button is released when a stack cursor is
displayed, the view appears as a tab in the same pane as the
view underneath the cursor.
Restricted: If the mouse button is released when a restricted
cursor is displayed, the view will not dock there. For
example, a view cannot be docked in the editor area.

Rearranging views
The position of the Navigator view in the Workbench window can be changed.

1. Click in the title bar of the Navigator view and drag the view
across the Workbench window. Do not release the mouse button
yet.
2. While still dragging the view around on top of the Workbench
window, note that various drop cursors appear. These drop
cursors (see previous section) indicate where the view will dock
in relation to the view or editor area underneath the cursor when
the mouse button is released. Notice also that a rectangular
highlight is drawn that provides additional feedback on where
the view will dock.
3. Dock the view in any position in the Workbench window, and
view the results of this action.


4. Click and drag the view's title bar to re-dock the view in another
position in the Workbench window. Observe the results of this
action.
5. Finally, drag the Navigator view over the Outline view. A stack
cursor is displayed. If the mouse button is released the
Navigator is stacked with the Outline view into a tabbed
notebook.
Tiling editors
Workbench allows for the creation of two or more sets of editors in the editor area. The editor area can also be resized, but views cannot be dragged into the editor area.

1. Open at least two editors in the editor area by double-clicking editable files in the Navigator view.
2. Click and drag one of the editor's tabs out of the editor area. Do
not release the mouse button.
3. Notice that the restricted cursor displays if an attempt is made to
drop the editor either on top of any view or outside the
Workbench window.
4. Still holding down the mouse button, drag the editor over the
editor area and move the cursor along all four edges as well as
in the middle of the editor area, on top of another open editor.
Notice that along the edges of the editor area the directional
arrow drop cursors appear, and in the middle of the editor area
the stack drop cursor appears.
5. Dock the editor on a directional arrow drop cursor so that two
editors appear in the editor area.
6. Notice that each editor can also be resized as well as the entire
editor area to accommodate the editors and views as necessary.
7. It is important to observe the color of an editor tab (in the figure below there are two groups, one above the other):
Blue - Indicates that the editor is currently active.
Default - Indicates that the editor was the last active editor.
If there is an active view, it will be the editor that the active
view is currently working with. This is important when
working with views like the Outline and Properties that
work closely with the editor.


8. Drag and dock the editor somewhere else in the editor area,
noting the behavior that results from docking on each kind of
drop cursor. Continue to experiment with docking and resizing
editors and views until Workbench has been arranged to your
satisfaction. The figure below illustrates the layout if one editor
is dragged and dropped below another.

Rearranging tabbed views
In addition to dragging and dropping views on Workbench, the order of views can also be rearranged within a tabbed notebook.

1. Choose Window > Reset Perspective to reset the Resource perspective back to its original layout.
2. Click on the Outline title bar and drag it on top of the Navigator
view. The Outline will now be stacked on top of the Navigator.
3. Click on the Navigator tab and drag it to the right of the Outline
tab.


4. Once the cursor is to the right of the Outline tab and the cursor is
a stack cursor, release the mouse button.

Observe the Navigator tab is now to the right of the Outline tab.

Maximizing
Sometimes it is useful to be able to maximize a view or editor. Maximizing both views and editors is easy.
To maximize a view, either double click on its tab or choose
Maximize from the tab's menu.
To maximize an editor, double click on the editor tab or choose
Maximize from the tab's popup menu.
Restoring a view to its original size is done in a similar manner
(double click or choose Restore from the menu).

Fast views
Fast views are hidden views that can be quickly made visible. They work in the same manner as normal views, except that when hidden they do not take up screen space on the Workbench window.
This section explains how to convert the Navigator view into a fast
view.

Creating fast views
These instructions commence by creating a fast view from the Navigator view and then explain how to use the view once it is a fast view.

There are two ways to create a fast view:
Using drag and drop.
Using a menu operation available from the view System menu.
Create a fast view using drag and drop as follows.
1. In the Navigator view, click on the title bar and drag it to the
shortcut bar at the bottom left of the window.
2. Once over the shortcut bar, the cursor will change to a "fast
view" cursor. Release the mouse button to drop the Navigator
onto the shortcut bar.


The shortcut bar now includes a button for the Navigator fast
view.

To create a fast view using the second approach, start by popping up the context menu over the Navigator view's tab. From this menu, select Fast View.

Working with fast views
The Navigator has now been converted into a fast view. This section demonstrates what can now be done with it.
Confirm that the shortcut bar at the bottom left of the window still has the Navigator view and looks like this:

1. In the shortcut bar, click on the Navigator fast view button.


2. Observe the Navigator view slides out from the left side of the
window.


3. The Navigator fast view can be used as it would be normally. To resize a fast view, move the mouse to the right edge of the fast view, where the cursor changes to a double-headed arrow. Then hold the left mouse button down as the mouse is moved.
4. To hide the fast view, simply click on another view or editor, or click on the Minimize button on the fast view's toolbar.

Note: If a file is opened from the Navigator fast view, the fast view
automatically hides itself to allow the file to be worked with.

To convert a fast view back to a regular view either:


Choose Fast View from the context menu of the icon in the top
left corner of the view.
Drag the fast view icon from the tool bar, and drop it
somewhere in the Workbench window.


Tips and Tricks
For further information on views, check out http://help.eclipse.org/help31/index.jsp?topic=/org.eclipse.platform.doc.user/tips/platform_tips.html

Map Editor Work Area

Map Definition Tab
Select the Map Definition tab to edit or view the map component file values.


Option Description
Map Definition tab
Source Data
Traditional Type of source model.
Model
XPATH Type of source model.
Model
Style Sheet Type of source model.
Schema Checked if the schema is based on a DTD.
based on
DTD
Source Source data model defined in the map component
Model file.
Source Source access model defined in the map
Access component file.
Source Source schema file the XPATH or XSL model is
Schema based on.
Root Root element of the source schema file chosen.
Element
Source Source constraint validation style sheet. It is used
Constraint when the Source model specified is an XPATH
Model or Style Sheet. The value must have an .xsl
extension.
Clear This button is used to remove the Source Model
Source from the Map Component file.
Model
Target Data
Traditional Type of the target model.
Model
Style Sheet Type of the target model.
Schema Checked if the schema is based on a DTD.
based on
DTD

Target Target data model defined in the map component
Model file.
Target Target access model defined in the map
Access component file.
Target Target schema file the XSL model is based on.
Schema
Root Root element of the target schema file chosen.
Element
Target Target constraint validation style sheet. It is used
Constraint when the Target model specified is an XPATH
Model or Style Sheet. The value must have an .xsl
extension.
Clear This button is used to remove the Target Model
Target from the Map Component file.
Model
Environment Variables
Variable Names of environment variables.
Name
Variable Values of the environment variables.
Value
Add Add an environment variable.
Delete Delete an environment variable.
Comments
Displays all comments found in the map component file. Comments are denoted by a semicolon (;) at the beginning of the line.
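Because comment lines always begin with a semicolon, they can be filtered out mechanically. A minimal sketch (this is illustrative code, not GXS-supplied, and the sample file entries are made up):

```python
def strip_comments(lines):
    """Drop map component file comment lines, which begin with ';'."""
    return [line for line in lines if not line.startswith(";")]

# Hypothetical map component file content -- the entry name is illustrative only.
sample = [
    "; this line is a comment",
    "SOURCE_MODEL=invoice.mdl",
    "; another comment",
]
print(strip_comments(sample))  # → ['SOURCE_MODEL=invoice.mdl']
```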


Mapping Tab
Select the Mapping tab to view the source and/or target data model(s).

Option Description
Mapping tab
Source Displays source data model in hierarchical format.
Allows expanding and collapsing of structure.
Target Displays target data model in hierarchical format.
Allows expanding and collapsing of structure.


Source Properties Tab
Select the Source Properties tab to edit attributes for the source data model items.

Column Description
Note: Click the + to the left of a DMI to expand it; click the – to the left of a DMI to collapse it.
Source Properties
Name Name of each data model item within the source
model.
Access Access model type assigned to each data model
Type item.
Occurrence Min
Minimum number of times the data for this data model item can occur in the input data.
Occurrence Max
Maximum number of times the data for this data model item can occur in the input data.
Size Min Minimum size for the data being read in for this
data model item.

Size Max Maximum size for the data being read in for this
data model item.
Format Defines the format for numerics, dates, and times.
Start Offset
A non-editable column that appears only for Application format data models (OTFixed.acc), displaying the starting offset of the DMI.
End Offset
A non-editable column that appears only for Application format data models (OTFixed.acc), displaying the ending offset of the DMI.
Match Value
Available on Tag items only. Matches against the first characters of a record/segment of input data.
Verify ID
Performs a cross-reference lookup to determine if the value in the input data is in the ID list.
File
Available on Group items only. Used to change the input file to be read. The new input file is only parsed within the Group it is defined on.
Sort
Not available in source data models.

Note: The start offset and end offset are calculated based on the size min and size max of the data model items and the match value of tags. They are shown only when the size min and size max of a DMI are equal. Once a DMI with a non-matching size min and size max is reached, the offset for it and the rest of the data model items are displayed as 0.

Note: Whenever the size min or size max of a DMI is modified, “Recalculate Offsets” on the toolbar can be clicked to recalculate the offsets.
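The offset rule in the notes above can be sketched as follows. This is an illustration of the stated rule (offsets accumulate only while size min equals size max, and are shown as 0 from the first non-matching item onward), not the actual Workbench implementation; the field list and the zero-based, inclusive offset convention are assumptions:

```python
def calculate_offsets(items):
    """items: list of (name, size_min, size_max) tuples.

    Returns (name, start, end) per item. Offsets accumulate while
    size_min == size_max; once they differ, that item and all later
    items report offsets of 0, mirroring the behaviour described above.
    """
    results = []
    pos = 0        # running start offset (zero-based -- an assumption)
    fixed = True   # becomes False at the first variable-size item
    for name, smin, smax in items:
        if fixed and smin == smax:
            results.append((name, pos, pos + smax - 1))  # inclusive end
            pos += smax
        else:
            fixed = False
            results.append((name, 0, 0))
    return results

# Hypothetical fixed-format fields; "Desc" is variable-size (min != max).
fields = [("RecordID", 3, 3), ("PartNo", 10, 10), ("Desc", 5, 30)]
for row in calculate_offsets(fields):
    print(row)
```

Running this prints fixed offsets for the first two fields and (0, 0) for the variable-size field, matching the note.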


Target Properties Tab
Select the Target Properties tab to edit attributes for the target data model items.


Column Description
Target Properties
Name Name of each data model item within the target
model.
Access Access model type assigned to each data model
Type item.
Occurrence Min
Minimum number of times this DMI can appear in the output data.
Occurrence Max
Maximum number of times this DMI can appear in the output data.
Size Min Minimum size for the data being written out for
this data model item.
Size Max Maximum size for the data being written out for
this data model item.
Format Defines the format for numerics, dates, and times.
Start Offset
A non-editable column that appears only for Application format data models (OTFixed.acc), displaying the starting offset of the DMI.
End Offset
A non-editable column that appears only for Application format data models (OTFixed.acc), displaying the ending offset of the DMI.
Match Value
Available on Tag items only. First data written out for this record/segment.
Verify ID
Performs a cross-reference lookup to determine if the value to be written out is in the ID list.
File
Available on Group items only. Used to change the output file to be written to. The new output file is only written to within the Group it is defined on.
Sort
Used to sort data to be written out within the Group. The Group item must control the looping.


Input Tab
Select the Input tab to view the input file.
The input file will not be shown if:
1. The attachment file does not define the INPUT_FILE environment variable.
2. The AI Control Server is remote and an FTP site corresponding to the AI Control Server is not present in the Remote Site Navigator.

Note: Data is displayed without alteration. If records/segments do not end with a line feed, the data may appear as one long record in the display.

You can edit the input file and then save it. The “Save Input File” button is enabled as soon as you start editing.
You can search for text within the data file by using the Search
menu item.


You can narrow your search by choosing whether to:
Match case: Select the Match case box to do this.
Use a regular expression: Select the Regular expression box to do this.
Choose the Up or Down radio button to indicate the direction of the search.
Choose the Cancel button to exit the Find dialog.
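For readers unfamiliar with the two options, this sketch illustrates the difference between a case-insensitive literal match and a regular-expression match. It is illustrative only: the sample data and patterns are made up, and the Find dialog itself is of course not implemented this way.

```python
import re

# Hypothetical line of input data.
data = "SEG*ISA*00*Receiver"

# Literal search with "Match case" unchecked: the text is escaped so it
# is taken literally, and case is ignored -- "isa" still finds "ISA".
assert re.search(re.escape("isa"), data, re.IGNORECASE)

# "Regular expression" checked: the pattern is interpreted, not literal.
# \d{2} here matches the two-digit element "00" between separators.
assert re.search(r"\*\d{2}\*", data)
print("both searches matched")
```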

Output Tab
Select the Output tab to run a translation and view the output file.

Option Description
Output – Provides the ability to run a translation and view the output file created.
Map Component
Initial map component file to be used for this translation, if not inbound or outbound processing.
Trace Level
Determines the amount of data to be written to the session trace log: 0 (minimal) to 1023 (full).
Translation Type

Application Denotes that application data is being processed
and not a public standard.
Enveloping Denotes that outbound data is being processed
and that OTEnvelp.att should be used as the
initial environment.
De-enveloping
Denotes that inbound data is being processed and that OTRecogn.att should be used as the initial environment.

Keep Input Denotes the file the input data will be copied
File from. This file will not be removed during
translation.
Environment Variables – User defined variables passed in to
translation when invoked. These environment variables are set
only for the translation session and not saved in the Map file.
Variable Name of user defined environment variable.
Name Recommended to be in upper case.
Variable Value to be passed in for the environment
Value variable to be used during translation.
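Conceptually, these session-only variables behave like process environment variables: set for one translation run and never saved back to the map file. A hedged sketch of that idea follows; the variable names and the commented-out command line are made up, and this is not how Workbench itself launches a translation:

```python
import os

# User-defined variables for this translation session only.
# The names here are purely illustrative.
session_vars = {"TRADING_PARTNER": "ACME", "RUN_MODE": "TEST"}

# Inherit the current process environment, then overlay the session
# variables -- they exist for this run only and are not written back
# to the map file.
env = dict(os.environ)
env.update(session_vars)

# Conceptually, the translator session would be launched with this
# environment, e.g.:
#   subprocess.run(["<translator>", "map.att"], env=env)
# (the command line above is illustrative, not a real invocation)
print(env["TRADING_PARTNER"], env["RUN_MODE"])  # → ACME TEST
```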

Translate Button
Runs the translation based on the fields filled in on this dialog. Translations that take a long time can be run as a background thread, and you can continue working in Workbench.
Translation Displays output file created by translation.
Output
Session Displays messages generated during a translation
Messages session.
Trace Log Displays the trace log generated by the last
translation operation.

Tool Bar Options
When in the Map Editor, new icons are added to the tool bar.


Icons Description
Re-calculates the offsets in the properties page for
traditional models
Moves the currently highlighted data model item
one level down without restructuring the model's
data hierarchy.
Moves the currently highlighted data model item
one level up without restructuring the model's
data hierarchy.
Removes the current data model item (DMI). The
item is removed from the model and stored on the
clipboard until it is pasted.
Makes a copy of the current data model item. The
item is copied and stored on the clipboard until it
is pasted.
Pastes a copy of the stored item (DMI). The item
on the clipboard is stored until another cut or copy
operation is performed.
Duplicates a selected data model item (DMI) at the
same hierarchy and level. All attributes of the data
model item are duplicated except for the data
model item name. The name of the duplicated
data model item is changed to be a system
assigned unique name which can be changed.
Moves the currently highlighted data model item
one level right to restructure the model's data
hierarchy.
Moves the currently highlighted data model item
one level left to restructure the model's data
hierarchy.
Adds an empty data model item below the
currently highlighted item. The newly created DMI
has a default name which can be changed.
Adds an empty data model item above the
currently highlighted item. The newly created DMI
has a default name which can be changed.
Navigates to a specified DMI. You can select the available DMI from the “Go to” dialog.
Checks the syntax of the current mapping file in the active Text Editor.
Applies the changes made to the current Data
Model file in the active Model Text Editor.
Collapses all the branches for the tree in focus.

Expands all the branches for the tree in focus.

Refreshes the mapping area and redraws DnD link lines.
Deploys (copies) relevant mapping files to a
directory (local or remote). These files can be used
during debugging tasks or for moving files into a
production functional area.


Menu Options
When in the Map Editor, a new item is added to the menu.

Tools Menu
Duplicate Duplicates the highlighted data model item.
Shift Right Shifts the highlighted data model item to the right
one hierarchical level.
Shift Left Shifts the highlighted data model item to the left
one hierarchical level.
Insert Inserts a new data model item below the
Below highlighted data model item.
Insert Inserts a new data model item above the
Above highlighted data model item.
Go To Navigates to a specified line number in the
currently opened file.
Includes
References include files with the current data model.
Refresh Refreshes the mapping area and redraws DND
Links link lines.
Check Checks the syntax of the current mapping file in
Syntax the active Text Editor.
Deploy Deploys (copies) relevant mapping files to a
directory (local or remote). These files can be used
during debugging tasks or for moving files into a
production functional area.


Link Menu
Delete Link
Deletes the selected link and its associated rules.
Loop Control Mode
Highlights repeating DMIs if the source or target model is XSL. Automatically creates loop control DMIs and rules when mapping repeating DMIs.
Map Rule Owner Node
Allows you to select a DMI as an owner node on the source XSL model to hold the map builder rules of the child nodes when Drag and Drop is performed. It is mandatory that the Map Rule Owner Node be the parent, grandparent, or any hierarchy above the source node to Drag and Drop DMIs from source to target.


Overview of the Model Editor
Model Editor allows you to modify the data model attributes and structure, and to add rules to the data model.

Opening an Existing Data Model
From the Navigator view, double-click the data model to be opened and modified.

Note: The Tools menu is added to the menu bar.

Note: Each view or working area can be closed, maximized, or minimized. The next screen shots for the Model Editor will have all other views closed.


Model Editor Work Area

Overview Tab
Select the Overview tab to modify the data model structure, attributes, and rules.

Option Description
Model Displays the type of model file (Source / Target)
Header and allows for selection of Access Model File to
use for current model.
Model Displays the data model in hierarchical format.
Items Ability to change the structure of the data model.
Model Item Modifies attributes for the selected data model
Editor item.
Rule Area to create rules for the selected data model
Builder item.

Rule Mode Tabs
Present Rules
Rules to be performed if the data model item is present.
Absent Rules
Rules to be performed if the data model item is absent.
Error Rules
Rules to be performed if the data model item is in error.
View All Displays all the Present, Absent and Error rules.
Rules
Rule Builder Toolbar
Icons Description
Inserts an equal sign into the rule. Used for
assignments.
Inserts a literal into the rule. Cursor is placed
between the double quotes.
Inserts an IF condition into the rule. Cursor is
placed between the square brackets.
Inserts an ELSE IF condition into the rule. Cursor is
placed between the square brackets.
Inserts an ELSE condition into the rule. Cursor is
placed between the square brackets.
Inserts a Null Condition. Rules under a Null
Condition will always be performed.
Inserts a Conditional Expression. Condition will
then need to be created.
Inserts a carriage return in the rules. Aides in
readability.
Moves focus area to next parameter when creating
a Conditional Expression or implementing a
function.
Checks the syntax of all rules and displays any
errors found.

Applies all new rules added to the data model
item. Also runs syntax checker.
Applies all new rules in all data model items. Also
runs syntax checker.
Inserts cross-reference rules for outbound translations. See Map Component Files for Enveloping/De-enveloping for more information.
Cuts the highlighted text within Rule Builder.

Copies highlighted text within Rule Builder.

Pastes copied or cut text from Rule Builder in the


cursor’s current position.
Insert Available DMI

Properties Tab
Select the Properties tab to modify the data model item attributes. See Source Properties Tab/Target Properties Tab in this section for a description of columns.


Model Text Tab
Select the Model Text tab to view the data model in raw data format.

Tool Bar Options
When in the Model Editor, new icons are added to the tool bar.

Icons Description
Moves the currently highlighted data model item
one level down without restructuring the model's
data hierarchy.
Moves the currently highlighted data model item
one level up without restructuring the model's
data hierarchy.
Removes the current data model item (DMI). The
item is then removed from the model and stored
on the clipboard until it is pasted.
Makes a copy of the current data model item. The
item is copied and stored on the clipboard until it
is pasted.

Pastes a copy of the stored item (DMI). The item
on the clipboard is stored until another cut or copy
operation is performed.
Duplicates a selected data model item (DMI) at the
same hierarchy and level. All attributes of the data
model item are duplicated except for the data
model item name. The name of the duplicated
data model item is changed to be a system
assigned unique name which can be changed.
Moves the currently highlighted data model item
one level right to restructure the model's data
hierarchy.
Moves the currently highlighted data model item
one level left to restructure the model's data
hierarchy.
Adds an empty data model item below the
currently highlighted item. The newly created DMI
has a default name which can be changed.
Adds an empty data model item above the
currently highlighted item. The newly created DMI
has a default name which can be changed.
Navigates to a specified DMI. You can select the
available DMI from the “Go to” dialog.
Checks the syntax of the current mapping file in the active Text Editor.
Applies the changes made to the current Data
Model file in the active Model Text Editor.
Collapses all the branches of the tree.

Expands all the branches of the tree.

Refreshes the mapping area and redraws DnD link lines.
Deploys (copies) relevant mapping files to a
directory (local or remote). These files can be used
during debugging tasks or for moving files into a
production functional area.


Other Toolbar Options
There are some other tool bar options available while working on model files.

Offset Coloring Icon

Open an input file and click on the icon to identify the offsets
in different fields. The following dialog comes up:

Choose an appropriate model file.

Note: Model files need to be associated with an OTFixed access file. Otherwise, an error is thrown.


Only model files that have OTFixed as their access file, and models that have no access file, are listed. If the model file cannot be parsed, or if it has no offsets defined, Workbench pops up the following error.

On choosing a correct model file, click OK. The alternate fields are
highlighted in the input file based on the offset.
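The highlighting is easier to picture with a concrete fixed-width record in mind. A small sketch, using made-up field offsets in the style of the Start/End Offset columns, of how a record splits into fields:

```python
# A hypothetical 20-byte fixed-width record and made-up field offsets
# (zero-based, inclusive) -- illustrative values, not a real model file.
record = "INV0000123WIDGET    "

offsets = [("RecordID", 0, 2), ("OrderNo", 3, 9), ("Item", 10, 19)]

# Slice each field out of the record; end offsets are inclusive,
# so the slice runs to end + 1.
fields = {name: record[start:end + 1] for name, start, end in offsets}
print(fields)  # → {'RecordID': 'INV', 'OrderNo': '0000123', 'Item': 'WIDGET    '}
```

Each slice corresponds to one colored span in the Offset Coloring display.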

Note: When in the Input tab of the Attachment File Editor, the Offset Coloring button does not bring up the file selection window; it automatically considers the Source Model file of the attachment.

Show Whitespace Character Icon

Click on this icon to see the whitespace characters between fields in an input file.


Menu Options
When in the Model Editor, a new item is added to the menu.

Tools Menu
Duplicate Duplicates a selected data model item (DMI) at the
same hierarchy and level. All attributes of the data
model item are duplicated except for the data
model item name. The name of the duplicated
data model item is changed to be a system
assigned unique name which can be changed.
Shift Right Moves the currently highlighted data model item
one level right to restructure the model's data
hierarchy.
Shift Left Moves the currently highlighted data model item
one level left to restructure the model's data
hierarchy.
Insert Adds an empty data model item below the
Below currently highlighted item. The newly created DMI
has a default name which can be changed.
Insert Adds an empty data model item above the
Above currently highlighted item. The newly created DMI
has a default name which can be changed.
Go To
Navigates to a specified DMI. You can select the available DMI from the “Go to” dialog. In a Model Text page, you can use this menu to “Go to” the specified column or byte. See below for more details.
Includes
References include files with the current data model.
Check Checks the syntax of the current mapping file in
Syntax the active Text Editor.
Refresh Refreshes the mapping area and redraws DND
Links link lines.
Deploy Deploys (copies) relevant mapping files to a
directory (local or remote). These files can be used
during debugging tasks or for moving files into a
production functional area.
Go to menu
1. Open a model in the model editor and go to the model text
editor tab.
2. Right-click on the ruler to access the menu.
3. Go to Line takes you to the required line, Go to Column takes you to the required column, and Go to Byte takes you to the required byte in the model text.

You can access the Go To option from the Tools menu also.


In the Overview page and the Properties page of the Model Editor, Go to Column and Go to Byte are not valid. Hence, clicking them results in Workbench showing an error in the status bar, as below.


Additional Editors
Additional editors are available to allow you to modify other files with Application Integrator.
Access Model Editor
This editor provides the ability to view access models or to modify user-defined access models.
In the Navigator view, double-click the access model to be modified or viewed.

Note: When opening an access model, the Tools menu option is added to the menu bar, and the Check Syntax icon is added to the tool bar.


Include File Editor
This editor provides the ability to view include files or to modify user-defined include files.
In the Navigator view, double-click the include file to be modified or viewed.

Note: When opening an include file, the Tools menu option is added to the menu bar, and the Check Syntax icon is added to the tool bar.


Views
Multiple views are available in Workbench.

Views Overview
Views support editors and provide alternative presentations or navigations of the information in the Workbench. For example:
The Bookmarks view displays all bookmarks in
Workbench along with the names of the files with which
the bookmarks are associated.
The Navigator view displays the projects and other
resources.
A view might appear by itself or stacked with other views in a
tabbed notebook.

To activate a view that is part of a tabbed notebook, simply click its tab. Workbench provides a number of quick and easy ways to configure an environment, including whether the tabs are at the bottom or top of the notebooks.


Views have two menus. The first, accessed by right-clicking on the view's tab, allows the view to be manipulated in much the same manner as the menu associated with the Workbench window.

The second menu (called the "view pull-down menu") is accessed by clicking the down arrow. The view pull-down menu typically contains operations that apply to the entire contents of the view, but not to a specific item shown in the view. Operations for sorting and filtering are commonly found on the view pull-down menu.


If you have customized Workbench and want to return to the
default layout, use the Window > Reset Perspective menu
operation. The reset operation restores the layout to its original
state.
A view can be displayed by selecting it from the Window > Show
View menu. A perspective determines which views may be
required and displays these on the Show View submenu.
Additional views are available by choosing Other... at the bottom
of the Show View submenu. This is just one of the many features
that provide for the creation of a custom work environment.

Several views are available within Workbench. These views are


used to display different information, such as, information about
the data models, trading partners, and errors from translations.


Select the view to be displayed. Application Integrator™ specific
views are found at the top of the list. The Other option lists the
more generic views.
Each view can be closed using the X (Close) icon on the view tab.

Each view can also be moved and docked anywhere within


Workbench. To move a view, left click and hold the tab of the view
and drag it to any position.


Note: Each view or working area can be closed, maximized, or


minimized.


Interactive Process Manager (IPM)    Interactive Process Manager guides you through a series of complex tasks to achieve an overall goal.

Tasks that can be accomplished using the IPM are:


o AI Translation
o Creating Data Model
o Creating Map Component
o Creating XPath Data Model
o Creating XSL Stylesheet
o Working with Remote Resources
The following are the steps that are to be followed to work with
Interactive Process Manager.

Launching IPM (Interactive Process Manager)


1. From the Help menu choose Interactive Process Manager. The Interactive
Process Manager Selection dialog opens up. Select the IPM that is to be
displayed and click the OK button to open the particular IPM (Creating Data
Model in this case).


2. The next view explains the various steps to be followed to perform the
specific task. The first step provides an introduction of the IPM task to be
performed.
3. The IPM then guides you through the steps to be followed to perform a
specific task. As you go through the various steps, you are prompted to
perform certain tasks.
4. By following all the steps and performing the specified tasks, an overall goal
is achieved. In this case a New Data Model is created.
The following table lists the set of buttons provided in the IPM. These
buttons help you step through the IPM and also perform the required
tasks.
Symbol Title
Click to Begin / Click to Restart
Click to Perform / Click to Redo
Click to Complete


Click to Skip

Example:
AI translation:
1. Go to Help > Interactive Process Manager and select AI Translation. Click OK. The
following screen comes up. It displays the major tasks to be performed to
carry out an AI translation:
• Introduction
• Connecting to the Server
• Creating a New Map Component File
• Mapping and Running a Translation.

2. Click on (Click to Begin/Restart button) in the Introduction section. This


takes you to the section on Connecting to the Server.


3. You can follow the steps listed in this section, OR

Click the (Click to Perform button) to launch the Server Connection
Details window. Refer to Server Connection Preferences in this guide
for details.
After the server connections are done, the next task (Creating a new Map
Component File) is highlighted in the IPM window (as shown below):


4. Follow the listed steps


OR
Click on (Click to perform button) to launch the Map Component File
wizard.
Refer to the section Creating Map Component Files of the User guide for
more details.
Once the Map Component file is created, the next task (Mapping and
Running a Translation) is highlighted in the IPM window (as shown below):


5. Follow the steps listed in this section and click (Click to Complete
button).


Navigator View    The Navigator view provides a hierarchical view of the resources in Workbench, populated from the Model Search Order set. From here, you can open files for editing or select resources for operations such as renaming.

Right-click on any resource in the Navigator view to open a pop-up
menu that allows you to perform operations such as copying,
deleting, renaming, moving, creating new resources, comparing
resources with each other, opening files, adding directories to the
model search order, invoking translations on attachment files, and
so on.
To add it to the current perspective, click Window > Show View >
Navigator.


Toolbar The toolbar of the Navigator view contains the following buttons:
Back
This command displays the hierarchy that was displayed
immediately prior to the current display. For example, if you Go
Into a resource, then the Back command in the resulting display
returns the view to the same hierarchy from which you activated
the Go Into command. The hover help for this button tells you
where it will take you. This command is similar to the Back button
in a web browser.
Forward
This command displays the hierarchy that was displayed
immediately after the current display. For example, if you've just
selected the Back command, then selecting the Forward command
in the resulting display returns the view to the same hierarchy from
which you activated the Back command. The hover help for this
button tells you where it will take you. This command is similar to
the Forward button in a web browser.
Up
This command displays the hierarchy of the parent of the current
highest-level resource. The hover help for this button tells you
where it will take you.
Collapse All
This command collapses the tree expansion state of all resources in
the view.
Link with Editor
This command toggles whether the Navigator view selection is
linked to the active editor. When this option is selected, changing
the active editor automatically updates the Navigator selection to
the resource being edited.
Menu
Click the icon at the left end of the view's title bar to open a menu
of items generic to all views. Click the black upside-down triangle
icon to open a menu of items specific to the Navigator view. Right-
click inside the view to open a context menu.
Select Working Set


Opens the Select Working Set dialog to allow selecting a working


set for the Navigator view.
Deselect Working Set
Deselects the current working set.
Edit Active Working Set
Opens the Edit Working Set dialog to allow changing the current
working set.
Sort
This command sorts the resources in the Navigator view according
to the selected schema:
By Name: Resources are sorted alphabetically, according to the
full name of the resource (e.g., A.TXT, then B.DOC, then
C.HTML, and so on.)
By Type: Resources are sorted alphabetically by file
type/extension (e.g., all DOC files, then all HTML files, then all
TXT files, and so on.).
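The two sort schemes can be illustrated with a short sketch (Python here purely for illustration; the Navigator performs this sorting internally):

```python
import os

files = ["B.DOC", "A.TXT", "C.HTML", "A.DOC"]

# By Name: plain alphabetical order on the full filename
by_name = sorted(files)

# By Type: alphabetical by extension first, then by name within each type
by_type = sorted(files, key=lambda f: (os.path.splitext(f)[1], f))

print(by_name)  # ['A.DOC', 'A.TXT', 'B.DOC', 'C.HTML']
print(by_type)  # ['A.DOC', 'B.DOC', 'C.HTML', 'A.TXT']
```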


Filters
This command allows you to select filters to apply to the view so
that you can show or hide various resources as required. File types
selected in the list are not shown in the Navigator. This is what the
file filters dialog looks like:

In addition to these menu items, the Navigator view menu shows a


list of recently used working sets that have been selected in the
view.


Icons The following icons appear in the Navigator view.

Icon Description
Project (open)
Folder
File
Indicates that the folder is
present in the Model Search
Order (MSO) path
Select a file and right-click it. Another menu is provided:

Menu Description
New
Project Create a new project.


Menu Description
Data Model Create a new data model.
XPATH Data Create a new XPath data model.
Model
XSL Style Create a new XSL style sheet.
sheet
Map Create a new map component file.
Component
Other Create a new data model, map component file,
or folder.

Open Opens the selected file.


Open With – Note: The first option will be the editor used to open
the selected file based on the extension of the file.
Model Editor Opens the selected file in the Model Editor.
Map Editor Opens the selected file in the Map Editor.
Access Model Opens the selected file in the Access Model
Editor Editor.
Text Editor Opens the selected file in the text editor.
System Editor Opens the selected file in the default system
editor for the file type, such as Notepad.
In-Place Part of Eclipse framework. Reserved for future
Editor use.
Default Editor Opens items in the default editor set for the
type of file selected.

Copy Copies the selected file. Allows pasting to


create a new file.
Paste Pastes the last cut file.
Delete Deletes the selected file.
Move Moves the selected file to a new destination.
Rename Renames the selected file.


Menu Description
Import Imports existing project file into current
workspace.
Export Exports selected file to a specified directory.
Refresh Refreshes the current view to display newly
added files in the folder.
Copy to Copies selected file or folder to another remote
Remote system.
System
Run Displays the run translation dialog box to
Translation allow users to run translation. Only available
on map component files.
Properties Provides system information on the selected
file.


Context Menu

New This command allows you to create a new resource in Workbench.


Select the type of resource to create from the submenu.

Go Into This command displays a new hierarchy in the Navigator view,


with the children of the selected resource as its contents. For
example, if you Go Into a project, the Navigator will be refocused
on the immediate files and folders of the project.

Open This command opens the selected resource. If the resource is a file
that is associated with an editor, then Workbench launches the
associated internal, external, or ActiveX editor and opens the file in
that editor.

Open With This command allows you to open an editor other than the default
editor for the selected resource. Specify the editor with which to
open the resource by selecting an editor from the submenu.

Copy This command copies the selected resource to the clipboard.


Paste This command pastes resources on the clipboard into the selected
project or folder. If a resource is selected, the resources on the
clipboard are pasted as siblings of the selected resource.

Delete This command deletes the selected resource from the workspace.

Move This command moves the selected resource to another location. A


dialog appears, prompting for the destination location to which the
resource is to be moved.

Rename This command allows you to specify a new name for the selected
resource.

Import This command opens the import wizard and allows you to select
resources to import into Workbench.

Export This command opens the export wizard and allows you to export
resources to an external location.

Add Bookmark This command adds a bookmark that is associated with the
selected resource (but not to a specific location within the
resource).

Refresh This command refreshes Workbench's view of the selected resource


and its children. For example, this is used when you create a new
file for an existing project outside Workbench and want the file to
appear in the Navigator view.

Close Project The close project command is visible when an open project is
selected. This command closes the selected project.

Open Project The open project command is visible when a closed project is
selected. This command opens the selected project.

Copy to Remote Copies selected file or folder to another remote system.


system

Add Path To Model Search Order    Adds the selected folder to the Model Search Order.

Team Menu items in the Team submenu are related to version control
management and are determined by the version control
management system that is associated with the project. Eclipse
provides the special menu item Share Project... for projects that are
not under version control management. This command presents a
wizard that allows the user to choose to share the project with any
version control management system that has been added to
Eclipse. Eclipse ships with support for CVS.

Compare With Commands on the Compare With submenu allow you to do one of
the following types of compares:
Compare two or three selected resources with each other
Compare the selected resource with remote versions (if the
project is associated with a version control management system).
Compare the selected resource with a local history state
After you select the type of compare you want to do, you will either
see a compare editor or a compare dialog. In the compare editor,
you can browse and copy various changes between the compared
resources. In the compare dialog, you can only browse through the
changes.

Replace With Commands on the Replace With submenu allow you to replace the
selected resource with another state from the local history. If the
project is under version control management, there may be
additional items supplied by the version control management
system as well.

Generate Schema Generates a schema file from an XML file.

Properties This command displays the properties of the selected resource. The
kinds of properties that are displayed depend on what type of
resource is selected. Resource properties may include (but are not
limited to):
Path relative to the project in which it is held
Type of resource


Absolute file system path, or name of path variable when using


linked resources
Resolved path variable when using a path variable for a linked
resource
Size of resource
Last modified date
Read-only status
Derived resource status
Execution arguments, if it is an executable resource
Program launchers, if it is launchable
Project dependencies, if any


Narrowing the scope of the Navigator view    By default, the Navigator view shows all resources in your Workbench. You can focus on a subset of resources by temporarily "going into" a project or folder and hiding all other resources.
In the Navigator view, right-click the project or folder that you
want to focus on.
From the pop-up menu, select Go Into.
The Navigator now shows only the contents of the selected project
or folder. The title of the Navigator shows the name of the resource
you are currently looking at. You can use the Back, Forward, and
Up buttons on the Navigator view's toolbar to change the scope.

Sorting resources in the Navigator view    To sort Workbench resources in the Navigator view by name or by file type:
1. On the toolbar for the Navigator view, click the Menu button

to open the drop-down menu of display options.


2. Select Sort.
3. Select the desired sort option.

Showing or hiding files in the Navigator view    You can choose to hide system files or generated class files in the Navigator view. (System files are those that have only a file extension but no file name, for example .classpath.)
1. On the toolbar for the Navigator view, click the Menu button

to open the drop-down menu of display options.


2. Select Filter.
3. In the dialog box that opens, select the checkboxes for the types
of files that you want to hide.
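The definition of a system file used above — an extension with no file name — can be expressed as a small predicate (a Python sketch; the helper name is hypothetical):

```python
def is_system_file(name):
    # A system file has only a file extension and no file name,
    # e.g. ".classpath" (the definition used by this guide).
    return name.startswith(".") and "." not in name[1:]

print(is_system_file(".classpath"))  # True
print(is_system_file("model.mdl"))   # False
```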
In addition, you can restrict the displayed files to a working set.
1. On the toolbar for the Navigator view, click the Menu button

to open the drop-down menu of display options.


2. Choose Select Working Set...


3. Select an existing working set from the list or create a new one
by selecting New...

Trading Partner Trading Partner Navigator displays trading partners set up in the
Navigator Application Integrator™ Profile Database.

To expand each trading partner within Trading Partner Navigator,


click the + icon to the left of the trading partner name. To collapse
the trading partner, click the – icon.


Trading Partner The Trading Partner Attribute Viewer displays each of the fields
Attribute Viewer saved in the Profile Database for the selected trading partner.

Only two of the fields are editable:


Option Description
Target The target data model to be used by the selected
Model trading partner during translation.
Name
Attachment The map component file to be used by the
Name selected trading partner during translation.


Built-Ins Displays all functions that are available within Workbench.

Tab Description
Data Displays all data model functions with a
Model description of what they do and the arguments
Functions passed to them.
XSL Displays all XSL model functions with a
Functions description of what they do and the arguments
passed to them.
String Displays all string functions with a
Functions description of what they do and the arguments
passed to them.
Data Displays all data model structure functions
Model with a description of what they do and the
Structure arguments passed to them.
Functions
Database Displays all database functions with a
Functions description of what they do and the arguments
passed to them.
SQL Displays all SQL functions with a
Functions description of what they do and the arguments
passed to them.
Default Miscellaneous functions that do not fall into any of
the other categories.


Tab Description
Keywords Displays all keywords with a description of what
they do and arguments passed to them if
applicable.
Operators Displays all operators that can be used in data
models with a description of what they do.
Date and Displays all date and time functions with a
Time description of what they do and the arguments
Functions passed to them.
Control Displays all control server functions with a
Server description of what they do and the arguments
Functions passed to them.
All Displays all functions with a description of what
Functions they do and the arguments passed to them.

Message Variables Displays all DMIs, arrays, and variables that are present in the
currently opened data models or map component files.


Tab Description
Sorts the tree elements alphabetically.
Expands/collapses the tree.


Tab Description
Adds a new variable. This inserts a new node
below the Model Variables tree node. By double
clicking on the new node, the new node’s name
can be changed.
Adds a new array. This inserts a new node below
the Model Arrays tree node. By double clicking on
the new node, the new node’s name can be
changed.

Performs Displays all PERFORM declarations of the include files included in


the opened data model.


Problems Displays any errors encountered when opening a data model or


map component file. The DMI(s) in error will be in the color
selected for errors in the Preferences dialog, and the problem will
be listed in the Problems view.

Outline Displays the opened model in Hierarchical format.


Properties Displays the properties for the currently selected file in Navigator.

Remote Site Navigator    Allows you to browse a remote system using FTP.

The remote site navigator provides access to files on remote


systems through an ftp connection. The left hand pane provides a
hierarchical view of the resources contained in the home directory
on the remote system. The right hand pane shows the contents of
the directory selected on the left pane. Double clicking a
workbench artifact will generate its local copy in Navigator view
followed by the opening of the artifact in the respective editor. Any
changes saved to the local copy will also be saved in the original
artifact.


To create a new FTP connection, follow these steps:

1. Right-click in the left-hand pane to open the context menu, then
select New > Target Site.

2. Enter the URL of the FTP site, the username, password, timeout, and
other details in the dialog that appears, and click the Finish
button.


3. The new remote site is created and displayed in the Remote Site
Navigator view. You could click on any folder to view the files.
4. Right click to view the context menu.
The context menu on the panes allows the user to perform the
following:
Create new FTP connections (sites)
Discard existing connections
Edit the properties (username/password) of an FTP
connection
Copy a file/directory to another remote (FTP) location


Open a workbench artifact in the respective editor


Add a remote location to Model Search Order

Note: When you open, modify, and save a file in the Remote Site
Navigator, its timestamp changes. The file takes the timestamp of
the remote machine from which it is fetched.
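What the Remote Site Navigator does over its FTP connection can be approximated with Python's standard ftplib. This is a sketch only, not the product's actual implementation; the function name and the placeholder host/credentials in the comment are assumptions:

```python
from ftplib import FTP

def list_remote_dir(host, user, password, directory=".", timeout=30):
    # Log in, change to the requested directory, and return its file
    # names -- roughly what the right-hand pane of the Remote Site
    # Navigator displays for the selected folder.
    with FTP(host, timeout=timeout) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.cwd(directory)
        return ftp.nlst()

# Example (placeholder values):
# names = list_remote_dir("ftp.example.com", "user", "secret", "models")
```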


Section 4. Creating Map Component


Files (Environments)

This section discusses how to create new map component files and
modify existing map component files, including the recommended
naming conventions for these files.


Defining a Map A key step in mapping data and preparing it for translation using
Component File Application Integrator™ is to create an environment by defining
and saving a map component file. As described earlier, an
environment consists of components that control how the data is to
be translated, such as the input/output files, and models/style
sheets to be used. In a Workbench application, an environment is
referred to as a “map component file,” and the environment
definition is “attached” to the translator.

Recommended When naming the map component files, keep the following
Naming considerations in mind:
Convention
Use “.att” for the suffix.
Do not use the prefix “OT,” since it will conflict with names
already assigned in the Application Integrator™ application.
The prefix “OT” is a reserved prefix for Application
Integrator™ application files. Using it can compromise the
product’s performance.
Use upper- and lowercase letters and underscore “_” only.
Do not use spaces.

Note: When implementing public standard EDI messages, the


Application Integrator™ generic processing method is normally
invoked. The generic processing method appends the suffix “.att”
to the base filenames before attaching the map component file to
the translator.
Although the translator does not require file extensions of “.att”,
the generic method does. So it is recommended that you use the
extension “.att” in all map component file filenames.
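These conventions can be checked mechanically. The following sketch (Python, with a hypothetical helper name) encodes the rules above — the ".att" suffix, no reserved "OT" prefix, and letters and underscores only:

```python
import re

# Letters and underscores only, ending in the ".att" suffix
# (the naming convention recommended in this guide).
NAME_PATTERN = re.compile(r"[A-Za-z_]+\.att")

def is_valid_map_component_name(filename):
    if filename.startswith("OT"):
        return False  # "OT" is reserved for Application Integrator files
    return NAME_PATTERN.fullmatch(filename) is not None

print(is_valid_map_component_name("Invoice_Out.att"))  # True
print(is_valid_map_component_name("OTinvoice.att"))    # False
```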


Defining a New Map The Map Component Editor dialog box is used to define and
Component File modify map component files.
The map component file must have at least the source or target
model defined. It is recommended that you save your map
component files and data models to the models directory specified
in Trade Guide System Configuration or “MyModels” directory
specified at AI Control Server installation time.

To define a map component file


1. From the Workbench File menu, choose New>Map
Component. The New Map Component File dialog is
displayed.

3. Browse and select a value for the Parent Folder. This is where
the map component file will be stored.


4. Enter a name for the new map component file and click Next>.

5. Select the type of models that will be created/used for either


Source or Target.
6. Type in a name for the data model/style sheet, or use the
Browse button to select one.

Note: Do not type double quotation marks (“ ”) around text in


the Map Component Editor dialog box. The system
automatically places the quotes around the necessary text
within the map component file.

Source/Target Model: Enter either the explicit source data


model name or an environment variable name.


Source/Target Access: Select either the explicit source access


model name or the description from the Access drop down
list box.
To enter an exact model name, type (or select from the Browse
button) the name of the data model to be used by this map
component file. (Both source and target data models display in
the list.) Be sure to add the extension “.mdl” for the data model
or “.xsl” for the style sheet.
To reference a model name by an environment variable, enclose
the variable name in parentheses. Multiple variables, or a
combination of string and variable names, can be entered on
the line. Add the extension if it is not specified in the variable
reference. The system automatically concatenates the name.
7. Select the access type for the model being used and click Next>.
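The "(VARIABLE)" reference syntax described in step 6 can be illustrated with a small expansion sketch (Python; the resolver function is hypothetical — the translator performs this substitution internally):

```python
import re

def resolve_model_name(spec, env):
    # Replace each "(VAR)" reference with its value and concatenate
    # the result, as the system does when building the model name.
    return re.sub(r"\(([^)]+)\)", lambda m: env[m.group(1)], spec)

# A name built from a variable plus a literal string:
print(resolve_model_name("(MODELDIR)invoice.mdl", {"MODELDIR": "models/"}))
# models/invoice.mdl
```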


8. Enter any comments you would like added to the map


component file.
9. Enter any Environment Variables to be used by this
environment.
10. Complete the map component file.
a. To save the map component file for the first time, choose the
Finish button. Finish writes the map component file to
disk. The newly created Map Component file is opened in
the appropriate Editor. If the Map Component file already
exists, a prompt is displayed to confirm that you want to
overwrite the existing Map Component file.
b. To exit the New Map Component Wizard dialog box
without saving, choose the Cancel button.
If the model does not already exist, the following dialog is
displayed:

Click Yes to have the model created.


The newly created map component file and Data Model are
opened in the Map Editor work area:
See Section 5. Creating Data Models for EDI and Application Data
for creating data models for EDI and application data.
See Section 10. XML Mapping and Processing for creating Xpath
models and style sheets.


Modifying an Existing Map Component File    An existing map component file can be modified or used as a template instead of creating a new map component file. To modify an existing map component file, select and open a map component file that is similar to the one to be created. To edit a map component file, follow the instructions below and save the file under the same name.

To modify an existing map component file


1. From the Workbench Navigator view, double click the map
component file to be modified.
2. From the Map Editor work area choose the Map Definition
Tab.

3. Make the changes, using the same techniques described in the


section Defining a New Map Component File.

4. Choose Save from the File menu (or icon from the Tool
bar) to save your changes.


To save a Map 1. Activate the Map Editor window of the data model to be saved
Component File under with a new name. (Click the title bar of the window to activate
a new name it.)
2. From the File menu, choose Save As.
3. If the File is a Map Component file with two model files, the
Save As dialog is as follows.

4. If you want to change the names of the files, either browse and
select the name and location or type the new file name in the
box provided. Then save the files.
5. If the files already exist, a dialog appears saying that the files
already exist and asking if you want to overwrite them. Select
the option you want.


6. If you open a single model file in the Map Editor and wish to save
it as an attachment file, check the Save as .att check box at the
far left of the dialog and then provide a valid value for the Map
Component File.

7. Click OK to save the files.

To save all open files Use this procedure to save all files that are open in the work area
appearing in the work using their current filenames. If you want to save any of the files
area (Save All) using a different filename, use the Save As procedure.
• From the File menu, choose Save All.


Section 5. Creating Data Models for EDI


and Application Data

This section discusses how to create traditional models for


Electronic Data Interchange (EDI), and for application data. It also
describes how to define the structure and the attributes of the input
data in a source data model and the structure and attributes of the
output data in a target data model.


Working with
Data Models

Defining a New Data The following are the steps to follow to create a new data model.
Model
When creating a new map component file, the data models are
automatically created if they were not already present. See Section 4.
Creating Map Component Files (Environments) for information on
creating new map component files to create new data models.

To define a new data model


1. From the Workbench File menu, choose New>Data Model.

2. Browse and select a value for the Parent Folder. This will be the
location where the data model will be stored.
3. Enter a file name for the new data model.
4. Select the mode for the data model. This will be either Source
for parsing input data or Target for writing out the output data.
5. Select the type as EDI.


Select XML if you want to create an XML based data


model.
Select From template if you are using an existing data
model as a starting point. Refer to Working with Standard
Data Model for more information on how to create a data
model from a template.
6. Click Next.

7. Select an Access Model to be used with the new data model.


8. Click Finish. The new data model is opened in Model Editor.


Working with The following are the steps to open an existing data model or
Standard Data Models standard data model. Standard data models are provided with EDI
Plug-ins. These models provide the structure for EDI documents
and can be used as a starting point for mapping. You need to
simply create the other model for the application data, and add
rules.

To open an existing data model


1. From the Workbench Navigator view, double click the data
model you wish to open.
2. The model is displayed in the model editor.


To create a data model from a Standard Data Model


1. From the Workbench File menu, choose New.
2. From the New menu, choose Data Model.

3. Browse and select a value for the parent folder. This will be the
location where the map component file will be stored.
4. Enter a file name for the new data model.
5. Select the mode for the data model. This is either Source for
parsing input data or Target for writing out the output data.
6. Select the From template radio button under the Type label.
7. Click Next.


8. Select the standard data model to be used as the template.


9. Click Finish. The new data model will be opened in Model
Editor.

Note: Standard data models are distributed on each Plug-In CD
where applicable. The standard data model must be copied from
the CD to your system before it can be used.

Working with XML Refer to Generating Data Models from Schemas for details.
based Data Models

Converting SEF Follow these steps to convert a SEF format file to a data model.
Format to a Data
Model


1. In Workbench, click File >Import.


The following screen is displayed.

2. In the Import dialog, select the Sef to Mdl option and click
Next.
3. In the Sef To Mdl Import Wizard, specify the input file to use
(.sef) and the output directory in which the output files will be
generated. Choose the options for model type:
Model Direction: Source, Target


4. Click Finish to run the conversion.


5. The console view presents you with the steps and their results
during conversion.


6. A dialog appears at the end, specifying the directory where the


output files are contained.
7. Use the generated standard files to create model files.

Note: Only X12- and EDIFACT-based SEF documents are
currently supported.


Converting COBOL Follow these steps to convert a COBOL Copy Book file to a Data
Copy Book Format Model.
to a Data Model
1. In Workbench, click File > Import.
The following screen is displayed.

2. In the Import dialog, select the Copy Book to Mdl option and
click Next.
3. In the Copy Book To Mdl Import Wizard, specify the Input file
to use (.lib) and the output Mdl file to be generated. Choose the
options for model type:


• Model Direction: Source, Target


• Platform: Select your platform from the drop down.
• Data Type: Data is in Least Significant Bit (LSB) to Most
Significant Bit (MSB) order or Data is in Most Significant
Bit (MSB) to Least Significant Bit (LSB) order. When you
select the platform, the Data Type gets selected by
default. You can change it if required.

Note: Depending on the platform selected, the


computational/binary data representation is taken care of in the
generated model.


4. Click Finish to run conversion.


5. The console view presents you with the steps and their results
during conversion.
6. At the end of conversion the output file is opened in the
associated editor.

7. Add other mapping rules to the model file as required.

Defining a Data Model The following sections describe how to add, rename, copy,
Item duplicate, and delete data model items, and how to assign item
types and other attributes to them.

Adding Data Model


Items

Note: The default hierarchy level for a new item is the same as the
currently selected data model item.

Note: For this section, the Model Editor is assumed to be the only open editor.

To add a new data model item


1. In the Model Editor window (in the Overview page), select the
data model item above or below which you would like to add a
new data model item.
2. To append (add below) a data model item, do the following:

Toolbar Icon – Click the toolbar Insert Below icon.


3. To insert (add above) a data model item, do the following:

Toolbar Icon – Click the toolbar Insert Above icon.


4. To further define the data model item, click the row containing
the data model item and specify the additional attributes
required. Refer to the following sections for more information.

Changing Data Model Each data model item you add has a default name of
Item Names NewDMI_<date>_<time>.
To change the name of a data model item
1. In the DMI Editor area, highlight the name and type over the
default or existing name with a new name.
2. To accept the new name, click outside the data model item name
box.
Copy and Paste
To copy and paste a data model item
1. In the Overview of Data Model File area, highlight the name

and select the Copy icon.


2. Select the data model item to insert the new item below, and

select the Paste icon.


3. The new data model item name will be the same as the one
copied with _0001 appended to the end.
Duplicating Data Model
Items
To duplicate a data model item
1. In the Overview of Data Model File area, highlight the name

and select the Duplicate icon.


2. The name of the new DMI will be the same as the original with
_0001 appended to the end.


Deleting a Data
Model Item
To delete a data model item
1. In the Overview of Data Model area, highlight the name and
select the Cut icon.

Assigning a Data For each data model item you add to your model, you must assign
Model Item Type an item type. The options you view in the Item Type selection list
are based on the access model associated with your model.

Hint: If you are unsure of the exact definitions of the item types,
you can view the access model associated with your model. To do
this, double click on the access model in the Navigator view.

Data Model Item There are four major data model item structures: group, tag,
Structures defining, and container items. One or more item type names may
be associated with each of these structures, based on your access
model. All data model items default to the item type Group. Once
you define the item type, the leftmost icon in the Layout Editor
(referred to as the access icon) will change to reflect the major
structural type of the item.

Group
Tag
Defining
Container


To assign an item type


1. In the DMI Editor area, select the Access Type column for the
data model item you want to modify.

Note: Refer to the appendix in each standards


implementation manual, such as the ASC X12 Standards
User’s Guide, for a list of item types that apply to the
standard.

2. Click the Access Type arrow. A selection list appears.

3. Select the item type from the list.


4. To accept the selection, click outside the Item Type box or press
Tab to move to the next box. The icon of the Data Model in the
Model Tree changes (if applicable) to reflect the item type
change.


Assigning Data Model


Item Attributes

Hint: In the Model Editor work area, there are two tabs on which
attributes of a data model item can be modified. On the Overview
tab, you work with one data model item at a time, by selecting it
in the Overview of Data Model view. On the Properties tab, you
can modify all data model items.

Occ Min/Max
Setting Minimum and Maximum Occurrence
The minimum and maximum occurrence value controls the
number of times a data model item must and can be present in the
data stream. The minimum and maximum occurrence value of a
new data model item is user-defined. The default is 0. The
minimum occurrence value must be less than or equal to the
maximum occurrence value. A minimum occurrence of 0 indicates
that the data model item is optional. The maximum value can be
set with the asterisk (*) wild card to specify a variable amount.

To modify the minimum and/or maximum occurrence values


1. Select the data model item and click in the Occ Min or Occ Max
box.
The Minimum and Maximum boxes display as shown below:

2. For each box, type a numeric value that specifies the minimum
and maximum occurrence.
3. To accept the values entered, click outside the box or press Tab
to move to the next option.


Size Min/Max
Setting Minimum and Maximum Size
The minimum and maximum size value controls the data model
item’s field size in the data stream. The minimum and maximum
size value of a new data model item is user-defined. The minimum
size value must be less than or equal to the maximum size value.
The size maximum value cannot exceed 4092.

Note: Size is not available for numeric Date or Time fields. You
must specify the exact size through correct masking in the Format
box. See the next section for instructions on formatting.

To modify the minimum and/or maximum values


1. Select the data model item to be modified and click in the Size
Min or Size Max box.
The Minimum and Maximum boxes display as shown below:

2. For each box, type a numeric value that specifies the minimum
and maximum size allowable for data that is mapped to this
item.

Note: The minimum must be greater than zero.

3. To accept the values entered, click outside the box or press Tab
to move to the next option.


Format
Defining the Data Model Item’s Format
The data model item format box is only available if the data model
item is defined as a date, time, or numeric item type.

Note: See Appendix A. Application Integrator Model Functions in


Workbench User’s Guide-Appendix for examples of possible
numeric, date, and time formats.

To add a format
1. Select the data model item to be modified and select the Format
box.
2. Type the format for the date, time, or numeric field using the
numeric and sign masking characters described in this section,
for example, you might type “MM/DD/YYYY” for a date item.

For a numeric field, be sure to consider the decimal placement,


positive or negative sign, and alignment desired.
3. To accept the new format, click outside the box or press Tab to
move to the next option.

Numeric Formatting and Masking Characters


The tables on the following pages list the numeric formatting
characters for floating point, whole numbers, numeric signs
(positive or negative), decimal characters, and alignment. Consider
the following limitations to numeric handling before you set the
numeric formats.


A. Numeric Handling Description


Application Integrator™ supports unlimited numeric lengths in the
parsing and constructing of data within your input or output
streams. During these processes, numeric values are handled as
strings, conforming to the format you set up in your data models.
There is, however, a limit to the number of digits that Application
Integrator™ supports during computation processing. In these
cases, Application Integrator™ converts the string into a numeric.
The following is a brief description of this limitation:
Application Integrator™ limit per number is 15 digits, not
including the decimal character, sign character, or triads
(thousand separator character). This limit applies to each
element in the equation and the result.
For example, if you have a number greater than 15 digits
such as 1234567890123456, the system returns
1234567890123458, where the 16th and greater positions are
populated with random numbers from the memory stack.
If a number has more than 6 digits after the decimal point, it
will round to the 6th decimal place. For example,
0.123456789 returns in memory 0.123457
0.123454321 returns in memory 0.123454
Decimal values must be preceded either by a whole number
value or by zero (0). Otherwise, a syntax error occurs on
parsing.
For example,
VAR VALUE=.12345 + .12345 returns Error message
VAR VALUE=0.12345 + 0.12345 returns 0.24690
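As a point of comparison, these limits behave much like ordinary double-precision arithmetic. The following Python sketch is purely illustrative (it is not Application Integrator code) but reproduces the rounding behavior described above:

```python
# Illustrative sketch only -- NOT Application Integrator code, just an
# analogy to its documented computation limits.

def to_computed_number(text: str) -> float:
    """Convert a parsed numeric string to a number, rounding to
    6 decimal places as described for computation processing."""
    return round(float(text), 6)

# More than 6 digits after the decimal point are rounded away:
assert to_computed_number("0.123456789") == 0.123457
assert to_computed_number("0.123454321") == 0.123454

# Beyond roughly 15 significant digits, double-precision integers can
# no longer be represented exactly (2**53 is the first such gap):
assert float(2**53 + 1) == float(2**53)
```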


Hint: Should your application require the computation of


extremely large numeric values or the carrying of lengthy
decimal values, the Application Integrator™ User Exit
Extension product provides support for user-defined
functions to handle these numerics and/or equations.
These functions can be invoked like standard Application
Integrator™ functions during data modeling.

B. Floating Explicit Decimal


In floating explicit decimal, the format does not define the position
of the decimal. The data stream must contain a decimal in order to
output a decimal. Use the following masking characters to format
floating explicit decimals. These examples depict values being
target formatted.

Mask Example
N Non space-taking sign. Includes a negative sign for a
negative value. No character is used to indicate a
positive value.
Example:
–0.12 “NRRRRR” “–.12”
+0.12 “NRRRRR” “.12”
R Floating number with an explicit decimal when required
Example:
0.12 “RRRRR” “.12”
r Used with “R” to indicate decimal precision
Example:
0.12 “RRRrrrr” “.1200”
0 Used with “R” to specify a whole zero digit is required
for a decimal value
Example:
0.12 “0RRRRR” “0.12”
:n Minimum size, where “n” is from 1 to 9
Example:
0.12 “0RRRRR:5” “000.12”


Mask Example
:, Decimal notation defined in format
Example:
0.12 “RRRRR:,” “,12”
:rn Maximum decimal size, where “n” is from 1 to 5.
Example:
0.12 “RRRRR:r3” “.120”

C. Notes on the Floating Explicit Decimal Masking Characters (Rr0)


The following notes pertain to floating explicit decimal masking
characters R, r, and 0:
• Applies to the #NUMERIC and #NUMERIC_NA access
model functions.
• “0” used in the format must precede the “R”s, for example,
“0RRRR.”
• “0” is not counted in the length.
• “r”s must be to the right of all “R”s.
• It is invalid to use the following in the format: period (.),
comma (,), and caret (^).
• The automatic insertion of the decimal notation character is
not counted in the length.
• The decimal notation character is only output when needed.
• When the decimal notation is not defined within a format
(“RRRRR:,”), the decimal will default from
SET_DECIMAL( ) if set, or else will default to the “.”
character.
• To define both minimum size and decimal notation, be sure
to use a colon after each; for example, “RRRRR:2:,”.
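The R, r, 0, and :n behaviors above can be sketched as a toy formatter. This is an illustrative approximation only, not product code, and it covers just the masks shown in the table:

```python
def r_mask(value: float, mask: str) -> str:
    """Toy emulation of the floating explicit decimal masks.
    Supports a leading "0", "R"s, trailing "r"s, and ":n" size."""
    body, _, opt = mask.partition(":")
    min_digits = int(opt) if opt.isdigit() else 0
    precision = body.count("r")          # trailing r's fix decimal precision
    keep_whole_zero = body.startswith("0")
    if precision:
        text = f"{value:.{precision}f}"
    else:
        text = f"{value:f}".rstrip("0")  # decimal digits only when needed
    whole, _, frac = text.partition(".")
    if not keep_whole_zero and whole == "0":
        whole = ""                       # drop the whole zero digit
    while len(whole) + len(frac) < min_digits:
        whole = "0" + whole              # pad up to the minimum size
    return whole + ("." + frac if frac else "")

assert r_mask(0.12, "RRRRR") == ".12"
assert r_mask(0.12, "RRRrrrr") == ".1200"
assert r_mask(0.12, "0RRRRR") == "0.12"
assert r_mask(0.12, "0RRRRR:5") == "000.12"
```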


D. Other Than Floating Explicit Decimal


In non-floating decimal, the format defines the implied or explicit
position of the decimal. As per the format, the value will always
contain the format-defined number of decimal places. An explicit
decimal defined in the format requires the decimal to be
parsed/constructed in the data stream. An implied decimal
defined in the format requires the decimal to not be
parsed/constructed in the data stream. Use the following masking
characters for numerics other than floating explicit decimals. The
examples below depict values being target formatted.

Mask Example
9 Zero fill whole leading or decimal trailing zero digits
Examples:
123 “99999” “00123”
1.1 “99.99” “01.10”
Z Space fill whole leading or decimal trailing zero digits
Examples:
123 “ZZZZZ” “ 123”
1.1 “ZZ.ZZ” “ 1.1 ”
F Suppress whole leading or decimal trailing zero digits
(variable length)
Examples:
000123 “FFFFF” “123”
1.100 “FF.FF” “1.1”
$ Monetary symbol, treated like the “F” mask character,
but inserts the dollar sign at the beginning of the string
(variable length)
Examples:
134567 “$ZZZ,Z99.99” “$ 134,567.00”
134567 “$FFF,F99.99” “$134,567.00”
1.25 “$$$,$$$.99” “$1.25”
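For the whole-number cases in the table above, the 9, Z, and F masks can be sketched as follows (an illustrative approximation, not product code; decimals, triads, and the “$” mask are omitted):

```python
def whole_mask(value: int, mask: str) -> str:
    """Toy emulation of the 9, Z, and F masks for whole numbers."""
    text = str(value)
    if mask[0] == "9":                   # zero-fill leading digits
        return text.rjust(len(mask), "0")
    if mask[0] == "Z":                   # space-fill leading digits
        return text.rjust(len(mask), " ")
    if mask[0] == "F":                   # suppress leading zeros
        return text.lstrip("0") or "0"   # (variable length)
    raise ValueError("unsupported mask")

assert whole_mask(123, "99999") == "00123"
assert whole_mask(123, "ZZZZZ") == "  123"
assert whole_mask(123, "FFFFF") == "123"
```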


E. Sign Masking Characters


Use the following masking characters to return the appropriate
sign (or no sign):

Sign (Masking)
Character Explanation Examples
N Displays a negative sign for a Negative:
negative value. -123 “99999N” “00123-”
No character is used to indicate a Positive:
positive value. 123 “99999N” “00123”
- Displays a negative sign for a Negative:
(use the hyphen negative value. -123 “99999-” “00123-”
character) Displays a space for a positive Positive:
value. 123 “99999-” “00123 ”
None No character is used to indicate a Negative:
positive or negative value. -123 “99999” “00123”
Positive:
123 “99999” “00123”
+ Displays a negative sign for a Negative:
(use the plus negative value. Displays a plus -123 “99999+” “00123-”
sign character) sign for a positive value. Positive:
123 “99999+” “00123+”
_ Displays a negative sign for a Negative:
(use the negative value. -123 “99999_” “00123-”
underscore Displays a zero (0) for a positive Positive:
character) value. The zero is dropped when 123 “_99999” “000123”
there are only whole digits and is 123 “999.99_” “001.230”
right justified. 123 “99999_” “000123”
(right justified)
A The ASCII overpunch table is Negative:
(must be placed used to indicate a negative or -123 “99999A” “00012s”
in the rightmost positive value. Positive:
position) 123 “99999A” “000123”


Sign (Masking)
Character Explanation Examples
E The EBCDIC table is used to Negative:
(must be placed indicate a negative or positive -123 “9999E” “0012L”
in the rightmost value. Positive:
position) 123 “9999E” “0012C”

F. Notes Pertaining to Sign Masking Characters (-_+NAE)


The masking format character for a sign is only valid at the
beginning or end of the format, except for “A” and “E”
which can only be placed at the end of the format.
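The trailing sign masks N, “-”, and “+” can be sketched as follows (illustrative only, not product code; the overpunch masks “A” and “E” are omitted):

```python
def apply_sign(digits: str, sign_char: str, negative: bool) -> str:
    """Toy emulation of the trailing N, -, and + sign masks."""
    if sign_char == "N":                      # non space-taking sign
        return digits + ("-" if negative else "")
    if sign_char == "-":                      # space for positive values
        return digits + ("-" if negative else " ")
    if sign_char == "+":                      # explicit sign either way
        return digits + ("-" if negative else "+")
    return digits                             # no sign character

assert apply_sign("00123", "N", True) == "00123-"
assert apply_sign("00123", "N", False) == "00123"
assert apply_sign("00123", "-", False) == "00123 "
assert apply_sign("00123", "+", False) == "00123+"
```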
G. Decimal Masking Characters

^ The caret (^) is used for implied decimal position


formatting
Example:
1.2 “99^99” “0120”

Explicit decimal notation can be defined in several ways:


1. A colon followed by the “.” or “,” character defines the decimal
notation within the format string.
Example:
“RRRRR:.” defines “.” as the decimal notation.
2. A single occurrence of “,” or “.” in a format defines it as
decimal notation.
Examples:
“ZZZ.ZZZ” defines “.” as decimal notation.
“FFF,FFF” defines “,” as decimal notation.
3. Multiple occurrence of the same character denotes a triad, with
the other character (. or ,) defined as the decimal notation.
Examples:
“ZZZ,ZZZ,ZZZ” defines “.” as decimal notation.
“FFF.FFF.ZZZ” defines “,” as decimal notation.


4. One occurrence of each character (,.), defines the rightmost


character as the decimal notation.
Examples:
“ZZZ,ZZZ.ZZZ” defines “.” as decimal notation.
“FFF.FFF,ZZZ” defines “,” as decimal notation.
H. Notes Pertaining to Decimal Masking Character
When no decimal digits are being output, the decimal
notation is not output.
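The caret’s implied decimal behavior amounts to scaling the value out of its decimal places and zero-padding, as this illustrative sketch (not product code) shows:

```python
def implied_decimal(value: float, whole: int, frac: int) -> str:
    """Toy emulation of the caret (^) implied decimal mask;
    e.g. "99^99" corresponds to whole=2, frac=2."""
    scaled = round(value * 10 ** frac)   # shift the decimal point out
    return str(scaled).rjust(whole + frac, "0")

assert implied_decimal(1.2, 2, 2) == "0120"   # 1.2 with "99^99"
```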
I. Binary and Packed Decimal Masking Characters
In order to take advantage of a computer central processing unit’s
(CPU) processing cycles, the significant byte order must be taken
into account. The most significant byte (MSB) stores data in the
low order and least significant byte (LSB) stores data in the high
order. In simplest terms, this means that MSB data is read from
right-to-left and LSB data is read from left-to-right. Because of the
MSB versus LSB situation, data models built using MSB cannot be
directly used on computers with LSB. Also, the profile databases
can be directly copied from MSB to MSB computers or from LSB to
LSB computers. However, if the databases are to be copied from
MSB to LSB, they must be first exported and then imported using
Trade Guide.
The purpose of the binary and packed decimal masking characters
is to allow data to be read and processed between different CPU
processors. Packed decimal and binary data formats are supported
by Application Integrator™ and enhance its use with legacy
applications (such as COBOL application data). For example, data
created with a Hewlett Packard PA-RISC computer system could
be read on an Intel based computer system.
When you are modeling a source data model, you must know
where the input data will be created.
If the input data is created with an Intel-based CPU, you
would use ‘p’ or ‘b’ because it is LSB.
If the input data is created with a non-Intel-based CPU, you
would use ‘P’ or ‘B’ because it is MSB.
When you are modeling a target data model, you must know
where the output data will be going.


If the output data is going to an Intel-based CPU, you would


use ‘p’ or ‘b’ because it is LSB.
If the output data is going to a non-Intel-based CPU, you
would use ‘P’ or ‘B’ because it is MSB.
The following table shows various types of platforms and the
formats used for each. Use this table to determine the masking
character for binary and packed decimal numbers in your inbound
or outbound data models.

MSB LSB
Platform Binary Packed Binary Packed
Intel/NT b p
Intel/Linux‡ b p
HP PA–RISC, B P
Itanium
Sun SuperSparc B P
IBM B P
IBM PowerPC‡ B P
SGI MIPS B P
‡ As of date of publication, Application Integrator™ is not available on these
platforms.

The translator converts the formatted input or output into the


Application Integrator™ internal numeric format. This means that
negative numbers are preceded with a hyphen (-). When the
number includes a decimal notation character, the decimal is
explicit. When a fractional number occurs, a leading zero is placed
before the explicit decimal. The character set for numerics is 0–9,
“.”, “-”.

Note: Application Integrator™ does not support unsigned


numerics.

Binary data can be stored in 1, 2, or 4 bytes. This is 8, 16, and 32


bits, respectively. Therefore, the data modeler would represent
binary data as:


MSB Mask LSB Mask Numeric Value Range


B b -128 to +127
BB bb -32,768 to +32,767
BBBB bbbb -2,147,483,648 to +2,147,483,647
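The B/b distinction is ordinary byte order (endianness). Python’s standard struct module produces the same two representations, as this illustrative sketch shows:

```python
import struct

value = -123
msb = struct.pack(">h", value)   # big-endian 2-byte binary, like "BB"
lsb = struct.pack("<h", value)   # little-endian 2-byte binary, like "bb"

# Same value, byte order reversed:
assert msb == lsb[::-1]
assert struct.unpack(">h", msb)[0] == struct.unpack("<h", lsb)[0] == -123
```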

Packed decimal or Comp data stores data in 1 to 7 bytes. Seven


bytes will hold a decimal value containing 13 digits, which is the
maximum number of numeric characters allowed for Application
Integrator™.

MSB Mask LSB Mask Length of Numeric Storage Required


P p 1 1
PP pp 2 2
PP pp 3 2
PPP ppp 4 3
PPP ppp 5 3
PPPP pppp 6 4
PPPP pppp 7 4
PPPPP ppppp 8 5
PPPPP ppppp 9 5
PPPPPP pppppp 10 6
PPPPPP pppppp 11 6
PPPPPPP ppppppp 12 7
PPPPPPP ppppppp 13 7


For example, to format a field for mainframe data that will contain
the packed values of +123, -123, and 123, you would use the format
‘PP’. The translator would read and store the values as follows:

Value stored in Hexadecimal COBOL Picture Clause


two bytes Value
+123 12 3C S9999 COMP-3
-123 12 3D S9999 COMP-3
123 12 3F 9999 COMP-3
(unsigned is not supported)

The value of +1234567890 stored in six bytes and modeled as


PPPPPP, would be:

Value stored Hexadecimal Value COBOL Picture Clause


in six bytes
+1234567890 01 23 45 67 89 0c S999999999 COMP-3
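The packed-decimal encoding shown in these tables can be sketched in a few lines (illustrative only, not product code; MSB nibble order, and unsigned values are not handled, matching the note above):

```python
def pack_comp3(value: int, nbytes: int) -> bytes:
    """Toy packed-decimal (COMP-3) encoder: two digits per byte,
    sign in the final nibble (C = positive, D = negative)."""
    sign = 0xD if value < 0 else 0xC
    digits = str(abs(value)).rjust(nbytes * 2 - 1, "0")
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

assert pack_comp3(123, 2) == bytes([0x12, 0x3C])    # +123 -> 12 3C
assert pack_comp3(-123, 2) == bytes([0x12, 0x3D])   # -123 -> 12 3D
assert pack_comp3(1234567890, 6) == bytes(
    [0x01, 0x23, 0x45, 0x67, 0x89, 0x0C])           # 01 23 45 67 89 0C
```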

J. Other Masking Characters


Use the following masking characters for justification, triads, and
literals:

Mask Usage
:L Left justify
:R Right justify
Examples:
12 “ZZZZZ.ZZ” “ 12 ”
12 “ZZZZZ.ZZ:L” “12 ”
12 “ZZZZZ.ZZ:R” “ 12”
triads “,” or “.” can be used with “9”, “F”, “Z”, “$”, but
not with “R” for the thousand position placement
character.
@ Escape literal characters defined within the format.
(Escape) Example:
“@For: $ZZ,ZZZ” escapes the “F” literal.


K. Notes on Other Formatting Characters


Multiple colon formatting characters can be combined in a
format string, but each must be separated with a colon. For
example: “ZZZZ:L:,” not “ZZZZ:L,”

L. Date Masking Characters (#DATE, #DATE_NA)


Use the following masking characters to establish a date format:

Mask Usage
M Date location for month, requires two Ms
Example:
19940902 “MM/DD/YY” “09/02/94”
D Date location for day of month, requires two Ds
Example:
19940902 “DD/MM/YYYY” “02/09/1994”
Y Date location for year, requires one, two or four Ys
Example:
19940902 “YMMDD” “40902”
m Replaces leading month digit (if zero) with space
Example:
19940902 “mM/DD/YY” “ 9/02/94”
d Replaces leading day digit (if zero) with space
Example:
19940902 “dD/MM/YY” “ 2/09/94”
0 Defines a date of all zeros to be constructed
(#DATE_NA)
y Date location for variable length year must be in this
form: “yyYY”
<space> A space “ ” as a leading character in a mask defines a
date of all spaces to be parsed or constructed
(#DATE_NA)
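A few of the date masks above can be sketched as simple substitutions on the 8-digit internal date (illustrative only, not product code; the single-Y, lowercase, and variable-length masks are omitted):

```python
def date_mask(internal: str, mask: str) -> str:
    """Toy emulation of the basic date masks. `internal` is the
    8-digit internal date YYYYMMDD; supports MM, DD, YY, YYYY."""
    year, month, day = internal[:4], internal[4:6], internal[6:8]
    return (mask.replace("YYYY", year)
                .replace("YY", year[2:])
                .replace("MM", month)
                .replace("DD", day))

assert date_mask("19940902", "MM/DD/YY") == "09/02/94"
assert date_mask("19940902", "DD/MM/YYYY") == "02/09/1994"
```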


M. Notes on Date Masking Characters


Source Processing
When using a variable length year (“yyYY”), literals may be
used in the format for masking; however, no escape
characters may be used. This format, “The @date is:
yyYYMMDD” is not permitted. This format, “Birth:
yyYYMMDD” is correct.

N. Time Masking Characters (#TIME, #TIME_NA)


Application Integrator™ internal time format consists of 8 digits for
valid hours, minutes, seconds, and decimal seconds. Hours and
minutes must be specified in the mask (a mask must be at least 4
digits). Use the following masking characters to establish a time
format:

Mask Usage
H Time location for mandatory hours, requires two Hs
(Required) Example:
120959 “HH:MM:SS” “12:09:59”
M Time location for mandatory minutes, requires two
(Required) Ms
Example:
120959 “HH:MM:SS” “12:09:59”
S Time location for mandatory seconds, requires two
Ss
Example:
120959 “HH:MM:SS” “12:09:59”
s Time location for optional seconds, requires two s’
(Source Example:
only) 1209 “HH:MM:ss:” “12090000”
120959 “HH:MM:ss” “12095900”
D Time location for mandatory decimal seconds,
requires two Ds
Example:
12095900 “HH:MM:SS:DD” “12095900”


Mask Usage
d Time location for optional decimal seconds, requires
(Source two ds
only) Example:
120959 “HH:MM:SS:dd” “12095900”
1209591 “HH:MM:SS:dd” “12095910”
12095912 “ HH:MM:SS:dd” “12095912”
<space> A space “ ” as a leading character in a mask defines
a time of all spaces to be parsed or constructed
(#TIME_NA). The value parsed and passed back to
the source data model will be spaces, not zeros.

O. Notes on Time Masking Characters


Source Processing
A time parsed by the source access model is supplied back
to the source data model in the Application Integrator™
internal format of 8 digits, irrespective of whether the time
was parsed as 4, 6, 7, or 8 digits. The additional digits of 0
are added to the parsed value to construct the internal
format.
A minimum of 4 masking characters is required. A mask
must be 4, 6, or 8 characters in length, not counting the
<space> mask character.

Target Processing
H, M, S, and D are the target formatting characters. Use of
the source masking characters ‘s’ or ‘d’ will be taken as
literals and output as such, for example, “12:14:ss:dd.”
The value received is first converted to an 8-digit number by
adding trailing zeros and then output based on the format
definition. If the value is a single digit (e.g., “2”), a leading
zero is first inserted before the trailing zeros are added
(e.g., “02”).
A value of more than 8 digits generates error code 146.
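The conversion to the 8-digit internal time described above can be sketched as follows (illustrative only, not product code):

```python
def to_internal_time(value: str) -> str:
    """Toy conversion to the 8-digit internal time (HHMMSSDD):
    a single-digit value first gets a leading zero, then trailing
    zeros are added up to 8 digits."""
    if len(value) == 1:
        value = "0" + value
    return value.ljust(8, "0")

assert to_internal_time("1209") == "12090000"     # HHMM parsed
assert to_internal_time("120959") == "12095900"   # HHMMSS parsed
assert to_internal_time("2") == "02000000"        # single-digit value
```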


Match Value Setting a Match Value


Setting a match value is optional and is only available on a tag data
model item.
The Match Value box takes a literal value and has a maximum size
of 15 characters when using OTFixed.acc as the access model; or a
maximum size of 3 characters when using OTX12S.acc,
OTX12T.acc, OTEFTS.acc, OTEFTT.acc, OTANAS.acc or
OTANAT.acc as the access model.
During processing of the source data model, the value in the Match
Value box is compared with the characters at the beginning of the
record in the input stream. If the match value is not encountered,
processing continues at the next record/segment.
During processing of the target data model, the value in the Match
Value box will be constructed in the output stream at the beginning
of the record.

Note: To define the match value as case insensitive, preface the


value with a caret (^), for example, “^from.” The “^” character is
not counted as a character during parsing.

To define a data model item’s match value


1. Select the tag to be modified and select the Match Value box.

2. Type the appropriate match value.


3. To accept the value, click outside the box or press Tab to move
to the next box.


Verify Specifying a Verification List


The Verify box is only available on a defining item. In this box, you
enter the name of the list against which the data for this item will
be verified. The verification list is created via the Trade Guide
from the Standards dialog box of the Xrefs/Codes menu. Refer to
the Trade Guide on-line help for details on how to enter these lists.
Besides specifying this list, verification requires additions to the
data model rules: the lookup key must be specified (using the
environment keyword variable “LOOKUP_KEY”), and the
appropriate lookup value must be defined in the source and/or
target data models (using the functions “DEF_LKUP( )” or
“LKUP( )”) for the verification list values to be found in the
Profile Database. For details, see the explanation of the keyword
“LOOKUP_KEY” and the functions “DEF_LKUP( )” and
“LKUP( )” in Appendix A. Application Integrator Model Functions in
Workbench User’s Guide-Appendix.

To define a data model item’s verification list


1. Select the data model item to be modified and select the Verify
box.

2. Type the name of the list.


3. To accept the list entry, click outside the box or press Tab to
move to the next box.


File Specifying a Secondary Input or Output File


Workbench supports the ability to read from or write to a
secondary file during either inbound or outbound processing. The
File option can alter the input or output stream within an
environment. This option provides a means to parse from or
construct data to a second file. For example, you could specify that
selected output go to a secondary file for analyzing or reporting
purposes.
This feature is set up by specifying a file (or an environment
variable) to read from/write to during data modeling.
The following parameters apply to using the File option of the
Layout Editor:
• The File option is only available for group items in either a
source or target data model.
• On the source side, the File option specifies a file to read
from. The file is opened upon the initial read of the
environment and read continuously until the source data
model processing is complete. If the secondary file is not
found on the initial read of the model, an error occurs.
• On the target side, the File option specifies a file to construct.
• The I/O name specified in the File value entry box for the
group item is resolved in the following sequence:
1. The name is determined at model parsing time rather than
at execution time; the name cannot be altered within the
data model in which it is used.
2. The system attempts to resolve the I/O name by treating it
as a user-defined environment variable. If an environment
variable of the same name does not exist, the filename is
taken as a string literal.
• All items hierarchically contained within the group item are
parsed/constructed as per the specified input/output
stream. The parsing/constructing of the data continues
until control returns to the parent data model item of the
group data model item.
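The I/O name resolution sequence can be sketched as follows. This is an illustrative Python analogy of the documented behavior, not the product's implementation; the variable name and paths are hypothetical:

```python
import os

def resolve_io_name(name: str) -> str:
    # Mirrors the documented sequence: treat the File value as a
    # user-defined environment variable first; if no variable of
    # that name exists, take the name as a literal filename.
    return os.environ.get(name, name)

# Hypothetical environment variable set up before translation.
os.environ["REPORT_FILE"] = "/tmp/report.out"

print(resolve_io_name("REPORT_FILE"))  # resolved via the environment
print(resolve_io_name("audit.log"))    # taken as a string literal
```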


• Outside of the group item, the system uses the
environment’s specified input/output stream (specified in
the map component file or at the command line) during
parsing/constructing, unless the File option is set for
another group item.
• An attached environment specified within a “File” group
item inherits the previous environment’s specified
input/output stream, not the secondary stream specified by
the File group item.
• In a single layered environment, within a File group item,
the data is appended to the specified output stream.
• In a multiple layered environment, within a File group item,
the data is overwritten in the specified output stream, since
the File option is reprocessed each time the environment
where the File group item is specified is reattached.

Note: Refer to the Understanding Environments (Map Component
Files) section for more details on environments.

To specify a secondary input/output file


1. Select the group item to be modified and click in the File box.

2. Type the appropriate filename (including the full path, if
necessary).
3. Click outside the box to accept the value.

Sorting Defining Item Output


This option provides a means to reorder a section of the output
stream for reporting or other purposes. The output for the group
will be in the order based on the sort order (primary, secondary,
and so on) specified. A sorted group can be output to a secondary
file. See the previous section on the File option for details.
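The primary/secondary sort order described above can be sketched conceptually. This Python analogy is illustrative only; the record layout and field names are hypothetical, not AI data model items:

```python
# Conceptual sketch of primary/secondary sort order: records in a
# group are ordered first by the primary sort item, then by the
# secondary. Field names here are hypothetical.
records = [
    {"PartNo": "B2", "Qty": 5},
    {"PartNo": "A1", "Qty": 9},
    {"PartNo": "A1", "Qty": 2},
]

# Primary sort on PartNo, secondary sort on Qty, as a key tuple.
ordered = sorted(records, key=lambda r: (r["PartNo"], r["Qty"]))
print(ordered)
```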


Note: The Sort option is available on items within a group in a
target data model only. This option provides a method of sorting
selected defining items associated within (children of) the group
item.

To define a group item’s sort value


1. Select the group item to be modified and select the Sort box.

2. Click the ellipsis (…) that appears in the box to open a dialog
box that allows you to select the sort sequence of the defining
items within the selected group. The Sort dialog box displays
two areas labeled “List” and “Sort.”

3. From the List box, select each defining item by which to sort
the data model items in the group. To place the item in the Sort
box, choose the >> button. To remove an item from the Sort,
select it and choose the << button, returning it to the List box.
The first item you select is the primary sort, the second item
becomes the secondary sort, and so forth.


4. Choose the OK button to save your sort order for the group or
choose the Cancel button to return to the Layout Editor
window without specifying a sort order.
Once you return to the Layout Editor and select another area of
the window, the sort order appears in the Sort box.

To review or edit the complete list of defining items (since only
the first few characters of the primary sort appear in the box),
select the Sort box and click the ellipsis to return to the Sort
dialog box.
Data Hierarchy
When a new data model item is inserted or appended, it is placed
into the data model at the same hierarchy level as the original data
model item.

Hierarchical Relationships
The relationship between parent, child, and sibling items is
shown below:

Parent Item 1

Child Item 1

Child Item 2

Child Item 3

Parent Item 2
The hierarchy also determines processing flow, child to sibling and
then back to parent.
Parent (3)
Child (1)
Sibling (2)
Refer to the Understanding Environments (Map Component Files)
section for a discussion of processing flow.
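The child-to-sibling-to-parent processing flow shown above amounts to a post-order walk of the item hierarchy. The sketch below is a conceptual Python analogy, not the product's implementation; the item names are hypothetical:

```python
# Conceptual sketch of the documented processing flow: children are
# processed first, then their siblings, and control returns to the
# parent last (a post-order traversal).
def process(item, order):
    for child in item.get("children", []):
        process(child, order)      # child, then each sibling in turn
    order.append(item["name"])     # parent is processed last

tree = {"name": "Parent",
        "children": [{"name": "Child"}, {"name": "Sibling"}]}
order = []
process(tree, order)
print(order)  # ['Child', 'Sibling', 'Parent']
```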

Note: AI allows only 50 levels in the Data Hierarchy.


Shifting Left and Right

To change a data model item hierarchy level


1. Select the data model item you want to modify.
2. To make an item a parent (top hierarchy level), a child (one
hierarchy lower than a parent data model item), or a sibling
(same level), choose one of the following options:

Toolbar Icon – Click the Change Level Right icon to


restructure the selected item’s hierarchical level to one of a
lower level, child, or sibling;
- Or -

Click the Change Level Left icon to restructure the


selected item’s hierarchical level to one of a higher level,
parent or sibling.

Including Files in Data Models
The Include… option allows you to attach Include files to your
data models. Include files contain rules that you can reference
from your data model so you can use them once or multiple times.
The Include file’s extension is “.inc”. The rules are in the form of
declare statements.

To access the Include… option
1. From the main menu, choose Tools. The Tools drop-down
menu appears.
2. Choose Includes. The Include Files dialog box appears.


The Include dialog box displays Available Files on the left side
and Included Files on the right. The Available Files are those
Include files that are available to this data model. The data
model cannot access the Available Files until they are linked to
the data model. To do this, move the filename from the
Available Files list into the Included Files list, then apply the
change and save the data model. The following table describes
the items found on the Include dialog box.

Item Description
Available Files This list box displays the filenames of the
Include files available to this data model.
Included Files This list box displays the filenames of the
Include files that will be or are linked to
the data model.
<< Choosing this button moves the filename
from the Included Files list to the Available
Files list.
>> Choosing this button moves the filename
from the Available Files list to the Included
Files list.
OK Saves the changes.
Cancel Exits the Include dialog box.


To link an Include file to a data model
1. From the Model Editor view, choose Tools. The Tools
drop-down menu appears.
2. Choose Includes. The Includes dialog box appears.
3. In the Available Files list box, highlight the filename of the
Include file to be linked to the data model.
4. Choose the >> button. The filename moves from the Available
Files list box to the Included Files list box.
5. To complete the entry, choose the OK button.

To unlink an Include file from a data model
1. From the Model Editor view, choose Tools. The Tools
drop-down menu appears.
2. Choose Includes. The Includes dialog box appears.
3. In the Included Files list box, highlight the filename of the
Include file to be de-linked from the data model.
4. Choose the << button. The filename moves from the Included
Files list box to the Available Files list box.
5. To complete the entry, choose the OK button.


To view an Include file
1. From the Navigator view, double-click the Include file to be
viewed.

2. To locate a specific item in the file, choose Find from the Edit
menu.

3. Close the Include Editor by choosing the Close icon.


The Include files contain INCLUDE rules that load PERFORM()
declare statements (or declarations). These declarations can be
used any number of times in the data model without having to
duplicate the code.
Information about adding declarations to a data model can be
found in the Inserting Declarations section. Refer to Appendix A.
Application Integrator Model Functions in Workbench User’s Guide-
Appendix for additional information on how to use the INCLUDE
data model keyword and the PERFORM() data model function.


Saving a Data Model
It is good modeling practice to save your model frequently during
development. It is recommended that you save your data models
and map component files to the models sub-directory.

To save a data model
1. Activate the Model Editor window of the data model to be
saved. (Click the title bar of the window to activate it.)
2. Save the data model in one of the following ways:
Menu – From the File menu, choose Save.

Toolbar Icon – Click the Save icon.


3. If you have already named your data model, the application
will save the work under the current name. If you have not
named the data model, a dialog box appears for you to enter a
path and name.

To save a data model under a new name
1. Activate the Model Editor window of the data model to be
saved with a new name. (Click the title bar of the window to
activate it.)
2. From the File menu, choose Save As.


3. In the Save As dialog box that appears, type the new name in the
box provided.

4. Choose the Open button to save to the new name and close the
dialog box.

To save all open files appearing in the work area (Save All)
Use this procedure to save all files that are open in the work area
using their current filenames. If you want to save any of the files
using a different filename, use the Save As procedure.
• From the File menu, choose Save All.

Hint: It is also possible to print out a data model definition


with or without rules using OTmdl.att. See Appendix D.
Application Integrator Utilities in Workbench User’s Guide-
Appendix for a complete description of this program.

Closing the Editor


To completely exit from the Model Editor
1. Activate the Model Editor window from which you want to
exit. (Click the title bar of the window to activate it.)
2. From the File menu, choose Close.

-Or-

Click the close icon in the Model Editor View.


If changes were made, but not saved or applied (rules), a
prompt appears asking if you want to apply and save changes.




Section 6. Building Rules into Data Models

This section describes how to use RuleBuilder to add processing


logic to your data models. This section also describes how to use
MapBuilder, the tool for automating data mapping, when source
and target data models have the same or nearly the same structure.


Overview of Rules Entry
Rules allow for the movement of data from the source to the target
data model. Rules can be placed on any type of data model item in
the data model (group, tag, container, or defining items) to
describe how data is referenced, assigned, and/or manipulated.
In the source data model (input side), the rules are normally placed
on the parent item (tag) to ensure the entire tag has been parsed in
and validated before any rules are executed and data mapping
occurs. In the target data model (output side), the rules are placed
on the defining items in order to specify the variables from which
values are to be mapped. (These variables were assigned values
via rules in the source data model.)

Modes for Processing Rules
There are three modes for processing rules, available for all data
model items within a data model. They are performed in the
following sequence:

Mode Description
PRESENT Rules are performed when entering rules
processing with a status of 0
(no errors).
ABSENT Rules are performed when entering rules
processing if one of the following statuses is found:
138-data model item not found
139-data model item no value found
140-no instance
171-no children found
These rules will also be performed when leaving
PRESENT mode with the same statuses.

Note: You cannot use an Absent rule in fixed


length data.


Mode Description
ERROR Rules are performed when entering rules
processing with any other status than the following:
0-okay
138-data model item not found
139-data model item no value found
140-no instance
171-no children found
These rules will also be performed when leaving
ABSENT mode processing with a non-zero status.

Note: An error typically occurs when an invalid


date, time, or numeric function is used.

Refer to the Understanding Environments (Map Component Files)
section for more details on rules processing.
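The mode selection described in the tables above can be sketched as a simple dispatch on status. This is a conceptual Python analogy using the documented status codes; the dispatch function itself is illustrative, not product code:

```python
# Statuses that route to ABSENT mode, per the tables above:
# 138 (item not found), 139 (no value found), 140 (no instance),
# 171 (no children found). Status 0 routes to PRESENT; anything
# else routes to ERROR.
ABSENT_STATUSES = {138, 139, 140, 171}

def rule_mode(status: int) -> str:
    if status == 0:
        return "PRESENT"
    if status in ABSENT_STATUSES:
        return "ABSENT"
    return "ERROR"

print(rule_mode(0), rule_mode(140), rule_mode(99))
```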

Types of Rule Conditions
Each rule consists of a condition with one or more actions. There
are five types of conditions: Null, Conditional Expression, IF, ELIF,
and ELSE.
A Null condition is always true and the actions will always be
performed. It is also referred to as No Condition.
With a Conditional Expression, IF, or ELIF, the condition must
evaluate to true before the actions are performed. Any data model
item can have one or more conditions, and each condition can have
one or more actions.
With ELSE, the actions are performed if the preceding IF condition
is not true.
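As a rough analogy, the five condition types behave like a chained conditional in a general-purpose language, with a Null condition acting unconditionally. The Python sketch below is illustrative only; the rule, field name, and thresholds are hypothetical:

```python
# Analogy only (not AI rule syntax): a Null condition always fires;
# IF/ELIF fire when their condition is true; ELSE fires when the
# preceding IF/ELIF did not.
def apply_rules(qty):
    actions = []
    actions.append("null-condition action")   # Null: always performed
    if qty > 100:                             # IF
        actions.append("large order")
    elif qty > 10:                            # ELIF
        actions.append("medium order")
    else:                                     # ELSE
        actions.append("small order")
    return actions

print(apply_rules(5))
```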


Variables
Variables are the links between the source and target data model
items. There are two types of variables supported by Application
Integrator™, as noted in the following table:

Variable Description
Variable This type of variable is a single value, also referred to
as a temporary variable. If more than one
assignment is made to the same variable name, the
last assigned value is the value that will be
referenced. A variable is useful for referencing the
same value multiple times, as a counter, or in a
concatenation.
Array This type of variable is a list of values. Manual
controls are recommended with this variable
whenever multiple levels in the data model are
mapped. These controls are used to ensure that the
proper data stays together, such as: detail records
with the proper header record or sub-detail records
with the proper detail records. There is a set index
and a reference index associated with the list of
values. The set index points to the last value placed
on the list and the reference index points to the next
value to be referenced from the list. The reference
index can be reset to the top of the list by using the
data model keyword RESET_VAL.
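The set index and reference index of an Array variable can be pictured with a small sketch. This Python class is a conceptual analogy of the behavior described above (including the effect of the RESET_VAL keyword), not the product's data structure:

```python
# Conceptual sketch of an Array variable: the set index tracks the
# last value placed on the list (here, implicitly, the end of the
# list); the reference index tracks the next value to be read; a
# reset (cf. the RESET_VAL data model keyword) returns the
# reference index to the top of the list.
class ArrayVariable:
    def __init__(self):
        self.values = []
        self.ref = 0              # reference index: next value to read

    def set(self, value):
        self.values.append(value) # set index advances with each value

    def reference(self):
        value = self.values[self.ref]
        self.ref += 1
        return value

    def reset(self):              # analogous to RESET_VAL
        self.ref = 0

arr = ArrayVariable()
arr.set("A"); arr.set("B")
print(arr.reference(), arr.reference())  # reads A, then B
arr.reset()
print(arr.reference())                   # back to the top: A
```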


Keywords and Functions
Application Integrator™ provides a library of keywords and
functions.
Keywords provide a means to alter the natural processing flow
within an item, among items in a data model, and within an
environment. For example, the keyword BREAK leaves the file
pointer unchanged and processing moves to the next sibling item
to process.
Application Integrator™ functions fulfill numerous tasks, including
manipulating data in the input/output stream, assigning and
referencing values in the Administration Database, and performing
date, time, and mathematical calculations.
Refer to Appendix A. Application Integrator Model Functions in
Workbench User’s Guide-Appendix for a complete reference to the
Application Integrator™ keywords and functions.


Two Methods for Creating Rules
There are two different methods for creating rules to map your
data:
RuleBuilder
MapBuilder
RuleBuilder allows you to create customized mapping rules. Using
RuleBuilder, you have access to the full functionality of the
Workbench rules system. Depending on the expertise of the
developer, rule definition can be done either in a free-format text
editor or through prompting via the RuleBuilder interface. When
using RuleBuilder, the order in which the rules appear is the order
in which they will be executed during a translation session. Child
data model items are acted on before parent data model items;
hence, the rules are re-ordered to match the order of execution.
RuleBuilder, along with the Built-ins view, provides a series of
tabbed pages or “tabs” which organize the components of rules
(conditions, data model items, functions, variables, and so on) into
categories. Using the mouse or keyboard shortcuts, you can
quickly build the data model logic. The RuleBuilder interface is
described in the “Using RuleBuilder” section.
MapBuilder is an automated way of applying rules on data model
items. MapBuilder uses a drag and drop feature to map from
source to target data model. The rules are placed on the defining
items only and are a NULL condition (that is, the actions will
always be performed). In the source data model, MapBuilder
creates a rule that assigns a data model item’s value to a variable.
In the target data model, MapBuilder creates a rule that references
the variable for its value and assigns it to the data model item.
MapBuilder is an efficient way to map from source to target data
models when the input and output stream are the same, or
extremely similar, in structure. Refer to the “Using MapBuilder”
section for details.


Using MapBuilder
MapBuilder automates the process of mapping data between
similar source and target data models. In one step, MapBuilder
creates the rules that assign the source data model items to
variables and makes the assignments from these variables to the
target data model items. MapBuilder allows you to drag and drop
data model item rules between source and target data models and,
in doing so, automatically creates the rules for both source and
target data models.
To access MapBuilder, open an environment in Map Editor.

Overview
MapBuilder allows you to drag and drop rules between data
models using predefined settings for Variable Type, Variable
Name, Link Type, Select Data Assignment Type, and Prompt with
Loop Control Warning Message. The following table shows the
predefined settings that are used when running MapBuilder.

Option Setting
Variable Type Array
Variable Name Both
Link Type Tag-To-Defining
Defining-To-Defining
Select Data Assignment Use DEFAULT_NULL() on Source EDI
Type Use STRTRIM() on Source non-EDI
Use NOT_NULL() on Target EDI


Refer to the Workbench Preferences section for a table that contains


an explanation of these settings.

Accessing the MapBuilder Function
To enable MapBuilder
1. Open the environment (Map Component File) that contains the
two models you wish to map.

Note: Refer to the “Loop Control” section for specific information
on how to use the loop control function.

Note: First, map Defining data model items, and then perform
loop control procedures. The loop control rules are inserted
at the beginning of PRESENT/ABSENT mode. These rules
must be executed before performing a data assignment, to
maintain the integrity of all mappings.


Setting MapBuilder Preferences
MapBuilder Preferences allows you to customize the settings for
Variable Type, Variable Name, Link Type, Select Data Assignment
Type, and Prompt with Loop Control Warning Message. See the
Workbench Preferences section for more details on MapBuilder
Preferences.

During the building of the rules, you determine whether the


variable type should be a Variable or Array variable. For a
complete discussion of these types of data structures and how
Workbench uses them, refer to the “Variables” section earlier in
this section.
You also have the option to establish the rules on either a source
Tag item or the individual source Defining items during the
mapping process (rules are always placed on Defining items on the
target side). Establishing the rules on the Tag item of the source
data model is the usual Application Integrator™ method,
providing a means to parse and check the complete tag before
mapping individual defining items to variables.

Drag and Drop
In most cases, rules are placed in PRESENT mode as a null
condition (that is, the actions are always performed). In the source
data model, MapBuilder creates a rule that assigns a data model
item’s value to a variable. In the target data model, MapBuilder
creates a rule that references the variable for its value and assigns it
to the data model item. Here are some hints to help your
modeling session:
• You should map Defining items first, and then continue
with Group, Tag, and Container items. This is because the
loop control feature places rules at the beginning of
PRESENT and ABSENT mode, and the loop control rules
must be processed before any data assignments are made.
• You can view the rules created by MapBuilder by opening
the models in Model Editor. This displays the data model
rules you mapped so you can identify the MapBuilder and
loop control rules.


To map data between source and target models

Note: MapBuilder will operate in either direction, source to


target or target to source.

1. From the Navigator view, open the Map Component File to


map.
2. To create rules using MapBuilder, select the desired data model
item from the source data model by holding down the left
mouse button and dragging the data model item to the desired
target data model item.
Place the pointer on the desired target data model item’s label
or name and release the left mouse button.

Note: To use the drag and drop feature, click the data model
item to highlight it. Move the mouse pointer to the new
location and release the mouse button.

Caution: Be sure the item to be dragged and dropped is


highlighted or has focus. If not, the item previously
highlighted will be used.


3. The Variable Name is automatically created in the rule based on


the settings in MapBuilder Preferences.

Note: You can also drag and drop from the target to the
source defining items, however, MapBuilder always creates
the rules from source to target, maintaining the name of the
variable as “<source defining item name>_<target defining
item name>.”

4. As data model items are mapped using MapBuilder, they will


appear in the RuleBuilder workspace under the PRESENT
mode for the selected Defining items or Tag item on the source
data model and the selected defining items on the target data
model. The following figure shows a RuleBuilder example for
the source data model:


The figure below shows a RuleBuilder example for the target


data model.


In the previous examples, notice that in the source model the
value in PhoneNumber is assigned to a temporary variable, and
that in the target model the value of the temporary variable is
referenced and assigned to PhoneNumber. For more information
on the RuleBuilder workspace, refer to the “Using RuleBuilder”
section later in this section.

Note: MapBuilder processing messages appear in the Status


Bar of the data model window.

5. After you have completed mapping between the source and


target data models, save both data models.

Note: The MapBuilder rules are immediately applied to the


layout. There is no need to apply the rules. However, rules
are not permanently saved to disk until the data model is
saved.

AI Derived Links Feature
If a map is developed outside Workbench, you will not be able to
see the mapping links in Workbench, even if there are mapping
relations between fields. The AI Derived Links feature allows you
to see the links for those maps that are developed outside of
Workbench.

Note: Preferences should first be set in the Derived Links


Feature Preference Page.

1. Open an .att or .mdl file in Workbench that has been developed
outside Workbench.


2. Click on the Derive Links icon on the tool bar. The following
dialog comes up:

The Derive Link feature is activated. It goes through all the rules,
derives the mapping relationships and draws the links.


Note: Invocation/usage of this feature will NOT modify the


files (source or target).


Mapping Details
To see the mapping details, right-click the mapping link and click
Mapping Details, as shown in the figure below.
A dialog box containing the Mapping Details appears.


Loop Control
The loop control feature provides the code to create processing
loops when one of the data model items is a Group, Tag, or
Container. Loop control ensures that detail records are kept
together with the proper header record and that sub-detail records
are kept together with the appropriate detail records.
Loop control automates the process of mapping complex data
structures that repeat. Loop control automatically adds PRESENT
mode rules, ABSENT mode rules, or group data model items to
both the source and target models. These rules or Group items
contain Array variable assignments. Control of the array variable
automatically occurs during the MapBuilder loop control process.
Normally you would not apply loop control from Defining to
Defining. However, in the rare case when this is necessary, first
enable the “Enable Loop Control when mapping Defining to
Defining” option on the MapBuilder Preferences dialog box.
Here are some points to remember when applying loop control:
• Loop control is needed on items where the maximum
occurrence is greater than the minimum.
• When indicating the target occurrences for loop control, be
sure the maximum is 1 greater than the maximum intended.
The last loop goes into the loop control rules to break out of
the looping process. Loop control automatically checks for
and corrects these situations.
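What loop control guards can be pictured with a small sketch: detail records must stay with their own header as repeating structures are processed. The Python below is a conceptual analogy only; the record layout and values are hypothetical:

```python
# Conceptual sketch: per-header grouping keeps detail records with
# the proper header record (the situation loop control preserves
# when mapping repeating structures).
source = [
    ("HDR", "PO-1"), ("DTL", "item-a"), ("DTL", "item-b"),
    ("HDR", "PO-2"), ("DTL", "item-c"),
]

groups = []
for kind, value in source:
    if kind == "HDR":
        groups.append({"header": value, "details": []})
    else:
        groups[-1]["details"].append(value)  # detail joins current header

print(groups)
```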

To add loop control to your data model


1. Open the source and target data model in Map Editor by
opening the map component file.
2. Select the desired Tag or Group data model item from the
source/target model by holding down the left mouse button
and dragging the data model item to the desired Tag or Group
target/source data model item. Release the left mouse button.
3. Save the change or cancel.
Loop Control mode for Stylesheets
1. Open the map component file containing XSL file(s).


2. In the Preferences, check the Enable Loop Control when
mapping Defining To Defining option under the MapBuilder
Preference Page.
3. Drag and Drop: Select the desired Tag or Group data
model item from the source/target model by holding down
the left mouse button and dragging the data model item to
the desired Tag or Group target/source data model item.
Release the left mouse button.
4. Go to the Link menu and check the Loop Control Mode menu
item. This enables Loop Control Mode, and the DMIs that
support loop control are highlighted in blue.
5. Map the highlighted DMIs. The loop control rules are now
written on the DMIs.

Note: Unless steps 3 through 5 are followed, loop control rules
will NOT be written for stylesheets. Unlike stylesheets, traditional
models have loop control rules applied automatically.


Troubleshooting
There are several warning or error messages that can appear when
mapping using loop control.
Illegal map messages appear in a dialog box during the mapping
session. When a message such as this appears, the rules are not
updated. For example, if you try to map a source item to another
source item, the following message appears in the status bar:

You cannot map or apply loop control to the topmost Group item
because it is a parent to all other Group items and their children. If
the source item does not have a parent, the following error message
appears.

If you attempt to perform loop control on the same items more than
once, the following error message appears.


Manual Loop Control
The Manual Loop Control dialog box appears when MapBuilder
finds items that do not have loop control applied and the
“Prompt with Loop Control warning message” check box is
selected. When you drag and drop an item onto another,
MapBuilder checks all the items appearing above and, if it finds an
item whose maximum is greater than the minimum and that is not
the topmost item, it displays the Manual Loop Control dialog box.
You can drag and drop loop control on any type of item: Group,
Tag, Container, or Defining. However, you must have the “Enable
Loop Control when mapping Defining to Defining” check box
selected to enable loop control on Defining items.
In this example, the two looping items are caught because they
have maximum occurrences greater than 1 and they are not the
topmost item. Remember, the system checks for items needing
loop control above the items on which mapping occurred.

To apply manual loop control


1. In the Manual Loop Control dialog box, highlight the item to
which you intend to apply loop control.
2. Choose the OK button. The source and target maps appear with
the layout items highlighted.
3. Apply loop control as indicated in the “Loop Control” section.


Using RuleBuilder
RuleBuilder is accessed from the Layout Editor window. The
RuleBuilder window displays rules in the order in which they are
executed during a translation session.

RuleBuilder Window
RuleBuilder provides an interface for quickly defining null and
conditional expressions. For more information on the toolbar and
menu items in RuleBuilder, see the Overview of the Model Editor
section.

Accessing RuleBuilder
To access RuleBuilder, open the data model that needs rules
added/modified in Model Editor.


Adding Rules
Once you open the Model Editor, you are ready to add or modify
the rules of the data model.
The same methods are used to insert PRESENT, ABSENT, or
ERROR mode rules.

Inserting a Null condition


1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
2. To insert a Null condition [ ] for this data model item, use one of
the following methods:

Toolbar Icon-Click the Null Condition icon.


Keyboard-Place the cursor in the RuleBuilder work area,
press [space] (left bracket space right bracket).
3. Insert the appropriate statements for the rule by either typing
them directly into the RuleBuilder workspace or by following
the procedure for inserting any RuleBuilder Tab option.
4. Apply your changes to the RuleBuilder workspace. Changes to
the RuleBuilder workspace are not complete until they are
applied. Do the following to apply rules:

Toolbar Icon-Click the Apply icon.

Caution: Rules are not “saved” in RuleBuilder until they are


applied. Rules, when applied, are updated to the Layout
memory area, however, rules are not permanently saved to
disk until the data model is saved.
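
Applied to a data model item, a null condition looks like the
following sketch (adapted from the syntax-checking examples later
in this section; the item and variable names are illustrative):

    DMI { AlphaNumericFld @5 .. 5 none
    []
        VAR->Tmp = DMI
    }*1 .. 1

The empty brackets mean the assignment that follows executes
unconditionally whenever the item is processed.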


Inserting a Conditional
Expression
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
2. To insert a conditional expression for this data model item, use
one of the following methods:
Toolbar Icon-Click the Condition icon.
Keyboard-Place the cursor in the RuleBuilder work area and
type [=] (left bracket, equal sign, right bracket).
3. Insert the appropriate statements for your rule by either typing
them directly into the workspace or following the procedure
for inserting a RuleBuilder tab option.
4. Apply your changes to the RuleBuilder workspace. Changes to
the RuleBuilder workspace are not complete until they are
applied, using the following method:

Toolbar Icon-Click the Apply icon.

Caution: Rules are not “saved” in RuleBuilder until they are


applied. Rules, when applied, are updated to the Layout
memory area, however, rules are not permanently saved to
disk until the data model is saved.
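
A conditional expression places a test inside the brackets so that
the statements after it execute only when the test is true. A sketch,
using illustrative variable names and the '<' operator shown later
in this section:

    [VAR->A < VAR->B]
        ARRAY->Output = "LOW"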


Inserting Literals
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
2. Use one of the following methods to insert a literal:

Toolbar Icon-Click the Literal icon.


Keyboard-Type "Literal" (enclose the text of the literal
between double quotation marks).
3. Type the text to be interpreted literally.
4. Apply your changes to the RuleBuilder workspace. Changes to
the RuleBuilder workspace are not complete until they are
applied, using the following method:

Toolbar Icon-Click the Apply icon.
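
A literal is quoted text used as a fixed value in a rule. For example
(the output array name is illustrative; the quotation marks mark
"BCH" as a literal):

    []
    ARRAY->Output = "BCH"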


Inserting an Assignment
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).

2. To insert an Assignment for this data model item, use one


of the following methods:

Toolbar Icon-Click the Insert Assignment icon.


Keyboard-Press = (equal sign).
3. Insert the appropriate statements for the rule by either typing
them directly into the RuleBuilder workspace or by following
the procedure for inserting a Rule Notebook option.
4. Apply your changes to the RuleBuilder workspace. Changes to
the RuleBuilder workspace are not complete until they are
applied, using the following method:

Toolbar Icon-Click the Apply icon.

Caution: Rules are not “saved” in RuleBuilder until they are


applied. Rules, when applied, are updated to the Layout
memory area, however, rules are not permanently saved to
disk until the data model is saved.
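
An assignment uses the equal sign to copy a value into a variable
or data model item. The following sketch, modeled on the
STRCAT() examples later in this section, assigns the concatenation
of two variables to a third:

    []
    VAR->Tmp = STRCAT(VAR->A, VAR->B)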


Inserting Comments
Comments can be inserted into a data model to describe the
process being modeled, to identify modifications to models, or to
explain rules. Comments can be placed on individual lines or
immediately following a rule.
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
comment.
2. To insert a comment on its own line:
a. Place the cursor in the first position of an empty line.
b. Type a semicolon character (;). This indicates to RuleBuilder
that a comment follows and that all text appearing after the
semicolon should be ignored.
c. Type the comment immediately after the semicolon. You
can enter any character into the comment except the
following special characters: {, }, @, *, and |.
d. At the end of the comment, use the Enter key to type a
Return. This indicates to RuleBuilder that the comment is
ended.

3. To insert a comment immediately following a rule:


a. Insert the rule according to the appropriate procedure.
b. When you reach the end of the rule, type a space followed
by the semicolon character (;). This indicates to
RuleBuilder that a comment follows and that all text
appearing after the semicolon should be ignored.


c. Type the comment immediately after the semicolon. You


can enter any character into the comment except the
following special characters: {, }, @, *, and |.
d. At the end of the comment, use the Enter key to type a
Return. This indicates to RuleBuilder that the comment is
ended.

4. Apply your changes to the RuleBuilder workspace. Changes to


the RuleBuilder workspace are not complete until they are
applied, using the following method:

Toolbar Icon-Click the Apply icon.

Caution: Rules are not recorded in RuleBuilder until they are


applied. Rules, when applied, are updated to the Layout
memory area, however, rules are not permanently saved to
disk until the data model is saved.
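
Both comment styles can be combined in a single rule, as in this
sketch (the names are illustrative):

    ; Save the field value for later use in the target model
    []
    VAR->Tmp = DMI ; copied before the next item is processed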


Inserting Data Model Items
Data model items are often used in rules to assign a value to a
variable or assign a variable to a data model item.
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
2. To insert a DMI into a rule, drag the DMI name from the
Message Variables view, to the RuleBuilder workspace.

You can also right-click in the RuleBuilder workspace and choose
Insert here->DMI, then select the appropriate DMI.


To insert an available DMI, click the DMI icon. A dialog lists the
DMIs that are reachable (that is, in scope).

Select the DMI and click OK. The DMI is inserted into the
RuleBuilder area.


Inserting Operators
Conditional rules use operators for testing one value against another.
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.

2. To insert an operator into a rule, first insert a null condition.

For example, assign values to variables A and B as shown
above. Finish the rule by dragging an operator (in this case the
'<' operator) from the Built-ins view and set a condition
IF[VAR->A<VAR->B].


Inserting Functions and Keywords
The Built-ins tabs allow you to easily add functions and keywords
to your set of rules.

1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
2. To insert a function or keyword for this data model item, drag
the function or keyword from the appropriate tab to the
RuleBuilder workspace.

If "Use Function Template" is enabled in the Views preference
page and you drag and drop a function, a function template
dialog appears.

Check the Literal checkbox if the entered value is a literal and
must be placed within double quotation marks in the
RuleBuilder. Click OK.
If you prefer to enter the argument values in the RuleBuilder
instead, click Use RuleBuilder and follow the steps listed below
to continue.

Note: For an XSL function, all the parameters provided are
always considered literals, so the Literal checkbox is disabled.

3. Insert the appropriate statements for the rule by either typing
them directly into the RuleBuilder workspace or by using the
Find Next Parameter icon. See the Finding the Next Parameter
section.
4. Apply your changes to the RuleBuilder workspace. Changes to
the RuleBuilder workspace are not complete until they are
applied, using the following method:

Toolbar Icon-Click the Apply icon.

Inserting Variables


1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
2. To insert a temporary variable (VAR) into a rule, drag the
variable name from the Message Variables view to the
RuleBuilder workspace.
You can also right-click in the RuleBuilder workspace and
choose Insert here->Variable, then select the appropriate
variable.

Inserting Arrays
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
2. To insert an array variable (ARRAY) into a rule, drag the
variable name from the Message Variables view to the
RuleBuilder workspace.
You can also right-click in the RuleBuilder workspace and
choose Insert here->Array, then select the appropriate array.
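
Array variables are referenced with the ARRAY-> prefix, just as
temporary variables use VAR->. The following sketch, adapted
from the include-file examples later in this section, clears an array
before a function fills it:

    []
    CLEAR_VAL ARRAY->Tmp
    VAR->Tmp = DMI_INFO(&DMI_A, &ARRAY->Tmp)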

Inserting Substitutions
Substitutions are values stored in the profile database.


1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
2. To insert a substitution into a rule, type $ followed by the
substitution name from the profile database. For example:
VAR->A = $X12MsgInProdName

Inserting Declarations
Declarations are PERFORM statements that contain rules that can
be called from within a data model.
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
2. To insert a declaration into a rule, drag the declaration from the
Performs view into the RuleBuilder area.
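
A declaration and the PERFORM() that calls it look like the
following sketch, adapted from the syntax-checking examples later
in this section (the declaration "Ex1" resides in an include file
named in the DECLARATIONS block):

    DECLARATIONS {
        INCLUDE "Example.inc"
    }
    DMI_A { AlphaNumericFld @5 .. 5 none
    []
        PERFORM("Ex1", &DMI_A)
    }*1 .. 1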


Inserting IF, ELIF, ELSE
Along with null conditions and conditional expressions,
Application Integrator™ offers the IF, ELIF, and ELSE rules.
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
2. To insert an IF, ELIF, or ELSE for this data model item,
select the appropriate IF, ELIF, or ELSE icon.
3. Insert the appropriate statements for the rule by either typing
them directly into the RuleBuilder workspace or by using the
tabs, functions, and so on as explained earlier in this section.
4. Apply your changes to the RuleBuilder workspace. Changes to
the RuleBuilder workspace are not complete until they are
applied, using the following method:

Toolbar Icon-Click the Apply icon.

Inserting a Carriage Return
To make the rules easier to read in the RuleBuilder workspace, use
a carriage return after each rule.


1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
2. Insert the appropriate statements for the rule by either typing
them directly into the RuleBuilder workspace or by using the
tabs, functions, and so on as explained earlier in this section.

3. At the end of the rule, select the Carriage Return icon.


4. Apply your changes to the RuleBuilder workspace. Changes to
the RuleBuilder workspace are not complete until they are
applied, using the following method:

Toolbar Icon-Click the Apply icon.


Cutting, Copying, and Pasting Rules
Cut, Copy, and Paste Clipboard functions can be performed on
rules on individual data model items for any of the modes: Present,
Absent, or Error. Cut or Copy assigns the selected information to
the Clipboard. Only one mode at a time can be copied or cut and
placed on the Clipboard.
Paste takes the information from the Clipboard to the location you
specify in the data model rules.

To cut text from the RuleBuilder workspace
1. Highlight the text to cut in the RuleBuilder workspace.
2. Use one of the following methods to cut the text:

Toolbar Icon-Click the Cut icon.

Right-click and choose Cut.

The text is assigned to the Clipboard until something else is
assigned, which replaces it.

To copy text from the RuleBuilder workspace
1. Highlight the text to copy in the RuleBuilder workspace.
2. Use one of the following methods to copy the text:

Toolbar Icon-Click the Copy icon.

Right-click and choose Copy.


The text is assigned to the Clipboard until something else is
assigned, which replaces it.

To paste text from the Clipboard into the RuleBuilder workspace
1. Move the insertion pointer to the place to paste in the
RuleBuilder workspace.
2. Use one of the following methods to paste the text:
Toolbar Icon-Click the Paste icon.
Right-click and choose Paste.
Until you make another copy or cut, this text remains on the
Clipboard, allowing you to paste several copies of the current
text.
To view keyboard shortcuts in the RuleBuilder workspace
To invoke the list of keyboard shortcuts:
1. Right-click and choose Keyboard Shortcuts (Ctrl+/) in the
RuleBuilder,
OR
2. Press Ctrl+/. The list displays the operations that are
supported by keyboard shortcut keys.


Finding the Next Parameter
The system makes it easy for you to enter the parameters to
functions, conditions, and keywords by prompting you for the next
required parameter. Individual parameters of a parameter list can
be selected by repeatedly choosing Find Next Parameter.

To find the next parameter
1. Place the cursor at the position after which you want the
system to begin the parameter search.
2. Use one of the following methods to issue the command:

Toolbar Icon-Click the Next Parameter icon.

Menu-From the RuleBuilder menu (if you opened the file
with the map editor), choose Find Next Parameter.

3. The next parameter is selected for you to enter the proper value.
Continue clicking on Next Parameter until you finish entering
values for all the parameters.

Syntax Checking of Rules
Workbench provides a utility for checking the syntax of the rules
during rule entry.


To Check the Syntax
1. Use the following method to call the rule checking utility:

Toolbar Icon-Click the Check Syntax icon.


2. If any errors are found, a Check Syntax dialog box is displayed
explaining the location and reason for the error.


Syntax Error Checking
Syntax checking catches the first syntax error on each data model
item. A second or subsequent error will not be listed in the Check
Syntax dialog box until the first is corrected.
The following types of errors are checked in the rules during
syntax checking or when applying the rules (using the Apply
command):
1. Invalid constructed variable, for example:
VaR-> (lowercase instead of uppercase 'A')
Array- (missing '>' and lowercase 'rray')
2. Invalid (label) or undeclared data model item
Checks spelling
Checks character case (for example, ‘a’ vs. ‘A’)
3. Forgetting to define the condition before the action ([ ])
4. Incorrect number of parentheses ( ‘)’ ) or quotation marks ( ‘ “ ’ )
Checks for too many
Checks for not enough
5. Function expecting an identifier (variable or data model item)
for the parameter, for example:
DM_READ (“DM_X”, “Y”, 0, 1, $GET_GCOUNT(1)) where
GET_GCOUNT( ) function is not an identifier.

Note: Errors are checked and listed in the sequence of the


data model items in the Layout Editor window, not in the
order of parsing the rules.


Parsing for Syntax Checking
Workbench catches errors when it parses the model or map
component file and also catches errors before it saves.

Command Line Syntax Checking — otrun.exe and inittrans


The following items are checked:
All arguments are checked to verify they are valid defined
options or strings associated with the option. An example of
an option with its associated string is “-at OTRecogn.att”.
Options are case sensitive. Options that expect an argument
must contain an argument and not another option. If no
argument is passed on the command line, a segmentation
fault is returned.

Valid Example:
otrun.exe -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -I
Invalid Examples:
otrun -at -cs dv -DINPUT_FILE=OTIn.Flt -I
(missing string for -at argument)
otrun -aa OTRecogn.att -cs dv -dINPUT_FILE -I
(-aa spelling and -d case sensitivity errors)
otrun.exe -at OTRecogn.att OTEnvelp.att -cs dv -DINPUT_FILE=OTIn.Flt -I
(two strings for -at)
Requires that one of the following is an argument: -at (map
component file), -s (source data model), or -t (target data
model).
If the option does not require an argument, the presence of
an argument is not checked.
Checks for closing quotation marks when opening quotation
marks are present. Also checks for the presence of spaces in
a string when the string is not enclosed in quotation marks.

Valid Example:
otrun.exe -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA='Bob Smith' -I


Invalid Examples:
otrun -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA=Bob Smith -I
(There is a space between Bob and Smith; the string should be in quotation marks.)
otrun -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA='Bob Smith -I
(Missing closing quotation mark.)

Translator and Workbench Syntax Checking, for the following
types of files: .mdl, .att, .acc, .inc
Checks for proper use of characters, such as closing or
balanced use of parentheses '()', brackets '[]', and braces '{}',
single quotation marks (''), double quotation marks (""), and
the use of commas (,) where required.

Valid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A VAR->B
*1 .. 1
(Missing ",", ")", "}" characters)
Checks for the item type and that all components are
present. For example, Definings require the following
syntax: label, open brace, access item label, '@' sign,
minimum, .., maximum, optional format, verify list ID,
closing brace, *, minimum occurrence, .., maximum
occurrence.

Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI { alphanumericfld @5 .. 5
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}1 .. 1
(Missing verify list ID and "*" for occurrence.)

Checks for valid in-scope use of data model item labels.

Valid Example:
Group {
DMI_A { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpA = DMI_A
}*1 .. 1
DMI_B { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpB = DMI_B
}*1 .. 1
}*1 .. 1
Invalid Example:
Group {
DMI_A { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpB = DMI_B
}*1 .. 1
DMI_B { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpA = DMI_A
}*1 .. 1
}*1 .. 1
(DMI_B is being referenced out of scope – before it comes
into existence)
Checks that data model item labels are not referenced in
include files

Valid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", &DMI_A)
}*1 .. 1


include file 'Example.inc':

DECLARE Ex1(&defining) {
[]
CLEAR_VAL ARRAY->Tmp
VAR->Tmp = DMI_INFO(&defining, &ARRAY->Tmp)
}
Invalid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1")
}*1 .. 1

include file 'Example.inc':

DECLARE Ex1() {
[]
CLEAR_VAL ARRAY->Tmp
VAR->Tmp = DMI_INFO(&DMI_A, &ARRAY->Tmp)
}
(Attempted to reference a data model item label within an
include file.)
Checks for references to an undefined data model item label.

Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
}*1 .. 1
Invalid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = Dmi
*1 .. 1
(Dmi is not a defined data model item label)


Rule Execution Syntax Checking
Workbench catches errors when it executes rules in the translator
and at runtime.

Translator Only Syntax Checking


Correct number of arguments for those functions that
contain a fixed number of arguments.

Valid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B, VAR->C)
*1 .. 1
(STRCAT() only has two arguments, not three.)

Translator Runtime Syntax Checking


Proper use of ampersand (&) when required and not
required in a function.

Valid Example:
DMI {
[]
VAR->Pos = GET_FILEPOS(&DMI)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Pos = GET_FILEPOS(DMI)
VAR->Tmp = STRCAT(&VAR->A, &VAR->B)
*1 .. 1
(GET_FILEPOS() requires '&', STRCAT() does not.)


Consistent use of ampersand (&) with arguments between


the PERFORM() and its declaration in the include file.

Valid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", &DMI_A, VAR->Tmp)
}*1 .. 1

include file 'Example.inc':

DECLARE Ex1(&defining, temporary) {
[]
VAR->Tmp = STRCAT(defining, temporary)
}
Invalid Example:
DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", DMI_A, &VAR->Tmp)
}*1 .. 1

include file 'Example.inc':

DECLARE Ex1(&defining, temporary) {
[]
VAR->Tmp = STRCAT(defining, temporary)
}
(Ampersand character is not used consistently between the
PERFORM and the DECLARE.)
Argument type is checked.

Valid Example:
DMI {
[]
VAR->Tmp = STRSUBS(VAR->Tmp, 2, 4)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRSUBS("ABCDEF", 2, 4)
}*1 .. 1
(The string in STRSUBS() cannot be a string literal.)
A valid defined function is either an internal Application
Integrator™ function or a User Exit Extension function.

Valid Example:
DMI {
[]
VAR->Tmp = STRSUBS(VAR->Tmp, 2, 4)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRSUBSX(VAR->Tmp 2, 4)
}*1 .. 1
(The function STRSUBSX() is not an Application Integrator™
function or User Exit Extension function.)

Syntax Checking That Does Not Occur
The following are not verified during syntax checking.
Labels are not checked for consistent use of upper- and
lowercase letters throughout the data model.
Reference to a variable's value before it was set with a value
is not checked.
Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Temp = STRCAT(VAR->Tmp, DMI)
}*1 .. 1
Invalid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Temp = STRCAT(VAR->TMP, DMI)
*1 .. 1


(Does not catch that VAR->TMP was not previously assigned.)


Assigning a value from a function that does not return a
value is not checked.

Valid Example:
DMI {
[]
CLOSE_INPUT()
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = CLOSE_INPUT()
*1 .. 1
(CLOSE_INPUT() does not return a value. The translator
attempts to obtain a value off the stack, which can cause a stack
underflow error if no values are on the stack.)
User entry (outside of Workbench) for the proper sequence
of rule modes (PRESENT, ABSENT, then ERROR) is not
checked.

Valid Example:
DMI {
[]
CLOSE_INPUT()
:ABSENT
[]
VAR->Error = ERRCODE()
:ERROR
[]
VAR->Error = ERRCODE()
}*1 .. 1
Invalid Example:
DMI {
[]
CLOSE_INPUT()
:ERROR
[]
VAR->Error = ERRCODE()
:ABSENT
[]
VAR->Error = ERRCODE()
}*1 .. 1


(ABSENT and ERROR are in the wrong sequence.)


Correct number of arguments for those functions that
contain a variable number of arguments is not checked. (A
model can be verified for correct argument count using
OTCheck.bat on Windows® or OTCheck.sh on Unix and
Linux.)

Valid Example:
DMI {
[]
VAR->Tmp = STRCATM(2, VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCATM(2, VAR->A, VAR->B, VAR->C)
*1 .. 1
(Using three arguments, but telling the function that it is using
only two.)

Function Checking
Function checking is performed on functions that expect data
model item addresses. If the wrong type of information is passed
to functions, an error is reported at runtime. This could happen if
the ampersand (&) character did not appear before the variable.
Valid Example:
DMI_INFO(&DMI, &ARRAY)
Invalid Example:
DMI_INFO(DMI, &ARRAY)
Error code 144 is returned in cases where the data model item has a
value assigned to it before the function is executed. If there is no
value assigned to the variable at runtime and the ampersand is
missing, the translator tries to evaluate the variable. It returns error
code 139 when it is unable to evaluate it.




Section 7. Compare and Reports


Comparing Two Model Files
The Compare feature allows the content of two files to be displayed
and the differences between the files indicated by text of different
colors, depending upon the type of difference.
This is the color key for items appearing in the Compare dialog
box.
Item: The data model item name is in one data model and not the
other
Color: Orange
Example: SE_TEST104

Item: The data model item name is the same in the two models, but
the attributes between the two items are different
Color: Magenta
Example:
[]
ARRAY->Output="BCH"
VAR->SECnt = VAR->SECnt + 1
ARRAY->Output = BCH_01

Item: A data model item name is the same in the two models and
the two items' attributes are identical
Color: Black
Example: EXIT 502

Item: The data model item name is the same in the two models, but
the attributes between the two items are different and one of the
data model items that is different is highlighted
Color: Magenta text with Navy Blue highlight
Example: CTT


When an item is selected from one of the data models, all of the
attributes and rules for that data model item are displayed in the
DMI Attributes portion of the Compare dialog box. If an item is
found in both models, any difference in their attributes is
highlighted.

To Compare Two Files
1. From the Utility menu, choose Compare Model Files. The
Compare dialog opens.

2. At the Compare File value entry box, type the first filename to
be compared, or use the Browse button to access the Open
(file chooser) dialog box.
3. At the To File value entry box, type the second filename to be
compared. You can use the Browse button to access the Open
(file chooser) dialog box.
4. If the direction/mode or the access file is not specified, a
dialog prompts you to specify it. Choose the OK button
to begin the file compare function.


5. The two files appear in the Compare View. The first file
appears on the left and the second file appears on the right.

6. To display rules in the lower text panes, position the cursor at


the data model item containing the rules to be displayed.
7. Choose a data model item. The rules will appear in the lower
text pane.

Context Menu on the Compare View
The Compare view displays the data model items and rules for two
compared data models or standard models. The system highlights
the differences between the two files. The function has several
options available to aid in locating the DMIs in the displayed files.

Next Difference
This moves the selection to the next DMI whose contents differ
from those of its counterpart.

To Find Items
Use this procedure to locate a data model item by its label.

1. Position the cursor in the top half of the Compare view, and
click the cursor in the pane in which the Find should take place.
2. Use the context menu (right-click) and select the Find option.
3. At the Find What value entry box, type the text for which you
want to search.

4. Choose the Next button. The system locates the first occurrence
of the text string. If the text string is not found, the message,
"Pattern Not Found" appears in the Find dialog box.
5. To narrow the search, use one of these options.
Match Case - This option looks for text with the same
capitalization as the text entered in the Find What value entry
box.
Match Whole Word - This option looks for the entire character
string entered in the Find What value entry box, not parts of
words.

To Go To a Data Model Item
Use this procedure to find a specific data model item.

1. From the context menu (right-click), choose GoTo.

2. At the Select Data Model Item value entry box, indicate the data
model item name to search using one of these methods –
• Type the name of the data model item.
• Choose the arrow and select the data model item name
from the list box.
3. Choose the OK button.


Report Generation

Note (common to both the reports mentioned below):

For report generation, Workbench must be connected to the
Control Server.

When Workbench is connected to the local Control Server,
report generation is available for local models only.

When Workbench is connected to the remote Control Server,
report generation is available for remote models only, which
must be present under the remote <server install directory>
(they may be under any path with <server install directory> as
the root directory).


Data Model Listing Report


The Data Model Listing report shows the data model and offers the
option of printing the data model with or without rules. The report
can be displayed on the screen, printed, or sent to a file.

To access the Data Model Report dialog box

1. From the Utility menu, choose Report.


2. From the Report menu, choose Data Model Listing. The
Data Model Listing dialog box appears.

To Run the Data Model Listing


1. Access the Data Model Listing dialog box (as mentioned
above).
2. Specify the Data Model (using “Browse” button), for which
Data Model Listing is needed.


3. At the Rules group box, choose whether to print the rules


associated with the data model items.
4. In the Output Method area, choose the output method and
enter any required information:
• Display Report: Outputs the report to the screen.

• Print Report: Sends the report to the default printer that


is set up with Print Setup.
• File Report: Saves the report to a file. Select the "Browse"
button (next to the "File Report" option) to enter the
filename to which the report is to be saved. The report will
be saved with the ".rpt" extension.
5. Once all the information is entered, choose the OK button to
generate the report;
– or –
Choose the Cancel button to return to Workbench.


Source to Target Map Report


The Source to Target Map Listing dialog box is used to create a
report that shows the source data model item labels, the associated
variable labels, and the target data model item labels. The report
can be displayed on the screen, printed, or sent to a file.

To access the Source to Target Map Listing report

1. From the Utility menu, choose Report.


2. From the Report menu, choose Source to Target Map
Listing. The Source to Target Map Listing Options dialog
appears.

To run the Source to Target Map Listing

1. Access the Source to Target Map Listing dialog (as


mentioned above).
2. Specify the Source Data Model (using the "Browse" button).
Specify the Target Data Model (using the "Browse" button).


3. At the Report Format group box, choose the format of the
report. You may choose as many check boxes as necessary.

Source Not Mapped - Lists the data model items that appear
in the source model that do not get mapped to a variable or
to a target data model item.

Source Indirect - Lists the data model items that get mapped
to a variable in the source model but do not get carried over
to the target model.

Source To Target Direct - Lists the data model items that get
mapped to a variable then mapped to a target data model
item. The title on the report indicates 'DIRECT'.

Target Indirect - Lists the data model items in the target
model that get assignments from variables, but the variables
do not appear in the source model.

Target Not Mapped - Lists the data model items that appear
in the target model that have not been mapped from a
variable or source data model item.

4. At the Label Sequence group box, choose the sequence in which the report should appear. The option chosen will appear in the first column of the report.
5. At the Output Method group box, choose the output method
and enter any required information.
• Display Report: Displays the report on the screen.


• Print Report: Sends the report to the default printer that is set up using Print Setup.
• File Report: Saves the report to a file. Click the Browse button (next to the File Report option) to enter the filename to which the report is saved. The report is saved with a “.rpt” extension.
6. Once all the information is entered, choose the OK button to
generate the report;
– or –
Choose the Cancel button to return to Workbench.


Section 8. AI Version Validator


Using this feature, from one instance of AI Workbench 5.2.7.8 you can validate and check the syntax of maps (.att and .mdl files) of different AI versions. Currently, maps of versions 4.0, 4.1, 5.0, and 5.2 can be validated and syntax checked using this feature.
The feature allows you to use only the functions that are valid for the configured AI version. In other words, if you are working on a 4.1 map, it will not allow you to use any AI functions that are part of AI 5.0 or 5.2. Similarly, if you try to open a map that contains a function that is not supported or implemented in the configured AI version, an error message is shown. The Built-In view has been enhanced to indicate which AI version(s) support each function.

Note: First set preferences in the Version Validator Preference page.

For example:
• Set the target AI runtime version as 4.1 in the Version
Validator Preference page.
• In the model editor, if you try to drag and drop a function that is not valid for the configured AI version, an error message is displayed in the status bar.


Section 9. Working with Macros

Macros are used to populate a standard set of commands in Workbench. You can also record keyboard inputs and play them back later using macros.


Creating/Recording a Macro

To create or record a macro, follow these steps:

1. Open a model file in the Model editor and go to the Model Text page.

2. Click the Record Macro button to start recording the macro. Enter the text that you want to record, which will be a set of keyboard inputs. For example, you can record the following text as a macro:
[]
VAR->$sel$=$sel$
For how to play this macro, see “Playing a Recorded/Imported/Predefined Macro” below.

3. To stop recording, click the Record Macro button again. Give the recorded macro a name in the Save Recorded Macro window.


4. If you want to save the macro for permanent use, select the Save macro check box and provide an ID for the macro.
5. If an ID is not given while saving, the macro is temporary and
available only for that instance of Workbench.

Note: The Macro ID value should be unique.

The new macro will be listed under Play Macro > Macros.


Playing a Recorded/Imported/Predefined Macro

To play a recorded, imported, or predefined macro, follow the steps below. You can also load macros by importing XML files in the Macro Definitions page.

1. To run a macro, go to the Play Macro icon dropdown on the toolbar, go to Macros, and select the macro. The macros that have been imported in the Macro Definitions page are also listed here.

The respective macro commands appear in the Model Text page of the model file, at the position where the cursor is placed. Enter some text, for example, ERG.


When you play ‘Macro 1’, which you recorded earlier, the following lines are displayed. The selected text (ERG in this case) replaces $sel$ in the macro:

[]
VAR->ERG=ERG
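The substitution behavior can be sketched in Python. This is a hypothetical illustration only, not the actual Workbench implementation; the function name play_macro is invented for the sketch:

```python
# Hypothetical sketch of how a recorded macro's $sel$ placeholder is
# expanded with the current editor selection (illustration only; not
# the actual Workbench implementation).

MACRO_BODY = "[]\nVAR->$sel$=$sel$"

def play_macro(macro_body: str, selection: str) -> str:
    """Replace every $sel$ placeholder with the selected text."""
    return macro_body.replace("$sel$", selection)

if __name__ == "__main__":
    # Playing the macro with the selection "ERG" yields the lines
    # that are inserted into the Model Text page.
    print(play_macro(MACRO_BODY, "ERG"))
```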
2. To play a temporary macro go to the Play Macro icon dropdown
on the toolbar, go to Temporary Macros and then select the
macro.

Note:
• The macros that are already listed after installing the feature are not available for export or editing.
• The macros derived from XML files are available for import, export, and editing.
• The recorded macros are available for import, export, and editing.


Section 10. XML Mapping and Processing
This section covers the parsing of XML data and creating data
models for XML data. It also defines the difference between
traditional data models used for XML data versus style sheets and
Xpath source data models. A description of the parsers used to
parse and validate XML data is also included.


XML Overview for Traditional Models

XML is the ‘eXtensible Markup Language’ – extensible because it is not a fixed format like HTML™. XML is defined by the World Wide Web Consortium (W3C®) and is designed to enable the use of SGML™ on the World Wide Web.
SGML is the Standard Generalized Markup Language,
which is used as the international standard for defining
descriptions of the structure and content of different types of
electronic documents.
HTML is the Hypertext Markup Language, which is a
specific application of SGML used in the World Wide Web.
XML allows groups of people or organizations to create their own
customized markup languages for exchanging information in their
specialty (be it electronics, engineering, mathematics, music,
history, mountain climbing, and so on).
XML is an abbreviated version of SGML to make it easier for you to
define your work document types and make it easier for
programmers to write programs to handle them. XML files can be
parsed and validated like SGML files.
A parser validates the XML data against a document type
definition (DTD) or a schema (XSD). The AI Control Server
installation contains a data model generator, which uses a valid
schema to create a data model structure to which rules can be
added.

Inbound Processing

In AI there are two main methods to process inbound XML data: using a pre-parser called otxmlcanon, or using the translator’s built-in XML parser, Xerces.

In the first method, translation occurs with the use of a single environment. The input file is the XML document that is read by the parser and validated by otxmlcanon. This data is sent to the translator for processing, from which an output file of application data is created.


The advantage of this method is throughput. The disadvantage is that if an error is encountered, all data processed up to the error will already have been translated. Depending on how the translation session is modeled, the data preceding the error may have been updated into other systems or databases, and then needs to be backed out of those systems or databases.

[Diagram: the input file invokes the parser, and the parser output is piped directly into the translator. Developer-written data models read in the XML data and write out application data.]

With the second method, inbound processing begins with a pre-translation environment that fully parses and validates the XML data before it is sent through the translator for processing. The OTXMLPre.att environment attaches to the OTCallParser.att environment to run the otxmlcanon program to check for well-formed XML data and, optionally, validate against a DTD or schema. The output is placed into a temporary file.
If an error is reported, translation ends with error handling
procedures being performed. Upon successful parsing of the data,
the OTXMLPre.att attaches to the message processing environment
that reads in the XML data and outputs application data. The
advantage of this method is that the XML data can be fully checked
for well formedness and, optionally, validated against the DTD or
Schema before updating any systems or databases. The
disadvantage of this method is throughput. Additional time is
used to read all of the XML data and write it to a temporary file
completely before it is sent through the translator.


[Diagram: OTCallParser.att invokes the parser through a batch/shell script and writes the parsed XML data to a temporary output file. OTXMLPre.att performs validation; if an error occurs, error handling is performed; otherwise, message processing continues with developer-written message data models that read in the XML data and write out application data.]


Outbound Processing

During outbound processing, an input application file is sent to the translator for processing. Processing begins with a source data model environment, which loops for each message in the input application file until end-of-file is reached.
If no errors are found, the file is sent to the target data model
environment. The OTCallParser.att environment is attached to
invoke the XML parser. If an error is encountered, error handling
procedures are performed. Once the application data has been
successfully translated, an XML data file is created.

[Diagram: source message processing (a developer-written data model) reads in each message, performs validation, and performs error handling (bypass or reject) within the message processing loop. Target message processing (a developer-written data model) writes out the XML message and performs output to a temporary data file; OTCallParser.att then invokes the parser through a batch/shell script to process the XML data file.]


XML Requirements

XML documents must be well-formed and valid. Well-formed means the document follows all the notational and structural rules for XML. Programs that intend to process XML should reject any XML input that does not follow the rules for being well-formed. A valid document is one that matches its document type definition (DTD) or schema (XSD).

Well-Formed Documents

There are many rules regarding well-formed documents. Some important rules for creating well-formed documents are as follows:

No unclosed tags. Every start tag must have a corresponding end tag. This is because part of the information in an XML file has to do with how different elements of information relate to one another. If the structure is ambiguous, so is the information. Therefore, XML does not allow this ambiguous structure.
No overlapping tags. A tag that opens inside another tag must close before the containing tag closes. The structure of the document must be strictly hierarchical. For example, the sequence:

<greeting>
Welcome to the <response> world of XML </greeting>
</response>
is not well-formed because <response> opens inside of
<greeting> but does not close inside of </greeting>.
The correct sequence would be:
<greeting>
Welcome to the <response> world of XML </response>
</greeting>

Attribute values must be enclosed in quotation marks. For example, <TABLE BORDER=1> would be valid in HTML but invalid in XML because there are no quotation marks around the attribute value 1. In XML, a valid attribute value would be <TABLE BORDER="1">.


The text characters (<) and (&) must always be represented by ‘character entities.’ The ampersand
character (&) and the left angle bracket (<) MUST NOT
appear in their literal form, except when used as markup
delimiters, or within a comment, a processing instruction, or
a CDATA section. If they are needed elsewhere, they MUST
be escaped using either numeric character references or the
strings "&amp;" and "&lt;" respectively. The right angle
bracket (>) MAY be represented using the string "&gt;", and
MUST, for compatibility, be escaped using either "&gt;" or a
character reference when it appears in the string "]]>" in
content, when that string is not marking the end of a
CDATA section. Additional information about special
character coding and processing can be found in the XML
Special Characters section.
Additional information about well-formed document rules
can be found in XML instructional documents and
programmers’ guides.
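These rules can be observed with any generic XML parser. The sketch below uses Python's standard library (an illustration only, unrelated to the AI parsers) to show the overlapping-tag example above being rejected:

```python
# Sketch: checking documents for well-formedness with a generic XML
# parser (Python stdlib; illustration only, not the AI parsers).
import xml.etree.ElementTree as ET

def is_well_formed(xml_text: str) -> bool:
    """Return True if xml_text parses as well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# Overlapping tags: <response> opens inside <greeting> but closes outside.
bad = "<greeting>Welcome to the <response> world of XML </greeting></response>"
good = "<greeting>Welcome to the <response> world of XML </response></greeting>"

print(is_well_formed(bad))   # False
print(is_well_formed(good))  # True
```

The same check rejects the unquoted attribute value shown earlier (`<TABLE BORDER=1>`), since it violates the quotation-mark rule.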

Valid Documents

While all XML parsers check that the documents are well-formed (meaning the tags are paired and in the proper sequence, attribute values are indicated properly, and so on), some parsers also validate the document. Validating parsers check that the structure and number of tags makes sense.

Case Sensitivity

The entire XML document file, both markup and text, is case sensitive. Element type names, such as those used in start tags and end tags, must be defined alike, using either uppercase or lowercase characters.
For well-formed files with no document type definition, the first
occurrence of an element type name defines the casing. The
uppercase and lowercase must match; thus, <IMG/> and <img/>
are two different element types.
Attribute names are also case sensitive on a per-element basis. For
example, <PIC width="7in"/> and <PIC WIDTH="7in"/>
within the same file exhibit two separate attributes, because the
different cases of width and WIDTH distinguish them.
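This distinction is easy to observe with any XML parser; a sketch using Python's standard library (illustration only):

```python
# Sketch: XML element names are case sensitive, so <IMG/> and <img/>
# are two different element types (Python stdlib; illustration only).
import xml.etree.ElementTree as ET

root = ET.fromstring("<doc><IMG/><img/></doc>")
tags = [child.tag for child in root]
print(tags)  # ['IMG', 'img'] -- two distinct element types

# Attribute names are case sensitive too: width and WIDTH coexist
# on the same element because their different cases distinguish them.
pic = ET.fromstring('<PIC width="7in" WIDTH="7in"/>')
print(sorted(pic.attrib))  # ['WIDTH', 'width']
```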


Document Type Definition Overview

The document type definition is the grammar for a markup language as defined by the designer of the markup language. The DTD specifies what elements may exist, what attributes the elements may have, what elements may or must be found inside other elements, and in what order the elements can appear.
A DTD is associated with an XML document by way of a document type declaration, which appears at the top of the XML file. The declaration may contain either an internal (inline) copy of the DTD or a reference to that document as a system filename or URI (uniform resource identifier).

DTD Example

The DTD is arranged in hierarchical format. In this example, the hierarchy of the elements is indicated by indentation.

<page>
  <head>
    <title/>
  </head>
  <body>
    <title/>
    <para/>
  </body>
</page>

Here, the XML data is converted to a DTD structure.

<!DOCTYPE page [
<!ELEMENT page (head, body)>
<!ELEMENT head (title)>
<!ELEMENT body (title, para)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT para (#PCDATA)>
]>

Notice how the hierarchy is kept intact within each element:
• Page is the root element.
• The page element consists of a head followed by a body.
• A head element contains a title element.
• A body element contains a title element followed by a para element.
This example was coded as it would appear as an internal DTD
subset at the start of an XML document. By keeping the DTD
inside a document during its development, you can save a lot of
file swapping until you are sure the DTD works as intended. You
can move the DTD to an external file once it has been finalized.
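For instance, a document carrying an internal DTD subset like the example above is accepted by any XML parser, even a non-validating one. The sketch below uses Python's stdlib minidom (illustration only; minidom parses the declaration but, being non-validating, does not enforce it):

```python
# Sketch: an XML document carrying a DTD as an internal subset.
# xml.dom.minidom parses the declaration but does not validate
# against it (Python stdlib; illustration only).
from xml.dom import minidom

DOC = """<!DOCTYPE page [
<!ELEMENT page (head, body)>
<!ELEMENT head (title)>
<!ELEMENT body (title, para)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT para (#PCDATA)>
]>
<page><head><title>Hi</title></head>
<body><title>Hi</title><para>Text</para></body></page>"""

dom = minidom.parseString(DOC)
print(dom.doctype.name)             # page
print(dom.documentElement.tagName)  # page
```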


XML Schema Definition Requirements

The XML Schema Definition (XSD) is the definition of a document in XML syntax. The XSD specifies what elements may exist, what attributes the elements have, what elements must be found inside other elements, and in what order the elements can appear.

Namespace Definition

The XSD is associated with an XML document by way of an XML namespace definition. A namespace definition is an attribute of the root element. The XML namespace definition may reference a local filename or URI (uniform resource identifier).

XSD Example

The XSD is arranged in hierarchical format. In this example, the hierarchy of the elements is indicated by indentation.

<page>
  <head>
    <title/>
  </head>
  <body>
    <title/>
    <para/>
  </body>
</page>


Here, the XML data is converted into an XSD structure:

<xsd:schema id="page" targetNamespace="" xmlns=""
    xmlns:xsd="http://www.w3.org/1999/XMLSchema"
    xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
  <xsd:element name="head">
    <xsd:complexType content="elementOnly">
      <xsd:all>
        <xsd:element name="title" minOccurs="0" type="xsd:string"/>
      </xsd:all>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="body">
    <xsd:complexType content="elementOnly">
      <xsd:all>
        <xsd:element name="title" minOccurs="0" type="xsd:string"/>
        <xsd:element name="para" minOccurs="0" type="xsd:string"/>
      </xsd:all>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="page" msdata:IsDataSet="True">
    <xsd:complexType>
      <xsd:choice maxOccurs="unbounded">
        <xsd:element ref="head"/>
        <xsd:element ref="body"/>
      </xsd:choice>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>


Notice, through the use of complexType, how the children are defined within their parent:
• Page is the root element.
• The page element consists of a head element followed by a body element.
• A head element contains a title element.
• A body element contains a title element followed by a para element.


XML Parsers

Application Integrator™ provides you with two XML parsers. One is a separate program called by system models or user-defined batch files. The other is built into the AI Control Server.

Parser Overview

A generic XML parser is a program or class that can read any well-formed, valid XML data as its input. It will also detect and report any errors found in the XML data.

Why Do We Have a Parser?

The parser can check the input for well-formed XML and can write output in the canonical format. The parser can also convert characters that XML uses into characters that are recognizable to the translator and the target application. The items checked by the parser depend on the arguments set when invoking the parser.

XML Special Characters

Special syntax characters are used to identify structure and special sequences of characters within the XML data.
These special characters are the less than symbol (<), the greater
than symbol (>), the ampersand symbol (&), the apostrophe
symbol ('), and the quotation mark symbol ("). To use these special
characters in your XML data models, you must use their Entity
Reference value. The following table lists the Entity Reference
value for these special characters:

XML Special Character Entity Reference


< (less than symbol) &lt; or &#60;
> (greater than symbol) &gt; or &#62;
& (ampersand symbol) &amp; or &#38;
' (apostrophe symbol) &apos; or &#39;
" (quotation mark symbol) &quot; or &#34;

When these substitution characters appear in a file, they must be recognized and converted into a different symbol so the translator and application can process them as intended.
Shown here is a typical example of how a substitution character
should be coded in an XML file element. The program is checking
for whether Item_No is less than 5.
<ITEM>Item_No &lt; 5</ITEM>


Rather than using the less than symbol (<) in the XML code, the
Entity Reference of &lt; was coded.
In this example, if the ampersand symbol were needed in an
element, it would be coded like this:
<DESCRIPTION>Currier&amp;Ives</DESCRIPTION>
Rather than using the ampersand symbol (&) in the XML code, the
Entity Reference of &amp; was coded.
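The same substitution can be performed programmatically; for example, Python's standard library provides escape/unescape helpers (a sketch, separate from the AI parser's escape/release mechanism described next):

```python
# Sketch: converting between literal special characters and their XML
# entity references (Python stdlib; illustration only).
from xml.sax.saxutils import escape, unescape

# Outbound direction: literal characters -> entity references.
print(escape("Item_No < 5"))         # Item_No &lt; 5
print(escape("Currier&Ives"))        # Currier&amp;Ives

# Inbound direction: entity references -> literal characters.
print(unescape("Currier&amp;Ives"))  # Currier&Ives
```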

Escape and Release Characters

During inbound processing, when the parser sees the entity reference value in an XML document, it converts it to the release character that was specified on the command line, followed by the intended symbol.
intended symbol.
During outbound processing, when the parser sees the escape
character that was specified on the command line followed by the
intended symbol, it converts it to the corresponding entity
reference value.


Xerces

The Xerces parser is built into the Application Integrator™.
When parsing or writing out XML data during a translation session, the following things are checked to determine whether Xerces should be used to parse the data:
Source
• The source model ends with .mdl and all data model items are XMLRecord (Xpath model)
• The source model ends with .xsl (style sheet)
Target
• The target model ends with .xsl (style sheet)

Note: Xpath models can only be used on the source side, not the
target side.

Note: Xerces-C has intrinsic support for: ASCII, UTF-8, UTF-16 (Big/Small Endian), UCS4 (Big/Small Endian), EBCDIC (code pages IBM037, IBM1047, and IBM1140 encodings), ISO-8859-1 (Latin 1), and Windows-1252.

Source Parsing

During source parsing, Xerces parses the XML data into a DOM (Document Object Model) in memory while ensuring that the data is well-formed XML data. Well-formed means that each XML tag has a start and end tag, and all child tags are closed out before parent end tags, in addition to ensuring that proper XML syntax is followed.
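The DOM idea itself can be illustrated with any DOM parser. The sketch below uses Python's stdlib minidom (an illustration of the concept only, not the Xerces API): the document is parsed into an in-memory tree, and a well-formedness error surfaces as a parse exception.

```python
# Sketch: parsing XML into an in-memory DOM tree and walking it
# (Python stdlib minidom; illustrates the concept, not Xerces itself).
from xml.dom import minidom

XML = "<page><head><title>Demo</title></head><body><para>Hello</para></body></page>"

dom = minidom.parseString(XML)            # fails if not well-formed
title = dom.getElementsByTagName("title")[0]
print(title.firstChild.data)              # Demo

# A well-formedness error (child closed after its parent) surfaces
# as an exception during parsing.
try:
    minidom.parseString("<page><head></page></head>")
except Exception as exc:
    print(type(exc).__name__)             # ExpatError
```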


Optionally, while Xerces is populating the DOM structure, validation of the XML data against its referenced DTD/XSD occurs, if the XML_VALIDATE environment variable is set to “Yes”.
If errors are encountered (well-formed, DTD/XSD validation),
processing returns to the parent environment. The errors from
Xerces processing are retrieved using the data model function
LAST_MCFERR() – Last Map Component File’s Errors.

Upon the DOM structure being populated with the full XML
document and no errors encountered, if the S_CONSTRAINT
references a style sheet, it is then invoked with Xalan processing.
If errors are encountered, processing returns to the parent
environment where LAST_MCFERR() can be called to obtain
details of the errors. If no errors are encountered, processing
continues with the S_MODEL being either another style sheet or an
XPATH enabled data model.
Target Construction
Target processing uses the style sheet defined by T_MODEL to
construct the XML document. Once the style sheet processing
ends, automatically the output XML document is parsed back into
a DOM structure in memory. (Note: The translator is aware of the
starting position of the current XML document output. If the
output file contained previous XML data, only the current
constructed XML document is parsed back in.) During parsing, the
data is verified to be well-formed. If XML_VALIDATE is set to
“Yes”, the parsed XML document is also validated against the
DTD/XSD. If errors occurred, processing returns to the parent
environment where LAST_MCFERR() can be used to retrieve the
details of the errors. If no validation errors are reported and
T_CONSTRAINT has a style sheet associated with it, it is then
invoked by Xalan referencing the DOM structure in memory.
Processing then returns back to the parent environment. If errors
were encountered during the Xalan style sheet processing,
LAST_MCFERR() can be used to retrieve the details of the errors.

Note: Use of S_CONSTRAINT/T_CONSTRAINT is not dependent on the setting of XML_VALIDATE. With XML_VALIDATE set to “No”, the data will be checked for being well-formed and compliant using the style sheet defined under S_CONSTRAINT/T_CONSTRAINT, with no validation against the DTD/XSD occurring.

Validation of the XML Data

Parsing using Xerces simply checks the XML data to ensure it is well-formed. To validate the XML data against a DTD or schema, the XML_VALIDATE environment variable must be passed in when the translation is invoked.

Windows®:
otrun -at XMLParseSample.att -DINPUT_FILE=XML.in -cs %OT_QUEUEID% -DXML_VALIDATE=Yes -tl 1023 -I

Unix and Linux:
inittrans -at XMLParseSample.att -DINPUT_FILE=XML.in -cs $OT_QUEUEID -DXML_VALIDATE=Yes -tl 1023 -I

XML Constraints

Schema (.xsd) and document type definitions (.dtd) are used to define the content of an XML message. They are used to validate an instance of their definition. Schema is considerably more specific in its definition than DTD, in that it can specify occurrences, size, character set, code lists, groupings, and so on. However, neither schema nor DTD syntax can represent requirements or constraints based on the presence, absence, or data content spanning multiple elements and/or attributes. These types of requirements or constraints can be represented and enforced using XPATH expressions in style sheets.
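As a tiny illustration of such a cross-element constraint ("PhysicalAddress is required when GlobalBusinessIdentifier is absent", echoing the Schematron example later in this section), the sketch below checks it with simple path queries in Python's standard library. The wrapper data and function name are hypothetical; in AI this check would be expressed in a constraint style sheet, not in Python:

```python
# Sketch: enforcing a constraint that neither DTD nor XSD syntax can
# express -- "PhysicalAddress is required when GlobalBusinessIdentifier
# is absent" (Python stdlib; hypothetical wrapper data, illustration only).
import xml.etree.ElementTree as ET

def constraint_ok(partner: ET.Element) -> bool:
    """Require PhysicalAddress whenever GlobalBusinessIdentifier is absent."""
    has_gbi = partner.find("GlobalBusinessIdentifier") is not None
    has_addr = partner.find("PhysicalAddress") is not None
    return has_gbi or has_addr

ok = ET.fromstring("<Partner><PhysicalAddress>1 Main St</PhysicalAddress></Partner>")
bad = ET.fromstring("<Partner><Name>Acme</Name></Partner>")
print(constraint_ok(ok))   # True
print(constraint_ok(bad))  # False
```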
Standards, such as RosettaNet, provide DTDs or XSDs for their
messages, along with documentation, which describes those
constraints that could not be represented within the DTD/XSD
syntax. To separate translation constraint validation from
mapping, a set of environment variables have been added within
the Map Component File (.att) to define these constraint style
sheets.
The Map Component File now contains the following for XML data
processing:
INPUT_FILE
S_ACCESS
S_MODEL (mapping XPATH data model or style sheet)
S_CONSTRAINT (source constraint validation style sheet)
OUTPUT_FILE
T_ACCESS
T_MODEL (mapping style sheet)
T_CONSTRAINT (target constraint validation style sheet)

The S_CONSTRAINT/T_CONSTRAINT environment variables are optional. When used, the defined value must be a style sheet ending with the suffix “.xsl”. It should only be defined if the S_MODEL/T_MODEL is a style sheet (ending with “.xsl”) or the S_ACCESS is “OTXPath.acc” (S_MODEL is an XPATH data model). This means that the input/output data is in XML syntax and that Xerces will be used for parsing/construction.

Note: If S_MODEL/T_MODEL has an ending suffix of “.xsl”, then the S_ACCESS/T_ACCESS variable is not used during processing.

The constraint style sheet used for S_CONSTRAINT/T_CONSTRAINT can follow the approach used by Schematron. Schematron is a shareware tool that can generate constraint style sheets. For more detailed information, see: http://www.schematron.com.

The Schematron approach is as follows:
1. Define the constraints, along with the text to report upon error, in an XML document. The structure of this document is specific to Schematron. Refer to XML Constraint File for more details.
2. Referencing the above XML constraint file as input, run a translation to generate the constraint style sheet. Refer to Generate Constraint Style Sheet for more details.


3. Within the Map Component File, set the S_CONSTRAINT/T_CONSTRAINT to the file name of the generated constraint style sheet.


XML Constraint File

An XML file defining the constraints is used to create the constraint style sheet that will be referenced in S_CONSTRAINT or T_CONSTRAINT. The recommended naming convention for the constraint XML file is to use the suffix “-constraints.scmt” (Schematron). Additionally, the recommended naming convention for the generated style sheet is to use the base file name of the constraint .scmt file followed by “-constraints.xsl”. This convention allows for easy recognition of both the XML and style sheet constraint files.

Example XML Constraint File

The following is an example of a constraint XML file, with two constraints defined.

<sch:schema xmlns:sch="http://www.ascc.net/xml/schematron">

  <sch:pattern name="All occurrences of an element when present must be a specific code, when another element contains a certain code.">
    <sch:rule context="//GlobalPartnerClassificationCode">
      <sch:report test="(.!='End User' and /Pip4A4PlanningReleaseForecastNotification/GlobalDocumentFunctionCode[. = 'Request'])">
        GlobalDocumentFunctionCode is 'Request', so all GlobalPartnerClassificationCode present must be 'End User'.~
      </sch:report>
    </sch:rule>
  </sch:pattern>

  <sch:pattern name="Element required when another element is not used/present.">
    <sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ForecastPartner/PartnerDescription">
      <sch:report test="count(BusinessDescription/GlobalBusinessIdentifier) = 0 and count(PhysicalAddress) = 0">
        When GlobalBusinessIdentifier is not present, PhysicalAddress is required.~
      </sch:report>
    </sch:rule>
  </sch:pattern>
</sch:schema>

Types of Constraints

The following table contains examples of implemented constraints that can be used as a reference while developing your own XML constraint file.

1. “At least one occurrence is mandatory of two or more elements”
<sch:pattern name="At least one occurrence is mandatory of two or more elements.">
  <sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ForecastProductIdentification">
    <sch:assert test="count(GlobalProductIdentifier) > 0 or count(PartnerProductIdentification) > 0">
      At least one occurrence of GlobalProductIdentifier or PartnerProductIdentification is mandatory at '/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ForecastProductIdentification'.~
    </sch:assert>
  </sch:rule>
</sch:pattern>
2. “Only one occurrence of a specific element’s value is allowed” – one occurrence of ‘Ship’ and one occurrence of ‘Dock’
<sch:pattern name="Only one occurrence of a specific element’s value is allowed.">
  <sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod">
    <sch:assert test="(count(GlobalTransportEventCode[.='Ship']) &lt;= 1) and (count(GlobalTransportEventCode[.='Dock']) &lt;= 1)">
      Only one occurrence of "Ship" and "Dock" is allowed for '/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod/GlobalTransportEventCode'.~
    </sch:assert>
  </sch:rule>
</sch:pattern>

3. “Only one value is allowed for a specific element’s value” – the only value allowed at this XPATH location is ‘Ship’; any other value is an error
<sch:pattern name="Only one value is allowed for a specific element’s value.">
  <sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod">
    <sch:assert test="(count(GlobalTransportEventCode[.='Ship']) = count(GlobalTransportEventCode))">
      Only the value "Ship" is allowed for '/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod/GlobalTransportEventCode'.~
    </sch:assert>
  </sch:rule>
</sch:pattern>
4 “Only a value from a list is allowed for a specific element’s value” - other values are an

Workbench User’s Guide 321


Section 10. XML Mapping and Processing

error
<sch:pattern name="Only a value from a list is allowed for a specific
element’s value.">
<sch:rule
context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/Partner
ProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/
ForecastPeriod">
<sch:assert test="(count(GlobalTransportEventCode[.='Ship']) +
count(GlobalTransportEventCode[.='Dock']) =
count(GlobalTransportEventCode))">
Only the value "Ship" or "Dock" is allowed for
'/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductF
orecast/ProductForecast/ProductSchedule/ForecastProductSchedule/Forecast
Period/GlobalTransportEventCode'.~
</sch:assert>
</sch:rule>
</sch:pattern>
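The count()-based tests in examples 2 through 4 can be reproduced outside Schematron. The following Python sketch (standard library only; the element names follow the examples above, and the sample data is hypothetical) shows the logic behind the "value from a list" test in example 4:

```python
import xml.etree.ElementTree as ET

# Hypothetical ForecastPeriod fragment; element names follow the examples above.
doc = ET.fromstring(
    "<ForecastPeriod>"
    "<GlobalTransportEventCode>Ship</GlobalTransportEventCode>"
    "<GlobalTransportEventCode>Dock</GlobalTransportEventCode>"
    "</ForecastPeriod>"
)

codes = [e.text for e in doc.findall("GlobalTransportEventCode")]

# Mirrors: count(...[.='Ship']) + count(...[.='Dock']) = count(...)
allowed = codes.count("Ship") + codes.count("Dock") == len(codes)
print(allowed)  # True: every occurrence is either 'Ship' or 'Dock'
```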
5 “Value must be previously referenced within the document.”
<sch:pattern name="Value must be previously referenced within
document.">
<sch:rule
context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/Partner
ProductForecast/ProductForecast/ForecastProductIdentification">
<sch:assert
test="count(../../ForecastPartner/PartnerDescription/BusinessDescription
/GlobalBusinessIdentifier[.= current()/GlobalProductIdentifier]) &gt;
0">
The value of
'/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductF
orecast/ProductForecast/ForecastProductIdentification/GlobalProductIdent
ifier' was not previously referenced within the document.~
</sch:assert>
</sch:rule>
</sch:pattern>


6 “Element must only be present if another element with a specific value is present.”
<sch:pattern name="Element must only be present if another element with
a specific value is present.">
<sch:rule
context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/Partner
ProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/
ForecastPeriod">
<sch:report test="count(GlobalTransportEventCode) &gt; 0 and
(GlobalIntervalCode!='Named Address' or not(GlobalIntervalCode))">
GlobalTransportEventCode present yet GlobalIntervalCode not
present with a value of 'Named Address'.~
</sch:report>
</sch:rule>
</sch:pattern>
7 “Parent present, one of its children required.”
<sch:pattern name="Parent present, one of its children required.">
<sch:rule context="//PhysicalAddress">
<sch:report test="count(child::*) = 0">
At least one child element is required for Physical Address.~
</sch:report>
</sch:rule>
</sch:pattern>
8 “One and only one occurrence of an element with a specific value is required/allowed.”
<sch:pattern name="One and only one occurrence of an element with a
specific value is required/allowed.">
<sch:rule context="/Pip4A4PlanningReleaseForecastNotification">
<sch:report test="count(//GlobalDocumentFunctionCode[.='Request'])
!= 1">
One and only one occurrence of GlobalDocumentFunctionCode with a
value of 'Request' is required/allowed.~
</sch:report>
</sch:rule>


</sch:pattern>
9 “Element required when another element is not used/present.”
<sch:pattern name="Element required when another element is not
used/present.">
<sch:rule
context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ForecastPartner/PartnerDescription">
<sch:report
test="count(BusinessDescription/GlobalBusinessIdentifier) = 0 and
count(PhysicalAddress) = 0">
When GlobalBusinessIdentifier is not present, PhysicalAddress is
required.~
</sch:report>
</sch:rule>
</sch:pattern>
10 “All occurrences of an element when present must be a specific code, when another element contains a certain code.”
<sch:pattern name="All occurrences of an element when present must be a
specific code, when another element contains a certain code.">
<sch:rule context="//GlobalPartnerClassificationCode">
<sch:report test="(.!='End User' and
/Pip4A4PlanningReleaseForecastNotification/GlobalDocumentFunctionCode[.= 'Request'])">
GlobalDocumentFunctionCode is 'Request', so all
GlobalPartnerClassificationCode present must be 'End User'.~
</sch:report>
</sch:rule>
</sch:pattern>


Within the rule element, the context attribute (rule context=) is used to define the scope of how much of the input document is to be checked. You can specify the whole document or limit it to each occurrence of a certain branch within the document.

The assert and report elements output their values based on the following conditions:
assert – outputs its value when the test evaluates to false
report – outputs its value when the test evaluates to true

Be sure to end each <assert> or <report> value with the tilde (“~”)
character. For error reporting, the System models will then be able
to consolidate multiple occurrences of the same constraint. If a
constraint tests multiple occurrences of a branch (for example a line
item value), then each time that constraint fails the text to report
upon error appears. If the tilde (~) is used at the end of the text, the
error text is written out once, followed by a message on how many
times the error occurred.
For example, if the tilde were not used, the error report would write out the following error message. Notice how the same error text is printed each time the constraint failed.

220 Source constraint style sheet validation error Source Constraint Error: In pattern
count(child::*) = 0: At least one child element is required for Physical Address.In pattern
count(child::*) = 0: At least one child element is required for Physical Address.In pattern
count(child::*) = 0: At least one child element is required for Physical Address.

If the tilde is used, the error message would appear like the
following example, where the error text is printed out once,
followed by a message in brackets that indicates the number of
times the error occurred.

220 Source constraint style sheet validation error Source Constraint Error:In pattern
count(child::*) = 0: At least one child element is required for Physical Address.
[occurred 3 times]
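The consolidation behavior can be sketched as follows (a Python illustration of the behavior described above, not the product's implementation):

```python
from collections import Counter

def consolidate(messages):
    """Collapse repeated constraint messages that end with the tilde,
    mimicking the '[occurred N times]' consolidation described above."""
    counts = Counter(m.rstrip("~") for m in messages)
    return [
        text if n == 1 else f"{text} [occurred {n} times]"
        for text, n in counts.items()
    ]

errors = ["At least one child element is required for Physical Address.~"] * 3
print(consolidate(errors))
# ['At least one child element is required for Physical Address. [occurred 3 times]']
```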

Generate Constraint Style Sheet

To create the constraint style sheet, run the following translation using the constraint XML file (.scmt) as input.

Windows® operating system


From the Run dialog box, type:
<path>otrun.exe -at OTXMLConstraint.att
-DBASEFILENAME="<CONSTRAINT_BASE_FILENAME>" -cs %OT_QUEUEID% -I

Unix and Linux operating systems


From the command line, type:
inittrans -at OTXMLConstraint.att
-DBASEFILENAME="<CONSTRAINT_BASE_FILENAME>" -cs $OT_QUEUEID -I

Where
<CONSTRAINT_BASE_FILENAME> is the constraint XML file’s base filename. Do not specify the .scmt extension.
The output of this translation is the constraint style sheet; the name of this file will be <CONSTRAINT_BASE_FILENAME>.xsl.

For example:
XML constraint filename: PIP4A4-constraints.scmt
<CONSTRAINT_BASE_FILENAME>: PIP4A4-constraints
Generated style sheet filename: PIP4A4-constraints.xsl

otxmlcanon

The otxmlcanon parser is a separate executable that is invoked to read input from a file or standard input and write output to the standard output.

otxmlcanon Overview

In the product’s application, the XML parser takes an incoming stream of XML data from a file or standard input, validates the data, and writes the data to the standard output. The output file can then be read and translated by the translator component. The output can be in canonical or non-canonical format.


To enable the canonical option, use the –X argument.


Canonical is an industry term that describes the behavior of the parser. When the output is in canonical format, the parser normalizes the XML data by resolving entity references and other markup. A document in canonical format is in its lowest form. An option also allows the prolog and DTD references to be removed. For example, suppose you have an entity in your DTD coded like this:
<!ENTITY GPM "XML is easy to use!">
<testcase>&GPM;</testcase>
When &GPM; is encountered, the value "XML is easy to use!" is
inserted.
If you run the parser with the canonical option turned off, the
output would be in non-canonical format, and every place you had
&GPM; you would see:
<testcase>&GPM;</testcase>
If you run the parser with the canonical option enabled, the output
would be in canonical format, and every place you had &GPM; you
would see:
<testcase>XML is easy to use!</testcase>
Notice how the entity references (in this case, &GPM;) have been replaced by their values.
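Most XML parsers expand internal entities the same way. For instance, Python's standard library parser resolves the entity exactly as the canonical option does (a stand-alone illustration, not a call into otxmlcanon):

```python
import xml.etree.ElementTree as ET

# Internal DTD subset defining the GPM entity, as in the example above.
xml_data = (
    '<!DOCTYPE testcase [<!ENTITY GPM "XML is easy to use!">]>'
    "<testcase>&GPM;</testcase>"
)

root = ET.fromstring(xml_data)  # the parser resolves &GPM; while reading
print(root.text)  # XML is easy to use!
```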


Invoking the Parser

All XML data needs to be passed through the otxmlcanon parser. In the XML document, if you reference a DTD that does not exist, you will get an error, whether or not the validation argument is used.

Note: The XML Plug-In does not support referencing DTDs and
XSD™ schemas in the same XML document.

If the parser detects an error while processing the document, the error is written to the standard output within special error tags, and processing of the data terminates.
The parser command has the following syntax:
Unix and Linux operating systems
Inbound:
otxmlcanon -r <release_char> [-t] [-V | -v] [-D] [-h] [-c] [-X]
[-x] [-i] [-s] [-n] [-a] [-l <locale>] <input_file>

Outbound:
otxmlcanon -o <escape_char> [-t] [-V | -v] [-D] [-h] [-c] [-X]
[-x] [-i] [-s] [-n] [-a] [-l <locale>] <input_file>

Windows® operating system


Inbound:
otxmlcanon.exe -r <release_char> [-t] [-V | -v] [-D] [-h] [-c] [-X]
[-x] [-i] [-s] [-n] [-a] [-l <locale>] <input_file>

Outbound:
otxmlcanon.exe -o <escape_char> [-t] [-V | -v] [-D] [-h] [-c] [-X]
[-x] [-i] [-s] [-n] [-a] [-l <locale>] <input_file>


Where
Option Description
-r <release_character> This argument defines the release
character to be used when converting
Entity References. Used for inbound
processing.
-o <escape_character> This argument defines the escape
character to be used by the translator so
that the parser can recognize when
Entity References need to be inserted.
Used for outbound processing.

Note: You must use either the –r or the –o argument to identify the release or escape character.

-t This option specifies that the prolog is not to be output.
-v This option specifies that validation will
take place only when a DTD or Schema
is referenced.
-V This option specifies that validation of
the XML document against the DTD or
Schema is to always take place.

Note: The XML Plug-In does not automatically validate XML data.
Additional logic must be added to your
data models for validation to occur.
Refer to XML Validation Parameters for
additional information.

-X This option specifies that the output will be in canonical format.
-l <locale> This option specifies the locale to be
used.
-c This option checks for the non-
deterministic specification of the element
type.

<input_filename> Indicates the filename of the input file.
The parser should be run with this
option to ensure entity reference values
are resolved to their string values.
If no <input_file> is entered, standard
input (stdin) is used.

Optional Parameters

Following is a list of optional parameters that can be passed to the parser.

Option Description
-r <release character> This argument defines the release
character to be used when converting
Entity References and is used for inbound
processing.
-o <escape character> This argument defines the escape
character to be used by the translator so
that the parser can recognize when Entity
References need to be inserted. It is used
for outbound processing.
-X This argument specifies that the output
will be in canonical format.
-t This argument indicates that the xml file
must be output without the prolog.
-V Indicates that validation must occur
against the DTD.
-v Indicates that validation must occur only
if the DTD is present. Both –V and –v
cannot be used together.
-D Prints the extraInfo, which is the detailed error text in English.
-h Print usage information for the program.
Same as “-?”.

-n Retains or removes the namespace in the output XML file.
-a Retains the comments in the output XML file.
-c Indicates that the output must be
deterministic.
-x This argument outputs empty elements
that do not contain end tags, to contain
end tags.
-i Outputs CDATA without releasing
special characters.
-s This argument tests for a single XML document by checking for multiple root elements.
-l <locale> Indicates the locale to be used for the
input file.
<input specification > Indicates the filename of the input file in
standard URL format, the socket
specification, or if no input specification
is entered, the standard input is used.
File specification is optional and is in the
standard URL format. The full path must
be included. For example,
file:/C:/appl/rn.xml
The socket specification is optional and
indicates input will be coming from a
socket and identifies the socket. Refer to
Section 4, "Creating Map Component
Files," for information about specifying
sockets.
If no <input specification> is entered,
standard input (stdin) is assumed.

Parser’s Character/Entity Reference Conversion

The following table summarizes otxmlcanon’s character/entity reference conversion for inbound (-r option) and outbound (-o option) processing.


INBOUND XML Processing – otxmlcanon’s character/entity reference conversion

Attribute Data and PCData
  Raw XML data to otxmlcanon    Output from otxmlcanon to otrans
  &quot; or &#34;               \"
  &amp; or &#38;                \&
  '                             \'
  /                             \/
  &lt; or &#60;                 \<
  >                             \>
  \                             \\
  ]                             ]

CData
  Raw XML data to otxmlcanon    Output from otxmlcanon to otrans
  "                             \"
  &                             \&
  '                             \'
  /                             \/
  <                             \<
  >                             \>
  \                             \\
  ]                             \]

OUTBOUND XML Processing – otxmlcanon’s character/entity reference conversion

Attribute Data and PCData
  otrans output to otxmlcanon   otxmlcanon’s XML output
  \"                            &#34;
  &                             &#38;
  '                             &#39;
  /                             /
  \<                            &#60;
  \>                            >
  \\                            \
  ]                             ]

CData
  otrans output to otxmlcanon   otxmlcanon’s XML output
  \"                            "
  &                             &
  '                             '
  /                             /
  \<                            <
  \>                            >
  \\                            \
  ]                             ]
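As an illustration, the inbound attribute-data conversion can be expressed as an ordered substitution (a Python sketch assuming backslash as the release character; the actual conversion is performed inside otxmlcanon):

```python
# Ordered so the release character itself is escaped first and entity
# references are rewritten before the remaining single characters.
INBOUND_ATTRIBUTE = [
    ("\\", "\\\\"),
    ("&quot;", '\\"'), ("&#34;", '\\"'),
    ("&amp;", "\\&"), ("&#38;", "\\&"),
    ("&lt;", "\\<"), ("&#60;", "\\<"),
    ("'", "\\'"), ("/", "\\/"), (">", "\\>"),
]

def release_attribute(value):
    for raw, released in INBOUND_ATTRIBUTE:
        value = value.replace(raw, released)
    return value

print(release_attribute("&quot;A&amp;B&quot;"))  # \"A\&B\"
```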

XML Input to the Parser

The input data stream is a series of one or more XML documents. Input can come from a file or standard input. When multiple documents are processed, each document in the input stream is parsed and the data is output until the end-of-stream is reached or an error occurs.

More about Namespace

In general, deciding whether to keep namespace prefixes is tied to the inbound/outbound nature of the processing. The two types of documents that can be processed by otxmlcanon are schema-based and DTD-based documents.
An XML file is considered a schema-based document if it contains the noNamespaceSchemaLocation or schemaLocation attribute. For schema-based documents, the following conditions apply with otxmlcanon.

If the -n option is specified, the namespace is retained.
If the -r option (inbound) is specified, the namespace is removed.
If the -o option (outbound) is specified, the namespace is retained.
For any other option (for instance, -X), the namespace is removed (default).

Note: The namespace is retained only if otxmlcanon is used with the -n option alone. If both -n and -r are used, -r takes precedence and the namespace is removed. If both -o and -n are used, -o takes precedence and the namespace is retained.

An XML file is considered a DTD-based document if it does not contain the noNamespaceSchemaLocation or schemaLocation attribute. For DTD-based documents, the following conditions apply with otxmlcanon.

If the -n option is specified, the namespace is removed.
If the -r option (inbound) is specified, the namespace is retained.
If the -o option (outbound) is specified, the namespace is retained.
For any other option (for instance, -X), the namespace is retained (default).


Note: The namespace is removed only if otxmlcanon is used with the -n option alone. If both -n and -r are used, -r takes precedence and the namespace is retained. If both -o and -n are used, -o takes precedence and the namespace is retained.
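The schema-based versus DTD-based decision can be sketched as a simple attribute check (Python, standard library; a simplified illustration that only inspects the root element, where these attributes normally appear):

```python
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"

def is_schema_based(xml_text):
    """True when the document carries schemaLocation or
    noNamespaceSchemaLocation, per the rule described above."""
    root = ET.fromstring(xml_text)
    return any(
        f"{{{XSI}}}{name}" in root.attrib
        for name in ("schemaLocation", "noNamespaceSchemaLocation")
    )

schema_doc = (
    '<Order xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" '
    'xsi:noNamespaceSchemaLocation="Order.xsd"/>'
)
print(is_schema_based(schema_doc))  # True
print(is_schema_based("<Order/>"))  # False
```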

Parser Output

The output from the parser is written to standard output. When multiple documents are parsed, output is in the same order as the input documents.

Supported Encodings

The following is a list of the supported encodings within otxmlcanon.

ISO-10646-UCS-2 ISO-10646-UCS-2-B ISO-10646-UCS-2-L


ISO-646 ISO-8859-1 ISO-8859-2
ISO-8859-3 ISO-8859-4 ISO-8859-5
ISO-8859-6 ISO-8859-7 ISO-8859-8
ISO-8859-9 ISO-Latin-1 ISO-Latin-2
ISO-Latin-3 ISO-Latin-4 ISO-Latin-5
ISO-Latin-6 ISO-Latin-7 ISO-Latin-8
UTF-16 UTF-16-B UTF-16-L
UTF-8

Traditional Models

The data model generator is a utility that is used to create a data model from a schema. The input to the data model generator is a well-formed and valid XML file containing a reference to a schema. From this file, the generator creates data model items for each element identified with formats.


If you have DTDs, you must generate a schema from the DTD, using either the Generate Schema utility on the Navigator or the Generate Schema from DTD option in the wizard, before invoking the model generator.

XSD Validator

The XSD Validator can be used for validating XML schema documents. This utility has a number of features that can be configured from the XSD Validator preference page. It carries out validation of the schema and puts together the results of the validation according to the settings made in the preference page. The validation summary and details are displayed at the end of validation in a user-friendly dialog.
To invoke the XSD Validator:
1. Right click on a schema file (with the file extension .xsd).
2. Choose the “Validate XSD” option as shown below.


The Validation Report for the file is displayed (as shown below):
The window consists of six tabs: Results, Details, Suggestions,
Element Declarations, Type Definitions, Model Group Definitions.

The Results tab reports whether the validation was successful, that is, whether it had errors. It presents a brief summary of the schema files that were validated and the ones that have errors (errors are shown in red).
The details tab of the dialog can be used for probing deeper into the
errors.


The Details tab displays the error details, for example, what errors were found and the schema file in which the error occurred, with the line and column number (highlighted in red).
This tab also shows details about schemas like resolved location,
target namespace, number of element declarations, type
definitions and model group definitions found in a schema.

The next tab is the Suggestions tab. The utility has a smart module called the suggestion generator, which attempts to calculate the imports or includes that an erroneous schema needs in order to rectify its errors.


The suggested imports and includes are highlighted in red. The suggested line can easily be copied and pasted into the indicated schema file. Validation should be performed again to verify the correctness of the suggestion.
It is also possible to look for unresolved elements and type
definitions in a schema file that have not been otherwise included
or imported.
The Add Files button can be used to add other schema files to the list of files to be searched. Once the files have been added, click the Force Lookup button to initiate the search for unresolved elements and type definitions. On completion, a new set of suggestions is presented.

The next tab is the Element Declarations tab.

It can be used to view the elements found in the schema files along with other details such as type, location, and namespace. Select an element using the Go To Element drop-down box.


This view has two other tabs in it: Tree view and Text View, the
latter being the text equivalent of tree view.

The Type Definitions tab can be used to view the type definitions found in the schema files (see below):

The Type Definitions tab also contains other details of the schema files, such as the base type, its location, and namespace. The Go To Element feature is a drop-down and can be used for locating a type by name. This view has two other tabs: Tree View and Text View, the latter being the text equivalent of the tree view.

The Model Group Definitions tab can be used to view the model group definitions found in the schema files, along with other details such as type, location, and namespace.


The Go To Element feature is a drop-down and can be used for locating a group by name. This view has two other tabs: Tree View and Text View, the latter being the text equivalent of the tree view.

Generating Data Models from Schemas

Traditional data models can be created from a schema within Workbench.

To create a data model using a schema from Workbench
1. From the Workbench File menu, select New – Data Model.


2. The New Data Model dialog appears.

Enter a value for the Parent Folder. This will be the location
where the data model will be stored.
Enter a name for the new data model with or without the
extension of .mdl – the file will be created with the extension
.mdl.
Select the mode for the data model. This can be either Source for
parsing XML data or Target for writing out XML data.
Select the Type of data model you are creating. In this case XML
would be selected.


Select Next.
3. Check “Generate Schema from XML” check box to proceed
with generation of XSD from an existing XML file. This option
enables XML file selection box.
Select XML file to generate schema file.
Else,
Check “Generate Schema from DTD” check box to proceed
with generation of XSD from an existing DTD file. This option
enables DTD file selection box.
Select DTD file to generate schema file.
Else,
Select the schema to be used to generate the data model.
Alternatively, you can type in the name of the XML file or DTD
file or schema file. If the file is not present then Workbench will
throw an appropriate error. If the file is present and the path is
not in MSO then the path will be automatically added to the
MSO. If the file is present and the path is not linked to the
Workspace, then the path will be automatically added to the
Workspace and to the MSO. Note that only an absolute path is considered, not a relative path.


Note: The Schemas that are used for generation of std and ids
files cannot contain duplicate element names. If the schemas
contain duplicate element names, the utility uses the first set of
values while creating the ids file.

Check the “Schema based on DTD” option if the schema was generated from a DTD.

Note: “Schema based on DTD” needs to be checked, as additional (AI) rules will be placed in the model if it is created based on a DTD.

Select the root element of the schema.


Select Next.
4. If you indicated that the schema is based on a DTD, or if targetNamespace is not defined in the schema file, click No Namespace, click Next>, and go to Step 5.


Insert the target namespace from the schema file specified in Step 3.

Open the schema file using a text editor. Copy the URI of the targetNamespace specified within the 'schema' element. Click the Add... button.


Fill in the Namespace value dialog, specifying 'targetNamespace' in the Namespace field, and paste the URI value into the URI field. Click the OK button to add this value to the Namespace list. Copy and paste any remaining namespaces and URIs listed in the schema file’s 'schema' element.
Else, click the Namespaces button to get a list of namespaces of the selected schema.

Choose a namespace and click OK. The selected namespace is reflected in the wizard. You can choose multiple namespaces at once.


Note: At least one value must be specified within the Namespace list; add the targetNamespace entry before you click the Next button, or else click the No Namespace button.

Selecting the “Retain Namespace” check box retains namespaces in the generated model.
Select Next>.
5. In the next screen, select whether indentations should be used
or suppressed.


Select the number of levels for recursion.


Select whether full validation should be performed or not.
Type in the company.
Type in the version.
Type in the locale to be used or leave the default.
Select Finish and the data model based on the schema files
definition is created and opened with the Model Editor.


Source Data Model Considerations

The XML document must contain the following items:

• Reference to an XSD

• Start tag of the root element

• Ensure that the following delimiters are set in the Initialization DMI:
Initialization {
[ ]
SET_SECOND_DELIM(60)
SET_FIRST_DELIM(62)
SET_THIRD_DELIM(47)
SET_FOURTH_DELIM(34)
SET_FIFTH_DELIM(39)
SET_RELEASE(92) ;this must match the "-r" or "-o"
parser argument.
}*1 .. 1 ;;|-- end Initialization --|
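The SET_*_DELIM arguments above are ASCII code points; decoding them shows the XML markup characters being declared (a quick Python check, for reference only):

```python
# ASCII code points used by the Initialization DMI above.
delims = {
    62: ">",   # SET_FIRST_DELIM
    60: "<",   # SET_SECOND_DELIM
    47: "/",   # SET_THIRD_DELIM
    34: '"',   # SET_FOURTH_DELIM
    39: "'",   # SET_FIFTH_DELIM
    92: "\\",  # SET_RELEASE (the backslash release character)
}

for code, char in delims.items():
    assert chr(code) == char

print("all delimiter codes check out")
```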


XML or Schema Editor

Eclipse WST (Web Standard Tools) provides ready-to-use XML / XSL / XSD utilities such as rich multipage graphical and syntax-aware editors, validation and syntax checks, properties views, etc. The capabilities of the integrated editors are available in Workbench 5.5.

XSD Editor
To view/edit an XSD file:
1. Double click on the XSD file. It is displayed in the editor. It has
two tabs at the bottom of the editor: Design and Source. The
Design view is shown below

2. Double click on any of the properties under the Elements list. A graphical representation of the file is displayed.


3. Double click on any of the elements to edit.

4. Click the top left icon of the design view to go back to the previous screen.
5. Click Source tab to view the XSD source file. The editor is syntax
aware. You can edit the file by making changes to the source.

6. You can view the properties of the file in the XSD Editor’s Properties view under the General, Constraints, Documentation, and Extension tabs (accordion style) as shown below:


7. Right click on the XSD file in the Navigator pane to choose from the XSD-specific context menu. To generate an XML file, choose Generate > XML File.

8. To zoom in or out on the design, go to XSD on the Menu Bar and choose either the Zoom In or the Zoom Out option.


9. You can also open the XSD file in other editors. Right click on
the XSD file, choose Open With>XML Schema Editor (or any
other editor).

XML Editor

Note: All the options for XML files specified here are available for
XSL files also.

To view/edit an XML file:


1. Double click on the XML file in the Navigator pane. The XML Editor displays the file in the design page. It displays a hierarchical view of the various elements in the XML file. It also displays details of elements on the right side of the page.

2. Click on the Source tab to view the source code of the XML file. The tags are displayed in distinct colors. The XML editor is an XML syntax-aware source editor.

You can view the properties of the XML file in the Properties view. The XML Menu bar also provides various options.
You can also go to the Menu Bar and click on XML and choose the
required option.
You can open the XML file in other editors. Right click on the XML
file, choose Open With>XML Editor (or any other editor).


Adding Rules to Generated Data Models

After a data model is generated by the data model generator, you will have to add processing rules. These rules are added using the Workbench application. Refer to the Workbench on-line Help system for extensive information about using the application.

Rules for Pattern Facets

A pattern facet defined in the schema is not generated into the data model for validation. For a numeric field with a defined pattern, such as a social security number, the data model is generated with a format that does not permit the hyphens to be present. This will cause errors in the data model unless the format is changed.

In the case of the social security number, the defined pattern of "\d{3}-\d{2}-\d{4}" must be changed to "999@-99@-9999" in the data model.
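The mismatch is easy to see with the pattern itself (Python regular expressions use the same \d syntax as the schema facet; the sample value is hypothetical):

```python
import re

# The schema's pattern facet accepts hyphenated values; the generated
# numeric format would reject them, hence the mask change described above.
ssn_facet = re.compile(r"\d{3}-\d{2}-\d{4}")

print(bool(ssn_facet.fullmatch("123-45-6789")))  # True: hyphens required
print(bool(ssn_facet.fullmatch("123456789")))    # False: no hyphens
```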

Using Generated Data Models with the Standard Models

In cases where a generated data model is to be used with a standard model, special rules must be placed in the generated data model’s PRESENT mode rules.

The following PERFORM statements must be added to the root element’s PRESENT mode rules in the source and target data models respectively.
PERFORM("OTSrcEnd")
PERFORM("OTTrgEnd")
These PERFORM statements assign "Yes" to VAR->OTSourceSuccessful and to VAR->OTTargetSuccessful.

Examples

This section contains two examples of data models: a source data model and a target data model. Each of these examples was generated using the data model generator. All parts of the data model can be edited.


Source Data Model Example

This is a source data model that was created using the data model generator. Each data model item was created because the XSD specified it.

The data model items are associated with the appropriate item type, occurrence, size, and formatting/masking characters. Add rules to the data model to accomplish the function of the model using Workbench.

Target Data Model Considerations

The target data model invokes OTTrgInit in the Initialization DMI, which sets the delimiters. For this, OTXML.inc needs to be included in the target data model under the DECLARATIONS section.
Ensure that the following delimiters are set:
SET_SECOND_DELIM(60)
SET_FIRST_DELIM(62)
SET_THIRD_DELIM(34)
SET_RELEASE(92) ;this must match the "-r" or "-o" parser argument.


Target Data Model Example

This is a target data model that was created using the data model generator. Each data model item was created because the XSD specified it. The data model items are associated with the appropriate item type, occurrence, size, and formatting/masking characters.

Add rules to the data model to accomplish the function of the model using Workbench.

Testing and Debugging XML Data Models

To test whether a generated data model is correct, you must pass an instance of a fully defined XML document. The instance of the XML document would contain all possible elements, attributes, and so on contained in the data model.
If the XML document is inbound (that is, a source data model), run a translation to verify that the entire document has been parsed correctly by the translator. Likewise, if the XML document is outbound (that is, a target data model), then the output of a translation (for example, an instance of an XML document) may be processed by the parser and validated against the DTD.


Testing Translations

Use the otrun.exe (Windows®) or inittrans (Unix and Linux) commands to run testing and production translations. Information about invoking translations can be found in Section 12. Translating and Debugging.

XML Validation Parameters

Generated data models and the inbound and outbound examples shown later in this section do not automatically validate XML data against a DTD. To validate the XML data, parameters must be set in the User.inc file or at the command line. The User.inc file is an INCLUDE file.

The XML Samples use the -DXML_VALIDATE parameter to control validation of the XML data. The -DXML_VALIDATE parameter can be set to "Always", "WithReference", or "No".

Validating Against a DTD

DTD validation can be set globally or for a specific translation.

Note: These methods to control validation are necessary only when the OTCallParser.att map component file is used.

To set DTD validation globally

1. Open the User.inc Include file using an on-line text editor.
2. Locate the DECLARE US_XMLValidate() statement.
3. Set VAR->OTXMLValidate="<value>", where "<value>" can be:
   Always: The translation will either validate the data or report an error.
   WithReference: The translation will validate only when the DTD or XSD is referenced within the instance document.
   No: The translation will not validate.
4. Save the User.inc file.
5. Close the text editor.
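Taken together, steps 2 and 3 leave the relevant portion of User.inc looking roughly like this (a sketch; the DECLARE line already exists in your installed User.inc, and only the assignment is edited):

```
DECLARE US_XMLValidate()
; Validate only when a DTD or XSD is referenced in the instance document
VAR->OTXMLValidate="WithReference"
```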


To set DTD validation for a specific translation

Add the -DXML_VALIDATE=WithReference argument to the command line. For example, to set DTD validation:

Unix and Linux operating systems


From the command line, type:
inittrans -at OTXMLSO.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSO
-DXML_VALIDATE=WithReference -tl 1023 -I

Windows® operating system


From the Run dialog box, type:
<path>otrun.exe -at OTXMLSO.att -cs %OT_QUEUEID%
-DINPUT_FILENAME=OTXMLSO.txt
-DXML_VALIDATE=WithReference -tl 1023 -I

Disabling the Prolog

The prolog is output in the XML data by default; however, you can specify when the prolog should not be produced. The XML Samples use the -DXML_PROLOG parameter to control whether the prolog is output.

When the prolog should not be output for any translation, use the procedure to disable prolog output globally. When the prolog should not be output for a single translation, use the procedure to disable prolog output for a specific translation.

To disable prolog output globally


Use this procedure to prevent the prolog from being included in
the output for all translations.
1. Open the User.inc file using an on-line text editor.
2. Locate the DECLARE US_XMLValidate() statement.
3. Set the value of the variable to VAR->OTXMLProlog="No".
4. Save the User.inc file.
5. Close the text editor.


To disable prolog output for a specific translation


Use this procedure to prevent the prolog from being included in
the output for a specific translation session.
Add the -DXML_PROLOG=No argument at the command line.
For example,

Unix and Linux operating systems


From the command line, type:
inittrans -at OTXMLSO.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSO
-DXML_PROLOG=No -tl 1023 -I

Windows® operating system


From the Run dialog box, type:
<path>otrun.exe -at OTXMLSO.att -cs %OT_QUEUEID%
-DINPUT_FILENAME=OTXMLSO.txt
-DXML_PROLOG=No -tl 1023 -I

Other Optional Parameters

Three other optional parameters can be passed in when parsing XML data. They are defined below. To use these optional parameters, they must be set in the User.inc file or at the command line.

Canonical XML Format

Canonical format removes any white space from the XML data and also removes the prolog and DOCTYPE elements.

The XML Samples use the -DCANONICAL parameter to control outputting the data from the parser in canonical format. The -DCANONICAL parameter can be set to "Yes" or "No". The default is "No".
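The effect described above can be sketched outside the tool. This is an illustration of what canonical output looks like, not the parser's implementation:

```python
import re

def to_canonical_sketch(xml_text: str) -> str:
    """Illustrative only: drop the prolog and DOCTYPE, then remove
    whitespace between tags, as canonical format is described above."""
    xml_text = re.sub(r"<\?xml[^?]*\?>", "", xml_text)   # prolog
    xml_text = re.sub(r"<!DOCTYPE[^>]*>", "", xml_text)  # DOCTYPE
    return re.sub(r">\s+<", "><", xml_text).strip()      # inter-tag space

doc = '<?xml version="1.0"?>\n<!DOCTYPE po SYSTEM "po.dtd">\n<po>\n  <id>1</id>\n</po>'
print(to_canonical_sketch(doc))  # -> <po><id>1</id></po>
```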

Empty Elements with End Tag

Empty elements can come in the format <Empty/>. The EMPTY parameter is used to force the end tag to be present, for example, <Empty></Empty>.

The XML Samples use the -DEMPTY parameter to control outputting the data from the parser with end tags on all empty elements. The -DEMPTY parameter can be set to "Yes" or "No". The default is "No".
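The rewrite that -DEMPTY=Yes performs can be sketched as follows (an illustration of the described effect, not the parser's code):

```python
import re

def expand_empty_elements(xml_text: str) -> str:
    """Illustrative sketch: rewrite self-closing elements such as
    <Empty/> so the end tag is explicit, e.g. <Empty></Empty>."""
    return re.sub(r"<(\w+)((?:\s[^<>/]*)?)/>", r"<\1\2></\1>", xml_text)

print(expand_empty_elements("<root><Empty/><a b='1'/></root>"))
# -> <root><Empty></Empty><a b='1'></a></root>
```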


Deterministic

Using this parameter, the XML parser checks to ensure the XML data is deterministic. The XML Samples use the -DDETERMINISTIC parameter to enforce that the XML data is deterministic. The -DDETERMINISTIC parameter can be set to "Yes" or "No". The default is "No".

XML Comments

XML comments can be retained in the output XML file by specifying this option. The -DXML_COMMENTS parameter can be set to "Yes" or "No". The default is "No".


XML Troubleshooting

Parser Error Handling

When the parser is successful, a return code of zero is generated by the otxmlcanon utility. When an error is encountered, a non-zero error code is returned. The parser writes the error message to the standard output in the following format:

<PARSER_ERROR><error code>:<error message[;detailed error message]></PARSER_ERROR>

Where:

<error code>: An integer indicating the error code.

<error message[;detailed error message]>: The error message is a short generic description of the error. The detailed error message occurs when the parser has collected additional information about the error.

The error message is written as soon as the error occurs; therefore, the error message may appear at any location in the output stream. The parser exits as soon as an error is detected and the error message is written.
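Because the error can appear anywhere in the output stream, a caller script typically scans for it. A sketch of extracting the parts from the format shown above (the sample line is hypothetical; real codes and messages come from the parser):

```python
import re

# Hypothetical sample in the documented format:
line = "<PARSER_ERROR>350:Parser detected error;Error in unnamed entity</PARSER_ERROR>"

m = re.search(r"<PARSER_ERROR>(\d+):([^<]*)</PARSER_ERROR>", line)
if m:
    code = int(m.group(1))                            # <error code>
    message, _, detail = m.group(2).partition(";")    # message / detailed message
    print(code, "|", message, "|", detail)
```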


Parser Error Example Table

The line number shown in the error message identifies the approximate location of the error in the instance document or the referenced DTD or schema. The actual location of the error could be a few lines above the line number indicated.

The line numbers indicated in the table are based on hypothetical instance documents, DTDs, or schemas.

Parser Error: Parser detected error;Error in unnamed entity at line 12 char 13 of file: /D:/Trandev52/po1.xml
Description: Syntax error within the instance document.

Parser Error: Instance document does not reference a DTD or XSD schema
Description: otxmlcanon was invoked with validation, but no DTD or XSD was referenced within the instance document.

Parser Error: File error; Could not open file D:\Trandev52\po1z.xsd: No such file or directory
Description: The .xsd file D:\Trandev52\po1z.xsd referenced within the instance document cannot be located and opened.

Parser Error: File error; Error: cannot find address for host "www.w3.org" in http URL "http://www.w3.org/2001/XMLSchema_NO_SUCH_FILE.xsd"
Description: Failed in an attempt to obtain the referenced schema through a URL address.

Parser Error: Parser detected error; Error in unnamed entity at line 28 char 19 of file:/D:/Trandev52/po1.xsd
Description: Syntax error within the referenced schema.

Parser Error: Error. Line: 10. Minimum not achieved for item: "zip"
Description: Additional occurrences of the specified item were expected.


Parser Error: Error. Line: 13. Maximum exceeded for item: "zip"
Description: Exceeded the maximum number of occurrences of this item.

Parser Error: Error. Line: 21. Choice group violation. Only one of the following items found is allowed: "zip" "postal"
Description: More than one choice item for an item was encountered.

Parser Error: Error. Line: 27. SimpleType Violation in item: "shipDate" Item type is not a date
Description: Value parsed did not match its defined data type.

Parser Error: Error. Line: 35. Attribute value failed simple type validation in item: "freeform" Attribute: myAttribute="attribute" Value found is not of type "boolean"
Description: Attribute value parsed did not match its defined data type.

Parser Error: Error. Line: 35. Attribute has a valid namespace qualifier but not found in that namespace: "anyAttr"
Description: The attribute cannot be found in its designated namespace.

Parser Error: Error. Line: 36. Element found has a valid namespace qualifier but not found in that namespace: "<comment_2>This is great!</commentBob>"
Description: The element cannot be found in its designated namespace.

Parser Error: Root element: "purchaseOrder" not found in schema
Description: The root element in the instance document is not contained in its referenced schema.

Parser Error: Error. Line: 5. Undefined attribute found: "orderDate" in item "purchaseOrder"
Description: Attribute found in the instance document, but not defined in the schema.


Parser Error: Error. Line: 8. Mixed content not allowed for item: "shipTo" Text found: "USAdress mixed data 1"
Description: Mixed content found in the instance document but not defined in the referenced schema.

Parser Error: File error
Description: <input_file> does not exist or has the wrong permissions.

Parser Error: Error. Line: 23. Required attribute not present: partNumReqd in item: "item"
Description: The required attribute is not present.

Parser Error: Error. Line: 24. SimpleType Violation in item: "quantity" Maximum Exclusive facet violation. Value Found: "100"
Description: maxExclusive facet violation.

Parser Error: Error. Line: 22. Attribute value failed simple type validation in item: "item" Attribute: partNum="AA-872" Pattern facet violation. Value Found: "AA-872" (Po7.xml)
Description: pattern facet violation.

Parser Error: Error. Line: 10. SimpleType Violation in item: "state" Length facet violation. Value Found: "CA1"
Description: length facet violation.

Parser Error: Error. Line: 10. SimpleType Violation in item: "state" Minimum Length facet violation. Value Found: "C"
Description: minLength facet violation.

Parser Error: Error. Line: 10. SimpleType Violation in item: "state" Maximum Length facet violation. Value Found: "CA3"
Description: maxLength facet violation.

Parser Error: Error. Line: 10. SimpleType Violation in item: "state" Enumeration facet violation. Value Found: "CC"
Description: enumeration facet violation.


Parser Error: Error. Line: 24. SimpleType Violation in item: "quantity" Maximum Inclusive facet violation. Value Found: "101"
Description: maxInclusive facet violation.

Parser Error: Error. Line: 24. SimpleType Violation in item: "quantity" Minimum Inclusive facet violation. Value Found: "99"
Description: minInclusive facet violation.

Parser Error: Error. Line: 24. SimpleType Violation in item: "quantity" Minimum Exclusive facet violation. Value Found: "100"
Description: minExclusive facet violation.

Parser Error: Error. Line: 24. SimpleType Violation in item: "quantity" Total Digits facet violation. Value Found: "1234.56"
Description: totalDigits facet violation.

Parser Error: Error. Line: 24. SimpleType Violation in item: "quantity" Fraction Digits facet violation. Value Found: "1234.56"
Description: fractionDigits facet violation.

Parser Error: Error. Line: 13. Unexpected Item Found: <shipTo country='US'> in item: "purchaseOrder"
Description: Element found in the instance document, but not defined in the schema.

Data Model Generator Error Handling

Shown here is an example of a Session Output dialog box from an unsuccessful data model generation.


This list shows the reasons that the data model generator could fail and the resolutions for the errors.

Cause: The specified XML document does not exist in the working directory.
Resolution: Verify that the filename specified for the XML document is correct and that the file resides in the working directory.

Cause: The specified XML document does not refer to an existing DTD.
Resolution: Verify that the filename specified for the DTD is correct and that the file resides in the working directory or exists in the directory indicated in the path specified.

Cause: Root elements in the XML document and the DTD do not match.
Resolution: Verify that the root elements specified are correct and match.

The data model generator utility will send back the following return codes:

Return Code: Description
zero (0): Process was successful.
1: The DTD did not parse successfully.
2: File not found.
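A caller script can turn these codes into readable messages. The code meanings below come from the table above; the wrapper itself is a hypothetical sketch:

```python
# Return-code meanings taken from the data model generator table above
GENERATOR_RETURN_CODES = {
    0: "Process was successful",
    1: "The DTD did not parse successfully",
    2: "File not found",
}

def describe(rc: int) -> str:
    """Map a generator return code to its documented meaning."""
    return GENERATOR_RETURN_CODES.get(rc, f"Unknown return code {rc}")

print(describe(2))  # File not found
```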

XML Samples

This section provides three examples for XML processing. The first two examples are inbound examples and the third is an outbound example.

Inbound Processing Examples

There are two inbound processing examples. With the first example, the translation occurs using one environment. With the second example, the translation occurs using several environments.

Before you run the examples described below, you must set environment variables to locate the programs inittrans/otrun and to set OT_QUEUEID. This is done by running .\env\aiserver.bat (on Windows®) or env/aiserver.env (on Unix and Linux).

Inbound Example 1

In this example, translation occurs using a single environment.

To run a sample inbound DTD translation Example 1


Unix and Linux operating systems
From the command line, type:
inittrans -at OTXMLSI.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSI.txt -tl 1023 -I

Windows® operating system


From the Run dialog box, type:
<path>otrun.exe -at OTXMLSI.att
-cs %OT_QUEUEID% -DINPUT_FILENAME=OTXMLSI.txt
-tl 1023 -I

Hint: A backup file is provided since the INPUT_FILE is removed automatically at the end of a translation session. Obtain additional copies of OTXMLSI.txt from the OTXMLSI.tx1 file.


Inbound Example 2

In this example, translation occurs using multiple environments, thereby allowing the XML data to be completely parsed and validated before any translator parsing occurs.

To run a sample inbound DTD translation Example 2


Unix and Linux operating systems
From the command line, type:
inittrans -at OTXMLPre.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSI.txt
-DMESSAGE=OTXMLSI2.att -tl 1023 -I

Windows® operating system


From the Run dialog box, type:
<path>otrun.exe -at OTXMLPre.att
-cs %OT_QUEUEID% -DINPUT_FILENAME=OTXMLSI.txt
-DMESSAGE=OTXMLSI2.att -tl 1023 -I

Hint: A backup file is provided since the INPUT_FILE is removed automatically at the end of a translation session. Obtain additional copies of OTXMLSI.txt from the OTXMLSI.tx1 file.


Outbound Processing Example

Outbound Example

To run a sample outbound DTD translation


Unix and Linux operating systems
From the command line, type:
inittrans -at OTXMLSO.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSO.txt -tl 1023 -I

Windows® operating system


From the Run dialog box, type:
<path>otrun.exe -at OTXMLSO.att -cs %OT_QUEUEID%
-DINPUT_FILENAME=OTXMLSO.txt -tl 1023 -I

Hint: A backup file is provided since the INPUT_FILE is removed automatically at the end of a translation session. Obtain additional copies of OTXMLSO.txt from the OTXMLSO.tx1 file.


Testing and Debugging XML Schema Data Models

To test whether a generated data model is correct, you must process an instance of a fully defined XML document. The instance of the XML document would contain all possible elements, attributes, and so on contained in the data model.

If the XML document is inbound (that is, a source data model), run a translation to verify that the entire document has been parsed correctly by the translator. Likewise, if the XML document is outbound (that is, a target data model), then the output of a translation (for example, an instance of an XML document) may be processed by the parser and validated against the XSD.

Testing Translations

Use the otrun.exe (Windows®) or inittrans (Unix and Linux) commands to run testing and production translations. Information about invoking translations can be found in Section 12. Translating and Debugging.

XSD Validation Parameters

The -V and -v arguments are used by the parser/validator to perform XSD validations. These validations check for items beyond those that indicate the document is well-formed.

Schema files (.xsd) are checked to be well-formed XML but are not validated. They are expected to be syntactically correct. Only the instance document is validated beyond the check for being well-formed.

Setting Validation Parameters

Generated data models, standard data models, and the inbound and outbound examples shown in XML Schema Samples do not automatically validate XML data against an XSD. To validate the XML data against an XSD, parameters must be set in the User.inc file or at the command line. The User.inc file is an Include file.


The XML Schema Samples use the -DXML_VALIDATE parameter to control validation of the XML data. The -DXML_VALIDATE parameter can be set to "Always", "WithReference", or "No".

Validating Against an XSD

XSD validation can be set globally or for a specific translation. Refer to List of Items Validated for additional information.

Note: These methods to control validation are necessary only when the OTCallParser.att map component file is used.

To set XSD validation globally


1. Open the User.inc include file using an on-line text editor.
2. Locate the DECLARE US_XMLValidate() statement.
3. Set the variable VAR->OTXMLValidate="<value>" where
"<value>" can be:
Always The translation will either validate the
data or report an error.
WithReference The translation will validate only when
the DTD or XSD is referenced within the
instance document.
No The translation will not validate.

4. Save the User.inc file.
5. Close the text editor.

To set XSD validation for a specific translation


Add the -DXML_VALIDATE=WithReference argument to the
command line. For example, to set XSD validation:

Unix and Linux operating systems


From the command line, type:


inittrans -at OTXMLSOxsd.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSOxsd
-DXML_VALIDATE=WithReference -tl 1023 -I

Windows® operating system


From the Run dialog box, type:

<path>otrun.exe -at OTXMLSOxsd.att -cs %OT_QUEUEID%
-DINPUT_FILENAME=OTXMLSOxsd.txt
-DXML_VALIDATE=WithReference -tl 1023 -I

List of Items Validated

This list shows the items checked by the parser/validator during validation.

o Elements must exist in the namespace in which they are referenced.
o Attributes must exist in the namespace in which they are referenced.
o Validates use of the namespace attribute.
o The use of any of the 19 primitive and 25 derived data types is checked for compliance with their definition. The validation takes into consideration the extension and restriction using any of the 12 facets.
o Validates the 'choice' grouping.
o Validates the 'all' grouping.
o Validates the 'union' definition of elements and attributes.
o Validates the 'list' element values. Each item in the list must conform to all defined restrictions.
o Validates empty content elements.
o Validates simple content elements.
o Validates 'complexType' when defined with an attribute of mixed="true".
o Validates the use of the keywords 'INCLUDE', 'IMPORT', and 'REDEFINE' when loading schemas from multiple files.
o Validates the use of the 'substitutionGroup' attribute to allow other elements to be used in place of another element.
o Validates the use of the attributes 'any' and 'anyAttribute' and namespace.


o Allows for the use of the same label name in an instance document for an element, data type, and attribute without a conflict.
o Validates the use of 'documentation', 'annotation', and 'appinfo'.
o Validates the use of processContents="skip" to turn off validation for the specified areas of the schema.
o Validates the facets 'length', 'minLength', 'maxLength', 'totalDigits', and 'fractionDigits'.

Additional Data Model Validations

This is a list of validations performed by a data model created from the XSD data model generator when the -V argument is used when generating the data model. Rules are placed on the data model items for this validation.

o The ID Code List is verified for schema-defined enumerations.
o Boolean values of 'true', 'false', '0', or '1' only.
o 'token', through the use of ID verification.
o nonPositiveInteger
o nonNegativeInteger
o positiveInteger
o negativeInteger
o unsignedLong
o unsignedInt
o unsignedShort
o unsignedByte
o long
o int
o short
o byte
o gYearMonth
o maxInclusive
o maxExclusive
o minInclusive
o minExclusive
o enumeration


List of Items Not Implemented

Refer to Unsupported Items for XML Schema Samples in the Workbench User's Guide Appendix.

XML Schema Troubleshooting

Parser Error Handling

When the parser is successful, a return code of zero is generated. When an error is encountered, a non-zero error code is returned by the otxmlcanon utility. The parser writes the error message to the standard output in the following format:

<PARSER_ERROR><error code>:<error message[;detailed error message]></PARSER_ERROR>

Where:

<error code>: An integer indicating the error code.

<error message[;detailed error message]>: The error message is a short generic description of the error. The detailed error message occurs when the parser has collected additional information about the error.

Types of Errors

Errors can be categorized into two groups: non-structural errors and structural errors.

Non-structural errors occur when an element or attribute in the instance document does not meet the minimum occurrence value, exceeds the maximum occurrence, or fails facet verification. Non-structural errors are accumulated until a structural error is encountered or the end of the instance message occurs, at which time all accumulated errors are output.


Structural errors occur when an unknown item, or an item in the wrong position of the instance document, is detected. With structural errors, processing of the instance document is stopped and all errors (both structural and non-structural) that have accumulated are output. Once the errors are written out, processing stops with a non-zero return code.
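The accumulate-then-flush behavior described above can be sketched as follows. This illustrates the control flow only, not the parser's actual code; the finding kinds and messages are hypothetical:

```python
def validate_sketch(findings):
    """findings: iterable of (kind, message), where kind is
    'non-structural' or 'structural'. Non-structural errors are
    accumulated; the first structural error stops processing and
    flushes everything accumulated so far."""
    accumulated = []
    for kind, message in findings:
        accumulated.append(message)
        if kind == "structural":
            break  # structural error: stop processing the document
    return accumulated  # flushed at the stop, or at end of the instance message

errors = validate_sketch([
    ("non-structural", "minimum not achieved for item: zip"),
    ("structural", "unexpected item found: shipTo"),
    ("non-structural", "never reached"),
])
print(errors)
```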

Data Model Generator Error Handling

Errors are displayed in the status area of the Workbench dialog box. Any non-zero return code represents an error.

If an error occurs when generating a data model and part of the data model is written out, the output is written to <output_filename>.log. This log file can then be opened using Workbench to see how much of the data model was generated and to find out approximately where the error occurred.

This list shows the reasons that the data model generator could fail and the resolutions for the errors.

Cause: The specified XML document does not exist in the working directory.
Resolution: Verify that the filename specified for the XML document (the Import filename) is correct and that it resides in the working directory.

Cause: The specified XML document does not refer to an existing XSD.
Resolution: Verify that the filename specified for the XSD is correct and that it resides in the working directory or exists in the directory indicated in the path specified.

Cause: Root elements in the XML document and the XSD do not match.
Resolution: Verify that the root element specified is correct and matches between the instance document and the XML Schema.


The data model generator utility sends back the following return codes. Any return code that is not a zero value indicates an error.

Return Code: Description
zero (0): Process was successful.
1: The XSD did not parse successfully.
2: File not found.

Inbound Processing, Method 2 Error Codes

When the second inbound processing method is used, the following error codes may be returned. The second method uses a pre-translation environment that parses and validates the data before translation begins. Refer to Inbound Processing for additional information.

Error Number: Description

160: INPUT_FILENAME was not supplied when invoking the translation, or the referenced file does not exist.

350: The XML data is not well-formed or there was an error during validation. The original input still exists and a <sessionNo>.err file has been created. This file contains the input XML data up to the determined error and ends with a description of the error.

137: MESSAGE was not supplied when invoking the translation, the referenced map component file does not exist, or there is a syntax error in the referenced map component file.

Troubleshooting Complex XML Schema Documents

XML messages (business documents written in XML Schema syntax) provided by the customer have become increasingly large and complex.

The following results occur frequently and are typically the root cause of problems with some XML Schema Samples. Repeated use of otxsdgen (the XSD to data model generator) and otxmlcanon (the XML parser) can detect errors and direct you to a solution. The recommendation is to use the two programs from the command line for maximum effectiveness.

Result: No output is generated.
Action: Run otxsdgen (options) from the command line.
Comment: An XML document must accompany the XSD. If you create a simple XML document of a single root element manually, this file could be susceptible to many minor problems.
o If the root element has not been correctly specified, a "Could not find root element" error is generated by the model generator.
o If the XSD name is misspelled in the XML document, a "File Error" is generated.
o If the output filename is not specified, a "Character encoding…bad output" error is generated.

Result: The XML input file does not translate.
Action: Run otxmlcanon -V <file> from the command line.
Comment: Verify that the XML document is valid. The customer may have provided an invalid XML input file. Modify the data to conform to the XSD, or ask the customer to provide a valid XML input file.


XML Schema Samples

This section provides three examples for XML Schema processing. The first two examples are inbound examples and the third is an outbound example.

Inbound Processing Examples

There are two inbound processing examples. With the first example, the translation occurs using one environment. With the second example, the translation occurs using several environments.

Before you run the examples described below, you must set environment variables to locate the programs inittrans/otrun and to set OT_QUEUEID. This is done by running .\env\aiserver.bat (on Windows®) or env/aiserver.env (on Unix and Linux).

Inbound Example 1

In this example, translation occurs with the use of a single environment.

To run a sample inbound XSD translation Example 1

Unix and Linux operating systems
From the command line, type:
inittrans -at OTXMLSIxsd.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSIxsd.txt -tl 1023 -I

Windows® operating system
From the Run dialog box, type:
<path>otrun.exe -at OTXMLSIxsd.att -cs %OT_QUEUEID%
-DINPUT_FILENAME=OTXMLSIxsd.txt -tl 1023 -I

Hint: A backup file is provided since the INPUT_FILE is removed automatically at the end of a translation session. Obtain additional copies of OTXMLSIxsd.txt from the OTXMLSIxsd.tx1 file.

Inbound Example 2


In this example, translation occurs using multiple environments, thereby allowing the XML data to be completely parsed and validated before any translator parsing occurs.

To run a sample inbound XSD translation Example 2


Unix and Linux operating systems
From the command line, type:
inittrans -at OTXMLSIPre.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSIxsd.txt
-DMESSAGE=OTXMLSI2xsd.att -tl 1023 -I

Windows® operating system


From the Run dialog box, type:
<path>otrun.exe -at OTXMLSIPre.att -cs %OT_QUEUEID%
-DINPUT_FILENAME=OTXMLSIxsd.txt
-DMESSAGE=OTXMLSI2xsd.att -tl 1023 -I

Hint: A backup file is provided since the INPUT_FILE is removed automatically at the end of a translation session. Obtain additional copies of OTXMLSIxsd.txt from the OTXMLSIxsd.tx1 file.


Outbound Processing Example

Outbound Example

To run a sample outbound XSD translation


Unix and Linux operating systems
From the command line, type:
inittrans -at OTXMLSOxsd.att -cs $OT_QUEUEID
-DINPUT_FILENAME=OTXMLSOxsd -tl 1023 -I

Windows® operating system


From the Run dialog box, type:
<path>otrun.exe -at OTXMLSOxsd.att -cs %OT_QUEUEID%
-DINPUT_FILENAME=OTXMLSOxsd.txt -tl 1023 -I

Hint: A backup file is provided since the INPUT_FILE is removed automatically at the end of a translation session. Obtain additional copies of OTXMLSOxsd.txt from the OTXMLSOxsd.tx1 file.


Style Sheets (XSLT)

A style sheet is a way to transform data out of and into XML formats. A style sheet can be used in place of an AI data model when dealing with XML data. Application Integrator™ has developed XSL (style sheet) functions that can be used for mapping data within a style sheet. These functions allow values to be assigned to or referenced from AI variables (VAR->s, ARRAY->s), access the Profile Database, execute data model rules within a style sheet, and access AI environment variables.

Style Sheets Overview

Style sheets define the transformation (mapping) rules for parsing or constructing XML data. They work together with one or more DTDs or XSDs, which define the overall document's structure and constraints. A style sheet is written in the Extensible Stylesheet Language: Transformations (XSLT) and is executed by a style sheet tool, such as Xalan, which is used in AI.

For inbound XML transformation, a parser, such as Xerces in AI, is used to verify that the XML data is well-formed, optionally validate it against its referenced DTDs/XSDs, and then populate a Document Object Model (DOM) with the XML data. A DOM is a tree-like structure in memory containing the XML document's values. Contained within the style sheet is XML Path Language (XPath), which references nodes (elements and attributes) of the DOM data tree. Once these values are referenced, the values can then be mapped using both standard XSLT functions and AI extended style sheet functions. These functions allow for data manipulation, testing, formatting, accessing the Profile database, assignment and reference of AI variables and environment variables, and the execution of AI data model rules.

Outbound style sheets can be used to output XML, HTML, and many other text-based formats. The outbound style sheet contains not only the rules to output the mapped values, but also literals, which are the XML start and end tags. Standard and AI extended functions are used to access and manipulate the values being output. If the format of the data being output is XML, Xerces is automatically invoked to ensure it is well-formed and optionally validate it against its referenced DTDs/XSDs.
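As a minimal illustration of the template-based mapping described above, a generic identity-style style sheet that copies every node through to the output looks like this (standard XSLT, not the AI extended functions):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match every attribute and node and copy it to the output -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```

Real AI style sheets replace or extend templates like this one with mapping rules that call the standard XSLT and AI extended functions.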


Creating Style Sheets

Style sheets are written and maintained in AI's Workbench product. The full structure of the document is shown in what is called a "Schema Tree", which permits mapping of all elements and attributes, not just the ones contained in the style sheet. It is from this "Schema Tree" that the mapping Drag-N-Drop process occurs, which creates the mapping rules in memory. Upon saving, only the mapped nodes of the "Schema Tree" are written out, with their rules, into a newly generated style sheet. During processing, the AI translator recognizes when a data model or style sheet is used and invokes the proper internal processor.

To create a Style sheet data model from Workbench


1. From the Workbench File menu, select New > XSL Style sheet.


2. The New XSL Style sheet dialog appears.

3. Enter a value for the Parent Folder. This will be the location where the style sheet is stored. Style sheets should be stored together with your other maps or data models, in your "<OT_DIR>\Models\MyModels" or "<OT_DIR>\Models" sub-directories.
4. Enter a name for the new Style sheet, with or without the .xsl extension; the file is created with the extension .xsl.
5. Select the mode for the Style sheet. This is either Source, for parsing XML data, or Target, for writing out XML data.
6. Select Next.
7. The next screen, prompting for the Schema File and Root Element, is displayed.


8. Check the "Generate Schema from XML" check box to generate an XSD from an existing XML file. This option enables the XML file selection box. Select the XML file from which to generate the schema file.
   Otherwise, check the "Generate Schema from DTD" check box to generate an XSD from an existing DTD file. This option enables the DTD file selection box. Select the DTD file from which to generate the schema file.
   Otherwise, select the schema to be used to generate the style sheet.


Alternatively, you can type in the name of the XML file or schema file. If the file is not present, Workbench displays an appropriate error. If the file is present and its path is not in the MSO, the path is automatically added to the MSO. If the file is present and its path is not linked to the Workspace, the path is automatically added to both the Workspace and the MSO. Note that only absolute paths are considered, not relative paths.
9. Check the "Schema based on DTD" option if the schema was generated from a DTD.

Note: "Schema based on DTD" must be checked because additional (AI) rules are added to the model when it is created from a DTD.

10. Select the Root Element within the schema to be used.

11. Select Finish.


If you have checked Enable dynamic loading of XSDs on the Advanced Settings Preference page, then when you open an XSL model it loads the XSD file dynamically. Click "Click to Expand" to see the other elements. The tree loads dynamically and displays the rules, if any, when expanded. This is done on an as-needed basis.

An XSL file has "Environment" as the topmost element, at the same level as the root node. XSL functions such as template, call-template, include, param, and with-param need to be dragged and dropped into this DMI for the rules to be placed appropriately within the stylesheet node.

Click the Properties tab to see the properties of the XSL file. The fields are non-editable. (See below)


Click the Model Text tab. This non-editable page shows all the modifications and rules created on the Overview page. (See below)


Handling <any> Element and <substitutionGroup>

Workbench helps users handle the <any> element and <substitutionGroup> during development of their maps. Substitution and ANY elements are displayed in bold in the schema tree to differentiate them from other elements.

Right-click a Substitution element to replace it with another element. A dialog box appears. Click the Possible Substitutable elements button to view the substitutable elements directly available to the schema.


Click the Browse button to select the schema file that has the substitutable elements for the selected element. The substitutable elements are listed in the dialog, from which the user selects the appropriate element. This element is then inserted in the file.

The Substitution element is retained after the addition. This allows the user to replace the Substitution element again with other elements, if required.

For an element named ANY (in bold), right-click and select Replace ANY Element.


A dialog box appears. Click the Browse button to select a schema file. The dialog lists the elements that could substitute for the ANY element. The ANY element is retained after the addition, allowing users to replace it again with other elements, if required.

Style sheet Functions

Style sheet processing comes with built-in functions for data manipulation, testing, formatting, and so on. A few of these functions are: boolean, concat, count, format-number, position, round, string-length, substring, and sum. AI adds extended functions for style sheet processing, implemented so that a style sheet can perform the same translation/processing logic used in data models. They provide the ability to access AI variables (VAR->s, ARRAY->s), the Profile Database, and environment variables, along with the ability to execute data model rules in performs.

For a list of these extended style sheet functions and new environment variables, see Appendix A. Application Integrator Model Functions in Workbench User's Guide-Appendix.

These extended style sheet functions may appear as part of Map Builder rules or as part of Custom Rules.
a. Map Builder Rules
These rules are added by Workbench when you perform a drag and drop between source and target (traditional model/XSL).
For example:
<!-- +MapBuilder(default, /InvoiceList/Document/InvoiceNumber,
     PhoneNumber, ARRAY->InvoiceNumber_PhoneNumber,
     /InvoiceList/Document/InvoiceNumber) -->

<xsl:value-of select="otxsl:array-put('InvoiceNumber_PhoneNumber',
     /InvoiceList/Document/InvoiceNumber)" />

<!-- -MapBuilder(default, /InvoiceList/Document/InvoiceNumber,
     PhoneNumber, ARRAY->InvoiceNumber_PhoneNumber,
     /InvoiceList/Document/InvoiceNumber) -->
b. Custom Rules


When you want to add a specific rule of your choice to any item (node), the rule should be enclosed within a Custom tag. (The Custom tag, an XSL comment, is a marker that lets Workbench identify rules added by the user.) This is necessary to associate the custom (user-defined) rule with the corresponding item when you reopen a previously mapped XSL.

For example:
<!-- +Custom(/InvoiceList/Document/InvoiceNumber,InvoiceNumber) -->
<xsl:value-of select="otxsl:var-put('temp',
     /InvoiceList/Document/InvoiceNumber)" />
<!-- -Custom(/InvoiceList/Document/InvoiceNumber,InvoiceNumber) -->

The syntax of the Custom tag can be found in the Built-ins view under the tabs "All" and "XSL Functions," with the function name Custom_XSL_Rule.

Note: Any rule that is not enclosed within the Custom tag will not be saved into the XSL by Workbench.

XSL Samples

Input is ExampleXML.xml. Output is ExampleXML.out. Run using ExampleXML.bat on Windows® and ExampleXML.sh on Unix and Linux.


On parsing ExampleXML.att, the translator finds the suffix of S_MODEL to be ".xsl" (style sheet). It therefore invokes Xerces to read the input, which creates and populates a DOM tree structure with the data values in memory. Within ExampleXML.xsl are XPath expressions that are used to reference the values from the DOM tree. These references sit among the literals within the style sheet. Both are written to the output defined in ExampleXML.att: XSL_OUTPUT_FILE=ExampleXML.out.

The XPath-referenced values can just as easily be assigned to AI variables (VAR->, ARRAY->), which are then referenced on the target side within another style sheet or traditional data model.

Xpath Source Models

An XPath data model is a new type of data model available for inbound XML processing. It is written in the same data model language as other data models, which are now referred to as traditional data models. The XPath data model differs from a traditional data model in that it relies on a comment-referenced DTD/XSD for validating the instance XML document, and on the comment-referenced XSD for performing the Workbench drag-and-drop mapping. The XPath data model contains only the DMIs representing those nodes of the XML document to be mapped; in other words, this data model contains the mappings and not the full structure of the document. The structure is defined in the DTD/XSD. Therefore, the size of an XPath data model will typically be 70-90% smaller than a traditional data model used for XML processing. Xpath models are only available for source processing, not target processing.

General Rules

This functionality deals with XML data only; the normal EDI file formats do not work with it. The process that AI uses to parse the data is:

Step 1 - Determine whether the model is to parse the data for XPath reference.
Step 2 - The file is given internally to Xerces (the XML parser) to validate that the data is well-formed.


Step 3 - Build a DOM tree in memory with the data from the entire document.

The AI data model reads not from the input stream but from the DOM tree. These are some general assumptions:
1. The Xpath functionality within data models only applies to source (input) models, not target processing.
2. Data validation is done using the Xerces XML parser, not within the AI model. This includes size/occurrence/ID lookups, and so on.
3. The DMI access type is XMLRecord, which is used for all types of data, including Date/Time, Number, and Alpha characters. The schema or DTD validates the data element format.
4. COUNTER in the access model functionality does not apply.
5. There are two different kinds of rules that exist for Xpath models.
a. Map Builder Rules
These rules are added by Workbench when you perform a drag and drop between source (Xpath model) and target (traditional model/XSL model).
For example:
[]
;; +MapBuilder(default, /note/heading, LastName, ARRAY->heading_LastName)

ARRAY->heading_LastName = heading

;; -MapBuilder(default, /note/heading, LastName, ARRAY->heading_LastName)
b. Custom Rules
When you want to add a specific rule of your choice, it should be done using a Custom tag.
For example:
[]
;; +Custom(/note/heading,heading)
VAR->temp = heading

The syntax of the Custom tag can be found in the Built-ins view under the tab "default," with the function name Custom_Xpath_Rule.

6. You cannot save the Xpath model in Workbench 4.1 or earlier.

Data Presence

Element:
An element is said to be present if the element tags are present. The resultant value returns blank. This includes <Element></Element> and <Element/>. If the tags are missing, then the element is said to be missing and ABSENT mode is executed. The resultant value returns error 139 (No Value).

When an element is missing, error 141 is returned to the ABSENT mode to execute the code. This applies to both elements with children and elements without children. If the error code is still 141, occurrence validation is checked. If the minimum is met and the error is either 0 or 141, the process flow goes to the next element. If, within this group, no other data was read (Match value or previous element), error 171 is returned back to the parent, where occurrence validation is checked on the parent. If the error still exists, the ERROR mode code executes.

OnError executes just as in normal AI mapping. When an error occurs (Missing Required Element), this perform is executed.

There is no such thing as RECOVERY in XPath reference. You MUST NOT set the RECOVERY switch; you will get error 200 on any elements that have an error. The following code is prohibited within an Xpath-enabled data model:

VAR->OTPriorEvar = SET_EVAR("RECOVERY", "YES")
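As an illustrative sketch (Python's standard library, not the AI translator), the presence rules above can be demonstrated: an element whose tags exist is present with a blank value, while an element whose tags are missing is absent. This sketch models only the present/absent distinction, not AI's error codes.

```python
import xml.etree.ElementTree as ET

# Both <Element1/> and <Element2></Element2> are "present" with a
# blank value; Element3 has no tags at all, so it is "absent"
# (the case that drives ABSENT mode in AI).
doc = ET.fromstring("<Group><Element1/><Element2></Element2></Group>")

e1 = doc.find("Element1")
e2 = doc.find("Element2")
e3 = doc.find("Element3")

print(e1 is not None, e2 is not None)  # True True  (present, blank value)
print(e3 is None)                      # True       (absent)
```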


Tag:
When a group has missing children, error 171 is returned to the parent ABSENT mode. Occurrence validation is checked and, if not met, ERROR mode is executed.

Code Sample 1:
Group { XMLRecord "Group"
Element1 { XMLRecord "Element1" }*1 .. 1
Element2 { XMLRecord "Element2" }*1 .. 1
}*0 .. 1

For this code, if the data contains <Group/> or <Group></Group>, the process flow is: the Group tag is found, go to Element1 ABSENT, then Element1 ERROR mode, then return back to Group ERROR mode with error 138.

For the same code, if the data doesn't contain the <Group> tag, the process flow is: error 141 is returned to the Group ABSENT mode (Element1 is never reached), occurrence validation is checked, and since Group is not required, the flow moves to the next element and clears the error.

Code Sample 2:
Group { XMLRecord "Group"
Element1 { XMLRecord "Element1" }*0 .. 1
Element2 { XMLRecord "Element2" }*0 .. 1
}*1 .. 1

For sample #2, if the data contains <Group/>, the process flow is: the Group tag is found, go to Element1 ABSENT, occurrence validation is OK, go to Element2 ABSENT, occurrence validation is OK, return 171 back to the parent ABSENT, check occurrence validation, go to the parent ERROR mode (minimum is not met).


For the same code, if the data doesn't contain the <Group> tag, the process flow is: error 141 is returned to the Group ABSENT mode, occurrence validation is checked, and since Group is required, the flow goes to Group ERROR mode with error 141.

Code Sample 3:
Group { XMLRecord "Group"
Element1 { XMLRecord "Element1" }*1 .. 1
Element2 { XMLRecord "Element2" }*1 .. 1
}*0 .. 1

For sample #3, if the data contains

<Group>
<Element1>DATA</Element1>
</Group>

the process flow is: the Group tag is found, go to Element1 PRESENT, go to Element2 ABSENT with error code 141. Since occurrence validation is not met, go to Element2 ERROR mode with error 141, and then return back to the parent ERROR mode with error 138.

Process Flow

While the AI map is executed, the data is read from the DOM tree in memory and not from the input file. Therefore, the file position functions GET_FILEPOS and SET_FILEPOS will not give you the position where an element starts. GET_FILEPOS returns zero, which is the beginning of the file. SET_FILEPOS can set the input position but does not affect the process flow or the DOM tree.

The process flow can point to the DOM tree using either an absolute reference or a relative reference. Absolute reference means that only the element defines where the data comes from. These references can't be grouped together; therefore, only the first instance of the element is returned. If there are parent groupings, the process flow does not execute these parent groups (the second FirstLevel). When the XPath expression is defined, a leading "/" tells the translator to use an absolute reference.


This is how the data looks:

<Header>
<FirstLevel>
<Data>Value1</Data>
<Element4>Data4</Element4>
</FirstLevel>
<FirstLevel>
<Data>Value2</Data>
<Element4>SecondSetData4</Element4>
</FirstLevel>
</Header>

Example of Absolute Reference:
AbsFirstLevel {
AbsDataElement { XMLRecord "/Header/FirstLevel/Data"
}*1 .. 10 ;; |-- end AbsDataElement --|
}*1 .. 10 ;;|-- end AbsFirstLevel --|

The absolute reference will read the first group's Data value, Value1. It will never read the Value2 Data element, even though there is a group of 1 .. 10.

Relative reference builds a parent-child relationship. As you travel down the tree, the XPath is built up by multiple grouping elements and allows for parent grouping.

Example of Relative Reference:
DocumentLoop { XMLRecord "/Header"
FirstLevel { XMLRecord "FirstLevel"
DataElement { XMLRecord "Data" }*1 .. 10 ;; |-- end DataElement --|
Element4 { XMLRecord "/Header/FirstLevel/Element4" }*0 .. 1
}*1 .. 10 ;;|-- end FirstLevel --|


The relative reference allows the reading of both FirstLevel groups; therefore, Value1 and Value2 are both read by the model.

As you move down the structure in memory, the XPath tree is appended. If there is a leading / in the XPath expression, the tree structure is not used and the value is taken from the root. In the example above, even though the XMLRecord has started the tree with /Header/FirstLevel, Element4 uses the XPath expression /Header/FirstLevel/Element4 and NOT /Header/FirstLevel/Header/FirstLevel/Element4. The data that is found is "Data4". It would never be able to find "SecondSetData4" because it always takes the first reference.
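The two reference styles can be illustrated with Python's standard library (a sketch, not the AI translator) against the <Header> data shown above: a single rooted lookup returns only the first match, while iterating the parent groups visits both FirstLevel occurrences.

```python
import xml.etree.ElementTree as ET

xml_data = """
<Header>
  <FirstLevel>
    <Data>Value1</Data>
    <Element4>Data4</Element4>
  </FirstLevel>
  <FirstLevel>
    <Data>Value2</Data>
    <Element4>SecondSetData4</Element4>
  </FirstLevel>
</Header>
"""
root = ET.fromstring(xml_data)

# Absolute-style reference: one lookup from the root; only the first
# matching node is returned, so Value2 is never seen.
absolute = root.find("./FirstLevel/Data")
print(absolute.text)  # Value1

# Relative-style reference: iterate the parent grouping, then take the
# child path relative to each parent; both groups are read.
relative = [fl.find("Data").text for fl in root.findall("FirstLevel")]
print(relative)  # ['Value1', 'Value2']
```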

Creating Xpath Source Models

While the AI map is executed, the data is read from the DOM tree in memory and not from the input file.

To create an Xpath source data model from Workbench

1. From the Workbench File menu, select New > XPATH Data Model.


2. The New XPATH Data Model dialog appears.

3. Enter a value for the Parent Folder. This is the location where the data model will be stored.
4. Enter a name for the new Xpath data model.
5. Select Next.


6. Check the "Generate Schema from XML" check box to generate an XSD from an existing XML file. This option enables the XML file selection box. Select the XML file from which to generate the schema file.
   Otherwise, check the "Generate Schema from DTD" check box to generate an XSD from an existing DTD file. This option enables the DTD file selection box. Select the DTD file from which to generate the schema file.
   Otherwise, define the schema to be used to generate the data model.


Alternatively, you can type in the name of the XML file or schema file. If the file is not present, Workbench displays an appropriate error. If the file is present and its path is not in the MSO, the path is automatically added to the MSO. If the file is present and its path is not linked to the Workspace, the path is automatically added to both the Workspace and the MSO. Note that only absolute paths are considered, not relative paths.
7. Check the "Schema based on DTD" option if the schema was generated from a DTD.

Note: "Schema based on DTD" must be checked because additional (AI) rules are added to the model when it is created from a DTD.

8. Select the root element of the schema.

9. Select Finish.


Xpath Samples

Input is OTXPath.txt. Output is OTXPath.out. Run using OTXPath.bat on Windows® and OTXPath.sh on Unix and Linux.

The translator parses S_MODEL=OTXPathS.mdl, defined in OTXPath.att. It finds that all DMIs use XMLRecord and therefore knows that it has to invoke Xerces to read the input. This creates and populates a DOM tree structure with the data values in memory. Within OTXPathS.mdl are XPath expressions that reference the values from the DOM tree and assign values to the AI variables VAR->s and ARRAY->s. Notice within OTXPathS.mdl that DMIs are created only to reference values to be mapped, versus having to define the whole XML document's structure (as is done in traditional data models for processing XML data).


Section 11. The Data Modeling Process

This section provides a detailed walk-through of the steps required to do data modeling. It is based on the Workbench features and interface described in Sections 1 to 7. This section also provides Application Integrator™ recommendations for file naming and other data modeling conventions.


List of Steps to Data Modeling

The steps to data modeling, as described in this section, are:

Step 1: Obtain the Translation Definition Requirements
Step 2: Analyze the Definition Requirements
Step 3: Obtain the test input file(s)
Step 4: Lay out the environment flow
Step 5: Complete the environment definition
Step 6: Create source and target data model declarations
Step 7: Create a map component file for each environment
Step 8: List source to target mapping variables
Step 9: Create data model rules
Step 10: Enter the Profile Database values
Step 11: Run test translations and debug
Step 12: Make backup files
Step 13: Migrate the data


Step 1: Obtain the Translation Definition Requirements

Obtain all available documentation that explains the syntax, structure, and mapping rules that apply to the translation you are going to model. The syntax defines the characteristics of the components, such as the character sets, fixed or delimited data, identification, and tag definitions. The structure defines the relationships between the components and the occurrence constraints. The mapping rules define the semantic meaning of the components to accurately associate the source to the target.

If documentation is unavailable, find the person in your organization or at your trading partner's organization who can provide an understanding of the content and structure of the data for electronic commerce. From this person, obtain the syntax, structure, and data mapping requirements.

If neither documentation nor a contact person is available to relay the data and translation requirements, you must obtain the information by examining the data files. This method, of course, makes translation definition a process of trial and error. The more complicated the translation requirement, the less assured you are of an accurate modeling definition.

Step 1 should generate:

An accumulation of source data documentation - syntax, structure, and semantic definition of the data.
An accumulation of target data documentation - syntax, structure, and semantic definition of the data.
Contact(s) for source/target questions.

Step 2: Analyze the Definition Requirements

Using the resources available to you (documentation, contacts, and/or examination of the input data), analyze the syntax, structure, and mapping requirements of the source and target until you have a complete understanding of each. Review the source syntax and structure first, the target syntax and structure next, and then the mapping rules between the source and target.

Syntax

For a complete understanding of syntax, you must first understand each type of item (field/element, record/segment) contained in the data. Common types of items are: Alphanumeric, Alpha, Text, Implied Decimal Numeric, Explicit Decimal Numeric, Date, Time, Records, Segments, and so on. To prepare for your translation, make a list of each type of item that is used. A different item should appear on the list whenever the character set and/or the pre- and post-conditions differ among them.

Character Set

When an item has a unique character set that differs from other items, a different type of item is used. For example, five different types of items would be used for the following:

Numeric - allows 0-9, -, +, .
Date - allows 0-9 with valid month and day values
Text - allows all printable characters
Alpha - allows the character set range of A-Z
Segment - contains a tag at the beginning and a delimiter at the end
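The item-type distinctions above can be sketched as simple validators; the character classes below mirror the examples in the list and are illustrative only, not AI's actual access-model definitions.

```python
import re

# Illustrative character-set validators for three of the item types
# listed above; AI's access models define these formally.
validators = {
    "Numeric": re.compile(r"^[0-9+.\-]+$"),   # 0-9, -, +, .
    "Alpha":   re.compile(r"^[A-Za-z ]+$"),   # A-Z (plus lowercase and space)
    "Text":    re.compile(r"^[ -~]*$"),       # all printable ASCII
}

print(bool(validators["Numeric"].match("-12.50")))  # True
print(bool(validators["Numeric"].match("12A")))     # False
print(bool(validators["Alpha"].match("PO BOX")))    # True
```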

Pre- and Post-Conditions

Delimited data typically allows for the use of several delimiters. These delimiters can be used either as the pre-conditions or the post-conditions of items. You need to understand the delimiters used in your input file to know when another type of item must be defined.

For example, consider fixed-length data within variable-length records. The data fields have no pre- or post-condition. The record, however, has a post-condition of a delimiter (possibly a line feed, or carriage return/line feed).

Another example is the UN/EDIFACT standard, which specifies delimited data using three different delimiter characters. The rules specifying which delimiters can precede or follow which item types are very specific in this standard's syntax.
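The UN/EDIFACT example can be made concrete: in that standard's default syntax, an apostrophe terminates a segment, + separates data elements, and : separates components within an element. The sketch below (plain Python, not AI's parser) shows how each delimiter acts as a post-condition for a different item type; the NAD segment content is invented for illustration.

```python
# A UN/EDIFACT-style segment and its three delimiters:
#   '  terminates the segment, + separates data elements,
#   :  separates components within an element.
segment = "NAD+BY+5412345000176::9'"

body = segment.rstrip("'")            # segment post-condition
elements = body.split("+")            # element post-condition
components = elements[2].split(":")   # component post-condition

print(elements[0])    # NAD (the segment tag)
print(components)     # ['5412345000176', '', '9']
```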


By defining many items to accommodate specific character sets and pre- or post-conditions, you can model a more accurate translation definition. Using only a few items offers less of a guarantee that the proper item has been recognized (source) or constructed (target) during translation. Also, imprecise item type definitions may cause invalid item recognition, causing the translation to fail on a later item, or causing the wrong data to be mapped.

Structure

For a complete understanding of structure, you must become familiar with how items are assembled in the message. Three attributes are used to define structure: sequence, occurrence, and relationship.

Sequence

Sequence is the order in which items can appear. It can be rigid or random. A rigid example is when a standard requires records to appear in a certain order (record type A cannot appear after record type B). A random example is when records can appear in any order (record type A can appear before or after record type B).

Occurrence

Occurrence is the number of times an item can repeat in succession. The minimum and maximum occurrence of an item specify the number of times the item must (minimum) and can (maximum) repeat. A zero minimum occurrence indicates that the item is optional. A minimum occurrence of one or greater indicates that the item is mandatory for the specified number of occurrences.

Relationship

Relationship defines an item's association with other items, and is represented with three terms: parent, child, and sibling. Parent represents a higher-level relationship to a child item. Child represents a lower-level relationship to a parent item. Sibling represents a same-level relationship to another item.

The following table lists examples for the use of these terms.

Diagram    Description
Record_A   Record_A - is the parent of Field_A1 and Field_A2
Field_A1   Field_A1 - is the child of Record_A
Field_A2   Field_A2 - is the sibling of Field_A1


Record_B   Record_B - is the sibling of Record_A
Field_B1   Field_B1 - is the child of Record_B
Field_B2   Field_B2 - is the sibling of Field_B1

Mapping

For a complete understanding of mapping, you must understand what is required to semantically identify and manipulate the data. The specific meaning of a data model item is usually determined by using one or more of the following:

The item's location in the structure
The value of another item that qualifies its meaning
Its occurrence (the fourth instance has this specific meaning, and so on)

Once the semantic meaning of the item's value is known, it can be properly mapped.
Sometimes data has to be manipulated or changed from how it appears in the source to how it is output in the target. Conversion between field sizes and types of items occurs automatically within Application Integrator™. Fields are either padded or truncated to adjust the size. For item type differences, the field is converted; for example, a source alphanumeric type might be converted to a target implied decimal type.

Other types of manipulation have to be performed manually through the use of rules (mapping rules). Functions are provided that perform the following:

Function           Description
Case convert       Change to all uppercase or lowercase letters
String manipulate  Trim, concatenate, replace characters, substring
Code verification  Verify the value is contained within a list of acceptable codes
Cross-reference    Replace a value with another value
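In plain Python, the four categories in the table can be sketched as follows; the function names, code list, and cross-reference table are invented for illustration and are not AI built-ins.

```python
# Case convert: change to all uppercase (or lowercase) letters.
def case_convert(value):
    return value.upper()

# String manipulate: trim, substring, and concatenate.
def string_manipulate(value):
    return "PO-" + value.strip()[:6]

# Code verification: verify the value is within a list of acceptable codes.
UNIT_CODES = {"EA", "CS", "LB"}
def verify_code(value):
    return value in UNIT_CODES

# Cross-reference: replace a value with another value.
XREF = {"EA": "EACH"}
def cross_reference(value):
    return XREF.get(value, value)

print(case_convert("abc"))             # ABC
print(string_manipulate("  1234567 ")) # PO-123456
print(verify_code("EA"))               # True
print(cross_reference("EA"))           # EACH
```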


This manipulation allows the data to be properly prepared for output in the required format.

Make sure that you understand the structure for both the source and target sides. Inaccurate occurrence constraints between the source and target will cause errors. The areas to watch out for are:

Target minimum is greater than the source minimum occurrence.
Structure compliance can be met on the source side with an error occurring on the target. You can model around this error using "absent" rules, which default a value. For example, a default of a literal or database substitution would allow the minimum target occurrence to be met.

Target maximum is less than the source maximum occurrence.
Structure compliance can be met on the source side with an error occurring on the target. You can model around this error using source rules, which limit the number of occurrences mapped to the Array variables whose values will be assigned to the target. You can also model around this error using target rules, which reference the variables (pull values off the variable list) but do not assign them to the data model items for those occurrences greater than the maximum.

Step 2 should generate:

A list containing the types of items needed for the source.
A list containing the types of items needed for the target.

The following are example lists of types of items for source and target.

Source
Type: Fixed-length fields in delimited records

Item           Character Set                      Pre-Condition   Post-Condition
Alphanumeric   Any character between ' ' and '~'  None            None
Alpha          A-Z, a-z, space                    None            None
Numeric        Special Numeric Function           None            None
Date           Special Date Function              None            None
Time           Special Time Function              None            None
Record         Tag: Alpha                         None            Line feed character

Target
Type: Variable-length fields in delimited records

Item           Character Set                      Pre-Condition    Post-Condition
Alphanumeric   Any character between ' ' and '~'  elem-delimiter   elem-delimiter or sgmt delimiter
Alpha          A-Z, a-z, space                    elem-delimiter   elem-delimiter or sgmt delimiter
Numeric        Special Numeric Function           elem-delimiter   elem-delimiter or sgmt delimiter
Date           Special Date Function              elem-delimiter   elem-delimiter or sgmt delimiter
Time           Special Time Function              elem-delimiter   elem-delimiter or sgmt delimiter
Record         Tag: Alphanumeric                  sgmt delimiter   sgmt delimiter

Step 3: Obtain the Test Input File(s)

When obtaining an input file for testing the data models, the volume or size of the input file is not as important as having an input file that contains all acceptable variations of the input structure. This includes not just expected variations, but all possible variations as defined in the structure definition. The goal is to be able to test all possible structure and content combinations to ensure that the translation definition will not fail once placed into production mode.

If you are unable to obtain a fair representation of input data, you will have to use a text editor to create the input file. You either have to take an existing file and add alterations to the structure and content, or create the file from scratch. You must complete Step 2, Analyze the Definition Requirements, before this file can be created or modified.

Step 3 should generate:

Test input file(s) containing all possible data variations.

Step 4: Lay Out the Environment Flow

The layout of the environment flow is a pictorial representation of the various elements that need to be brought together to configure the translator to process in a certain way (for example, it shows the input files, output files, and other components) and the order in which they are used. Each environment provides the ability to alter the configuration of the translator and allows for the modular creation of data models. Refer to Section 4. Creating Map Component Files (Environments) for a further discussion and illustrations of environments.

Changing environments during translation can affect the following configuration components:

Access models
Changing the access models allows you to add, change, or remove item type definitions. This includes adding, changing, or removing access delimiter characters and changing the use of access model COUNTERs.

Input and output files
Changing the input file allows you to bring different data into the translation. By changing the output file, the output data can be filtered to different files.

Profile Database key prefixes
Changing the database key prefixes provides different views within the Profile Database. Different views may be required at various points in the translation.

Find match limit


Changing the scan forward limit (FINDMATCH_LIMIT) in the generic model OTNxtStd.mdl allows you to reduce or increase the scope of searching for a specific character sequence. Refer to the section on #FINDMATCH in Appendix A. Application Integrator Model Functions in Workbench User’s Guide-Appendix for more details.
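The idea of a scan-forward limit can be illustrated in Python. This is a conceptual sketch only, not the #FINDMATCH implementation: the search for the character sequence is confined to the first `limit` characters of the buffer, and the sample buffer is invented for the example.

```python
# Conceptual sketch of a scan-forward limit (not the actual #FINDMATCH
# implementation): search for a character sequence, but only within the
# first `limit` characters of the input buffer.
def find_match(buffer: str, pattern: str, limit: int) -> int:
    """Return the index of `pattern` within buffer[:limit], or -1."""
    return buffer.find(pattern, 0, limit)

find_match("HDR|20100115|X12~DTL|...", "~", 20)   # found within the limit
find_match("HDR|20100115|X12~DTL|...", "~", 10)   # -1: beyond the limit
```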
Advantages of modular modeling in the environment flow are:
Reduction in the modeling effort
Breaking a translation down into data models facilitates
reuse. By creating a base of existing data models to choose
from, the modeling effort for future transactions is reduced.
The reduction occurs from not having to re-create and test
the same processing logic.
Reduction in data model maintenance and testing
When data models are written as modules for use in
multiple translations, changes spanning several translations
can be implemented and tested once within the generic data
models.

Step 4 should generate: An environment processing flow depicting the relationship of the environments used during translation.

Step 5: Complete the Environment Definition

The Environment Definition Worksheet has been provided as a means to consolidate information that will be needed to define each environment. Some of this information will take time to research, such as the access models to be used. The output of this step will be the input to the next step.
For each new environment defined in Step 4, complete the Environment Definition Worksheet, an example of which can be found in the Standards Plug-In User's Guide for the public standard you are using.


Section Name Instructions


Map Component Filename Assign a unique label for the name
of the environment/map component
file. Do not begin the label with the
two letters “OT.” These letters are
reserved for standard Application
Integrator™ names. The length of
the label is operating system
dependent. When assigning a
name, take into consideration all
platforms on which the map
component file will be used. It is
recommended that you limit the
length of these filenames to 8
characters.
Production, Development, or Functional area: Check this section to indicate whether this environment definition will be used strictly for development and testing, or if it will also be used as the production functional area.
Environment Description Write a short description about the
intended purpose of this
environment. This description will
be of use for other modelers who
may have to follow your work, or
for later reference by yourself.


Input/Output file(s) Identify whether any new input or
output files are to be used in this
environment. The names can be
literally defined or can be
dynamically defined at translation
time by using translation data (data
from within the message being
processed, or the translation
assigned session control number).
Check the appropriate box next to
the input/output file on the
worksheet that indicates where this
name will be defined or obtained
from.
Access Models From Step 2’s analysis of the syntax,
determine whether an existing
access model will be used or a new
one will be created. Any existing
model may be used providing it
contains all required types.
Otherwise, an existing model should
be copied and modified. A new
access model also may be created if
desired. Usually copying an
existing access model to a new name
and modifying it as necessary
provides the most effective results.
The decision to add or create should
be based on the uniqueness of the
new access item types. If the items
are totally unique to this translation
(most likely not used for other
translations), then a new access
model is recommended. If,
however, the items will be regularly
encountered in other translations,
adding to the existing access model
is recommended.

Data Models Assign a unique label for the names
of the data models. Do not begin
the labels with the two letters “OT.”
These letters are reserved for
standard Application Integrator™
names. The length of the name is
operating system dependent. When
assigning a name, take into
consideration all platforms on which
the data models will be used. It is
recommended that you limit the
length of these filenames to 8
characters.
A data model only has to be defined
if its particular mode of process will
be needed in the environment. Look
at the environment flow to
determine whether the source,
target or both modes of processing
will be needed.
If the names of the data models are
to be dynamically determined at
translation time, make sure you
check the box below. The names can
be obtained from the Profile
Database through substitutions.
Checking the box will help you
remember to set up the necessary
substitutions in the database.

Profile Database Key Prefixes, X/Ref Values: If values are to be obtained from the Profile Database, you will have to establish the appropriate database key.
The database key is assigned when
database values are entered or
loaded (Trading Partner data or ID
code lists).
The x/ref values are the values
extracted out of the input stream
(for example, sender and receiver
IDs) which will be cross-referenced
to the database keys. The extracted
values are trimmed of trailing
spaces and concatenated together,
delimited by the tilde (~) character.
Cross-referencing the extracted
input values minimizes the impact
when these values change. When
the values change, the cross-
referenced values can be changed,
with no changes to the database
keys or alteration to the Profile
Database.
Refer to the Profile Database
Lookups section for details.
User-Defined Environment Keyword Variables: If special environment keyword variables are needed, they should be recorded where they will be easy to locate and define.

Source and Target Special Access Characters: If delimited data will be parsed or generated, or a special decimal and/or release character will be needed, the characters should be identified on the worksheet. If the characters are dynamically determined during data parsing, note “As Parsed” under the Source column. If the characters are dynamically determined based on substitutions, note “$Subs” under the Target column.

Step 5 should generate:
Functional Area Definition Worksheets completed for every new environment defined in Step 4.
New access models, or the addition of new item types to existing access models.

Step 6: Create Source and Target Data Model Declarations

The syntax and structure of the translation will be modeled in this step. First, define the data models as per the Application Integrator™ Model Worksheet. For assistance, refer to a copy of the worksheet found in the Standards Plug-In User's Guide for the public standard you are using. Then enter the definitions into Workbench. The rules for mapping will be created in Steps 8 and 9.
You can work on the source and target data models independently.
One modeler can work on the source data models while another
modeler defines the target data models. The power of Application
Integrator™ allows the two sides to be brought together at runtime
for binding. The relationship between the two sides is established
in the mapping process, through the use of mapping variables.
Create an Application Integrator™ Model Worksheet for each data
model defined on each Environment Definition Worksheet:


Section Name Instructions


Mode Specify whether the data model will be used on
the source or target side.
Model Name Complete the data model name as indicated on
the Environment Definition Worksheet.
Environment Name: Complete the map component filename for the data model being created, as indicated on the Functional Area Definition Worksheet.
Translation Reference: Complete any summary reference notes for use in later reviews of this data model.

For each item contained within the data structure, create a line item
entry:
Section Name Instructions
Data Model Item Name: Assign a label name, unique within this data model, by which this item will be identified. The name must begin with a letter or an underscore (_), and should not begin with the two letters “OT.” It should be composed of the character set [A-Z], [a-z], [0-9] and underscore (_). Use the various columns under Item Name to represent the various hierarchical levels in the structure definition. (Used for all types of items: group, tag, container, and defining item.)

For example:
Message_Loop
  Heading_Record
    Heading_Rec_Field_1
    Heading_Rec_Field_2
    Heading_Rec_Field_3
  Detail_Line_Item_Loop
    Detail_Record_1
      Detail_Rec_1_Field_1
      Detail_Rec_1_Field_2
      Detail_Rec_1_Field_3
    Detail_Record_2
      Detail_Rec_2_Field_1
      Detail_Rec_2_Field_2
      Detail_Rec_2_Field_3



Item Type Specify the item type used to identify this data
model item.
Occurrence - Min/Max: Specify the number of times the item is required (minimum) and allowed (maximum) to repeat. A minimum occurrence of zero means the item’s presence is optional. A minimum occurrence of one or greater means the item’s presence is mandatory. (Used for all types of items — group, tag, container, and defining.)
Size - Min/Max: Specify the number of characters, minimum and maximum, that are allowed for this item. When the two lengths are the same value, the item is considered to be fixed in length. (Used for defining class of items only.)
Format Specify the format of numeric values that use
the #NUMERIC, #DATE, or #TIME access
model functions. (Used for defining type of
items only.)
Match Value Specify any character string used to identify a
tag item. Commonly known as a record code or
segment tag. (Used for tag class of items only.)
Verify List ID Specify the ID to be used with the Verification
Profile Database key for code list verification. It
is used together with the Verification key prefix
defined in map component file/environment file.
This ID must begin with a letter. (Used for
defining type of items only.) This ID can be
used for automatic code list verification by
using the #LOOKUP access model function. Or
can be used manually by using the LKUP( )
data model function in rules.
Sort Specify whether the defining items associated
with the group item should be sorted in a
special order.



File Specify whether this group item should be read
from or output to a file other than the one
specified in the map component file.
Once worksheets are completed, they are ready to be entered using
Workbench. Refer to Section 2. Data Modeling Process Overview
for procedures on entering and modifying source and target data
model items.

Step 6 should generate:
Application Integrator™ Model Worksheets completed for every new data model.
The creation of all data model files using Workbench.

Step 7: Create a Map Component File for Each Environment

Create a map component file for each Environment Definition Worksheet created in Step 5. Create a new map component file by completing the Map component file dialog box opened from the New Map Component option of the Workbench File menu. Refer to Section 4. Creating Map Component Files (Environments) for instructions.

Step 7 should generate: All environment map component files.

Step 8: List Source to Target Mapping Variables

Using the Application Integrator™ Variable Worksheet (an example of which can be found in the Standards Plug-In User's Guide for the public standard you are using), complete a line item for each piece of data that will be mapped from the source to the target. Once this worksheet is completed, the source data modeler will be able to begin creating the rules described in Step 9, independent of the target data modeler. The type of variable and its ID (label) is all that the source and target data modelers need to know.
Create the Application Integrator™ Variable Worksheet as follows:

Section Name   Instructions
Type: Identify the type of variable to be used. Each variable type has different mapping attributes.
Label Assign a label for the variable that will be unique
throughout the total translation session. The label
must begin with a letter and should not begin with
the letters “OT.”
Description Enter a description of what the variable type and
label represent.

Step 8 should generate: An Application Integrator™ Variable Worksheet completed for every piece of data to be mapped from the source to the target.

Step 9: Create Data Model Rules

Using Workbench, you can now apply rules to the data models. Since the source and target are independent of each other and runtime-bound via the variables, either side can be done first or independently.
The source assigns its data model item values to specific variables,
for example, VAR->PONumber = HeadingRec_PONumber. The
target assigns the variable values to its data model items, for
example, Rec1-PONo = VAR->PONumber.
The primary use of the rules is to create the movement of data from
the source to the target, establishing the desired format in the
process. However, rules are also used for the following purposes:
Logging of information for audit, message tracking, and for
reporting, using the functions LOG_REC( ) and
ERR_LOG( ), for example.
Capturing of information for later acknowledgment
creation.
Performing error recovery, such as defaulting when a value
is absent.
Verifying relational conditional compliance — a relational
condition may exist among two or more siblings within the
same parent based on the presence or absence of one of
those siblings.


Obtaining or changing values in the Profile Database. (Make sure the key prefixes are set in the map component file or through data model rules.)
Using keywords to alter the natural processing flow –
ATTACH, EXIT, BREAK, RELEASE, CONTINUE, REJECT,
RETURN.
Performing string manipulation, for example, sub-string,
trim, concatenate, replace.
Characters, case conversion.
Performing computations - +, -, *, /.
Obtaining or changing the active character sets, decimal
notation, and release characters.
Obtaining or changing values associated with access
counters.
Obtaining or changing the system’s date or time.
Obtaining or changing the current error status.

Guidelines for Rule Creation

a. If a rule action fails, the remaining actions contained in the rule are not executed. For example, if a variable or data model item is referenced for its value, and a value has not yet been assigned to it, the action will fail. (This can occur in a tag item’s rule that references an optional child defining item that was not present in the tag item.) Whenever this possibility exists, immediately start a new rule for the balance of the actions.

Example:

Incorrect way to model:
Tag_A
  Defining_A1 (optional)
  Defining_A2 (optional)
  []
  VAR->Fld1 = Defining_A1
  VAR->Fld2 = Defining_A2

Correct way to model:
Tag_A
  Defining_A1 (optional)
  Defining_A2 (optional)
  []
  VAR->Fld1 = Defining_A1
  []
  VAR->Fld2 = Defining_A2
  []
  SET_ERR( )


In the incorrect way, if the first action (VAR->Fld1 = Defining_A1) fails because no value is available for the reference to Defining_A1, the balance of the rule is not performed, and Defining_A2 is not assigned to VAR->Fld2. In the correct way, the third rule is added to set the error status to zero, so that if the second rule fails, the error of the failure is not carried forward to the occurrence validation of Tag_A.
b. Remember to always follow an ATTACH keyword action with a
new rule to capture the map component file’s returned status.
If you immediately follow the ATTACH keyword action with
another action, the second action would only occur if the map
component file returned a status of zero.
For example:
[ ]
ATTACH “OTX12Env.att”
[ ]
VAR->OTAttachRtnStatus = ERRCODE( )
c. Some rules can easily become complex. Take the time to lay out
on paper all complex rules, since the content of all rules cannot
be viewed within Workbench at one time. Once on paper, the
rules can easily be entered into Workbench, with minimal
editing.
d. On the source side, rules are usually placed on the tag item
rather than on each defining item. This is because often one or
more items qualify one or more other items to provide their
semantic meanings. On the target side, the rules must be
placed on the defining items for them to obtain their values.
All conditional rules should be entered before the null
conditional rules. If the last rule on an item is a conditional
rule, and the condition fails, the status the item will take on for
occurrence validation will be a failure status. To reset the
status, add a null condition rule with the function SET_ERR( ).
For example:
[ ]
SET_ERR(0)


Profile Database Lookups

The following rules should be included in your data models to set up the views into the database so that information can be accessed.
When accessing the Profile Database, your model must contain
rules that tell the translator what type of information you are going
to access and from where to access it. The type of information you
might access could be cross-references, code list verifications, or
substitutions. You could access this information from any
hierarchy level of the trading partner profile or from any of the
standard version levels. Refer to Trade Guide online help for
details on how to set up or modify the trading partner profile and
standard version code lists.
The SET_EVAR data model function allows you to set the
environment variables for these types of lookups. Refer to the
description of the SET_EVAR function in Appendix A. Application
Integrator Model Functions in Workbench User’s Guide-Appendix for
more details.
For information on the generic model used in trading partner
recognition, refer to the Map Component Files for Enveloping/ De-
enveloping section.

Database Key and Inheritance

Each level in the trading partner hierarchy is represented by a database key. Each database key is delimited by the pipe ( | ) character and is a maximum of 12 characters long.
The key is derived from the information keyed into the trading
partner’s profile, as follows:

Level                     Value Entry Box            Example Data
Interchange               Trading Partner            ABCIC
(for example ISA/IEA)
Functional Group          Trading Partner Division   ABCFG
(for example GS-GE)
Message                   Name                       ABC850
(for example ST/SE)

The complete concatenated key, using this example, would be ABCIC|ABCFG|ABC850. You will see this database key under the title bar (Unix and Linux) or to the right of the tabs (Windows®) of many of the trading partner profile dialog boxes.


Inheritance

When looking up a value, the lookup value is appended to the key prefix before the read. A cross-reference lookup of a part number, for example, might be:
ABCIC|ABCFG|ABC850|part_a
By using this approach, the property of “inheritance” can be easily applied, where inheritance denotes the use of values from higher levels (ancestors) in the hierarchy when a specific value is not found at the current level.
To continue with the example, if the cross-reference value of part_a
is not found at the message level ABCIC|ABCFG|ABC850, the
system automatically removes levels until the value is found or all
levels are exhausted.
ABCIC|ABCFG|ABC850|part_a
ABCIC|ABCFG|part_a
ABCIC|part_a
part_a
The property of inheritance exists for the types of values that can be stored in the Profile Database:
Substitutions
Cross-references
Verification code lists
Inheritance can lessen the redundancy in the Profile Database, since
all levels of a trading partner hierarchy (for example, all divisions
and/or all messages of the trading partner) may use the same
cross-references and codes.
Inheritance can be turned on/off as a parameter to database
functions. The data modeler determines whether or not the
inheritance feature should be used during data modeling.
For details on setting up the trading partner hierarchy, refer to the
Trade Guide online help.
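The level-stripping search above can be sketched in Python. This is an illustrative sketch only: the dictionary stands in for the Profile Database, the keys and stored value are invented for the example, and the function is not the translator's actual lookup implementation.

```python
# Illustrative sketch of an inherited Profile Database lookup (not the
# translator's actual implementation; the dict stands in for the database).
def inherited_lookup(profile: dict, key_prefix: str, lookup: str):
    """Try the full key, then strip trailing levels until a value is found."""
    levels = key_prefix.split("|")
    while levels:
        key = "|".join(levels + [lookup])
        if key in profile:
            return profile[key]
        levels.pop()              # remove the lowest level and retry
    return profile.get(lookup)    # finally, try the bare lookup value

profile = {"ABCIC|part_a": "PART-A-INTERNAL"}
# Not found at ABCIC|ABCFG|ABC850 or ABCIC|ABCFG, so ABCIC supplies it.
inherited_lookup(profile, "ABCIC|ABCFG|ABC850", "part_a")
# returns "PART-A-INTERNAL"
```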


Substitutions

The environment keyword HIERARCHY_KEY is used to set the view into the database to perform substitution lookups, where each field in the Profile Database is associated with a substitution label. Once you have identified the type of information and where to access it, the next step is to identify the substitution variable to obtain. The $ data model function allows you to do this.
In addition to verifying information, you have the ability to
manipulate the profile information stored. The SET_SUBS data
model function allows you to update values into the substitution
portion of the Profile Database. The DEL_SUBS data model
function allows you to delete a Profile Database substitution value.
For additional information regarding the data model functions,
refer to Appendix A. Application Integrator Model Functions in
Workbench User’s Guide-Appendix.
As you use Profile Database lookups in rules, enter them on the
Profile Database Interface Worksheet. An example can be found in
the Standards Plug-In User's Guide for the public standard you are
using. Complete the following columns as described:

Column Heading   Instructions
Description of Lookup: Enter a description of what lookup is occurring.
Side S/T: Enter S when used in the source data model, and enter T when used in the target data model.
Type S/X/V: Enter S for substitution type database lookup; enter X for cross-reference type database lookup; enter V for code verification type database lookup.

Label/Category/Verify List ID: When Type is ‘S’ for substitution, enter the label used for the substitution, for example, $X12AckLevel. When Type is ‘X’ for cross-reference, enter the category from which the cross-reference is to occur. When Type is ‘V’ for verification, enter the verify list ID under which the lookup is to occur.
Hierarchy Level Key Prefix: Enter the key prefix of the Profile Lookup for the trading partner. The key prefix is obtained from the dialog box in Step 10.

Cross-references

The environment keyword XREF_KEY is used to set the view into the database to perform cross-reference lookups. Once you have identified the type of information and where to access it, the next step is to identify the category that the data is stored underneath, along with the value in the input stream to be cross-referenced. The XREF data model function allows you to do this.
In addition to cross-referencing information, you have the ability to
manipulate the cross-reference information stored in the database.
The SET_XREF data model function allows you to update values
into the cross-reference portion of the Profile Database. The
DEL_XREF data model function allows you to delete a Profile
Database cross-reference.

Verification Code Lists

The environment keyword LOOKUP_KEY is used to set the view into the database to perform verification lookups. Once you have
identified the type of information and where to access it, the next
step is to identify the verify list ID that the data is stored
underneath. The verify list ID is keyed into the Verify field of the
data model item. The verify list ID is also known as the Category
in the Xrefs/Codes dialog box. The next step is to construct a rule
identifying the data model for which the verification is to occur
and the value to be looked up. The LKUP or DEF_LKUP data
model functions allow you to do this.


In addition to verifying code list information, you have the ability to manipulate the code list information stored. The SET_LKUP data model function allows you to update values into the verification portion of the Profile Database. The DEL_LKUP data model function allows you to delete a Profile Database code list value.

Step 9 should generate:
All data model rules, for both the source and target, entered using Workbench.
A completed Profile Database Interface Worksheet.

Step 10: Enter the Profile Database Values

All lookups into the Profile Database should have been listed on the Profile Database Interface Worksheet, as part of Step 9 when creating rules within Workbench.
Using Trade Guide, enter all of the Profile Database lookups, including values to be used in substitutions, cross-references (x/refs) and verification code lists. Procedures for entering this information are found in Trade Guide online help.

Note: Keep in mind the inheritance capability available in database lookups. This feature helps to eliminate redundancy of data, thereby reducing database maintenance and disk storage requirements. Refer to the Trade Guide online help for detailed steps on these Trade Guide features.

Substitutions

Validate that the various trading partner profile records from which substitution values will be accessed exist.

Cross-references

1. From the Xrefs/Codes dialog box, add the category under which the cross-reference will occur to the category file, if not already present.


2. Select the category from within the list box, and enter the
extracted values from the input stream in the Value field. If a
string of values is used, these values should be trimmed of
trailing spaces and concatenated together using the pipe ‘|’
character as a delimiter between the fields.
3. Enter the value to be used for the cross-reference in the
Description field.
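The Value field in step 2 can be sketched in Python. This is an illustrative sketch, not part of the product: it trims each extracted value of trailing spaces and joins the values with the pipe delimiter described above (note that the x/ref extraction described earlier in this section uses the tilde instead).

```python
# Sketch of building the Value field from extracted input values:
# trim each value of trailing spaces, then concatenate them with a
# delimiter between the fields ('|' per the procedure above; the
# x/ref value extraction earlier in this section uses '~' instead).
def build_xref_value(values, delimiter: str = "|") -> str:
    return delimiter.join(v.rstrip() for v in values)

build_xref_value(["SENDERID  ", "RECEIVERID "])
# returns "SENDERID|RECEIVERID"
```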


Verification Code Lists

From the Xrefs/Codes dialog box:
1. Add the category under which the code list verification will
occur to the category file, if not already present.
2. Select the category from within the list box and enter the values
from the input stream in the Value field. If a string of values is
used, these values should be trimmed of trailing spaces and
concatenated together using the pipe ‘|’ character as the
delimiter between the fields.
3. If you would like to key in a description of the code list
verification, do so. This field is optional.

Step 10 should generate: All trading partner Profile Database lookup values for substitutions, cross-references, and verification code lists.

Step 11: Run Test Translations and Debug

Using the input file from Step 3, translate the file(s). The files can be translated through the Run dialog box of Workbench, or at the command line, by invoking the inittrans command in Unix and Linux or the otrun.exe command in Windows®. Refer to Section 12. Translating and Debugging for detailed instructions.
During test translation, it may be beneficial to set a high trace level.
The trace facility set at a high level generates a step-by-step log of
the translation. Various levels of the trace can be turned on and off
as needed. Continue testing and debugging until your data model
is free of error and ready to migrate to an official test or production
area.


Step 11 should generate: Production-ready data models and map component files.

Step 12: Make Backup Files

Once the translation has been modeled, the following modified or created files should be backed up:
Map component files — .att
Data model files — .mdl, .xsl (in the case of XSL-based models; refer to Section 10. XML Mapping and Processing for more details)
Test and input files
Access model files — .acc
Profile Database — sdb.dat & sdb.idx
All worksheets and flowcharts should also be packaged together
for later reference. Refer to Trade Guide online help for details on
scheduling backups.
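The file set listed above can be gathered into an archive with a short script. This is an illustrative sketch only: the directory layout and archive name are assumptions, and production backups should be scheduled through Trade Guide as noted above.

```python
# Sketch: collecting the file types listed above into a single archive.
# The directory layout and archive name are assumptions for illustration;
# production backups should be scheduled through Trade Guide.
import tarfile
from pathlib import Path

EXTENSIONS = {".att", ".mdl", ".xsl", ".acc"}   # map components and models
DATABASE_FILES = {"sdb.dat", "sdb.idx"}         # Profile Database files

def back_up(work_dir: str, archive: str = "ai_backup.tar.gz") -> list:
    """Archive every matching file under work_dir; return the names saved."""
    saved = []
    with tarfile.open(archive, "w:gz") as tar:
        for path in Path(work_dir).rglob("*"):
            if path.suffix in EXTENSIONS or path.name in DATABASE_FILES:
                tar.add(path)
                saved.append(path.name)
    return saved
```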

Step 12 should generate:
A backup copy of all disk files changed or created.
A consolidated packet of all worksheets and flowcharts.

Step 13: Migrate the Data

Once you are through with the complete data mapping process, including testing, you are ready to migrate your application to an official test or production functional area. Refer to Section 13. Migrating to Test and Production for suggestions on migrating to a different functional area.


Notes on Data Model Development

This section discusses a series of general modeling notes and tips that should be followed to minimize problems with translation and the portability of data models.

Assigning Names

Consider the following when assigning names to models and files.

Operating System Naming Conventions

When assigning filenames, consider naming conventions for all operating systems under which you expect to operate. Use the least common denominator. That is, if you are expecting to use Windows®, limit the base portion of the filename to eight characters and the extension to three characters. Note also that the Windows operating system is not case-sensitive, but Unix and Linux are. If you intend to develop applications for these platforms, consider using all lowercase or all uppercase characters to avoid any problems following migration. FTP will migrate files, maintaining the case it encounters.

Hint: To avoid problems when migrating between Unix, Linux, and Windows, it is recommended that you use all lowercase character names and limit filenames to eight characters with three-character extensions.

Application Integrator Reserved Prefixes

When assigning names to models, variables, and so on, use a prefix other than the two-character “OT” or “ot.” All provided Application Integrator™ models (.mdl, .acc), generic variables, and utility shell scripts begin with the two letters “OT” or “ot.”

Recommended Prefixes

For Administration Database files, the prefix “DM_” is recommended.


Recommended Extensions

The following extensions are required or recommended:
Extension Description
.att Map component filename (required)
.env Environment filename (required)
.acc Access model filename (required)
.mdl Data model filename (required)
.sh Shell script file
.std Standard data models, such as ASC X12,
UN/EDIFACT or TRADACOMS
.dat Data portion of an ISAM file (Profile or
Administration Database)
.idx Index portion of an ISAM file (Profile or
Administration Database)
.tmp Temporary/work files
.in Test input file to Application Integrator™
.out Test output file to Application Integrator™
.xsl XSL-based data model filename (required for such data models)

Labels

Labels are used in Application Integrator™ to reference item values. All variable labels must begin with an alpha character: A-Z or a-z. The following is a list of variable types with their maximum lengths:

Variable Length
Access model variable 40 characters
Data model variable 40 characters
Variable (temporary) variable 40 characters
Array variable 40 characters
User-defined environment variable 40 characters
Substitution variable 254 characters
Verification list ID 254 characters
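A quick way to sanity-check labels against the table above is a small validator. The sketch below is illustrative only; the type keys and function name are our own, not Application Integrator names.

```python
import string

# Maximum lengths taken from the table above; the keys are our own shorthand.
MAX_LABEL_LENGTH = {
    "access_model": 40,
    "data_model": 40,
    "temporary": 40,
    "array": 40,
    "user_env": 40,
    "substitution": 254,
    "verification_list_id": 254,
}

def is_valid_label(label, kind):
    """A label must begin with an alpha character (A-Z or a-z)
    and fit the maximum length for its variable type."""
    return (
        len(label) > 0
        and label[0] in string.ascii_letters
        and len(label) <= MAX_LABEL_LENGTH[kind]
    )

print(is_valid_label("PhoneNumber", "data_model"))  # True
print(is_valid_label("1stItem", "data_model"))      # False: starts with a digit
```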


Hint: Since translations use these labels, better throughput is obtained by using shorter rather than longer label names, and by keeping names unique.

Comparing Numerics or Strings

When performing a comparison in the rules (for example, [VAR->A==DMI_01] or [DMI_02>1230]), either a string comparison or a numeric comparison is performed. Both sides of the comparison are checked to determine whether each contains only numeric characters ([0-9, ., -, +], with the period, minus sign, and plus sign appearing at most once in the string) and whether each is no more than 16 characters long. If both of these tests are true, a numeric comparison takes place. If either test is false, a string comparison takes place. For example:
Numeric comparison of 123 to 123.000000 would result in
both sides being equal.
String comparison of 123 to 123.00000000000000 would
result in both sides being unequal.
1==000000000000001 would be TRUE (a numeric
comparison would be performed).
1==00000000000000001 would be FALSE (the second side
contains more than 16 characters, therefore, a string
comparison would be performed).
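The comparison rules above can be modeled in a few lines. The following sketch mirrors the described behavior for equality tests; it is an illustration of the rules, not the translator's actual code.

```python
def is_numeric_operand(s):
    """True if s contains only the characters 0-9 . - +, with the
    period, minus, and plus each appearing at most once, and s is
    no more than 16 characters long."""
    if not s or len(s) > 16:
        return False
    if any(ch not in "0123456789.-+" for ch in s):
        return False
    return all(s.count(ch) <= 1 for ch in ".-+")

def dm_compare_equal(left, right):
    """Numeric comparison if both sides qualify, else string comparison."""
    if is_numeric_operand(left) and is_numeric_operand(right):
        return float(left) == float(right)
    return left == right

print(dm_compare_equal("123", "123.000000"))          # True: numeric
print(dm_compare_equal("123", "123.00000000000000"))  # False: right side exceeds 16 chars
print(dm_compare_equal("1", "000000000000001"))       # True: numeric
print(dm_compare_equal("1", "00000000000000001"))     # False: string comparison
```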

Using Application Integrator Models and Files

The models and files supplied by Application Integrator™ and Application Integrator™ standards implementation packages have names beginning with “OT” and “ot.” These files are read-only in most cases.
If you want to modify an Application Integrator™ file, such as a sample data model, copy the file to a new name and then modify the copy.

References to Files

References to files and models within the map component files and data models can be either relative or explicit. Relative referencing means every file and model being used is located in the same directory; these include the source and target data models, access models, and map component files. Using relative referencing, they can be moved to another file system without changes. This allows the models to be easily distributed to other users, and to Application Integrator™ Customer Support if necessary.
Explicit referencing means the files and models that are being used
are located in different directories, therefore, a path is necessary to
locate these files. Using explicit referencing, the same directory
structure (path or modifications to the map component files and
models) is always required.
It is recommended that you use relative referencing because it is
more structured and easier to track. The following examples
illustrate both types of referencing. Notice that the explicit
filenames begin with either a forward slash (/) or a backslash (\),
whereas the relative filenames do not.

Relative Reference Explicit Reference


within a map component file
S_ACCESS=“OTFixed.acc” S_ACCESS=“/u/OT/OTFixed.acc”
S_MODEL=“edi/OTRecogn.mdl S_MODEL=“/u/OT/edi/OTRecogn.mdl”
within a data model
ATTACH “edi/OTBypass.att” ATTACH “/u/OT/edi/OTBypass.att”
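As described above, the distinction is simply whether the reference begins with a path separator. A minimal sketch (the function name is our own):

```python
def reference_type(ref):
    """Per the convention above: explicit references begin with a
    forward slash or backslash; everything else is relative and is
    resolved against the base directory."""
    return "explicit" if ref.startswith(("/", "\\")) else "relative"

print(reference_type("edi/OTRecogn.mdl"))        # relative
print(reference_type("/u/OT/edi/OTRecogn.mdl"))  # explicit
```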

The base directory is a directory in the file system where relative


file references are located. The references are not the files
themselves but rather the pointers to the files.
In Unix and Linux, the AI Control Server’s start up directory is the
base directory. In Windows®, the base directory is the directory
you are currently working in, and not the directory that contains
the program that is initiated.


Section 12. Translating and Debugging

Once you have finished describing the structure and mapping


requirements (between the input and output) and have defined
your environment, you need to test and debug your data models.
A common problem in most programming languages is that you
may enter data that is not quite what you intended. For example,
while creating your data model structure, you may have
inadvertently used an incorrect masking character for a particular
format.
Only by testing and debugging can you determine if the data
model is accurately translating the data, based on the structures,
rules, and input data being processed. You can perform translation
testing and debugging from within Workbench. You can also
perform these functions from the command line.
This section describes both of these methods, and also provides
information on viewing trace logs and debugging.


Overview

Running a translation consists of specifying various parameters to a translation interface program, called inittrans in Unix and Linux or otrun.exe in Windows®. These parameters can include the input filename, the output filename, the map component filenames, user-defined and/or generic environment variables, and the required trace level.
The AI Control Server program, called cservr in Unix and Linux
and cservr.exe in Windows®, performs the translation process. The
AI Control Server must be invoked before running a translation.
You can also perform a translation by using the command line
interface program called inittrans on a Unix and Linux system or
the program otrun.exe on the Windows® system. In many cases,
you might consider creating a script to run inittrans (Unix and
Linux) or a batch file to execute otrun.exe (Windows®) with the
appropriate parameters for your production functional area. In
other cases, you should set up the Scheduler to periodically
perform a translation.
You may run Trade Guide, Workbench, and one or more
translations at the same time when Unix, Linux, and Windows®
are set up as multiple user systems. Should the same trading
partner or other record be called from these programs, Application
Integrator™ automatically performs the appropriate record locking,
avoiding any conflicts. However, when the Windows® product is
set up as a single-user system, you cannot work in Trade Guide or
perform multiple translations at the same time you are in
Workbench.
To ease the process of development testing and debugging, you can
save your desired parameters for each type of translation. This
makes it easier for you to rerun repetitive translations. The
translator also produces a trace log that is useful in debugging.
You can also determine the amount of information to trace.


Before Translating

Note to Unix and Linux users: The environment variables must be


set before starting the AI Control Server. Some of the variables are
required and if they are not preset, the AI Control Server will not
execute. Some of the variables are optional and are provided to
improve translation performance or session number tracking. The
environment variables are set in each user’s profile found in the
user’s home directory. Refer to the Control Server Installation
Guide for specific information about setting the environment
variables.

Note to Windows® users: The environment variables must be set


before starting the AI Control Server. Some of the variables are
required and if they are not preset, the AI Control Server will not
execute. Some of the variables are optional and are provided to
improve translation performance. The environment variables are
set in each user’s aiserver.bat file. Refer to the Control Server
Installation Guide for specific information about setting the
environment variables.

Before translating, you should verify that the following information


is present or conditions are true.
1. On Unix and Linux systems, the AI Control Server must be
running, for the appropriate queue ID, for the translator to
execute. If it is not running, the translator will not execute. To
bring up the AI Control Server at the Unix or Linux command
line, type otstart.
On Windows® systems, the AI Control Server must be running
at the appropriate TCP/IP listen port, for the translator to
execute. If it is not running, the translator will not execute. To
bring up the AI Control Server in Windows®, at the Run dialog box, type:
cservr.exe -cs %OT_QUEUEID%
-Or-
Double-click the Start Server icon on the desktop.


2. Before running a translation, you should have set up or


considered the following:
The source and target data models are defined and
saved.
The map component file (or files) is defined and saved.
The generic map component files for enveloping and de-enveloping are present, if you are using them. Refer to the Map Component Files for Enveloping/De-enveloping section for details.
You have created or added trading partner information
to the Profile Database.
3. You should have an input file to perform the translation.


Translating Using Workbench

Running a translation consists of specifying a series of parameters to define the names of the map component file, the files, and the trace level you desire for a translation session.

To run a translation from within Workbench


1. From the Map Editor work space, choose the Output tab
-Or-

From the Workbench toolbar, select the Translate icon.


-Or-
From the Menu, select Test > Translate.
-Or-
In the Resource Navigator View, right-click an .att file to open the context menu and select the “Run Translation” option.

Note: For this section, the screen prints are done using the
Translate icon on the Workbench toolbar.


2. In the Process type value entry box, type a name to define a


new translation process.
- Or -
Click the arrow to select a saved translation process session
from the list box.
If you have previously defined the parameters for a translation,
skip to Step 9.
3. In the Map Component value entry box, select (using the
Browse button) the name of the map component file to be used
in this translation.
4. In the Trace Level value entry box, type the numeric value for
the trace level desired;
- Or -


Select the level using the dialog box options associated with
this box. The trace level defaults to zero if a trace level is not
entered.
Refer to the “Setting a Trace Level” section later in this section
for a complete description of this option.
5. In the Translation Type etched area, select the type of
translation to be run.
6. Select Keep Input File if you want the input file to be copied to a temporary file and used during translation.
7. In the Environment Variables group box, enter any additional
parameters needed for this translation. These parameters could
include input filename, output filename, and user-defined
environment variables.

Name: Type one of the valid environment variables or user-defined environment variable resources.
Value: The value the variable is set to when the translation is executed.

Using the Name and Value box entries, the system creates an
additional parameter statement, such as
“INPUT_FILE=OTX12I.txt”.


8. Select Next to run the translation.

Note: Selecting Next saves the Process Type so it can be


selected from the list.

Note: If the data model has been modified since you last ran
a translation, you will be prompted to save these changes.

The Translation Status tab displays the status of the translation run.
The Trace Log tab displays the trace file created by the translation.
9. When done, select Finish to close the translation dialog.


Setting a Trace Level

The trace level controls the content of the translation trace log file. The trace level is set using a numeric value that represents the trace options that are set. You can enter the trace level options manually, or use a dialog box to make selections.
The following table describes the trace levels.

Trace Description
Level
0 No Trace Setting or a Trace Setting of Zero (0)
otrans or otrun.exe version, compile date, and
time
Date/time translation began and ended
Loading of libraries, with their compiled
date/time
Translation ending status
1 Data Model Item Values Listing
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
Names of source access and data models
“SOURCE VALUES”
The last source item values parsed
“VAR VALUES” - declared within this source
data model
“ARRAY VALUES” - declared within this
source data model
Names of target access and data models
“TARGET VALUES”
“VAR VALUES” - declared within the source
and this target data model
“ARRAY VALUES” - declared within the
source and this target data model

2 Value Table Listing
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
“VALUE STACK” - Target item labels with the
values assigned to them
“VSTK” - Values being referenced off the value
stack in target Phase 2 processing
4 Source Data Model Items
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
Two lines for each source item, as processed -
“DM: ItemLabel”, “FINISHED ItemLabel”
8 Target Data Model Items
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
One line for each target item, as processed -
“DM: ItemLabel”
One line each time processing returns to a
parent level - for example, “FINISHED
ItemLabel”
16 Rule Execution
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
One line for when entering rules on an item -
only if item has rules defined
One line reporting rule execution status - all
items
32 *Rule Functions This level does not output on its own.
Refer to 48.

48 Rule Functions (48, which is 16+32)
Includes #NUMERIC/#DATE/#TIME access functions, for example:
Function NUMERIC_in: dm PhoneNumber pic “No format” dm left 10 .. 10 right 0 .. 0 radix
Function NUMERIC_in returns value: “3255550961”
Execution of rules - assignment, functions, and so on.
64 Source Access Items
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
Source TAG item matching - “pre condition
Rec_Code met”
Source parsed values being returned back to the
source data model
128 Error Details (128)
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
The clearing of the error stack - “err_dump( )”
the capturing of an error - “err_push( )”
256 IO Detail (256) - pertains to source only
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
Shows file position entering each item
Shows each character read and checks if within
defined character set.
Function NUMERIC_in: dm PhoneNumber pic
“No format” dm left 10 .. 10 right 0 .. 0 radix
Function NUMERIC_in returns value:
“3255550961”
512 Write Output Detail
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
The order the items are written out and the
access items used
1023 Complete Trace


Entering the trace level options using a dialog box


1. Open the Trace Settings dialog box by clicking the Select button
next to the Trace Level box on the Translation dialog box.

The Trace Settings dialog box appears.

2. Select each type of trace level. To select all options, choose the
Set All button. To deselect all options, choose the Clear All
button.
3. Choose the OK button to return the selected trace level;
- Or -
Choose the Cancel button to exit the trace settings without
making changes.

Note: The trace log file is generated once you run the
translation and is displayed in the Translation Results dialog.


Entering the trace level options manually


You can also calculate the desired trace level and enter it in the
Translation dialog box or as a parameter to an inittrans (Unix and
Linux) or otrun.exe (Windows®) statement.
Add the value of the options you want to activate in the trace using
the table below; then type that value in the Trace Level box or as
the appropriate parameter value when setting the trace level
elsewhere in the system.

Hint: You can also subtract the number of the items you do not
want to show on the trace from the total trace value of 1023. For
example, to show a total trace minus access items, you would
subtract 64.

Trace Flag Trace Level Value


Deactivate the trace 0
DM item values listing 1
Value table listing 2
Source data model items 4
Target data model items 8
Rule execution 16
Rule functions 32
Access items 64
Error details 128
IO detail 256
Write output detail 512
Sum Total (activate all trace options) 1023
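Because each flag is a distinct power of two, a trace level is just the sum of the selected flag values, and subtracting a flag value from 1023 removes it from a complete trace. An illustrative calculation (the flag names are our own shorthand):

```python
# Trace flag values from the table above.
TRACE_FLAGS = {
    "dm_item_values": 1,
    "value_table": 2,
    "source_dm_items": 4,
    "target_dm_items": 8,
    "rule_execution": 16,
    "rule_functions": 32,
    "access_items": 64,
    "error_details": 128,
    "io_detail": 256,
    "write_output_detail": 512,
}

def trace_level(*flags):
    """Sum the selected flag values to build the -tl parameter."""
    return sum(TRACE_FLAGS[f] for f in flags)

print(trace_level("rule_execution", "rule_functions"))  # 48, the Rule Functions level
print(sum(TRACE_FLAGS.values()))                        # 1023, the complete trace
print(1023 - TRACE_FLAGS["access_items"])               # 959, complete trace minus access items
```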


Translating at the Command Line

For Unix and Linux systems, the program inittrans invokes a translation from the command line. For Windows® systems, the program otrun.exe issues a translation session. Each program is invoked with arguments that specify the configuration of the translation session (input/output files, initial environment, environment variables, and so on).

Note: Before invoking a translation, the AI Control Server must be


running.

Note: inittrans also works on Windows systems.

The request for processing is passed to the AI Control Server (cservr), together with all of the supplied arguments. The AI Control Server then dispatches the request to an available translator in a Unix or Linux environment, or to the single translator in a Windows® environment. The AI Control Server and queue ID can be set at the command line.

Invoking the Translation Process – Unix and Linux

The translation program inittrans has the following common syntax in a Unix or Linux development area. Refer to the “Available Arguments to inittrans/otrun.exe” table for a complete list of arguments.

inittrans -at <initial map component file> -cs <Control Server queue id> -tl <trace level> -I

Note: The display of the translation process terminates


immediately if not executed interactively (set by using the
parameter -I). This parameter is recommended.


Invoking the Translation Process – Windows

The translation program otrun.exe has the following command syntax in a Windows® development area. Refer to the “Available Arguments to inittrans/otrun.exe” table for a complete list of arguments.

otrun.exe -at <initial map component file> -cs <Control Server queue id> -tl <trace level> -I

Note: The translation must be run interactively on a Windows® system; therefore, the parameter -I must be included in the otrun.exe command line statement.

When this command is executed, the following dialog box appears


which shows when the program ran and the error code returned
for this translation.

Hint: Once you have established your translation “command line,”


you can add the program to your desktop by creating an icon
shortcut to translate easily.


Available Arguments to
inittrans/otrun.exe
Code Description of Item Defined Environment
Variable
Resources
-a source access file S_ACCESS
-A target access file T_ACCESS
-at initial map component file None
-cs specify AI Control Server None
queue ID
-D declare user-definable None
environment variable
-hk hierarchy key prefix HIERARCHY_KEY
-i input file INPUT_FILE
-I interactive, foreground None
versus background (default
processing)
-lg specifies a file for the None
<filename> translation session output
allowing you to translate in
the background instead of
monitoring feedback in a
Session Output message box

Caution: When this


argument is used, the
destination file is not
overwritten. Instead, new
entries are appended;
therefore, the file always
increases in size.

-lk lookup key prefix LOOKUP_KEY


-o output file OUTPUT_FILE
-P priority (0-low, 99-high) None


Note for Unix and Linux


users: This value is used to
set both the priority and the
nice value based on the
equation: (default nice value
+ [9 minus ten’s priority
value]). For example, the
nice value of 20 and a
priority of 87 would reset the
nice value to 21, that is, ([9-
8]+20). Refer to Trade Guide
online help for more details.

-s source data model S_MODEL


-t target data model T_MODEL
-tl trace level (0-minimal, 1023- TRACE_LEVEL
full)
-u user name None
-xk xref key prefix XREF_KEY

Parameter Explanations

The code for an initial map component file, -at, is mutually exclusive with -i, -o, -a, -A, -s, -t, -hk, -xk, and -lk. If -at is used, these others cannot be used; if they are supplied together with -at, they have no impact.
The following sets are paired: -a with -s (source access and data model), and -A with -t (target access and data model).
The parameter -lg <filename> generates translation session
feedback to a background file. This parameter writes to the
filename specified the translation session feedback usually
displayed in the Session Output dialog box. If the file cannot be
opened or created, the Session Output message box will be
displayed. It is a good practice to keep this filename consistent for
all translations, thus making any cleanup easier by handling only
one file.


The -D code is used to declare Application Integrator™ environment variables (or user-defined variables) from the command line. Defined environment variables override values defined in the map component file.

Note: Assigning a value to an environment variable that contains


parentheses causes the string within the parentheses to be
ignored if it is not an environment variable. For example:
SET_EVAR ("DOGS", "A(B)C")
VAR_TMP=GET_EVAR ("DOGS")
This returns "AC" because (B) is not an environment variable and
is, therefore, ignored.
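One way to picture the behavior in the note above: the translator treats a parenthesized substring as an environment-variable reference, so an undefined name inside parentheses disappears from the value. The sketch below models that behavior; it is our reconstruction for illustration, not the actual GET_EVAR implementation.

```python
import re

def resolve_parenthesized(value, env):
    """Replace each (NAME) group with that environment variable's
    value; an undefined NAME is simply dropped, which is why
    "A(B)C" becomes "AC" when B is not defined."""
    return re.sub(r"\(([^)]*)\)", lambda m: env.get(m.group(1), ""), value)

print(resolve_parenthesized("A(B)C", {}))               # AC
print(resolve_parenthesized("A(B)C", {"B": "middle"}))  # AmiddleC
```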

For examples of user-defined variables, refer to the section on administration reports in the online Help of Trade Guide.
With the Application Integrator™ enveloping models written
generically, specific arguments are used to tailor the enveloping
session to the application file being processed. These arguments
are converted into environment variables, which are either used
directly (for example, keyword environment variable INPUT_FILE)
or are referenced for their values within models or environments
(such as, the user-defined environment variables MESSAGE,
BYPASS, ACTIVITY_TYPE). Environment variables are referenced
for values in models through the use of the function GET_EVAR.
Values are obtained in environments by referencing the
environment variable.
The following tables provide some common examples of these
variables:

Example Environment Variable


-DMESSAGE=OTX12SO.att MESSAGE
Must declare the name of the map
component file which will process
(read in) the specific applications’
messages. In the examples
provided, its value is “OTX12SO.att.”

-DBYPASS=OTX12Byp.att BYPASS
Must declare the name of the map
component file which will handle
the bypass/reject logic. In the
example provided, its value is
“OTX12Byp.att.”
-DACTIVITY_TYPE=Invoicing ACTIVITY_TYPE
Should declare a description of the
application for the activity tracking
system. It defines the type of
messaging activity for reporting
purposes. In the example
provided, its value is “Invoicing.”
-DINPUT_FILE=OTX12O.txt INPUT_FILE
Must declare the name of the input
file to be translated. In the
example provided its value is
“OTX12O.txt.”
Setting a parameter to a value of two or more words requires double quotes around the parameter value, for example:
-DACTIVITY_TYPE=“Shipping Notices”
This requirement applies to two or more words separated by spaces or by the pipe (|) symbol.
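The quoting rule above can be automated when building command lines from scripts. A small helper, purely illustrative:

```python
def format_d_parameter(name, value):
    """Build a -D argument, adding double quotes when the value
    contains spaces or the pipe symbol, per the rule above."""
    if " " in value or "|" in value:
        value = '"{}"'.format(value)
    return "-D{}={}".format(name, value)

print(format_d_parameter("ACTIVITY_TYPE", "Shipping Notices"))
# -DACTIVITY_TYPE="Shipping Notices"
print(format_d_parameter("INPUT_FILE", "OTX12O.txt"))
# -DINPUT_FILE=OTX12O.txt
```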

Note: On any operating system, the hyphen (-) symbol cannot be


used in parameters.

In Unix and Linux, up to 512 characters can be supplied in the


command line interface.
In Windows®, up to 235 characters can be supplied in the
command line interface.


Translation Examples The following are examples of invoking translation processes:


1. This example uses the supplied map component file for de-
enveloping called “OTRecogn.att” and specifies an input file
called “Input.flt.” Note the -I parameter must be issued for a
Windows® translation. The -lg parameter generates the
session output to the file named runfile.log instead of a Session
Output dialog box.
Unix and Linux:
inittrans -at OTRecogn.att -cs 01 -DINPUT_FILE=Input.flt
Windows®:
otrun.exe -at OTRecogn.att -cs %OT_QUEUEID% -DINPUT_FILE=IBM00001.flt -lg runfile.log -I

2. This example calls an initial map component file “OTRpt,” then


specifies a report map component file to use “OTActR1.att”
with the -D argument, sets the trace to the highest level (-tl
1023), and sets the program to run interactively (-I).
Unix and Linux:
inittrans -at OTRpt.att -cs 01 -DREPORT=OTActR1.att -tl 1023 -I
Windows®:
otrun.exe -at OTRpt.att -cs %OT_QUEUEID% -DREPORT=OTActR01.att -tl 1023 -I


3. This example uses the supplied map component file for


enveloping (“OTEnvelp.att”) and specifies four of the
environment variables specified in the generic map component
file using the -D arguments. As noted earlier, the -D arguments declare Application Integrator™ environment variables to
override the values defined in the initial map component file.
In this example, the first -D specifies the file to use, the second
calls a second map component file (OTX12SO.att), the third
calls a map component file (OTX12Byp.att) to handle error processing, and the fourth provides a description of the activity
for the output and trace log. The AI Control Server ID is
specified for the Unix and Linux environment (-cs
$OT_QUEUEID) and the program is run interactively in both
cases (-I).
Unix and Linux:
inittrans -at OTEnvelp.att -cs $OT_QUEUEID
-DINPUT_FILE=OTX12O.txt -DMESSAGE=OTX12SO.att
-DBYPASS=OTX12Byp.att
-DACTIVITY_TYPE=“Invoice Processing” -I
Windows®:
otrun.exe -at OTEnvelp.att -cs %OT_QUEUEID%
-DINPUT_FILE=OTX12O.txt -DMESSAGE=OTX12SO.att
-DBYPASS=OTX12Byp.att
-DACTIVITY_TYPE=“Invoice Processing” -I


Terminating Runaway Translations

The programs inittrans/otrun.exe terminate immediately if not executed interactively (-I), or terminate automatically once the processing session is completed. If the inittrans/otrun.exe program does not terminate, kill the process without specifying a signal so that the AI Control Server is notified and makes the user slot available.
From the command line, type the following:
Unix and Linux:
inittrans -cs $OT_QUEUEID -list
The following appears:
Session Id:29
To kill the session, type:
inittrans -cs $OT_QUEUEID -kill 29
Windows®:
otrun -cs %OT_QUEUEID% -list
The list of active sessions is displayed in a dialog box. To kill a session (session 20 in this example), type:
otrun -cs %OT_QUEUEID% -kill 20
The result of the kill request is displayed in a dialog box.

If the Translator Does Not Execute Successfully

Consider the following pointers if you have trouble translating.

Unix and Linux Troubleshooting

Make sure the AI Control Server is running.
Make sure all programs in the ./bin directory are executable.
Make sure you have write permission on the following files:
<queue id>.s<session no>.log
<queue id>.e<session no>.log
<queue id>.tr<translator no>.log
Where
<queue id> is the ID of the AI Control Server that controls the
translator, the environment variable OT_QUEUEID.
<session no> is the translation session number maintained in
the AI Control Server’s home directory within the file “tsid.”
<translator no> is the translator sequence number that is
incremented for each translator invoked by the AI Control Server.
It is reset back to zero upon restart of the AI Control Server.
Upon beginning the AI Control Server, a translator is automatically
started, waiting for the first processing request from the AI Control
Server.
If a translator process locks a record, or if a record is in use by another user through Trade Guide, other translators wait and retry at 1-second intervals for up to 125 seconds to acquire the lock. After the 125 seconds have passed, the other translators terminate the translation process.
For outbound standard data, error #301, Envelope Substitution
Error, will be returned and logged into the process tracking
database.
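The retry behavior described above is a fixed-interval polling loop. The sketch below shows its general shape (1-second retries, 125 attempts); it is a model of the documented behavior, not translator code.

```python
import time

def acquire_with_retry(try_lock, attempts=125, interval=1.0):
    """Poll try_lock at fixed intervals until it succeeds or the
    attempts are exhausted; on failure the caller terminates the
    translation (error #301 for outbound standard data)."""
    for _ in range(attempts):
        if try_lock():
            return True
        time.sleep(interval)
    return False

# Example: a lock that becomes available on the third attempt.
state = {"calls": 0}
def flaky_lock():
    state["calls"] += 1
    return state["calls"] >= 3

print(acquire_with_retry(flaky_lock, attempts=5, interval=0))  # True
```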


Windows Troubleshooting

A “tmp” directory must be created after the Application Integrator™ installation is complete. You will receive an error message if a “tmp” directory was not created after Application Integrator™ was installed. Refer to the Control Server Installation Guide for information about setting up the “tmp” directory.

Using the Trace Log

This section describes the following information:
Ways to set the trace log
Viewing the trace log

Ways to Set the Trace Log

You can alter the amount of detail contained in the trace in four places:
The Workbench Translate dialog box
The environment (map component file)
Within the data model rules
At the command line

Through the Run Dialog Box

You can set the trace level through the Workbench Translate dialog box by accessing the Trace Settings dialog box (accessed from the Select button next to the Trace Level box). This method sets the trace level throughout the complete translation session. Refer to the procedures earlier in this section for instructions on completing this dialog box.


Refer to Defining a New Map Component File for information on


how to create a new map component file.

Using Data Model Rules

For any item in the translation session, you can define the trace reporting detail. Use the function SET_EVAR( ) and the keyword environment variable TRACE_LEVEL to do this:
[ ]
SET_EVAR(“TRACE_LEVEL”, 1023)

Note: The function must be performed for the trace level to be


altered. The action within a false conditional expression is not
performed; therefore, the trace level is not changed.


At the Command Line You can specify a trace level when invoking a translation by using
the -tl parameter and passing a numeric trace level value. For
example, a complete trace (1023) is specified with the following
line:
In Unix and Linux, type:
inittrans -at train1.att -cs $OT_QUEUEID -tl 1023 -I
In Windows®, type:
otrun.exe -at train1.att -cs %OT_QUEUEID% -tl 1023 -I
Refer to the earlier section, “Translating at the Command Line,” for
more details on command line parameters.
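The Unix/Linux and Windows invocations differ only in the launcher name and the environment-variable syntax. As an illustrative sketch (not part of the product), a small wrapper can assemble the argument list portably; the -at, -cs, -tl, and -I flags and the launcher names come from the examples above, while the helper function itself is hypothetical:

```python
import sys

def build_translate_cmd(att_file, queue_id, trace_level=1023):
    """Assemble the translator command line shown above.

    The launcher differs by platform (otrun.exe on Windows, inittrans on
    Unix/Linux); the flags are the ones documented in this section.
    """
    launcher = "otrun.exe" if sys.platform.startswith("win") else "inittrans"
    return [launcher, "-at", att_file, "-cs", queue_id,
            "-tl", str(trace_level), "-I"]

# Full trace (1023) for train1.att; pass the value of OT_QUEUEID as queue_id.
cmd = build_translate_cmd("train1.att", "myqueue", 1023)
# subprocess.run(cmd) would start the translation; not executed here.
print(" ".join(cmd))
```

Passing the trace level as a number keeps the wrapper usable for the partial trace values (12, 16) discussed under Debugging Hints later in this section.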

Viewing the Trace Log

You can view the trace log through Workbench, at the Unix or Linux
command line, or by opening the trace log in an editor.

To view the trace log through Workbench


1. From the Test menu, select Translate.
2. Run the translation.

3. The trace is displayed in the Trace Log tab.


4. You can search for text within the trace log by using the Search
menu option.

You can narrow your search in the following ways:

Match case: Select the Match case box to do this.
Use a regular expression: Select the Regular expression box to do this.
Choose the Up or Down radio button to indicate the
direction of the search.

Choose the Cancel button to exit the Search dialog box.

A trace log file is generated for each translation session.


Understanding the Trace Log

Organization of a Trace Log

The typical organization of a trace is:
Translation Initialization:
Translator version, compile date/time
Shared library initialization - version, compile date/time

Map Component File Initialization:
Input file and output file definition and device type - “std file device”

Source Processing:
Source data model processing - only if source data model declared
in this environment
Source data and access model filenames —
“SOURCE_TARGET get_source acc OTFixed.acc model
\Trandev52\models\Examples.mdl”
Source item processing — repeat for each item
Data model item label
Access parsing / character reading - tags, containers,
defining
Rules performed, modes (Present/Absent/Error), conditions
and actions
Source values (values are not seen if the data model is
exited, i.e., “EXIT 503”)
Data model item values
Array values (ARRAY->) declared within this source data
model, in sequence declared
Temporary variable values (VAR->) declared within this
source data model, in sequence declared

Target Processing, Phase 1:


(Only if target data model declared in this environment)
Target data and access model filenames -
“SOURCE_TARGET put_target acc OTFixed.acc model
\Trandev52\models\Examplet.mdl”
Target item processing - repeat for each item
Data model item label
Rules performed - modes (Present/Absent/Error) -
conditions and actions
Target values (values are not seen if the data model is exited,
i.e., “EXIT 503”)
Array values (ARRAY->) declared within the source and
target data models, in sequence declared
Temporary variable values (VAR->) declared within the
source and target data models, in sequence declared
Value stack (in the order values were assigned to the target
data model items)

Target Processing, Phase 2:


(There are no rules executed in this phase.)
Value stack writing off (can encounter truncation (185) or
out of character set (184) errors); be sure the parent
environment properly handles this.
Closing file(s) -input/output streams
Unlocking administration file records - closes each file, if last
translator to reference
Return status to parent map component file/environment

Note: Processing during Phase 1 can be controlled in a
data model using the TRUNCATION_FLG keyword
environment variable and the LAST_TRUNC function.
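Since each data model item in the trace is prefixed with one '|' per nesting level (see the level entry in the keyword table later in this section), a few lines of script can summarize the shape of a large trace before you read it line by line. This is a hedged sketch against the trace excerpts shown in this section, not a supported utility:

```python
def items_per_level(lines):
    """Count 'DM:' item references at each nesting depth.

    The run of leading '|' characters mirrors the data model level, and
    'DM:' marks the initial reference to an item instance.
    """
    counts = {}
    for line in lines:
        depth = len(line) - len(line.lstrip("|"))
        if line.lstrip("| ").startswith("DM:"):
            counts[depth] = counts.get(depth, 0) + 1
    return counts

# A few lines taken from the example trace later in this section.
sample = [
    "| DM: Comment instance 0 level 1",
    "|| DM: FirstName instance 0 level 2",
    "|| DM: LastName instance 0 level 2",
]
print(items_per_level(sample))  # {1: 1, 2: 2}
```

A lopsided count at one level is often the first hint of the excessive-looping situation described under Debugging Hints.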

Debugging Hints

The following are hints on using the trace log for debugging.

To See Full Trace

Consider setting a full trace (1023) to see all details. To see values
as they are formatted and written off the value stack, you must
use the full trace level setting.

Note: To avoid possible problems if your trace log becomes too
large, make sure that your system administrator has set the
Unix/Linux ulimit to a reasonable size for your operating
system environment. Most operating systems allow a default
maximum file size value of 4 gigabytes. Check with your system
administrator to verify or set this limit.
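On Unix/Linux you can check the limit this note refers to with the shell's ulimit builtin. The reported unit is shell-dependent (bash, for example, uses 1024-byte blocks for -f), so consult your shell's manual before changing it:

```shell
# Show the current per-process maximum file size:
# prints "unlimited" or a block count.
ulimit -f

# Example only (commented out): cap files created by this shell session.
# ulimit -f 4194304
```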

Excessive Looping Concern

Use a trace level value of “12”; this shows only the item labels
with their occurrences.

Pinpointing Rule Execution Errors

Set the trace level value to “16”; this shows the items with rule
errors:

LastName - PRESENT rules
<3> returning eval not instantiated
LastName: ERROR->> status after PRESENT rules 139

vs.

LastName - PRESENT rules
LastName: status OK after PRESENT rules
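The trace level values quoted in these hints (1023, 12, and 16) behave like sums of bit flags: 1023 sets all ten low-order bits, 12 is 4 + 8, and 16 is a single higher bit. The guide does not document what each individual bit enables, so the decomposition below is plain arithmetic, not an official flag map:

```python
# "Full trace": all ten low-order bits set.
assert 1023 == (1 << 10) - 1 == 0b1111111111

# "Excessive looping" level: two adjacent bits (4 and 8).
assert 12 == 4 + 8 == 0b1100

# "Rule execution errors" level: one bit on its own.
assert 16 == 1 << 4 == 0b10000

print("trace-level decompositions verified")
```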

Keywords to Look For


Keyword Meaning
ABSENT Indicates that the rules that follow will
be performed if the item is found to be
absent. The translator determines which
mode it is in by evaluating the current
error code value after parsing a data
model item (if it is a defining item).
ARRAY VALUES Indicates that keywords are marking the
start of a summarization of values
assigned to all array variables on the
source and target data models.
assignment to XXXXXXX Indicates that an action is being taken;
assigns a value to variable XXXXXXX.
Map Component File Where a map component file occurs, i.e.,
ATTACH “OTX12Env.att”
map component filename i.e., “OTCmt1.att”
Calling function Indicates that data model function
XXXXXXX XXXXXXX is being called.
constpush n.nnnnnn Identifies the value (n.nnnnnn) of a
numeric constant being supplied,
usually as a parameter to a function
such as STRSUBS.
data model item’s i.e., “LastName”
label
data model name i.e., “OTCmt1.mdl”
dm left nn .. nn For a data model item defined as a
right nn .. nn numeric, this key phrase indicates the
radix left nn ... nn minimum to maximum number of digits
which may appear to the left of the
decimal point.
dm XXXXXXX pic Identifies the masking type for a data
nnn model XXXXXXX.
DM: XXXXXXX Identifies the initial reference to an
instance of a data model item.

ENTER fill_sdm( ): Identifies the instance of parent group
XXXXXXX parent- item XXXXXXX. The parent instance is
>instance n incremented when a new hierarchical
level is encountered.
Equal returns Indicates the result of an evaluation or
True/False condition statement.
err_dump( ) Indicates that the content of the error
dump stack is being discarded or reset
to a non-error state.
err_push( ) status nnn Assigns internal setting of an error value
msg “xxxxxxxx” to alter the state of processing
depending on keyword used, data
encountered, or data not encountered.
ERROR Indicates that the rules that follow will
be performed if the item is found to be
in error. The translator determines
which mode it is in by evaluating the
current error code value after parsing a
data model item (if it is a defining item).
error number For example, “184”
Evaluated value Displays the contents of the item just
before performing rules associated with
the item.
exec_ops status Identifies the status returned by a
condition or actions.
FINISHED Identifies the end of processing of an
XXXXXXX: occurrence of a data model item.
fp_save set to nnn in Identifies the last position read from the
xxxxxxx input stream.

FUNC XXXXXXX set Identifies where an access model item is
to value xxxx <nnnn> being set to a value of xxxx, where the
decimal value of xxxx is nnnn. The data
model “SET” functions (SET_DECIMAL,
SET_RELEASE, SET_FIRST_DELIM,
SET_SECOND_DELIM,
SET_THIRD_DELIM,
SET_FOURTH_DELIM, and
SET_FIFTH_DELIM) set the access
model items.
Function xxxxxx_in Displays the source access model
returns value: function (xxxxxx_in) used to return a
“yyyyyy” value of yyyyyy from the data input
stream.
Function xxxxxx_out: Displays the target data model output
dm yyyyyy pic function called (xxxxxx_out) to define
FFFFFFF the item type for data model item
yyyyyy. The output format for the item
is defined with the mask format
FFFFFFF.
get_source Where source data model processing
occurs.
infile “xxxxxxx” Specifies the name of the file used to
supply input data for processing.
initial pre condition Indicates that an access model function
XXXXXXX met call has returned a valid value for an
item pre-condition and assigned its
value to data model item XXXXXXX.
instance nnn Identifies which instance of the item was
acted upon. The instance is incremented
by a group. Instances are not
incremented by defining items or tag
items. Instances are reset when control
passes back to the parent item.
IO: char “x”, out of Indicates that data found in the input
char set stream is outside the range of the valid
characters defined for the item.

IO: func XXXXXXX Indicates that the access model function
called XXXXXXX is being called to return a
value to the data model.
level nnn Starting from the top of the data model
and working downward, the level is a
reference to the number of unique data
model group items encountered. This
key word allows you to more easily
determine where the item occurs in the
data model. The level is also referenced
by the pipe ‘|’ symbols along the left
side of the trace file as each data model
item is encountered and when
processing of the item is finished. One
‘|’ is displayed for each level, i.e., “|||”
would be displayed at level 3.
litpush “X” Defines the value of a literal constant
usually assigned to a variable.
Matching XXXXXXX Shows the comparison performed for a
to YYYYYYY tag item. The data taken from the input
stream (XXXXXXX) is compared against
the tag (YYYYYYY) for a match.
max occurrence nnnn Identifies the maximum number of
times that an item has been defined to
occur successively in a looping
sequence.
occurrence nnnn Identifies the number of times the item
was encountered within a looping
sequence.
outfile “xxxxxxx” Specifies the name of the file used to
write processed data out to the disk.
pre_cond xxxxxxx An item is identified if the data found
(not) found meets the requirements of pre_condition
- item - post_condition. This statement
indicates whether the pre_condition has
been found.

PRESENT Indicates that the rules that follow will
be performed if the item is found to be
present. The translator determines
which mode it is in by evaluating the
current error code value after parsing a
data model item (if it is a defining item).
An error code value of “0” defines
present mode - the data model item is
present.
put_target Where target data model processing
occurs.
radix Defines the character to be used to
signify the decimal place within a
numeric item definition.
read_set returning Displays the returned value of a
xxxx character string read in from the data
stream.
resetting fp to nnn Indicates that the data file pointer is
being reset to value “nnn,” usually the
last value referenced for functions such
as #SET_FIRST_DELIM,
#SET_SECOND_DELIM,
#SET_THIRD_DELIM,
#SET_FOURTH_DELIM, and
#SET_FIFTH_DELIM or when an item is
in error. The file pointer may also be
reset to the beginning of the file when
keywords REJECT or RELEASE are
encountered in the data model.
right nn .. nn For a data model item defined as a
numeric, this indicates the minimum to
maximum number of digits which may
appear to the right of the decimal point.
SENTINEL sequence Indicates that sentinels occur between
each grouping of occurrences.

SOURCE VALUES Indicates that keywords are marking the
start of a summarization of source
values. This summarization comes at
the end of the trace file listing for the
source data model.
SOURCE_TARGET Indicates that keywords specifying the
put_target acc names of target access and data models
xxxxxxx model are loaded at this point in the
yyyyyyy.mdl trace/translation.
status Identifies the status returned by
operations performed on the parent
item.
statusc Identifies the status returned by
operations performed on an item’s
children.
TARGET VALUES Indicates that keywords are marking the
start of a summarization of target
values. This summarization comes at
the end of the trace file listing for a
target data model.
TIME_in The “_in” (for source data model
TIME_out processing) or “_out” (for target data
model processing) suffix is added to
function names such as TIME and DATE
to indicate access model functions.
VALUE STACK Values parsed or constructed and placed
on the value stack, beginning of Phase 2
target processing.
VAR VALUES Indicates that keywords are marking the
start of a summarization of values
assigned to all temporary variables on
the source and target data models.
VSTK The writing of each value off the value
stack - Phase 2 of target processing
VSTK->dm xxxxxxx Shows a walkthrough of data model
dm xxxxxxx value value assignments and matching
nnnn attempts.

XXXXXX read_set err Indicates that a value read in from the
input stream is outside of the character
set XXXXXX.
XXXXXXX: status OK Indicates that rules on data model item
after PRESENT rules XXXXXXX were performed successfully.
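Several of these keywords are easiest to exploit mechanically. As an illustrative sketch (not a supported tool), the snippet below pulls out every item that ended a rule mode in error, using the `XXXXXXX: ERROR->> status after ... rules nnn` pattern from the table:

```python
import re

# Matches lines such as:
#   LastName: ERROR->> status after PRESENT rules 139
ERROR_LINE = re.compile(r"(\w+): ERROR->> status after (\w+) rules (\d+)")

def rule_errors(lines):
    """Return (item, mode, status) for each rule-mode error in a trace."""
    hits = []
    for line in lines:
        m = ERROR_LINE.search(line)
        if m:
            hits.append((m.group(1), m.group(2), int(m.group(3))))
    return hits

# Lines borrowed from the examples in this section.
sample = [
    "LastName - PRESENT rules",
    "LastName: ERROR->> status after PRESENT rules 139",
    "PhoneNumber: ERROR->> status after PRESENT rules 140",
]
print(rule_errors(sample))
```

Each status code (139, 140, and so on) can then be looked up once, instead of scrolling through the full trace at level 1023.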

Debugging when Processing Large Input Files

1. Insert messages in the model using the SEND_SMSG function
to mark progress during the translation at a high level, for
example:

SEND_SMSG(2, “Field Content:”, Field1)

2. Turn the trace on and off from within the model to get more
detail:

SET_EVAR("TRACE_LEVEL", 1023)

Refer to Appendix A. Application Integrator Model Functions
in Workbench User’s Guide-Appendix for a complete description
of these functions.

Example Trace Log

The following is an example using trace level 1023 (all options
selected).

(Output begins here)

TRANSLATOR ACCELERATOR TRACE LOG [ Version 5.2.0.5 otBuild:
12/04/07 at: 16:19:30 ]
Copyright 2005. Global eXchange Services, Inc. All rights
reserved.
Start of translation at Mon Jun 30 14:32:02 2008
Execution locale: English_UnitedStates.Latin1@Binary
infile "data.in" outfile "data.out"
defaulting to std file device on "data.in"
defaulting to std file device on "data.out"
SOURCE_TARGET get_source acc C:\Trandev52\models\OTFixed.acc
model Examples.mdl
| ----------------
| ENTER fill_sdm(): Comment parent->instance 0
| ----------------
| DM: Comment instance 0 level 1
fp_save set to 0 in Comment
line number: 0; position: 0

Comment - PRESENT rules
[ NULL CONDITION ]
litpush: "11/04/2005" assignment to VAR->LastChangeDate:
"11/04/2005"
| END RULE |
Comment: status OK after PRESENT rules
| FINISHED Comment: status 0 statusc 0 occurrence 1 max
occurrence 1
| DM: InputRecord instance 0 level 1
fp_save set to 0 in InputRecord
line number: 0; position: 0
InputRecord Matching to dm->dh 70292496
fill_sdm initial pre condition Rec_Code met
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 0 in FirstName
line number: 0; position: 0
IO: c = 77 <M> Set start_char 65 end_char 90
IO: c = 97 <a> Set start_char 65 end_char 90
IO: c = 97 <a> Set start_char 97 end_char 122
IO: c = 114 <r> Set start_char 65 end_char 90
IO: c = 114 <r> Set start_char 97 end_char 122
IO: c = 121 <y> Set start_char 65 end_char 90
IO: c = 121 <y> Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: read_set returning "Mary "; byte length = 5
|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max
occurrence 1
|| DM: LastName instance 0 level 2
fp_save set to 5 in LastName
line number: 0; position: 5
IO: c = 83 <S> Set start_char 65 end_char 90
IO: c = 109 <m> Set start_char 65 end_char 90
IO: c = 109 <m> Set start_char 97 end_char 122
IO: c = 105 <i> Set start_char 65 end_char 90
IO: c = 105 <i> Set start_char 97 end_char 122
IO: c = 116 <t> Set start_char 65 end_char 90
IO: c = 116 <t> Set start_char 97 end_char 122
IO: c = 104 <h> Set start_char 65 end_char 90
IO: c = 104 <h> Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: read_set returning "Smith "; byte length = 6

|| FINISHED LastName: status 0 statusc 0 occurrence 1 max
occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 11 in PhoneNumber
line number: 0; position: 11
IO: func NUMERIC called
Function NUMERIC_in: dm PhoneNumber pic "9999999999"
dm left 10 .. 10 right 0 .. 0 radix
IO: c = 51 <3> Set start_char 32 end_char 255
IO: c = 48 <0> Set start_char 32 end_char 255
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 48 <0> Set start_char 32 end_char 255
IO: c = 57 <9> Set start_char 32 end_char 255
IO: c = 54 <6> Set start_char 32 end_char 255
IO: c = 49 <1> Set start_char 32 end_char 255
IO: read_set returning "3055550961"; byte length = 10
Function NUMERIC_in returns value: "3055550961"
|| FINISHED PhoneNumber: status 0 statusc 0 occurrence 1 max
occurrence 1
IO: integer 13 read.
IO: integer 10 read.
InputRecord - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ FirstName Evaluated value "Mary "
assignment to ARRAY->first: "Mary "
| END RULE |
[ NULL CONDITION ]
+++ EVAL +++ LastName Evaluated value "Smith "
assignment to ARRAY->last: "Smith "
| END RULE |
[ NULL CONDITION ]
+++ EVAL +++ PhoneNumber Evaluated value "3055550961"
assignment to ARRAY->phone: "3055550961"
| END RULE |
InputRecord: status OK after PRESENT rules
| FINISHED InputRecord: status 0 statusc 0 occurrence 1 max
occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 23 in InputRecord
line number: 0; position: 21
InputRecord Matching to dm->dh 70292636
fill_sdm initial pre condition Rec_Code met
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------

|| DM: FirstName instance 0 level 2
fp_save set to 23 in FirstName
line number: 0; position: 21
IO: c = 74 <J> Set start_char 65 end_char 90
IO: c = 111 <o> Set start_char 65 end_char 90
IO: c = 111 <o> Set start_char 97 end_char 122
IO: c = 104 <h> Set start_char 65 end_char 90
IO: c = 104 <h> Set start_char 97 end_char 122
IO: c = 110 <n> Set start_char 65 end_char 90
IO: c = 110 <n> Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: read_set returning "John "; byte length = 5
|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max
occurrence 1
|| DM: LastName instance 0 level 2
fp_save set to 28 in LastName
line number: 0; position: 26
IO: c = 71 <G> Set start_char 65 end_char 90
IO: c = 114 <r> Set start_char 65 end_char 90
IO: c = 114 <r> Set start_char 97 end_char 122
IO: c = 101 <e> Set start_char 65 end_char 90
IO: c = 101 <e> Set start_char 97 end_char 122
IO: c = 101 <e> Set start_char 65 end_char 90
IO: c = 101 <e> Set start_char 97 end_char 122
IO: c = 110 <n> Set start_char 65 end_char 90
IO: c = 110 <n> Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: read_set returning "Green "; byte length = 6
|| FINISHED LastName: status 0 statusc 0 occurrence 1 max
occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 34 in PhoneNumber
line number: 0; position: 32
IO: func NUMERIC called
Function NUMERIC_in: dm PhoneNumber pic "9999999999"
dm left 10 .. 10 right 0 .. 0 radix
IO: c = 50 <2> Set start_char 32 end_char 255
IO: c = 49 <1> Set start_char 32 end_char 255
IO: c = 50 <2> Set start_char 32 end_char 255
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 49 <1> Set start_char 32 end_char 255
IO: c = 50 <2> Set start_char 32 end_char 255

IO: c = 49 <1> Set start_char 32 end_char 255
IO: c = 50 <2> Set start_char 32 end_char 255
IO: read_set returning "2125551212"; byte length = 10
Function NUMERIC_in returns value: "2125551212"
|| FINISHED PhoneNumber: status 0 statusc 0 occurrence 1 max
occurrence 1
IO: integer 13 read.
IO: integer 10 read.
InputRecord - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ FirstName Evaluated value "John "
assignment to ARRAY->first: "John "
| END RULE |
[ NULL CONDITION ]
+++ EVAL +++ LastName Evaluated value "Green "
assignment to ARRAY->last: "Green "
| END RULE |
[ NULL CONDITION ]
+++ EVAL +++ PhoneNumber Evaluated value "2125551212"
assignment to ARRAY->phone: "2125551212"
| END RULE |
InputRecord: status OK after PRESENT rules
| FINISHED InputRecord: status 0 statusc 0 occurrence 2 max
occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 46 in InputRecord
line number: 0; position: 42
InputRecord Matching to dm->dh 70292692
fill_sdm initial pre condition Rec_Code met
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 46 in FirstName
line number: 0; position: 42
IO: c = 66 <B> Set start_char 65 end_char 90
IO: c = 111 <o> Set start_char 65 end_char 90
IO: c = 111 <o> Set start_char 97 end_char 122
IO: c = 98 <b> Set start_char 65 end_char 90
IO: c = 98 <b> Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: read_set returning "Bob "; byte length = 5

|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max
occurrence 1
|| DM: LastName instance 0 level 2
fp_save set to 51 in LastName
line number: 0; position: 47
IO: c = 74 <J> Set start_char 65 end_char 90
IO: c = 111 <o> Set start_char 65 end_char 90
IO: c = 111 <o> Set start_char 97 end_char 122
IO: c = 110 <n> Set start_char 65 end_char 90
IO: c = 110 <n> Set start_char 97 end_char 122
IO: c = 101 <e> Set start_char 65 end_char 90
IO: c = 101 <e> Set start_char 97 end_char 122
IO: c = 115 <s> Set start_char 65 end_char 90
IO: c = 115 <s> Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: read_set returning "Jones "; byte length = 6
|| FINISHED LastName: status 0 statusc 0 occurrence 1 max
occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 57 in PhoneNumber
line number: 0; position: 53
IO: func NUMERIC called
Function NUMERIC_in: dm PhoneNumber pic "9999999999"
dm left 10 .. 10 right 0 .. 0 radix
IO: c = 51 <3> Set start_char 32 end_char 255
IO: c = 49 <1> Set start_char 32 end_char 255
IO: c = 51 <3> Set start_char 32 end_char 255
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 52 <4> Set start_char 32 end_char 255
IO: c = 48 <0> Set start_char 32 end_char 255
IO: c = 49 <1> Set start_char 32 end_char 255
IO: c = 54 <6> Set start_char 32 end_char 255
IO: c = 48 <0> Set start_char 32 end_char 255
IO: c = 48 <0> Set start_char 32 end_char 255
IO: read_set returning "3135401600"; byte length = 10
Function NUMERIC_in returns value: "3135401600"
|| FINISHED PhoneNumber: status 0 statusc 0 occurrence 1 max
occurrence 1
IO: integer 13 read.
IO: integer 10 read.
InputRecord - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ FirstName Evaluated value "Bob "
assignment to ARRAY->first: "Bob "
| END RULE |
[ NULL CONDITION ]

+++ EVAL +++ LastName Evaluated value "Jones "
assignment to ARRAY->last: "Jones "
| END RULE |
[ NULL CONDITION ]
+++ EVAL +++ PhoneNumber Evaluated value "3135401600"
assignment to ARRAY->phone: "3135401600"
| END RULE |
InputRecord: status OK after PRESENT rules
| FINISHED InputRecord: status 0 statusc 0 occurrence 3 max
occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 69 in InputRecord
line number: 0; position: 63
InputRecord Matching to dm->dh 70292664
fill_sdm initial pre condition Rec_Code met
|| ----------------
|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 69 in FirstName
line number: 0; position: 63
IO: c = 83 <S> Set start_char 65 end_char 90
IO: c = 117 <u> Set start_char 65 end_char 90
IO: c = 117 <u> Set start_char 97 end_char 122
IO: c = 101 <e> Set start_char 65 end_char 90
IO: c = 101 <e> Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: c = 32 < > Set start_char 65 end_char 90
IO: c = 32 < > Set start_char 97 end_char 122
IO: c = 32 < > Set start_char 32 end_char 32
IO: read_set returning "Sue "; byte length = 5
|| FINISHED FirstName: status 0 statusc 0 occurrence 1 max
occurrence 1
|| DM: LastName instance 0 level 2
fp_save set to 74 in LastName
line number: 0; position: 68
IO: c = 87 <W> Set start_char 65 end_char 90
IO: c = 105 <i> Set start_char 65 end_char 90
IO: c = 105 <i> Set start_char 97 end_char 122
IO: c = 108 <l> Set start_char 65 end_char 90
IO: c = 108 <l> Set start_char 97 end_char 122
IO: c = 108 <l> Set start_char 65 end_char 90
IO: c = 108 <l> Set start_char 97 end_char 122
IO: c = 105 <i> Set start_char 65 end_char 90
IO: c = 105 <i> Set start_char 97 end_char 122
IO: c = 115 <s> Set start_char 65 end_char 90

IO: c = 115 <s> Set start_char 97 end_char 122
IO: read_set returning "Willis"; byte length = 6
|| FINISHED LastName: status 0 statusc 0 occurrence 1 max
occurrence 1
|| DM: PhoneNumber instance 0 level 2
fp_save set to 80 in PhoneNumber
line number: 0; position: 74
IO: func NUMERIC called
Function NUMERIC_in: dm PhoneNumber pic "9999999999"
dm left 10 .. 10 right 0 .. 0 radix
IO: c = 53 <5> Set start_char 32 end_char 255
IO: c = 49 <1> Set start_char 32 end_char 255
IO: c = 55 <7> Set start_char 32 end_char 255
IO: c = 56 <8> Set start_char 32 end_char 255
IO: c = 51 <3> Set start_char 32 end_char 255
IO: c = 57 <9> Set start_char 32 end_char 255
IO: c = 56 <8> Set start_char 32 end_char 255
IO: c = 48 <0> Set start_char 32 end_char 255
IO: c = 48 <0> Set start_char 32 end_char 255
IO: c = 56 <8> Set start_char 32 end_char 255
IO: read_set returning "5178398008"; byte length = 10
Function NUMERIC_in returns value: "5178398008"
|| FINISHED PhoneNumber: status 0 statusc 0 occurrence 1 max
occurrence 1
IO: integer 13 read.
IO: integer 10 read.
InputRecord - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ FirstName Evaluated value "Sue "
assignment to ARRAY->first: "Sue "
| END RULE |
[ NULL CONDITION ]
+++ EVAL +++ LastName Evaluated value "Willis"
assignment to ARRAY->last: "Willis"
| END RULE |
[ NULL CONDITION ]
+++ EVAL +++ PhoneNumber Evaluated value "5178398008"
assignment to ARRAY->phone: "5178398008"
| END RULE |
InputRecord: status OK after PRESENT rules
| FINISHED InputRecord: status 0 statusc 0 occurrence 4 max
occurrence 100
| DM: InputRecord instance 0 level 1
fp_save set to 92 in InputRecord
line number: 0; position: 84
InputRecord Matching to dm->dh 70292496
fill_sdm initial pre condition Rec_Code met
|| ----------------

|| ENTER fill_sdm(): FirstName parent->instance 0
|| ----------------
|| DM: FirstName instance 0 level 2
fp_save set to 92 in FirstName
line number: 0; position: 84
err_push() status 171 msg "" type 4 errstkp 0
--------------
*** SOURCE VALUES ***
--------------
DM: FirstName val "" instance 0 p_inst 0
p_seq 0 level 2
DM: FirstName val "" instance 0 level 2
end FirstName values
SENTINEL sequence 0
end InputRecord values
----------
M_L VALUES
----------
----------
VAR VALUES
----------
DM: LastChangeDate val "11/04/2005" instance 0 p_inst 0
p_seq 0 level 0
DM: LastChangeDate val "11/04/2005" instance 0 level 0
SENTINEL sequence 0
end LastChangeDate values
----------
ARRAY VALUES
----------
DM: first val "Mary " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Mary " instance 0 level 2
DM: first val "John " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "John " instance 0 level 2
DM: first val "Bob " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Bob " instance 0 level 2
DM: first val "Sue " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val "Smith " instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Smith " instance 0 level 2
DM: last val "Green " instance 0 p_inst 0
p_seq 0 level 2

DM: last val "Green " instance 0 level 2
DM: last val "Jones " instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Jones " instance 0 level 2
DM: last val "Willis" instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val "3055550961" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "3055550961" instance 0 level 2
DM: phone val "2125551212" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "2125551212" instance 0 level 2
DM: phone val "3135401600" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "3135401600" instance 0 level 2
DM: phone val "5178398008" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "5178398008" instance 0 level 2
SENTINEL sequence 0
end phone values
SOURCE_TARGET put_target acc C:\Trandev52\models\OTFixed.acc
model Examplet.mdl
| DM: Comment instance 0 cur_tdm_inst 0
Comment - PRESENT rules
[ NULL CONDITION ]
litpush: "11/04/2005" assignment to VAR->LastChangeDate:
"11/04/2005"
| END RULE |
Comment: status OK after PRESENT rules
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->phone Evaluated value "3055550961"
assignment to PhoneNumber: "3055550961"
| END RULE |
PhoneNumber: status OK after PRESENT rules
|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->last Evaluated value "Smith "
assignment to LastName: "Smith "
| END RULE |
LastName: status OK after PRESENT rules
|| DM: FirstName instance 0 cur_tdm_inst 0

FirstName - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->first Evaluated value "Mary "
assignment to FirstName: "Mary "
| END RULE |
FirstName: status OK after PRESENT rules
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->phone Evaluated value "2125551212"
assignment to PhoneNumber: "2125551212"
| END RULE |
PhoneNumber: status OK after PRESENT rules
|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->last Evaluated value "Green "
assignment to LastName: "Green "
| END RULE |
LastName: status OK after PRESENT rules
|| DM: FirstName instance 0 cur_tdm_inst 0
FirstName - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->first Evaluated value "John "
assignment to FirstName: "John "
| END RULE |
FirstName: status OK after PRESENT rules
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->phone Evaluated value "3135401600"
assignment to PhoneNumber: "3135401600"
| END RULE |
PhoneNumber: status OK after PRESENT rules
|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->last Evaluated value "Jones "
assignment to LastName: "Jones "
| END RULE |
LastName: status OK after PRESENT rules
|| DM: FirstName instance 0 cur_tdm_inst 0

FirstName - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->first Evaluated value "Bob "
assignment to FirstName: "Bob "
| END RULE |
FirstName: status OK after PRESENT rules
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->phone Evaluated value "5178398008"
assignment to PhoneNumber: "5178398008"
| END RULE |
PhoneNumber: status OK after PRESENT rules
|| DM: LastName instance 0 cur_tdm_inst 0
LastName - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->last Evaluated value "Willis"
assignment to LastName: "Willis"
| END RULE |
LastName: status OK after PRESENT rules
|| DM: FirstName instance 0 cur_tdm_inst 0
FirstName - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->first Evaluated value "Sue "
assignment to FirstName: "Sue "
| END RULE |
FirstName: status OK after PRESENT rules
|| FINISHED FirstName: returning status 0
OutputRecord: status after children 0
| DM: OutputRecord instance 0 cur_tdm_inst 0
|| DM: PhoneNumber instance 0 cur_tdm_inst 0
PhoneNumber - PRESENT rules
[ NULL CONDITION ]
+++ EVAL +++ ARRAY->phone eval: no match inst
exec_ops status 140
| END RULE |
PhoneNumber: ERROR->> status after PRESENT rules 140
|| FINISHED PhoneNumber: returning status 139
err_push() status 139 msg "Returning no data/instance
PhoneNumber status 139" type 4 errstkp 0
OutputRecord: status after children 139
| FINISHED OutputRecord: returning status 0
---------------------------------------
*** TARGET VALUES *********************
---------------------------------------

----------
VAR values
----------
DM: LastChangeDate val "11/04/2005" instance 0 p_inst 0
p_seq 0 level 0
DM: LastChangeDate val "11/04/2005" instance 0 level 0
SENTINEL sequence 0
end LastChangeDate values
----------
ARRAY values
----------
DM: first val "Mary " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Mary " instance 0 level 2
DM: first val "John " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "John " instance 0 level 2
DM: first val "Bob " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Bob " instance 0 level 2
DM: first val "Sue " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val "Smith " instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Smith " instance 0 level 2
DM: last val "Green " instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Green " instance 0 level 2
DM: last val "Jones " instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Jones " instance 0 level 2
DM: last val "Willis" instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val "3055550961" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "3055550961" instance 0 level 2
DM: phone val "2125551212" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "2125551212" instance 0 level 2
DM: phone val "3135401600" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "3135401600" instance 0 level 2

DM: phone val "5178398008" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "5178398008" instance 0 level 2
SENTINEL sequence 0
end phone values
-----------
VALUE STACK
-----------
DM: PhoneNumber [3055550961] inst 0 vstk 24921184
DM: LastName [Smith ] inst 0 vstk 24921196
DM: FirstName [Mary ] inst 0 vstk 24921208
DM: PhoneNumber [2125551212] inst 0 vstk 24921220
DM: LastName [Green ] inst 0 vstk 24921232
DM: FirstName [John ] inst 0 vstk 24921244
DM: PhoneNumber [3135401600] inst 0 vstk 24921256
DM: LastName [Jones ] inst 0 vstk 24921268
DM: FirstName [Bob ] inst 0 vstk 24921280
DM: PhoneNumber [5178398008] inst 0 vstk 24921292
DM: LastName [Willis] inst 0 vstk 24921304
DM: FirstName [Sue ] inst 0 vstk 24921316
vstk 24921184
vstk dm PhoneNumber val 3055550961
VSTK-> dm PhoneNumber dm Comment value 3055550961
vstk 24921184
vstk dm PhoneNumber val 3055550961
VSTK-> dm PhoneNumber dm OutputRecord value 3055550961
vstk 24921184
vstk dm PhoneNumber val 3055550961
VSTK-> dm PhoneNumber dm PhoneNumber value 3055550961
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: 9999999999
dm value "3055550961"
NUMERIC_out: return value "3055550961"
vstk 24921196
vstk dm LastName val Smith
VSTK-> dm LastName dm LastName value Smith
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 24921208
vstk dm FirstName val Mary
VSTK-> dm FirstName dm FirstName value Mary
Writing dm FirstName acc AlphaFld
Writing dm FirstName acc SET OBJ
returning 3 writes
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc CONJUNCTION OBJ
Writing dm OutputRecord acc CarriageReturn

Writing dm OutputRecord acc INTEGER OBJ
Writing dm OutputRecord acc INTEGER OBJ
vstk 24921220
vstk dm PhoneNumber val 2125551212
VSTK-> dm PhoneNumber dm OutputRecord value 2125551212
vstk 24921220
vstk dm PhoneNumber val 2125551212
VSTK-> dm PhoneNumber dm PhoneNumber value 2125551212
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: 9999999999
dm value "2125551212"
NUMERIC_out: return value "2125551212"
vstk 24921232
vstk dm LastName val Green
VSTK-> dm LastName dm LastName value Green
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 24921244
vstk dm FirstName val John
VSTK-> dm FirstName dm FirstName value John
Writing dm FirstName acc AlphaFld
Writing dm FirstName acc SET OBJ
returning 3 writes
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc CONJUNCTION OBJ
Writing dm OutputRecord acc CarriageReturn
Writing dm OutputRecord acc INTEGER OBJ
Writing dm OutputRecord acc INTEGER OBJ
vstk 24921256
vstk dm PhoneNumber val 3135401600
VSTK-> dm PhoneNumber dm OutputRecord value 3135401600
vstk 24921256
vstk dm PhoneNumber val 3135401600
VSTK-> dm PhoneNumber dm PhoneNumber value 3135401600
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: 9999999999
dm value "3135401600"
NUMERIC_out: return value "3135401600"
vstk 24921268
vstk dm LastName val Jones
VSTK-> dm LastName dm LastName value Jones
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 24921280
vstk dm FirstName val Bob
VSTK-> dm FirstName dm FirstName value Bob

Writing dm FirstName acc AlphaFld
Writing dm FirstName acc SET OBJ
returning 3 writes
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc CONJUNCTION OBJ
Writing dm OutputRecord acc CarriageReturn
Writing dm OutputRecord acc INTEGER OBJ
Writing dm OutputRecord acc INTEGER OBJ
vstk 24921292
vstk dm PhoneNumber val 5178398008
VSTK-> dm PhoneNumber dm OutputRecord value 5178398008
vstk 24921292
vstk dm PhoneNumber val 5178398008
VSTK-> dm PhoneNumber dm PhoneNumber value 5178398008
Writing dm PhoneNumber acc NumericFld
Writing dm PhoneNumber acc NUMERIC
Function NUMERIC_out: dm PhoneNumber pic: 9999999999
dm value "5178398008"
NUMERIC_out: return value "5178398008"
vstk 24921304
vstk dm LastName val Willis
VSTK-> dm LastName dm LastName value Willis
Writing dm LastName acc AlphaFld
Writing dm LastName acc SET OBJ
vstk 24921316
vstk dm FirstName val Sue
VSTK-> dm FirstName dm FirstName value Sue
Writing dm FirstName acc AlphaFld
Writing dm FirstName acc SET OBJ
Writing dm OutputRecord acc LineFeed
Writing dm OutputRecord acc CONJUNCTION OBJ
Writing dm OutputRecord acc CarriageReturn
Writing dm OutputRecord acc INTEGER OBJ
Writing dm OutputRecord acc INTEGER OBJ
Translation successful
Closing file: "data.in"
Closing file: "data.out"
send STATUS 0 to job 4997
Session 004997 ended at Mon Jun 30 14:32:02 2008
with error cd 0
(Output ends here)

Using Trade Guide Reporting to Debug

Trade Guide provides reports on:
Input View Options
Exception data
Message status
Process activity
Archive activity
Run the Process Activity Tracking report by translation session
number to see an overview of the translation results. A
“Translation Successful” message returned at the end of a
translation session reports that the overall session was a success;
however, the Process Activity Tracking report may still show missing
Trading Partner records, compliance errors, and other issues. The
Exception Activity report allows you to track bypassed translation
data and is also helpful in debugging.
Refer to the Trade Guide online help for more information on these
reports, including how to run them.

Generating User-Defined Reports

Beyond using the reports defined for you, Application Integrator™
provides the groundwork for application-specific report
generation. All Application Integrator™ reports are developed
using the same structures as data mapping (map component files
and models). To ease the generation of application-specific
reports, Application Integrator™ includes a set of models, map
component files, and other files that serve as templates for two
cases: reports where the reporting data is known in advance (for
example, reporting on an output file of X12 invoices (810s)), and
reports on data whose content is not known in advance.
These files contain the logic to deal with the common report
characteristics:
Pages are set to 66 lines in length
Each report prints 57 lines per page
Report headings include:
− Company name, centered
− Date and time, report title, centered page number
− Up to six lines of heading information
Cleanup of temporary files generated for the report

Automatic calculation of report width: 80 columns, 132
columns, and so on
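As a rough illustration of the paging arithmetic above (66-line pages, 57 printed lines per page), the page count for an N-line report body works out to a ceiling division. The helper below is ours for illustration only, not part of Application Integrator™:

```shell
#!/bin/sh
# Illustration only: pages_needed is a hypothetical helper, not a
# shipped script. With 57 printed lines on each 66-line page, an
# N-line report body needs ceil(N / 57) pages.
pages_needed() {
    echo $(( ($1 + 56) / 57 ))   # integer ceiling division by 57
}
```

For example, a 57-line body fits on one page, while a 58-line body spills onto a second page.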
The specific report generation models are added to these generic
models. Specific report models usually consist of a source data
model to extract information to be reported, and a target data
model to construct the heading lines and body of the report.

Reporting Where the Data Content is Known

The following diagram shows the flow of the generic report system
for reports where data is pre-identified.

Printing and Generating Generic Reports

The printing of a report is invoked through the shell script
OTReport.sh and can be started from the command line using the
following syntax:
In Unix and Linux, type:
OTReport.sh <D/P> <specific_report.att> <columns>
In Windows®, type:
OTReport.bat <D/P> <specific_report.att> <columns>
For the arguments:
<D/P> — Enter either ‘D’ to display the report or ‘P’ to print it.
<specific_report.att> — Enter the name of the specific
report map component file.
<columns> — Enter the number of columns into which the report
is printed. This argument is optional and defaults to 132.
Examples:
For Unix and Linux, type:
OTReport.sh P OTActR1.att 80

For Windows®, type:
OTReport.bat P OTActR1.att 80
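The argument handling described above can be sketched as follows. This is a hypothetical stand-in for OTReport.sh (the function name and echoed output are ours, not the shipped script), showing only the <D/P> validation and the 132-column default:

```shell
#!/bin/sh
# Hypothetical sketch of OTReport.sh-style argument handling; not
# the shipped script. $1 = D (display) or P (print), $2 = specific
# report .att file, $3 = optional column count, defaulting to 132
# as documented above.
parse_report_args() {
    mode=$1
    report_att=$2
    columns=${3:-132}   # optional third argument; default is 132
    case "$mode" in
        D|P) echo "$mode $report_att $columns" ;;
        *)   echo "usage: OTReport.sh <D/P> <report.att> [columns]" >&2
             return 1 ;;
    esac
}
```

For example, `parse_report_args P OTActR1.att 80` mirrors the printed-report example above, while omitting the third argument falls back to 132 columns.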

Unix and Linux

The shell script OTReport.sh invokes a translation, such as the
following:

inittrans -at OTRpt.att -cs $OT_QUEUEID
-DREPORT=$2 -DSESSION_NO=$$ -P 1 -DCOLUMNS=$3 -I
Where
inittrans — Program that passes a request for translation to the AI
Control Server.
-at OTRpt.att — Specifies the first map component file with which
to begin the translation.
-DREPORT=$2 — Passes the second OTReport.sh argument (for
example, OTActR1.att) into the translation session, using the
environment variable “REPORT.”
-P 1 — Defines the translation queue priority to be 1.
(1=low priority, 99=high priority).
-cs $OT_QUEUEID — Identifies the AI Control Server with which
to communicate.
-DCOLUMNS=$3 — Passes the third OTReport.sh argument into
the translation session, using the environment variable
“COLUMNS.”
-I — Invokes the translation to run interactively (in the
foreground).
The OTRpt.att map component file then attaches to the
environment specified in the environment variable “REPORT”
(OTActR1.att). Both source and target data models are defined
within this map component file. The source data model is used to
extract/gather information that will be used to construct the report.
The target data model then uses this information to generate the
body of the report. The report is output into a temporary file called
“<SESSION_NO>.tmp,” where <SESSION_NO> is the process ID
number. Once the body of the report is generated, processing
returns to OTRpt.att.

Next, the common report characteristics (paging, heading, and so
on) are added through OTRptP.att. OTRptP.att contains both
source and target data models. The source data model reads in
the “<SESSION_NO>.tmp” file. The target data model outputs
the source read data, adding the common report characteristics,
into a report file called “<SESSION_NO>.rpt.” Once all data is
output, processing returns to OTRpt.att. The temporary file
“<SESSION_NO>.tmp” is removed, and processing returns to the
original shell script “OTReport.sh.”
The shell script then either prints the report by invoking
OTPrint.sh, which in turn prints the report (lp
<SESSION_NO>.rpt), or displays the report (cat
<SESSION_NO>.rpt). Once printed or displayed, the report file is
removed from the disk.
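The final print-or-display step can be sketched like this. `finish_report` is a hypothetical name (the shipped flow goes through OTPrint.sh), but the behavior matches the description above: ‘P’ sends the .rpt file to lp, ‘D’ displays it, and the report file is then removed from disk.

```shell
#!/bin/sh
# Hypothetical sketch of OTReport.sh's final step (name assumed):
# print via lp or display via cat, then remove the report file.
finish_report() {
    mode=$1
    rpt=$2
    if [ "$mode" = "P" ]; then
        lp "$rpt"      # the shipped flow invokes OTPrint.sh for this
    else
        cat "$rpt"     # display mode
    fi
    rm -f "$rpt"       # report file is removed once handled
}
```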
In the creation of the specific report, the following must be
identified in the map component file:
OUTPUT_FILE = (SESSION_NO).tmp
S_ACCESS = OTFixed.acc
S_MODEL = <developer defined name.mdl>
T_ACCESS = OTFixed.acc
T_MODEL = <developer defined name.mdl>
In the creation of the specific target data model, the following must
be identified:
The records output into the <SESSION_NO>.tmp file
should be delimited with the line feed character. For tag
items, use the item type “LineFeedDelimRecord.”
Assign a report title to the variable “VAR->OTRptTitle.”
Its maximum length is sixty characters. This maximum
length can be reduced when using the <COLUMNS>
argument; if <COLUMNS> is specified as 80, the
maximum report title is 46 characters (34 of the 80
characters are used for date, time, and page number).
Assign column headings to the variable
“ARRAY->OTHeading.” The maximum number of page
heading lines is currently set to six.

Windows®
The batch file OTReport.bat invokes a translation, such as the
following:

otrun.exe -at OTRpt.att -cs %OT_QUEUEID%
-DREPORT=%2 -DSESSION_NO=$$ -P 1 -DCOLUMNS=%3 -I
Where
otrun.exe — Program that passes a request for translation to the AI
Control Server.
-at OTRpt.att — Specifies the first map component file with which
to begin the translation.
-cs %OT_QUEUEID% — Identifies the AI Control Server with
which to communicate.
-DREPORT=%2 — Passes the second OTReport.bat argument (for
example, OTActR1.att) into the translation session, using the
environment variable “REPORT.”
-P 1 — Defines the translation queue priority to be 1.
(1=low priority, 99=high priority).
-DCOLUMNS=%3 — Passes the third OTReport.bat argument into
the translation session, using the environment variable
“COLUMNS.”
-I — Invokes the translation to run interactively (in the
foreground).
OTRpt.att then attaches to the environment specified in the
environment variable “REPORT” (OTActR1.att). Both source and
target data models are defined within this map component file.
The source data model is used to extract/gather information that is
used to construct the report. The target data model then uses this
information to generate the body of the report. The report is
output into a temporary file called “<SESSION_NO>.tmp,” where
<SESSION_NO> is the process ID number. Once the body of the
report is generated, processing returns to the OTRpt.att.

Next, the common report characteristics (paging, heading, and so
on) are added through OTRptP.att. OTRptP.att contains both
source and target data models. The source data model reads in
the “<SESSION_NO>.tmp” file. The target data model outputs
the source read data, adding the common report characteristics,
into a report file called “<SESSION_NO>.rpt.” Once all data is
output, processing returns to OTRpt.att. The temporary file
“<SESSION_NO>.tmp” is removed, and processing returns to the
original batch file “OTReport.bat.”
The batch file then invokes the Windows® program “Write” so that
all reports (“.rpt” files) can be viewed or printed. Once printed or
viewed, all “.rpt” files are removed from disk.
In the creation of the specific report, the following must be
identified in the map component file:
OUTPUT_FILE = (SESSION_NO).tmp
S_ACCESS = OTFixed.acc
S_MODEL = <developer defined name.mdl>
T_ACCESS = OTFixed.acc
T_MODEL = <developer defined name.mdl>
In the creation of the specific target data model, the following must
be identified:
The records output into the <SESSION_NO>.tmp file
should be delimited with the line feed character. For tag
items, use the item type “LineFeedDelimRecord.”
Assign a report title to the variable “VAR->OTRptTitle.” Its
maximum length is sixty characters. This maximum length
can be reduced when using the <COLUMNS> argument; if
<COLUMNS> is specified as 80, the maximum report title is
46 characters (34 of the 80 characters are used for date,
time, and page number).
Assign column headings to the variable
“ARRAY->OTHeading.” The maximum number of page
heading lines is currently set to six.


Reporting Where the Data Content is Unknown

The following diagram shows the flow of the generic report system
for reports where data has not been pre-identified.

OTRecogn.att recognizes the type of data contained in the input
stream. As each standard is identified, the appropriate
environment is attached for proper processing; for ASC X12, for
example, the OTX12Env.att environment is attached. These
environments then break the data up into message or document
units and generate application interface files or report data,
depending on how the data is modeled. As a report is output, it is
accumulated into a session temporary file (<SESSION_NO>.tmp).
Once the input stream has been read through, the report data is
printed before output files are committed to their appropriate
directories and the data is archived. The data is printed by the
OTRecogn.att model attaching to the OTRpt.att environment to
add the common report characteristics, for example, paging,
report title, and column headings. The temporary report file
(<SESSION_NO>.tmp) is then converted into a print-ready report
file (<SESSION_NO>.rpt). When control returns to OTRecogn.att
(from OTRpt.att), the shell script “OTPrint.sh” is invoked to “lp”
the print-ready report file.
In the creation of the <specific_report.att>, the following must be
defined:
In the map component file:
OUTPUT_FILE = (OUTPUTFILENAME)
S_ACCESS = <depends on the source data>
(for example, OTX12S.acc)
S_MODEL = <developer defined name.mdl>
T_ACCESS = OTFixed.acc
T_MODEL = <developer defined name.mdl>

Workbench User’s Guide 499


Section 12. Translating and Debugging

In the target data model:


The records output into the (OUTPUTFILENAME) file should be
delimited with the line feed character. For tag items, use the item
type “LineFeedDelimRecord.”
The report titles, column headings, report width and paging
controls are output amongst the data. This method allows these
values to differ from message to message. These values are
specified by outputting records. This is shown in the following
example:
;;; NEXT REPORT
;;; COLUMNS=132
;;; TITLE=Activity Tracking Report
;;; HEADING=Session-No Date Time Type Sub-Type...
;;; HEADING= __________ ____ ____ ____ ___________
;;; PAGE
Syntax:
;;; NEXT REPORT   Defines the start of a new message: clears the
                  last message’s title, column headings, and width
                  settings, and resets the page number to zero.
;;; COLUMNS=      Sets the report’s width.
;;; TITLE=        Sets the report’s title.
;;; HEADING=      Sets one of six report column headings.
;;; PAGE          Inserts a form feed.

In the Trading Partner Profile:

At the Message Level, on the Inbound tab, in the Production value
entry box, you must type “REPORT.” This signals the translator
to call the print script to print your report.

Note: All printing using the “lp” command uses the shell script
“OTPrint.sh.” If necessary, the command in the shell script can be
modified to add the “-d” option to control the printer or class of
printer to which the report is to be sent.

Section 13. Migrating to Test and Production

Once you have completed the source and target data models, the
map component files, and the development testing, you are ready
to migrate your Application Integrator™ application to a test area
or production area.
This section provides background and instructions on migrating
from development areas to test areas or from development or test
areas to production areas. It includes procedures for importing
and exporting Profile Databases.

Note: This section describes migration between functional areas.


For instructions on migrating between Application Integrator™
versions or between operating systems, contact your Application
Sales Engineer or Application Integrator™ Support Specialist for
the appropriate document and/or assistance.

Planning Development Migration

Migration is the process of moving files or information from one
location to another where similar files may exist. The files or
information come from a “source” location and are moved to a
“target” location.
To successfully migrate files or information from source to target
locations, some important questions must be considered:
Will the data to be migrated overwrite any existing data?
Is the data to be migrated dependent on data from another
file?
If the migration must be reversed or “undone,” what needs
to be done?
This section identifies files usually associated with Application
Integrator™ development to production migration. Typical
migration files are:
Map Component Files (*.att)
Data model files (*.mdl)
Access model files (*.acc)
Environment files (*.env)
Include files (*.inc)
Stylesheets (*.xsl)
Trading Partner Profiles from the Profile Database files
(C-ISAM: sdb.dat & sdb.idx, or Oracle)
This section is only a guideline for migration. A full analysis of all
items covered is required to ensure a successful migration or
recovery.

Recommended Migration Approach

There are four major considerations recommended for each
migration.
1. Create a detailed migration plan.
After reading this section, plan your application migration,
making sure you account for every user-definable type of file.
a. Document what is to be migrated.
b. Analyze the differences between existing and new data.
c. Determine any dependencies to other data.
d. Save a copy (back up) of any files that might be affected by
the migration or initial operations and testing, before
beginning the migration.
2. Verify Application Integrator™ code versions and stop the AI
Control Server.
Verify that the destination functional area is currently running
on the same or later version of the Application Integrator™
programs. If development used some features or capabilities
that were only available in the latest version, then the process,
once migrated, may not execute properly.
Both the AI Control Server and Trade Guide should be
completely stopped before beginning any migration process.
Refer to the Trade Guide online help for instructions on exiting
the system.
3. Migrate files in the following order:
a. Trading Partner Profile from the Profile Database files
(C-ISAM: sdb.dat and sdb.idx, or Oracle).
b. Access models, source and target data models, and map
component files —*.acc, *.mdl, *.att, *.env, *.inc, *.xsl.
c. Adjust any affected shell scripts (Unix and Linux) or batch
files (Windows®).
Details on migrating each of the types of files are provided
later in this section.
4. Check the migration.

a. Adjust the processing environment (Unix and Linux); for
example, set permissions, user and group IDs, and shell or
profile variables.
b. Begin production testing.
c. Capture any data during testing that must be undone if the
migration needs to be reversed.

Note: It is always advisable to test before and after each


migration. It is also advisable to back up the target location before
the migration.
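The file-copy portion of step 3 might look like the following sketch. The function name and directory layout are assumptions, not part of the product; it simply copies the user-definable file types in the order listed above, leaving the Profile Database files to be handled separately:

```shell
#!/bin/sh
# Hypothetical migration helper (not a shipped script): copies the
# user-definable file types from a source area to a target area.
# Profile Database files (sdb.dat/sdb.idx) are migrated separately.
migrate_map_files() {
    src=$1
    tgt=$2
    for ext in acc mdl att env inc xsl; do
        # 2>/dev/null: quietly skip extensions with no matching files
        cp "$src"/*."$ext" "$tgt"/ 2>/dev/null
    done
    return 0
}
```

Per the note above, back up the target area before running any such copy.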

Permission Guidelines

The Unix and Linux operating systems assign permissions to each
file. For new and replacement files, the required permissions for
owner, group, and other must be specified for read, write, and
execute. For replacement files, it is customary to match the existing
target permissions unless other permissions are specifically
indicated.
To delete an existing file, you need sufficient authority.
For Windows®, the operating system controls the access
permissions and passwords for each file. These are set up by the
system administrator.

Migrating Applications

The following sections provide instructions on preparing and
migrating each type of file used in data modeling.

Caution: Make a backup copy of every file that will be overwritten
by the migration or initial operations and testing, before you begin
the migration.

Deploying Maps Overview

Once you have completed the source and target data models, the
map component files, and the development testing, you are ready
to deploy your electronic commerce application to a test functional
area or a production functional area.
Deploying is the process of moving files or information from one
location to another where similar files may exist. The files or
information come from a “source” location, and are moved to a
“target” location.
To successfully deploy files or information from source to target
locations, some important questions must be considered:
Will the data to be deployed overwrite any existing data?
Is the data to be deployed dependent on data from another
file?
If the deployment must be reversed or “undone,” what
needs to be done?
This section identifies files usually associated with Workbench
development to production deployment. Typical deployment files
are:
Map component files (*.att)
Data model files (*.mdl)
Access model files (*.acc)
Environment files (*.env)
Include files (*.inc)
Stylesheets (*.xsl)
Trading Partner Profile from within the Profile Database
files (C-ISAM: sdb.dat & sdb.idx, or Oracle)

Planning Development Deployment

This is only a guideline for deployment. A full analysis of all items
covered is required to ensure a successful deployment or recovery.

Recommended Deployment Approach

There are three major considerations recommended for each
deployment.

Note: It is always advisable to test before and after each deploy
procedure is completed. It is also advisable to back up the
target location before the deploy.

1. Create a detailed deployment plan.


Plan your application deployment, making sure to account for
every user-definable type of file.
Document what is to be deployed.
Analyze the differences between existing and new data.
Determine any dependencies to other data.
Save a copy (back up) of any files that might be affected by
the migration or initial operations and testing, before
beginning the deployment.
2. Verify the application's version and stop the AI Control
Server.
Verify that the target functional area is currently running on
the same or later version of the Workbench programs. If
development used some features or capabilities that were only
available in the latest version, then the process, once deployed,
may not execute properly.
For AI to be able to use the recently deployed files, AI Control
Server and Trade Guide should be stopped and re-started.
Refer to the Trade Guide on-line help system for instructions on
exiting the software.
3. Verify the deployment.
Adjust the processing environment (Unix and Linux), for
example, set permissions, user and group IDs, shell or
profile variables.

Begin production testing.
Capture any data during testing that must be undone if
the deployment needs to be reversed.

Deploying Files

The Deploy feature is used to copy relevant mapping files to a
directory. These files can be used during debugging tasks or for
moving files into a production functional area.

Caution: Any file not specifically referenced by the Deploy dialog
box must be copied from the source directory to the target
directory. These files could include environment (.env) files, map
component files called by an ATTACH statement, and so on.

If the Deploy operation is invoked for the active Map Component
(the currently active .att file in the Map Editor), then the Map file
(.att) and the associated Data Model files (Source and Target Data
Models, if present) are picked up implicitly for deploying.

If the Deploy operation is invoked for the active Data Model (the
currently active model in the Model Editor), then the
corresponding Data Model file is picked up implicitly for
deploying.

Accessing the Deploy dialog box

The option to deploy a Map Component or a Data Model is
available only if it is open and currently active in the respective
editor (the Map Component is displayed in the Map Editor or, for
a Data Model, in the Model Editor).
1. Open the map file/model file to be deployed in the Map
Editor/Model Editor.

2. Click the deploy icon on the toolbar. The following dialog box
appears.

3. Review the default settings for each of the items. Change the
settings as appropriate for the deploy process for this map file.
Deploy Access Model: Determines whether any access models
associated with this editor should be deployed.
Deploy Include file: Gives the option to deploy either User.inc
or all include files.
Export Trading Partner profile: Determines whether to export a
single trading partner's record with the deploy. This value entry
list box is enabled when you are connected to the Profile
Database containing trading partner records.
Write Activity Log: Determines whether a tracking log is created
with information on all files included in the deploy. If selected,
the deploy activity log is written to the same location where the
files are deployed, in a file named readme_ver_xxxxxxxx.txt,
where ‘xxxxxxxx’ is the system date of the system where
Workbench is running.
Deploy Location: Determines the absolute path to the target
directory for the deploy. This can be either on the local system
or on a remote system (an FTP site).
Deploy Path: Displays the target path selected.

4. Choose the OK button.


If the files to be deployed are in an unsaved state, you are prompted to save them before deploying.
5. If you choose Yes, the files are saved before deploying. If you
choose No, the previously saved contents are deployed. If you
choose Cancel, the deploy function is cancelled.


6. If the target location has a similarly named file, you are prompted whether you would like the file to be overwritten.

7. Once the Deploy operation finishes, you are notified by the following screen.

File Permissions Guidelines
The Unix/Linux operating systems assign permissions to each file. For new and replacement files, the required permissions for owner, group, and others must be specified for read, write, and execute. For replacement files, it is customary to match the existing target permissions unless other permissions are specifically indicated.
To delete an existing file, the user performing the deployment
needs sufficient authority.
For Windows®, the operating system controls the access
permissions and passwords for each file. These are set up by the
system administrator.
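As a minimal sketch of the guideline above (the file name is hypothetical, and GNU stat on Linux is assumed), the following commands give a newly deployed file owner/group read-write and world read access, then confirm the resulting octal mode:

```shell
# Hypothetical stand-in for a freshly deployed map component file.
touch OTX12810.att
# Owner/group: read-write; others: read-only. Match the existing target
# permissions instead if they differ from this convention.
chmod 664 OTX12810.att
# Confirm the octal mode (GNU coreutils stat; BSD stat uses other flags).
mode=$(stat -c '%a' OTX12810.att)
echo "$mode"
```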

Deploying the Profile Database
The Trading Partner Profile Database (C-ISAM: sdb.dat and sdb.idx files; Oracle) can be updated via one of three methods: replacement, manual maintenance, and export/import.


Replacement of the Profile Database
To replace the Profile Database when the source and destination are both C-ISAM, two files must be replaced:
sdb.dat — data file of the Profile Database
sdb.idx — index file of the Profile Database
These files must be replaced together as a set; replacing one without the other will create a serious problem.
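A sketch of the rule above, with illustrative directory names standing in for the real source and target locations:

```shell
# Stand-ins for the source and target Profile Database directories.
mkdir -p source target
touch source/sdb.dat source/sdb.idx
# Copy the data file and the index file together, as a set; replacing
# one without the other leaves the Profile Database inconsistent.
cp source/sdb.dat source/sdb.idx target/
ls target
```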

To replace when the source and destination are both Oracle, the
source database can either be referenced as the destination
database via the aiserver.env/.bat file, or the source database can
be dumped and imported into the destination database using a
utility such as sqlplus.

To replace when the source and destination are not both C-ISAM or both Oracle, the Profile Database needs to be dumped and imported. Using Trade Guide, run Export, choose "Profile Database," and dump all of its content. Then, in the destination, make sure the database is empty (for C-ISAM, remove sdb.dat and sdb.idx; for Oracle, use sqlplus: truncate table SDB;) and run the following translation upon restarting the destination control server:
inittrans -at OTsdb.att -cs $OT_QUEUEID -DINPUT_FILE=<source database export file> -DOTDBIMPORT=Yes -I

Note: The Profile Database files (C-ISAM: sdb.dat and sdb.idx) cannot be moved between Intel™ and RISC™ systems, due to byte order. Use the above approach: dump the whole database and then run a command translation to load it.

Workaround: Export the Profile Database using the export feature of Trade Guide or by running the "otstdump" program (only available for C-ISAM). To import the Profile Database, use the import feature of Trade Guide or the command-line invocation described above.

Manual Maintenance of Xrefs/Codes
An operator or system administrator can maintain substitutions, cross-references, and verification lists (all part of the Profile Database) through Trade Guide menu options. For changes that are not minor, using the Trade Guide export and import features is the recommended method of migration.

Note: Use the export and import features for trading partner
profiles and standards to migrate minor changes to the trading
partner profiles and cross-reference lists.

Refer to the Trade Guide online help section “To Move Portions of the
Profile Database” for instructions.

Export/Import
You can update the Profile Database by exporting the entire database or portions of it and then importing the database or database portions. Refer to the Trade Guide online help for complete instructions. Within Export, Trade Guide allows you to select which Trading Partner Profiles are to be exported. Use Import Profile Database to import the profiles into the destination database.

Development Migration of the Profile Database
During normal operations of the Application Integrator™ system, new development work will occur that often requires additions, changes, and deletions to the production Profile Database in order to migrate newly developed models and map component files to the production system.
The following list of steps outlines the general flow, or life cycle, of the Profile Database from development to production.


1. The developer creates a development Profile Database in the developer's directory (a development seat). The developer's Profile Database could be a copy of the production Profile Database. By using a copy of the production database as a starting point, the developer has the best chance that all new entries will successfully coexist with production entries.
2. The developer adds, changes, and/or deletes entries in the
development Profile Database as required to complete
development activities.
3. When the developer is ready to move the project to production,
the migration process begins. On Unix and Linux, stopping the
AI Control Server and waiting for all translator (otrans and
inittrans) processes to stop ensures that no further access to the
Profile Database occurs and “freezes” the Profile Database to
avoid corruption.
4. The data modeler provides a file containing only the entries
needed for migration.
5. A copy of the production Profile Database is saved for
recovery.
6. The AI Control Server is started, and migration of the
development Profile Database entries to the production Profile
Database occurs. The file containing the development Profile
Database entries is moved to the production directory. Using
the import Profile Database function, the development Profile
Database entries are loaded into the production Profile
Database.
7. The system is closely monitored until a high level of confidence
is achieved.

Deploying Access Models

To Deploy *.acc Files
1. Obtain a list of access model files that are to be migrated. Access model files have a suffix of ".acc".


2. Determine the migration mode. Which of these access models are new, which are replacements, and which are to be deleted from the target directory?
3. Make a backup copy of any access files in the target directory
that will be affected (overwritten) by the migration.
4. Perform the migration. New access models are copied from the
source directory into the target directory.
Changed access models can be either replaced or edited.
When removing old access models, be sure they are not still in use
by any map component files or in use through an environment
variable.
In Unix and Linux, to determine where access models are being
used, use the grep command to search for the name of the access
model throughout the entire directory. For example, the following
command returns a list of all files that referenced OTFixed.acc:
grep OTFixed.acc *
On Windows®, use the findstr command. For example, the
following command returns a list of all files that referenced
OTFixed.acc:
findstr /l /c:OTFixed.acc *.*
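Note that grep without options prints the matching lines themselves; adding the -l flag lists only the names of the files that contain a match, which is usually what is wanted here. A self-contained sketch (the two sample map component files are fabricated for illustration):

```shell
# Two throwaway files standing in for map component files.
printf 'ACCESS "OTFixed.acc"\n' > map1.att
printf 'ACCESS "OTX12S.acc"\n'  > map2.att
# -l prints only the names of files containing a match.
grep -l OTFixed.acc map1.att map2.att    # prints: map1.att
```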

Considerations
In Unix and Linux, if an item type defined in an access model is changed, then any map component file using the access model is in question. To determine the specific effect of a changed item type within an access model, the entire directory can be scanned using the grep command for any data model item that uses the item type. For example:
grep NumericFldNA *

On Windows®, use the following command for the same check:

findstr NumericFldNA *

Deploying Data Models


To Deploy *.mdl Files
1. Obtain a list of source and target data model files that are to be migrated. Data model files created by Application Integrator™ from within Workbench have a suffix of ".mdl".
2. Determine the migration mode. Which of these data models
are new, replacements, or are to be deleted from the target
directory?
3. Make a backup copy of any models in the target directory that
will be affected (overwritten or changed) by the migration.
4. Perform the migration. New data models are copied from the
source directory into the target directory. Changed data
models can be either replaced or edited.
When removing old data models, be sure they are not in use by any
map component files or in use through a referenced environment
variable.
In Unix and Linux, to determine where data models are being used,
use the grep command to search for the name of the data model
throughout the entire directory. It is also recommended that the
Profile Database be dumped and checked as well to ensure that the
model is not being referenced by a cross-reference or substitution.
For example, the following command returns a list of all files using
OT810T.mdl:
grep OT810T.mdl *
On Windows®, use the findstr command. For example, the
following command returns a list of all files that referenced
OT810T.mdl:
findstr /l /c:OT810T.mdl *.*


Considerations
Data models can attach to other map component files. It is
important to identify all map component files that will be
affected by the use of any data model to be migrated. To
determine whether a data model performs an ATTACH,
use the grep/findstr command or an editor such as vi and
search the data model for the keyword ATTACH. If the
data model contains the keyword ATTACH, then the
model must be examined to determine what other
processing will occur and how it will affect production
processing.
Data models have the ability to execute external programs or
shells. When migrating a data model, it is important to
consider what effect it may have on existing processing in
the target area. To determine the external executions that
are being performed by a data model, use the grep/findstr
command or an editor such as vi, and search for the string
EXEC.
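The ATTACH and EXEC checks described above can be combined into a single scan with extended grep; the sample model fragment below is fabricated for illustration:

```shell
# Fabricated data model fragment containing both constructs of interest.
printf 'ATTACH "OTNext.att"\nEXEC "/bin/mailx ops"\n' > sample.mdl
# -E enables alternation, so ATTACH and EXEC are found in one pass.
grep -E 'ATTACH|EXEC' sample.mdl
```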
When Application Integrator™ generic data models are modified, or general-purpose models are written, these models should be updated in each development seat and in the models' (release version) directory (/u/aidev/OT). This ensures that all current and future development seats use the latest data models.


Deploying Map Component Files
Map component files identify all resources (other files) used during a processing session. When migrating map component files, it is important to look at the resources referenced by the map component file to determine what else will be affected.

To Deploy *.att Files
1. Obtain a list of map component files that are to be migrated. Map component files created by Application Integrator™ from within Workbench have a suffix of ".att".
2. Determine the migration mode. Which of these are new,
replacements, or are to be deleted from the target directory?
3. Make a backup copy of the target directory that will be affected
(overwritten and changed) by the migration.
4. Perform the migration. New map component files are copied
from the source directory into the target directory. Changed
map component files can either be replaced or edited.
When removing old map component files, be sure to remove any
resources referenced that are no longer needed. Take care when
removing other resources that are no longer used since other map
component files may still reference them. In Unix and Linux, it is
possible to easily determine what is being referenced by using the
grep command utility. For example, the following command will
return a list of all the files using OTX12810.att.
grep OTX12810.att *

On Windows®, use the findstr command. For example, the following command returns a list of all files that referenced OTX12810.att:
findstr /l /c:OTX12810.att *.*


Considerations
If the map component file being migrated contains
references to data models, access models or other resources
already in production, then any dependency between them
must be identified.
Map component files are usually small and can be viewed or
printed with little consideration. This will ease the process
of analysis.
Map component files may contain key prefixes for
substitutions and cross-references (xrefs). If key prefixes or
other environment variables are used, the effect on existing
Profile Database entries must be considered.

Deploying Environment Files

To Migrate *.env Files
1. Obtain a list of environment files that will be migrated. Environment files created by Application Integrator™ from within Workbench have a suffix of ".env".
2. Determine the migration mode. Compare the new environment files to any existing old environment files. If a new environment file is different from an existing one, identify which entries are to be moved from the new one to the old one. If no changes exist, no migration is required.
3. Back up any environment files in the target directory that will
be affected (overwritten) by the migration.
4. Perform the migration. New environment files are copied from
the source directory into the target directory. Changed
environment files can be either replaced or edited.
When removing old environment files, be sure to remove any
resources referenced that are no longer needed. In Unix and Linux,
it is possible to easily determine what is being referenced by using
the grep command utility. For example, the following command
returns a list of all the files using the environment file
ENVIRON1.env.


grep ENVIRON1.env *

On Windows®, use the findstr command. For example, the following command returns a list of all files that referenced ENVIRON1.env:
findstr /l /c:ENVIRON1.env *.*

Considerations
If existing environment files change, then any model that uses those files must be considered. Use the grep command to check for these references.
In general, if you are unsure of whether or not a file has been
changed, you can use the diff command or the Windows® fc
command to compare the files:
diff <first file> <second file> (Unix and Linux)
fc <first file> <second file> (Windows®)
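Because diff exits with status 0 when the files are identical and 1 when they differ, the comparison can drive the migration decision directly in a script. A sketch using fabricated environment files:

```shell
# Two fabricated environment files that differ in one entry.
printf 'OT_QUEUEID=Q1\n' > old.env
printf 'OT_QUEUEID=Q2\n' > new.env
# diff -q suppresses the line-by-line output; only the exit status is used.
if diff -q old.env new.env >/dev/null; then
  result="no migration required"
else
  result="migration required"
fi
echo "$result"
```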

Deploying Include Files

To Deploy *.inc Files
1. Obtain a list of include files referenced at the top of the source and target data model files that are to be migrated. Include files created by Application Integrator™ from within Workbench have a suffix of ".inc".
2. Determine the migration mode. Which of these include files are
new, replacements, or are to be deleted from the target
directory?
3. Make a backup copy of any files in the target directory that will
be affected (overwritten or changed) by the migration.
4. Perform the migration. New include files are copied from the
source directory into the target directory. Changed include
files can be either replaced or edited.
When removing old include files, be sure they are not in use by any data model files.


In Unix and Linux, to determine where include files are being used,
use the grep command to search for the name of the include file
throughout the entire directory. For example, the following
command returns a list of all files using User.inc:
grep User.inc *
On Windows®, use the findstr command. For example, the
following command returns a list of all files that referenced
User.inc:
findstr /l /c:User.inc *.*


Considerations
When Application Integrator™ generic include files are modified, or general-purpose includes are written, these files should be updated in each development seat and in the models' (release version) directory (/u/aidev/OT). This ensures that all current and future development seats use the latest include files.

Deploying Stylesheet Files

To Deploy *.xsl Files
1. Obtain a list of source and target stylesheet files that are to be migrated. Stylesheet files created by Application Integrator™ from within Workbench have a suffix of ".xsl".
2. Determine the migration mode. Which of these stylesheets are
new, replacements, or are to be deleted from the target
directory?
3. Make a backup copy of any stylesheets in the target directory
that will be affected (overwritten or changed) by the migration.
4. Perform the migration. New stylesheets are copied from the
source directory into the target directory. Changed stylesheets
can be either replaced or edited.
When removing old stylesheets, be sure they are not in use by any
map component files or in use through a referenced environment
variable.
In Unix and Linux, to determine where stylesheets are being used,
use the grep command to search for the name of the stylesheet
throughout the entire directory. It is also recommended that the
Profile Database be dumped and checked as well to ensure that the
stylesheet is not being referenced by a cross-reference or
substitution. For example, the following command returns a list of
all files using User.xsl:
grep User.xsl *


On Windows®, use the findstr command. For example, the following command returns a list of all files that referenced User.xsl:
findstr /l /c:User.xsl *.*


Considerations
When Application Integrator™ generic stylesheets are modified, or general-purpose stylesheets are written, these stylesheets should be updated in each development seat and in the models' (release version) directory (/u/aidev/OT). This ensures that all current and future development seats use the latest stylesheets.


Section 14. Update Manager

This section provides information on how to install features/patches using Update Manager.
Update Manager is used to check whether there are updates (in the
form of patches or new versions) for a product’s existing features
or to locate and install a new feature into a product (normally
requires web access). Update Manager simplifies the task of
keeping your product up-to-date and installing additional features.
To install specific features or patches, you can use the Update
Manager feature of Workbench. Using this feature, you can connect
to the update site, examine its contents, find specific
features/patches you want to install, bring up the install wizard,
and let it complete the installation.

Updating Workbench with Features/Patches
Follow the steps given below to install specific features/patches using Update Manager.
1. Select Help > Software Updates > Find and Install from the Help menu of Workbench, as shown in the following screen.


2. The following screen is displayed.

Click "Search for updates of the currently installed features" to contact the web sites associated with the product's existing features and discover what versions of those features are available. The potential upgrades are presented on the next page.
For now, select "Search for new features to install". Click Next> to continue.


3. The Install window comes up. The Eclipse.org update site is present by default.
To choose from a Remote Site:
Click the New Remote Site button. Enter a Name and URL in the New Update Site dialog box that appears. In the example shown above, the Workbench 5.5 update site used is http://<<IP Address>>/com.gxs.ai.ngwb.update.

To choose from a Local Site:
Click the New Local Site button. The Browse for Folder window comes up.


Browse for the local site and click OK. The Edit Local Site
window comes up. Click OK.


Click Finish to continue.
4. The next screen lists all the feature categories available on the Workbench update site, as shown below.

Select a site/features from the list of updates displayed. Select WB 5.5 CVS Integration, for example. (As shown in the figure above, features containing new functionality or update patches can also be installed using this procedure.)
Feature versions can be digitally signed by the company that
provides them. This allows you to easily verify that the
features and Plug-Ins that are to be downloaded and
installed are coming from a trusted supplier.

Note: Because of the possibility of harmful or even malicious Plug-Ins, only download features from parties that you trust.


Click Next> to continue. The license agreement screen is displayed, as shown in the following screen.

Carefully review the license agreements for the upgraded features. If the terms of all these licenses are acceptable, check "I accept the terms in the license agreements." Do not proceed to download the features if the license terms are not acceptable. Click Next> to continue.
The selected feature details are displayed.


5. Click Change Location... to change the location where the feature will be installed.
6. Click Finish to proceed. A verification window comes up.


7. Click Install to allow the download and installation to proceed.

Hint: It is recommended that you install new features/Plug-Ins (like CVS) in a location outside the Workbench install directory. This makes working with Plug-Ins/features much easier in terms of management, sharing between multiple simultaneous Workbench instances, and ease of upgrade from one Workbench version to another.

Once all the features and Plug-Ins are downloaded successfully and
their files installed into the product on the local computer, a new
configuration that incorporates these features and Plug-Ins is
formulated.
You will be prompted to restart Workbench after the installation is complete. Click Yes when asked to exit and restart Workbench for the changes to take effect.

Note: To verify that the selected features and Plug-Ins have been updated, check the Workbench Plug-In and feature versions, as described in the section "Calling for Customer Support".

Managing Installed Features
The installed features can be managed in Workbench 5.2. Follow the steps described below.
1. The current configuration can be browsed, but it can also be operated on to enable/disable/uninstall features, revert to previous configurations, and so on. Select Software Updates > Manage Configuration from the Help menu of Workbench, as shown in the following screen.

2. The following screen is displayed.


The following actions can be performed.


Disable: Select the feature and click the "Disable" feature action in the right pane of the dialog or in the context menu. The action is available only when the feature is currently enabled and the feature is either an optional feature or a root feature (not included by other features).
Enable: Turn on disabled-features filtering from the dialog toolbar, then select a disabled optional or root feature and click the "Enable" feature action in the right pane of the dialog or in the context menu.
Uninstall: Features that you installed using Update Manager can be uninstalled, provided they are already disabled or they are optional or root features. If the feature is disabled, make sure you turn on the disabled-features filtering from the dialog toolbar. Select the feature and click the "Uninstall" feature action in the right pane of the dialog or in the context menu.
Scan for Updates: Scan for updates for the feature.
Show Properties: Display the properties of the feature.


Section 15. CVS Integration

Workbench can be used for version management of maps using a built-in client for the Concurrent Versions System (CVS). With this client you can access CVS repositories.
Section 14, Update Manager, provides an example and explains how you can install the CVS client using the Workbench Update Manager.
This section describes how to use the CVS repository for version
management of maps.
You can find more information on CVS at
http://www.cvshome.org.
You can also visit the Eclipse CVS FAQ.

Note: Workbench bundles only the CVS client; Update Manager can be used to add the CVS client as a feature to Workbench. You must install a CVS server on the same system, or on any system in your network, before you can configure the CVS client to access it.

Open Perspective
Install the CVS Plug-Ins to access the CVS repository through the Workbench development environment.
1. Click CVS Repository Exploring, then click New > Repository Location to create a new repository location.


2. Provide the login credentials as shown in the following screen.

Click on the Finish button to add a CVS Repository.


Now the projects listed under the CVS Navigator can be checked out to the local system. A local project can also be added to CVS.


Share a Workspace Project
Follow the steps described below to share a workspace project.
1. In the Navigator window, right-click the resource and click Team > Share Project from the context menu.


2. The Share a Project wizard appears, as shown in the following screen.

The wizard helps you to import your project files into the CVS
repository. You can either create a new repository location or use
an existing repository location.
Follow the steps in sequence to complete sharing the project with
the CVS repository.


Compare a Local File with a Repository File
Local files can be compared with files in the repository. Follow the steps described below to compare a local file with a repository file.
1. Right-click the resource and select Compare With > Latest from Head from the context menu, as shown in the following screen.

2. The two files are compared and the results are displayed as
shown in the following screen.


Commit Changes to Repository Files
Changes can be committed back to the repository file with comments, either file by file or as a set of files.
1. In the Navigator, right-click the resource and select Team > Commit from the context menu, as shown in the following screen.

2. New resources can be added to the repository as shown below.


A confirmation dialog box appears, as shown below. Click Yes to add the resources, or No if you do not want to add them.

Index
#CHARSET function post-condition, 40, 45
defining, 43 pre-condition, 40
#DATE function, 39, 205, 402, 428 adding
#DATE_NA function, 43, 205 validation logic, 310
#FIFTH_DELIM function Administration database, 49
defining, 43 overview, 51
#FINDMATCH function ampersand symbol, character entity, 293
defining limit, 395 apostrophe symbol, character entity, 293
#FIRST_DELIM function Application Integrator
base values for, 43 de-enveloping files provided, 65
post-condition values, 45 enveloping files provided, 65
#FOURTH_DELIM function, 43 file suffixes, 415
#LOOKUP function, 43, 402 reserved names, 414
#NUMERIC function, 44, 197, 402, 428 Array
#NUMERIC_NA function, 44, 197 overview of support for, 225
#SECOND_DELIM function, 42, 43 ATTACH keyword
#SET_FIFTH_DELIM function, 454 using, 62
#SET_FIRST_DELIM function, 454 Attachment dialog box, 56, 403, 444
#SET_FOURTH_DELIM function, 454 troubleshooting, 169
#SET_SECOND_DELIM function, 454 Attachment file, 55, 56, 84, 167
#SET_THIRD_DELIM function, 454 defining, 167, 168, 177, 179, 180
#THIRD_DELIM function, 42, 43 error codes, 62
#TIME function, 44, 206, 402, 428 for de-enveloping, 65
#TIME_NA function, 44, 206, 207 for enveloping, 67, 69
$ (dollar symbol), 198, 204 for generic report writing, 472
$ (substitution) function, 44, 258, 400 migrating guidelines, 498
$$ (session number) keyword environment modifying, 172
variable, 57 naming conventions, 167
Absent mode rules, 223 overview, 55
Access model referencing, 416
base, 40 troubleshooting, 169
list of standard models, 40 attribute names, case sensitivity, 287
migrating guidelines, 494 attribute values, well-formed rules for, 286
overview, 39, 40 Backups


Windows files, xv Container item


Base icon, 190
values for, 42 overview, 34, 36
body element, 289 CONTINUE keyword, 404
BREAK keyword, 226, 404 Control Server
Calculations directory, 417
limitations, 195 environment variables, 421
canonical format, 308 migration, 483, 494
argument, 310 queue ID, 421, 434, 439
Case sensitivity, 414 running in UNIX, 432
case sensitivity, about, 287 starting in Windows Concurrent, 421
Changing environments, 62 Copying
character entities for text characters, 287 rules, 254
Character sets COUNTER keyword, 394
analyzing for data modeling, 389 Cross-reference
Check Syntax command inheritance between trading partner levels,
using, 256 408
Child understanding before data modeling, 410
overview, 37 Cutting
Clipboard rules, 254
support in RuleBuilder, 254 CVS repository, 517
Closing Data mapping. See also Data modeling
Layout Editor, 220 overview, 17, 38
Code list steps to mapping using Application Integrator,
inheritance between trading partner levels, 388
408 data model
setting, 209 about creating, 316
understanding for data modeling, 410 Data model
Colors adding items, 187
Windows display, vi applying logic to, 48
Comments components, 38, 51
illegal characters, 244, 245 defining new, 214, 216, 217, 235
inserting into data models, 244, 246 establishing hierarchy, 213
Commit Changes to Repository Files, 523 migrating guidelines, 495, 500, 502
Compare a Local File with a Repository File, overview, 38
522 saving, 218
Condition, 404 saving under a new name, 173, 174, 218, 219
Conditional expression, 49, 63, 239, 444 data model generator
description of, 224 input, 316


  output, 316
data model generator utility
  about, 316
Data model item
  adding, 187
  assigning item type, 190
  assigning min/max occurrence, 192
  changing data hierarchy, 214
  changing the name of, 188, 190
  defining format for, 194
  establishing data hierarchy, 213
  relationships between, 37
  setting min/max size, 193
  setting verification code list for, 209
  sorting, 211
  specifying a second input/output file, 210
Data modeling
  analyzing the data, 388
  applying rules using MapBuilder, 229
  considering Profile database, 411
  creating attachment files, 403
  creating data models, 400
  creating rules/logic, 404
  cross-referencing, 410
  data logic, 391
  data occurrence, 390
  data relationships, 390
  data sequence, 390
  data structures, 390
  debugging, 412
  defining environments, 395
  laying out environments, 394
  naming conventions, 414
  needed character sets, 389
  obtaining requirements, 388
  overview of rules, 223
  relative references, 417
  running test translations, 412
  source to target mapping, 403
  steps to, 387
  syntax of data items, 389
  templates, 38
  test files for, 393
  understanding database lookups, 407
  understanding pre- and post-conditions, 389
Database
  description of key, 407
  inheritance, 407
  key, 407
DATE_CALC function, 49
Debugging, 33, 59, 387, 412, 419, 420, 449, 472
  hints for, 449
  large data volumes, 456
  overview, 420
DEF_LKUP function, 209, 410
Defining item, 190, 209, 212, 223, 447, 450, 452
  drag and drop, 232
  formats, 35
  icon, 190
  in process flow, 58
  output, 211
  overview, 34
  rules, 37, 227, 230
Defining item type, 401, 402, 405
  definition of, 41
  in rules, 405
DEL_LKUP function, 410
DEL_SUBS function, 409
DEL_XREF function, 410
Delimiters, 35
Deploying Maps Overview, 485
Development tools, 19, 24, 26, 33
DM_READ function, 258
document type definition
  about, 288
  hierarchy, 288
Documentation, i
DTD. See document type definition
elements, 292
  body, 289, 292


  head, 289, 292
  page, 288
  para, 289
  root, 292
  root, 288
  title, 292
  within DTD, 288
Entity, 65, 66
  Xref, 66, 67
entity reference
  inbound, 294
  outbound, 294
Entity Reference value, 293
ENVIRON_LD function, 52
Environment file (.env)
  migrating guidelines, 499
  overview, 52
Environment System Properties dialog box, 421
Environment variable
  assigning values to, 436
Environments
  changing during translation processing, 62
  common errors while attaching, 64
  defining for data modeling, 395
  multiple, 59
  overview, 50, 55
  parsing sequence, 56
  single, 59
  single vs. multiple, 60
ERR_LOG function, 404
ERRCODE function, 62, 406
Error handling, 62
  checking syntax while modeling, 258
Error log
  overview, 53
Error mode rules, 224
Errors in Parse dialog box, 257
escape character, 294
  argument, 310, 311
EXIT keyword, 67, 404
Extensible Markup Language. See XML
Find Next Parameter command
  instructions for, 256
FINDMATCH_LIMIT environment keyword, 57, 395
Fixed length data, 35
Formatting
  data model items, 194
  dates, 205, 206
Function
  syntax checking, 267
Functions
  GET_EVAR, 436
  overview, 226
  SET_EVAR, 436
GET_EVAR function, 57, 436
GET_GCOUNT function, 258
greater than symbol, character entity, 293
Group item
  icon, 190
  overview, 36
  sorting option for, 212
head element, 289
HIERARCHY_KEY keyword environment variable, 57, 65, 67, 409, 434
Hypertext Markup Language, 282
inbound processing
  about, 282
  pre-translation environment, 283
  single environment, 282
Inheritance
  description of, 408
inittrans, 412, 431, 432, 494
  arguments for, 434
  Control Server, 474
  overview, 420
  troubleshooting, 442
Input data


  requirements for translating, 388
Input file
  specifying a secondary, 210
  viewing in Workbench, 441
input filename
  argument, 311
INPUT_FILE keyword environment variable, 52, 56, 57, 434, 436, 437
internal (inline) DTD, 288
  code example, 289
Justifying data
  masking characters for, 201
Keywords
  overview, 226
Labels
  data modeling limits, 415
Layout Editor
  closing, 220
less than symbol, character entity, 293
Listing disk/tape contents, xvi
Literals
  inserting into rules, 242
  masking for, 204
LKUP function, 209, 402, 410
locale, argument, 310
LOG_REC function, 404
LOOKUP_KEY keyword environment variable, 57, 209, 410, 434
Loop control
  Manual Loop Control dialog box, 238
  overview, 235
  troubleshooting, 237
  using, 235
Managing Installed Features, 513
Manual Loop Control
  dialog box, 238
Map component file
  naming conventions, 396
MapBuilder, 19, 20, 21, 87, 89, 90, 91, 99, 100, 227, 234
  default values, 228
  loop control, 235
  mapping data, 230
  modeling session helps, 230
  overview, 33, 227, 228
  Preferences dialog box, 230
  processing messages, 234
  using, 229, 231
Masking characters
  for dates, 205
  for justifying, 201
  for literals, 201
  for numbers, 195
  for positive/negative sign, 199
  for time, 206
  for triads, 201
Migrating data
  access models, 494
  attachment files, 498
  data models, 495, 500, 502
  environment files, 499
  files to migrate, 482
  overview, 483
  planning, 482
  Profile database, 485, 488, 491
  steps to, 483
Model. See also Data model
Modeling. See also Data modeling
namespace
  definition, about, 290
Naming Conventions, 167, 414
Negative sign
  masking for, 199
non-canonical format, 307
non-deterministic
  argument, 310
Null condition
  description of, 224
  inserting into rules, 240, 241, 242, 243, 249, 252, 253


Numbers
  handling, 195
Numeric data
  formatting, 194
  masking characters for, 195
Open Perspective, 517
OTCallParser.att, 283, 285
otcsvr, 420
OTEnvelp.att, 50, 55, 84
  using, 67, 69
OTmdl, 219
otrans, 420, 427, 447, 494
OTRecogn.att, 50, 55, 56, 84, 478
  using, 65
OTReport.sh, 474
  description of, 473
otrun.exe, 412, 427, 431, 432
  overview, 420
otstdump utility, 492
otxmlcanon, 283
  about, 282
OTXMLPre.att, 283
outbound processing
  about, 285
  errors, 285
Outbound X12 Values dialog box, 68
Output
  sorting, 211
Output data
  requirements for translating, 388
Output file
  specifying a secondary, 210
  viewing in Workbench, 441
page element, 288
para element, 289
Parameter
  prompting for, 256
Parent
  overview, 37
Parse
  command line syntax checking, 259
  translator and Workbench syntax checking, 260
Parse on Errors dialog box, 258
parser
  about, 282, 293
  input, 307
  input argument, 311
  input data stream, 314
  input default, 311
  invoking, 309
  otxmlcanon, 283, 307
  output, 307, 316
  validation error, 309
  what it does, 293
Permissions
  guidelines when migrating data, 484
Positive sign
  masking for, 199
Post-condition
  defined in access model, 40, 45
  understanding, 389
  values for, 45
Pre-condition
  defined in access model, 40
  understanding, 389
  values for, 42
Preferences dialog box
  default values, 228
  overview, 230
Present mode rules, 223
pre-translation environment
  errors, 283
  translating with, 283
Processing flow, 58
Profile database, 209, 422
  changing values, 404
  checking, 496, 502


  cross-references, 66, 410
  data model names, 398
  defining, 65
  development, 493
  exporting, 492
  extensions, 413, 482, 483
  functions of, 66, 407
  hierarchy codes, 408, 409
  import/export overview, 481, 493
  importing, 492
  key prefix, 394, 399, 402
  lookups, 399, 407, 409, 410, 411, 412
  migrating, 485, 488, 491, 493
  overview, 51
  production, 494
  trading partner lookup, 65, 67
  troubleshooting, 492
  values, 49, 387, 408, 411
Profile Database Interface Worksheet, 409, 410
prolog, argument, 310
quotation mark symbol, character entity, 293
Radix
  definition of, 454
  IO detail, 429
  key phrase, 450
  rule functions, 428
Record
  data mapping for, 35
Record lock
  acquiring. See also Profile database
referenced DTD, 288
References
  explicit, 416, 417
  relative, 416, 417
release character, 294
  argument, 310, 311
Reports
  generating user-defined, 472
  on translations, 472
root element, 288
Rule
  modes, 223
  notebook, 240
RuleBuilder, 227
  dialog box, 239
  overview, 33, 223, 227
Rules
  adding, 240
  applying changes, 243, 250, 252, 253
  checking the syntax of, 256
  completing via prompts, 256
  copying to Clipboard, 254
  cutting to Clipboard, 254
  methods of creating, 227
  modifying, 240
  overview, 49, 223
  reasons for using, 49
  types of conditions, 224
  using to set trace level, 444
rules for well-formed documents, 286
Run dialog box, 412, 424, 443
Runtime
  syntax checking, 263
Save As dialog box, 219
Saving
  data model, 218
Segment
  data mapping for, 35
Session log
  overview, 53
Session number, 53, 57, 421, 442, 472
Session Output dialog box, 435, 438
SET_CHARSET function, 43
SET_DECIMAL function, 197, 451
SET_ERR keyword, 405, 406
SET_EVAR function
  enveloping, 67
  explanation of, 407
  rules, 444
  using with trace, 444, 456


SET_FIFTH_DELIM function, 451
SET_FIRST_DELIM function, 45, 451
  description, 43
SET_FOURTH_DELIM function, 451
SET_LKUP function, 410
SET_RELEASE function, 451
SET_SECOND_DELIM function, 42, 451
SET_SUBS function, 409
SET_THIRD_DELIM function, 42, 451
SET_XREF function, 410
Share a Workspace Project, 520
Sibling
  overview, 37
Sign
  formatting characters for, 199
single environment
  errors, 283
  translating with, 282
Sort
  data model items, 211
Sort dialog box, 212, 213
Source data model
  overview, 38
Source model. See Source data model
special characters for XML, 293
Standard Generalized Markup Language, 282
Standards dialog box, 209
Steps to data modeling, 387
STRCAT function
  concatenating, 65
  logic, 67
STRSUBS function, 450
STRTRIM function
  concatenating, 65
  logic, 67
substitution characters, 293
Substitutions
  inheritance between trading partner levels, 408
  understanding for data modeling, 409
Syntax
  checking error conditions, 258
  checking rules, 256
  items not checked, 265
T_ACCESS keyword environment variable, 52, 56, 58, 434, 475, 477, 478
T_MODEL keyword environment variable, 52, 56, 58, 435, 475, 477, 478
Tag item
  icon, 190
  overview, 35
  setting match value for, 208
tags, well-formed rules for, 286
Target data model
  overview, 38
Target model. See Target data model
Temporary variable
  overview of support for, 225
Thousands separator character. See Triad
Trace level
  setting, 427
  setting in Run dialog box, 431
  table of values, 427
Trace log
  example output, 456
  organization of, 447
  overview, 53, 443
  using for debugging, 449
  viewing via Workbench, 445
  ways to set, 443
Trace Settings dialog box, 429, 443
TRACE_LEVEL keyword environment variable, 57, 435, 444, 456
Trade Guide
  prerequisites, vi
  using to report on translations, 472
Trading partner
  database key, 407


  inheritance between levels, 408
  recognizing during processing, 65
Trading Partner Profile dialog box, 68, 407
Transaction Modeler Workbench. See Workbench
translating
  with pre-translation environment, 283
  with single environment, 282
Translating
  arguments for, 434
  at the command line, 432
  example command lines, 438
  overview, 420
  preparation for, 421
  reporting on, 472
  terminating, 440
  trace log for, 443
  troubleshooting, 442
  UNIX, 432
  using Workbench interface, 423
Translation requirements
  obtaining, 388
Translation Session ID file
  overview, 53
translator, 307
Translator
  syntax checking, 263
Triad
  masking for, 204
  using, 49, 195, 204
Troubleshooting
  error messages, 237
  Manual Loop Control dialog box, 238
  MapBuilder, 237
  UNIX, 442
  Windows Concurrent, 443
ulimit command, 449
  using, 449
universal resource identifier. See URI
Universal Resource Identifier. See URI
UNIX
  access permission, 484
  maximum command line characters, 437
  migrating applications, 481
  translating at the command line, 432
  troubleshooting translations, 442
Update Manager, 506
URI, 288, 290
User slot
  terminating, 440
valid documents, about, 287
valid, definition, 286
validation
  argument, 310
Variable length data, 35
Variables
  overview, 48
  supported by Application Integrator, 225
Verification code list. See Code list
W3C
  about, 282
well-formed, definition, 286
Windows
  access permission, 484
  maximum command line characters, 438
  migrating applications, 481
  translator program, 420
Windows Concurrent
  access permission, 484
  as multiple user system, 420
  Control Server, 421
  invoking translations, 433
  running translations, 420
  troubleshooting, 443
Windows operating system
  invoking parser, 309
Workbench
  development tools, 19, 24, 26, 33
  overview, 17, 33
World Wide Web Consortium. See W3C


XML
  about, 282
  inbound processing, 282
  requirements, 286
XREF function, 65, 66, 67, 410
XREF_KEY keyword environment variable, 57, 65, 67, 410, 435
Xrefs/Codes dialog box, 410, 411, 412
XSD
  about, 290
  hierarchy, 290
  structure, 291, 292
XSD schemas, supporting, 309
