AI5.2-Workbench User Guide
275-3CD
Application Integrator™
Workbench
User’s Guide
Version 5.2
January 2010
PREFACE
  About the Workbench User’s Guide
  Documentation Conventions
    Typographical Conventions
    User Input
    Notes, Hints, and Cautions
    Screen Images
    Tables
  Prerequisites for Workbench
    System Prerequisites
    User Prerequisites
  Customer Support
    Workbench Customer Support
    Before you call Customer Support
    Calling for Customer Support
  Listing the Contents of the Disk

SECTION 1. WORKBENCH USER INTERFACE OVERVIEW
  Overview of Workbench Features
    Menu Bar
    Tool Bar
    Views

SECTION 2. DATA MODELING PROCESS OVERVIEW
  Overview of Workbench Features
    Workbench Development Tools
  Defining the Structure of Input and Output Data
    Defining Items
    Tag Items
    Container Items
    Group Items
  Relationship of Data Model Items
    Source and Target Data Models
    The Access Model
  Understanding the Role of the Access Model
    Pre-Condition Values
    Base Values
    Post-Condition Values
  Associating Input Data with Output Data
    Variables
    Rules
    Environments (Map Component Files)
  Other Data Modeling Components
    Profile Database
    Administration Database
    Environment Files
    Translation Session Files and Trace Logs
    Include Files
  Understanding Environments (Map Component Files)
    Purpose of Environments
    Environment Sequence of Parsing
    Processing Flow within the Model
    Single Environment Process Flow
    Multiple Environments
    Changing Environments During a Translation
  Map Component Files for Enveloping/De-enveloping
    Processing Using OTRecogn.att (De-enveloping)
    Processing Using OTEnvelp.att (Enveloping)
  Traditional Data Models vs. Style Sheets
    Differences Between Traditional Data Models and Style Sheets
  Compliance Checking

SECTION 3. WORKBENCH OVERVIEW
  Accessing Workbench
    Workbench Preferences
    Advanced Settings Preferences
    Derived Links Feature Preference Page
    Macro Definitions
    Map Builder Preferences
    Menu Preference Page
    Server Connection Preferences
    Version Validator Preference
    Views Preference Page
    XSD Validator preferences
  Overview of the Map Editor
    Opening an Existing Environment
  Rearranging Views and Editors
    Setup
    Drop cursors
    Rearranging views
    Tiling editors
    Rearranging tabbed views
    Maximizing
    Fast views
    Creating fast views
    Working with fast views
    Tips and Tricks
  Map Editor Work Area
    Map Definition Tab
    Mapping Tab
    Source Properties Tab
    Target Properties Tab
    Input Tab
    Output Tab
    Tool Bar Options
    Menu Options
  Overview of the Model Editor
    Opening an Existing Data Model
    Model Editor Work Area
    Overview Tab
    Properties Tab
    Model Text Tab
    Tool Bar Options
    Other Toolbar Options
    Menu Options
  Additional Editors
    Access Model Editor
    Include File Editor
  Views
    Views Overview
  Interactive Process Manager (IPM)
  Navigator View
    Toolbar
    Icons
    Context Menu
  Narrowing the scope of the Navigator view
  Sorting resources in the Navigator view
  Showing or hiding files in the Navigator view
    Trading Partner Navigator
    Trading Partner Attribute Viewer
    Built-Ins
    Message Variables
    Performs
    Problems
    Outline
    Properties
    Remote Site Navigator

SECTION 4. CREATING MAP COMPONENT FILES (ENVIRONMENTS)
  Defining a Map Component File
    Recommended Naming Convention
    Defining a New Map Component File
    Modifying an Existing Map Component File

SECTION 5. CREATING DATA MODELS FOR EDI AND APPLICATION DATA
  Working with Data Models
    Defining a New Data Model
    Working with Standard Data Models
    Working with XML based Data Models
    Converting SEF Format to a Data Model
    Converting COBOL Copy Book Format to a Data Model
    Defining a Data Model Item
    Deleting a Data Model Item
    Assigning a Data Model Item Type
    Assigning Data Model Item Attributes
    Data Hierarchy
    Including Files in Data Models
    Saving a Data Model
    Closing the Editor

SECTION 6. BUILDING RULES INTO DATA MODELS
  Overview of Rules Entry
    Modes for Processing Rules
    Types of Rule Conditions
    Variables
    Keywords and Functions
    Two Methods for Creating Rules
  Using MapBuilder
    Overview
    Setting Preferences
    Loop Control
    Troubleshooting
  Using RuleBuilder
    RuleBuilder Window
    Accessing RuleBuilder
    Adding Rules
    Inserting Functions and Keywords
    Cutting, Copying, and Pasting Rules
    Finding the Next Parameter
    Syntax Checking of Rules

SECTION 7. COMPARE AND REPORTS
  Comparing Two Model Files
    To Compare Two Files
    Context Menu on the Compare View
    Next Difference
    To Find items
    To Go To a Data Model Item
  Report Generation
  Data Model Listing Report
    To access the Data Model Report dialog box
    To Run the Data Model Listing
  Source to Target Map Report
    To access the Source to Target Map Listing report
    To run the Source to Target Map Listing

SECTION 8. AI VERSION VALIDATOR
Preface

Section    Description

Section 1. Workbench User Interface Overview
    Provides a high-level overview of the Workbench user interface, including toolbars and menu items.

Section 2. Data Modeling Process Overview
    Discusses how translations are processed and the use of Workbench to create the proper structures.

Section 3. Workbench Overview
    Provides a detailed overview of each menu, toolbar, and work area.

Section 4. Creating Map Component Files (Environments)
    Gives instructions for creating the map component files, or environments.

Section 5. Creating Data Models for EDI and Application Data
    Describes complete procedures for creating data models.

Section 6. Building Rules into Data Models
    Describes how to use RuleBuilder to add processing logic to your data models.

Section 7. Compare and Reports
    Describes the Compare and Reports options.

Section 9. Working with Macros
    Describes how macros can be recorded and re-used.

Section 10. XML Mapping and Processing
    Provides complete procedures for creating style sheets to process XML data.

Section 11. The Data Modeling Process
    Discusses the steps of the data modeling process, including conventions and tips.

Section 12. Translating and Debugging
    Provides procedures for translating and debugging data models.

Section 13. Migrating to Test and Production
    Provides instructions on migrating from development to test, or from development or test areas to production areas.

Section 14. Update Manager
    Provides information on how to install features/patches using Update Manager.

Section 15. CVS Integration
    Describes how to use the CVS repository for version management of maps.

Index
    Provides an alphabetical list of subjects and corresponding page numbers where information can be found.
Typographical Conventions

Regular    This text style is used in general.
Courier    This text style is used for system output and syntax examples.
Italic     This text style is used for book titles, new terms, and emphasized words.

User Input    In this document, anything printed in Courier and boldface type should be entered exactly as written. For example, if you need to enter the term “userid,” it is shown in the document as userid.
Screen Images    The screen images in this manual are taken from Workbench running on the Windows® 2000 operating system. If you are running Workbench on Windows® XP, the actual screens (windows and dialog boxes) may differ slightly in appearance. Differences between platforms are slight (such as border thickness) and do not affect features or usage.

Tables    Tables appear frequently in this manual. They have headings with dark underlining. The body has no gridlines (in most cases). The end of the table is indicated by double underlines.
System Prerequisites    Refer to the Control Server Installation Guide for information on the hardware and software requirements to run Application Integrator™ on Windows® operating systems.
User Prerequisites    This guide assumes that AI users have an adequate understanding of the following:

- Mouse and graphical user interface (GUI) experience, specifically windows and dialog boxes.
- Basic knowledge of their operating system and an on-line editor.
- Program concept knowledge, including:
  − An understanding of data organization
  − An understanding of data manipulation
  − An understanding of program process flow
  − An understanding of testing and debugging
- Knowledge of electronic data interchange, database management, and systems reporting.
- Knowledge of the standards implementation applicable to their environment.
Workbench Customer Support    Customer Support provides:

- Program updates
- Help Desk support
Before you call Customer Support    If possible, attempt to resolve the problem internally or with the help of the documentation provided, including the printed, on-line, and training documentation.

Calling for Customer Support    Customer Support can help you more effectively if you follow these steps when you call:
1. Call the GXS Help Desk for support at (800) EDI–CALL Ext. 3005. You can also send an email to aihelp@gxs.com.
   The above phone number and email address are for AI Help in the Americas region. For other regions, go to www.gxs.com, choose Customer > Customer Support, and select the appropriate region.
2. To retrieve the version information of all the installed components, select Help > Version Information in Trade Guide 5.2.
   Version information appears along with the AI Control Server Program Version and Build Dates.
Windows® Users

To access the version information of all the installed components and program build dates for AI Control Server from Windows®:

a. From Windows Explorer (Windows® 2000 or Windows® XP), click the filename cservr.exe.
b. With the filename highlighted, choose Properties from the File menu. The Properties dialog box opens and displays information such as the filename, path, last change, version number, copyright, size, and other attributes.
c. Open a command prompt session and browse to the directory where Application Integrator™ is installed. To list the versions of cservr.exe and otrun.exe, type otver.bat (or simply otver) on the command line.
- Plug-in Details: Gives details about the various plug-ins that make up the product, with the version number, name, and provider of each plug-in.
6. Make sure the person placing the support call has a thorough
knowledge of the issue for which you are seeking assistance.
Listing the Contents of the Disk    To view the contents of the Workbench installation CD, place the CD in the CD drive of the system and browse through the contents.

To review the contents of the Workbench installation programs, you must first install the product. Then use File Manager, Windows Explorer, or the MS-DOS dir command to list the contents of the \WB52 directory.
[Figure: The Workbench window, showing the editor work area and views]
Menu Bar    The menu bar contains all available menu items that can be used from within Workbench.

Note: Not all menu items are always enabled. Some do not apply to certain scenarios and are therefore disabled.
File Menu:
Edit Menu:
Navigate Menu:
Search Menu:
Test Menu:
Utility Menu:
Tools Menu:
The Tools menu contains various operations that can be used within Workbench.

Note: Some of the tools listed below are available only in the relevant scenarios (such as the Attachment and Model Editors).
Link Menu:
Window Menu:
Help Menu:
Tool Bar    The tool bar contains icons that can be used for various functions within Workbench.

Note: Some of the icons listed below are available only in the relevant scenarios (such as the Attachment and Model Editors).
Icons    Functions

Creates new projects, data models, or map component files.

Makes a copy of the current data model item. The item is copied and stored on the clipboard until you paste it.

Pastes a copy of the stored item (DMI). The item on the clipboard is stored until you perform another cut or copy.

Duplicates a selected data model item (DMI) at the same hierarchy and level. All attributes of the data model item are duplicated except for the data model item name. The duplicated data model item is given a system-assigned unique name, which can be changed.

Moves the currently highlighted data model item one level right to restructure the model's data hierarchy. If the preference is set to show the target model as "Right to Left Tree", this action moves the currently highlighted data model item one level left.

Moves the currently highlighted data model item one level left to restructure the model's data hierarchy. If the preference is set to show the target model as "Right to Left Tree", this action moves the currently highlighted data model item one level right.

Adds an empty data model item below the currently highlighted item. The newly created DMI has a default name, which can be changed.

Adds an empty data model item above the currently highlighted item. The newly created DMI has a default name, which can be changed.

Navigates to a specified DMI. You can select the available DMI from the "Go to" dialog.

Displays only the highest level of the data model.

Displays all levels of the data model.

Refreshes the mapping area and redraws DND link lines.

Displays the links, or derived links, for maps that are developed outside of Workbench.

Deploys (copies) relevant mapping files to a directory (local or remote). These files can be used during debugging tasks or for moving files into a production functional area.

Checks the syntax of the current mapping file in the active Text Editor.

Applies the changes made to the current Data Model file in the active Model Text Editor.
Performs View: Displays include files used in the data model, and
the PERFORM declarations contained within.
The structure of the input and output data is described with four kinds of data model items: defining items, tag items, container items, and group items.

Note: When working with an XPath data model or style sheet, the structure is already defined in a referenced document type definition (DTD) or schema (XSD), so defining the structure in the XPath model or style sheet is not necessary.
Defining Items    Defining items are the lowest-level descriptors in the data model. Examples of defining items include elements or fields. They define a data string’s characteristics, such as size and type. Some examples of item type characteristics are:
Alpha characters (letters only [A-Z] [a-z])
Numeric characters (numbers only [0-9])
Alphanumeric characters (a combination of numbers and
letters [A-Z] [a-z] [0-9] and the “space” character)
Date
Time
You can specify that defining items are variable in length by using
an item type that includes delimiters to denote the end of one field
and the start of the next. Or you can define items that are fixed in
length by specifying the number of characters in the field (in which
case, no delimiters are necessary).
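As an illustration only, two defining items might be declared as follows, using the item-type and occurrence notation that appears in the data model examples later in this guide (the item names here are invented, and the item types shown, such as ElementAN and AnyChar, depend on the access model associated with your data model):

    InvoiceNumber { ElementAN @1 .. 10 none
    }*1 .. 1
    Filler { AnyChar @0 .. 10 none
    }*0 .. 1

Following the later examples, the @1 .. 10 range covers the minimum and maximum field size, and the *1 .. 1 range covers the minimum and maximum occurrences.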
Tag Items    Tag items enable you to identify different records or segments.
The “tag” is the string of data at the beginning of the
record/segment. A record delimiter in the input or output may
separate tag items. If multiple types of records exist in a file, there
is normally a “tag” referenced to differentiate each type. For
example, a heading record may begin with an ‘H’ in the input
stream and a detail record may begin with a ‘D.’
Tag items can be of fixed length or variable length.
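As a sketch, the ‘H’ and ‘D’ records just described might be modeled with tag items like the following (the names are invented, the available tag item types depend on your access model, and the Segment tag form is borrowed from the BIG example later in this guide):

    Heading { Segment "H"
        HeadingDate { ElementAN @1 .. 8 none }*1 .. 1
    }*1 .. 1
    Detail { Segment "D"
        ItemNumber { ElementAN @1 .. 10 none }*1 .. 1
    }*0 .. 999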
Fixed Length Record    In a fixed-length record, you determine the number of characters allowed in each field. If the data is not long enough to fill each field, space characters are added (either to the beginning or the end of the field, depending on whether you have right or left alignment specified for each field).
Record: D 3 5 0 0 1 A B C 4

    Tag: D
    Field of 5 characters: 35001
    Field of 3 characters: ABC
    Field of 1 character: 4
Variable Length Record    A variable-length tag item uses delimiters to denote the end of one field and the start of the next. You determine the minimum and maximum number of characters to be used in each field. If there is no data available for a particular field, the field’s two delimiters appear next to each other with no spaces between them. (See Field 4 in the example below.)
Record: BEG*00*NE*00123**010197
Container Items    Like a tag, a container is used to group two or more defining items. Unlike a tag, a container does not include a tag (or match value) at the beginning, and in its absence a placeholder in the data stream is usually required. For example, in an X12 translation session where a composite element is used to determine a measurement based on data within the input stream (height, width, and length, for MEA04), the composite is defined using a container in Application Integrator™.

Depending on your standards implementation, containers are not used as often as the other data modeling items. A list of the container item types found in the access models supplied by Application Integrator™ is given in the appendices of each of the standards implementation guides (for example, the ASC X12 Standards Plug-In User’s Guide).
Group Items    When two or more items have the ability to repeat, or “loop,” you use the group item to define this characteristic. The group item does not reference any data in the input or output and, therefore, has no value associated with it. For example, when an invoice contains a series of individual line items, each line item is characterized as a record, and the records are grouped together by a group item.
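A sketch of that invoice example, with invented names and occurrence bounds, might look like this (the group item LineItems holds the repeating line-item records):

    LineItems {
        LineItem { Segment "D"
            Description { ElementAN @1 .. 30 none }*1 .. 1
        }*1 .. 1
    }*1 .. 999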
Source and Target Data Models    The four data model items (defining, tag, group, and container) are used to describe the structure of the input data in the source data model and the structure of the output data in the target data model. These two data models also contain actions to be performed on the data to correctly map it from the source to the target.
[Figure: Each source data model maps to a target data model; for example, OmniTrans Data Model 1 feeds Application Integrator Data Model 1, and OmniTrans Data Model 2 feeds Application Integrator Data Model 2.]
The Access Model    Each data model must be associated with an access model. The access model contains a definition of the data model item types available for the defining, tag, and container items that are to be associated within this data model. The access model information describes the items that are to be parsed (input) or constructed (output) in the data streams. (The group type is always available, although it is not specifically described in the access model.)
For example, a data model item named “InvoiceDate” could be
assigned an item type of “DateFld” in the source data model. On
examining the access model associated with the data model, we
find that item type “DateFld” is defined by an Application
Integrator™ function #DATE where either spaces or all zeros in the
data field are valid.
Application Integrator™ supplies access models for the standards
implementation. See Source and Target Data Models for a list of
these access models and for more information on the item types in
these files.
The access model sets three conditions for each item type: the pre-condition, the base, and the post-condition. The pre-condition describes any rules about the data that precedes this item, for example, a leading delimiter or “tag.” The base describes the value or character set allowed for this item, for example, the set of alphabetic characters A-Z and a-z. The post-condition describes any rules about the data that follows this item, for example, a trailing delimiter.
If you review any of the access models supplied by Application Integrator™, you will see these item type specifications. An item type appears in the Access Type list box when its access model is associated with a data model. A base value in parentheses, such as (Alpha), refers to another non-display base element in the access model, which sets a range of acceptable alphabetic values. A base value preceded by a pound sign (#), such as #CHARSET, refers to an access model function that precisely describes the data with which it is associated.

Note: The caret (^) must appear in front of the base condition for the item type to appear in the Access Type list and be used in the data model.
Tag Type          Sets the size of the tag and the post delimiter.
Defining Type     Sets the pre-condition value (delimiter before data), the character set, and any special formatting.
Container Type    Sets the base value to CONTAINER. See the description of each “composite” item in the appropriate standards manual for examples.
When defining each data model item in your data model, you specify an item type. The list of item types available is based on the access model you associated with the data model. The complete set of attributes associated with each data model item (such as the possible format or maximum occurrence) is also related to the item type.
Pre-Condition Values    The following is a list of some pre-condition values. Certain values may not apply to your standards implementation.

Pre-Condition    Description
Elem_delim       Defined by the access function #SECOND_DELIM or the data model function SET_SECOND_DELIM.
Base Values    The following is a list of some base values. Certain values may not apply to your standards implementation.

Value          Description
Seg_term       Defined by the access function #FIRST_DELIM or the data model function SET_FIRST_DELIM.
RecordDelim    Defined by the access function #FIRST_DELIM or the data model function SET_FIRST_DELIM.

For a list of all data model item types, refer to the appendices of the Application Integrator™ standards implementation manuals, for example, the ASC X12 Standards Plug-in User’s Guide.
Parsing Blank or Empty Records    Data models must be modified to enable them to read through empty records, that is, records that contain only the record delimiter character (line feed). For Application Integrator™ versions 3.0 and greater, the items LineFeedDelimContainer and AnyCharO have been added to the OTFixed.acc access model.
Use the OTFixed.acc access model with the following examples. Use the following input data, where (l/f) represents a single character, the line feed:
1AB(l/f)
2ABCD(l/f)
(l/f)
4DEFG(l/f)
(l/f)
Example 1: In Application Integrator™ versions prior to 3.0, the
following snippet of the data model would parse and display each
record:
Init {
    []
    SET_FIRST_DELIM(10)
}*1 .. 1
Group {
    Record { LineFeedDelimRecord ""
        Field { AnyChar @0 .. 10 none }*0 .. 1
        []
        SEND_SMSG(1, STRCAT("READ: ", Field))
    }*0 .. 10
}*1 .. 1
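The guide’s second example is not reproduced here. As a hedged sketch only, the version 3.0-and-greater items named above (LineFeedDelimContainer and AnyCharO) might be substituted into the same structure so that the records containing only a line feed are read through:

    Group {
        Record { LineFeedDelimContainer ""
            Field { AnyCharO @0 .. 10 none }*0 .. 1
            []
            SEND_SMSG(1, STRCAT("READ: ", Field))
        }*0 .. 10
    }*1 .. 1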
Forcing Blank or Empty Records    Data models must be modified to enable them to write out empty records/elements, that is, records that contain only the record delimiter character (line feed) or empty fields within the record. The items LineFeedDelimDefaultRecord and LineFeedDelimContainer are available in the OTFixed.acc access model.
Use the OTFixed.acc access model with the following examples.
Output:
HDRABC DEF(l/f)
Associating Input Data with Output Data    The second part of data modeling uses variables and rules to map data between the input (source) and output (target) data models. Workbench provides two graphical tools, RuleBuilder and MapBuilder, for defining rules and associating data model items with variables. For example, a date function can be used to accept the input date and calculate a new date.
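As a minimal sketch of such a mapping (the item and variable names are invented, the assignment syntax mirrors the ARRAY->Big_01 = BIG_01 form shown later in this guide, and the date calculation itself is left out because the function name varies by implementation):

    ;source model rule: capture the input date on a variable
    []
    VAR->InvoiceDate = InputDate

    ;target model rule: place the (possibly recalculated) date
    ;on the output item
    []
    OutputDate = VAR->InvoiceDate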
Environments (Map Component Files)    The final step to map the data is to create an environment, or map component file (hereafter referred to as MCF). An MCF defines all the pieces that need to be brought together to configure the translator to process in a certain way. An MCF consists of components that control what data is to be translated, such as the input/output files and the source, target, and access models to be used.
You can attach another MCF definition (using the keyword
ATTACH) to reconfigure the translator during processing. MCF
files are given the suffix “.att” (for example, OTRecogn.att,
OTEnvelp.att) and are referred to as “map component files.”
Several examples of the functions of a translation environment are:
Processing fixed length data
Processing variable length data
Bypassing data
Generating acknowledgments
Recognizing data
Enveloping data
Committing output streams
Profile Database The Profile Database is a resource of values that can be accessed
during a translation session. The Profile Database stores:
Communication and trading partner profiles
Substitutions, used to replace a label with a value
Cross-references, used to replace a value with another value
Verifications, used to verify a value against a specified code
list
Refer to the Trade Guide Help System for more information on the
Profile Database.
Administration Database    The Administration Database consists of one or more files that are used to capture information from translation sessions. The Administration Database provides you with information for:
Process tracking, recording information on all translation
sessions
Archive tracking, recording information on archived
documents
Message tracking, recording each outbound document
translation along with a status
Bypass tracking, recording information on all exception data
(errors)
Refer to Section 12. Translating and Debugging for hints on how to
use the Administration Database reporting features for debugging.
Refer to the Trade Guide Help System for more information on the
setup and full reporting features of the Administration Database.
Environment Files An environment file (given the extension “.env”) can be used to
enhance the current configuration of the translator. It declares
user-defined environment variables with their associated values,
for example:
ACTIVITY_TRACK_SUM="DM_ActS"
ACTIVITY_TRACK_DET="DM_ActD"
MESSAGE_TRACK_IN="DM_MsgI"
MESSAGE_TRACK_OUT="DM_MsgO"
EXCEPTION_TRACK_SUM="DM_BypS"
EXCEPTION_TRACK_DET="DM_BypD"
Caution: You must not change the tsid file because the
administration reporting may become corrupted.
For Unix and Linux users, the format of the translation session
control number is user-definable. See the Control Server Installation
Guide for details on how to set the OT_SNFMT environment
variable to do this.
For Windows® 2000 and Windows® XP users, the session number
has the format 6C (six places long, numeric only).
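As a sketch for Unix and Linux, assuming a POSIX shell and that a 6C-style token is a valid value (see the Control Server Installation Guide for the actual format codes):

    # set the session number format before starting the translator
    OT_SNFMT=6C
    export OT_SNFMT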
Trace Logs
The trace log is a log of the translation process. It shows the process
flow through the data models, including the assignment of
variables and their associated values, conditions and actions, and
map component files. The trace log (or trace) can be set to various
levels from minimal to full details. The trace log provides an
immediate and detailed debugging tool.
When a translation is run, and depending on the data model functionality and process, the system can automatically create up to three trace logs:
Include Files    Include files contain only rules, which can be called from within data models or style sheets. Include files must have .inc as their extension and can be user-created. Below is a sample include file (UserDefined.inc) and its declaration of a PERFORM:
DECLARE Perform_Name () {
    []
    ;rules to be performed each time Perform_Name
    ;is called from a data model
}
Where:

Perform_Name is the name given to the PERFORM. The rules in Perform_Name are processed each time Perform_Name is called from a data model.
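As a sketch, a rule in a data model might then invoke the declared PERFORM by name (the exact invocation syntax is an assumption here, modeled on the declaration above):

    []
    ;call the PERFORM declared in UserDefined.inc
    Perform_Name()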
[Figure: An environment ties together the input data, the source access model (parsing), the target access model (construction), and the output data.]
The data structure may contain only group items, with no input or
output occurring, or just rule processing logic. Information placed
on variables in the parent and grandparent environments can be
referenced in child environments.
Processing Flow within the Model    Within a source or target data model, rules processing flows down the hierarchy from parent to child (starting with the first child encountered) and then back to the parent, as per the following illustration:
In each case, the current status is returned from the child to the parent data model item. The process moves down the data structure from child to child; once the children are read, processing returns to the parent item and then proceeds to the next parent item.
[Figure: Environment layers; a recognition environment attaches child environments (ATTACH 1, ATTACH 2).]
[]
ATTACH VAR->map_component_filename

Here the argument to ATTACH is a variable name; a quoted map component file name can be used instead.
When ATTACH is encountered during a translation session, the
current environment’s processing stops. The map component file
associated with the ATTACH statement is opened and processing
begins. Processing continues in this environment until it completes
successfully, an error is returned, or the data model keyword
ATTACH is encountered again.
Processing returns to the parent environment immediately
following the data model keyword ATTACH. The error code
returned to the parent environment can be captured and errors
handled, as per the following example:
[]
ATTACH "OTX12NxtStd.att"
[]
VAR->RtnStatus = ERRCODE()
[VAR->RtnStatus > 0]
<actions to recover from error>
10. From the File menu choose Save (or use the Save icon on the
tool bar) to save the changes made to your model.
Common ATTACH Errors Encountered    During translation processing, the following errors are commonly found when there are problems with the map component file definition. Refer to Appendix E, Application Integrator Runtime Errors, in the Workbench User’s Guide Appendix for a complete description of these errors.
Processing Using OTRecogn.att (De-enveloping)    The OTRecogn.att file is a map component file typically used for processing public standards into application data. The rule logic necessary to perform the extraction of the values from the input stream is already included in OTRecogn.mdl, which also automatically sets the HIERARCHY_KEY keyword environment variable. To use this feature, you must define the trading partner in the Profile Database by using the Trading Partner option from the Trade Guide Profiles menu.
Recognizing the Trading Partner    The process of recognizing which trading partner needs to be read from the Profile Database is handled with the generic model OT<std>Bgn.mdl, called from OTRecogn.mdl. This model is designed to allow multiple interchanges from multiple trading partners within the input file.
To define the trading partner at the interchange level, the model
sets an environment variable XREF_KEY to the value “ENTITY.”
In the simplest of terms, this means that whenever the translator
attempts to do a cross-reference from the database, it looks for a
line or record within the database that starts with “ENTITY,” until
the environment variable XREF_KEY is changed to another value.
Once the environment variable XREF_KEY is set, the model uses
the functions STRCAT and STRTRIM to concatenate the Sender’s
Qualifier, Sender’s ID, Receiver’s Qualifier, and the Receiver’s ID
that it has read from the input file, in that sequence, and assigns the
results to a temporary variable VAR->OTICRecognID. At this
point a cross-reference is performed using the function XREF and
passing in some required parameters as per the function:
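The call itself is not reproduced above. As a heavily hedged sketch of its shape (the variable names for the qualifiers and IDs, and the XREF parameter list, are hypothetical; only STRCAT, STRTRIM, XREF, and VAR->OTICRecognID are named in the text):

    []
    ;concatenate the trimmed sender and receiver qualifiers and IDs
    VAR->OTICRecognID = STRCAT(STRCAT(STRTRIM(VAR->SndQual), STRTRIM(VAR->SndID)),
                               STRCAT(STRTRIM(VAR->RcvQual), STRTRIM(VAR->RcvID)))
    ;cross-reference against the ENTITY view (parameter list hypothetical)
    VAR->OTHierarchyID = XREF(VAR->OTICRecognID)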
Processing Using OTEnvelp.att (Enveloping)    The OTEnvelp.att file is a map component file typically used for processing application data into the public standards (enveloping). Unlike the processing of public standards, where generic models are provided, each application system requires customized models. When the models are created, the entity lookup logic must be included. To do this, you must define the trading partner in the Profile Database by using the Trading Partner option from the Trade Guide Profiles menu.
Refer to the section on enveloping in the appropriate standards
implementation guide (for example, the ASC X12 Standards Plug-in
User’s Guide) for instructions on how to use this environment. For
this section, ASC X12 is used as the standard being mapped to.
The logic to perform extraction, concatenation, and entity lookup,
to obtain a trading partner view into the database, needs to be
included in the custom application model. The logic is represented
below:
[]
;sets the cross reference view into the database as
;"ENTITY"
SET_EVAR("XREF_KEY", "ENTITY")
[ VAR->OTXRefStatus != 0 ]
;entity lookup failure
EXIT 501
[]
;sets the trading partner's substitution view
;into the database
SET_EVAR("HIERARCHY_KEY", VAR->OTHierarchyID)
[Table: Error-code behavior for the four combinations of RECOVERY (Yes/No) and ON_ERROR() (Yes/No).]
Data Access Parsing and Recovery    Error codes return descriptive meanings. With RECOVERY on, the file position is reset so that the next element can be read if an error occurs:

VAR->OTPriorEvar = SET_EVAR("RECOVERY", "Yes")
By default, in the enveloping and de-enveloping generic models,
recovery is set to “Yes”.
The following table shows the error codes that are returned
depending on whether recovery is set to “Yes” or “No”, and which
mode of rules are entered upon returning from the access model
with the specific error code.
Rules of Recovery    These rules apply when the keyword environment variable RECOVERY is set. RECOVERY processing takes place only on defining data model items. When the translator reads a character outside the character set, reaches end of file, or reaches the maximum element/field size as defined, the following processing may occur:
1. If no data has been read, then read the post condition. If the
post condition is read or no post condition is defined for the
item, error 141 is returned for Tags and error 190 is returned
for Composites and Definings.
2. If the minimum size has been met and no post condition is
defined, error 0 is returned back to the data model.
3. If the minimum size has not been met and a post condition is
defined, the next character is read for the post condition. If the
post condition is read, error 176 is returned.
4. If the post condition is not read, continue reading until the post condition, end of file, or a size of 4096 is read. Return error 191 if the post condition is finally read; otherwise return 200.
5. If the maximum defined size is met, read the next character for the post condition. If no post condition is defined, return 0. If one is defined but not present, read until the post condition; return error 177 if it is finally read, or 200 if it is not.
6. If the item has a Date/Time/Numeric format, test the format. If the format test fails, return error 146.
7. If the item is a tag or composite, test the post condition. On post-condition failure, return error 192.
Examples of Recovery
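The worked examples are not reproduced here; as one hedged illustration of rule 3 above, consider a defining item with a minimum size of 5 (the names and item type are invented):

    Amount { ElementAN @5 .. 10 none
    }*1 .. 1

    ;with RECOVERY set, input "AB*" (where * is the post condition
    ;delimiter) reads only two characters before the post condition,
    ;so the minimum size is not met and error 176 is returned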
Rule Mode Processing    There are three types of mode processing: PRESENT, ABSENT, and ERROR. When an item other than a group parses data, it enters one of the three modes of processing:

PRESENT mode: when the error code is 0.
ABSENT mode: when the error code is -1, 139, 140, 141, 171, or 190.
ERROR mode: any other error code.

The modeler can put rules on any of these modes. To “clear the error” means that the last action in the mode results in a 0 error code. (A “null condition” by itself will clear the error, that is, reset it back to zero.) The following code shows how an error 190 would change into 0, in the case where the element BIG_01 is missing.
BIG_01 { ElementAN @1 .. 10 none
    []
    ARRAY->Big_01 = BIG_01
:ABSENT
    []
    ARRAY->Big_01 = " "
}*1 .. 1
Data model item BIG_01 is mandatory. During translation, when
no data is read for the data model items, the translator enters
ABSENT mode and processes the ABSENT mode rules. In this
example, the array variable is populated with a space character.
Occurrence Validation
The error codes -1, 139, 140, 141, 171, and 190 enter the ABSENT mode rules. If no ABSENT mode rules are defined, the error keeps its value; it is not cleared. When PRESENT or ABSENT processing is done (whether rules are defined or not) and the error code is -1, 139, 140, 141, 171, or 190, occurrence validation is checked. If this occurrence of the item is optional, the error is reset to zero and processing proceeds to the next sibling. If this occurrence of the item is mandatory, the error code is taken into ERROR mode.
Like ABSENT, ERROR mode can also clear the error. If the error is
not cleared, processing converts it to a hard error (138).
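A sketch of an ERROR mode rule clearing the error, assuming the :ERROR mode label parallels the :ABSENT label shown earlier:

    BIG_02 { ElementAN @1 .. 10 none
        []
        ARRAY->Big_02 = BIG_02
    :ERROR
        ;the null condition below clears the error
        []
        SEND_SMSG(1, "BIG_02 error cleared; continuing")
    }*1 .. 1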
Process Flow Between Elements    After rule mode processing and occurrence validation, if a non-zero error value remains, the error is changed to 138 and is returned to the parent of the item. This indicates that a hard error was found. The error propagates back up the data model structure and/or the structure of environments until it is cleared or nothing remains to back up. When the session ends, the error is reported.
Soft errors are missing value-type items. Occurrence validation processing is done before the ERROR mode rules. These errors first go into the ABSENT mode rules. If the error code is set to zero, processing continues to the next element. If the error code is unchanged, or no ABSENT rules are defined, occurrence validation is checked first. If the minimum occurrence is not met, processing enters the ERROR mode of that item. If the minimum is met, the item is not considered an error.
Processing for groups is broken down into two categories:

- Groups containing only groups
- Groups that have access items (tags, containers, defining items)

For the first type, the group loops until the maximum occurrence has been reached, unless a keyword like BREAK or RETURN is used to leave the group or the group rules processing ends with error 139 or 140. For example, a group that has the occurrence 1 to 100 will loop 100 times. If you want to break out of the group after 50 iterations, the code would look like:
Group1 {
    [VAR->OTCount == 50]
    BREAK
    []
    VAR->OTCount = VAR->OTCount + 1
}*1 .. 100
For the second type, the group loops either until the maximum occurrence is reached or until no more data is read. When no more data is read, an error 171 is returned to the group and the processing flow enters the ABSENT mode rules. If the error remains after ABSENT mode and the minimum occurrence has been met, the looping stops and processing continues to the next element. If the minimum has not been met, processing goes to the ERROR mode rules and then acts like a hard error by going to the group’s parent with error 138.

Sometimes, missing required elements are not considered hard errors (138). If the first item in a group is missing but required, the error 171 is returned to the parent. In this way, occurrence validation is checked to verify whether the occurrence has been met. If so, processing continues to the sibling. For example:
Group1 {
    Group2 {
        Tag1 { Segment "BIG"
        }*1 .. 1
    }*1 .. 1
    Group3 {
    }*1 .. 1
}*0 .. 1
Even though it is required, if the BIG segment is missing, error 171
is returned to Group1. Since the occurrence validation is 0 .. 1
(optional group), there is no hard error.
"C:\Workbench5.2.6.7\WB52\WorkBench.exe" -vm "C:\Program Files\Java\jre1.6.0_03\bin\javaw.exe" -cs localhost v -bp 5551

Would look like this:

"C:\Workbench5.2.6.7\WB52\WorkBench.exe" -vm "C:\Program Files\Java\jre1.6.0_03\bin\javaw.exe" -cs localhost v -bp 5551 -vmArgs -Xmx512M
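Here -vmArgs passes everything that follows it to the Java VM, and -Xmx512M raises the VM's maximum heap size to 512 MB.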
Workbench Preferences
Preferences are available both for the Eclipse™ software and for Application Integrator™. This section covers the Application Integrator™ preferences.

Option             Description
Restore Defaults   Restores default settings for these preferences.
Apply              Applies any changes made to the preferences.
These are the basic color preferences that you can set for the model
editor.
Option            Description
Comment Color     Color used to display comments within a data model.
String Color      Color used to display literal strings within a rule.
Keyword Color     Color used to display keywords within a rule.
Function Color    Color used to display functions within a rule.
Predicate Color   Color used to display the predicate characters, such as [, ], =, {, }, and so on.
Default Color     Default color for any color not selected.
Option                                    Description
Maximum Schema Recursion Level            Used to set the schema recursion level. The default value is 2. The maximum value is 25.

Schema Builder
Default                                   Use the default jar file shipped with Workbench.
Custom Schema Generator executable file   Specify a custom jar file. Select the custom jar file to be used to generate the XML schema.
Parameters to executable file             Enter the parameters required for the custom jar file, separated by a semicolon (;).

Model Indentation
Indent                                    Save the data model file with indentation, as shown in the text area on the preference page.
Don't Indent                              Save the data model file without indentation, as shown in the text area on the preference page.
Derived Links Feature Preference Page
Use this preference page to set the options for Derived Links, which are inferred in the Attachment Editor.

Option               Description
Derived Link Color   Click the color bar and choose a color for the derived link.
Discard Rdf File     When checked, discards the Rdf file created when any att file is opened in Workbench.
Option   Description
New      Creates a new command.
Delete   Deletes a selected command.
Edit     Edits a selected command.
Copy     Makes a copy of a selected command.
Export   Exports a macro as an XML file. Select the macro(s) you need to export and click Export. Click Browse to give an appropriate location and file name for the XML file. Click OK.
Import   Imports a macro. Click the Import button to import macros from an XML file. Browse to the required location and select the appropriate XML file. Click OK. The macro is added to the list displayed on the Macro Definitions page. Click Apply, then OK in the Preference page to populate these macros in the toolbar. For more details, see the Working with Macros section.
Map Builder Preferences
To set Map Builder preferences, enter options in two screens: the Map Builder screen and the Map Builder Preferences screen.

Option                                   Description
Link Color                               Color used to display link lines when drag and drop is used to map data.
Selected Link Color                      Color used to display the currently selected link line.
Loop control Link Color                  Color used to display loop control rule link lines when drag and drop is used to map data for XSL models.
Right to Left Tree for Target            Yes: target model data model items are right justified when viewing in Map Editor. No: target model data model items are left justified when viewing in Map Editor.
Enable Rule Builder Area on Map Editor   Check this option to show Rule Builder in Map Editor.

The Map Builder Preferences screen looks similar to the one below:
Option   Description

Link Type
Tag to Defining        When using drag and drop, rules are placed on the tag in the source model, and on the defining item in the target model.
Defining to Defining   When using drag and drop, rules are placed on the defining item in both the source and target models.

Variable Type
Array      Arrays (ARRAY->) are used when creating rules using drag and drop.
Variable   Temporary variables (VAR->) are used when creating rules using drag and drop.

Variable Name
Both     Concatenates the source and target DMI names to create the variable name.
Source   Uses the source DMI name to create the variable name.
Target   Uses the target DMI name to create the variable name.

Function Assignment
Automatic Function Assignment   Automatically uses the STRTRIM function on the source-side rules to trim off trailing spaces.
Manual Function Assignment      Enables the Source and Target etched boxes to allow for advanced function use when using drag and drop.

Source
Use DEFAULT NULL on Source   Uses the DEFAULT NULL function when assigning data to the variable on the source side.
Use STRTRIM on Source        Uses the STRTRIM function when assigning data to the variable on the source side.

Target
Use NOT NULL on Target           Uses the NOT NULL function when assigning data to the DMI on the target side.
Use NOT TRUE NULL on Target      Uses the NOT TRUE NULL function when assigning data to the DMI on the target side.
Do not use functions on Target   No functions are used when assigning data to the DMI on the target side.

Loop Control
Automatic   Automatically creates loop control DMIs and rules when mapping from DMIs that are within repeating Groups or Tags.
Manual      Requires the user to create loop control DMIs and rules when mapping from DMIs that are within repeating Groups or Tags.
Prompt with Loop Control warning message                Prompts the user that loop control must be added if Manual Loop Control is selected.
Enable Loop Control when mapping Defining To Defining   Automatically creates loop control DMIs and rules when mapping repeating defining DMIs.
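To illustrate the function assignment options, here is a hypothetical pair of rules of the kind drag and drop could generate with STRTRIM on the source side and NOT NULL on the target side. The DMI and variable names are invented, and the parenthesized function-call syntax is an assumption; the rules Workbench generates may differ. A source-side rule:

[]
VAR->BIG01 = STRTRIM(BIG_01)

And a matching target-side rule:

[]
OUT_01 = NOT NULL(VAR->BIG01)

The empty brackets are a null condition, so each assignment is always performed; STRTRIM trims trailing spaces on the source side and NOT NULL guards the target assignment, as described in the table above.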
Menu Preference Page
Some menu items are used by the underlying Eclipse™ platform, but not by Application Integrator™ Workbench. These menu items are hidden by default.

Option                Description
Hidden Menu Options   Displays the hidden menu items when selected. Enabling any of these menu items has no impact on Application Integrator™ Workbench, since these menus are not needed for its operation.
Server Connection Preferences

Option      Description
Hostname    The machine name or an alias where the AI Control Server is running.
Queue Id    Application Integrator™ OT_QUEUEID for the AI Control Server being connected to.
Base Port   Application Integrator™ Base Port for the AI Control Server being connected to.
Server Polling Frequency (minutes)   The interval, in minutes, at which the system periodically checks whether the AI Control Server is running.
Version Validator Preference
You can validate and check the syntax of maps for different AI versions. Currently, maps for versions 4.0, 4.1, 5.0, and 5.2 can be validated and syntax checked using this feature.

Option                      Description
Target AI Runtime Version   Sets the AI version in which you want the maps to be saved.

Saving the Map
Use the selected version while saving the Map                        Uses the version specified in Target AI Runtime Version when saving the map.
Use the version available in the Model Header while saving the Map   Uses the version specified in the Model Header when saving the map.
Views Preference Page

Option   Description
Use Function Template   Displays the AI function dialog box when an AI function is dragged and dropped into Rule Builder from the Built-ins view. This dialog helps with the syntax of the function. For more details, go to Inserting Functions and Keywords in Rule Builder.
Show Warning For Unsaved Changes In Rules Editor   When checked, displays a message dialog indicating that the rules of a data model item have unapplied changes.
Don't prompt when updating Model Search Order from Remote Site Navigator   When checked, Workbench adds any new path selected in Remote Site Navigator directly to the Model Search Order. When unchecked, Workbench shows a dialog for a new path in Remote Site Navigator; the dialog enables the user to allow, reject, and reorder the addition of the new path to the Model Search Order.
Add path to Model Search Order when file is opened with "Open File..."   When a file outside the Workbench is opened with "Open File...", the new path is added to the Model Search Order if this preference is checked.
Do not allow more than one file to be opened at a time with "Open File..."   The "Open File..." dialog allows only one file to be opened at a time if this preference is checked.
Option                   Description
Enveloping Map File      Defines the initial environment (Map Component File) to be called when running outbound translations.
De-enveloping Map File   Defines the initial environment (Map Component File) to be called when running inbound translations.
Model Search Order       Defines the file search order used by Workbench to resolve file references, similar to the OT_MDLPATH used by the AI Control Server. If your AI Control Server is on a local system and is running before you launch Workbench, and the Model Search Order list is empty, then Workbench reads the OT_MDLPATH variable defined in the file aiserver.bat and populates this list automatically. The search order can be modified, but it should preferably match the OT_MDLPATH specified. Once populated, the list is cached automatically, so if you add any additional path(s) to your OT_MDLPATH, you must manually add the same path(s) on this preference page.
New      Creates a new path in the Model Paths list.
Remove   Removes a path from the Model Paths list.
Up       Moves a path up in the Model Paths list.
Down     Moves a path down in the Model Paths list.
Don't prompt when updating Model Search Order from Remote Site Navigator   When you fetch a file from a remote system using Remote Site Navigator, Workbench attempts to include the remote file's path in the Model Search Order.
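For reference, a hypothetical fragment of aiserver.bat defining OT_MDLPATH; the path values and the semicolon separator are assumptions:

REM Model search path read by Workbench when the Model Search Order list is empty
set OT_MDLPATH=C:\AI52\MyModels;C:\AI52\Models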
XSD Validator Preferences

Option   Description

Validation
Validate all (included and imported) schemas   Validates the main schema, as well as the schemas imported and included, directly or indirectly, by the main schema.
Validate main schema only                      Validates only the main schema. The imported or included schemas are not validated.

Details
Show details for all (included and imported) schemas   Shows the details (element declarations, type definitions, model group definitions) for the main schema, as well as for schemas imported and included, directly or indirectly, by the main schema.
Show details for main schema only                      Shows the details (element declarations, type definitions, model group definitions) for the main schema only.

Parsing Options
Stop on errors    Stops the schema validator from further processing of schemas if an error is encountered.
Tolerate errors   Lets the schema validator continue processing schemas regardless of any errors encountered.

Auto Correction Options
Strict validation   Validates the schemas as they are, without attempting possible corrections.
Auto Correct        The validator considers possible corrections and then proceeds to validate the schemas.

Problem Markers
Show Problem Markers     Enables the validator to attach and show problem markers on schema (xsd) files.
Delete Problem Markers   The validator deletes problem markers related to the schema file.

Validation mode
Foreground   The validator runs in the foreground with a progress dialog.
Background   The validator runs in the background.
Break on first error or warning   Defines the behaviour of the validator on encountering an error of any kind. If enabled, the validator stops processing on encountering the first error and returns.
Opening an Existing Environment
From the Navigator view, double-click the environment (Map Component File) to be opened and modified.

Drop cursors
Drop cursors indicate where it is possible to dock views in the Workbench window. Several different drop cursors may be displayed when rearranging views.
Rearranging views
The position of the Navigator view in the Workbench window can be changed.
1. Click in the title bar of the Navigator view and drag the view
across the Workbench window. Do not release the mouse button
yet.
2. While still dragging the view around on top of the Workbench
window, note that various drop cursors appear. These drop
cursors (see previous section) indicate where the view will dock
in relation to the view or editor area underneath the cursor when
the mouse button is released. Notice also that a rectangular
highlight is drawn that provides additional feedback on where
the view will dock.
3. Dock the view in any position in the Workbench window, and
view the results of this action.
4. Click and drag the view's title bar to re-dock the view in another
position in the Workbench window. Observe the results of this
action.
5. Finally, drag the Navigator view over the Outline view. A stack
cursor is displayed. If the mouse button is released the
Navigator is stacked with the Outline view into a tabbed
notebook.
Tiling editors
Workbench allows for the creation of two or more sets of editors in the editor area. The editor area can also be resized, but views cannot be dragged into the editor area.
8. Drag and dock the editor somewhere else in the editor area,
noting the behavior that results from docking on each kind of
drop cursor. Continue to experiment with docking and resizing
editors and views until Workbench has been arranged to your
satisfaction. The figure below illustrates the layout if one editor
is dragged and dropped below another.
4. Once the cursor is to the right of the Outline tab and the cursor is
a stack cursor, release the mouse button.
Observe the Navigator tab is now to the right of the Outline tab.
Fast views
Fast views are hidden views that can be quickly made visible. They work in the same manner as normal views, except that when hidden they do not take up screen space in the Workbench window.
This section explains how to convert the Navigator view into a fast view.

Creating fast views
These instructions commence by creating a fast view from the Navigator view and then explain how to use the view once it is a fast view.
The shortcut bar now includes a button for the Navigator fast view.

Working with fast views
The Navigator has now been converted into a fast view. This section demonstrates what can now be done with it.
Confirm that the shortcut bar at the bottom left of the window still has the Navigator view and looks like this:
Note: If a file is opened from the Navigator fast view, the fast view automatically hides itself to allow the file to be worked with.
Map Definition Tab
Select the Map Definition tab to edit or view the map component file values.

Option   Description

Source Data
Traditional Model     Type of source model.
XPATH Model           Type of source model.
Style Sheet           Type of source model.
Schema based on DTD   Checked if the schema is based on a DTD.
Source Model          Source data model defined in the map component file.
Source Access         Source access model defined in the map component file.
Source Schema         Source schema file the XPATH or XSL model is based on.
Root Element          Root element of the source schema file chosen.
Source Constraint     Source constraint validation style sheet. It is used when the Source model specified is an XPATH model or Style Sheet. The value must have an .xsl extension.
Clear Source Model    This button is used to remove the Source Model from the Map Component file.

Target Data
Traditional Model     Type of target model.
Style Sheet           Type of target model.
Schema based on DTD   Checked if the schema is based on a DTD.
Target Model          Target data model defined in the map component file.
Target Access         Target access model defined in the map component file.
Target Schema         Target schema file the XSL model is based on.
Root Element          Root element of the target schema file chosen.
Target Constraint     Target constraint validation style sheet. It is used when the Target model specified is an XPATH model or Style Sheet. The value must have an .xsl extension.
Clear Target Model    This button is used to remove the Target Model from the Map Component file.

Environment Variables
Variable Name    Names of environment variables.
Variable Value   Values of the environment variables.
Add              Add an environment variable.
Delete           Delete an environment variable.
Comments         Displays all comments found in the map component file. Comments are denoted by a semicolon (;) at the beginning of the line.
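For illustration, a hypothetical fragment of a map component file showing a comment line and an environment variable; INPUT_FILE is referenced elsewhere in this guide, while the value and the name/value layout shown here are assumptions:

; Inbound invoice environment (this comment line starts with a semicolon)
INPUT_FILE=invoice.dat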
Mapping Tab
Select the Mapping tab to view the source and/or target data model(s).

Option   Description
Source   Displays the source data model in hierarchical format. Allows expanding and collapsing of the structure.
Target   Displays the target data model in hierarchical format. Allows expanding and collapsing of the structure.
Source Properties Tab
Select Source Properties to edit attributes for the source data model items.
Note: Click the + to the left of a DMI to expand it; click the – to the left of a DMI to collapse it.

Column           Description
Name             Name of each data model item within the source model.
Access Type      Access model type assigned to each data model item.
Occurrence Min   Minimum number of times the data for this data model item can occur in the input data.
Occurrence Max   Maximum number of times the data for this data model item can occur in the input data.
Size Min         Minimum size for the data being read in for this data model item.
Size Max         Maximum size for the data being read in for this data model item.
Format           Defines the format for numerics, dates, and times.
Start Offset     A non-editable column that appears only for Application format data models (OTFixed.acc), displaying the starting offset of the DMI.
End Offset       A non-editable column that appears only for Application format data models (OTFixed.acc), displaying the ending offset of the DMI.
Match Value      Available on Tag items only. Matches against the first characters of a record/segment of input data.
Verify ID        Performs a cross-reference lookup to determine if the value in the input data is in the ID list.
File             Available on Group items only. Used to change the input file to be read in. The new input file is only parsed within the Group it is defined on.
Sort             Not available in source data models.

Note: The start offset and end offset are calculated based on the size min and size max of the data model items and the match value of tags. They are shown only when the size min and size max of a DMI are equal. Once a DMI with a non-matching size min and size max is reached, the offset for it and the rest of the data model items is displayed as 0.
Target Properties Tab
Select Target Properties to edit attributes for the target data model items.

Column           Description
Name             Name of each data model item within the target model.
Access Type      Access model type assigned to each data model item.
Occurrence Min   Minimum number of times this DMI can appear in the output data.
Occurrence Max   Maximum number of times this DMI can appear in the output data.
Size Min         Minimum size for the data being written out for this data model item.
Size Max         Maximum size for the data being written out for this data model item.
Format           Defines the format for numerics, dates, and times.
Start Offset     A non-editable column that appears only for Application format data models (OTFixed.acc), displaying the starting offset of the DMI.
End Offset       A non-editable column that appears only for Application format data models (OTFixed.acc), displaying the ending offset of the DMI.
Match Value      Available on Tag items only. First data written out for this record/segment.
Verify ID        Performs a cross-reference lookup to determine if the value to be written out is in the ID list.
File             Available on Group items only. Used to change the output file to be written to. The new output file is only written to within the Group it is defined on.
Sort             Used to sort data to be written out within the Group. The Group item must control the looping.
Input Tab
Select the Input tab to view the input file.
The input file will not be shown if:
1. The attachment file does not define the INPUT_FILE environment variable.
2. The AI Control Server is remote and an FTP site corresponding to the AI Control Server is not present in Remote Site Navigator.
You can edit the input file and then save it. The "Save Input File" button is enabled as soon as you start editing.
You can search for text within the data file by using the Search menu item.
Output Tab
Select the Output tab to run a translation and view the output file.

Option          Description
Output          Provides the ability to run a translation and view the output file created.
Map Component   Initial map component file to be used for this translation, if not inbound or outbound processing.
Trace Level     Determines the amount of data to be written to the session trace log: 0 is minimal, 1023 is full.

Translation Type
Application       Denotes that application data is being processed and not a public standard.
Enveloping        Denotes that outbound data is being processed and that OTEnvelp.att should be used as the initial environment.
De-enveloping     Denotes that inbound data is being processed and that OTRecogn.att should be used as the initial environment.
Keep Input File   Denotes the file the input data will be copied from. This file is not removed during translation.

Environment Variables: user-defined variables passed in to the translation when it is invoked. These environment variables are set only for the translation session and are not saved in the Map file.
Variable Name    Name of the user-defined environment variable. Recommended to be in upper case.
Variable Value   Value to be passed in for the environment variable to be used during translation.
Tool Bar Options
When in Map Editor, new icons are added to the tool bar.

Icons   Description
Re-calculates the offsets in the properties page for traditional models.
Moves the currently highlighted data model item one level down without restructuring the model's data hierarchy.
Moves the currently highlighted data model item one level up without restructuring the model's data hierarchy.
Removes the current data model item (DMI). The item is removed from the model and stored on the clipboard until it is pasted.
Makes a copy of the current data model item. The item is copied and stored on the clipboard until it is pasted.
Pastes a copy of the stored item (DMI). The item on the clipboard is stored until another cut or copy operation is performed.
Duplicates a selected data model item (DMI) at the same hierarchy and level. All attributes of the data model item are duplicated except for the data model item name. The name of the duplicated data model item is changed to a system-assigned unique name, which can be changed.
Moves the currently highlighted data model item one level right to restructure the model's data hierarchy.
Moves the currently highlighted data model item one level left to restructure the model's data hierarchy.
Adds an empty data model item below the currently highlighted item. The newly created DMI has a default name, which can be changed.
Adds an empty data model item above the currently highlighted item. The newly created DMI has a default name, which can be changed.
Navigates to a specified DMI. You can select the available DMI from the "Go to" dialog.
Checks the syntax of the current mapping file in the active Text Editor.
Applies the changes made to the current Data Model file in the active Model Text Editor.
Collapses all the branches for the tree in focus.
Menu Options
When in Map Editor, a new item is added to the Menu.

Tools Menu
Duplicate       Duplicates the highlighted data model item.
Shift Right     Shifts the highlighted data model item to the right one hierarchical level.
Shift Left      Shifts the highlighted data model item to the left one hierarchical level.
Insert Below    Inserts a new data model item below the highlighted data model item.
Insert Above    Inserts a new data model item above the highlighted data model item.
Go To           Navigates to a specified line number in the currently opened file.
Includes        References include files with the current data model.
Refresh Links   Refreshes the mapping area and redraws DND link lines.
Check Syntax    Checks the syntax of the current mapping file in the active Text Editor.
Deploy          Deploys (copies) relevant mapping files to a directory (local or remote). These files can be used during debugging tasks or for moving files into a production functional area.

Link Menu
Delete Link           Deletes the selected link and its associated rules.
Loop Control Mode     Highlights repeating DMIs if the source or target model is XSL. Automatically creates loop control DMIs and rules when mapping repeating defining DMIs.
Map Rule Owner Node   Allows you to select a DMI as an owner node on the source XSL model to hold the map builder rules of the child nodes when drag and drop is performed. The Map Rule Owner Node must be the parent, grandparent, or any hierarchy above the source node in order to drag and drop DMIs from source to target.
Overview of the Model Editor
The Model Editor allows you to modify the data model attributes and structure, and to add rules to the data model.

Opening an Existing Data Model
From the Navigator view, double-click the data model to be opened and modified.

Overview Tab
Select the Overview tab to modify the data model structure, attributes, and rules.

Option              Description
Model Header        Displays the type of model file (Source/Target) and allows selection of the Access Model File to use for the current model.
Model Items         Displays the data model in hierarchical format, with the ability to change the structure of the data model.
Model Item Editor   Modifies attributes for the selected data model item.
Rule Builder        Area to create rules for the selected data model item.

Rule Mode Tabs
Present Rules    Rules to be performed if the data model item is present.
Absent Rules     Rules to be performed if the data model item is absent.
Error Rules      Rules to be performed if the data model item is in error.
View All Rules   Displays all the Present, Absent, and Error rules.
Rule Builder Toolbar

Icons   Description
Inserts an equal sign into the rule. Used for assignments.
Inserts a literal into the rule. The cursor is placed between the double quotes.
Inserts an IF condition into the rule. The cursor is placed between the square brackets.
Inserts an ELSE IF condition into the rule. The cursor is placed between the square brackets.
Inserts an ELSE condition into the rule. The cursor is placed between the square brackets.
Inserts a Null Condition. Rules under a Null Condition are always performed.
Inserts a Conditional Expression. The condition then needs to be created.
Inserts a carriage return in the rules. Aids readability.
Moves the focus area to the next parameter when creating a Conditional Expression or implementing a function.
Checks the syntax of all rules and displays any errors found.
Applies all new rules added to the data model item. Also runs the syntax checker.
Applies all new rules in all data model items. Also runs the syntax checker.
Inserts cross reference rules for outbound translations. See Map Component Files for Enveloping/De-enveloping for more information.
Cuts the highlighted text within Rule Builder.
Properties Tab
Select the Properties tab to modify the data model item attributes. See Source Properties Tab/Target Properties Tab in this section for a description of the columns.

Model Text Tab
Select the Model Text tab to view the data model in raw data format.
Tool Bar Options
When in Model Editor, new icons are added to the tool bar.

Icons   Description
Moves the currently highlighted data model item one level down without restructuring the model's data hierarchy.
Moves the currently highlighted data model item one level up without restructuring the model's data hierarchy.
Removes the current data model item (DMI). The item is then removed from the model and stored on the clipboard until it is pasted.
Makes a copy of the current data model item. The item is copied and stored on the clipboard until it is pasted.
Pastes a copy of the stored item (DMI). The item on the clipboard is stored until another cut or copy operation is performed.
Duplicates a selected data model item (DMI) at the same hierarchy and level. All attributes of the data model item are duplicated except for the data model item name. The name of the duplicated data model item is changed to a system-assigned unique name, which can be changed.
Moves the currently highlighted data model item one level right to restructure the model's data hierarchy.
Moves the currently highlighted data model item one level left to restructure the model's data hierarchy.
Adds an empty data model item below the currently highlighted item. The newly created DMI has a default name, which can be changed.
Adds an empty data model item above the currently highlighted item. The newly created DMI has a default name, which can be changed.
Navigates to a specified DMI. You can select the available DMI from the "Go to" dialog.
Checks the syntax of the current mapping file in the active Text Editor.
Applies the changes made to the current Data Model file in the active Model Text Editor.
Collapses all the branches of the tree.
Other Toolbar Options
There are some other tool bar options available while working on model files.
Open an input file and click the Offset Coloring icon to identify the offsets of the different fields. The following dialog comes up:
Only model files whose access file is OTFixed, and models that have no access file, are listed. If the model file cannot be parsed, or if the model file has no offsets defined, Workbench pops up the following error.
On choosing a correct model file, click OK. Alternate fields are highlighted in the input file based on the offsets.
Note: In the Input tab of the Attachment File Editor, the Offset Coloring button does not bring up the file selection window; it automatically uses the Source Model file of the attachment.
Menu Options
When in Model Editor, a new item is added to the Menu.

Tools Menu
Duplicate      Duplicates a selected data model item (DMI) at the same hierarchy and level. All attributes of the data model item are duplicated except for the data model item name. The name of the duplicated data model item is changed to a system-assigned unique name, which can be changed.
Shift Right    Moves the currently highlighted data model item one level right to restructure the model's data hierarchy.
Shift Left     Moves the currently highlighted data model item one level left to restructure the model's data hierarchy.
Insert Below   Adds an empty data model item below the currently highlighted item. The newly created DMI has a default name, which can be changed.
Insert Above   Adds an empty data model item above the currently highlighted item. The newly created DMI has a default name, which can be changed.
Go To          Navigates to a specified DMI. You can select the available DMI from the "Go to" dialog.

In a Model Text page, you can use this menu to navigate to a specified line, column, or byte. You can also access the Go To option from the Tools menu.
In the Overview page and the Properties page of the model editor, GoTo Column and GoTo Byte are not valid. Clicking them results in Workbench showing an error in the status bar, as below.
Include File Editor
This editor provides the ability to view include files or to modify user-defined include files.
Note: When opening an include file, the Tools menu option is added to the Menu bar, and the Check Syntax icon is added to the tool bar.
2. The next view explains the various steps to be followed to perform the specific task. The first step provides an introduction to the IPM task to be performed.
3. The IPM then guides you through the steps to be followed to perform a specific task. As you go through the various steps, you are prompted to perform certain tasks.
4. By following all the steps and performing the specified tasks, an overall goal is achieved. In this case, a new Data Model is created.
The following table lists the set of buttons provided in the IPM. These buttons help you iterate through the IPM and also perform the required tasks.

Symbol   Title
Click to Begin/Click to Restart
Click to Skip

Example: AI translation:
1. Go to Help > Interactive Process Manager and select AI Translation. Click OK. The following screen comes up. It displays the major tasks to be performed to carry out an AI translation:
• Introduction
• Connecting to the Server
• Creating a New Map Component File
• Mapping and Running a Translation
5. Follow the steps listed in this section and click the Click to Complete button.
Toolbar
The toolbar of the Navigator view contains the following buttons:
Back
This command displays the hierarchy that was displayed
immediately prior to the current display. For example, if you Go
Into a resource, then the Back command in the resulting display
returns the view to the same hierarchy from which you activated
the Go Into command. The hover help for this button tells you
where it will take you. This command is similar to the Back button
in a web browser.
Forward
This command displays the hierarchy that was displayed
immediately after the current display. For example, if you've just
selected the Back command, then selecting the Forward command
in the resulting display returns the view to the same hierarchy from
which you activated the Back command. The hover help for this
button tells you where it will take you. This command is similar to
the Forward button in a web browser.
Up
This command displays the hierarchy of the parent of the current
highest-level resource. The hover help for this button tells you
where it will take you.
Collapse All
This command collapses the tree expansion state of all resources in
the view.
Link with Editor
This command toggles whether the Navigator view selection is
linked to the active editor. When this option is selected, changing
the active editor automatically updates the Navigator selection to
the resource being edited.
Menu
Click the icon at the left end of the view's title bar to open a menu
of items generic to all views. Click the black upside-down triangle
icon to open a menu of items specific to the Navigator view. Right-
click inside the view to open a context menu.
Select Working Set
Filters
This command allows you to select filters to apply to the view so
that you can show or hide various resources as required. File types
selected in the list are not shown in the Navigator. This is what the
file filters dialog looks like:
Icon   Description
Project (open)
Folder
File
Indicates that the folder is present in the Model Search Order (MSO) path

Once a file is selected, right-click it. Another menu is provided:
Menu   Description

New
Project            Create a new project.
Data Model         Create a new data model.
XPATH Data Model   Create a new XPath data model.
XSL Style sheet    Create a new XSL style sheet.
Map Component      Create a new map component file.
Other              Create a new data model, map component file, or folder.

Import                  Imports an existing project file into the current workspace.
Export                  Exports the selected file to a specified directory.
Refresh                 Refreshes the current view to display newly added files in the folder.
Copy to Remote System   Copies the selected file or folder to another remote system.
Run Translation         Displays the run translation dialog box to allow users to run a translation. Only available on map component files.
Properties              Provides system information on the selected file.
Context Menu
Open This command opens the selected resource. If the resource is a file
that is associated with an editor, then Workbench launches the
associated internal, external, or ActiveX editor and opens the file in
that editor.
Open With This command allows you to open an editor other than the default
editor for the selected resource. Specify the editor with which to
open the resource by selecting an editor from the submenu.
Paste This command pastes resources on the clipboard into the selected
project or folder. If a resource is selected the resources on the
clipboard are pasted as siblings of the selected resource.
Delete This command deletes the selected resource from the workspace.
Rename This command allows you to specify a new name for the selected
resource.
Import This command opens the import wizard and allows you to select
resources to import into Workbench.
Export This command opens the export wizard and allows you to export
resources to an external location.
Add Bookmark This command adds a bookmark that is associated with the
selected resource (but not to a specific location within the
resource).
Close Project The close project command is visible when an open project is
selected. This command closes the selected project.
Open Project The open project command is visible when a closed project is
selected. This command opens the selected project.
Add Path To Model Search Order   Adds the selected folder to the Model Search Order.
Team Menu items in the Team submenu are related to version control
management and are determined by the version control
management system that is associated with the project. Eclipse
provides the special menu item Share Project... for projects that are
not under version control management. This command presents a
wizard that allows the user to choose to share the project with any
version control management systems that have been added to
Eclipse. Eclipse ships with support for CVS.
Compare With Commands on the Compare With submenu allow you to do one of
the following types of compares:
Compare two or three selected resources with each other
Compare the selected resource with remote versions (if the
project is associated with a version control management system).
Compare the selected resource with a local history state
After you select the type of compare you want to do, you will either
see a compare editor or a compare dialog. In the compare editor,
you can browse and copy various changes between the compared
resources. In the compare dialog, you can only browse through the
changes.
Replace With Commands on the Replace With submenu allow you to replace the
selected resource with another state from the local history. If the
project is under version control management, there may be
additional items supplied by the version control management
system as well.
Properties This command displays the properties of the selected resource. The
kinds of properties that are displayed depend on what type of
resource is selected. Resource properties may include (but are not
limited to):
Path relative to the project in which it is held
Type of resource
Showing or hiding files in the Navigator view
You can choose to hide system files or generated class files in the Navigator view. (System files are those that have only a file extension but no file name, for example .classpath.)
1. On the toolbar for the Navigator view, click the Menu button.
Trading Partner Navigator
Trading Partner Navigator displays trading partners set up in the Application Integrator™ Profile Database.

Trading Partner Attribute Viewer
The Trading Partner Attribute Viewer displays each of the fields saved in the Profile Database for the selected trading partner.
Tab   Description
Data Model Functions             Displays all data model functions with a description of what they do and the arguments passed to them.
XSL Functions                    Displays all XSL model functions with a description of what they do and the arguments passed to them.
String Functions                 Displays all string functions with a description of what they do and the arguments passed to them.
Data Model Structure Functions   Displays all data model structure functions with a description of what they do and the arguments passed to them.
Database Functions               Displays all database functions with a description of what they do and the arguments passed to them.
SQL Functions                    Displays all SQL functions with a description of what they do and the arguments passed to them.
Default                          Miscellaneous functions that do not fall into any of the other categories.
Keywords                         Displays all keywords with a description of what they do and the arguments passed to them, if applicable.
Operators                        Displays all operators that can be used in data models, with a description of what they do.
Date and Time Functions          Displays all date and time functions with a description of what they do and the arguments passed to them.
Control Server Functions         Displays all control server functions with a description of what they do and the arguments passed to them.
All Functions                    Displays all functions with a description of what they do and the arguments passed to them.
Message Variables
Displays all DMIs, arrays, and variables that are present in the currently opened data models or map component files.

Icon   Description
Sorts the tree elements alphabetically.
Expands/collapses the tree.
Adds a new variable. This inserts a new node below the Model Variables tree node. Double-click the new node to change its name.
Adds a new array. This inserts a new node below the Model Arrays tree node. Double-click the new node to change its name.
Properties   Displays the properties for the currently selected file in Navigator.

1. Right-click in the left-hand pane to get the context menu. Select New > Target Site.
2. Enter the URL of the FTP site, username, password, timeout, and other details in the dialog that appears, and click the Finish button.
3. The new remote site is created and displayed in the Remote Site Navigator view. You can click any folder to view the files.
4. Right-click to view the context menu.
The context menu on the panes allows the user to perform the following:
Create new FTP connections (sites)
Discard existing connections
Edit the properties (username/password) of an FTP connection
Copy a file/directory to another remote (FTP) location
Note: When you open, modify, and save a file in the Remote Site Navigator, the timestamp changes. The file takes the timestamp of the remote machine from which it is fetched.
This section discusses how to create new map component files and modify existing map component files, including the recommended naming conventions for these files.

Defining a Map Component File
A key step in mapping data and preparing it for translation using Application Integrator™ is to create an environment by defining and saving a map component file. As described earlier, an environment consists of components that control how the data is to be translated, such as the input/output files and the models/style sheets to be used. In a Workbench application, an environment is referred to as a "map component file," and the environment definition is "attached" to the translator.

Recommended Naming Convention
When naming map component files, keep the following considerations in mind:
Use ".att" for the suffix.
Do not use the prefix "OT," since it will conflict with names already assigned in the Application Integrator™ application. The prefix "OT" is a reserved prefix for Application Integrator™ application files. Using it can compromise the product's performance.
Use upper- and lowercase letters and the underscore "_" only. Do not use spaces.
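For example, a name such as Invoice_850_Inbound.att (an invented name) follows all of these conventions.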
Defining a New Map Component File
The Map Component Editor dialog box is used to define and modify map component files.
The map component file must have at least the source or target model defined. It is recommended that you save your map component files and data models to the models directory specified in Trade Guide System Configuration, or to the "MyModels" directory specified at AI Control Server installation time.
3. Browse and select a value for the Parent Folder. This is where
the map component file will be stored.
4. Enter a name for the new map component file and click Next>.
4. Choose Save from the File menu (or the Save icon from the tool bar) to save your changes.
To save a Map Component File under a new name
1. Activate the Map Editor window of the data model to be saved with a new name. (Click the title bar of the window to activate it.)
2. From the File menu, choose Save As.
3. If the file is a Map Component file with two model files, the Save As dialog is as follows.
4. If you want to change the names of the files, either browse and select the name and location, or type the new file name in the box provided. Then save the files.
5. If the files already exist, a dialog appears saying that the files already exist and asks if you want to overwrite them. Select the option you want.
6. If you open a single model file in Map Editor and wish to save it as an attachment file, check the Save as .att check box provided at the far left corner of the dialog, and then provide a valid value for the Map Component File.
To save all open files appearing in the work area (Save All)
Use this procedure to save all files that are open in the work area using their current filenames. If you want to save any of the files using a different filename, use the Save As procedure.
• From the File menu, choose Save All.
Working with Data Models

Defining a New Data Model
The following are the steps to follow to create a new data model.
When creating a new map component file, the data models are automatically created if they were not already present.
2. Browse and select a value for the Parent Folder. This will be the location where the data model will be stored.
3. Enter a file name for the new data model.
4. Select the mode for the data model. This will be either Source for parsing input data or Target for writing out the output data.
5. Select the type as EDI.
3. Browse and select a value for the parent folder. This will be the location where the data model will be stored.
4. Enter a file name for the new data model.
5. Select the mode for the data model. This is either Source for parsing input data or Target for writing out the output data.
6. Select the From template radio button under the Type label.
7. Click Next.
Working with XML based Data Models
Refer to Generating Data Models from Schemas for details.

Converting SEF Format to a Data Model
Follow these steps to convert a SEF format file to a Data Model.
2. In the Import dialog, select the Sef to Mdl option and click Next.
3. In the Sef To Mdl Import Wizard, specify the input file to use (.sef) and the output directory in which the output files are to be generated. Choose the options for the model type:
Model Direction: Source, Target
Converting COBOL Copy Book Format to a Data Model
Follow these steps to convert a COBOL Copy Book file to a Data Model.
1. In Workbench, click File > Import. The following screen is displayed.
2. In the Import dialog, select the Copy Book to Mdl option and click Next.
3. In the Copy Book To Mdl Import Wizard, specify the input file to use (.lib) and the output Mdl file to be generated. Choose the options for the model type:
Defining a Data Model Item
The following are the steps to open an existing data model or standard data model. Standard data models are provided with EDI Plug-Ins. These models provide the structure for EDI documents and can be used as a starting point for mapping. You simply need to create the other model for the application data and add rules.
Note: The default hierarchy level for a new item is the same as the currently selected data model item.
Note: For this section, the Model Editor will be the only editor open.
1. In the Model Editor window (in the Overview page), select the
data model item above or below which you would like to add a
new data model item.
2. To append (add below) a data model item, do the following:
Changing Data Model Item Names
Each data model item you add has a default name of NewDMI_<date>_<time>.
To change the name of a data model item:
1. In the DMI Editor area, highlight the name and type over the default or existing name with a new name.
2. To accept the new name, click outside the data model item name box.
Copy and Paste
To copy and paste a data model item:
1. In the Overview of Data Model File area, highlight the name.
2. The name of the new DMI will be the same as the original with _0001 appended to the end.

Deleting a Data Model Item
To delete a data model item:
1. In the Overview of Data Model area, highlight the name and select the Cut icon.
Assigning a Data Model Item Type
For each data model item you add to your model, you must assign an item type. The options you see in the Item Type selection list are based on the access model associated with your model.
Hint: If you are unsure of the exact definitions of the item types, you can view the access model associated with your model. To do this, double-click the access model in the Navigator view.

Data Model Item Structures
There are four major data model item structures: group, tag, defining, and container items. One or more item type names may be associated with each of these structures, based on your access model. All data model items default to the item type Group. Once you define the item type, the leftmost icon in the Layout Editor (referred to as the access icon) changes to reflect the major structural type of the item:
Group
Tag
Defining
Container
Hint: In the Model Editor work area, there are two tabs where attributes of a data model item can be modified. On the Overview tab, you work with one data model item at a time, by selecting it in the Overview of Data Model view. On the Properties tab, you can modify all data model items.
Occ Min/Max
Setting Minimum and Maximum Occurrence
The minimum and maximum occurrence values control the number of times a data model item must, and can, be present in the data stream. The minimum and maximum occurrence values of a new data model item are user-defined. The default is 0. The minimum occurrence value must be less than or equal to the maximum occurrence value. A minimum occurrence of 0 indicates that the data model item is optional. The maximum value can be set with the asterisk (*) wild card to specify a variable amount.
2. For each box, type a numeric value that specifies the minimum and maximum occurrence.
3. To accept the values entered, click outside the box or press Tab to move to the next option.
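In a model file, the occurrence range appears after an item's closing brace, using the notation shown in the group examples earlier in this guide. A minimal sketch (the group name is invented):

Group1 {
}*0 .. *

The minimum of 0 makes Group1 optional, and the asterisk wild card allows a variable number of repeats.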
Size Min/Max
Setting Minimum and Maximum Size
The minimum and maximum size values control the data model item's field size in the data stream. The minimum and maximum size values of a new data model item are user-defined. The minimum size value must be less than or equal to the maximum size value. The maximum size value cannot exceed 4092.
Note: Size is not available for numeric Date or Time fields. You must specify the exact size through correct masking in the Format box. See the next section for instructions on formatting.
2. For each box, type a numeric value that specifies the minimum
and maximum size allowable for data that is mapped to this
item.
3. To accept the values entered, click outside the box or press Tab
to move to the next option.
Format
Defining the Data Model Item's Format
The data model item Format box is only available if the data model item is defined as a date, time, or numeric item type.
To add a format:
1. Select the data model item to be modified and select the Format box.
2. Type the format for the date, time, or numeric field using the numeric and sign masking characters described in this section. For example, you might type "MM/DD/YYYY" for a date item.
Mask   Example
N     Non-space-taking sign. Includes a negative sign for a negative value. No character is used to indicate a positive value.
      Examples:
      -0.12 "NRRRRR" "-.12"
      +0.12 "NRRRRR" ".12"
R     Floating number with an explicit decimal when required.
      Example:
      0.12 "RRRRR" ".12"
r     Used with "R" to indicate decimal precision.
      Example:
      0.12 "RRRrrrr" ".1200"
0     Used with "R" to specify that a whole zero digit is required for a decimal value.
      Example:
      0.12 "0RRRRR" "0.12"
:n    Minimum size, where "n" is from 1 to 9.
      Example:
      0.12 "0RRRRR:5" "000.12"
:,    Decimal notation defined in the format.
      Example:
      0.12 "RRRRR:," ",12"
:rn   Maximum decimal size, where "n" is from 1 to 5.
      Example:
      0.12 "RRRRR:r3" ".120"
9     Zero-fill whole leading or decimal trailing zero digits.
      Examples:
      123 "99999" "00123"
      1.1 "99.99" "01.10"
Z     Space-fill whole leading or decimal trailing zero digits.
      Examples:
      123 "ZZZZZ" "  123"
      1.1 "ZZ.ZZ" " 1.1 "
F     Suppress whole leading or decimal trailing zero digits (variable length).
      Examples:
      000123 "FFFFF" "123"
      1.100 "FF.FF" "1.1"
$     Monetary symbol, treated like the "F" mask character, but inserts the dollar sign at the beginning of the string (variable length).
      Examples:
      134567 "$ZZZ,Z99.99" "$ 134,567.00"
      134567 "$FFF,F99.99" "$134,567.00"
      1.25 "$$$,$$$.99" "$1.25"
Sign (Masking) Character   Explanation and Examples
N   Displays a negative sign for a negative value. No character is used to indicate a positive value.
    Negative: -123 "99999N" "00123-"
    Positive: 123 "99999N" "00123"
- (the hyphen character)   Displays a negative sign for a negative value. Displays a space for a positive value.
    Negative: -123 "99999-" "00123-"
    Positive: 123 "99999-" "00123 "
None   No character is used to indicate a positive or negative value.
    Negative: -123 "99999" "00123"
    Positive: 123 "99999" "00123"
+ (the plus sign character)   Displays a negative sign for a negative value. Displays a plus sign for a positive value.
    Negative: -123 "99999+" "00123-"
    Positive: 123 "99999+" "00123+"
_ (the underscore character)   Displays a negative sign for a negative value. Displays a zero (0) for a positive value. The zero is dropped when there are only whole digits and is right justified.
    Negative: -123 "99999_" "00123-"
    Positive: 123 "_99999" "000123"
              123 "999.99_" "001.230"
              123 "99999_" "000123" (right justified)
A (must be placed in the rightmost position)   The ASCII overpunch table is used to indicate a negative or positive value.
    Negative: -123 "99999A" "00012s"
    Positive: 123 "99999A" "000123"
E (must be placed in the rightmost position)   The EBCDIC table is used to indicate a negative or positive value.
    Negative: -123 "9999E" "0012L"
    Positive: 123 "9999E" "0012C"
Platform              MSB (Binary/Packed)   LSB (Binary/Packed)
Intel/NT                                    b / p
Intel/Linux‡                                b / p
HP PA-RISC, Itanium   B / P
Sun SuperSparc        B / P
IBM                   B / P
IBM PowerPC‡          B / P
SGI MIPS              B / P
‡ As of the date of publication, Application Integrator™ is not available on these platforms.
For example, to format a field for mainframe data that will contain
the packed values of +123, -123, and 123, you would use the format
‘PP’. The translator would read and store the values as follows:
Mask   Usage
:L   Left justify.
:R   Right justify.
     Examples:
     12 "ZZZZZ.ZZ" " 12 "
     12 "ZZZZZ.ZZ:L" "12 "
     12 "ZZZZZ.ZZ:R" " 12"
, or . (triads)   Can be used with "9", "F", "Z", and "$", but not with "R", as the thousands position placement character.
@ (Escape)   Escapes literal characters defined within the format.
     Example:
     "@For: $ZZ,ZZZ" escapes the "F" literal.
Mask   Usage
M   Date location for month; requires two Ms.
    Example:
    19940902 "MM/DD/YY" "09/02/94"
D   Date location for day of month; requires two Ds.
    Example:
    19940902 "DD/MM/YYYY" "02/09/1994"
Y   Date location for year; requires one, two, or four Ys.
    Example:
    19940902 "YMMDD" "40902"
m   Replaces the leading month digit (if zero) with a space.
    Example:
    19940902 "mM/DD/YY" " 9/02/94"
d   Replaces the leading day digit (if zero) with a space.
    Example:
    19940902 "dD/MM/YY" " 2/09/94"
0   Defines a date of all zeros to be constructed (#DATE_NA).
y   Date location for a variable-length year; must be in the form "yyYY".
<space>   A space " " as a leading character in a mask defines a date of all spaces to be parsed or constructed (#DATE_NA).
Mask   Usage
H (Required)   Time location for mandatory hours; requires two Hs.
    Example:
    120959 "HH:MM:SS" "12:09:59"
M (Required)   Time location for mandatory minutes; requires two Ms.
    Example:
    120959 "HH:MM:SS" "12:09:59"
S   Time location for mandatory seconds; requires two Ss.
    Example:
    120959 "HH:MM:SS" "12:09:59"
s (Source only)   Time location for optional seconds; requires two s's.
    Examples:
    1209 "HH:MM:ss" "12090000"
    120959 "HH:MM:ss" "12095900"
D   Time location for mandatory decimal seconds; requires two Ds.
    Example:
    12095900 "HH:MM:SS:DD" "12095900"
d (Source only)   Time location for optional decimal seconds; requires two ds.
    Examples:
    120959 "HH:MM:SS:dd" "12095900"
    1209591 "HH:MM:SS:dd" "12095910"
    12095912 "HH:MM:SS:dd" "12095912"
<space>   A space " " as a leading character in a mask defines a time of all spaces to be parsed or constructed (#TIME_NA). The value parsed and passed back to the source data model will be spaces, not zeros.
Target Processing
H, M, S, and D are the target formatting characters. The source masking characters ‘s’ and ‘d’ are taken as literals on the target side and output as such, for example, “12:14:ss:dd”.
The value received is first converted to an 8-digit number by adding trailing zeros and is then output based on the format definition. If the value is a single digit (e.g., “2”), a leading zero is inserted before the trailing zeros are added (e.g., “02”).
A value of more than 8 digits generates error code 146.
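For example, applying these rules, a received value of 1209 is padded to 12090000 and, with the format “HH:MM:SS:DD”, is output as “12:09:00:00”; a received single digit 2 first becomes 02, is padded to 02000000, and is output as “02:00:00:00”.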
Note: Refer to the Understanding Environments (Map Component Files) section for more details on environments.
2. Click the ellipsis button that appears in the box to open a dialog box that allows you to select the sort sequence of the defining items within the selected group. This Sort dialog box displays two areas, labeled “List” and “Sort.”
3. From the List box, select each defining item by which to sort
the data model items in the group. To place the item in the Sort
box, choose the >> button. To remove an item from the Sort,
select it and choose the << button, returning it to the List box.
The first item you select is the primary sort, the second item
becomes the secondary sort, and so forth.
4. Choose the OK button to save your sort order for the group or
choose the Cancel button to return to the Layout Editor
window without specifying a sort order.
Once you return to the Layout Editor and select another area of
the window, the sort order appears in the Sort box.
Parent Item 1
Child Item 1
Child Item 2
Child Item 3
Parent Item 2
The hierarchy also determines processing flow, child to sibling and
then back to parent.
Parent (3)
Child (1)
Sibling (2)
Refer to the Understanding Environments (Map Component Files)
section for a discussion of processing flow.
Including Files in Data Models
The Include… option allows you to attach Include files to your data models. Include files contain rules that you can reference from your data model, so you can use them once or multiple times. The Include file’s extension is “.inc”. The rules are in the form of declare statements.
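As a sketch of this declare-statement arrangement (mirroring the Example.inc file and Ex1 declaration used in the syntax-checking examples later in this section), a data model attaches and calls an Include file like this:

DECLARATIONS {
INCLUDE "Example.inc"
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM("Ex1", &DMI_A)   ; calls the Ex1 rules declared in Example.inc
}*1 .. 1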
To access the Include… option
1. From the main menu, choose Tools. The Tools drop-down menu appears.
2. Choose Includes. The Include Files dialog box appears.
The Include dialog box displays Available Files on the left side
and Included Files on the right. The Available Files are those
Include files that are available to this data model. The data
model cannot access the Available Files until they are linked to
the data model. To do this move the filename from the
Available Files list into the Included Files list, then apply the
change and save the data model. The following table describes
the items found on the Include dialog box.
Item Description
Available Files This list box displays the filenames of the
Include files available to this data model.
Included Files This list box displays the filenames of the
Include files that will be or are linked to
the data model.
<< Choosing this button moves the filename
from the Included Files list to the Available
Files list.
>> Choosing this button moves the filename
from the Available Files list to the Included
Files list.
OK Saves the changes.
Cancel Exits the Include dialog box.
To link an Include file to a data model
1. From the Model Editor view, choose Tools. The Tools drop-down menu appears.
2. Choose Includes. The Includes dialog box appears.
3. In the Available Files list box, highlight the filename of the
Include file to be linked to the data model.
4. Choose the >> button. The filename moves from the Available
Files list box to the Included Files list box.
5. To complete the entry, choose the OK button.
To unlink an Include file from a data model
1. From the Model Editor view, choose Tools. The Tools drop-down menu appears.
2. Choose Includes. The Includes dialog box appears.
3. In the Included Files list box, highlight the filename of the
Include file to be de-linked from the data model.
4. Choose the << button. The filename moves from the Included
Files list box to the Available Files list box.
5. To complete the entry, choose the OK button.
To view an Include file
1. From the Navigator view, double-click the Include file to be viewed.
2. To locate a specific item in the file, choose Find from the Edit
menu.
Saving a Data Model
It is good modeling practice to save your model frequently during development. It is recommended that you save your data models and map component files to the models sub-directory.

To save a data model
1. Activate the Model Editor window of the data model to be saved. (Click the title bar of the window to activate it.)
2. Save the data model in one of the following ways:
Menu – From the File menu, choose Save.
To save a data model under a new name
1. Activate the Model Editor window of the data model to be saved with a new name. (Click the title bar of the window to activate it.)
2. From the File menu, choose Save As.
3. In the Save As dialog box that appears, type the new name in the
box provided.
4. Choose the Open button to save to the new name and close the
dialog box.
To save all open files appearing in the work area (Save All)
Use this procedure to save all files that are open in the work area using their current filenames. If you want to save any of the files using a different filename, use the Save As procedure.
• From the File menu, choose Save All.
To completely exit from the Model Editor
1. Activate the Model Editor window from which you want to exit. (Click the title bar of the window to activate it.)
2. From the File menu, choose Close.
Overview of Rules Entry
Rules allow for the movement of data from the source to the target data model. Rules can be placed on any type of data model item in the data model (group, tag, container, or defining items) to describe how data is referenced, assigned, and/or manipulated.
In the source data model (input side), rules are normally placed on the parent item (tag) to ensure the entire tag has been parsed in and validated before any rules are executed and data mapping occurs. In the target data model (output side), rules are placed on the defining items in order to specify the variables from which values are to be mapped. (These variables have been assigned via rules in the source data model.)
Modes for Processing Rules
There are three modes for processing rules, available for all data model items within a data model. They are performed in the following sequence:
Mode Description
PRESENT Rules are performed when entering rules
processing with a status of 0
(no errors).
ABSENT Rules are performed when entering rules
processing if one of the following statuses is found:
138-data model item not found
139-data model item no value found
140-no instance
171-no children found
These rules will also be performed when leaving
PRESENT mode with the same statuses.
ERROR Rules are performed when entering rules
processing with any other status than the following:
0-okay
138-data model item not found
139-data model item no value found
140-no instance
171-no children found
These rules will also be performed when leaving
ABSENT mode processing with a non-zero status.
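The following minimal sketch shows all three modes on one item, using the rule syntax that appears in the syntax-checking examples later in this section (the item and variable names are illustrative):

DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI          ; PRESENT mode: status 0, the item parsed cleanly
:ABSENT
[]
VAR->Error = ERRCODE()  ; ABSENT mode: statuses 138, 139, 140, or 171
:ERROR
[]
VAR->Error = ERRCODE()  ; ERROR mode: any other non-zero status
}*1 .. 1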
Types of Rule Conditions
Each rule consists of a condition with one or more actions. There are five types of conditions: Null, Conditional Expression, IF, ELIF, and ELSE.
A Null condition is always true and the actions will always be
performed. It is also referred to as No Condition.
With a Conditional Expression, IF, or ELIF, the condition must be true before the actions are performed. Any data model item can have one or more conditions, and each condition can have one or more actions.
With ELSE, the actions are performed if the preceding IF condition is not true.
Variables
Variables are the links between the source and target data model items. There are two types of variables supported by Application Integrator™, as noted in the following table:
Variable Description
Variable This type of variable is a single value, also referred to
as a temporary variable. If more than one
assignment is made to the same variable name, the
last assigned value is the value that will be
referenced. A variable is useful for referencing the
same value multiple times, as a counter, or in a
concatenation.
Array This type of variable is a list of values. Manual
controls are recommended with this variable
whenever multiple levels in the data model are
mapped. These controls are used to ensure that the
proper data stays together, such as: detail records
with the proper header record or sub-detail records
with the proper detail records. There is a set index
and a reference index associated with the list of
values. The set index points to the last value placed
on the list and the reference index points to the next
value to be referenced from the list. The reference
index can be reset to the top of the list by using the
data model keyword RESET_VAL.
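As a hedged sketch (the item and variable names are illustrative), source rules might maintain both kinds of variable:

Detail { AlphaNumericFld @1 .. 10 none
[]
VAR->Last = Detail    ; temporary variable: each assignment overwrites the previous value
ARRAY->All = Detail   ; array variable: each assignment places another value on the list
}*1 .. 99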
Two Methods for Creating Rules
There are two different methods for creating rules to map your data:
RuleBuilder
MapBuilder
RuleBuilder allows you to create customized mapping rules. Using
RuleBuilder, you have access to the full functionality of the
Workbench rules system. Depending on the expertise of the
developer, rule definition can be done either in a free-format text
editor or through prompting via the RuleBuilder interface. When
using RuleBuilder, the order that the rules appear in is the order in
which they will be executed during a translation session. Child
data model items are acted on before parent data model items,
hence the re-ordering of the rules to match the order of execution.
RuleBuilder, along with the Built-ins view, provides a series of
tabbed pages or “tabs” which organize the components of rules
(conditions, data model items, functions, variables, and so on) into
categories. Using the mouse or keyboard shortcuts, you can
quickly build the data model logic. The RuleBuilder interface is
described in the “Using RuleBuilder” section.
MapBuilder is an automated way of applying rules on data model
items. MapBuilder uses a drag and drop feature to map from
source to target data model. The rules are placed on the defining
items only and are a NULL condition (that is, the actions will
always be performed). In the source data model, MapBuilder
creates a rule that assigns a data model item’s value to a variable.
In the target data model, MapBuilder creates a rule that references
the variable for its value and assigns it to the data model item.
MapBuilder is an efficient way to map from source to target data
models when the input and output stream are the same, or
extremely similar, in structure. Refer to the “Using MapBuilder”
section for details.
Overview
MapBuilder allows you to drag and drop rules between data models using predefined settings for Variable Type, Variable Name, Link Type, Select Data Assignment Type, and Prompt with Loop Control Warning Message. The following table shows the predefined settings that are used when running MapBuilder.
Option                       Setting
Variable Type                Array
Variable Name                Both
Link Type                    Tag-To-Defining, Defining-To-Defining
Select Data Assignment Type  Use DEFAULT_NULL() on Source EDI
                             Use STRTRIM() on Source non-EDI
                             Use NOT_NULL() on Target EDI
Accessing the MapBuilder Function

To enable MapBuilder
1. Open the environment (Map Component File) that contains the two models you wish to map.
Note: First, map Defining data model items, and then perform
loop control procedures. The loop control rules are inserted
at the beginning of PRESENT/ABSENT mode. These rules
must be executed before performing a data assignment, to
maintain the integrity of all mappings.
Drag and Drop
In most cases, rules are placed in PRESENT mode as a null condition (that is, actions are always performed). In the source data model, MapBuilder creates a rule that assigns a data model item’s value to a variable. In the target data model, MapBuilder creates a rule that references the variable for its value and assigns it to the data model item. Shown here are some hints to help your modeling session:
You should map Defining items first, and then continue
with Group, Tag, and Container items. This is because the
loop control feature places rules at the beginning of
PRESENT and ABSENT mode and the loop control rules
must be processed before any data assignments are made.
You can view the rules created by MapBuilder by opening
the models in Model Editor. This displays the data model
rules you mapped so you can identify the MapBuilder and
loop control rules.
Note: To use the drag-and-drop feature, click the data model item to highlight it, hold the mouse button down while you move the pointer to the new location, and then release the mouse button.
Note: You can also drag and drop from the target to the
source defining items, however, MapBuilder always creates
the rules from source to target, maintaining the name of the
variable as “<source defining item name>_<target defining
item name>.”
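For illustration only, the generated rules follow this general shape; PONum and OrderNum are hypothetical defining items, the Array variable type is the MapBuilder default shown above, and the exact text MapBuilder emits may differ:

; Source data model: assign the item's value to the variable
PONum { AlphaNumericFld @1 .. 10 none
[]
ARRAY->PONum_OrderNum = PONum
}*1 .. 1

; Target data model: reference the variable and assign it to the item
OrderNum { AlphaNumericFld @1 .. 10 none
[]
OrderNum = ARRAY->PONum_OrderNum
}*1 .. 1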
AI Derived Links Feature
If a map is developed outside Workbench, you will not be able to see the mapping links in Workbench, even if there are mapping relations between fields. The AI Derived Links feature allows you to visually see the links for those maps that are developed outside of Workbench.
2. Click the Derive Links icon on the tool bar. The following dialog box appears:
The Derive Link feature is activated. It goes through all the rules,
derives the mapping relationships and draws the links.
Mapping Details
To see the mapping details, right click the mapping link and click Mapping Details, as shown in the figure below.
Loop Control
The loop control feature provides the code to create processing loops when one of the data model items is a Group, Tag, or Container. Loop control ensures that detail records are kept together with the proper header record. Also, sub-detail records are kept together with the appropriate detail records.
Loop control automates the process of mapping complex data
structures that repeat. Loop control automatically adds PRESENT
mode rules, ABSENT mode rules, or group data model items to
both the source and target models. These rules or Group items
contain Array variable assignments. Control of the array variable
automatically occurs during the MapBuilder loop control process.
Normally you would not apply loop control from Defining to
Defining. However, in the rare case when this is necessary, first
enable the “Enable Loop Control when mapping Defining to
Defining” option on the MapBuilder Preferences dialog box.
Here are some points to remember when applying loop control:
Loop control is needed on items where the maximum
occurrence is greater than the minimum.
When indicating the target occurrences for loop control, be
sure the maximum is 1 greater than the maximum intended.
The last loop goes into the loop control rules to break out of
the looping process. Loop control automatically checks for
and corrects these situations.
Troubleshooting
There are several warning or error messages that can appear when mapping using loop control.
Illegal map messages appear in a dialog box during the mapping
session. When a message such as this appears, the rules are not
updated. For example, if you try to map a source item to a source
item the following message appears in the status bar:
You cannot map or apply loop control to the topmost Group item
because it is a parent to all other Group items and their children. If
the source item does not have a parent, the following error message
appears.
If you attempt to perform loop control on the same items more than
once, the following error message appears.
Manual Loop Control
The Manual Loop Control dialog box appears when MapBuilder finds items that do not have loop control applied and the “Prompt with Loop Control warning message” check box is selected. When you drag and drop an item onto another, MapBuilder checks all the items appearing above and, if it finds an item whose maximum is greater than its minimum and that is not the topmost item, it displays the Manual Loop Control dialog box.
You can drag and drop loop control on any type of item: Group,
Tag, Container, or Defining. However, you must have the “Enable
Loop Control when mapping Defining to Defining” check box
selected to enable loop control on Defining items.
In this example, the two looping items are caught because their maximum occurrences are greater than 1 and they are not the topmost item. Remember, the system checks for items needing loop control above the items on which mapping occurred.
Accessing RuleBuilder
To access RuleBuilder, open the data model that needs rules added or modified in Model Editor.
Adding Rules
Once you open the Model Editor, you are ready to add or modify the rules of the data model.
The same methods are used to insert PRESENT, ABSENT, or
ERROR mode rules.
Inserting a Conditional Expression
1. Highlight the data model item that will have rules added or modified. Be sure to select the correct mode tab to which you want to add rules (Present, Absent, or Error).
2. To insert a conditional expression for this data model item, use one of the following methods:
Toolbar Icon – Click the Condition icon.
Keyboard – Place the cursor in the RuleBuilder work area and press [=] (bracket, equal sign, bracket).
3. Insert the appropriate statements for your rule by either typing them directly into the workspace or following the procedure for inserting a RuleBuilder tab option.
4. Apply your changes to the RuleBuilder workspace. Changes to the RuleBuilder workspace are not complete until they are applied, using the following method:
Inserting Literals
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
2. Use one of the following methods to insert a literal:
Inserting an Assignment
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
Inserting Comments
Comments can be inserted into a data model to describe the process being modeled, to identify modifications to models, or to explain rules. Comments can be placed on individual lines or immediately following a rule.
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
comment.
2. To insert a comment in its own line,
a. Place the cursor in the first position of an empty line.
b. Type a semicolon character (;). This indicates to RuleBuilder
that a comment follows and that all text appearing after the
semicolon should be ignored.
c. Type the comment immediately after the semicolon. You
can enter any character into the comment except the
following special characters: {, }, @, *, and |.
d. At the end of the comment, use the Enter key to type a
Return. This indicates to RuleBuilder that the comment is
ended.
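For example (the rule itself is illustrative):

; Capture the current value for later concatenation
VAR->Tmp = DMI ; comment immediately following a rule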
Inserting Data Model Items
Data model items are often used in rules to assign a value to a variable or assign a variable to a data model item.
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
2. To insert a DMI into a rule, drag the DMI name from the
Message Variables view, to the RuleBuilder workspace.
You can also right-click in the RuleBuilder workspace and choose Insert here -> DMI, then select the appropriate DMI.
To insert an available DMI, click the icon. It lists the DMIs that are reachable (that is, in scope).
Select the DMI and click OK. The DMI is inserted into the RuleBuilder area.
Inserting Operators
Conditional rules use operators for testing one value against another.
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
Inserting Functions and Keywords
The Built-ins tabs allow you to easily add functions and keywords to your set of rules.
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
To insert a function or keyword for this data model
item, drag the function or keyword from the appropriate
tab to the RuleBuilder workspace.
Check the Literal check box if the entered value is a literal and has to be placed within double quotes in the RuleBuilder. Click OK.
If you still want to enter the argument values in RuleBuilder, click Use RuleBuilder. Follow the steps listed below to continue.
Inserting Variables
Inserting Arrays
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
2. To insert an array variable (ARRAY) into a rule, drag the
variable name from the Message Variables view, to the
RuleBuilder workspace.
You can also right-click in the RuleBuilder workspace and choose Insert here -> Array, then select the appropriate array.
Inserting Declarations
Declarations are PERFORM statements that contain rules that can be called from within a data model.
1. Make sure the insertion point is placed in the RuleBuilder
workspace under the desired mode for which you are adding a
rule.
2. To insert a declaration into a rule, drag the declaration from the
Performs view into the RuleBuilder area.
Inserting IF, ELIF, ELSE
Along with null conditions and conditional expressions, Application Integrator™ offers the IF, ELIF, and ELSE rules.
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
To insert an IF, ELIF, or ELSE for this data model item,
select the appropriate IF , ELIF , or ELSE icon.
2. Insert the appropriate statements for the rule by either typing
them directly into the RuleBuilder workspace or by using the
tabs, functions, and so on as explained earlier in this section.
3. Apply your changes to the RuleBuilder workspace. Changes to
the RuleBuilder workspace are not complete until they are
applied, using the following method:
Inserting a Carriage Return
To make the rules easier to read in the RuleBuilder workspace, use a carriage return after each rule.
1. Highlight the data model item that will have rules added or
modified. Be sure to select the correct mode tab to which you
want to add rules (Present, Absent, or Error).
2. Insert the appropriate statements for the rule by either typing
them directly into the RuleBuilder workspace or by using the
tabs, functions, and so on as explained earlier in this section.
Cutting, Copying, and Pasting Rules
Cut, Copy, and Paste Clipboard functions can be performed on rules on individual data model items for any of the modes: Present, Absent, or Error. Cut or Copy assigns the selected information to the Clipboard. Only one mode at a time can be copied or cut and placed on the Clipboard.
Paste takes the information from the Clipboard to the location you
specify in the data model rules.
To cut text from the RuleBuilder workspace
1. Highlight the text to cut in the RuleBuilder workspace.
2. Use the following method to cut the text:
To copy text from the RuleBuilder workspace
1. Highlight the text to copy in the RuleBuilder workspace.
2. Use the following method to copy the text:
To paste text from the Clipboard into the RuleBuilder workspace
1. Move the insertion pointer to the place to paste in the RuleBuilder workspace.
Toolbar Icon-Click the Paste icon.
Right click and choose Paste
Until you make another copy or cut, this text remains on the
Clipboard, allowing you to paste several copies of the current
text.
To view keyboard shortcuts in the RuleBuilder workspace
To invoke the list of keyboard shortcuts:
1. Right click and choose Keyboard Shortcuts (Ctrl+/) in the RuleBuilder.
-Or-
2. Press Ctrl+/. The list displays the operations that are supported by the keyboard shortcut keys.
Finding the Next Parameter
The system makes it easy for you to enter the parameters to functions, conditions, and keywords by prompting you for the next required parameter. Individual parameters of a parameter list can be selected by repeatedly choosing Find Next Parameter.
To find the next parameter
1. Place the cursor at the position after which you want the system to begin the parameter search.
2. Use one of the following methods to issue the command:
3. The next parameter is selected for you to enter the proper value.
Continue clicking on Next Parameter until you finish entering
values for all the parameters.
Syntax Checking of Rules
Workbench provides a utility for checking the syntax of the rules during rule entry.

To check the syntax
1. Use the following method to call the rule-checking utility:
Syntax Error Checking
Syntax checking catches the first syntax error on each data model item. A second or subsequent error will not be listed in the Check Syntax dialog box until the first is corrected.
The following types of errors are checked in the rules during
syntax checking or when applying the rules (using the Apply
command):
1. Invalid constructed variable, for example:
VaR-> (lowercase ‘a’ instead of uppercase ‘A’)
Array- (missing ‘>’ and lowercase ‘rray’ instead of ‘RRAY’)
2. Invalid (label) or undeclared data model item
Checks spelling
Checks character case (for example, ‘a’ vs. ‘A’)
3. Forgetting to define the condition before the action ([ ])
4. Incorrect number of parentheses ( ‘)’ ) or quotation marks ( ‘ “ ’ )
Checks for too many
Checks for not enough
5. Function expecting an identifier (variable or data model item)
for the parameter, for example:
DM_READ (“DM_X”, “Y”, 0, 1, $GET_GCOUNT(1)) where
GET_GCOUNT( ) function is not an identifier.
Parsing for Syntax Checking
Workbench catches errors when it parses the model or map component file and also catches errors before it saves.
Valid Example:
otrun.exe -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -I
Invalid Examples:
otrun -at -cs dv -DINPUT_FILE=OTIn.Flt -I
(Missing string for -at argument.)
otrun -aa OTRecogn.att -cs dv -dINPUT_FILE -I
(-aa spelling and -d case-sensitivity errors.)
otrun.exe -at OTRecogn.att OTEnvelp.att -cs dv -DINPUT_FILE=OTIn.Flt -I
(Two strings for -at.)
Requires that one of the following is an argument: -at (map component file), -s (source data model), or -t (target data model).
If the option does not require an argument, the presence of an argument is not checked.
Checks for closing quotation marks when opening quotation marks are present. Also checks for the presence of spaces in a string when the string is not enclosed in quotation marks.
Valid Example:
otrun.exe -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA='Bob Smith' -I
Invalid Examples:
otrun -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA=Bob Smith -I
(There is a space between Bob and Smith; the string should be in quotation marks.)
otrun -at OTRecogn.att -cs dv -DINPUT_FILE=OTIn.Flt -DA='Bob Smith -I
(Missing closing quotation mark.)
Valid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A VAR->B
*1 .. 1
(Missing “,”, “)”, and “}” characters.)
Checks for the item type and that all components are
present. For example, Definings require the following
syntax: label, open brace, access item label, ‘@’sign,
minimum, .., maximum, optional format, verify list ID,
closing brace, *, minimum occurrence, .., maximum
occurrence.
Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI { alphanumericfld @5 .. 5
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}1 .. 1
(Missing verify list ID and “*” for occurrence.)
Valid Example:
Group {
DMI_A { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpA = DMI_A
}*1 .. 1
DMI_B { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpB = DMI_B
}*1 .. 1
}*1 .. 1
Invalid Example:
Group {
DMI_A { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpB = DMI_B
}*1 .. 1
DMI_B { AlphaNumericFld @5 .. 5 none
[]
VAR->TmpA = DMI_A
}*1 .. 1
}*1 .. 1
(DMI_B is being referenced out of scope – before it comes
into existence)
Checks that data model item labels are not referenced in
include files
Valid Example:
DECLARATIONS {
INCLUDE “Example.inc”
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM(“Ex1”, &DMI_A)
}*1 .. 1
Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
}*1 .. 1
Invalid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = Dmi
*1 .. 1
(Dmi is not a defined data model item label)
Rule Execution Syntax Checking
Workbench catches errors when it executes rules in the translator and at runtime.
Valid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCAT(VAR->A, VAR->B, VAR->C)
*1 .. 1
(STRCAT() only has two arguments, not three.)
Valid Example:
DMI {
[]
VAR->Pos = GET_FILEPOS(&DMI)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Pos = GET_FILEPOS(DMI)
VAR->Tmp = STRCAT(&VAR->A, &VAR->B)
*1 .. 1
(GET_FILEPOS() requires ‘&’; STRCAT() does not.)
Valid Example:
DECLARATIONS {
INCLUDE “Example.inc”
}
DMI_A { AlphaNumericFld @5 .. 5 none
[]
PERFORM(“Ex1”, &DMI_A, VAR->Tmp)
}*1 .. 1
Valid Example:
DMI {
[]
VAR->Tmp = STRSUBS(VAR->Tmp, 2, 4)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRSUBS(“ABCDEF”, 2, 4)
}*1 .. 1
(The string in STRSUBS() cannot be a string literal.)
A valid defined function is either an internal Application
Integrator™ function or a User Exit Extension function.
Valid Example:
DMI {
[]
VAR->Tmp = STRSUBS(VAR->Tmp, 2, 4)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRSUBSX(VAR->Tmp 2, 4)
}*1 .. 1
(The function STRSUBSX() is not an Application Integrator™
function or User Exit Extension function.)
Syntax Checking That Does Not Occur
The following are not verified during syntax checking.
Labels are not checked for consistent use of upper- and lowercase letters throughout the data model.
Reference to a variable’s value before it was set with a value is not checked.
Valid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Temp = STRCAT(VAR->Tmp, DMI)
}*1 .. 1
Invalid Example:
DMI { AlphaNumericFld @5 .. 5 none
[]
VAR->Tmp = DMI
VAR->Temp = STRCAT(VAR->TMP, DMI)
*1 .. 1
Valid Example:
DMI {
[]
CLOSE_INPUT()
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = CLOSE_INPUT()
*1 .. 1
(CLOSE_INPUT() does not return a value. The translator
attempts to obtain a value off the stack, which can cause a stack
underflow error if no values are on the stack.)
User entry (outside of Workbench) of the proper sequence of rule modes (PRESENT, ABSENT, then ERROR) is not checked.
Valid Example:
DMI {
[]
CLOSE_INPUT()
:ABSENT
[]
VAR->Error = ERRCODE()
:ERROR
[]
VAR->Error = ERRCODE()
}*1 .. 1
Invalid Example:
DMI {
[]
CLOSE_INPUT()
:ERROR
[]
VAR->Error = ERRCODE()
:ABSENT
[]
VAR->Error = ERRCODE()
}*1 .. 1
Valid Example:
DMI {
[]
VAR->Tmp = STRCATM(2, VAR->A, VAR->B)
}*1 .. 1
Invalid Example:
DMI {
[]
VAR->Tmp = STRCATM(2, VAR->A, VAR->B, VAR->C)
*1 .. 1
(Using three arguments, but telling the function that it is using only two.)
Comparing Two Model Files
The Compare feature allows the content of two files to be displayed and the differences between the files indicated by text of different colors, depending upon the type of difference.
This is the color key for items appearing in the Compare dialog box.
Orange: The data model item name is in one data model and not the other.
   Example: SE_TEST104
Magenta: The data model item name is the same in the two models, but the attributes of the two items are different.
   Example:
   []
   ARRAY->Output="BCH"
   VAR->SECnt = VAR->SECnt + 1
   ARRAY->Output = BCH_01
Black: The data model item name is the same in the two models and the two items’ attributes are identical.
   Example: EXIT 502
Magenta text with Navy Blue highlight: The data model item name is the same in the two models, the attributes of the two items are different, and one of the differing data model items is highlighted.
   Example: CTT
When an item is selected from one of the data models, all of the
attributes and rules for that data model item are displayed in the
DMI Attributes portion of the Compare dialog box. If an item is
found in both models, any difference in their attributes is
highlighted.
To Compare Two Files
1. From the Utility menu, choose Compare Model Files. The Compare dialog opens.
2. At the Compare File value entry box, browse to or type the first filename to be compared. You can use the Browse button to access the Open (file chooser) dialog box.
3. At the To File value entry box, type the second filename to be
compared. You can use the Browse button to access the Open
(file chooser) dialog box.
4. If the direction/mode or the access file is not specified, a dialog box prompts you to specify them. Choose the OK button to begin the file compare function.
5. The two files appear in the Compare View. The first file
appears on the left and the second file appears on the right.
Context Menu on the Compare View
The Compare view displays the data model items and rules for two compared data models or standard models. The system highlights the differences between the two files. The function has several options available to aid in locating the DMIs in the displayed files.
Next Difference
Takes you to the next DMI whose contents differ from those of its counterpart.
To Find Items
Use this procedure to locate a data model item by its label.
1. Position the cursor in the top half of the Compare view, and
click the cursor in the pane in which the Find should take place.
2. Use the context menu ( right click ) and select the Find option
3. At the Find What value entry box, type the text for which you
want to search.
4. Choose the Next button. The system locates the first occurrence
of the text string. If the text string is not found, the message,
"Pattern Not Found" appears in the Find dialog box.
5. To narrow the search, use one of these options.
Match Case: This option looks for text with the same capitalization as the text entered in the Find What value entry box.
Match Whole Word: This option looks for the entire character string entered in the Find What value entry box, not parts of words.
To Go To a Data Model Item
1. From the context menu (right click), choose GoTo.
2. At the Select Data Model Item value entry box, indicate the data
model item name to search using one of these methods –
• Type the name of the data model item.
• Choose the arrow and select the data model item name
from the list box.
3. Choose the OK button.
Report Generation
For example:
• Set the target AI runtime version as 4.1 in the Version
Validator Preference page.
• In the Model Editor, if you try to drag and drop a function that is not valid for the configured AI version, an error message is displayed in the status bar as shown below:
4. If you want to save the macro for permanent use, then click on
the Save macro check box and provide an ID for the macro.
5. If an ID is not given while saving, the macro is temporary and
available only for that instance of Workbench.
The respective macro commands can be seen in the model text page
of the model file where the cursor is placed.
Enter some text here, for example, ERG.
Play ‘Macro 1’ that you have already recorded; the following lines are displayed:
The selected text (ERG in this case) replaces the $sel$ in the macro.
2. To play a temporary macro go to the Play Macro icon dropdown
on the toolbar, go to Temporary Macros and then select the
macro.
Note:
The macros that are already listed after installing the feature will not be available for export/editing.
The macros derived from XML will be available for import, export, and editing.
Inbound Processing
In AI there are two main methods to process inbound XML data: using a pre-parser called otxmlcanon, or using the translator’s built-in XML parser, Xerces.
[Figure: Inbound processing flows. The input file is passed to the parser, which performs validation. In one flow (OTXMLPre.att), the parsed XML data is piped directly into the translator; in the other (OTCallParser.att), the parser is invoked through a batch/shell script and writes a temporary output file. Developer-written data models then read in and write out the application data.]
[Figure: Outbound processing flow. In the message processing loop, source message processing (a developer-written data model) reads in the message and performs validation and error handling; target message processing (a developer-written data model) writes out the XML message to a temporary file. OTCallParser.att invokes the parser through a batch/shell script, after which the XML data file is bypassed, rejected, or processed.]
<greeting>
Welcome to the <response> world of XML </greeting>
</response>
is not well-formed because <response> opens inside <greeting> but does not close before </greeting>.
The correct sequence would be:
<greeting>
Welcome to the <response> world of XML </response>
</greeting>
Valid Documents While all XML parsers check that the documents are well-formed
(meaning the tags are paired and in the proper sequence, attribute
values are indicated properly, and so on), some parsers also
validate the document. Validating parsers check that the structure
and number of tags makes sense.
Case Sensitivity The entire XML document file, both markup and text, is case
sensitive. Element type names, such as those used in start tags and
end tags must be defined alike, using either uppercase or
lowercase characters.
For well-formed files with no document type definition, the first
occurrence of an element type name defines the casing. The
uppercase and lowercase must match; thus, <IMG/> and <img/>
are two different element types.
Attribute names are also case sensitive on a per-element basis. For
example, <PIC width="7in"/> and <PIC WIDTH="7in"/>
within the same file exhibit two separate attributes, because the
different cases of width and WIDTH distinguish them.
DTD Example
The DTD is arranged in hierarchical format. In this example, the hierarchy of the elements is indicated by indentation.
<page>
  <head>
    <title/>
  </head>
  <body>
    <title/>
    <para/>
  </body>
</page>
XML Schema Definition Requirements
The XML Schema Definition (XSD) is the definition of a document in XML syntax. The XSD specifies what elements may exist, what attributes the elements have, what elements must be found inside other elements, and in what order the elements can appear.
XSD Example
The XSD is arranged in hierarchical format. In this example, the hierarchy of the elements is indicated by indentation.
<page>
  <head>
    <title/>
  </head>
  <body>
    <title/>
    <para/>
  </body>
</page>
XML Parsers
Application Integrator™ provides you with two XML parsers. One is a separate program called by system models or user-defined batch files. The other is built into the AI Control Server.
Parser Overview
A generic XML parser is a program or class that can read any well-formed, valid XML data as its input. It will also detect and report any errors found in the XML data.
Why Do We Have a Parser?
The parser can check the input for well-formed XML and can write output in the canonical format. The parser can also convert characters that XML uses into characters that are recognizable to the translator and the target application. The items checked by the parser depend on the arguments set when invoking the parser.
XML Special Characters
Special syntax characters are used to identify structure and special sequences of characters within the XML data.
These special characters are the less than symbol (<), the greater
than symbol (>), the ampersand symbol (&), the apostrophe
symbol ('), and the quotation mark symbol ("). To use these special
characters in your XML data models, you must use their Entity
Reference value. The following table lists the Entity Reference
value for these special characters:
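Special Character     Entity Reference
< (less than)         &lt;
> (greater than)      &gt;
& (ampersand)         &amp;
' (apostrophe)        &apos;
" (quotation mark)    &quot;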
Rather than using the less than symbol (<) in the XML code, the Entity Reference of &lt; was coded.
In this example, if the ampersand symbol were needed in an element, it would be coded like this:
<DESCRIPTION>Currier&amp;Ives</DESCRIPTION>
Rather than using the ampersand symbol (&) in the XML code, the Entity Reference of &amp; was coded.
Escape and Release Characters
During inbound processing, when the parser sees the entity reference value in an XML document, it converts it to the release character that was specified on the command line, followed by the intended symbol.
During outbound processing, when the parser sees the escape
character that was specified on the command line followed by the
intended symbol, it converts it to the corresponding entity
reference value.
Note: Xpath models can only be used on the source side, not the
target side.
Source Parsing
During source parsing, Xerces parses the XML data into a DOM (document object model) in memory while ensuring that the data
has a start and end tag, and all child tags are closed out before
parent end tags, in addition to ensuring that proper XML syntax is
followed.
Once the DOM structure is populated with the full XML document and no errors are encountered, if S_CONSTRAINT references a style sheet, that style sheet is invoked with Xalan processing.
If errors are encountered, processing returns to the parent
environment where LAST_MCFERR() can be called to obtain
details of the errors. If no errors are encountered, processing
continues with the S_MODEL being either another style sheet or an
XPATH enabled data model.
Target Construction
Target processing uses the style sheet defined by T_MODEL to construct the XML document. Once the style sheet processing ends, the output XML document is automatically parsed back into
a DOM structure in memory. (Note: The translator is aware of the
starting position of the current XML document output. If the
output file contained previous XML data, only the current
constructed XML document is parsed back in.) During parsing, the
data is verified to be well-formed. If XML_VALIDATE is set to
“Yes”, the parsed XML document is also validated against the
DTD/XSD. If errors occurred, processing returns to the parent
environment where LAST_MCFERR() can be used to retrieve the
details of the errors. If no validation errors are reported and
T_CONSTRAINT has a style sheet associated with it, it is then
invoked by Xalan referencing the DOM structure in memory.
Processing then returns back to the parent environment. If errors
were encountered during the Xalan style sheet processing,
LAST_MCFERR() can be used to retrieve the details of the errors.
Validation of the XML Data
Parsing using Xerces simply checks the XML data to ensure it is well formed.
XML Constraints
Schema (.xsd) and Document Type Definitions (.dtd) are used to define the content of an XML message. They are used to validate an instance of their definition. Schema is considerably more specific in its definition than DTD, in that it can specify occurrences, size, character set, code lists, groupings, and so on.
However, neither schema nor DTD syntax can represent
requirements or constraints based on the presence, absence or data
content spanning multiple elements and/or attributes. These types
of requirements or constraints can be represented and enforced
using XPATH expressions in style sheets.
Standards, such as RosettaNet, provide DTDs or XSDs for their
messages, along with documentation, which describes those
constraints that could not be represented within the DTD/XSD
syntax. To separate translation constraint validation from
mapping, a set of environment variables have been added within
the Map Component File (.att) to define these constraint style
sheets.
The Map Component File now contains the following for XML data
processing:
INPUT_FILE
S_ACCESS
S_MODEL (mapping XPATH data model or style sheet)
S_CONSTRAINT (source constraint validation style sheet)
OUTPUT_FILE
T_ACCESS
T_MODEL (mapping style sheet)
T_CONSTRAINT (target constraint validation style sheet)
XML Constraint File
<sch:schema xmlns:sch="http://www.ascc.net/xml/schematron">
<sch:pattern>
<sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ForecastPartner/PartnerDescription">
<sch:report test="count(BusinessDescription/GlobalBusinessIdentifier) = 0 and count(PhysicalAddress) = 0">
When GlobalBusinessIdentifier is not present, PhysicalAddress is required.~
</sch:report>
</sch:rule>
</sch:pattern>
</sch:schema>
Types of Constraints
The following table contains examples of constraints implemented
that can be used as a reference while developing your own XML
constraint file.
3 “Only one value is allowed for a specific element’s value”: the only value allowed at this XPATH location is ‘Ship’; any other value is an error.
<sch:pattern name="Only one value is allowed for a specific element’s
value.">
<sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod">
<sch:assert test="(count(GlobalTransportEventCode[.='Ship']) = count(GlobalTransportEventCode))">
Only the value "Ship" is allowed for
'/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod/GlobalTransportEventCode'.~
</sch:assert>
</sch:rule>
</sch:pattern>
4 “Only a value from a list is allowed for a specific element’s value” - other values are an
error
<sch:pattern name="Only a value from a list is allowed for a specific
element’s value.">
<sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod">
<sch:assert test="(count(GlobalTransportEventCode[.='Ship']) + count(GlobalTransportEventCode[.='Dock']) = count(GlobalTransportEventCode))">
Only the value "Ship" or "Dock" is allowed for
'/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod/GlobalTransportEventCode'.~
</sch:assert>
</sch:rule>
</sch:pattern>
5 “Value must be previously referenced within the document.”
<sch:pattern name="Value must be previously referenced within
document.">
<sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ForecastProductIdentification">
<sch:assert test="count(../../ForecastPartner/PartnerDescription/BusinessDescription/GlobalBusinessIdentifier[.= current()/GlobalProductIdentifier]) > 0">
The value of
'/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ForecastProductIdentification/GlobalProductIdentifier' was not previously referenced within the document.~
</sch:assert>
</sch:rule>
</sch:pattern>
6 “Element must only be present if another element with a specific value is present.”
<sch:pattern name="Element must only be present if another element with
a specific value is present.">
<sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ProductForecast/ProductSchedule/ForecastProductSchedule/ForecastPeriod">
<sch:report test="count(GlobalTransportEventCode) > 0 and (GlobalIntervalCode!='Named Address' or not(GlobalIntervalCode))">
GlobalTransportEventCode present yet GlobalIntervalCode not
present with a value of 'Named Address'.~
</sch:report>
</sch:rule>
</sch:pattern>
7 “Parent present, one of its children required.”
<sch:pattern name="Parent present, one of its children required.">
<sch:rule context="//PhysicalAddress">
<sch:report test="count(child::*) = 0">
At least one child element is required for Physical Address.~
</sch:report>
</sch:rule>
</sch:pattern>
8 “One and only one occurrence of an element with a specific value is required/allowed.”
<sch:pattern name="Once occurrence of an element with a specific value
is required/allowed.">
<sch:rule context="/Pip4A4PlanningReleaseForecastNotification">
<sch:report test="count(//GlobalDocumentFunctionCode[.='Request'])
!= 1">
One and only one occurrence of GlobalDocumentFunctionCode with a
value of 'Request' is required/allowed.~
</sch:report>
</sch:rule>
</sch:pattern>
9 “Element required when another element is not used/present.”
<sch:pattern name="Element required when another element is not
used/present.">
<sch:rule context="/Pip4A4PlanningReleaseForecastNotification/CoreForecast/PartnerProductForecast/ForecastPartner/PartnerDescription">
<sch:report test="count(BusinessDescription/GlobalBusinessIdentifier) = 0 and count(PhysicalAddress) = 0">
When GlobalBusinessIdentifier is not present, PhysicalAddress is
required.~
</sch:report>
</sch:rule>
</sch:pattern>
10 “All occurrences of an element, when present, must be a specific code when another element contains a certain code.”
<sch:pattern name="All occurrences of an element when present must be a specific code, when another element contains a certain code.">
<sch:rule context="//GlobalPartnerClassificationCode">
<sch:report test="(.!='End User' and /Pip4A4PlanningReleaseForecastNotification/GlobalDocumentFunctionCode[. = 'Request'])">
GlobalDocumentFunctionCode is 'Request', so all
GlobalPartnerClassificationCode present must be 'End User'.~
</sch:report>
</sch:rule>
</sch:pattern>
The assert and report elements output their values based on the
following conditions:
assert – when test is false then output value
report – when test is true then output value
Be sure to end each <assert> or <report> value with the tilde (“~”)
character. For error reporting, the System models will then be able
to consolidate multiple occurrences of the same constraint. If a
constraint tests multiple occurrences of a branch (for example a line
item value), then each time that constraint fails the text to report
upon error appears. If the tilde (~) is used at the end of the text, the
error text is written out once, followed by a message on how many
times the error occurred.
For example, the Error report would write out this error message, if
the tilde were not used. Notice how the same error text is printed
out each time the constraint failed.
220 Source constraint style sheet validation error Source Constraint Error: In pattern
count(child::*) = 0: At least one child element is required for Physical Address.In pattern
count(child::*) = 0: At least one child element is required for Physical Address.In pattern
count(child::*) = 0: At least one child element is required for Physical Address.
If the tilde is used, the error message would appear like the
following example, where the error text is printed out once,
followed by a message in brackets that indicates the number of
times the error occurred.
220 Source constraint style sheet validation error Source Constraint Error:In pattern
count(child::*) = 0: At least one child element is required for Physical Address.
[occurred 3 times]
Generate Constraint Style Sheet
Where <CONSTRAINT_BASE_FILENAME> is the constraint XML file’s base filename. Do not specify the .scmt extension.
The output of this translation will be the constraint style sheet; the name of this file will be <CONSTRAINT_BASE_FILENAME>.xsl.
For example:
XML constraint filename - PIP4A4-constraints.scmt,
<CONSTRAINT_BASE_FILENAME> - “PIP4A4-constraints”
Generated Style Sheet filename - PIP4A4-constraints.xsl.
Invoking the Parser
All XML data needs to be passed through the otxmlcanon parser. In the XML document, if you reference a DTD that does not exist, you will get an error, whether or not the validation argument is used.
Note: The XML Plug-In does not support referencing DTDs and XSD™ schemas in the same XML document.
Outbound (UNIX):
otxmlcanon -o <escape_char> [-t] [-V | -v] [-D] [-h] [-c] [-X] [-x] [-i] [-s] [-n] [-a] [-l <locale>] <input_file>
Outbound (Windows):
otxmlcanon.exe -o <escape_char> [-t] [-V | -v] [-D] [-h] [-c] [-X] [-x] [-i] [-s] [-n] [-a] [-l <locale>] <input_file>
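As a hedged sketch based on the option descriptions below (the '?' release character and the rn.xml input file are illustrative), an inbound invocation supplies the release character with -r:

otxmlcanon -r '?' -V rn.xml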
Where
Option Description
-r <release_character> This argument defines the release
character to be used when converting
Entity References. Used for inbound
processing.
-o <escape_character> This argument defines the escape
character to be used by the translator so
that the parser can recognize when
Entity References need to be inserted.
Used for outbound processing.
<input_filename> Indicates the filename of the input file.
The parser should be run with this
option to ensure entity reference values
are resolved to their string values.
If no <input_file> is entered, standard
input (stdin) is used.
Option Description
-r <release character> This argument defines the release
character to be used when converting
Entity References and is used for inbound
processing.
-o <escape character> This argument defines the escape
character to be used by the translator so
that the parser can recognize when Entity
References need to be inserted. It is used
for outbound processing.
-X This argument specifies that the output
will be in canonical format.
-t This argument indicates that the xml file
must be output without the prolog.
-V Indicates that validation must occur
against the DTD.
-v Indicates that validation must occur only
if the DTD is present. Both –V and –v
cannot be used together.
-D Print the extraInfo, which is the
ENGLISH detailed error
-h Print usage information for the program.
Same as “-?”.
-n Retains or removes the namespace in the output XML file.
-a Retains the comments in the output XML file.
-c Indicates that the output must be deterministic.
-x This argument expands empty elements that do not contain end tags so that they contain end tags.
-i Outputs CDATA without releasing special characters.
-s This argument tests for a single XML document by checking for multiple root elements.
-l <locale> Indicates the locale to be used for the
input file.
<input specification > Indicates the filename of the input file in
standard URL format, the socket
specification, or if no input specification
is entered, the standard input is used.
File specification is optional and is in the
standard URL format. The full path must
be included. For example,
file:/C:/appl/rn.xml
The socket specification is optional and
indicates input will be coming from a
socket and identifies the socket. Refer to
Section 4, "Creating Map Component
Files," for information about specifying
sockets.
If no <input specification> is entered,
standard input (stdin) is assumed.
Reference Conversion

XML Input to the Parser
The input data stream is a series of one or more XML documents. Input can come from a file or standard input.

Parser Output
The output from the parser is written to standard output. When multiple documents are parsed, output is in the same order as the input documents.
The Validation Report for the file is displayed (as shown below):
The window consists of six tabs: Results, Details, Suggestions, Element Declarations, Type Definitions, and Model Group Definitions.
The Results tab reports whether the validation was successful, that is, whether it had errors. It presents a brief summary of the schema files that were validated and the ones that have errors (errors are shown in red).
The Details tab of the dialog can be used for probing deeper into the errors. It displays the error details, for example, what errors were found and the schema file in which each error occurred, with the line and column number (highlighted in red).
This tab also shows details about schemas, such as the resolved location, target namespace, and the number of element declarations, type definitions, and model group definitions found in a schema.
The next tab is the Suggestions tab. The utility has a smart module called the suggestion generator. It attempts to calculate the imports or includes in an erroneous schema. These imports/includes are needed to rectify the errors.
The Element Declarations tab can be used to view the elements found in the schema files along with other details such as type, location, and namespace.
Select the element using the drop-down box of Go To Element.
This view has two other tabs in it: Tree View and Text View, the latter being the text equivalent of the tree view.
The Type Definitions tab can be used to view the type definitions found in the schema files (see below):
The Type Definitions tab also contains other details of the schema files, such as base type, location, and namespace. The Go To Element feature is a drop-down and can be used for locating a type by name. This view has two other tabs in it: Tree View and Text View, the latter being the text equivalent of the tree view.
The Model Group Definitions tab can be used to view the model group definitions found in the schema files along with other details such as type, location, and namespace.
Enter a value for the Parent Folder. This will be the location where the data model will be stored.
Enter a name for the new data model, with or without the .mdl extension; the file will be created with the extension .mdl.
Select the mode for the data model. This can be either Source, for parsing XML data, or Target, for writing out XML data.
Select the Type of data model you are creating. In this case, XML would be selected.
Select Next.
3. Check the "Generate Schema from XML" check box to proceed with generation of an XSD from an existing XML file. This option enables the XML file selection box. Select the XML file from which to generate the schema file.
Else, check the "Generate Schema from DTD" check box to proceed with generation of an XSD from an existing DTD file. This option enables the DTD file selection box. Select the DTD file from which to generate the schema file.
Else, select the schema to be used to generate the data model.
Alternatively, you can type in the name of the XML, DTD, or schema file. If the file is not present, Workbench displays an appropriate error. If the file is present and the path is not in the MSO, the path is automatically added to the MSO. If the file is present and the path is not linked to the Workspace, the path is automatically added to the Workspace and to the MSO. Note that only an absolute path is considered, not a relative path.
Note: The Schemas that are used for generation of std and ids
files cannot contain duplicate element names. If the schemas
contain duplicate element names, the utility uses the first set of
values while creating the ids file.
Open the schema file using a text editor. Copy the URI of the
targetNamespace specified within the 'schema' element. Click
the Add... button.
Source Data Model Considerations
The XML document must contain the following items:
• Reference to an XSD
XSD Editor
To view/edit an XSD file:
1. Double click on the XSD file. It is displayed in the editor. It has
two tabs at the bottom of the editor: Design and Source. The
Design view is shown below
6. You can view the properties of the file in the XSD Editor's Properties view under the General, Constraints, Documentation, and Extension tabs (accordion style) as shown below:
7. Right click on the XSD file in the Navigator pane to choose from the XSD-specific context menu. To generate an XML file, choose Generate > XML File.
9. You can also open the XSD file in other editors. Right click on
the XSD file, choose Open With>XML Schema Editor (or any
other editor).
XML Editor
Note: All the options for XML files specified here are available for
XSL files also.
3. Double click on the XML file in the Navigator pane. The XML
Editor displays the file in the design page. It displays a
hierarchical view of the various elements in the XML file. It also
displays details of elements on the right side of the page.
4. Click on the Source tab to view the source code of the XML file. The tags are displayed in a particular color. The XML editor is an XML syntax-aware source editor.
You can view the properties of the XML file in the Properties view. The XML menu bar also provides various options. You can also go to the menu bar, click on XML, and choose the required option.
You can open the XML file in other editors. Right click on the XML
file, choose Open With>XML Editor (or any other editor).
Rules for Pattern Facets
A pattern facet defined in the schema is not generated into the data model for validation. For a numeric field with a defined pattern, such as a social security number (for example, the pattern \d{3}-\d{2}-\d{4}), the data model is generated with a format that does not permit the hyphens to be present. This will cause errors in the data model unless the format is changed.
Examples
This section contains two examples of data models: a source data model and a target data model. Each of these examples was generated using the data model generator. All parts of the data model can be edited.
Source Data Model Example
This is a source data model that was created using the data model generator. Each data model item was created because the XSD specified it. The data model items are associated with the appropriate item type, occurrence, size, and formatting/masking characters. Add rules to the data model to accomplish the function of the model using Workbench.
Target Data Model Considerations
The target data model invokes OTTrgInit in the Initialization DMI, which sets the delimiters. For this, OTXML.inc needs to be included in the target data model under the DECLARATIONS section.
Ensure that the following delimiters are set:
SET_SECOND_DELIM(60)  ;60 is ASCII "<"
SET_FIRST_DELIM(62)   ;62 is ASCII ">"
SET_THIRD_DELIM(34)   ;34 is ASCII double quote
SET_RELEASE(92)       ;92 is ASCII backslash; this must match the "-r" or "-o" parser argument.
Target Data Model Example
This is a target data model that was created using the data model generator. Each data model item was created because the XSD specified it. The data model items are associated with the appropriate item type, occurrence, size, and formatting/masking characters.
XML Validation Parameters
Generated data models and the inbound and outbound examples shown later in this section do not automatically validate XML data against a DTD. To validate the XML data, parameters must be set in the User.inc file or at the command line. The User.inc file is an INCLUDE file.
Disabling the Prolog
The prolog is output in the XML data by default; however, you can specify when the prolog should not be produced. The XML samples use the -DXML_PROLOG parameter to control whether the prolog is output.
When the prolog should not be output for all translations, use the Disable Globally procedure. When the prolog should not be output for a single translation, use the procedure to disable for a specific translation.
Canonical XML Format
Canonical format removes any white space from the XML data and also removes the prolog and DOCTYPE elements.
The XML samples use the -DCANONICAL parameter to control outputting the data from the parser in canonical format. The -DCANONICAL parameter can be set to "Yes" or "No". The default is "No".
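To illustrate what canonical form does, here is a minimal sketch using Python's standard library (Python 3.8+); it demonstrates only the concept, not the AI parser itself:

# Canonicalization drops the XML declaration (and any DOCTYPE) and,
# with strip_text, removes whitespace-only text between elements.
from xml.etree.ElementTree import canonicalize

doc = '<?xml version="1.0"?>\n<note>\n  <to>Tove</to>\n</note>'
print(canonicalize(doc, strip_text=True))
# Output: <note><to>Tove</to></note>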
Empty Elements with End Tag
Empty elements can come in the format <Empty/>. The EMPTY parameter is used to force the end tag to be present, for example, <Empty></Empty>.
The XML samples use the -DEMPTY parameter to control outputting the data from the parser with end tags on all empty elements. The -DEMPTY parameter can be set to "Yes" or "No". The default is No.
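The same end-tag convention can be sketched with Python's standard library (an illustration of the output form, not of the AI parser):

# By default ElementTree writes <Empty/>; short_empty_elements=False
# forces the <Empty></Empty> form described above.
import xml.etree.ElementTree as ET

root = ET.fromstring("<Doc><Empty/></Doc>")
print(ET.tostring(root, short_empty_elements=False).decode())
# Output: <Doc><Empty></Empty></Doc>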
Deterministic
Using this parameter, the XML parser will check to ensure the XML data is deterministic. (For example, the content model (a, b) | (a, c) is non-deterministic, because after reading an "a" the parser cannot tell which branch applies without looking ahead.)
The XML samples use the -DDETERMINISTIC parameter to enforce that the XML data is deterministic. The -DDETERMINISTIC parameter can be set to "Yes" or "No". The default is No.
XML Comments
XML comments can be retained in the output XML file by specifying this option. The -DXML_COMMENTS parameter can be set to "Yes" or "No". The default is No.
XML Troubleshooting
Parser Error Handling
When the parser is successful, a return code of zero is generated by the otxmlcanon utility. When an error is encountered, a non-zero error code is returned. The parser writes the error message to the standard output in a format consisting of the following parts.
Where:
Message Part                                 Description
<error code>                                 An integer indicating the error code.
<error message[;detailed error message]>     A short, generic description of the error. The detailed error message occurs when the parser has collected additional information about the error.
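A minimal sketch of checking the parser's return code from a script follows; otxmlcanon and the -s flag are taken from this guide, while the input path is illustrative:

# Run the parser and inspect its return code: zero means success,
# non-zero means an error message was written to standard output.
import subprocess

result = subprocess.run(
    ["otxmlcanon", "-s", "file:/C:/appl/rn.xml"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print("Parse failed:", result.stdout.strip())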
Parser Error Example Table
The line number shown in the error message identifies the approximate location of the error in the instance document or the referenced DTD or schema. The actual location of the error could be a few lines above the line number indicated.
Data Model Generator Error Handling
Shown here is an example of a Session Output dialog box from an unsuccessful data model generation. This list shows the reasons that the data model generator could fail and the resolutions for the errors.

Cause: The specified XML document does not exist in the working directory.
Resolution: Verify that the filename specified for the XML document is correct and that the file resides in the working directory.

The data model generator utility sends back the following return codes:
XML Samples
This section provides three examples for XML processing. The first two examples are inbound examples and the third is an outbound example.
Inbound Processing Examples
There are two inbound processing examples. With the first example, the translation occurs using one environment. With the second example, the translation occurs using several environments.
Before you run the examples described below, environment variables to locate the programs inittrans/otrun and to set the OT_QUEUEID need to be set. This is done by running .\env\aiserver.bat (on Windows®) or env/aiserver.env (on Unix and Linux).
Inbound Example 1
In this example, translation occurs using a single environment.
Inbound Example 2
In this example, translation occurs using multiple environments, thereby allowing the XML data to be completely parsed and validated before any translator parsing occurs.
Outbound
Processing
Example
Outbound Example
Testing and Debugging XML Schema Data Models
Schema files (.xsd) are checked to be well-formed XML but are not
validated. They are expected to be syntactically correct. Only the
instance document is validated beyond checking for being well-
formed.
Setting Validation Parameters
Generated data models, standard data models, and the inbound and outbound examples shown in XML Schema Samples do not automatically validate XML data against an XSD. To validate the XML data against an XSD, parameters must be set in the User.inc file or at the command line. The User.inc file is an INCLUDE file.
Validating Against an XSD
XSD validation can be set globally or by a specific translation. Refer to List of Items Validated for additional information.
List of Items Validated
This list shows the items checked by the parser/validator during validation.
Additional Data Model Validations
This is a list of validations performed by a data model created by the XSD data model generator when the -V argument is used when generating the data model. Rules are placed on the data model items for this validation.
List of Items Not Implemented
Refer to Unsupported Items for XML Schema Samples in Workbench User's Guide-Appendix.
XML Schema Troubleshooting
Parser Error Handling
When the parser is successful, a return code of zero is generated. When an error is encountered, a non-zero error code is returned by the otxmlcanon utility. The parser writes the error message to the standard output in a format consisting of the following parts.
Where:
Message Part                                 Description
<error code>                                 An integer indicating the error code.
<error message[;detailed error message]>     A short, generic description of the error. The detailed error message occurs when the parser has collected additional information about the error.
Types of Errors
Errors can be categorized into two groups: non-structural errors and structural errors.
Data Model Generator Error Handling
Errors are displayed in the status area of the Workbench dialog box. Any non-zero return code represents an error.
Cause: The specified XML document does not exist in the working directory.
Resolution: Verify that the filename specified for the XML document (the Import filename) is correct and that it resides in the working directory.

Cause: The specified XML document does not refer to an existing XSD.
Resolution: Verify that the filename specified for the XSD is correct and that it resides in the working directory or exists in the directory indicated in the path specified.

Cause: Root elements in the XML document and the XSD do not match.
Resolution: Verify that the root element specified is correct and matches between the instance document and the XML Schema.
The data model generator utility sends back the following return
codes. Any return code that is not a zero value indicates an error.
Inbound Processing, Method 2 Error Codes
When the second inbound processing method is used, the following error codes may be returned. The second method uses a pre-translation environment that parses and validates the data before translation begins. Refer to Inbound Processing for additional information.
The following errors occur frequently and are typically the root cause of failures in the XML Schema Samples. Repeated use of otxsdgen (the XSD to data model generator) and otxmlcanon (the XML parser) can detect errors and direct you to a solution. For maximum effectiveness, the recommendation is to use the two programs from the command line.
Inbound Processing Examples
There are two inbound processing examples. With the first example, the translation occurs using one environment. With the second example, the translation occurs using several environments.
Before you run the examples described below, environment variables to locate the programs inittrans/otrun and to set the OT_QUEUEID need to be set. This is done by running .\env\aiserver.bat (on Windows®) or env/aiserver.env (on Unix and Linux).
Inbound Example 1
In this example, translation occurs with the use of a single
environment.
Inbound Example 2
Outbound
Processing
Example
Outbound Example
Style Sheets (XSLT)
A style sheet is a way to transform data out of and into the XML formats. A style sheet can be used in place of an AI data model when dealing with XML data. Application Integrator™ has developed XSL or style sheet functions that can be used for mapping data within a style sheet. These functions allow values to be assigned to or referenced from AI variables (VAR->s, ARRAY->s), access the Profile Database, execute data model rules within a style sheet, and access AI environment variables.
Style Sheets Overview
Style sheets define the transformation (mapping) rules for parsing or constructing XML data. They work together with one or more DTDs or XSDs, which define the overall document's structure and constraints. Written in Extensible Stylesheet Language Transformations (XSLT), a style sheet is executed by a style sheet tool, such as Xalan, which is used in AI.
For inbound XML transformation, a parser, such as Xerces in AI, is used to verify that the XML data is well-formed, optionally validate it against its referenced DTDs/XSDs, and then populate a Document Object Model (DOM) with the XML data. A DOM is a tree-like structure in memory containing the XML document's values. Contained within the style sheet is XML Path Language (XPath), which references nodes (elements and attributes) of the DOM data tree. Once these values are referenced, the values can then be mapped using both standard XSLT functions and AI extended style sheet functions. These functions allow for data manipulation, testing, formatting, accessing the Profile database, assignment and reference of AI variables and environment variables, and the execution of AI data model rules.
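As a minimal sketch of the same parse-then-reference flow, using Python's standard library rather than Xerces/Xalan:

# Parse XML into a DOM tree, then reference a node's value, much as an
# XPath expression such as /note/heading would in a style sheet.
from xml.dom.minidom import parseString

dom = parseString("<note><heading>Reminder</heading></note>")
heading = dom.getElementsByTagName("heading")[0]
print(heading.firstChild.data)   # Output: Reminder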
Outbound style sheets can be used to output XML, HTML, and many other text-based formats. The outbound style sheet contains not only the rules to output the mapped values, but also literals, which are the XML start and end tags. Standard and AI extended functions are used to access and manipulate the values being output. If the format of the data being output is XML, Xerces is automatically invoked to ensure it is well-formed and, optionally, to validate it against its referenced DTDs/XSDs.
Creating Style Sheets
Style sheets are written and maintained in AI's Workbench product. The full structure of the document is shown in what is called a "Schema Tree," which permits mapping of all elements and attributes, not just the ones contained in the style sheet. It is from this "Schema Tree" that the drag-and-drop mapping process occurs, which creates the mapping rules in memory. Upon saving, only the mapped nodes of the "Schema Tree" are written out with their rules into a newly generated style sheet. The AI translator then recognizes during processing when a data model or style sheet is used and invokes the proper internal processor.
3. Enter a value for the Parent Folder. This will be the location
where the style sheet will be stored. Style sheets should be
stored together with your other maps or data models. This
would be your “<OT_DIR>\Models\MyModels” or
“<OT_DIR>\Models” sub-directories.
4. Enter a name for the new Style sheet with or without the
extension of .xsl – the file is created with the extension .xsl.
5. Select the mode for the Style sheet. This will be either Source for
parsing XML data or Target for writing out XML data.
6. Select Next.
7. The next screen, prompting for the Schema File and Root Element, is displayed.
Click on the Model Text tab. This is a non-editable page which shows all
the modifications and rules created in the Overview page. (See below)
Click the Browse button to select the schema file that has the substitutable elements for the selected element. The substitutable elements are listed in the dialog, from which the user selects the appropriate element. The selected element is then inserted in the file.
The Substitution element is retained after the addition. This allows the user to replace the Substitution element once again with other elements, if required.
For the element name ANY (in bold), the user right-clicks and selects Replace ANY Element.
A dialog box comes up. Click the Browse button to select a schema
file. It lists the elements which could substitute the ANY element.
The ANY element is retained after the addition to allow the users
to replace it once again with other elements, if required.
Style Sheet Functions
Style sheet processing comes with inbuilt functions for data manipulation, testing, formatting, and so on. A few of these functions are: boolean, concat, count, format-number, position, round, string-length, substring, and sum. Additional functions, or extended functions, have been added for style sheet processing by AI. These were implemented to be able to perform the same translation/processing logic as used in data models. They provide the ability to access AI variables (VAR->s, ARRAY->s), the Profile Database, and environment variables, along with the ability to execute data model rules in performs.
For a list of these extended style sheet functions and new
environment variables, see Appendix A. Application Integrator
Model Functions in Workbench User’s Guide-Appendix.
These extended style sheet functions may appear as part of Map Builder rules or as part of Custom rules.
a. Map Builder Rules
These rules are added by Workbench when you do a drag and drop between source and target (traditional model/XSL).
For example:
<!-- +MapBuilder(default, /InvoiceList/Document/InvoiceNumber, PhoneNumber, ARRAY->InvoiceNumber_PhoneNumber, /InvoiceList/Document/InvoiceNumber) -->
<xsl:value-of select="otxsl:array-put('InvoiceNumber_PhoneNumber', /InvoiceList/Document/InvoiceNumber)" />
<!-- -MapBuilder(default, /InvoiceList/Document/InvoiceNumber, PhoneNumber, ARRAY->InvoiceNumber_PhoneNumber, /InvoiceList/Document/InvoiceNumber) -->
b. Custom Rules
When you want to add a specific rule of your choice to any item (node), the rule should be enclosed within a Custom tag. (The Custom tag, an XSL comment, is a marker Workbench uses to identify that the rule was added by the user.) This is necessary to associate the custom (user-defined) rule with the corresponding item when you reopen a previously mapped XSL.
For example:
<!-- +Custom(/InvoiceList/Document/InvoiceNumber,InvoiceNumber) -->
<xsl:value-of select="otxsl:var-put('temp', /InvoiceList/Document/InvoiceNumber)" />
<!-- -Custom(/InvoiceList/Document/InvoiceNumber,InvoiceNumber) -->
The syntax of the Custom tag can be found in the Built-ins view under the tabs "All" and "XSL Functions" with the function name Custom_XSL_Rule.
Note: Any rule that is not enclosed within the Custom tag will not be saved into the XSL by Workbench.
General Rules
This functionality deals with XML data only. The normal EDI file formats do not work with this functionality. The process that AI uses to parse the data is:
Step 3 - Builds a DOM tree with the data from the entire document in memory.
The AI data model does not read from the input stream but from the DOM tree. These are some general assumptions:
1. The Xpath functionality within data models only deals with
source (input) models and not target processing.
2. Data validation is done using the Xerces XML parser and not
within the AI model. This includes size/occurrence/ID
lookups, and so on.
3. The DMI access type is XMLRecord, which is used for all types of data, including Date/Time, Number, and Alpha characters. The schema or DTD validates the data element format.
4. COUNTER in the access model functionality does not apply.
5. There are two different kinds of rules that exist for Xpath
models.
a. Map Builder Rules
i. These rules are added by Workbench when you do a drag and drop between source (XPath model) and target (traditional model/XSL model).
For example:
[]
;; +MapBuilder(default, /note/heading, LastName, ARRAY->heading_LastName)
ARRAY->heading_LastName = heading
;; -MapBuilder(default, /note/heading, LastName, ARRAY->heading_LastName)
b. Custom Rules
i. When you want to add a specific rule of your choice, it should be done using a Custom tag.
For example:
[]
;; +Custom(/note/heading,heading)
VAR->temp = heading
The syntax of the Custom tag can be found in the Built-ins view under the tab "default" with the function name Custom_Xpath_Rule.
Data Presence
Element:
An element is said to be present if the element tags are present.
The resultant value will return blank. This includes
<Element></Element> and <Element/>. If the tags are missing,
then the element is said to be missing and ABSENT mode will be
executed. The resultant value would return error 139 – No Value.
When an element is missing, error 141 is returned to the ABSENT mode to execute the code. This applies to both elements with children and elements without children. If the error code is still 141,
occurrence validation is checked. If the minimum is met and error
is either 0 or 141, the process flow goes to the next element. If
within this group, no other data was read (Match value or previous
element), error 171 is returned back to the parent where occurrence
validation is checked on the parent. If the error still exists, the
ERROR mode code executes.
OnError executes just like normal AI mapping. When an error occurs (Missing Required Element), this perform is executed.
There is no RECOVERY in XPath reference processing. You MUST NOT set the RECOVERY switch; you will get error 200 on any elements that have an error. The following code is prohibited within an XPath-enabled data model:
VAR->OTPriorEvar = SET_EVAR("RECOVERY", "YES")
Tag:
When a group has missing children, then error 171 is returned to
the parent ABSENT mode. Occurrence validation is checked and if
not met, then ERROR mode is executed.
Code Sample 1:
Group { XMLRecord “Group”
Element1 { XMLRecord “Element1”}*1 .. 1
Element2 { XMLRecord “Element2”}*1 .. 1
}*0 .. 1
For this code, if the data contains <Group/> or <Group></Group>, the process flow is: the Group tag is found, go to Element1 ABSENT, then Element1 ERROR mode, then return back to Group ERROR mode with error 138.
For the same code, if the data doesn’t contain the <Group> tag, the
process flow is: Error 141 is returned to the Group ABSENT mode
(never goes to the Element1 element), occurrence validation is
checked and since Group is not required, move to the next element
and clear the error.
Code Sample 2:
Group { XMLRecord “Group”
Element1 { XMLRecord “Element1”}*0 .. 1
Element2 { XMLRecord “Element2”}*0 .. 1
}*1 .. 1
For sample #2, if the data contains <Group/>, the process flow is: the Group tag is found, go to Element1 ABSENT, occurrence validation is OK, go to Element2 ABSENT, occurrence validation is OK, return 171 back to the parent ABSENT, check occurrence validation, go to parent ERROR mode (minimum is not met).
For the same code, if the data doesn’t contain the <Group> tag, the
process flow is: Error 141 is returned to the Group ABSENT mode,
occurrence validation is checked and since Group is required, goes
to Group ERROR mode with error 141.
Code Sample 3:
Group { XMLRecord “Group”
Element1 { XMLRecord “Element1”}*1 .. 1
Element2 { XMLRecord “Element2”}*1 .. 1
}*0 .. 1
For sample #3, if the data contains
<Group>
<Element1>DATA</Element1>
</Group>
the process flow is: the Group tag is found, go to Element1 PRESENT, go to Element2 ABSENT with error code 141. Since occurrence validation is not met, go to Element2 ERROR mode with error 141, and then return back to the parent ERROR mode with error 138.
Process Flow
While the AI map is executed, the data is read from the DOM tree in memory and not the input file. Therefore the file position functions GET_FILEPOS and SET_FILEPOS will not give you the position where the element starts. GET_FILEPOS returns zero, which is the beginning of the file. SET_FILEPOS can set the input position but does not affect the process flow or the DOM tree.
The process flow can point to the DOM tree using either an
absolute reference or relative reference. Absolute reference means
that only the element defines where the data comes from. These
references can’t be grouped together. Therefore only the first
instance of the element is returned. If there are parent groupings,
the process flow does not execute these parent groups (The second
FirstLevel). When the xpath expression is defined, a leading “/”
tells the translator to use absolute reference.
The absolute reference will read the first group Data value of
Value1. It will never read the Value2 Data element even though
there is a group of 1 .. 10.
As you move down the structure in memory, the XPath tree is being appended. If there is a leading / in the XPath expression, the tree structure is not used and the value is taken from the root. In the above example, even though the XMLRecord has started the tree with /Header/FirstLevel, Element4 uses the XPath expression /Header/FirstLevel/Element4 and NOT /Header/FirstLevel/Header/FirstLevel/Element4. The data that is found is "Data4". It would never be able to find "SecondSetData4" because it always takes the first reference.
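A minimal illustration of this first-match behavior, using Python's ElementTree rather than the AI translator:

# find() returns only the first match, just as the absolute reference
# /Header/FirstLevel/Element4 returns only "Data4" above.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<Header>"
    "<FirstLevel><Element4>Data4</Element4></FirstLevel>"
    "<FirstLevel><Element4>SecondSetData4</Element4></FirstLevel>"
    "</Header>"
)
print(doc.find("./FirstLevel/Element4").text)   # Output: Data4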
Creating Xpath Source Models
While the AI map is executed, the data is read from the DOM tree in memory and not the input file.
3. Enter a value for the Parent Folder. This is the location where the data model will be stored.
4. Enter a name for the new Xpath data model.
5. Select Next.
Step 1: Obtain the Translation Definition Requirements
Obtain all available documentation that explains the syntax, structure, and mapping rules that apply to the translation you are going to model. The syntax defines the characteristics of the components, such as the character sets, fixed or delimited data, identification, and tag definitions. The structure defines the relationships between the components and the occurrence constraints. The mapping rules define the semantic meaning of the components to accurately associate the source to the target.
If documentation is unavailable, find the person in your
organization or at your trading partner’s organization who can
provide an understanding of the content and structure of the data
for electronic commerce. From this person, obtain the syntax,
structure, and data mapping requirements.
If neither documentation nor a contact person is available to relay
the data and translation requirements, you must obtain the
information by examining the data files. This method, of course,
makes translation definition a process of trial-and-error. The more
complicated the translation requirement, the less assured you are of
an accurate modeling definition.
Character Set
When an item has a unique character set, which differs from other items, a different type of item is used. For example, five different types of items would be used for the following:
Numeric - allows for 0-9, -, +, .
Date - allows for 0-9 with valid month and day values
Text - allows for all printable characters
Alpha - allows for the character set range of A-Z
Segment - contains a tag at the beginning and a delimiter at the end
Pre- and Post-Conditions
Delimited data typically allows for the use of several delimiters. These delimiters can be used either as the pre-conditions or the post-conditions of items. You need to understand the delimiters used in your input file to know when another type of item must be defined.
For example, consider fixed length data within variable length
records. The data fields have no pre- or post-condition. The
record, however, has a post-condition of a delimiter (possibly a line
feed, or carriage return/line feed).
Another example would be the UN/EDIFACT standard which
specifies delimited data using three different delimiter characters.
The rules specifying the delimiters that can precede or follow
which item types are very specific in this standard’s syntax.
Sequence
Sequence is the order in which items can appear. It can be rigid or random. A rigid example is when a standard requires records to appear in a certain order (record type A cannot appear after record type B). A random example is when records can appear in any order (record type A can appear before or after record type B).
Diagram Description
Record_B Record_B - is the sibling of Record_A
Field_B1 Field_B1 - is the child of Record_B
Field_B2 Field_B2 - is the sibling of Field_B1
The following are example lists of types of items for source and
target.
Source
Type: Fixed length fields in delimited records

Item            Character Set                          Pre-Condition     Post-Condition
Alphanumeric    Any character between ' ' and '~'      None              None

Target
Type: Variable length fields in delimited records

Item            Character Set                          Pre-Condition     Post-Condition
Alphanumeric    Any character between ' ' and '~'      elem-delimiter    elem-delimiter or sgmt delimiter
Alpha           A-Z, a-z, space                        elem-delimiter    elem-delimiter or sgmt delimiter
Numeric         Special Numeric Function               elem-delimiter    elem-delimiter or sgmt delimiter
Date            Special Date Function                  elem-delimiter    elem-delimiter or sgmt delimiter
Time            Special Time Function                  elem-delimiter    elem-delimiter or sgmt delimiter
Record          Tag: Alphanumeric                      sgmt delimiter    sgmt delimiter
Step 3: Obtain the Test Input File(s)
When obtaining an input file for testing the data models, the volume or size of the input file is not as important as having an input file that contains all acceptable variations of the input structure. This includes not just expected variations, but all possible variations as defined in the structure definition. The goal is to be able to test all possible structure and content combinations to ensure that the translation definition will not fail once placed into production mode.
Step 4: Lay Out the Environment Flow
The layout of the environment flow is a pictorial representation of the various elements that need to be brought together to configure the translator to process in a certain way (for example, it shows the input files, output files, and other components) and the order in which they are used. Each environment provides the ability to alter the configuration of the translator and allows for the modular creation of data models. Refer to Section 4. Creating Map Component Files (Environments) for a further discussion and illustrations of environments.
Changing environments during translation can affect the following
configuration components:
Access models
Changing the access models allows you to add, change or
remove item type definitions. This includes adding,
changing or removing access delimiter characters and
changing the use of access model COUNTERs.
Input and output files
Changing the input file allows you to bring different data into
the translation. By changing the output file, the output data
can be filtered to different files.
Profile Database key prefixes
Changing the database key prefixes provides different views
within the Profile Database. Different views may be required
at various points in the translation.
Find match limit
Step 4 should generate:
An environment processing flow depicting the relationship of the environments used during translation.
Step 5 should generate:
Functional Area Definition Worksheets completed for every new environment defined in Step 4.
New access models, or the addition of new item types to existing access models.
Step 6: Create Source and Target Data Model Declarations
The syntax and structure of the translation will be modeled in this step. First, define the data models as per the Application Integrator™ Model worksheet. For assistance, refer to a copy of the worksheet found in the Standards Plug-In User's Guide for the public standard you are using. Then enter the definitions into Workbench. The rules for mapping will be created in Steps 8 and 9.
You can work on the source and target data models independently.
One modeler can work on the source data models while another
modeler defines the target data models. The power of Application
Integrator™ allows the two sides to be brought together at runtime
for binding. The relationship between the two sides is established
in the mapping process, through the use of mapping variables.
Create an Application Integrator™ Model Worksheet for each data
model defined on each Environment Definition Worksheet:
For each item contained within the data structure, create a line item
entry:
Section Name            Instructions
Data Model Item Name    Assign a label name, unique within this data model, by which this item will be identified. The name must begin with a letter or an underscore (_), and should not begin with the two letters "OT". It should be composed of the character set [A-Z], [a-z], [0-9], and underscore (_). Use the various columns under Item Name to represent the various hierarchical levels in the structure definition. (Used for all types of items: group, tag, container, and defining item.)
For example:
Message_Loop
    Heading_Record
        Heading_Rec_Field_1
        Heading_Rec_Field_2
        Heading_Rec_Field_3
    Detail_Line_Item_Loop
        Detail_Record_1
            Detail_Rec_1_Field_1
            Detail_Rec_1_Field_2
            Detail_Rec_1_Field_3
        Detail_Record_2
            Detail_Rec_2_Field_1
            Detail_Rec_2_Field_2
            Detail_Rec_2_Field_3
Step 7: Create a Map Component File for Each Environment
Create a map component file for each Environment Definition Worksheet created in Step 5. Create a new map component file by completing the Map Component File dialog box opened from the New Map Component option of the Workbench File menu. Refer to Section 4. Creating Map Component Files (Environments) for instructions.
Step 8: List Source to Target Mapping Variables
Using the Application Integrator™ Variable Worksheet (an example of which can be found in the Standards Plug-In User's Guide for the public standard you are using), complete a line item for each piece of data that will be mapped from the source to the target. Once this worksheet is completed, the source data modeler will be able to begin creating the rules described in Step 9, independent of the target data modeler. The type of variable and its ID (label) is all that the source and target data modelers need to know.
Create the Application Integrator™ Variable Worksheet as follows:
Section Name    Instructions
Type            Identify the type of variable to be used. Each variable type has different mapping attributes.
Label           Assign a label for the variable that will be unique throughout the total translation session. The label must begin with a letter and should not begin with the letters "OT".
Description     Enter a description of what the variable type and label represent.
Step 9: Create Data Model Rules
Using Workbench, you can now apply rules to the data models. Since the source and target are independent of each other and runtime-bound via the variables, either side can be done first or independently.
The source assigns its data model item values to specific variables,
for example, VAR->PONumber = HeadingRec_PONumber. The
target assigns the variable values to its data model items, for
example, Rec1-PONo = VAR->PONumber.
The primary use of the rules is to create the movement of data from
the source to the target, establishing the desired format in the
process. However, rules are also used for the following purposes:
Logging of information for audit, message tracking, and for
reporting, using the functions LOG_REC( ) and
ERR_LOG( ), for example.
Capturing of information for later acknowledgment
creation.
Performing error recovery, such as defaulting when a value
is absent.
Verifying relational conditional compliance — a relational
condition may exist among two or more siblings within the
same parent based on the presence or absence of one of
those siblings.
Guidelines for Rule Creation
a. If a rule action fails, the remaining actions contained in the rule are not executed. For example, if a variable or data model item is referenced for its value, and a value has not yet been assigned to it, the action will fail. (This can occur in a tag item's rule that references an optional child defining item that was not present in the tag item.) Whenever this possibility exists, immediately start a new rule for the balance of the actions.
Example:
Incorrect way to model:
Tag_A
Defining_A1 (optional)
Defining_A2 (optional)
[]
VAR->Fld1 = Defining_A1
VAR->Fld2 = Defining_A2

Correct way to model:
Tag_A
Defining_A1 (optional)
Defining_A2 (optional)
[]
VAR->Fld1 = Defining_A1
[]
VAR->Fld2 = Defining_A2
[]
SET_ERR( )
Profile Database Lookups
The following rules should be included in your data models to set up the views into the database so that information can be accessed.
When accessing the Profile Database, your model must contain
rules that tell the translator what type of information you are going
to access and from where to access it. The type of information you
might access could be cross-references, code list verifications, or
substitutions. You could access this information from any
hierarchy level of the trading partner profile or from any of the
standard version levels. Refer to Trade Guide online help for
details on how to set up or modify the trading partner profile and
standard version code lists.
The SET_EVAR data model function allows you to set the
environment variables for these types of lookups. Refer to the
description of the SET_EVAR function in Appendix A. Application
Integrator Model Functions in Workbench User’s Guide-Appendix for
more details.
For information on the generic model used in trading partner
recognition, refer to the Map Component Files for Enveloping/ De-
enveloping section.
Database Key and Inheritance
Each level in the trading partner hierarchy is represented by a database key. Each database key is delimited by the pipe ( | ) character, and is a maximum of 12 characters long.
The key is derived from the information keyed into the trading partner's profile, as follows:
Inheritance
When looking up a value, the lookup is appended to the key prefix before the read. A cross-reference lookup of a part number, for example, might be:
ABCIC|ABCFG|ABC850|part_a
By using this approach, the property of "inheritance" can be easily applied, where inheritance denotes the use of values from higher levels (ancestors) in the hierarchy when a specific value is not found at the current level.
To continue with the example, if the cross-reference value of part_a
is not found at the message level ABCIC|ABCFG|ABC850, the
system automatically removes levels until the value is found or all
levels are exhausted.
ABCIC|ABCFG|ABC850|part_a
ABCIC|ABCFG|part_a
ABCIC|part_a
part_a
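A minimal sketch of this fallback, with a plain dictionary standing in for the Profile Database:

# Strip one level at a time from the pipe-delimited key until the value
# is found or all levels are exhausted, as in the sequence above.
def inherited_lookup(profile, key):
    parts = key.split("|")
    value, levels = parts[-1], parts[:-1]
    while True:
        candidate = "|".join(levels + [value])
        if candidate in profile:
            return profile[candidate]
        if not levels:
            return None          # all levels exhausted
        levels.pop()             # remove the lowest remaining level

profile = {"ABCIC|part_a": "PART-0001"}   # hypothetical stored cross-reference
print(inherited_lookup(profile, "ABCIC|ABCFG|ABC850|part_a"))   # PART-0001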
The property of inheritance exists for types of values that can be
stored in the Profile Database:
Substitutions
Cross-references
Verification code lists
Inheritance can lessen the redundancy in the Profile Database, since
all levels of a trading partner hierarchy (for example, all divisions
and/or all messages of the trading partner) may use the same
cross-references and codes.
Inheritance can be turned on/off as a parameter to database
functions. The data modeler determines whether or not the
inheritance feature should be used during data modeling.
For details on setting up the trading partner hierarchy, refer to the
Trade Guide online help.
Column Heading                   Instructions
Description of Lookup            Enter a description of what lookup is occurring.
Side S/T                         Enter S when used in the source data model; enter T when used in the target data model.
Type S/X/V                       Enter S for a substitution type database lookup; enter X for a cross-reference type database lookup; enter V for a code verification type database lookup.
Label/Category/Verify List ID    When Type is 'S' for substitution, enter the label used for the substitution, for example, $X12AckLevel. When Type is 'X' for cross-reference, enter the category from which the cross-reference is to occur. When Type is 'V' for verification, enter the verify list ID under which the lookup is to occur.
Hierarchy Level Key Prefix       Enter the key prefix of the Profile Lookup for the trading partner. The Key Prefix is obtained from the dialog box in Step 10.
Cross-references
The environment keyword XREF_KEY is used to set the view into the database to perform cross-reference lookups. Once you have identified the type of information and where to access it, the next step is to identify the category that the data is stored under, along with the value in the input stream to be cross-referenced. The XREF data model function allows you to do this.
In addition to cross-referencing information, you have the ability to
manipulate the cross-reference information stored in the database.
The SET_XREF data model function allows you to update values
into the cross-reference portion of the Profile Database. The
DEL_XREF data model function allows you to delete a Profile
Database cross-reference.
Verification Code Lists
The environment keyword LOOKUP_KEY is used to set the view into the database to perform verification lookups. Once you have identified the type of information and where to access it, the next step is to identify the verify list ID that the data is stored under. The verify list ID is keyed into the Verify field of the data model item. The verify list ID is also known as the Category in the Xrefs/Codes dialog box. The next step is to construct a rule identifying the data model item for which the verification is to occur and the value to be looked up. The LKUP or DEF_LKUP data model functions allow you to do this.
Step 10: Enter the Profile Database Values
All lookups into the Profile Database should have been listed on the Profile Database Interface Worksheet as part of Step 9, when creating rules within Workbench.
Using Trade Guide, enter all of the Profile Database lookups, including values to be used in substitutions, cross-references (x/refs), and verification code lists. Procedures for entering this information are found in Trade Guide online help.
Cross-references
1. From the Xrefs/Codes dialog box, add the category under which the cross-reference will occur to the category file, if not already present.
2. Select the category from within the list box, and enter the extracted values from the input stream in the Value field. If a string of values is used, these values should be trimmed of trailing spaces and concatenated together using the pipe '|' character as a delimiter between the fields (a minimal sketch of this follows the list).
3. Enter the value to be used for the cross-reference in the
Description field.
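The trimming and concatenation described in step 2 amounts to the following (the values shown are hypothetical):

# Trim trailing spaces from each extracted value, then join with pipes.
values = ["ABC   ", "850  ", "004010"]
key_value = "|".join(v.rstrip() for v in values)
print(key_value)   # Output: ABC|850|004010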
Step 11: Run Test Translations and Debug
Using the input file from Step 3, translate the file(s). The files can be translated through the Run dialog box of Workbench, or at the command line by invoking the inittrans command in Unix and Linux or the otrun.exe command in Windows®. Refer to Section 12. Translating and Debugging for detailed instructions.
During test translation, it may be beneficial to set a high trace level.
The trace facility set at a high level generates a step-by-step log of
the translation. Various levels of the trace can be turned on and off
as needed. Continue testing and debugging until your data model
is free of error and ready to migrate to an official test or production
area.
Step 12: Make Backup Files
Once the translation has been modeled, the following modified or created files should be backed up:
Map component files — .att
Data model files — .mdl, .xsl (in the case of XSL-based models, refer to Section 10. XML Mapping and Processing for more details.)
Test and input files
Access model files — .acc
Profile Database — sdb.dat & sdb.idx
All worksheets and flowcharts should also be packaged together
for later reference. Refer to Trade Guide online help for details on
scheduling backups.
Step 13: Migrate the Data
Once you are through with the complete data mapping process, including testing, you are ready to migrate your application to an official test or production functional area. Refer to Section 13. Migrating to Test and Production for suggestions on migrating to a different functional area.
Assigning Names
Consider the following when assigning names to models and files.
Operating System Naming Conventions
When assigning filenames, consider naming conventions for all operating systems under which you expect to operate. Use the least common denominator; that is, if you are expecting to use Windows®, limit the base portion of the filename to eight characters and the extension to three characters. Note also that the Windows operating system is not case-sensitive, but Unix and Linux are. If you intend to develop applications for these platforms, consider using all lowercase or all uppercase characters to avoid any problems following migration. FTP will migrate files, maintaining the case it encounters.
Application Integrator Reserved Prefixes
When assigning names to models, variables, and so on, use a prefix other than the two-character "OT" or "ot." All provided Application Integrator™ models (.mdl, .acc), generic variables, and utility shell scripts begin with the two letters "OT" or "ot."
Variable                             Length
Access model variable                40 characters
Data model variable                  40 characters
Variable (temporary) variable        40 characters
Array variable                       40 characters
User-defined environment variable    40 characters
Substitution variable                254 characters
Verification list ID                 254 characters
Using Application Integrator Models and Files
The models and files supplied by Application Integrator™ and Application Integrator™ standards implementation packages have names beginning with "OT" and "ot." These files are read-only, in most cases.
If you desire to modify an Application Integrator™ file, such as a sample data model, copy the file to a file with a new name and then modify the copy.
References to Files
References to files and models within the map component files and data models can be either relative or explicit. Relative referencing means every file and model being used is located in the same directory. These include source, target, access models, and map component files. Using relative referencing, they can be moved to another file system without changes.
Before Translating
Note: For this section, the screen prints are done using the
Translate icon on the Workbench toolbar.
Select the level using the dialog box options associated with
this box. The trace level defaults to zero if a trace level is not
entered.
Refer to the “Setting a Trace Level” section later in this section
for a complete description of this option.
5. In the Translation Type etched area, select the type of
translation to be run.
6. Select the Keep Input File option if you would like the input file to be copied to a temporary file and used during translation.
7. In the Environment Variables group box, enter any additional
parameters needed for this translation. These parameters could
include input filename, output filename, and user-defined
environment variables.
Using the Name and Value box entries, the system creates an
additional parameter statement, such as
“INPUT_FILE=OTX12I.txt”.
Note: If the data model has been modified since you last ran
a translation, you will be prompted to save these changes.
The Translation Status tab displays the status of the translation run.
The Trace Log tab displays the trace file created by the translation.
9. When done, select Finish to close the translation dialog.
Setting a Trace Level
The trace level controls the content of the translation trace log file. The trace level is set using a numeric value. This value represents the options of the trace that are set; each option is a power of two, so options can be combined by adding their values. You can manually enter the trace level options, or use a dialog box for making selections.
The following table describes the trace levels.
Trace Level    Description
0 No Trace Setting or a Trace Setting of Zero (0)
otrans or otrun.exe version, compile date, and
time
Date/time translation began and ended
Loading of libraries, with their compiled
date/time
Translation ending status
1 Data Model Item Values Listing
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
Names of source access and data models
“SOURCE VALUES”
The last source item values parsed
“VAR VALUES” - declared within this source
data model
“ARRAY VALUES” - declared within this
source data model
Names of target access and data models
“TARGET VALUES”
“VAR VALUES” - declared within the source
and this target data model
“ARRAY VALUES” - declared within the
source and this target data model
2 Value Table Listing
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
“VALUE STACK” - Target item labels with the
values assigned to them
“VSTK” - Values being referenced off the value
stack in target Phase 2 processing
4 Source Data Model Items
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
Two lines for each source item, as processed -
“DM: ItemLabel”, “FINISHED ItemLabel”
8 Target Data Model Items
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
One line for each target item, as processed -
“DM: ItemLabel”
One line each time processing returns to a
parent level - for example, “FINISHED
ItemLabel”
16 Rule Execution
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
One line for when entering rules on an item -
only if item has rules defined
One line reporting rule execution status - all
items
32 Rule Functions - This level does not output on its own. Refer to 48.
48 Rule Functions (48, which is 16 + 32)
Includes #NUMERIC/#DATE/#TIME access functions, for example:
Function NUMERIC_in: dm PhoneNumber pic "No format" dm left 10 .. 10 right 0 .. 0 radix
Function NUMERIC_in returns value: "3255550961"
Execution of rules - assignment, functions, and so on.
64 Source Access Items
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
Source TAG item matching - “pre condition
Rec_Code met”
Source parsed values being returned back to the
source data model
128 Error Details (128)
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
The clearing of the error stack - "err_dump( )"
The capturing of an error - "err_push( )"
256 IO Detail (256) - pertains to source only
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
Shows file position entering each item
Shows each character read and checks if within
defined character set.
Function NUMERIC_in: dm PhoneNumber pic
“No format” dm left 10 .. 10 right 0 .. 0 radix
Function NUMERIC_in returns value:
“3255550961”
512 Write Output Detail
Values reported by “No Trace Setting or a Trace
Setting of Zero (0)”
The order the items are written out and the
access items used
1023 Complete Trace
2. Select each type of trace level. To select all options, choose the
Set All button. To deselect all options, choose the Clear All
button.
3. Choose the OK button to return the selected trace level;
- Or -
Choose the Cancel button to exit the trace settings without
making changes.
Note: The trace log file is generated once you run the
translation and is displayed in the Translation Results dialog.
Hint: You can also subtract the number of the items you do not
want to show on the trace from the total trace value of 1023. For
example, to show a total trace minus access items, you would
subtract 64.
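Since the trace levels are bit values, the arithmetic can be sketched as follows (the constant names are illustrative labels for the documented values):

# Trace levels are powers of two, so they add together and can be
# subtracted from the full trace value of 1023.
RULE_EXECUTION = 16
RULE_FUNCTIONS = 32
SOURCE_ACCESS_ITEMS = 64
FULL_TRACE = 1023

print(RULE_EXECUTION + RULE_FUNCTIONS)    # 48, as described in the table
print(FULL_TRACE - SOURCE_ACCESS_ITEMS)   # 959, a full trace minus access items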
Translating at the Command Line
For Unix and Linux systems, the program called inittrans invokes a translation from the command line. For Windows® systems, the program otrun.exe issues a translation session. Each program is invoked with arguments to specify the configuration of the translation session (input/output files, initial environment, environment variables, and so on).
Invoking the Translation Processes - Unix and Linux
The translation program inittrans has the following common syntax in a Unix or Linux development area. Refer to the "Available Arguments to inittrans/otrun.exe" table for a complete list of arguments.
inittrans -at <initial map component file> -cs <Control Server queue id> -tl <trace level> -I
Invoking the Translation Process - Windows
The translation program otrun.exe has the following command syntax in a Windows® development area. Refer to the "Available Arguments to inittrans/otrun.exe" table for a complete list of arguments.
otrun.exe -at <initial map component file> -cs <Control Server queue id> -tl <trace level> -I
Available Arguments to inittrans/otrun.exe

Code             Description of Item Defined                                       Environment Variable
Resources
-a               source access file                                                S_ACCESS
-A               target access file                                                T_ACCESS
-at              initial map component file                                        None
-cs              specify AI Control Server queue ID                                None
-D               declare user-definable environment variable                       None
-hk              hierarchy key prefix                                              HIERARCHY_KEY
-i               input file                                                        INPUT_FILE
-I               interactive, foreground versus background (default) processing    None
-lg <filename>   specifies a file for the translation session output, allowing you to translate in the background instead of monitoring feedback in a Session Output message box    None
Parameter Explanations
The code for an initial map component file, -at, is exclusive of -i, -o, -a, -A, -s, -t, -hk, -xk, and -lk. If -at is used, these others cannot be used; if the other codes are used with -at, they have no impact.
The following sets are paired: -a with -s (source access and data
model), and -A with -t (target access and data model).
The parameter -lg <filename> generates translation session
feedback to a background file. This parameter writes to the
filename specified the translation session feedback usually
displayed in the Session Output dialog box. If the file cannot be
opened or created, the Session Output message box will be
displayed. It is a good practice to keep this filename consistent for
all translations, thus making any cleanup easier by handling only
one file.
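For example, a hypothetical Unix or Linux invocation that sends the session feedback to a background file (the log filename is illustrative):
inittrans -at train1.att -cs $OT_QUEUEID -lg session.log -I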
Session Id:29
Windows®:
otrun -cs %OT_QUEUEID% -list
Along with:
Ways to Set the Trace Log
You can alter the amount of detail contained in the trace in four places:
The Workbench Translate dialog box
The environment (map component file)
Within the data model rules
At the command line
Through the Run Dialog Box
You can set the trace level through the Workbench Translate dialog box by accessing the Trace Settings dialog box (accessed from the Select button next to the Trace Level box). This method sets the trace throughout the complete translation session. Refer to the procedures earlier in this section for instructions on completing this dialog box.
Using Data Model Rules
For any item in the translation session, you can define the trace reporting detail. Use the function SET_EVAR( ) and the keyword environment variable TRACE_LEVEL to do this.
[ ]
SET_EVAR("TRACE_LEVEL", 1023)
At the Command Line
You can specify a trace level when invoking a translation by using the -tl parameter and passing a numeric trace level value. For example, a complete trace (1023) is specified with the following line:
In Unix and Linux, type:
inittrans -at train1.att -cs $OT_QUEUEID -tl 1023 -I
In Windows®, type:
otrun.exe -at train1.att -cs %OT_QUEUEID% -tl 1023 -I
Refer to the earlier section, “Translating at the Command Line,” for
more details on command line parameters.
Viewing the Trace Log
You can view the trace log through Workbench, the Unix or Linux command line, or by opening the trace log via an editor.
Source Processing:
Source data model processing - only if source data model declared
in this environment
Source data and access model filenames —
“SOURCE_TARGET get_source acc OTFixed.acc model
\Trandev52\models\Examples.mdl”
Source item processing — repeat for each item
Data model item label
Access parsing / character reading - tags, containers,
defining
Rules performed, modes (Present/Absent/Error), conditions
and actions
Source values (values are not seen if the data model is
exited, i.e., “EXIT 503”)
Data model item values
Array values (ARRAY->) declared within this source data
model, in sequence declared
Temporary variable values (VAR->) declared within this
source data model, in sequence declared
Debugging Hints
The following are hints on using the trace log for debugging.
To See Full Trace
Consider setting a full trace (1023) to see all details. To see values as they are formatted and written out off the value stack, you must use the full trace level setting.
Excessive Looping Concern
Use a trace level value of "12" (4 + 8); this shows only the items' labels with their occurrences.
Pinpointing Rule Execution Errors
Set the trace level value to "16"; this shows the items with rule errors.
LastName - PRESENT rules
<3> returning eval not instantiated
LastName: ERROR->> status after PRESENT rules 139
vs.
LastName - PRESENT rules
LastName: status OK after PRESENT rules
Keyword
   Meaning

ENTER fill_sdm( ): XXXXXXX parent->instance n
   Identifies the instance of parent group item XXXXXXX. The parent instance is incremented when a new hierarchical level is encountered.

Equal returns True/False
   Indicates the result of an evaluation or condition statement.

err_dump( )
   Indicates that the content of the error dump stack is being discarded or reset to a non-error state.

err_push( ) status nnn msg "xxxxxxxx"
   Assigns an internal setting of an error value to alter the state of processing, depending on the keyword used, data encountered, or data not encountered.

ERROR
   Indicates that the rules that follow will be performed if the item is found to be in error. The translator determines which mode it is in by evaluating the current error code value after parsing a data model item (if it is a defining item).

error number
   For example, "184".

Evaluated value
   Displays the contents of the item just before performing rules associated with the item.

exec_ops status
   Identifies the status returned by a condition or action.

FINISHED XXXXXXX:
   Identifies the end of processing of an occurrence of a data model item.

fp_save set to nnn in xxxxxxx
   Identifies the last position read from the input stream.

FUNC XXXXXXX set to value xxxx <nnnn>
   Identifies where an access model item is being set to a value of xxxx, where the decimal value of xxxx is nnnn. The data model "SET" functions (SET_DECIMAL, SET_RELEASE, SET_FIRST_DELIM, SET_SECOND_DELIM, SET_THIRD_DELIM, SET_FOURTH_DELIM, and SET_FIFTH_DELIM) set the access model items.

Function xxxxxx_in returns value: "yyyyyy"
   Displays the source access model function (xxxxxx_in) used to return a value of yyyyyy from the data input stream.

Function xxxxxx_out: dm yyyyyy pic FFFFFFF
   Displays the target data model output function called (xxxxxx_out) to define the item type for data model item yyyyyy. The output format for the item is defined with the mask format FFFFFFF.

get_source
   Where source data model processing occurs.

infile "xxxxxxx"
   Specifies the name of the file used to supply input data for processing.

initial pre condition XXXXXXX met
   Indicates that an access model function call has returned a valid value for an item pre-condition and assigned its value to data model item XXXXXXX.

instance nnn
   Identifies which instance of the item was acted upon. The instance is incremented by a group. Instances are not incremented by defining items or tag items. Instances are reset when control passes back to the parent item.

IO: char "x", out of char set
   Indicates that data found in the input stream is outside the range of the valid characters defined for the item.

IO: func XXXXXXX called
   Indicates that the access model function XXXXXXX is being called to return a value to the data model.

level nnn
   Starting from the top of the data model and working downward, the level is a reference to the number of unique data model group items encountered. This keyword allows you to more easily determine where the item occurs in the data model. The level is also referenced by the pipe '|' symbols along the left side of the trace file as each data model item is encountered and when processing of the item is finished. One '|' is displayed for each level; for example, "|||" is displayed at level 3.

litpush "X"
   Defines the value of a literal constant, usually assigned to a variable.

Matching XXXXXXX to YYYYYYY
   Shows the comparison performed for a tag item. The data taken from the input stream (XXXXXXX) is compared against the tag (YYYYYYY) for a match.

max occurrence nnnn
   Identifies the maximum number of times that an item has been defined to occur successively in a looping sequence.

occurrence nnnn
   Identifies the number of times the item was encountered within a looping sequence.

outfile "xxxxxxx"
   Specifies the name of the file used to write processed data out to the disk.

pre_cond xxxxxxx (not) found
   An item is identified if the data found meets the requirements of pre_condition - item - post_condition. This statement indicates whether the pre_condition has been found.

PRESENT
   Indicates that the rules that follow will be performed if the item is found to be present. The translator determines which mode it is in by evaluating the current error code value after parsing a data model item (if it is a defining item). An error code value of "0" defines present mode: the data model item is present.

put_target
   Where target data model processing occurs.

radix
   Defines the character used to signify the decimal place within a numeric item definition.

read_set returning xxxx
   Displays the returned value of a character string read in from the data stream.

resetting fp to nnn
   Indicates that the data file pointer is being reset to value "nnn," usually the last value referenced for functions such as #SET_FIRST_DELIM, #SET_SECOND_DELIM, #SET_THIRD_DELIM, #SET_FOURTH_DELIM, and #SET_FIFTH_DELIM, or when an item is in error. The file pointer may also be reset to the beginning of the file when the keywords REJECT or RELEASE are encountered in the data model.

right nn .. nn
   For a data model item defined as a numeric, this indicates the minimum to maximum number of digits that may appear to the right of the decimal point.

SENTINEL sequence
   Indicates that sentinels occur between each grouping of occurrences.

SOURCE VALUES
   Keywords marking the start of a summarization of source values. This summarization comes at the end of the trace file listing for the source data model.

SOURCE_TARGET put_target acc xxxxxxx model yyyyyyy.mdl
   Keywords specifying the names of the target access and data models loaded at this point in the trace/translation.

status
   Identifies the status returned by operations performed on the parent item.

statusc
   Identifies the status returned by operations performed on an item's children.

TARGET VALUES
   Keywords marking the start of a summarization of target values. This summarization comes at the end of the trace file listing for a target data model.

TIME_in / TIME_out
   The "_in" (for source data model processing) or "_out" (for target data model processing) suffix is added to function names such as TIME and DATE to indicate access model functions.

VALUE STACK
   Values parsed or constructed and placed on the value stack at the beginning of Phase 2 target processing.

VAR VALUES
   Keywords marking the start of a summarization of values assigned to all temporary variables in the source and target data models.

VSTK
   The writing of each value off the value stack during Phase 2 of target processing.

VSTK->dm xxxxxxx dm xxxxxxx value nnnn
   Shows a walkthrough of data model value assignments and matching attempts.

XXXXXX read_set err
   Indicates that a value read in from the input stream is outside of the character set XXXXXX.

XXXXXXX: status OK after PRESENT rules
   Indicates that rules on data model item XXXXXXX were performed successfully.
Debugging when Processing Large Input Files
1. Insert messages in the model using the SEND_SMSG function to mark progress during the translation at a high level, for example:
SEND_SMSG(2, "Field Content:", Field1)
2. Turn the trace on and off from within the model to get more detail:
SET_EVAR("TRACE_LEVEL", 1023)
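For example, a model might raise the trace level only around a suspect section and lower it again afterward, keeping the log manageable for a large input file. This is a minimal sketch; it assumes a trace level of 0 turns detailed tracing back off:

SET_EVAR("TRACE_LEVEL", 1023)
...rules for the records under investigation...
SET_EVAR("TRACE_LEVEL", 0)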
Refer to Appendix A, Application Integrator Model Functions, in the Workbench User's Guide-Appendix for a complete description of these functions.

Example Trace Log
The following is an example using trace level 1023 (all options selected).
(Output begins here)
----------
VAR values
----------
DM: LastChangeDate val "11/04/2005" instance 0 p_inst 0
p_seq 0 level 0
DM: LastChangeDate val "11/04/2005" instance 0 level 0
SENTINEL sequence 0
end LastChangeDate values
----------
ARRAY values
----------
DM: first val "Mary " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Mary " instance 0 level 2
DM: first val "John " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "John " instance 0 level 2
DM: first val "Bob " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Bob " instance 0 level 2
DM: first val "Sue " instance 0 p_inst 0
p_seq 0 level 2
DM: first val "Sue " instance 0 level 2
SENTINEL sequence 0
end first values
DM: last val "Smith " instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Smith " instance 0 level 2
DM: last val "Green " instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Green " instance 0 level 2
DM: last val "Jones " instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Jones " instance 0 level 2
DM: last val "Willis" instance 0 p_inst 0
p_seq 0 level 2
DM: last val "Willis" instance 0 level 2
SENTINEL sequence 0
end last values
DM: phone val "3055550961" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "3055550961" instance 0 level 2
DM: phone val "2125551212" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "2125551212" instance 0 level 2
DM: phone val "3135401600" instance 0 p_inst 0
p_seq 0 level 2
DM: phone val "3135401600" instance 0 level 2
Generating User-Defined Reports
Beyond the reports defined for you, Application Integrator™ provides the groundwork for application-specific report generation. All Application Integrator™ reports are developed using the same structures as data mapping (map component files and models). To ease your generation of application-specific reports, Application Integrator™ includes a set of models, map component files, and other files that serve as templates, both for reports where the reporting data is known (for example, reporting on an output file of X12 invoices (810s)) and for reports on data whose content is not known in advance.

These files contain the logic to deal with the common report characteristics:
Pages are set to 66 lines in length
Each report prints 57 lines per page
Report headings include:
− Company name, centered
− Date and time, report title, centered page number
− Up to six lines of heading information
Cleanup of temporary files generated by the report
Reporting Where the Data Content is Known
The following diagram shows the flow of the generic report system for reports where data is pre-identified.
Printing and Generating Generic Reports
The printing of a report is invoked through the shell script OTReport.sh and can be started from the command line using the following syntax:

In Unix and Linux, type:
OTReport.sh <D/P> <specific_report.att> <columns>

In Windows®, type:
OTReport.bat <D/P> <specific_report.att> <columns>

For the arguments:
<D/P> — Enter either 'D' to display or 'P' to print the report.
<specific_report.att> — Enter the name of the specific report map component file.
<columns> — Enter the number of columns the report is to be printed in. This argument is optional; it defaults to 132.
Examples:
For Unix and Linux, type:
OTReport.sh P OTActR1.att 80
For Windows®, type:
OTReport.bat P OTActR1.att 80
The batch file OTReport.bat invokes the corresponding translation.
Reporting Where the Data Content is Unknown
The following diagram shows the flow of the generic report system for reports where data has not been pre-identified.

Note: All printing using the "lp" command uses the shell script "OTPrint.sh." If necessary, the command in the shell script can be modified to add the "-d" option to control the printer or class of printers to which the report is sent.
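For example, the print command inside OTPrint.sh might be changed to name a specific destination; the printer name and report filename below are illustrative:

lp -d acct_laser report.txt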
Once you have completed the source and target data models, the
map component files, and the development testing, you are ready
to migrate your Application Integrator™ application to a test area
or production area.
This section provides background and instructions on migrating
from development areas to test areas or from development or test
areas to production areas. It includes procedures for importing
and exporting Profile Databases.
Permission Guidelines
The Unix and Linux operating systems assign permissions to each file. For new and replacement files, the required permissions for owner, group, and other must be specified for read, write, and execute. For replacement files, it is customary to match the existing target permissions unless other permissions are specifically indicated.

To delete an existing file, you need sufficient authority.
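For example, to give the owner read and write access and everyone else read-only access to a replacement map component file, a command such as the following could be used (the filename is illustrative):

chmod 644 train1.att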
For Windows®, the operating system controls the access
permissions and passwords for each file. These are set up by the
system administrator.
Deploying Maps Overview
Once you have completed the source and target data models, the map component files, and the development testing, you are ready to deploy your electronic commerce application to a test functional area or a production functional area.
Deploying is the process of moving files or information from one
location to another where similar files may exist. The files or
information come from a “source” location, and are moved to a
“target” location.
To successfully deploy files or information from source to target
locations, some important questions must be considered:
Will the data to be deployed overwrite any existing data?
Is the data to be deployed dependent on data from another
file?
If the deployment must be reversed or “undone,” what
needs to be done?
This section identifies files usually associated with Workbench
development to production deployment. Typical deployment files
are:
Map component files (*.att)
Data model files (*.mdl)
Access model files (*.acc)
Environment files (*.env)
Include files (*.inc)
Stylesheets (*.xsl)
Trading Partner Profiles from within the Profile Database files (C-ISAM: sdb.dat and sdb.idx; Oracle)
Planning Development Deployment
This is only a guideline for deployment. A full analysis of all items covered is required to ensure a successful deployment or recovery.
Deploying Files
The Deploy feature is used to copy relevant mapping files to a directory. These files can be used during debugging tasks or for moving files into a production functional area.
Accessing the Deploy Dialog Box
The option to deploy a Map Component or a Data Model is available only if one is open and currently active in the respective editor (a Map Component displayed in the Map Editor, or a Data Model displayed in the Model Editor).
1. Open the map file or model file to be deployed in the Map Editor or Model Editor.
3. Review the default settings for each of the items. Change the settings as appropriate for the deploy process for this map file.
Option
   Description

Deploy Access Model
   Determines whether any access models associated with this editor should be deployed.

Deploy Include file
   Gives the option of deploying only User.inc or all include files.

Export Trading Partner profile
   Determines whether to export a single trading partner's record with the deploy. This value entry list box is enabled when you are connected to the Profile Database containing trading partner records.

Write Activity Log
   Determines whether a tracking log recording all files included in the deploy is created. If selected, the deploy activity log is written to the same location as the deployed files, in a file named readme_ver_xxxxxxxx.txt, where 'xxxxxxxx' is the system date of the system where Workbench is running.

Deploy Location
   Determines the absolute path to the target directory for the deploy. This can be either on the local system or on a remote system (an FTP site).

Deploy Path
   Displays the target path selected.
File Permissions Guidelines
The Unix and Linux operating systems assign permissions to each file. For new and replacement files, the required permissions for owner, group, and others must be specified for read, write, and execute. For replacement files, it is customary to match the existing target permissions unless other permissions are specifically indicated.
To delete an existing file, the user performing the deployment
needs sufficient authority.
For Windows®, the operating system controls the access
permissions and passwords for each file. These are set up by the
system administrator.
Deploying the Profile Database
The Trading Partner Profile in the Profile Database (C-ISAM: the sdb.dat and sdb.idx files; Oracle) can be updated via one of three methods: replacement, manual maintenance, and export/import.

Replacement Profile Database
To replace when the source and destination are both C-ISAM, two files must be replaced:
sdb.dat — data file of the Profile Database
sdb.idx — index file of the Profile Database
These files must be replaced together as a set; replacing only one without the other will create a serious problem.
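For example, on Unix or Linux the pair can be copied in one command so they always move together (the paths shown are illustrative):

cp /u/aidev/data/sdb.dat /u/aidev/data/sdb.idx /u/prod/data/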
To replace when the source and destination are both Oracle, the
source database can either be referenced as the destination
database via the aiserver.env/.bat file, or the source database can
be dumped and imported into the destination database using a
utility such as sqlplus.
To replace when the source and destination are not both C-ISAM or not both Oracle, the Profile Database needs to be dumped and imported. Using Trade Guide, run Export, choose "Profile Database," and dump all of its content. Then, in the destination, make sure the database is empty (for C-ISAM, remove sdb.dat and sdb.idx; for Oracle, use sqlplus: truncate table SDB;) and run the following translation upon restarting the destination control server:

inittrans -at OTsdb.att -cs $OT_QUEUEID -DINPUT_FILE=<source database export file> -DOTDBIMPORT=Yes -I
Note: Use the export and import features for trading partner
profiles and standards to migrate minor changes to the trading
partner profiles and cross-reference lists.
Refer to the Trade Guide online help section “To Move Portions of the
Profile Database” for instructions.
Export/Import
You can update the Profile Database by exporting the entire database or portions of it, and then importing the database or database portions. Refer to the Trade Guide online help for complete instructions. Within Export, Trade Guide allows you to select which Trading Partner Profiles are to be exported. Use Import Profile Database to import the profiles into the destination database.
Deploying Access Models

To Deploy *.acc Files
1. Obtain a list of access model files that are to be migrated. Access model files have a suffix of ".acc".
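A directory listing is one quick way to gather this list, for example:

ls *.acc     (Unix and Linux)
dir *.acc    (Windows®)

The following existing example uses findstr on Windows® to locate files that reference the NumericFldNA access model; on Unix or Linux, grep NumericFldNA * serves the same purpose: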
findstr NumericFldNA *
Deploying Data Models

Considerations
Data models can attach to other map component files. It is important to identify all map component files that will be affected by the use of any data model to be migrated. To determine whether a data model performs an ATTACH, use the grep/findstr command or an editor such as vi, and search the data model for the keyword ATTACH. If the data model contains the keyword ATTACH, the model must be examined to determine what other processing will occur and how it will affect production processing.

Data models can execute external programs or shells. When migrating a data model, it is important to consider what effect it may have on existing processing in the target area. To determine the external executions performed by a data model, use the grep/findstr command or an editor such as vi, and search for the string EXEC. (A sample search command follows this list.)

When Application Integrator™ generic data models are modified, or general-purpose models are written, these models should be updated in each development seat and in the models' (release version) directory (/u/aidev/OT), so that all current and future development seats use the latest data models.
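As noted above, a single search can flag both ATTACH and EXEC usage. The following is a minimal sketch, assuming the data models are *.mdl files in the current directory:

grep -E "ATTACH|EXEC" *.mdl        (Unix and Linux)
findstr "ATTACH EXEC" *.mdl        (Windows®)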
Deploying Map Component Files
Map component files identify all resources (other files) used during a processing session. When migrating map component files, it is important to look at the resources referenced by the map component file to determine what else will be affected.

Considerations
If the map component file being migrated contains references to data models, access models, or other resources already in production, any dependency between them must be identified.
Map component files are usually small and can be viewed or printed easily; this eases the process of analysis.
Map component files may contain key prefixes for substitutions and cross-references (xrefs). If key prefixes or other environment variables are used, the effect on existing Profile Database entries must be considered.
Deploying Environment Files

Considerations
If existing environment files change, any model that uses those files must be considered. Use the grep command to check for these references; for example, the following returns every file that references ENVIRON1.env:

grep ENVIRON1.env *
In general, if you are unsure whether a file has been changed, you can use the diff command or the Windows® fc command to compare the files:
diff <first file> <second file> (Unix and Linux)
fc <first file> <second file> (Windows®)
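For instance, to check a deployed environment file against the development copy (the paths are illustrative):

diff /u/aidev/OT/ENVIRON1.env /u/prod/OT/ENVIRON1.env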
Deploying Include Files
In Unix and Linux, to determine where include files are being used, use the grep command to search for the name of the include file throughout the entire directory. For example, the following command returns a list of all files using User.inc:

grep User.inc *

On Windows®, use the findstr command. For example, the following command returns a list of all files that reference User.inc:

findstr /l /c:User.inc *.*
Considerations
When Application Integrator™ generic include files are modified, or general-purpose includes are written, these files should be updated in each development seat and in the models' (release version) directory (/u/aidev/OT), so that all current and future development seats use the latest include files.
Deploying Stylesheet Files

Considerations
When Application Integrator™ generic stylesheets are modified, or general-purpose stylesheets are written, these stylesheets should be updated in each development seat and in the models' (release version) directory (/u/aidev/OT), so that all current and future development seats use the latest stylesheets.
Browse for the local site and click OK. The Edit Local Site window appears. Click OK.

Once all the features and Plug-Ins are downloaded successfully and their files installed into the product on the local computer, a new configuration that incorporates these features and Plug-Ins is formulated.

You will be prompted to restart Workbench after the installation is complete. Click Yes when asked to exit and restart Workbench for the changes to take effect.
Note: Workbench bundles only the CVS client. The Update Manager can be used to add the CVS client as a feature to Workbench. You must install a CVS server on the same system, or on another system in your network, before you can configure the CVS client to access it.
The wizard helps you import your project files into the CVS repository. You can either create a new repository location or use an existing one. Follow the steps in sequence to complete sharing the project with the CVS repository.
2. The two files are compared and the results are displayed.
Index

#CHARSET function
   defining, 43
#DATE function, 39, 205, 402, 428
#DATE_NA function, 43, 205
#FIFTH_DELIM function
   defining, 43
#FINDMATCH function
   defining limit, 395
#FIRST_DELIM function
   base values for, 43
   post-condition values, 45
#FOURTH_DELIM function, 43
#LOOKUP function, 43, 402
#NUMERIC function, 44, 197, 402, 428
#NUMERIC_NA function, 44, 197
#SECOND_DELIM function, 42, 43
#SET_FIFTH_DELIM function, 454
#SET_FIRST_DELIM function, 454
#SET_FOURTH_DELIM function, 454
#SET_SECOND_DELIM function, 454
#SET_THIRD_DELIM function, 454
#THIRD_DELIM function, 42, 43
#TIME function, 44, 206, 402, 428
#TIME_NA function, 44, 206, 207
$ (dollar symbol), 198, 204
$ (substitution) function, 44, 258, 400
$$ (session number) keyword environment variable, 57
Absent mode rules, 223
Access model
   base, 40
   list of standard models, 40
   migrating guidelines, 494
   overview, 39, 40
   post-condition, 40, 45
   pre-condition, 40
adding
   validation logic, 310
Administration database, 49
   overview, 51
ampersand symbol, character entity, 293
apostrophe symbol, character entity, 293
Application Integrator
   de-enveloping files provided, 65
   enveloping files provided, 65
   file suffixes, 415
   reserved names, 414
Array
   overview of support for, 225
ATTACH keyword
   using, 62
Attachment dialog box, 56, 403, 444
   troubleshooting, 169
Attachment file, 55, 56, 84, 167
   defining, 167, 168, 177, 179, 180
   error codes, 62
   for de-enveloping, 65
   for enveloping, 67, 69
   for generic report writing, 472
   migrating guidelines, 498
   modifying, 172
   naming conventions, 167
   overview, 55
   referencing, 416
   troubleshooting, 169
attribute names, case sensitivity, 287
attribute values, well-formed rules for, 286
Backups
Numbers
   handling, 195
Numeric data
   formatting, 194
   masking characters for, 195
Open Perspective, 517
OTCallParser.att, 283, 285
otcsvr, 420
OTEnvelp.att, 50, 55, 84
   using, 67, 69
OTmdl, 219
otrans, 420, 427, 447, 494
OTRecogn.att, 50, 55, 56, 84, 478
   using, 65
OTReport.sh, 474
   description of, 473
otrun.exe, 412, 427, 431, 432
   overview, 420
otstdump utility, 492
otxmlcanon, 283
   about, 282
OTXMLPre.att, 283
outbound processing
   about, 285
   errors, 285
Outbound X12 Values dialog box, 68
Output
   sorting, 211
Output data
   requirements for translating, 388
Output file
   specifying a secondary, 210
   viewing in Workbench, 441
page element, 288
para element, 289
Parameter
   prompting for, 256
Parent
   overview, 37
Parse
   command line syntax checking, 259
   translator and Workbench syntax checking, 260
Parse on Errors dialog box, 258
parser
   about, 282, 293
   input, 307
   input argument, 311
   input data stream, 314
   input default, 311
   invoking, 309
   otxmlcanon, 283, 307
   output, 307, 316
   validation error, 309
   what it does, 293
Permissions
   guidelines when migrating data, 484
Positive sign
   masking for, 199
Post-condition
   defined in access model, 40, 45
   understanding, 389
   values for, 45
Pre-condition
   defined in access model, 40
   understanding, 389
   values for, 42
Preferences dialog box
   default values, 228
   overview, 230
Present mode rules, 223
pre-translation environment
   errors, 283
   translating with, 283
Processing flow, 58
Profile database, 209, 422
   changing values, 404
   checking, 496, 502